Basic Topological and Algebraic Considerations
It is often the case that one desires to consider the complex numbers ℂ. However, most of the time in the first part of the book, we have in mind ℝ. Nevertheless, it is convenient to have the theory pertain just as well to ℂ. Therefore, we will denote by F that which could be either ℝ or ℂ. The symbol |x| will refer to the Euclidean norm of x ∈ F^p, defined as

\[ |x| \equiv \Big( \sum_{j=1}^{p} |x_j|^2 \Big)^{1/2}, \]

but we could also use the norm

\[ \|x\|_\infty \equiv \max\{ |x_j| : j = 1, \dots, p \}. \]
I assume the reader has seen most of this before in multivariable calculus. From linear algebra or calculus, a function x ↦ ‖x‖ from F^p to [0,∞) is called a norm if the following conditions are satisfied.

\[ \|x\| \ge 0 \text{ and } \|x\| = 0 \text{ if and only if } x = 0, \tag{2.1} \]
\[ \|\alpha x\| = |\alpha| \, \|x\| \text{ for every scalar } \alpha, \tag{2.2} \]
\[ \|x + y\| \le \|x\| + \|y\|. \tag{2.3} \]
First note that if z, w ∈ ℂ, then |z + w| ≤ |z| + |w|. To see this, let z = a + ib and w = c + id and identify z and w with the vectors (a, b) and (c, d) in ℝ², for which the triangle inequality holds. This also follows from the first part of the following proposition.
Proposition 2.0.1 Each of |·| and ‖·‖_∞ satisfies the axioms of a norm, 2.1 - 2.3.

Proof: Consider first |·|, the usual norm. Define the inner product

\[ (x, y) \equiv \sum_{j=1}^{p} x_j \overline{y_j}. \]

Then it is obvious that

\[ (x, x) = |x|^2 \ge 0, \text{ and } (x, x) = 0 \text{ if and only if } x = 0. \tag{2.4} \]

Also the following come directly from the definition. For a, b ∈ F,

\[ (x, y) = \overline{(y, x)}, \tag{2.5} \]
\[ (ax + by, z) = a (x, z) + b (y, z), \qquad (z, ax + by) = \bar{a} (z, x) + \bar{b} (z, y). \tag{2.6} \]

Thus the comma goes across + and you can factor out scalars in the first argument and conjugates of the scalars in the second. The properties 2.4 - 2.6 are called the inner product axioms.

The Cauchy Schwarz inequality, Theorem 1.10.1, says that

\[ |(x, y)| \le (x, x)^{1/2} (y, y)^{1/2} = |x| \, |y|, \]

which shows 2.3 because

\[ |x + y|^2 = (x + y, x + y) = |x|^2 + 2 \operatorname{Re}(x, y) + |y|^2 \le |x|^2 + 2 |x| \, |y| + |y|^2 = (|x| + |y|)^2. \]

The other two axioms are obvious. As for ‖·‖_∞, only the triangle inequality is not obvious. However,

\[ \|x + y\|_\infty = \max_j |x_j + y_j| \le \max_j \big( |x_j| + |y_j| \big) \le \|x\|_\infty + \|y\|_\infty. \ \blacksquare \]
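As a numerical sanity check (a sketch in Python; the particular vectors are arbitrary choices, not from the text), one can verify the Cauchy Schwarz inequality and the triangle inequality 2.3 for the Euclidean norm:

```python
import math

def inner(x, y):
    # (x, y) = sum_j x_j * conj(y_j); for real vectors the conjugate is trivial
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    # the Euclidean norm |x| = (x, x)^(1/2)
    return math.sqrt(inner(x, x))

x = [1.0, -2.0, 3.0]
y = [4.0, 0.5, -1.0]

# Cauchy Schwarz: |(x, y)| <= |x| |y|
assert abs(inner(x, y)) <= norm(x) * norm(y)

# the triangle inequality 2.3 then follows, exactly as in the proof above
s = [a + b for a, b in zip(x, y)]
assert norm(s) <= norm(x) + norm(y)
```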
Note that the above two norms are equivalent in the sense that

\[ \|x\|_\infty \le |x| \le \sqrt{p} \, \|x\|_\infty. \]

Thus in what follows, it will not matter which norm is being used. Actually, any two norms on F^p are equivalent, which will be shown later. The significance of the Euclidean norm |·| is geometrical. See the problems. The fundamental result pertaining to the inner product just discussed is the Gram Schmidt process presented next.
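The equivalence of the two norms just mentioned can be checked numerically; here is a short Python sketch (the vector is an arbitrary illustrative choice):

```python
import math

def euclid(x):
    # |x| = (sum |x_j|^2)^(1/2)
    return math.sqrt(sum(abs(t) ** 2 for t in x))

def sup_norm(x):
    # ||x||_inf = max |x_j|
    return max(abs(t) for t in x)

p = 4
x = [3.0, -1.0, 2.0, -5.0]

# ||x||_inf <= |x| <= sqrt(p) ||x||_inf
assert sup_norm(x) <= euclid(x) <= math.sqrt(p) * sup_norm(x)
```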
Definition 2.0.2 A set of vectors {u_1, …, u_n} is called orthonormal if

\[ (u_i, u_j) = \delta_{ij} \equiv \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \ne j \end{cases}. \]
Then there is a very easy proposition which follows this.

Proposition 2.0.3 Suppose {v_1, …, v_k} is an orthonormal set of vectors. Then it is linearly independent.

Proof: Suppose \(\sum_{i=1}^{k} c_i v_i = 0\). Then taking dot products with v_j,

\[ 0 = (0, v_j) = \sum_{i=1}^{k} c_i (v_i, v_j) = \sum_{i=1}^{k} c_i \delta_{ij} = c_j. \]

Since j is arbitrary, this shows the set is linearly independent as claimed. ■
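The mechanism of this proof is that dot products with v_j extract the coefficient c_j. A small Python illustration (the standard basis of ℝ³ and the coefficients are arbitrary example choices):

```python
def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

# the standard basis of R^3 is an orthonormal set
v = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
c = [2.0, -3.0, 0.5]

# form the combination sum_i c_i v_i
combo = [sum(c[i] * v[i][j] for i in range(3)) for j in range(3)]

# taking dot products with v_j recovers c_j, exactly as in the proof;
# hence the combination can vanish only if every c_j is zero
for j in range(3):
    assert abs(inner(combo, v[j]) - c[j]) < 1e-12
```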
It turns out that if X is any subspace of F^p, then there exists an orthonormal basis for X. This follows from the use of the next lemma applied to a basis for X. Recall first that, from linear algebra, every subspace of F^p has a basis. If this is not familiar, see the appendix on linear algebra.
Lemma 2.0.4 Let {x_1, …, x_n} be a linearly independent subset of F^p, p ≥ n. Then there exist orthonormal vectors {u_1, …, u_n} which have the property that for each k ≤ n,

\[ \operatorname{span}(x_1, \dots, x_k) = \operatorname{span}(u_1, \dots, u_k). \]

Proof: Let u_1 ≡ x_1∕|x_1|. Thus for k = 1, span(x_1) = span(u_1) and {u_1} is an orthonormal set. Now suppose for some k < n, u_1, …, u_k have been chosen such that (u_j, u_l) = δ_{jl} and span(x_1, …, x_k) = span(u_1, …, u_k). Then define

\[ u_{k+1} \equiv \frac{ x_{k+1} - \sum_{j=1}^{k} (x_{k+1}, u_j)\, u_j }{ \Big| x_{k+1} - \sum_{j=1}^{k} (x_{k+1}, u_j)\, u_j \Big| }, \]

where the denominator is not equal to zero because the x_j form a linearly independent set, and so

\[ x_{k+1} \notin \operatorname{span}(x_1, \dots, x_k) = \operatorname{span}(u_1, \dots, u_k). \]

Thus by induction, u_{k+1} ∈ span(u_1, …, u_k, x_{k+1}) = span(x_1, …, x_k, x_{k+1}). Also, x_{k+1} ∈ span(u_1, …, u_k, u_{k+1}), which is seen easily by solving the above formula for x_{k+1}, and it follows

\[ \operatorname{span}(x_1, \dots, x_{k+1}) = \operatorname{span}(u_1, \dots, u_{k+1}). \]

If l ≤ k, then letting C denote the reciprocal of the denominator above,

\[ (u_{k+1}, u_l) = C \Big( (x_{k+1}, u_l) - \sum_{j=1}^{k} (x_{k+1}, u_j) (u_j, u_l) \Big) = C \big( (x_{k+1}, u_l) - (x_{k+1}, u_l) \big) = 0. \]

The vectors {u_j}_{j=1}^n generated in this way are therefore orthonormal because, in addition, each vector has unit length. ■
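The construction in the proof translates directly into an algorithm. The following Python sketch (the input vectors are an arbitrary independent example) carries out the process of Lemma 2.0.4 and checks orthonormality:

```python
import math

def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

def gram_schmidt(xs):
    """Orthonormalize xs as in Lemma 2.0.4: subtract from each x_{k+1}
    its projections onto u_1, ..., u_k, then divide by the length."""
    us = []
    for x in xs:
        w = list(x)
        for u in us:
            c = inner(x, u)
            w = [wi - c * ui for wi, ui in zip(w, u)]
        n = math.sqrt(inner(w, w))  # nonzero since the xs are independent
        us.append([wi / n for wi in w])
    return us

xs = [[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]
us = gram_schmidt(xs)

# the result is orthonormal: (u_i, u_j) = delta_ij
for i in range(3):
    for j in range(3):
        want = 1.0 if i == j else 0.0
        assert abs(inner(us[i], us[j]) - want) < 1e-12
```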
The following lemma is a fairly simple observation about the Gram Schmidt process which says that if you start with orthonormal vectors, the process will not undo what you already have.

Lemma 2.0.5 Suppose {w_1, …, w_r, v_{r+1}, …, v_p} is a linearly independent set of vectors such that {w_1, …, w_r} is an orthonormal set of vectors. Then when the Gram Schmidt process is applied to the vectors in the given order, it will not change any of the w_1, …, w_r.

Proof: Let {u_1, …, u_p} be the orthonormal set delivered by the Gram Schmidt process. Then u_1 = w_1 because by definition, u_1 ≡ w_1∕|w_1| = w_1. Now suppose u_j = w_j for all j ≤ k ≤ r. Then if k < r, consider the definition of u_{k+1}:

\[ u_{k+1} \equiv \frac{ w_{k+1} - \sum_{j=1}^{k} (w_{k+1}, u_j)\, u_j }{ \Big| w_{k+1} - \sum_{j=1}^{k} (w_{k+1}, u_j)\, u_j \Big| }. \]

By induction, u_j = w_j, and since (w_{k+1}, w_j) = 0 for j ≤ k, this reduces to w_{k+1}∕|w_{k+1}| = w_{k+1}. ■
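This behavior is easy to observe numerically. A self-contained Python sketch (the vectors are illustrative choices): w_1, w_2 below are already orthonormal, so the process reproduces them and only works on the third vector.

```python
import math

def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

def gram_schmidt(xs):
    # the same process as in Lemma 2.0.4
    us = []
    for x in xs:
        w = list(x)
        for u in us:
            c = inner(x, u)
            w = [wi - c * ui for wi, ui in zip(w, u)]
        n = math.sqrt(inner(w, w))
        us.append([wi / n for wi in w])
    return us

# w_1, w_2 are already orthonormal; the third vector is not
ws = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
extra = [1.0, 2.0, 3.0]
us = gram_schmidt(ws + [extra])

# the process leaves w_1 and w_2 untouched, as the lemma asserts
assert us[0] == ws[0]
assert us[1] == ws[1]
assert us[2] == [0.0, 0.0, 1.0]
```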
Note how the argument depended only on the axioms of the inner product. Thus it will hold with no change in any inner product space, not just F^n. However, our main interest is in the special case. An inner product space is just a vector space which has an inner product, that is, something which satisfies 2.4 - 2.6.
Lemma 2.0.6 Suppose V, W are subspaces of F^n and they have orthonormal bases, {v_1, …, v_r} and {w_1, …, w_r} respectively. Let A map V to W be defined by

\[ A \Big( \sum_{k=1}^{r} a_k v_k \Big) \equiv \sum_{k=1}^{r} a_k w_k. \]

Then |Av| = |v|. That is, A preserves Euclidean norms.

Proof: This follows right away from a computation. If {w_1, …, w_r} is orthonormal, then

\[ \Big| \sum_{k=1}^{r} a_k w_k \Big|^2 = \Big( \sum_{k=1}^{r} a_k w_k, \sum_{l=1}^{r} a_l w_l \Big) = \sum_{k,l} a_k \overline{a_l} (w_k, w_l) = \sum_{k=1}^{r} |a_k|^2, \]

and the same computation applied to {v_1, …, v_r} shows |v|² = ∑_{k=1}^r |a_k|² as well. ■
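A quick Python check of the lemma (the two subspaces of ℝ³ and the coordinates a_k are arbitrary example choices): define v and Av by the same coordinates with respect to the two orthonormal bases and compare Euclidean norms.

```python
import math

def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(inner(x, x))

s = 1.0 / math.sqrt(2.0)
vs = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]   # orthonormal basis of V
ws = [[s, s, 0.0], [s, -s, 0.0]]          # orthonormal basis of W

a = [3.0, -4.0]  # coordinates with respect to the two bases
v = [sum(a[k] * vs[k][j] for k in range(2)) for j in range(3)]
Av = [sum(a[k] * ws[k][j] for k in range(2)) for j in range(3)]

# A preserves Euclidean norms: |Av| = |v|
assert abs(norm(Av) - norm(v)) < 1e-12
assert abs(norm(v) - 5.0) < 1e-12
```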