As mentioned above, F^{n} is an example of a vector space and this is what is studied in
linear algebra. The concept of linear combination is fundamental in all of linear
algebra. When one considers only algebraic considerations, it makes no difference
what field of scalars you are using. It could be ℝ, ℂ, ℚ or even a field of residue
classes. However, go ahead and think ℝ or ℂ since the subject of interest here is
analysis.
Definition 3.2.1 Let {x_1,⋯,x_p} be vectors in a vector space Y having the field of scalars F. A linear combination is any expression of the form
∑_{i=1}^{p}c_{i}x_{i}
where the c_{i} are scalars. The set of all linear combinations of these vectors is called span(x_1,⋯,x_p). If V ⊆ Y, then V is called a subspace if whenever α, β are scalars and u and v are vectors of V, it follows αu + βv ∈ V. That is, V is “closed under the algebraic operations of vector addition and scalar multiplication” and is therefore itself a vector space. A linear combination of vectors is said to be trivial if all the scalars in the linear combination equal zero. A set of vectors is said to be linearly independent if the only linear combination of these vectors which equals the zero vector is the trivial linear combination. Thus {x_1,⋯,x_p} is called linearly independent if whenever
∑_{k=1}^{p}c_{k}x_{k} = 0
it follows that all the scalars c_{k} equal zero. A set of vectors {x_1,⋯,x_p} is called linearly dependent if it is not linearly independent. Thus the set of vectors is linearly dependent if there exist scalars c_{i}, i = 1,⋯,p, not all zero, such that ∑_{k=1}^{p}c_{k}x_{k} = 0.
Lemma 3.2.2 A set of vectors {x_1,⋯,x_p} is linearly independent if and only if none of the vectors can be obtained as a linear combination of the others.
Proof: Suppose first that {x_1,⋯,x_p} is linearly independent. If
x_{k} = ∑_{j≠k}c_{j}x_{j},
then
0 = 1x_{k} + ∑_{j≠k}(−c_{j})x_{j},
a nontrivial linear combination, contrary to assumption. This shows that if the
set is linearly independent, then none of the vectors is a linear combination of the
others.
Now suppose no vector is a linear combination of the others. Is
{x1,⋅⋅⋅,xp}
linearly
independent? If it is not, there exist scalars c_{i}, not all zero, such that
∑_{i=1}^{p}c_{i}x_{i} = 0.
Say c_{k} ≠ 0. Then you can solve for x_{k} as
x_{k} = ∑_{j≠k}(−c_{j}/c_{k})x_{j},
contrary to assumption. This proves the lemma. ■
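Lemma 3.2.2 can be checked numerically for a concrete dependent set in ℝ^{3}: a least-squares solve recovers one vector as a linear combination of the others. A minimal sketch assuming numpy, with the vectors and coefficients made up for illustration:

```python
import numpy as np

# x3 = 2*x1 + x2, so {x1, x2, x3} is linearly dependent.
x1 = np.array([1.0, 0.0, 1.0])
x2 = np.array([0.0, 1.0, 1.0])
x3 = 2 * x1 + x2

# Try to write x3 as a combination of the other two (Lemma 3.2.2):
# solve the least-squares problem A c = x3 with columns x1, x2.
A = np.column_stack([x1, x2])
c, _, _, _ = np.linalg.lstsq(A, x3, rcond=None)

print(np.allclose(A @ c, x3))  # True: x3 is a combination of x1, x2
print(c)                       # the coefficients, approximately [2., 1.]
```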
The following is called the exchange theorem.
Theorem 3.2.3 If span(u_1,⋯,u_r) ⊆ span(v_1,⋯,v_s) ≡ V and {u_1,⋯,u_r} is linearly independent, then r ≤ s.
Proof: Suppose r > s. Let E_{p} denote a finite list of vectors of {v_1,⋯,v_s} and let |E_{p}| denote the number of vectors in the list. Let F_{p} denote the first p vectors in {u_1,⋯,u_r}. In case p = 0, F_{p} will denote the empty set. For 0 ≤ p ≤ s, let E_{p} have the property
span(F_{p},E_{p}) = V
and |E_{p}| is as small as possible for this to happen. I claim |E_{p}| ≤ s − p if E_{p} is nonempty.
Here is why. For p = 0, it is obvious. Suppose the claim is true for some p < s. Then
u_{p+1} ∈ span(F_{p},E_{p})
and so there are constants c_{1},⋯,c_{p} and d_{1},⋯,d_{m}, where m ≤ s − p, such that
u_{p+1} = ∑_{i=1}^{p}c_{i}u_{i} + ∑_{j=1}^{m}d_{j}z_{j}
for {z_1,⋯,z_m} ⊆ {v_1,⋯,v_s}. Not all the d_{j} can equal zero because this would violate the linear independence of the {u_1,⋯,u_r}. Therefore, you can solve for one of the z_{k} as a linear combination of {u_1,⋯,u_{p+1}} and the other z_{j}. Thus you can change F_{p} to F_{p+1} and include one fewer vector in E_{p}. Thus |E_{p+1}| ≤ m − 1 ≤ s − p − 1. This proves the claim.
Therefore, E_{s} is empty and span(u_1,⋯,u_s) = V. However, this gives a contradiction because it would require u_{s+1} ∈ span(u_1,⋯,u_s), which violates the linear independence of these vectors.
Alternate proof: Recall from linear algebra that if A is an m×n matrix where m < n, so there are more columns than rows, then there exists a nonzero solution x to the equation Ax = 0. Recall why this is so: there must be free variables. By assumption,
u_{j} = ∑_{i=1}^{s}a_{ij}v_{i}.
If s < r, then the matrix (a_{ij}) has more columns than rows and so there exists a nonzero vector x ∈ F^{r} such that ∑_{j=1}^{r}a_{ij}x_{j} = 0 for each i. Then
∑_{j=1}^{r}x_{j}u_{j} = ∑_{j=1}^{r}x_{j}∑_{i=1}^{s}a_{ij}v_{i} = ∑_{i=1}^{s}(∑_{j=1}^{r}a_{ij}x_{j})v_{i} = 0,
a nontrivial linear combination of the u_{j} which equals zero, violating their linear independence. Hence r ≤ s. ■
A basis for V is a linearly independent set of vectors whose span equals V. By the exchange theorem, any two bases of V contain the same number of vectors, and so the dimension of V is read as the number of vectors in a basis.
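The fact the alternate proof rests on — that Ax = 0 has a nonzero solution whenever A has more columns than rows — can be verified numerically for a concrete matrix. A sketch assuming numpy, reading a null-space vector off the singular value decomposition (the matrix entries are arbitrary choices):

```python
import numpy as np

# A 2x3 matrix: more columns than rows, so Ax = 0 has a nonzero solution.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# The right-singular vectors beyond rank(A) span the null space of A.
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
x = Vt[rank:].T[:, 0]          # a nonzero null-space vector

print(np.allclose(A @ x, 0))   # True: a nontrivial solution of Ax = 0
```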
Of course you should wonder right now whether an arbitrary subspace of a finite
dimensional vector space even has a basis. In fact it does and this is in the next theorem.
First, here is an interesting lemma.
Lemma 3.2.8 Suppose v ∉ span(u_1,⋯,u_k) and {u_1,⋯,u_k} is linearly independent. Then {u_1,⋯,u_k,v} is also linearly independent.
Proof: Suppose ∑_{i=1}^{k}c_{i}u_{i} + dv = 0. It is required to verify that each c_{i} = 0 and that d = 0. But if d ≠ 0, then you can solve for v as a linear combination of the vectors {u_1,⋯,u_k},
v = ∑_{i=1}^{k}(−c_{i}/d)u_{i},
contrary to assumption. Therefore, d = 0. But then ∑_{i=1}^{k}c_{i}u_{i} = 0 and the linear independence of {u_1,⋯,u_k} implies each c_{i} = 0 also. ■
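Lemma 3.2.8 can likewise be checked for concrete vectors in ℝ^{3}: adjoining a vector outside the current span raises the rank by one, so the enlarged set is still independent. A small numpy sketch, with the vectors chosen purely for illustration:

```python
import numpy as np

def in_span(vectors, v):
    """v lies in span(vectors) iff appending v does not raise the rank."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(np.column_stack([A, v])) == np.linalg.matrix_rank(A)

u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([0.0, 1.0, 0.0])
v = np.array([0.0, 0.0, 1.0])   # not in span(u1, u2)

print(in_span([u1, u2], v))     # False
# Lemma 3.2.8: the enlarged set {u1, u2, v} is still independent,
# i.e. the 3 columns have rank 3.
print(np.linalg.matrix_rank(np.column_stack([u1, u2, v])))  # 3
```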
Theorem 3.2.9 Let V be a nonzero subspace of Y, a finite dimensional vector space having dimension n. Then V has a basis.
Proof: Let v_{1} ∈ V where v_{1} ≠ 0. If span{v_1} = V, stop; {v_1} is a basis for V. Otherwise, there exists v_{2} ∈ V which is not in span{v_1}. By Lemma 3.2.8, {v_1,v_2} is a larger linearly independent set of vectors. Continuing this way, the process must stop before n + 1 steps because if not, it would be possible to obtain n + 1 linearly independent vectors, contrary to the exchange theorem and the assumed dimension of Y. ■
In words the following corollary states that any linearly independent set of vectors can be
enlarged to form a basis.
Corollary 3.2.10 Let V be a subspace of Y, a finite dimensional vector space of dimension n, and let {v_1,⋯,v_r} be a linearly independent set of vectors in V. Then either it is a basis for V or there exist vectors v_{r+1},⋯,v_{s} such that {v_1,⋯,v_r,v_{r+1},⋯,v_s} is a basis for V.
Proof: This follows immediately from the proof of Theorem 3.2.9. You do exactly the same argument except you start with {v_1,⋯,v_r} rather than {v_1}. ■
It is also true that any spanning set of vectors can be restricted to obtain a
basis.
Theorem 3.2.11 Let V be a subspace of Y, a finite dimensional vector space of dimension n, and suppose span(u_1,⋯,u_p) = V where the u_{i} are nonzero vectors. Then there exist vectors {v_1,⋯,v_r} such that {v_1,⋯,v_r} ⊆ {u_1,⋯,u_p} and {v_1,⋯,v_r} is a basis for V.
Proof: Let r be the smallest positive integer with the property that for some set {v_1,⋯,v_r} ⊆ {u_1,⋯,u_p},
span(v_1,⋯,v_r) = V.
Then r ≤ p and it must be the case that {v_1,⋯,v_r} is linearly independent, because if it were not so, one of the vectors, say v_{k}, would be a linear combination of the others. But then you could delete this vector from {v_1,⋯,v_r} and the resulting list of r − 1 vectors would still span V, contrary to the definition of r. This proves the theorem. ■
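The proof of Theorem 3.2.11 is effectively an algorithm: walk through the spanning set and discard any vector that is a linear combination of those already kept. A hedged numpy sketch of this greedy process for vectors in ℝ^{3}, with the spanning set made up for illustration:

```python
import numpy as np

def extract_basis(vectors):
    """Keep each vector that raises the rank of the kept set, i.e.
    discard any vector that is a linear combination of the ones
    already kept (as in the proof of Theorem 3.2.11)."""
    basis = []
    for v in vectors:
        candidate = basis + [v]
        if np.linalg.matrix_rank(np.column_stack(candidate)) == len(candidate):
            basis.append(v)
    return basis

# Four vectors in R^3 spanning a 2-dimensional subspace.
u = [np.array([1.0, 0.0, 0.0]),
     np.array([2.0, 0.0, 0.0]),   # multiple of the first: discarded
     np.array([0.0, 1.0, 0.0]),
     np.array([1.0, 1.0, 0.0])]   # sum of the two kept vectors: discarded

basis = extract_basis(u)
print(len(basis))  # 2: a basis of the spanned subspace
```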