
Proof: Suppose first that $\{x_1, \cdots, x_n\}$ is linearly independent. If
$$x_k = \sum_{j \neq k} c_j x_j,$$
then $0 = 1x_k + \sum_{j \neq k} (-c_j) x_j$, a nontrivial linear combination, contrary to assumption. This shows that if the set is linearly independent, then none of the vectors is a linear combination of the others.

Now suppose no vector is a linear combination of the others. Is $\{x_1, \cdots, x_n\}$ linearly independent? If it is not, there exist scalars $c_i$, not all zero, such that $\sum_{i=1}^{n} c_i x_i = 0$. Say $c_k \neq 0$. Then you can solve for $x_k$ as $x_k = \sum_{j \neq k} (-c_j/c_k) x_j$, contrary to assumption. This proves the lemma. ■
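For a concrete illustration of the lemma, consider the vectors $(1,0)$, $(0,1)$, and $(1,1)$ in $\mathbb{R}^2$. The third is a linear combination of the first two, $(1,1) = (1,0) + (0,1)$, and rearranging gives the nontrivial dependence
$$1(1,1) + (-1)(1,0) + (-1)(0,1) = 0,$$
exactly as in the first half of the proof. By contrast, $\{(1,0),(0,1)\}$ is linearly independent, and indeed neither vector is a scalar multiple of the other.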

The following is called the exchange theorem.

Theorem 4.2.3 If
$$\operatorname{span}(u_1, \cdots, u_r) \subseteq \operatorname{span}(v_1, \cdots, v_s) \equiv V$$
and $\{u_1, \cdots, u_r\}$ are linearly independent, then $r \leq s$.
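For instance, the theorem implies that any three vectors in $\mathbb{R}^2 = \operatorname{span}(e_1, e_2)$ must be linearly dependent, since a linearly independent list contained in the span of $s = 2$ vectors can have length at most $2$.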

Proof: Suppose $r > s$. Let $F_p$ denote the first $p$ vectors in $\{u_1, \cdots, u_r\}$, with $F_0$ the empty set. Let $E_p$ denote a finite list of vectors from $\{v_1, \cdots, v_s\}$ and let $|E_p|$ denote the number of vectors in the list. Note that, by assumption, $\operatorname{span}(F_p, v_1, \cdots, v_s) = V$ for every $p$, so at least one such list exists. For $0 \leq p \leq s$, let $E_p$ have the property that $\operatorname{span}(F_p, E_p) = V$ with $|E_p|$ as small as possible for this to happen. If $|E_p| = 0$, then $\operatorname{span}(F_p) = V$, which would imply that, since $r > s \geq p$, $u_r \in \operatorname{span}(F_s)$, contradicting the linear independence of $\{u_1, \cdots, u_r\}$. Assume then that $|E_p| > 0$ for each $p \leq s$. Then $u_{p+1} \in \operatorname{span}(F_p, E_p)$ and so there are constants $c_1, \cdots, c_p$ and $d_1, \cdots, d_m$ such that
$$u_{p+1} = \sum_{i=1}^{p} c_i u_i + \sum_{j=1}^{m} d_j z_j$$
for $\{z_1, \cdots, z_m\} = E_p \subseteq \{v_1, \cdots, v_s\}$. Not all the $d_j$ can equal zero because this would violate the linear independence of $\{u_1, \cdots, u_r\}$. Therefore, you can solve for one of the $z_k$ as a linear combination of $\{u_1, \cdots, u_{p+1}\}$ and the other $z_j$. Thus you can change $F_p$ to $F_{p+1}$ and include one fewer vector in $E_{p+1}$, with $\operatorname{span}(F_{p+1}, E_{p+1}) = V$ and so $|E_{p+1}| < |E_p|$. Since $|E_0| \leq s$ and $|E_p|$ strictly decreases at each step, after at most $s$ steps $|E_p| = 0$ after all, and so a contradiction results.
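To see the exchange step concretely, take $V = \mathbb{R}^2$ with $v_1 = (1,0)$, $v_2 = (0,1)$, and the single independent vector $u_1 = (1,1)$. Start with $F_0 = \emptyset$ and $E_0 = \{v_1, v_2\}$. Since $u_1 = 1 \cdot v_1 + 1 \cdot v_2$ with both coefficients nonzero, you can solve for $v_1 = u_1 - v_2$ and exchange it out, taking $F_1 = \{u_1\}$ and $E_1 = \{v_2\}$. Indeed $\operatorname{span}((1,1),(0,1)) = \mathbb{R}^2$ and $|E_1| = 1 < 2 = |E_0|$.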

Alternate proof: Recall from linear algebra that if $A$ is an $m \times n$ matrix with $m < n$, so there are more columns than rows, then there exists a nonzero solution $x$ to the equation $Ax = 0$. Recall why this was: there must be free variables. Then by assumption, $u_j = \sum_{i=1}^{s} a_{ij} v_i$. If $s < r$, then the matrix $(a_{ij})$ has more columns than rows and so there exists a nonzero vector $x \in \mathbb{F}^r$ such that $\sum_{j=1}^{r} a_{ij} x_j = 0$ for each $i$. Then consider the following.

$$\sum_{j=1}^{r} x_j u_j = \sum_{j=1}^{r} x_j \sum_{i=1}^{s} a_{ij} v_i = \sum_{i} \sum_{j} a_{ij} x_j v_i = \sum_{i} 0 v_i = 0$$

and since not all $x_j = 0$, this contradicts the independence of $\{u_1, \cdots, u_r\}$. ■
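For a concrete instance of this argument, suppose $s = 2$, $r = 3$, and $u_1 = v_1$, $u_2 = v_2$, $u_3 = v_1 + v_2$. Then
$$(a_{ij}) = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix},$$
a $2 \times 3$ matrix, and $x = (1, 1, -1)$ is a nonzero solution of $Ax = 0$. Accordingly $1 \cdot u_1 + 1 \cdot u_2 - 1 \cdot u_3 = 0$, a nontrivial dependence among $u_1, u_2, u_3$, just as the proof predicts.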

Definition 4.2.4 A finite set of vectors $\{x_1, \cdots, x_r\}$ is a basis for a vector space $V$ if
$$\operatorname{span}(x_1, \cdots, x_r) = V$$
and $\{x_1, \cdots, x_r\}$ is linearly independent. Thus if $v \in V$ there exist unique scalars $v_1, \cdots, v_r$ such that $v = \sum_{i=1}^{r} v_i x_i$. (Uniqueness holds because if $v = \sum_{i=1}^{r} v_i x_i = \sum_{i=1}^{r} w_i x_i$, then $\sum_{i=1}^{r} (v_i - w_i) x_i = 0$, and linear independence forces $v_i = w_i$ for each $i$.) These scalars are called the components of $v$ with respect to the basis $\{x_1, \cdots, x_r\}$, and $\{x_1, \cdots, x_r\}$ is said to “span” $V$.
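For example, $\{(1,0),(1,1)\}$ is a basis for $\mathbb{R}^2$: the two vectors are linearly independent, and any $(a,b)$ equals $(a-b)(1,0) + b(1,1)$, so they span $\mathbb{R}^2$. Thus the components of $v = (3,5)$ with respect to this basis are $-2$ and $5$, since
$$(3,5) = -2(1,0) + 5(1,1).$$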
