60 CHAPTER 2. LINEAR TRANSFORMATIONS

where $b$ is chosen to satisfy the equation
$$a_1 b + \sum_{k=2}^{n} a_k = 0.$$

Suppose now that the theorem is true for any $m \times n$ matrix with $n > m$ and consider an $(m+1) \times n$ matrix $A$ where $n > m+1$. If the first column of $A$ is $\mathbf{0}$, then you could let $\mathbf{x} = \mathbf{e}_1$ as above. If the first column is not the zero vector, then by doing row operations, the equation $A\mathbf{x} = \mathbf{0}$ can be reduced to the equivalent system

$$A_1 \mathbf{x} = \mathbf{0}$$

where $A_1$ is of the form
$$A_1 = \begin{pmatrix} 1 & \mathbf{a}^T \\ \mathbf{0} & B \end{pmatrix}$$
where $B$ is an $m \times (n-1)$ matrix. Since $n > m+1$, it follows that $n-1 > m$ and so by induction, there exists a nonzero vector $\mathbf{y} \in \mathbb{F}^{n-1}$ such that $B\mathbf{y} = \mathbf{0}$. Then consider the vector

$$\mathbf{x} = \begin{pmatrix} b \\ \mathbf{y} \end{pmatrix}.$$

$A_1 \mathbf{x}$ has for its top entry the expression $b + \mathbf{a}^T \mathbf{y}$. Letting
$$B = \begin{pmatrix} \mathbf{b}_1^T \\ \vdots \\ \mathbf{b}_m^T \end{pmatrix},$$
the $i$th entry of $A_1 \mathbf{x}$ for $i > 1$ is of the form $\mathbf{b}_i^T \mathbf{y} = 0$. Thus if $b$ is chosen to satisfy the equation $b + \mathbf{a}^T \mathbf{y} = 0$, then $A_1 \mathbf{x} = \mathbf{0}$. ■
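For a concrete illustration of the theorem (an example added here, with the matrix chosen for convenience), take $m = 2$ and $n = 3$ and let
$$A = \begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 1 \end{pmatrix}.$$
Since $n = 3 > 2 = m$, a nonzero solution to $A\mathbf{x} = \mathbf{0}$ must exist. The second row gives $x_2 = -x_3$, and substituting into the first row gives $x_1 = -2x_2 - 3x_3 = -x_3$. Taking $x_3 = 1$ yields
$$\mathbf{x} = \begin{pmatrix} -1 \\ -1 \\ 1 \end{pmatrix}, \qquad A\mathbf{x} = \begin{pmatrix} -1 - 2 + 3 \\ -1 + 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$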

2.6 Subspaces and Spans

Definition 2.6.1 Let $\{\mathbf{x}_1, \cdots, \mathbf{x}_p\}$ be vectors in $\mathbb{F}^n$. A linear combination is any expression of the form
$$\sum_{i=1}^{p} c_i \mathbf{x}_i$$
where the $c_i$ are scalars. The set of all linear combinations of these vectors is called $\operatorname{span}(\mathbf{x}_1, \cdots, \mathbf{x}_p)$. A nonempty set $V \subseteq \mathbb{F}^n$ is called a subspace if whenever $\alpha, \beta$ are scalars and $\mathbf{u}$ and $\mathbf{v}$ are vectors of $V$, it follows that $\alpha\mathbf{u} + \beta\mathbf{v} \in V$. That is, $V$ is "closed under the algebraic operations of vector addition and scalar multiplication." The empty set is never a subspace by definition. A linear combination of vectors is said to be trivial if all the scalars in the linear combination equal zero. A set of vectors is said to be linearly independent if the only linear combination of these vectors which equals the zero vector is the trivial linear combination. Thus $\{\mathbf{x}_1, \cdots, \mathbf{x}_p\}$ is called linearly independent if whenever
$$\sum_{k=1}^{p} c_k \mathbf{x}_k = \mathbf{0},$$
it follows that all the scalars $c_k$ equal zero. A set of vectors $\{\mathbf{x}_1, \cdots, \mathbf{x}_p\}$ is called linearly dependent if it is not linearly independent. Thus the set of vectors is linearly dependent if there exist scalars $c_i$, $i = 1, \cdots, p$, not all zero, such that
$$\sum_{k=1}^{p} c_k \mathbf{x}_k = \mathbf{0}.$$
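Definition 2.6.1 can also be tested numerically: the vectors are independent exactly when row reduction of the matrix whose rows are the vectors produces a pivot in every row, i.e. when the rank equals the number of vectors. The sketch below is an illustration added here, not part of the text; the function name and the tolerance are choices made for this example.

```python
def linearly_independent(vectors, tol=1e-12):
    """Return True iff the given vectors (over the reals) are linearly
    independent, decided by Gauss-Jordan elimination: the vectors are
    independent exactly when the rank equals the number of vectors."""
    rows = [list(map(float, v)) for v in vectors]
    num_cols = len(rows[0]) if rows else 0
    rank = 0
    for col in range(num_cols):
        # Find a pivot for this column among the remaining rows.
        pivot = next((r for r in range(rank, len(rows))
                      if abs(rows[r][col]) > tol), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        pv = rows[rank][col]
        rows[rank] = [x / pv for x in rows[rank]]
        # Clear this column in every other row.
        for r in range(len(rows)):
            if r != rank and abs(rows[r][col]) > tol:
                factor = rows[r][col]
                rows[r] = [a - factor * b
                           for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank == len(rows)


print(linearly_independent([(1, 0), (0, 1)]))          # independent
print(linearly_independent([(1, 2), (2, 4)]))          # second = 2 * first
print(linearly_independent([(1, 0), (0, 1), (1, 1)]))  # 3 vectors in F^2
```

The last call illustrates the theorem of the preceding section: any $p > n$ vectors in $\mathbb{F}^n$ are linearly dependent, since the $n \times p$ matrix having them as columns has more columns than rows.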