3.1 Linear Combinations Of Vectors, Independence
The fundamental idea in linear algebra is the following notion of a linear combination.
Definition 3.1.1 Let $x_1, \dots, x_n$ be vectors in a vector space. A finite linear combination of these vectors is a vector of the form $\sum_{j=1}^n a_j x_j$ where the $a_j$ are scalars. In short, it is a sum of scalars times vectors. $\operatorname{span}(x_1, \dots, x_n)$ denotes the set of all linear combinations of the $x_1, \dots, x_n$. More generally, if $S$ is any set of vectors, $\operatorname{span}(S)$ consists of all finite linear combinations of vectors from $S$.
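To make the idea concrete, here is a minimal sketch in Python of how one might test whether a given vector in $\mathbb{R}^n$ lies in the span of others, using least squares; the sample vectors and the helper name `in_span` are illustrative, not from the text.

```python
import numpy as np

def in_span(v, spanning_vectors, tol=1e-10):
    """Test whether v is a linear combination of the given vectors (over R)."""
    A = np.column_stack(spanning_vectors)      # columns are the spanning vectors
    coeffs, *_ = np.linalg.lstsq(A, v, rcond=None)
    return np.allclose(A @ coeffs, v, atol=tol)

v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
print(in_span(np.array([2.0, 3.0, 5.0]), [v1, v2]))  # True: 2*v1 + 3*v2
print(in_span(np.array([0.0, 0.0, 1.0]), [v1, v2]))  # False: no combination works
```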
Definition 3.1.2 Let $V$ be a vector space and $F$ its field of scalars. Then $S \subseteq V$ is said to be linearly independent if whenever $\{v_1, \dots, v_n\} \subseteq S$ with the $v_i$ distinct, there is only one way to have a linear combination of these vectors equal $0$, and this is to have each $c_i = 0$. More succinctly, if $\sum_{i=1}^n c_i v_i = 0$ then each $c_i = 0$. A set $S \subseteq V$ is linearly dependent if it is not linearly independent. That is, there are distinct vectors $\{v_1, \dots, v_n\} \subseteq S$ and scalars $c_i$ not all zero such that $\sum_{i=1}^n c_i v_i = 0$.
The following is a useful equivalent description of what it means to be independent.
Proposition 3.1.3 A set of vectors S is independent if and only if no vector is a linear combination
of the others.
Proof: ⇒ Suppose $S$ is linearly independent. Could you have $v_k = \sum_{i \neq k} c_i v_i$ for some choice of distinct vectors $v_i \in S$? No. This is not possible because if the above holds, then you would have
$$v_k - \sum_{i \neq k} c_i v_i = 0,$$
a linear combination of distinct vectors of $S$ equal to $0$ in which not all scalars are zero (the coefficient of $v_k$ is $1$), in contradiction to the assumption that $S$ is linearly independent.
⇐ Suppose now that no vector in $S$ is a linear combination of the others. Suppose $\sum_{i=1}^n c_i u_i = 0$ where each $u_i \in S$. It is desired to show that whenever this happens, each $c_i = 0$. Could any of the $c_i$ be nonzero? No. If $c_k \neq 0$, then you would have
$$u_k = \sum_{i \neq k} \left( -\frac{c_i}{c_k} \right) u_i,$$
showing that one can obtain $u_k$ as a linear combination of the other vectors after all. It follows that all $c_i = 0$ and so $S$ is linearly independent. ■
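For vectors in $\mathbb{R}^n$, independence can be checked numerically: a set is independent exactly when the matrix having those vectors as columns has rank equal to the number of vectors. A small sketch; the example vectors are illustrative.

```python
import numpy as np

def is_independent(vectors):
    """Vectors in R^n are independent iff the matrix they form has full column rank."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

u = np.array([1.0, 2.0, 0.0])
v = np.array([0.0, 1.0, 1.0])
print(is_independent([u, v]))           # True
print(is_independent([u, v, u + 2*v]))  # False: third vector is a combination of the others
```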
Example 3.1.4 Determine whether the real valued functions defined on $\mathbb{R}$ given by the polynomials $x^2 + 2x + 1$, $x^2 + 2x$, $x^2 + x$ are linearly independent.

Suppose $a\left(x^2 + 2x + 1\right) + b\left(x^2 + 2x\right) + c\left(x^2 + x\right) = 0$ for all $x$, then differentiate both sides to obtain
$$a(2x + 2) + b(2x + 2) + c(2x + 1) = 0.$$
Now differentiate again.
$$2a + 2b + 2c = 0.$$
In the second equation, let $x = -1$. Then $-c = 0$ so $c = 0$. Thus the three equations become
$$a\left(x^2 + 2x + 1\right) + b\left(x^2 + 2x\right) = 0, \quad a(2x + 2) + b(2x + 2) = 0, \quad 2a + 2b = 0.$$
Now let $x = 0$ in the top equation to find that $a = 0$. Then from the bottom equation, it follows that $b = 0$ also. Thus the three functions are linearly independent.
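The same conclusion can be checked symbolically; a short sketch using sympy, assuming the three polynomials above:

```python
import sympy as sp

x, a, b, c = sp.symbols('x a b c')
combo = sp.expand(a*(x**2 + 2*x + 1) + b*(x**2 + 2*x) + c*(x**2 + x))

# The combination is the zero function iff every coefficient in x vanishes.
eqs = [sp.Eq(coeff, 0) for coeff in sp.Poly(combo, x).all_coeffs()]
print(sp.solve(eqs, [a, b, c]))  # {a: 0, b: 0, c: 0}: the polynomials are independent
```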
The main result is the following, called the replacement or exchange theorem. Its proof uses the argument of the second half of the above proposition repeatedly.
Theorem 3.1.5 Let $\{u_1, \dots, u_r\}$ and $\{v_1, \dots, v_s\}$ be subsets of a vector space $V$ with field of scalars $F$, suppose $\{u_1, \dots, u_r\}$ is linearly independent, and suppose each $u_i \in \operatorname{span}(v_1, \dots, v_s)$. Then $r \le s$. In words, linearly independent sets are no longer than spanning sets.
Proof: Say $r > s$. By assumption, $u_1 = \sum_{i=1}^s b_i v_i$. Not all of the $b_i$ can equal $0$ because if this were so, you would have $u_1 = 0$, which would violate the assumption that $\{u_1, \dots, u_r\}$ is linearly independent. You would have the nontrivial linear combination $1 u_1 = 0$ since $u_1 = 0$. Since some $b_{i_1} \neq 0$, one can solve the above equation for $v_{i_1}$. Thus some $v_i$, say $v_{i_1}$, is a linear combination of the vector $u_1$ along with the $v_j$ for $j \neq i_1$. It follows that the span of $\left\{u_1, v_1, \dots, \hat{v}_{i_1}, \dots, v_s\right\}$ includes each of the $v_j$ and hence each of the $u_i$; the hat indicates that $v_{i_1}$ has been omitted from the list of vectors. Now suppose each
$$u_i \in \operatorname{span}\left(u_1, \dots, u_k, v_1, \dots, \hat{v}_{i_1}, \dots, \hat{v}_{i_k}, \dots, v_s\right)$$
where the vectors $v_{i_1}, \dots, v_{i_k}$ have been omitted, for $k \le s$. Then there are scalars $c_i$ and $d_j$ such that
$$u_{k+1} = \sum_{i=1}^k c_i u_i + \sum_{j \notin \{i_1, \dots, i_k\}} d_j v_j.$$
By the assumption that $\{u_1, \dots, u_r\}$ is linearly independent, not all of the $d_j$ can equal $0$. Why? If every $d_j = 0$, then $u_{k+1}$ would be a linear combination of $u_1, \dots, u_k$, which is impossible by Proposition 3.1.3. Therefore, there exists an index $i_{k+1}$ with $d_{i_{k+1}} \neq 0$. Hence one can solve for $v_{i_{k+1}}$ as a linear combination of $u_1, \dots, u_{k+1}$ and the remaining $v_j$. Thus we can replace this $v_{i_{k+1}}$ by a linear combination of these vectors, and so the $u_j$ are all contained in
$$\operatorname{span}\left(u_1, \dots, u_{k+1}, v_1, \dots, \hat{v}_{i_1}, \dots, \hat{v}_{i_{k+1}}, \dots, v_s\right).$$
Continuing this replacement process, it follows that since $r > s$, one can eliminate all of the vectors $v_1, \dots, v_s$ and obtain that the $u_i$ are contained in $\operatorname{span}(u_1, \dots, u_s)$. But this is impossible because then you would have $u_{s+1} \in \operatorname{span}(u_1, \dots, u_s)$, which is impossible since these vectors are linearly independent by Proposition 3.1.3. It follows that $r \le s$. ■
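Here is a numerical illustration of the theorem's content (not of the proof itself): if each $u_i$ lies in the span of $s$ vectors, any independent subset of the $u_i$ has at most $s$ elements, which shows up as a rank bound. The vectors below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three spanning vectors in R^5, so s = 3.
V = rng.standard_normal((5, 3))

# Five vectors u_1, ..., u_5, each a linear combination of the columns of V.
U = V @ rng.standard_normal((3, 5))

# rank(U) is the size of the largest independent subset of the u_i; it cannot exceed s = 3.
print(np.linalg.matrix_rank(U))  # at most 3
```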
Next is the definition of dimension and basis of a vector space.
Definition 3.1.6 Let V be a vector space with field of scalars F. A subset S of V is a basis for V if
- $\operatorname{span}(S) = V$
- S is linearly independent.
The plural of basis is bases. It is this way to avoid hissing when referring to it.
The dimension of a vector space is the number of vectors in a basis. A vector space is finite dimensional
if it equals the span of some finite set of vectors.
Lemma 3.1.7 Let S be a linearly independent set of vectors in a vector space V. Suppose $v \notin \operatorname{span}(S)$. Then $S \cup \{v\}$ is also a linearly independent set of vectors.

Proof: Suppose $\{u_1, \dots, u_n\}$ is a finite subset of $S$ and
$$av + \sum_{i=1}^n b_i u_i = 0$$
where $a$ and the $b_i$ are scalars. Does it follow that each of the $b_i$ equals zero and that $a = 0$? If so, then $S \cup \{v\}$ is indeed linearly independent. First note that $a = 0$ since if not, you could write
$$v = \sum_{i=1}^n \left( -\frac{b_i}{a} \right) u_i \in \operatorname{span}(S),$$
contrary to the assumption that $v \notin \operatorname{span}(S)$. Hence you have $a = 0$ and also $\sum_{i=1}^n b_i u_i = 0$. But S is linearly independent and so by assumption each $b_i = 0$. ■
Proposition 3.1.8 Let V be a finite dimensional nonzero vector space with field of scalars F. Then it has a basis, and also any two bases have the same number of vectors, so the above definition of dimension is well defined.
Proof: Pick $u_1 \neq 0$. If $\operatorname{span}(u_1) = V$, then this is a basis. If not, there exists $u_2 \notin \operatorname{span}(u_1)$. Then by Lemma 3.1.7, $\{u_1, u_2\}$ is linearly independent. If $\operatorname{span}(u_1, u_2) = V$, stop. You have a basis. Otherwise, there exists $u_3 \notin \operatorname{span}(u_1, u_2)$. Then by Lemma 3.1.7, $\{u_1, u_2, u_3\}$ is linearly independent. Continue this way. Eventually the process yields $\{u_1, \dots, u_n\}$ which is linearly independent and $\operatorname{span}(u_1, \dots, u_n) = V$. Otherwise there would exist a linearly independent set of $k$ vectors for all $k$. However, by assumption, there is a finite set of vectors $\{v_1, \dots, v_s\}$ with $V = \operatorname{span}(v_1, \dots, v_s)$. Therefore, by Theorem 3.1.5, $k \le s$ and the process must stop. Thus there is a basis. If $\{u_1, \dots, u_r\}$ and $\{v_1, \dots, v_s\}$ are two bases, then since they both span $V$ and are both linearly independent, it follows from Theorem 3.1.5 that $r \le s$ and $s \le r$. Hence $r = s$. ■
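For subspaces of $\mathbb{R}^n$ given by a finite spanning set, the proof's greedy process can be imitated directly: keep a vector only if it is not already in the span of those kept so far, detected here by a rank increase. A minimal sketch; the function name `greedy_basis` and the sample vectors are illustrative.

```python
import numpy as np

def greedy_basis(vectors):
    """Select a basis of span(vectors), keeping each vector not in the span of those kept."""
    basis = []
    for v in vectors:
        candidate = np.column_stack(basis + [v])
        # Rank increases exactly when v lies outside the span of the current basis.
        if np.linalg.matrix_rank(candidate) > len(basis):
            basis.append(v)
    return basis

spanning = [np.array([1.0, 0.0, 1.0]),
            np.array([2.0, 0.0, 2.0]),   # dependent on the first, gets skipped
            np.array([0.0, 1.0, 0.0])]
print(len(greedy_basis(spanning)))  # 2: the span is two dimensional
```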
As a specific example, consider $F^n$ as the vector space. As mentioned above, these are the mappings from $\{1, \dots, n\}$ to the field $F$. It was shown in Example 3.0.3 that this is indeed a vector space with field of scalars $F$. We usually think of this $F^n$ as the set of ordered $n$-tuples $(x_1, \dots, x_n)$ with addition and scalar multiplication defined as
$$(x_1, \dots, x_n) + (y_1, \dots, y_n) \equiv (x_1 + y_1, \dots, x_n + y_n), \quad c(x_1, \dots, x_n) \equiv (cx_1, \dots, cx_n).$$
Also, when referring to vectors in $F^n$, it is customary to denote them as bold faced letters, a convention I will begin to observe at this point. It is also more convenient to write these vectors in $F^n$ as columns of numbers. Thus
$$\mathbf{x} = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}.$$
The notions of span, independence, and dimension are especially concrete in this setting.
Observation 3.1.9 $F^n$ has dimension $n$. To see this, note that a basis is $\mathbf{e}_1, \dots, \mathbf{e}_n$ where $\mathbf{e}_i$ is the vector in $F^n$ which has a $1$ in the $i$th position and a zero everywhere else. Indeed,
$$\mathbf{x} = \sum_{i=1}^n x_i \mathbf{e}_i,$$
and if $\sum_{i=1}^n x_i \mathbf{e}_i = \mathbf{0}$, then $(x_1, \dots, x_n) = (0, \dots, 0)$, so each $x_i$ is zero. Thus this set of vectors is a spanning set and is linearly independent so it is a basis. There are $n$ of these vectors and so the dimension of $F^n$ is indeed $n$.
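In code, the standard basis vectors are the columns of the identity matrix, and the expansion of a vector in this basis is immediate; a tiny illustrative check in $\mathbb{R}^4$:

```python
import numpy as np

n = 4
E = np.eye(n)                 # columns are e_1, ..., e_n
x = np.array([3.0, -1.0, 0.0, 2.0])

# x is the combination sum_i x_i e_i, and the e_i are independent (rank n).
print(np.allclose(E @ x, x))          # True
print(np.linalg.matrix_rank(E) == n)  # True
```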
There is a fundamental observation about linear combinations of vectors in $F^n$ which is stated next.

Theorem 3.1.10 Let $\mathbf{a}_1, \dots, \mathbf{a}_n$ be vectors in $F^m$ where $m < n$. Then there exist scalars $x_1, \dots, x_n$, not all equal to zero, such that $\sum_{k=1}^n x_k \mathbf{a}_k = \mathbf{0}$.
Proof: If the conclusion were not so, then by definition, $\{\mathbf{a}_1, \dots, \mathbf{a}_n\}$ would be independent. However, there is a spanning set for $F^m$ with only $m$ vectors, namely $\mathbf{e}_1, \dots, \mathbf{e}_m$, and $m < n$, contrary to Theorem 3.1.5. Since these vectors cannot be independent, they must be dependent, which is the conclusion of the theorem. ■
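Over $\mathbb{R}$ or $\mathbb{Q}$ such scalars can be computed explicitly as a nullspace vector; a short sketch with sympy, using three illustrative vectors in $F^2$ (so $m = 2 < 3 = n$):

```python
import sympy as sp

# Columns are a_1, a_2, a_3 in Q^2; since 2 < 3 the columns must be dependent.
A = sp.Matrix([[1, 0, 2],
               [0, 1, 3]])

x = A.nullspace()[0]     # a nonzero vector with A*x = 0
print(x.T)               # Matrix([[-2, -3, 1]])
print(A * x)             # Matrix([[0], [0]])
```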