A subspace is a set of vectors with the property that linear combinations of these vectors remain in the set. Geometrically, subspaces are like lines and planes which contain the origin. More precisely, the following definition is the right way to think of this.
Definition 6.1.1 Let V be a nonempty collection of vectors in Fn. Then V is called a subspace if whenever α,β are scalars and u,v are vectors in V, the linear combination αu + βv is also in V .
There is no substitute for the above definition or equivalent algebraic definition! However, it is sometimes helpful to look at pictures at least initially. The following are four subsets of ℝ2. The first is the shaded area between two lines which intersect at the origin, the second is a line through the origin, the third is the union of two lines through the origin, and the last is the region between two rays from the origin. Note that in the last, multiplication of a vector in the set by a nonnegative scalar results in a vector in the set as does the sum of two vectors in the set. However, multiplication by a negative scalar does not take a vector in the set to another in the set.
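To see the closure requirement in action, here is a small numerical sketch in Python (an illustration added alongside the text, not part of it; the particular lines y = 2x and y = −x are arbitrary choices standing in for the pictured sets). It samples vectors from a line through the origin and from a union of two lines through the origin and tests whether αu + βv stays in the set.

    import numpy as np

    rng = np.random.default_rng(0)

    # membership tests for two of the sets described above (illustrative choices)
    def on_line(v, tol=1e-9):
        return abs(v[1] - 2.0 * v[0]) < tol              # the line y = 2x

    def on_union(v, tol=1e-9):
        return on_line(v, tol) or abs(v[1] + v[0]) < tol  # union of y = 2x and y = -x

    def sample_line():
        t = rng.normal()
        return np.array([t, 2.0 * t])

    def sample_union():
        t = rng.normal()
        return sample_line() if rng.random() < 0.5 else np.array([t, -t])

    def closed_under_combinations(member, sample, trials=2000):
        # for u, v in the set and scalars a, b, check whether a*u + b*v stays in the set
        for _ in range(trials):
            u, v = sample(), sample()
            a, b = rng.normal(size=2)
            if not member(a * u + b * v):
                return False
        return True

    print(closed_under_combinations(on_line, sample_line))    # True: a subspace
    print(closed_under_combinations(on_union, sample_union))  # False (almost surely): not a subspace

The line passes the test for every sampled combination, while the union fails as soon as a vector on one line is combined with a vector on the other and the result lands off both lines.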
Observe how the above definition determines which of these sets are subspaces and which are not: a set fails to be a subspace as soon as some linear combination of two of its vectors leaves the set. Now here are the two main examples of subspaces.
Theorem 6.1.2 Let A be an m × n matrix. Then Im(A), the image of A, defined as

    Im(A) ≡ {Ax : x ∈ Fn}

is a subspace of Fm. Also ker(A), the kernel or null space of A, also denoted N(A) and defined as

    ker(A) ≡ N(A) ≡ {x ∈ Fn : Ax = 0},

is a subspace of Fn.
Proof: Suppose Ax1 and Ax2 are in Im(A) and α, β are scalars. Then

    αAx1 + βAx2 = A(αx1 + βx2) ∈ Im(A),

this because of the above properties of matrix multiplication. Note that A0 = 0 so 0 ∈ Im(A), and in particular Im(A) is nonempty. Now suppose x, y are both in N(A) and α, β are scalars. Then

    A(αx + βy) = αAx + βAy = α0 + β0 = 0,

so αx + βy ∈ N(A). Thus the condition is satisfied. Of course N(A) is also nonempty because 0 ∈ N(A). ■
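For a concrete matrix, the kernel can be computed directly and both closure properties checked numerically. The following is a minimal sketch in Python (the matrix is an arbitrary illustration and scipy is assumed to be available; scipy.linalg.null_space returns an orthonormal basis of N(A)).

    import numpy as np
    from scipy.linalg import null_space

    A = np.array([[1., 2., 3.],
                  [2., 4., 6.]])        # an illustrative 2 x 3 matrix

    N = null_space(A)                   # columns form a basis of ker(A) = N(A)
    print(N.shape[1])                   # dimension of the kernel (here 2)

    # closure of N(A): a linear combination of kernel vectors is again in the kernel
    x, y = N[:, 0], N[:, 1]
    alpha, beta = 3.0, -1.5
    print(np.allclose(A @ (alpha * x + beta * y), 0))    # True

    # closure of Im(A): alpha*(A x1) + beta*(A x2) equals A(alpha*x1 + beta*x2)
    x1, x2 = np.array([1., 0., 0.]), np.array([0., 1., 1.])
    lhs = alpha * (A @ x1) + beta * (A @ x2)
    rhs = A @ (alpha * x1 + beta * x2)
    print(np.allclose(lhs, rhs))                          # True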
Subspaces are exactly those subsets of Fn which are themselves vector spaces. Recall that a vector space is something which satisfies the vector space axioms on Page 56.
Proposition 6.1.3 Let V be a nonempty collection of vectors in Fn. Then V is a subspace if and only if V is itself a vector space having the same operations as those defined on Fn.
Proof: Suppose first that V is a subspace. It is obvious that all the algebraic laws hold on V because it is a subset of Fn and they hold on Fn. Thus u + v = v + u along with the other axioms. Does V contain 0? Yes, because it contains 0u = 0. Are the operations defined on V? That is, when you add vectors of V do you get a vector in V? When you multiply a vector in V by a scalar, do you get a vector in V? Yes. This is contained in the definition. Does every vector in V have an additive inverse? Yes, because −v = (−1)v ∈ V.
Next suppose V is a vector space. Then by definition, it is closed with respect to linear combinations. Hence it is a subspace. ■
There is a fundamental result in the case where m < n. In this case, the matrix A of the linear transformation is wider than it is tall; it has more columns than rows.

Theorem 6.1.4 Let A be an m × n matrix where m < n. Then there exists a nonzero vector x ∈ Fn such that Ax = 0.
Proof: First consider the case where A is a 1 × n matrix for n > 1. Say

    A = ( a1 a2 ⋯ an ).

If a1 = 0, consider the vector x = e1. If a1 ≠ 0, let

    x = ( b 1 ⋯ 1 )^T

where b is chosen to satisfy the equation

    a1 b + a2 + ⋯ + an = 0.

In either case x ≠ 0 and Ax = 0.

Suppose now that the theorem is true for any m × n matrix with n > m and consider an (m + 1) × n matrix A where n > m + 1. If the first column of A is the zero vector, then as before the vector x = e1 satisfies Ax = 0. If the first column is not zero, then by doing row operations, the system Ax = 0 has the same solutions as the system A1 x = 0 where A1 is of the form

    A1 = [ 1  a^T ]
         [ 0   B  ]

where B is an m × (n − 1) matrix and a^T is a row vector with n − 1 entries. Since n − 1 > m, the induction hypothesis yields a nonzero vector y having n − 1 entries such that By = 0. Now consider the vector

    x = [ b ]
        [ y ]

A1 x has for its top entry the expression b + a^T y, and its remaining entries are the entries of By = 0. Letting b = −a^T y, it follows that A1 x = 0 and hence Ax = 0, while x ≠ 0 because y ≠ 0. ■
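Numerically, this theorem is easy to observe for any particular short, wide matrix: the right singular vectors belonging to the missing singular values lie in the kernel. A quick sketch (random illustrative matrix, numpy assumed):

    import numpy as np

    rng = np.random.default_rng(1)
    m, n = 2, 4                            # more columns than rows, as in Theorem 6.1.4
    A = rng.normal(size=(m, n))

    # for a full-rank A with m < n, the last n - m rows of Vt (with the default
    # full_matrices=True) form a basis of ker(A), so any one of them works
    _, s, Vt = np.linalg.svd(A)
    x = Vt[-1]
    print(np.linalg.norm(x) > 0, np.allclose(A @ x, 0))   # True True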
Now here is a very fundamental definition.
Definition 6.1.5 Let {u1, ⋯, ur} be a set of vectors in Fn. The set of vectors is linearly independent (or just independent) if the only solution x to the equation

    ( u1 u2 ⋯ ur ) x = 0,

where ( u1 u2 ⋯ ur ) is the n × r matrix having these vectors as columns, is x = 0. In other words, the vectors are independent means that whenever

    x1 u1 + x2 u2 + ⋯ + xr ur = 0,

it follows that each xi = 0. The set of vectors is dependent if it is not independent. Thus Theorem 6.1.4 says that if you have more than n vectors in Fn, this set of vectors will be dependent.
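Independence is exactly a statement about the solutions of Ax = 0 where A has the given vectors as columns, so it can be tested with a rank computation. A brief Python sketch (illustrative vectors, numpy assumed); it also confirms the remark that more than n vectors in Fn must be dependent.

    import numpy as np

    def independent(vectors):
        # columns of A are the given vectors; they are independent exactly when
        # the only solution of A x = 0 is x = 0, i.e. rank(A) == number of columns
        A = np.column_stack(vectors)
        return np.linalg.matrix_rank(A) == A.shape[1]

    print(independent([np.array([1., 0., 0.]),
                       np.array([0., 1., 0.]),
                       np.array([1., 1., 0.])]))   # False: third = first + second

    # four vectors in F^3: dependent, exactly as Theorem 6.1.4 predicts
    rng = np.random.default_rng(2)
    print(independent([rng.normal(size=3) for _ in range(4)]))   # False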
With this preparation, here is a major theorem.
Theorem 6.1.6 Suppose you have vectors {u1, ⋯, ur}, each of which is in span{v1, ⋯, vs}, and suppose also that {u1, ⋯, ur} is linearly independent. Then r ≤ s. In words, a linearly independent set contained in the span of s vectors has no more than s vectors.
Proof: Let ui = ∑_{j=1}^{s} a_{ji} vj. This is merely giving names to the scalars in the linear combination which yields ui. Now suppose that s < r. Then if A is the s × r matrix which has a_{ji} in the jth row and the ith column, it follows from Theorem 6.1.4 that there exists a vector x in Fr such that Ax = 0 but x ≠ 0. However, then

    ∑_{i=1}^{r} xi ui = ∑_{i=1}^{r} xi ∑_{j=1}^{s} a_{ji} vj = ∑_{j=1}^{s} ( ∑_{i=1}^{r} a_{ji} xi ) vj = ∑_{j=1}^{s} 0 vj = 0,

because ∑_{i=1}^{r} a_{ji} xi is the jth entry of Ax = 0. Since not all the xi equal zero, this contradicts the linear independence of {u1, ⋯, ur}. Therefore, r ≤ s. ■
Definition 6.1.7 Let V be a subspace of Fn. Then a set of vectors {u1, ⋯, ur} contained in V is called a basis for V if span{u1, ⋯, ur} = V and {u1, ⋯, ur} is linearly independent.

The next theorem shows that any two bases of a subspace contain the same number of vectors.

Theorem 6.1.8 Suppose {u1, ⋯, ur} and {v1, ⋯, vs} are both bases for the subspace V. Then r = s.
Proof: From Theorem 6.1.6, r ≤ s because each ui is in the span of {v1, ⋯, vs} and {u1, ⋯, ur} is linearly independent. Reversing the roles of the two sets, s ≤ r as well. Therefore, r = s. ■
Definition 6.1.9 Let V be a subspace of Fn. Then the dimension of V is the number of vectors in a basis. This is well defined by Theorem 6.1.8.
Observation 6.1.10 The dimension of Fn is n. This is obvious because if x ∈ Fn, where x = ( x1 ⋯ xn )^T, then

    x = ∑_{i=1}^{n} xi ei,

so Fn = span{e1, ⋯, en}. The vectors e1, ⋯, en are also linearly independent, because the vector ∑_{i=1}^{n} xi ei has xi in the ith position and so it equals 0 only when each xi = 0. Thus {e1, ⋯, en} is a basis for Fn having n vectors.
The next lemma says that if you have a vector not in the span of a linearly independent set, then you can add it in and the resulting longer list of vectors will still be linearly independent.

Lemma 6.1.11 Suppose {u1, ⋯, uk} is linearly independent and v is a vector which is not in span{u1, ⋯, uk}. Then {u1, ⋯, uk, v} is also linearly independent.
Proof: Suppose ∑_{i=1}^{k} ci ui + dv = 0. It is required to verify that each ci = 0 and that d = 0. But if d ≠ 0, then you can solve for v as a linear combination of the vectors {u1, ⋯, uk},

    v = −∑_{i=1}^{k} (ci / d) ui,

contrary to assumption. Therefore, d = 0. But then ∑_{i=1}^{k} ci ui = 0 and the linear independence of {u1, ⋯, uk} implies each ci = 0 also. ■
It turns out that every subspace equals the span of some vectors. This is the content of the next theorem.

Theorem 6.1.12 Let V be a nonzero subspace of Fn. Then V has a basis.
Proof: Pick a nonzero vector of V, u1. If V = span{u1}, then {u1} is a basis and the proof is complete. If not, there exists u2 ∈ V which is not in span{u1}, and by Lemma 6.1.11, {u1, u2} is linearly independent. If V = span{u1, u2}, this is a basis. If not, pick u3 ∈ V not in span{u1, u2}; then {u1, u2, u3} is linearly independent. Continue this way. The process must stop after at most n steps because, by Theorem 6.1.4, a linearly independent set in Fn cannot contain more than n vectors. When it stops, the linearly independent set obtained spans V and is therefore a basis for V. ■
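This proof is really a procedure: keep adjoining vectors not yet in the span of those already chosen. A rough numerical version of the same idea is sketched below in Python (an illustration only; here the candidate vectors are drawn from a given list spanning V, and membership in a span is tested by comparing matrix ranks).

    import numpy as np

    def greedy_basis(candidates, tol=1e-10):
        # follows the proof of Theorem 6.1.12: scan candidate vectors from V and
        # keep those not already in the span of the vectors chosen so far
        basis = []
        for v in candidates:
            trial = basis + [v]
            A = np.column_stack(trial)
            if np.linalg.matrix_rank(A, tol=tol) == len(trial):
                basis = trial              # v was outside the current span, keep it
        return basis

    # V = span of these vectors (the third is the sum of the first two)
    cols = [np.array([1., 0., 1.]),
            np.array([0., 1., 1.]),
            np.array([1., 1., 2.]),
            np.array([0., 0., 0.])]
    B = greedy_basis(cols)
    print(len(B))          # 2: the dimension of V

Because of Lemma 6.1.11, the list kept by the loop is linearly independent at every stage, and when the loop ends its span contains every candidate vector, hence all of V.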
The following is a fundamental result.
Theorem 6.1.13 If V is a subspace of Fn and the dimension of V is m, then m ≤ n. Also, if {u1, ⋯, uk} is a linearly independent set of vectors of V with k < m, then it can be extended to a basis of V, and if k = m, then {u1, ⋯, um} is already a basis for V.
Proof: If the dimension of V is m, then it has a basis of m vectors. It follows that m ≤ n because if not, you would have an independent set of vectors which is longer than a spanning set of vectors: the m basis vectors of V are independent and all lie in Fn = span{e1, ⋯, en}, so Theorem 6.1.6 gives m ≤ n.

Next, if {u1, ⋯, uk} is a linearly independent subset of V with k < m, it cannot span V, since otherwise it would be a basis for V having fewer than m vectors, contrary to Theorem 6.1.8. Hence there is a vector u_{k+1} of V which is not in span{u1, ⋯, uk}, and by Lemma 6.1.11 the longer set {u1, ⋯, u_{k+1}} is still linearly independent. Continuing this way, as in the proof of Theorem 6.1.12, the set is eventually extended to a linearly independent set which spans V, that is, to a basis of V.

Finally, if k = m, the vectors {u1, ⋯, um} must span V. If they did not, the above argument would produce a linearly independent set of m + 1 vectors contained in V, which is the span of the m vectors of a basis; this contradicts Theorem 6.1.6. Hence {u1, ⋯, um} is a basis for V. ■
Definition 6.1.14 The rank of a matrix A is the dimension of Im(A).
Observation 6.1.15 When you have a matrix A and you do row operations to it, the solutions to the system of equations having augmented matrix ( A | 0 ) do not change. Consequently, the columns of A satisfy exactly the same linear relations as the corresponding columns of its row reduced echelon form. Consider, for example, the matrix 6.1 and its row reduced echelon form, the matrix 6.2. In 6.2 every column is in the span of the first two columns, and so the rank of 6.1 is 2. You can think of 6.2 as the row reduced version of several systems of equations, namely those whose augmented matrices consist of the first two columns of 6.1 together with each one of the remaining columns in turn.
In each case, you can obtain the third column of the augmented matrix as a linear combination of the first two. Thus the last three columns of 6.1 are linear combinations of the first two columns of 6.1. Therefore, any linear combination of the columns of 6.1 can also be written as a linear combination of the first two columns of 6.1. In other words, the span of the columns of 6.1 equals the span of the first two columns of 6.1. Also, from 6.2, we can see that the first two columns of 6.1 are independent. Therefore, these two columns are a basis for Im(A), the span of the columns of 6.1, and the rank of the matrix is indeed 2.
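For a concrete computation of this kind, here is a Python sketch using sympy on an illustrative matrix (not the matrix 6.1 of the text) that has the same features: five columns, rank 2, and last three columns built from the first two. Matrix.rref() returns the row reduced echelon form together with the indices of the pivot columns, and the pivot columns of the original matrix give a basis for Im(A).

    import sympy as sp

    # a stand-in example: five columns, rank 2, columns 3-5 are combinations of columns 1-2
    A = sp.Matrix([[1, 2, 3, 0, 5],
                   [2, 3, 5, 1, 8],
                   [1, 1, 2, 1, 3]])

    R, pivots = A.rref()     # row reduced echelon form and pivot column indices
    print(pivots)            # (0, 1): the first two columns of A are a basis for Im(A)
    print(A.rank())          # 2
    print(R)                 # nonpivot columns of R give the linear combinations

The nonpivot columns of the reduced form display the coefficients expressing the corresponding columns of A in terms of the pivot columns, exactly as in the discussion of 6.1 and 6.2 above.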
Similar considerations apply to determining whether some vectors are independent. Remember the definition. To determine whether some vectors are independent, make them the columns of a matrix A and determine the solution set to Ax = 0. If there is only the zero solution, then the vectors are independent. If there are nonzero solutions, then these vectors are not independent.
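When the vectors are dependent, any nonzero solution of Ax = 0 exhibits an explicit dependence relation among them. A short sketch of this test (sympy again, with illustrative vectors):

    import sympy as sp

    # put the vectors to be tested into the columns of A
    A = sp.Matrix([[1, 0, 2],
                   [0, 1, 3],
                   [1, 1, 5]])

    kernel = A.nullspace()      # basis for the solution set of A x = 0
    if not kernel:
        print("independent: the only solution of A x = 0 is x = 0")
    else:
        x = kernel[0]
        print("dependent, e.g.", x.T, "since", (A * x).T, "= 0")

Here the kernel vector ( −2 −3 1 )^T records that twice the first column plus three times the second column equals the third, so the three columns are dependent.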