Suppose you have the following system of equations.
$$
\begin{array}{c}
a_{11}x_{1}+a_{12}x_{2}+a_{13}x_{3}+a_{14}x_{4}=b_{1}\\
a_{21}x_{1}+a_{22}x_{2}+a_{23}x_{3}+a_{24}x_{4}=b_{2}\\
a_{31}x_{1}+a_{32}x_{2}+a_{33}x_{3}+a_{34}x_{4}=b_{3}
\end{array}
$$
You could write it in terms of matrix multiplication as follows.
$$
\begin{pmatrix}
a_{11} & a_{12} & a_{13} & a_{14}\\
a_{21} & a_{22} & a_{23} & a_{24}\\
a_{31} & a_{32} & a_{33} & a_{34}
\end{pmatrix}
\begin{pmatrix} x_{1}\\ x_{2}\\ x_{3}\\ x_{4} \end{pmatrix}
=
\begin{pmatrix} b_{1}\\ b_{2}\\ b_{3} \end{pmatrix}
$$
You could also write it in terms of vector addition as follows.
$$
x_{1}\begin{pmatrix} a_{11}\\ a_{21}\\ a_{31} \end{pmatrix}+
x_{2}\begin{pmatrix} a_{12}\\ a_{22}\\ a_{32} \end{pmatrix}+
x_{3}\begin{pmatrix} a_{13}\\ a_{23}\\ a_{33} \end{pmatrix}+
x_{4}\begin{pmatrix} a_{14}\\ a_{24}\\ a_{34} \end{pmatrix}
=
\begin{pmatrix} b_{1}\\ b_{2}\\ b_{3} \end{pmatrix}
$$
When you find a solution to the system of equations, you are really finding the scalars so that the vector on the right is the above linear combination. In other words, you are finding a linear relationship between the last column and those before it.
We considered writing this as an augmented matrix
$$
\left(\begin{array}{cccc|c}
a_{11} & a_{12} & a_{13} & a_{14} & b_{1}\\
a_{21} & a_{22} & a_{23} & a_{24} & b_{2}\\
a_{31} & a_{32} & a_{33} & a_{34} & b_{3}
\end{array}\right)
$$
and then row reducing it to get a matrix in row reduced echelon form from which it was easy to see the solution, finding the last column as a linear combination of the preceding columns. However, this process of row reduction works just as well to find the fourth column as a linear combination of the first three and the third as a linear combination of the first two, so when you reduce to row reduced echelon form, you are really solving many systems of equations at the same time. The important thing was the observation that the row operations did not change the solution set of the system. However, this could be said differently. The row operations did not change the available scalars in forming the last column as a linear combination of the first four. Similarly, the row operations did not change the available scalars to obtain the fourth column as a linear combination of the first three, and so forth. In other words, if a column is a linear combination of the preceding columns, then after doing row operations, that column will still be the same linear combination of the preceding columns. Thus we have the following significant observation which is stated here as a theorem.
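To make the discussion concrete, here is a small numerical sketch using SymPy (an assumption; any exact-arithmetic tool would do, and the particular augmented matrix is hypothetical):

```python
from sympy import Matrix

# Hypothetical augmented matrix: three equations in four unknowns,
# with the right-hand sides in the last column.
A = Matrix([
    [1, 2, 1, 0, 3],
    [0, 1, 1, 1, 2],
    [1, 3, 2, 1, 5],
])

R, pivots = A.rref()  # row reduced echelon form, pivot column indices

# In R the last column is (-1)*(first column) + 2*(second column),
# and the same relationship already holds between the columns of A:
# the row operations preserved it.
print(R)
print(pivots)
```

Reading off the last column of `R` as a combination of the pivot columns gives a particular solution of the system, exactly as described above.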
Theorem 4.3.1 Row operations preserve all linear relationships between columns.
Now here is a slightly different description of the row reduced echelon form.
Definition 4.3.2 Let e_{i} denote the column vector which has all zero entries except for the i^{th} slot which is one. An m×n matrix is said to be in row reduced echelon form if, in viewing successive columns from left to right, the first nonzero column encountered is e_{1} and if, having encountered e_{1},e_{2},…,e_{k}, the next column is either e_{k+1} or is a linear combination of e_{1},e_{2},…,e_{k}.
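This definition can be turned directly into a column-by-column check; a sketch (the function name `is_rref` is my own):

```python
from sympy import Matrix, eye

def is_rref(M):
    """Check Definition 4.3.2 column by column, left to right.

    Walking the columns, the first nonzero column must be e_1, and once
    e_1, ..., e_k have appeared, the next column must be e_{k+1} or have
    zeros below row k (i.e. be a linear combination of e_1, ..., e_k).
    """
    m, n = M.shape
    k = 0  # number of e_i's encountered so far
    for j in range(n):
        col = M.col(j)
        if k < m and col == eye(m).col(k):            # next column is e_{k+1}
            k += 1
        elif all(col[i] == 0 for i in range(k, m)):   # combination of e_1..e_k
            continue
        else:
            return False
    return True
```

Note the check never has to compute anything: whether a matrix is in row reduced echelon form is visible from the column pattern alone.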
Earlier an algorithm was presented which will produce a matrix in row reduced echelon form. A natural question is whether there is only one row reduced echelon form. In fact, there is only one and this follows easily from the above definition.
Suppose you had two matrices B, C in row reduced echelon form, both obtained from the same matrix A through row operations. Then B and C have zero columns in the same positions, because row operations preserve zero columns. Also B and C have e_{1} in the same position, because its position is that of the first nonzero column of A. Similarly e_{2}, e_{3}, and so forth must be in the same positions because of the above definition: these positions are those of the columns of A which, viewed from left to right, are not linear combinations of the columns before them. As for a column lying after e_{k} and before e_{k+1}, if there is such a column, it is determined by the scalars which express the corresponding column of A as a linear combination of the columns to its left, because all linear relationships between columns are preserved by row operations. Thus B and C must be exactly the same. This is why there is only one row reduced echelon form for a given matrix, and it justifies the use of the definite article when referring to it.
This proves the following theorem.

Theorem 4.3.3 The row reduced echelon form of a matrix is unique. That is, every matrix is row equivalent to exactly one matrix in row reduced echelon form.
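Uniqueness can also be observed computationally; a sketch in SymPy (the matrices E1, E2 are hypothetical invertible matrices standing in for two different sequences of row operations):

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 7],
            [1, 2, 4]])

# A sequence of row operations is the same as multiplying on the left
# by some invertible matrix.  Two different such sequences:
E1 = Matrix([[1, 0, 0], [5, 1, 0], [0, 0, 2]])
E2 = Matrix([[0, 1, 0], [1, 0, 0], [3, 0, 1]])

R = A.rref()[0]
R1 = (E1 * A).rref()[0]
R2 = (E2 * A).rref()[0]
# R, R1, R2 are all the same matrix: the row reduced echelon form of A.
```

Whatever invertible E is used, E·A row reduces to the same matrix as A, which is what the theorem asserts.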
Now from this theorem, we can obtain the following.
Theorem 4.3.4 Let A be an n × n matrix. Then it is invertible if and only if there is a sequence of row operations which produces I.
Proof: ⇒ Since A is invertible, it follows from Theorem 4.2.3 that the columns of A must be independent. Hence no column of A is a linear combination of the preceding columns, and so in the row reduced echelon form for A, the columns must be e_{1},e_{2},…,e_{n}. Since A is n × n, this says the row reduced echelon form of A is I, and so some sequence of row operations produces I.
⇐ Now suppose such a sequence of row operations produces I. Since row operations preserve all linear relationships between columns, and no column of I is a linear combination of the other columns, no column of A is a linear combination of the other columns either. Consequently the columns of A are linearly independent, and by Theorem 4.2.3 again, A is invertible. ■
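A quick check of Theorem 4.3.4 on two small matrices (both hypothetical), again in SymPy:

```python
from sympy import Matrix, eye

A = Matrix([[2, 1],
            [1, 1]])   # det = 1, so A is invertible
B = Matrix([[1, 2],
            [2, 4]])   # second row is twice the first, so B is not

# Row reducing the invertible matrix produces I; the singular one
# cannot be carried to I by any row operations.
print(A.rref()[0])
print(B.rref()[0])
```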
It would be possible to define things like rank in terms of the row reduced echelon form and this is often done. However, in this book, these things will be defined in terms of vector space language.
Definition 4.3.5 Let A be an m×n matrix with entries in a field F. Then rank(A) is defined as the dimension of the span of the columns of A, this span being a subspace of F^{m}.
Now here is a very useful result.
Proposition 4.3.6 Let A be an m×n matrix. Then rank(A) equals the number of pivot columns of A, these being the columns of A which correspond to the columns e_{1},e_{2},…,e_{r} in the row reduced echelon form of A.
Proof: This is obvious if the matrix is already in row reduced echelon form. In this case, the pivot columns consist of e_{1},e_{2},…,e_{r} for some r, and every other column is a linear combination of these, so the pivot columns are a basis for the span of the columns; hence the rank is r, the number of pivot columns. In the general case, Theorem 4.3.1 implies that a set of columns of A is linearly independent, or spans the remaining columns, exactly when the corresponding columns of the row reduced echelon form do. Hence the pivot columns of A are a basis for the span of the columns of A, and rank(A) is again the number of pivot columns. ■
Note that from the description of the row reduced echelon form, the rank is also equal to the number of nonzero rows in the row reduced echelon form.
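Both counts, pivot columns and nonzero rows, can be checked at once; a sketch with a hypothetical matrix whose third row is the sum of the first two:

```python
from sympy import Matrix

A = Matrix([[1, 2, 1, 0],
            [0, 1, 1, 1],
            [1, 3, 2, 1]])   # third row = first row + second row

R, pivots = A.rref()
nonzero_rows = sum(1 for i in range(R.rows) if any(R.row(i)))

# rank(A) = number of pivot columns = number of nonzero rows of R
print(A.rank(), len(pivots), nonzero_rows)
```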