A little later a formula is given for the inverse of a matrix. However, it is not a good way to find the inverse of a matrix. There is a much easier way, and it is this which is presented here. It is also important to note that not all matrices have inverses.
Example 2.1.24 Let A = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}. Does A have an inverse?
One might think A would have an inverse because it does not equal the zero matrix. However,
\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} -1 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}
and if A^{−1} existed, this could not happen, because you could multiply on the left by A^{−1} and conclude that the vector (−1,1)^{T} = (0,0)^{T}. Thus the answer is that A does not have an inverse.
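The computation in the example can be checked directly; the following sketch (using exact rational arithmetic from the standard library, not part of the text) multiplies the matrix of Example 2.1.24 by the vector (−1,1)^{T}:

```python
from fractions import Fraction

# The matrix A of Example 2.1.24 and the nonzero vector (-1, 1)^T.
A = [[Fraction(1), Fraction(1)], [Fraction(1), Fraction(1)]]
v = [Fraction(-1), Fraction(1)]

# Compute the matrix-vector product Av.
Av = [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]
print(Av)  # the zero vector: A sends a nonzero vector to zero,
           # so A cannot have an inverse
```

Since A sends a nonzero vector to the zero vector, no matrix A^{−1} could undo the multiplication.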
Suppose you want to find B such that AB = I. Let
B = \begin{pmatrix} b_1 & \cdots & b_n \end{pmatrix}
Also the i^{th} column of I is
e_i = \begin{pmatrix} 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \end{pmatrix}^{T}
Thus, if AB = I, then b_{i}, the i^{th} column of B, must satisfy the equation Ab_{i} = e_{i}. The augmented matrix for finding b_{i} is (A|e_{i}). Thus, by doing row operations until A becomes I, you end up with (I|b_{i}), where b_{i} is the solution to Ab_{i} = e_{i}. Now the same sequence of row operations works regardless of the right side of the augmented matrix (A|e_{i}), and so you can save trouble by simply doing the following:
(A|I) \xrightarrow{\text{row operations}} (I|B)
and the i^{th} column of B is b_{i}, the solution to Ab_{i} = e_{i}. Thus AB = I.
This is the reason for the following simple procedure for finding the inverse of a matrix. This
procedure is called the Gauss-Jordan procedure. It produces the inverse if the matrix has one.
Actually, it produces the right inverse.
Procedure 2.1.25 Suppose A is an n × n matrix. To find A^{−1} if it exists, form the augmented n × 2n matrix,
(A|I)
and then do row operations until you obtain an n × 2n matrix of the form
(I|B)    (2.18)
if possible. When this has been done, B = A^{−1}. The matrix A has an inverse exactly when it is possible to do row operations and end up with one like 2.18.
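The procedure can be sketched in code. The following minimal implementation (the function name is illustrative, not from the text) row-reduces (A|I) using exact fractions, so that round-off cannot blur the question of whether a pivot is zero:

```python
from fractions import Fraction

def inverse_gauss_jordan(A):
    """Row-reduce (A|I) to (I|B); return B = A^{-1}, or None if A has no inverse."""
    n = len(A)
    # Form the augmented n x 2n matrix (A|I) in exact rational arithmetic.
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Find a row at or below position `col` with a nonzero entry in this column.
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return None  # no pivot available: the left half cannot become I
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot entry becomes 1.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Eliminate the pivot column from every other row.
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    # The left half is now I; the right half is B = A^{-1}.
    return [row[n:] for row in M]
```

For instance, `inverse_gauss_jordan([[1, 2], [3, 4]])` yields [[−2, 1], [3/2, −1/2]], while the matrix of Example 2.1.24 yields None.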
As described above, the following is a description of what you have just done.
A \xrightarrow{R_q R_{q-1} \cdots R_1} I
I \xrightarrow{R_q R_{q-1} \cdots R_1} B
where those R_{i} symbolize row operations. It follows that you could undo what you did by doing the inverse of these row operations in the opposite order. Thus
I \xrightarrow{R_1^{-1} \cdots R_{q-1}^{-1} R_q^{-1}} A
B \xrightarrow{R_1^{-1} \cdots R_{q-1}^{-1} R_q^{-1}} I
Here R^{−1} is the row operation which undoes the row operation R. Therefore, if you form (B|I) and do the inverse of the row operations which produced I from A, in the reverse order, you would obtain (I|A). By the same reasoning above, it follows that A is a right inverse of B, and so
BA = I also. It follows from Proposition 2.1.23 that B = A^{−1}. Thus the procedure produces the
inverse whenever it works.
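The conclusion that B is a two-sided inverse can be illustrated numerically. Below, A is an arbitrary invertible matrix and B is the matrix the procedure produces for it (both chosen for illustration); multiplying in either order gives I:

```python
from fractions import Fraction

def matmul(X, Y):
    """Product of two square matrices of the same size."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# An invertible matrix and the B the Gauss-Jordan procedure produces for it.
A = [[Fraction(1), Fraction(2)], [Fraction(3), Fraction(4)]]
B = [[Fraction(-2), Fraction(1)], [Fraction(3, 2), Fraction(-1, 2)]]

I = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]
print(matmul(A, B) == I)  # AB = I: B is a right inverse
print(matmul(B, A) == I)  # BA = I: B is a left inverse as well
```

Both products equal I, in line with Proposition 2.1.23: a right inverse obtained this way is automatically the inverse.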
If it is possible to do row operations and end up with
A \xrightarrow{\text{row operations}} I,
then the above
argument shows that A has an inverse. Conversely, if A has an inverse, can it be found by the
above procedure? In this case there exists a unique solution x to the equation Ax = y. In fact it
is just x = Ix = A^{−1}y. Thus in terms of augmented matrices, you would expect to
obtain
(A|y) \xrightarrow{\text{row operations}} (I|A^{-1}y)
That is, you would expect to be able to do row operations to A and end up with I.
The details will be explained fully when a more careful discussion is given which is based on
more fundamental considerations. For now, it suffices to observe that whenever the above
procedure works, it finds the inverse.
At this point, you can see there will be no inverse because you have obtained a row of zeros in the left half of the augmented matrix (A|I). Thus there will be no way to obtain I on the left. In other words, the three systems of equations you must solve to find the inverse have no solution. In particular, there is no solution for the first column of A^{−1}, which must solve
A \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}
because a sequence of row operations leads to the impossible equation, 0x + 0y + 0z = −1.
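This failure mode is easy to observe in code. The matrix below is a hypothetical singular 3 × 3 matrix (its rows are linearly dependent), standing in for the matrix of the example; eliminating on the augmented matrix (A|e_1) leaves a row whose left half is all zeros but whose right entry is not:

```python
from fractions import Fraction

# A hypothetical singular 3 x 3 matrix: row 3 = 2*(row 2) - row 1,
# so its rows are linearly dependent.
A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

# Augment with e_1 = (1, 0, 0)^T and run forward elimination.
M = [[Fraction(x) for x in row] + [Fraction(int(i == 0))] for i, row in enumerate(A)]
for col in range(2):
    for r in range(col + 1, 3):
        f = M[r][col] / M[col][col]
        M[r] = [x - f * y for x, y in zip(M[r], M[col])]

print(M[2])  # left half is a row of zeros, right entry is nonzero:
             # the row encodes an impossible equation 0x + 0y + 0z = c, c != 0
```

The last row says 0x + 0y + 0z equals a nonzero number, so Ax = e_1 has no solution and A has no inverse, exactly as in the discussion above.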