All of this depends on the minimum polynomial. It was shown above that this polynomial exists, but how can you find it? In fact, it is not all that hard to find. Recall that if L ∈ ℒ(V,V) where the dimension of V is n, then I, L, L^{2}, ⋅⋅⋅, L^{n^{2}} is linearly dependent, because these are n^{2} + 1 vectors in ℒ(V,V), which has dimension n^{2}. Thus some nontrivial linear combination equals zero. The minimum polynomial was the polynomial p(λ) of smallest degree which is monic and which has p(L) = 0. At this point, we only know that this degree is no more than n^{2}. However, it will be shown later in the proof of the Cayley Hamilton theorem that there exists a monic polynomial q(λ) of degree n with q(L) = 0, so in fact the degree of the minimum polynomial is no more than n.
Another observation to make is that it suffices to find the minimum polynomial for the matrix of the linear transformation taken with respect to any basis. Recall the relationship of this matrix and L.

          L
      V  →  V
  q ↑         ↑ q
    F^{n} → F^{n}
          A

where q is a one to one and onto linear map from F^{n} to V. The diagram says that L ∘ q = q ∘ A, and so L = qAq^{−1}. Thus if p(L) is a polynomial in L,

    p(L) = p(qAq^{−1})

A typical term on the right is of the form

    c_{k}(qAq^{−1})(qAq^{−1})⋅⋅⋅(qAq^{−1}) = q(c_{k}A^{k})q^{−1}    (k factors)

because each interior product q^{−1}q cancels. Thus, applying this to each term and factoring out q and q^{−1},

    p(L) = qp(A)q^{−1}
Recall the convention that A^{0} = I, the identity matrix, and L^{0} = I, the identity linear transformation. Thus p(L) = 0 if and only if p(A) = 0, and so the minimum polynomial for A is exactly the same as the minimum polynomial for L. However, in the case of A, the multiplication is just matrix multiplication, so we can compute with it easily.
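The fact that a polynomial applied to a similarity transform of A equals the same similarity transform applied to p(A) is easy to check numerically. Below is a minimal sketch in Python; the matrices, the invertible Q, and the polynomial 2 − 3λ + λ^{2} are arbitrary choices of mine for illustration.

```python
# Sketch: a polynomial in a similarity transform of A equals the
# similarity transform of p(A).  All matrices are lists of rows.

def matmul(X, Y):
    """Multiply square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_poly(coeffs, A):
    """Evaluate p(A) where coeffs = [a0, a1, ...] means a0 I + a1 A + ..."""
    n = len(A)
    result = [[0.0] * n for _ in range(n)]
    power = [[float(i == j) for j in range(n)] for i in range(n)]  # A^0 = I
    for c in coeffs:
        for i in range(n):
            for j in range(n):
                result[i][j] += c * power[i][j]
        power = matmul(power, A)
    return result

A = [[1.0, 2.0], [0.0, 3.0]]       # arbitrary matrix (illustrative)
Q = [[1.0, 1.0], [0.0, 1.0]]       # invertible change of basis
Qinv = [[1.0, -1.0], [0.0, 1.0]]   # its exact inverse
p = [2.0, -3.0, 1.0]               # p(λ) = 2 − 3λ + λ²

lhs = mat_poly(p, matmul(matmul(Q, A), Qinv))   # polynomial of the transform
rhs = matmul(matmul(Q, mat_poly(p, A)), Qinv)   # transform of the polynomial
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-9
           for i in range(2) for j in range(2))
```

In particular, the left and right sides agree entry by entry, which is the point used in the text: p(L) = 0 exactly when p(A) = 0.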
This shows that it suffices to learn how to find the minimum polynomial for an n × n matrix. I will show how to do this with some examples. The process can be made much more systematic, but I will try to keep the discussion short, because it is often easy to find the minimum polynomial without going through a long computation.
Example 6.2.1 Find the minimum polynomial of

    ( −1  0   6 )
    (  1  1  −3 )
    ( −1  0   4 )
Go right to the definition and use the fact, which will be shown later, that you only need powers of this matrix up through the third in order to get things to work. Thus finding the minimum polynomial involves finding scalars a, b, c, d such that

    aI + bA + cA^{2} + dA^{3} = 0

You could include all nine powers if you want, but there is no point in doing so from what will be presented later. You will be able to find a polynomial of degree no larger than 3 which will work. There is such a solution from the above theory and it is only a matter of finding it. Thus, computing A^{2} and A^{3}, you need to find scalars such that

      ( 1 0 0 )     ( −1 0  6 )     ( −5 0 18 )     ( −13 0  42 )
    a ( 0 1 0 ) + b (  1 1 −3 ) + c (  3 1 −9 ) + d (   7 1 −21 ) = 0
      ( 0 0 1 )     ( −1 0  4 )     ( −3 0 10 )     (  −7 0  22 )
We can take d = 0 and c = 1 and find that a = 2, b = −3. Thus a candidate for the minimum polynomial is

    λ^{2} − 3λ + 2.
Could you have a smaller degree polynomial? No, you could not, because if you took both c and d equal to 0, then you would be forced to have a, b both be zero as well. Hence this must be the minimum polynomial provided the matrix satisfies this equation. However, you can just plug it into the equation and see that it works. If it didn't work, you would simply include another equation in the above computation for a, b, c, d.
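The verification that the matrix of Example 6.2.1 satisfies λ^{2} − 3λ + 2 can be done by hand or with a few lines of Python; nothing is assumed here beyond the matrix from the example.

```python
# Check that the matrix of Example 6.2.1 satisfies A² − 3A + 2I = 0.

def matmul(X, Y):
    """Multiply square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[-1, 0, 6], [1, 1, -3], [-1, 0, 4]]
A2 = matmul(A, A)
I = [[1 if i == j else 0 for j in range(3)] for i in range(3)]

# p(A) = A² − 3A + 2I, computed entry by entry
pA = [[A2[i][j] - 3 * A[i][j] + 2 * I[i][j] for j in range(3)]
      for i in range(3)]
assert pA == [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```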
It is a little tedious, but completely routine to find this minimum polynomial. To be more systematic,
you would take the powers of the matrix and string each of them out into a long n^{2}× 1 vector and make
these the columns of a matrix which would then be row reduced. However, as shown above, you can get
away with less as in the above example, but you need to be sure to check that the matrix satisfies the
equation you come up with.
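The systematic procedure just described can be sketched in Python: vectorize the successive powers of A, and use exact row reduction (rational arithmetic, to avoid roundoff) to find the first power that is a linear combination of the earlier ones. The function names here are my own, not from the text.

```python
from fractions import Fraction

def matmul(X, Y):
    """Multiply square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def vec(M):
    """String a matrix out into one long list (a column of the big system)."""
    return [entry for row in M for entry in row]

def solve_exact(cols, target):
    """Write target as a linear combination of cols, exactly.
    Returns the coefficient list, or None if no solution exists."""
    m, k = len(target), len(cols)
    # augmented matrix [cols | target], one row per equation
    aug = [[Fraction(cols[j][i]) for j in range(k)] + [Fraction(target[i])]
           for i in range(m)]
    pivots, row = [], 0
    for col in range(k):
        piv = next((r for r in range(row, m) if aug[r][col] != 0), None)
        if piv is None:
            continue
        aug[row], aug[piv] = aug[piv], aug[row]
        aug[row] = [x / aug[row][col] for x in aug[row]]
        for r in range(m):
            if r != row and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[row])]
        pivots.append(col)
        row += 1
    # inconsistent if a zero row has a nonzero right-hand side
    if any(all(aug[r][c] == 0 for c in range(k)) and aug[r][k] != 0
           for r in range(m)):
        return None
    coeffs = [Fraction(0)] * k
    for r, c in enumerate(pivots):
        coeffs[c] = aug[r][k]
    return coeffs

def minimum_polynomial(A):
    """Coefficients [a0, a1, ..., 1] of the monic minimum polynomial."""
    n = len(A)
    I = [[Fraction(i == j) for j in range(n)] for i in range(n)]
    powers = [I]
    while True:
        nxt = matmul(powers[-1], A)
        sol = solve_exact([vec(P) for P in powers], vec(nxt))
        if sol is not None:
            # A^k = sum sol[i] A^i, so the min poly is λ^k − sum sol[i] λ^i
            return [-c for c in sol] + [Fraction(1)]
        powers.append(nxt)

A = [[Fraction(x) for x in row] for row in [[-1, 0, 6], [1, 1, -3], [-1, 0, 4]]]
print([int(c) for c in minimum_polynomial(A)])  # → [2, -3, 1], i.e. λ² − 3λ + 2
```

Running this on the matrix of Example 6.2.1 recovers λ^{2} − 3λ + 2, with no need to check the answer afterward: the row reduction itself certifies that no lower-degree polynomial works.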
Now here is an example where F = ℤ_{5} and the arithmetic is in F.
If we pick the top left corners, the middle entry, the bottom right corner, and the entries in the middle of the bottom row, an appropriate augmented matrix is
so it would seem a possible minimum polynomial is obtained by a = 1, b = −1 = 4, c = 0, d = 1. Thus it has degree 3. There cannot be any polynomial of smaller degree because of the first three columns, so it would seem that this should be the minimum polynomial,
    1 + 4λ + λ^{3}
Does it send the matrix to 0? This just involves checking directly whether it does and, in fact, this is the case using the arithmetic of the residue classes mod 5.
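The matrix of this last example is not reproduced above, so as an illustrative stand-in the following checks the polynomial against the companion matrix of λ^{3} + 4λ + 1 over ℤ_{5}, a matrix which by construction has exactly that minimum polynomial; the point is how the mod 5 arithmetic goes.

```python
# Stand-in check over ℤ₅ (the book's matrix is not reproduced here):
# the companion matrix C of λ³ + 4λ + 1 satisfies C³ + 4C + I ≡ 0 (mod 5).

P = 5  # working in the residue classes mod 5

def matmul_mod(X, Y, p=P):
    """Multiply square matrices, reducing each entry mod p."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) % p for j in range(n)]
            for i in range(n)]

# companion matrix of λ³ + 0·λ² + 4λ + 1, using −1 ≡ 4 and −4 ≡ 1 mod 5
C = [[0, 0, 4],
     [1, 0, 1],
     [0, 1, 0]]
C2 = matmul_mod(C, C)
C3 = matmul_mod(C2, C)
I = [[1 if i == j else 0 for j in range(3)] for i in range(3)]

# p(C) = C³ + 4C + I, reduced mod 5
pC = [[(C3[i][j] + 4 * C[i][j] + I[i][j]) % P for j in range(3)]
      for i in range(3)]
assert pC == [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```

The only change from the earlier examples is that every sum of products is reduced mod 5, exactly as the text says: the arithmetic is carried out in F = ℤ_{5}.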