The set of all eigenvalues of an n × n matrix M is denoted by σ(M) and is referred to as the spectrum of M.
Eigenvectors are vectors which are shrunk, stretched, or reflected upon multiplication by a matrix. How can they be identified? Suppose x satisfies 6.1. Then

(λI − M)x = 0

for some x ≠ 0. Therefore, the matrix λI − M cannot have an inverse and so by Theorem 3.3.18

det(λI − M) = 0.   (6.2)
In other words, λ must be a zero of the characteristic polynomial. Since M is an n × n matrix, it follows from the theorem on expanding a determinant by cofactors that this is a polynomial equation of degree n. As such, it has a solution, λ ∈ ℂ. Is it actually an eigenvalue? The answer is yes, and this follows from Theorem 3.3.26 on Page 292. Since det(λI − M) = 0, the matrix λI − M cannot be one to one and so there exists a nonzero vector x such that (λI − M)x = 0. This proves the following corollary.
Corollary 6.1.2 Let M be an n × n matrix and det(M − λI) = 0. Then there exists x ∈ ℂ^{n}, x ≠ 0, such that (M − λI)x = 0.
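The connection between zeros of the characteristic polynomial and eigenvalues can be illustrated numerically; here is a minimal numpy sketch (the 2 × 2 matrix is an illustrative choice, not from the text):

```python
import numpy as np

# An illustrative symmetric matrix; its spectrum sigma(M) is {1, 3}
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Coefficients of the characteristic polynomial det(lambda*I - M)
coeffs = np.poly(M)          # [1, -4, 3], i.e. lambda^2 - 4*lambda + 3
roots = np.roots(coeffs)     # zeros of the characteristic polynomial

# The zeros coincide with the eigenvalues computed directly
eigs = np.linalg.eigvals(M)
print(np.sort(roots.real))   # [1. 3.]
print(np.sort(eigs.real))    # [1. 3.]
```

For larger matrices this route through explicit polynomial coefficients is numerically fragile, which is why the direct eigenvalue routine is preferred in practice.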
As an example, consider the following.
Example 6.1.3 Find the eigenvalues and eigenvectors for the matrix

A = (  5  −10  −5 )
    (  2   14   2 ) .
    ( −4   −8   6 )
You first need to identify the eigenvalues. Recall this requires the solution of the equation det(λI − A) = 0, that is,

| λ − 5     10       5   |
|  −2     λ − 14    −2   |  =  0.
|   4       8     λ − 6  |

When you expand this determinant, you find the equation is

(λ − 5)(λ^{2} − 20λ + 100) = 0

and so the eigenvalues are 5, 10, 10. I have listed 10 twice because it is a zero of multiplicity two due to

λ^{2} − 20λ + 100 = (λ − 10)^{2}.
Having found the eigenvalues, it only remains to find the eigenvectors. First find the eigenvectors for λ = 5. As explained above, this requires you to solve the equation (5I − A)x = 0, whose augmented matrix is

(  0   10    5 | 0 )
( −2   −9   −2 | 0 )   (6.3)
(  4    8   −1 | 0 )

The solutions are of the form

    (  5/4 )
  z ( −1/2 )
    (   1  )

where z ∈ F. You would obtain the same collection of vectors if you replaced z with 4z. Thus a simpler description for the solutions to this system of equations whose augmented matrix is in 6.3 is

    (  5 )
  z ( −2 )   (6.4)
    (  4 )

where z ∈ F. Now you need to remember that you can’t take z = 0 because this would result in the zero vector, which is never an eigenvector.
Other than this value, every other choice of z in 6.4 results in an eigenvector. It is a good idea to check your work! To do so, I will take the original matrix and multiply by this vector and see if I get 5 times this vector.

(  5  −10  −5 ) (  5 )   (  25 )       (  5 )
(  2   14   2 ) ( −2 ) = ( −10 ) = 5 ( −2 )
( −4   −8   6 ) (  4 )   (  20 )       (  4 )

so it appears this is correct. Always check your work on these problems if you care about getting the answer right.
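The same check can be carried out numerically; a small numpy sketch for the matrix of this example:

```python
import numpy as np

# Matrix from Example 6.1.3 and the candidate eigenvector for lambda = 5
A = np.array([[ 5.0, -10.0, -5.0],
              [ 2.0,  14.0,  2.0],
              [-4.0,  -8.0,  6.0]])
v = np.array([5.0, -2.0, 4.0])

print(A @ v)                       # [ 25. -10.  20.], which is 5 * v
print(np.allclose(A @ v, 5 * v))   # True
```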
The variable z is called a free variable or sometimes a parameter. The set of vectors in 6.4 is called the eigenspace and it equals ker(λI − A). You should observe that in this case the eigenspace has dimension 1 because there is one vector which spans the eigenspace. In general, you obtain the solution from the row echelon form and the number of different free variables gives you the dimension of the eigenspace. Just remember that not every vector in the eigenspace is an eigenvector. The vector 0 is not an eigenvector although it is in the eigenspace, because although A0 = λ0 holds for every λ, eigenvectors are required to be nonzero.
Next find the eigenvectors for λ = 10. The equation (10I − A)x = 0 reduces to the single condition x + 2y + z = 0, so the eigenvectors are of the form

    ( −2 )       ( −1 )
  y (  1 )  +  z (  0 )
    (  0 )       (  1 )

where y and z are not both 0, since that choice would produce the zero vector. However, every other choice of z and y does result in an eigenvector for the eigenvalue λ = 10. As in the case for λ = 5 you should check your work if you care about getting it right.

(  5  −10  −5 ) ( −2 )   ( −20 )        ( −2 )
(  2   14   2 ) (  1 ) = (  10 ) = 10 (  1 )
( −4   −8   6 ) (  0 )   (   0 )        (  0 )

so it worked. The other vector will also work. Check it.
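Both vectors for λ = 10, and the dimension of the eigenspace, can be confirmed with a short numpy sketch (the vectors here come from the two free variables y and z in the reduced system):

```python
import numpy as np

A = np.array([[ 5.0, -10.0, -5.0],
              [ 2.0,  14.0,  2.0],
              [-4.0,  -8.0,  6.0]])

# Dimension of the eigenspace ker(10*I - A) = 3 - rank(10*I - A)
dim = 3 - np.linalg.matrix_rank(10 * np.eye(3) - A)
print(dim)                             # 2: two free variables

for w in ([-2.0, 1.0, 0.0], [-1.0, 0.0, 1.0]):
    w = np.array(w)
    print(np.allclose(A @ w, 10 * w))  # True for both vectors
```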
The above example shows how to find eigenvectors and eigenvalues algebraically. You may have
noticed it is a bit long. Sometimes students try to first row reduce the matrix before looking for
eigenvalues. This is a terrible idea because row operations destroy the value of the eigenvalues.
The eigenvalue problem is really not about row operations. A general rule to remember about the
eigenvalue problem is this.
If it is not long and hard it is usually wrong!
The eigenvalue problem is the hardest problem in algebra and people still do research on
ways to find eigenvalues. Now if you are so fortunate as to find the eigenvalues as in
the above example, then finding the eigenvectors does reduce to row operations and
this part of the problem is easy. However, finding the eigenvalues is anything but easy
because for an n × n matrix, it involves solving a polynomial equation of degree n and
none of us are very good at doing this. If you only find a good approximation to the
eigenvalue, it won’t work. It either is or is not an eigenvalue and if it is not, the only
solution to the equation (λI − M)x = 0 will be the zero solution, as explained above.
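In practice, eigenvalues are therefore found numerically rather than by solving the characteristic polynomial exactly; libraries such as numpy delegate to LAPACK's iterative QR-type algorithms. A sketch using the matrix from Example 6.1.3:

```python
import numpy as np

A = np.array([[ 5.0, -10.0, -5.0],
              [ 2.0,  14.0,  2.0],
              [-4.0,  -8.0,  6.0]])

# Computed iteratively (QR algorithm via LAPACK), not by expanding
# the characteristic polynomial; accurate to roundoff for this matrix.
eigs = np.sort(np.linalg.eigvals(A).real)
print(eigs)   # approximately [ 5. 10. 10.]
```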
Example 6.1.4 Find the eigenvalues and eigenvectors for the matrix

A = (  2   2  −2 )
    (  1   3  −1 ) .
    ( −1   1   1 )

The characteristic equation works out to λ(λ − 2)(λ − 4) = 0, so the eigenvalues are 0, 2, and 4. Now find the eigenvectors. For λ = 0 the augmented matrix for finding the solutions is

(  2   2  −2 | 0 )
(  1   3  −1 | 0 )
( −1   1   1 | 0 )
and the row reduced echelon form is

( 1  0  −1 | 0 )
( 0  1   0 | 0 )
( 0  0   0 | 0 )

Therefore, the eigenvectors are of the form

    ( 1 )
  z ( 0 )
    ( 1 )

where z ≠ 0.
Next find the eigenvectors for λ = 2. The augmented matrix for the system of equations needed to find these eigenvectors is

(  0  −2   2 | 0 )
( −1  −1   1 | 0 )
(  1  −1   1 | 0 )

and the row reduced echelon form is

( 1  0   0 | 0 )
( 0  1  −1 | 0 )
( 0  0   0 | 0 )

and so the eigenvectors are of the form

    ( 0 )
  z ( 1 )
    ( 1 )

where z ≠ 0.
Finally find the eigenvectors for λ = 4. The augmented matrix for the system of equations needed to find these eigenvectors is

(  2  −2   2 | 0 )
( −1   1   1 | 0 )
(  1  −1   3 | 0 )

and the row reduced echelon form is

( 1  −1   0 | 0 )
( 0   0   1 | 0 ) .
( 0   0   0 | 0 )

Therefore, the eigenvectors are of the form

    ( 1 )
  y ( 1 )
    ( 0 )

where y ≠ 0.
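All three eigenpairs found above can be verified at once; a numpy sketch, taking A to be the matrix of this example (it is recoverable from the λ = 4 case as 4I minus that coefficient matrix):

```python
import numpy as np

A = np.array([[ 2.0, 2.0, -2.0],
              [ 1.0, 3.0, -1.0],
              [-1.0, 1.0,  1.0]])

pairs = [(0.0, [1.0, 0.0, 1.0]),
         (2.0, [0.0, 1.0, 1.0]),
         (4.0, [1.0, 1.0, 0.0])]

for lam, v in pairs:
    v = np.array(v)
    print(np.allclose(A @ v, lam * v))   # True for each eigenpair
```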
Example 6.1.5 Let

A = (  2  −2  −1 )
    ( −2  −1  −2 ) .
    ( 14  25  14 )

Find the eigenvectors and eigenvalues.
In this case the eigenvalues are 3, 6, 6 where I have listed 6 twice because it is a zero of algebraic multiplicity two, the characteristic equation being

(λ − 3)(λ − 6)^{2} = 0.
It remains to find the eigenvectors for these eigenvalues. First consider the eigenvectors for λ = 3. You must solve (3I − A)x = 0, and the usual procedure yields eigenvectors of the form

    (  1 )
  z ( −1 )
    (  1 )

where z ∈ F. Next, for λ = 6, using the usual procedures yields eigenvectors of the form

    ( −1/8 )
  z ( −1/4 )
    (   1  )

or written more simply,

    ( −1 )
  z ( −2 )
    (  8 )

where z ∈ F.
Note that in this example the eigenspace for the eigenvalue λ = 6 is of dimension 1 because there is only one parameter which can be chosen. However, this eigenvalue has multiplicity two as a root of the characteristic equation.
Definition 6.1.6 If A is an n × n matrix with the property that some eigenvalue has algebraic multiplicity as a root of the characteristic equation which is greater than the dimension of the eigenspace associated with this eigenvalue, then the matrix is called defective.
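The matrix of Example 6.1.5 is defective in this sense; a numpy sketch comparing the algebraic multiplicity of λ = 6 with the dimension of its eigenspace:

```python
import numpy as np

A = np.array([[  2.0,  -2.0,  -1.0],
              [ -2.0,  -1.0,  -2.0],
              [ 14.0,  25.0,  14.0]])

# lambda = 6 is a double root of the characteristic equation, but
# its eigenspace ker(6*I - A) is only one-dimensional.
geo_mult = 3 - np.linalg.matrix_rank(6 * np.eye(3) - A)
print(geo_mult)                    # 1 < 2, so A is defective

v = np.array([-1.0, -2.0, 8.0])    # the eigenvector found above
print(np.allclose(A @ v, 6 * v))   # True
```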
There may be repeated roots to the characteristic equation 6.2, and it is not known in advance whether the dimension of the eigenspace equals the multiplicity of the eigenvalue. However, the following theorem is available.
Theorem 6.1.7 Suppose Mv_{i} = λ_{i}v_{i}, i = 1,⋯,k, where each v_{i} ≠ 0 and λ_{i} ≠ λ_{j} whenever i ≠ j. Then the set of eigenvectors {v_{1},⋯,v_{k}} is linearly independent.
Proof. Suppose the claim of the theorem is not true. Then there exists a subset of this set of vectors

{w_{1},⋯,w_{r}} ⊆ {v_{1},⋯,v_{k}}

such that
∑_{j=1}^{r} c_{j}w_{j} = 0   (6.5)
where each c_{j}≠0. Say Mw_{j} = μ_{j}w_{j} where
{μ1,⋅⋅⋅,μr} ⊆ {λ1,⋅⋅⋅,λk },
the μ_{j} being distinct eigenvalues of M. Out of all such subsets, let this one be such that r is as
small as possible. Then necessarily, r > 1 because otherwise, c_{1}w_{1} = 0 which would imply w_{1} = 0,
which is not allowed for eigenvectors.
Apply M to both sides of 6.5 to obtain

∑_{j=1}^{r} c_{j}μ_{j}w_{j} = 0.   (6.6)

Next pick μ_{k} ≠ 0 and multiply both sides of 6.5 by μ_{k}. Such a μ_{k} exists because r > 1 and the μ_{j} are distinct. Thus

∑_{j=1}^{r} c_{j}μ_{k}w_{j} = 0   (6.7)
Subtract the sum in 6.6 from the sum in 6.7 to obtain

∑_{j=1}^{r} c_{j}(μ_{k} − μ_{j})w_{j} = 0.

The constant c_{j}(μ_{k} − μ_{j}) equals 0 when j = k, and is nonzero for j ≠ k because the μ_{j} are distinct and each c_{j} ≠ 0. This exhibits a linear combination of fewer than r of the w_{j}, with all coefficients nonzero, which equals 0. Therefore, r was not as small as possible after all. ■
In words, this theorem says that eigenvectors associated with distinct eigenvalues are linearly
independent.
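This statement can be illustrated numerically; a numpy sketch using the earlier matrix with distinct eigenvalues 0, 2, 4, whose eigenvectors should therefore form an invertible matrix:

```python
import numpy as np

A = np.array([[ 2.0, 2.0, -2.0],
              [ 1.0, 3.0, -1.0],
              [-1.0, 1.0,  1.0]])   # distinct eigenvalues 0, 2, 4

w, V = np.linalg.eig(A)             # columns of V are eigenvectors
rank = np.linalg.matrix_rank(V)
print(rank)   # 3: eigenvectors for distinct eigenvalues are independent
```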
Sometimes you have to consider eigenvalues which are complex numbers. This occurs in
differential equations for example. You do these problems exactly the same way as you do the ones
in which the eigenvalues are real. Here is an example.
Example 6.1.8Find the eigenvalues and eigenvectors of the matrix