Here is how you use this method to find the eigenvalue closest to α and the corresponding eigenvector.
Find (A − αI)^{−1}.
Pick u_{1}. If you are not phenomenally unlucky, the iterations will converge.
If u_{k} has been obtained,

    u_{k+1} = (A − αI)^{−1}u_{k} / s_{k+1}

where s_{k+1} is the entry of (A − αI)^{−1}u_{k} which has largest absolute value.
When the scaling factors s_{k} are not changing much and the u_{k} are not changing much, find the approximation to the eigenvalue by solving

    s_{k+1} = 1/(λ − α)

for λ. The eigenvector is approximated by u_{k+1}.
Check your work by multiplying by the original matrix to see how well what you have found
works.
Thus this amounts to the power method for the matrix (A − αI)^{−1}, but you are free to pick α.
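The procedure just described can be sketched in Python with NumPy. The function name and tolerances below are illustrative choices, not from the text, and a linear solve is used in place of forming (A − αI)^{−1} explicitly, which computes the same iterate.

```python
import numpy as np

def shifted_inverse_power(A, alpha, tol=1e-10, max_iter=200):
    """Shifted inverse power method: the power method applied to (A - alpha*I)^{-1}.

    Returns an approximation to the eigenvalue of A closest to alpha
    and a corresponding eigenvector (scaled so its largest entry is 1).
    """
    n = A.shape[0]
    B = A - alpha * np.eye(n)            # shifted matrix; solve with B rather than invert it
    u = np.ones(n)                       # initial vector u_1 (an assumption; almost any choice works)
    s = 0.0
    for _ in range(max_iter):
        w = np.linalg.solve(B, u)        # w = (A - alpha*I)^{-1} u_k
        s_new = w[np.argmax(np.abs(w))]  # scaling factor: entry of largest absolute value
        u_new = w / s_new                # u_{k+1}
        if abs(s_new - s) < tol and np.allclose(u_new, u, atol=tol):
            u, s = u_new, s_new
            break
        u, s = u_new, s_new
    lam = alpha + 1.0 / s                # solve s = 1/(lambda - alpha) for lambda
    return lam, u
```

As the text suggests, you can check the result by multiplying by the original matrix and comparing Au with λu.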
Example 14.1.4 Find the eigenvalue of

    A = (  5  −14   11 )
        ( −4    4   −4 )
        (  3    6   −3 )

which is closest to −7. Also find an eigenvector which goes with this eigenvalue.
In this case the eigenvalues are −6, 0, and 12, so the correct answer is −6 for the eigenvalue. Then, following the above procedure, pick an initial vector u_{1} and iterate.
After these few iterations the scaling factors are pretty close to each other. Therefore, the predicted eigenvalue is obtained by solving the following for λ.
    1/(λ + 7) = 1.01
which gives λ = −6.01. You see this is pretty close. In this case the eigenvalue closest to −7 was
−6.
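This example can be reproduced numerically. The initial vector below is an assumption (the text's choice of u_{1} was not preserved), and the shift α = −7 and recovery formula λ = α + 1/s come from the procedure above.

```python
import numpy as np

A = np.array([[ 5., -14., 11.],
              [-4.,   4., -4.],
              [ 3.,   6., -3.]])
alpha = -7.0
B = A - alpha * np.eye(3)          # A - alpha*I = A + 7I

u = np.ones(3)                     # initial vector (an assumption)
for _ in range(6):                 # a handful of iterations, as in the example
    w = np.linalg.solve(B, u)      # (A + 7I)^{-1} u_k via a linear solve
    s = w[np.argmax(np.abs(w))]    # scaling factor s_{k+1}
    u = w / s

lam = alpha + 1.0 / s              # solve s = 1/(lambda + 7) for lambda
print(round(lam, 4))               # close to the exact eigenvalue -6
print(np.round(u, 4))              # approximate eigenvector
```

The exact answer is λ = −6 with eigenvector proportional to (1, 0, −1)^{T}, so the residual Au − λu should be tiny after only a few iterations.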
How would you know what to start with for an initial guess? You might apply Gerschgorin’s
theorem. However, sometimes you can begin with a better estimate.
Example 14.1.5 Consider the symmetric matrix

    A = ( 1  2  3 )
        ( 2  1  4 )
        ( 3  4  2 )

Find the middle eigenvalue and an eigenvector which goes with it.
Since A is symmetric, it follows it has three real eigenvalues which are solutions to the characteristic equation λ^{3} − 4λ^{2} − 24λ − 17 = 0.
If you use your graphing calculator to graph this polynomial, you find there is an eigenvalue
somewhere between −.9 and −.8 and that this is the middle eigenvalue. Of course you could zoom
in and find it very accurately without much trouble but what about the eigenvector which goes
with it? If you try to solve (A + .8I)x = 0,
there will be only the zero solution because the matrix on the left will be invertible and the same
will be true if you replace −.8 with a better approximation like −.86 or −.855. This
is because all these are only approximations to the eigenvalue and so the matrix in
the above is nonsingular for all of these. Therefore, you will only get the zero solution. Instead, use the above procedure: let α = −.855, pick an initial vector u_{1}, and solve (A + .855I)u = u_{1}. After finding the solution, divide by the largest entry, −67.944, to obtain
    u_{2} = ( 1.0, −.58921, −.23044 )^{T}
After a couple more iterations, you obtain

    u_{3} = ( 1.0, −.58777, −.22714 )^{T}    (14.4)
Then doing it again, the scaling factor is −513.42 and the next iterate is

    u_{4} = ( 1.0, −.58778, −.22714 )^{T}
Clearly the u_{k} are not changing much. This suggests that the above u_{3} is an approximate eigenvector for the eigenvalue close to −.855, and the eigenvalue itself is obtained by solving

    1/(λ + .855) = −513.42

for λ, which gives λ = −.8569.
Thus the vector of 14.4 is very close to the desired eigenvector, just as −.8569 is very close to the
desired eigenvalue. For practical purposes, I have found both the eigenvector and the
eigenvalue.
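A quick numerical check of this example, under the same assumptions as before (the initial vector is a guess, since the text's u_{1} was not preserved; the shift α = −.855 is the one used above):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 1., 4.],
              [3., 4., 2.]])
alpha = -0.855
B = A - alpha * np.eye(3)          # A + .855 I

u = np.ones(3)                     # initial vector (an assumption)
for _ in range(10):
    w = np.linalg.solve(B, u)      # (A + .855 I)^{-1} u_k
    s = w[np.argmax(np.abs(w))]    # scaling factor
    u = w / s

lam = alpha + 1.0 / s              # from s = 1/(lambda + .855)
print(np.round(u, 5))              # should be close to (1, -0.58778, -0.22714)
print(round(lam, 4))               # should be close to -0.8569
```

Because α is so close to the true eigenvalue, the dominant eigenvalue of (A − αI)^{−1} is huge relative to the others, which is why the iteration settles down after only a few steps.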
Example 14.1.6 Find the eigenvalues and eigenvectors of the matrix

    A = ( 2  1  3 )
        ( 2  1  1 )
        ( 3  2  1 )
This is only a 3×3 matrix and so it is not hard to estimate the eigenvalues. Just get the
characteristic equation, graph it using a calculator and zoom in to find the eigenvalues. If you do
this, you find there is an eigenvalue near −1.2, one near −.4, and one near 5.5. (The characteristic
equation is 2 + 8λ + 4λ^{2}− λ^{3} = 0.) Of course I have no idea what the eigenvectors
are.
Let's first try to find the eigenvector and a better approximation for the eigenvalue near −1.2.
In this case, let α = −1.2 and carry out the iteration exactly as before. It works pretty well; for practical purposes, the eigenvalue and eigenvector have now been found.
If you want better accuracy, you could just continue iterating. One can find the eigenvector
corresponding to the eigenvalue nearest 5.5 the same way.
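All three eigenpairs can be found by running the same iteration once per rough estimate read off the graph of the characteristic polynomial. As before, the initial vector and iteration count are illustrative assumptions.

```python
import numpy as np

A = np.array([[2., 1., 3.],
              [2., 1., 1.],
              [3., 2., 1.]])

lams = []
# One shift per rough estimate from the graph: near -1.2, near -.4, near 5.5.
for alpha in (-1.2, -0.4, 5.5):
    B = A - alpha * np.eye(3)
    u = np.ones(3)                 # initial vector (an assumption)
    for _ in range(25):
        w = np.linalg.solve(B, u)  # (A - alpha*I)^{-1} u_k
        s = w[np.argmax(np.abs(w))]
        u = w / s
    lam = alpha + 1.0 / s          # s = 1/(lambda - alpha)
    lams.append(lam)
    print(round(lam, 4), np.round(u, 4))
```

Each run converges to the eigenvalue closest to its shift, so the three shifts together recover all three eigenvalues and their eigenvectors.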