This chapter discusses numerical methods for finding eigenvalues. However, to do this correctly, you must take into account considerations from numerical analysis which are distinct from linear algebra. The purpose of this chapter is to give an introduction to some numerical methods without leaving the context of linear algebra. In addition, some examples are given which make use of computer algebra systems. For a more thorough discussion, you should see books on numerical methods in linear algebra like some listed in the references.
Let $A$ be a complex $p \times p$ matrix and suppose that it has distinct eigenvalues $\{\lambda_1,\cdots,\lambda_m\}$ and that $|\lambda_1| > |\lambda_k|$ for all $k \neq 1$. Also let the Jordan form of $A$ be
$$J = \begin{pmatrix} J_1 & & \\ & \ddots & \\ & & J_m \end{pmatrix}$$
with $J_1$ an $m_1 \times m_1$ matrix,
$$J_k = \lambda_k I_k + N_k$$
where $N_k^{r_k} \neq 0$ but $N_k^{r_k+1} = 0$. Also let
$$P^{-1}AP = J, \quad A = PJP^{-1}.$$
Now fix $x \in \mathbb{F}^p$. Take $Ax$ and let $s_1$ be the entry of the vector $Ax$ which has largest absolute value. Thus $Ax/s_1$ is a vector $y_1$ which has a component of 1 and every other entry of this vector has magnitude no larger than 1. If the scalars $\{s_1,\cdots,s_{n-1}\}$ and vectors $\{y_1,\cdots,y_{n-1}\}$ have been obtained, let $y_n \equiv Ay_{n-1}/s_n$ where $s_n$ is the entry of $Ay_{n-1}$ which has largest absolute value.
Thus
$$y_n = \frac{A^n x}{s_n s_{n-1}\cdots s_1} = \frac{1}{s_n s_{n-1}\cdots s_1}\, P\begin{pmatrix} J_1^n & & \\ & \ddots & \\ & & J_m^n \end{pmatrix} P^{-1}x = \frac{\lambda_1^n}{s_n s_{n-1}\cdots s_1}\, P\begin{pmatrix} \lambda_1^{-n}J_1^n & & \\ & \ddots & \\ & & \lambda_1^{-n}J_m^n \end{pmatrix} P^{-1}x \tag{16.2}$$
Consider one of the blocks in the Jordan form. First consider the $k$th of these blocks, $k > 1$. It equals
$$\lambda_1^{-n}J_k^n = \sum_{i=0}^{r_k}\binom{n}{i}\lambda_1^{-n}\lambda_k^{n-i}N_k^i$$
which clearly converges to $0$ as $n \to \infty$ since $|\lambda_1| > |\lambda_k|$. An application of the ratio test or root test for each term in the sum will show this. When $k = 1$, this block is
$$\lambda_1^{-n}J_1^n = \sum_{i=0}^{r_1}\binom{n}{i}\lambda_1^{-n}\lambda_1^{n-i}N_1^i = \binom{n}{r_1}\left[\lambda_1^{-r_1}N_1^{r_1} + e_n\right]$$
where $\lim_{n\to\infty}e_n = 0$ because $e_n$ is a sum of bounded matrices which are multiplied by $\binom{n}{i}\big/\binom{n}{r_1}$. This quotient converges to $0$ as $n \to \infty$ because $i < r_1$. It follows that 16.2 is of the form
$$y_n = \frac{\lambda_1^n}{s_n s_{n-1}\cdots s_1}\binom{n}{r_1}\, P\begin{pmatrix} \lambda_1^{-r_1}N_1^{r_1} + e_n & 0 \\ 0 & E_n \end{pmatrix} P^{-1}x \equiv \frac{\lambda_1^n}{s_n s_{n-1}\cdots s_1}\binom{n}{r_1} w_n$$
where $E_n \to 0$ and $e_n \to 0$. Let $\left(P^{-1}x\right)_{m_1}$ denote the vector consisting of the first $m_1$ entries of $P^{-1}x$. Unless a very unlucky choice for $x$ was picked, it will follow that $\left(P^{-1}x\right)_{m_1} \notin \ker\left(N_1^{r_1}\right)$. Then for large $n$, $y_n$ is close to the vector
$$\frac{\lambda_1^n}{s_n s_{n-1}\cdots s_1}\binom{n}{r_1}\, P\begin{pmatrix} \lambda_1^{-r_1}N_1^{r_1} & 0 \\ 0 & 0 \end{pmatrix} P^{-1}x \equiv \frac{\lambda_1^n}{s_n s_{n-1}\cdots s_1}\binom{n}{r_1} w \equiv z \neq 0$$
However, $w$ is an eigenvector because
$$(A - \lambda_1 I)w = \overbrace{P(J - \lambda_1 I)P^{-1}}^{A - \lambda_1 I}\, P\begin{pmatrix} \lambda_1^{-r_1}N_1^{r_1} & 0 \\ 0 & 0 \end{pmatrix} P^{-1}x$$
$$= P\begin{pmatrix} N_1 & & \\ & \ddots & \\ & & J_m - \lambda_1 I \end{pmatrix} P^{-1}\, P\begin{pmatrix} \lambda_1^{-r_1}N_1^{r_1} & & \\ & \ddots & \\ & & 0 \end{pmatrix} P^{-1}x = P\begin{pmatrix} N_1\lambda_1^{-r_1}N_1^{r_1} & 0 \\ 0 & 0 \end{pmatrix} P^{-1}x = 0$$
Recall $N_1^{r_1+1} = 0$. Now you could recover an approximation to the eigenvalue as follows.
$$\frac{(Ay_n, y_n)}{(y_n, y_n)} \approx \frac{(Az, z)}{(z, z)} = \lambda_1$$
Here $\approx$ means “approximately equal”. However, there is a more convenient way to identify the eigenvalue in terms of the scaling factors $s_k$.
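Before pursuing that, note that the Rayleigh-quotient estimate above is easy to check numerically. The following is a minimal sketch in plain Python; the $2\times 2$ matrix, starting vector, and iteration count are made-up choices for illustration, not from the text:

```python
# Run a few scaled power steps, then form the Rayleigh quotient
# (Ay, y) / (y, y) as an estimate of the dominant eigenvalue.
A = [[2.0, 1.0], [1.0, 2.0]]   # made-up example; eigenvalues are 3 and 1
y = [1.0, 0.0]
for _ in range(20):
    Ay = [sum(a * v for a, v in zip(row, y)) for row in A]
    s = max(Ay, key=abs)        # entry of largest absolute value
    y = [v / s for v in Ay]
Ay = [sum(a * v for a, v in zip(row, y)) for row in A]
rayleigh = sum(p * q for p, q in zip(Ay, y)) / sum(q * q for q in y)
print(rayleigh)                 # close to the dominant eigenvalue 3
```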
$$\left\|\frac{\lambda_1^n}{s_n\cdots s_1}\binom{n}{r_1}\left(w_n - w\right)\right\|_\infty \approx 0$$
Pick the largest nonzero entry of $w$, called $w_l$. Then for large $n$, it is also likely the case that the largest entry of $w_n$ will be in the $l$th position, because $w_n$ is close to $w$. From the construction, the $l$th entry of $y_n$ equals $1$, and so
$$\frac{\lambda_1^n}{s_n\cdots s_1}\binom{n}{r_1} w_{nl} = 1 \approx \frac{\lambda_1^n}{s_n\cdots s_1}\binom{n}{r_1} w_l$$
In other words, for large $n$,
$$\frac{\lambda_1^n}{s_n\cdots s_1}\binom{n}{r_1} \approx 1/w_l$$
Therefore, for large $n$,
$$\frac{\lambda_1^n}{s_n\cdots s_1}\binom{n}{r_1} \approx \frac{\lambda_1^{n+1}}{s_{n+1}s_n\cdots s_1}\binom{n+1}{r_1}$$
and so
$$\binom{n}{r_1}\bigg/\binom{n+1}{r_1} \approx \frac{\lambda_1}{s_{n+1}}$$
But $\lim_{n\to\infty}\binom{n}{r_1}\big/\binom{n+1}{r_1} = 1$ and so, for large $n$, it must be the case that $\lambda_1 \approx s_{n+1}$.
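The claim about the binomial ratio can also be seen directly: $\binom{n}{r_1}\big/\binom{n+1}{r_1} = (n+1-r_1)/(n+1)$, which tends to $1$. A quick numerical check, using $r_1 = 3$ as a made-up value:

```python
from math import comb

r1 = 3  # made-up nilpotency index, for illustration only
for n in (10, 100, 1000, 10000):
    ratio = comb(n, r1) / comb(n + 1, r1)
    # the ratio equals (n + 1 - r1) / (n + 1) and approaches 1 as n grows
    assert abs(ratio - (n + 1 - r1) / (n + 1)) < 1e-12
    print(n, ratio)
```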
This has proved the following theorem which justifies the power method.
Theorem 16.1.1 Let $A$ be a complex $p \times p$ matrix such that the eigenvalues are $\{\lambda_1,\lambda_2,\cdots,\lambda_r\}$ with $|\lambda_1| > |\lambda_j|$ for all $j \neq 1$. Then for $x$ a given vector, let
$$y_1 = \frac{Ax}{s_1}$$
where $s_1$ is an entry of $Ax$ which has the largest absolute value. If the scalars $\{s_1,\cdots,s_{n-1}\}$ and vectors $\{y_1,\cdots,y_{n-1}\}$ have been obtained, let
$$y_n \equiv \frac{Ay_{n-1}}{s_n}$$
where $s_n$ is the entry of $Ay_{n-1}$ which has largest absolute value. Then it is probably the case that $\{s_n\}$ will converge to $\lambda_1$ and $\{y_n\}$ will converge to an eigenvector associated with $\lambda_1$. If it doesn't, you picked an incredibly inauspicious initial vector $x$.
In summary, here is the procedure.
Finding the largest eigenvalue with its eigenvector.
Start with a vector $u_1$ which you hope is not unlucky.
If $u_k$ is known,
$$u_{k+1} = \frac{Au_k}{s_{k+1}}$$
where $s_{k+1}$ is the entry of $Au_k$ which has largest absolute value.
When the scaling factors sk are not changing much, sk+1 will be close to the eigenvalue and uk+1 will
be close to an eigenvector.
Check your answer to see if it worked well. If things don’t work well, try another u1. You were
miraculously unlucky in your choice.
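The procedure above can be sketched in a few lines of Python. The function name and the fixed iteration count are my own choices; in practice one iterates until the scaling factors stop changing:

```python
def power_method(A, u1, iters=100):
    """Scaled power iteration: u_{k+1} = A u_k / s_{k+1}, where s_{k+1}
    is the entry of A u_k having largest absolute value.  Returns the
    last scaling factor (approximate dominant eigenvalue) and the last
    vector (approximate eigenvector, largest entry equal to 1)."""
    u = list(u1)
    s = None
    for _ in range(iters):
        Au = [sum(a * v for a, v in zip(row, u)) for row in A]
        s = max(Au, key=abs)          # s_{k+1}
        u = [v / s for v in Au]       # u_{k+1}
    return s, u

# Made-up 2x2 check: eigenvalues 3 and 1, dominant eigenvector (1, 1).
s, u = power_method([[2.0, 1.0], [1.0, 2.0]], [1.0, 0.0])
```

When the scaling factors agree to the desired tolerance, one can check the answer by comparing $Au$ against $su$ entrywise.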
Example 16.1.2 Find the largest eigenvalue of
$$A = \begin{pmatrix} 5 & -14 & 11 \\ -4 & 4 & -4 \\ 3 & 6 & -3 \end{pmatrix}.$$
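As a numerical cross-check of this example, here is a sketch in plain Python (the iteration count of 60 is an arbitrary choice); the run agrees with the hand computation that follows:

```python
A = [[5.0, -14.0, 11.0],
     [-4.0,   4.0, -4.0],
     [ 3.0,   6.0, -3.0]]
u = [1.0, 1.0, 1.0]
s = None
for _ in range(60):
    Au = [sum(a * v for a, v in zip(row, u)) for row in A]
    s = max(Au, key=abs)    # scaling factor s_{k+1}
    u = [v / s for v in Au]
print(s, u)   # s is close to 12 and u is close to (1, -1/2, 0)
```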
You can begin with $u_1 = (1,\cdots,1)^T$ and apply the above procedure. However, you can accelerate the process if you begin with $A^n u_1$ and then divide by the largest entry to get the first approximate eigenvector. Doing this, the scaling factor is found to be approximately 12. Another iteration will reveal that the scaling factor is still 12. Thus this is an approximate eigenvalue. In fact, it is the largest eigenvalue and the corresponding eigenvector is