
if and only if
$$\frac{1}{\lambda_k-\alpha}\,x_k=(A-\alpha I)^{-1}x_k. \;\blacksquare$$
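For reference, the computation behind this equivalence is short (a sketch, assuming $\alpha$ is not an eigenvalue of $A$, so that $(A-\alpha I)^{-1}$ exists and $\lambda_k\neq\alpha$):
$$Ax_k=\lambda_k x_k \;\Longleftrightarrow\; (A-\alpha I)x_k=(\lambda_k-\alpha)x_k \;\Longleftrightarrow\; x_k=(\lambda_k-\alpha)(A-\alpha I)^{-1}x_k,$$
and dividing the last equation by $\lambda_k-\alpha$ gives the displayed identity.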

In explaining why the method works, we will assume $A$ is nondefective. This is not necessary! One can use Gelfand's theorem on the spectral radius, which is presented in [13], and invariance of $(A-\alpha I)^{-1}$ on generalized eigenspaces to prove more general results. It suffices to assume that the eigenspace for $\lambda_k$ has dimension equal to the multiplicity of the eigenvalue $\lambda_k$, but even this is not necessary to obtain convergence of the method. This method is better than might be supposed from the following explanation.

Pick $u_1$, an initial vector, and let $Ax_k=\lambda_k x_k$, where $\{x_1,\cdots,x_n\}$ is a basis of eigenvectors, which exists from the assumption that $A$ is nondefective. Assume $\alpha$ is closer to $\lambda_n$ than to any other eigenvalue. Since $A$ is nondefective, there exist constants $a_k$ such that

$$u_1=\sum_{k=1}^{n}a_k x_k.$$

Possibly $\lambda_n$ is a repeated eigenvalue. Then, combining the terms in the sum which involve eigenvectors for $\lambda_n$, a simpler description of $u_1$ is

$$u_1=\sum_{j=1}^{m}a_j x_j+y$$

where $y$ is an eigenvector for $\lambda_n$ which is assumed not equal to $0$. (If you are unlucky in your choice for $u_1$, this might not happen and things won't work.) Now the iteration procedure is defined as

$$u_{k+1}\equiv\frac{(A-\alpha I)^{-1}u_k}{S_{k+1}}$$

where $S_{k+1}$ is the element of $(A-\alpha I)^{-1}u_k$ which has largest absolute value.
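A minimal sketch of this iteration in Python may help make the procedure concrete. The function name, the fixed iteration count, and the use of the Rayleigh quotient to report an eigenvalue estimate at the end are illustrative choices, not prescribed by the text:

```python
import numpy as np

def shifted_inverse_power(A, alpha, u1, iters=50):
    """Shifted inverse power method: repeatedly apply (A - alpha*I)^{-1}
    to the current vector and rescale by its entry of largest absolute
    value (the S_{k+1} above), so every entry of u_k has absolute value <= 1."""
    n = A.shape[0]
    B = np.linalg.inv(A - alpha * np.eye(n))   # (A - alpha*I)^{-1}, as in the text
    u = np.asarray(u1, dtype=float)
    for _ in range(iters):
        z = B @ u                              # (A - alpha*I)^{-1} u_k
        S = z[np.argmax(np.abs(z))]            # signed entry of largest absolute value
        u = z / S                              # u_{k+1}
    # Rayleigh quotient: an eigenvalue estimate from the eigenvector estimate.
    lam = (u @ (A @ u)) / (u @ u)
    return lam, u
```

In practice one would solve the linear system $(A-\alpha I)z=u_k$ at each step (for example with an LU factorization computed once) rather than forming the inverse explicitly; the explicit inverse is kept here only to mirror the formula above.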

From Lemma 15.2.1,
$$u_{k+1}=\frac{\sum_{j=1}^{m}a_j\left(\frac{1}{\lambda_j-\alpha}\right)^{k}x_j+\left(\frac{1}{\lambda_n-\alpha}\right)^{k}y}{S_2\cdots S_{k+1}}
=\frac{\left(\frac{1}{\lambda_n-\alpha}\right)^{k}}{S_2\cdots S_{k+1}}\left(\sum_{j=1}^{m}a_j\left(\frac{\lambda_n-\alpha}{\lambda_j-\alpha}\right)^{k}x_j+y\right).$$

Now it is being assumed that $\lambda_n$ is the eigenvalue which is closest to $\alpha$, and so for large $k$ the term
$$\sum_{j=1}^{m}a_j\left(\frac{\lambda_n-\alpha}{\lambda_j-\alpha}\right)^{k}x_j\equiv E_k$$
is very small, since $\left|\frac{\lambda_n-\alpha}{\lambda_j-\alpha}\right|<1$ for each $j\leq m$, while for every $k\geq 1$, $u_k$ is a moderate sized vector because every entry has absolute value less than or equal to $1$. Thus

$$u_{k+1}=\frac{\left(\frac{1}{\lambda_n-\alpha}\right)^{k}}{S_2\cdots S_{k+1}}\left(E_k+y\right)\equiv C_k\left(E_k+y\right)$$
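As a quick numerical illustration of this convergence, the sketch given earlier can be run on a small symmetric matrix; the matrix, the shift, and the starting vector below are made up for illustration:

```python
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])   # eigenvalues are 3 and 3 +/- sqrt(3)
u1 = np.ones(3)                   # an unlucky u1 (no component along y) would fail
lam, u = shifted_inverse_power(A, alpha=4.5, u1=u1)
print(lam)                        # approximately 3 + sqrt(3) = 4.732...,
                                  # the eigenvalue of A closest to the shift 4.5
```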
