
and $RR^* = R^*R = I$. It follows easily that $R$ must equal $I$ and so $R_1 = R_2$, which requires $Q_1 = Q_2$.

Now in the above corollary, you know that

$$A = Q_1 \cdots Q_k A_k Q_k^* \cdots Q_1^* = Q^{(k)} A_k \left( Q^{(k)} \right)^*$$

Also, from this corollary, you know that

$$A^k = Q^{(k)} R^{(k)}$$

You could also simply take the QR factorization of $A^k$ to obtain $A^k = QR$. Then from what was just pointed out, if $A^{-1}$ exists,

$$Q^{(k)} = Q$$

Thus from the above corollary,

$$A_k = \left( Q^{(k)} \right)^* A\, Q^{(k)} = Q^* A Q$$

Therefore, in using the QR algorithm in the case where $A$ has an inverse, it suffices to take

$$A^k = QR$$

and then consider the matrix $Q^* A Q = A_k$.

This is so theoretically. In practice, it might not work out all that well because of round off errors.
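To see this relation in a computation, here is a minimal sketch, assuming NumPy is available; the function names and the test matrix are illustrative only. It compares $k$ steps of the QR algorithm with the matrix $Q^* A Q$ obtained from the QR factorization $A^k = QR$.

```python
import numpy as np

def qr_algorithm_iterate(A, k):
    """Run k steps of the basic QR algorithm: factor A_j = Q_j R_j, set A_{j+1} = R_j Q_j."""
    Ak = A.copy()
    for _ in range(k):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q
    return Ak

def iterate_from_power(A, k):
    """Factor A^k = QR and return Q^T A Q, which the text identifies with A_k (real A)."""
    Q, _ = np.linalg.qr(np.linalg.matrix_power(A, k))
    return Q.T @ A @ Q

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])               # an illustrative invertible matrix
k = 8
print(qr_algorithm_iterate(A, k))
# numpy's QR does not force positive diagonal entries in R, so the result below may
# differ from the one above in the signs of the off diagonal entries; the diagonal
# entries, which carry the eigenvalue estimates, agree.
print(iterate_from_power(A, k))
```

Forming $A^k$ explicitly also magnifies the round off difficulties mentioned above, since its entries grow rapidly with $k$.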

There is also an interesting relation to the power method. Let

$$A = \left( \begin{array}{ccc} \mathbf{a}_1 & \cdots & \mathbf{a}_n \end{array} \right)$$

Then from the way we multiply matrices,

$$A^{k+1} = \left( \begin{array}{ccc} A^k \mathbf{a}_1 & \cdots & A^k \mathbf{a}_n \end{array} \right)$$

and for large $k$, $A^k \mathbf{a}_i$ would be expected to point roughly in the direction of the eigenvector corresponding to the largest eigenvalue. Then if you form the QR factorization,

$$A^{k+1} = QR$$

the columns of $Q$ form an orthonormal basis obtained essentially from the Gram–Schmidt procedure. Thus the first column $\mathbf{q}_1$ of $Q$ points roughly in the direction of an eigenvector associated with the largest eigenvalue of $A$. It follows that the first column of $Q^* A Q$, namely $Q^* A \mathbf{q}_1 \approx \lambda_1 Q^* \mathbf{q}_1$, has top entry close to $\lambda_1 \mathbf{q}_1^* \mathbf{q}_1 = \lambda_1$, while the entries below it are close to $0$. Thus the eigenvalues of $Q^* A Q$, which are the same as the eigenvalues of $A$, should be close to this top entry of the first column along with the eigenvalues of the $(n-1) \times (n-1)$ matrix in the lower right corner. If this is a $2 \times 2$ matrix, you can find its eigenvalues using the quadratic formula. If it is larger, you can apply the same procedure to find its eigenvalues, but now you are dealing with a smaller matrix.
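The deflation idea just described can also be sketched in code, again assuming NumPy; the matrix, the exponent $k$, and the function name are only examples. The top left entry of $Q^* A Q$ is taken as an estimate of the largest eigenvalue, and the procedure is repeated on the lower right block.

```python
import numpy as np

def estimate_eigenvalues(A, k=20):
    """Sketch of the deflation idea in the text: factor A^k = QR, look at Q^T A Q,
    take its top left entry as the dominant eigenvalue estimate, then repeat on
    the lower right block.  Real matrices only; not a robust algorithm."""
    A = np.array(A, dtype=float)
    estimates = []
    while A.shape[0] > 1:
        Q, _ = np.linalg.qr(np.linalg.matrix_power(A, k))
        B = Q.T @ A @ Q                  # first column is approximately lambda_1 * e_1
        estimates.append(B[0, 0])        # estimate of the largest eigenvalue
        A = B[1:, 1:]                    # deflate to the (n-1) x (n-1) lower right block
    estimates.append(A[0, 0])
    return estimates

A = [[5.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 3.0]]                    # eigenvalues 4 + sqrt(3), 4, 4 - sqrt(3)
print(estimate_eigenvalues(A))
print(np.linalg.eigvals(np.array(A)))    # for comparison
```

For a matrix with well separated eigenvalues, the printed estimates should be close to the output of np.linalg.eigvals; the point is only to illustrate the relation to the power method, not to give a practical algorithm.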