- Explain why it is typically impossible to compute the upper triangular matrix whose existence is guaranteed by Schur’s theorem.
- Now recall the QR factorization of Theorem 5.7.5 on Page 395. The QR algorithm is a technique which does compute the upper triangular matrix in Schur's theorem. There is much more to the QR algorithm than will be presented here. In fact, what I am about to show you is not the way it is done in practice. One first obtains what is called a Hessenberg matrix, for which the algorithm works better. However, the idea is as follows. Start with A an n × n matrix having real eigenvalues. Form A = QR where Q is orthogonal and R is upper triangular (right triangular). This can be done using the technique of Theorem 5.7.5 using Householder matrices. Next take A_{1} ≡ RQ. Show that A = QA_{1}Q^{T}. In other words, these two matrices A, A_{1} are similar. Explain why they have the same eigenvalues. Continue by letting A_{1} play the role of A. Thus the algorithm is of the form A_{n} = Q_{n}R_{n} and A_{n+1} ≡ R_{n}Q_{n}. Explain why A = Q_{n}A_{n}Q_{n}^{T} for some Q_{n} orthogonal. Thus {A_{n}} is a sequence of matrices, each similar to A. The remarkable thing is that often these matrices converge to an upper triangular matrix T and A = QTQ^{T} for some orthogonal matrix Q, the limit of the Q_{n}, where the limit means the entries converge. Then the process computes the upper triangular Schur form of the matrix A. Thus the eigenvalues of A appear on the diagonal of T. You will see approximately what these are as the process continues.
- ↑Try the QR algorithm on
which has eigenvalues 3 and 2. I suggest you use a computer algebra system to do the computations.
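The matrix for this exercise is not reproduced in this copy, so the sketch below runs the iteration on a hypothetical stand-in with the same spectrum: any 2 × 2 matrix with trace 5 and determinant 6 has eigenvalues 3 and 2.

```python
import numpy as np

# Hypothetical stand-in: trace 5, determinant 6, so eigenvalues 3 and 2.
# It need not match the matrix intended by the exercise.
A = np.array([[4.0, -2.0],
              [1.0,  1.0]])

for _ in range(50):
    Q, R = np.linalg.qr(A)   # A_n = Q_n R_n
    A = R @ Q                # A_{n+1} = R_n Q_n, similar to A_n

# The iterates approach an upper triangular matrix whose diagonal
# holds the eigenvalues; the subdiagonal entry decays like (2/3)^n.
eigs = sorted(np.diag(A))
subdiag = A[1, 0]
print(eigs, subdiag)
```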

- ↑Now try the QR algorithm on
Show that the algorithm cannot converge for this example. Hint: Try a few iterations of the algorithm. Use a computer algebra system if you like.
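The matrix for this exercise is also not shown. A rotation matrix with eigenvalues ±i exhibits the failure the exercise is after, assuming that is the intended sort of example: real QR iterates stay real, so they can never approach a real upper triangular matrix whose diagonal would have to carry the nonreal eigenvalues.

```python
import numpy as np

# Hypothetical stand-in: a rotation matrix with complex eigenvalues +-i.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

for _ in range(25):
    Q, R = np.linalg.qr(A)
    A = R @ Q

# Each iterate is again orthogonal with the same entry magnitudes,
# so the subdiagonal entry never decays: the iteration cycles.
stuck = abs(A[1, 0])
print(A, stuck)
```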

- ↑Show the two matrices A ≡ and B ≡ are similar; that is, there exists a matrix S such that A = S^{−1}BS, but there is no orthogonal matrix Q such that Q^{T}BQ = A. Show the QR algorithm does converge for the matrix B although it fails to do so for A.
- Let F be an m × n matrix. Show that F^{∗}F has all real eigenvalues and furthermore, they are all nonnegative.
- If A is a real n × n matrix and λ is a complex eigenvalue λ = a + ib, b ≠ 0, of A having eigenvector z + iw, show that w ≠ 0.
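As a numerical illustration of the F^{∗}F exercise above (random sample data, not from the text): F^{∗}F is Hermitian, so its eigenvalues are real, and ⟨F^{∗}Fx, x⟩ = |Fx|^{2} ≥ 0 forces them to be nonnegative.

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))

G = F.conj().T @ F          # F*F, a 3x3 Hermitian matrix
eigs = np.linalg.eigvals(G)

# Real eigenvalues (up to roundoff), all nonnegative.
max_imag = np.max(np.abs(eigs.imag))
min_real = np.min(eigs.real)
print(max_imag, min_real)
```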
- Suppose A = Q^{T}DQ where Q is an orthogonal matrix and all the matrices are real. Also D is a diagonal matrix. Show that A must be symmetric.
- Suppose A is an n × n matrix and there exists a unitary matrix U such that A = U^{∗}DU where D is a diagonal matrix. Explain why A must be normal.

- If A is Hermitian, show that det(A) must be real.
- Show that every unitary matrix preserves distance. That is, if U is unitary, then |Ux| = |x| for all x.
- Show that if a matrix does preserve distances, then it must be unitary.
- ↑Show that a complex normal matrix A is unitary if and only if its eigenvalues have magnitude equal to 1.
- Suppose A is an n × n matrix which is diagonally dominant. Recall this means
|a_{ii}| > ∑_{j≠i}|a_{ij}| for each i.
Show A^{−1} must exist.
- Give some disks in the complex plane whose union contains all the eigenvalues of the matrix
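The matrix for this exercise is not shown, but the disks in question are presumably the Gershgorin disks: each centered at a diagonal entry a_{ii}, with radius the sum of the absolute values of the off-diagonal entries in row i. The sketch below checks the idea on a hypothetical sample matrix.

```python
import numpy as np

# Sample matrix (hypothetical; the exercise's matrix is not shown here).
A = np.array([[10.0, 1.0,  2.0],
              [0.5,  3.0,  1.0],
              [1.0,  1.0, -4.0]])

# Gershgorin: every eigenvalue lies in some disk centered at a_ii
# with radius sum_{j != i} |a_ij|.
centers = np.diag(A)
radii = np.sum(np.abs(A), axis=1) - np.abs(centers)

eigs = np.linalg.eigvals(A)
covered = all(
    any(abs(lam - c) <= r + 1e-9 for c, r in zip(centers, radii))
    for lam in eigs
)
print(covered)
```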
- Show a square matrix is invertible if and only if it has no zero eigenvalues.
- Using Schur’s theorem, show the trace of an n × n matrix equals the sum of the eigenvalues and the determinant of an n × n matrix is the product of the eigenvalues.
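Both identities in the preceding exercise are easy to spot-check numerically on a random sample matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))

eigs = np.linalg.eigvals(A)

# trace = sum of eigenvalues, det = product of eigenvalues
trace_gap = abs(np.trace(A) - eigs.sum())
det_gap = abs(np.linalg.det(A) - np.prod(eigs))
print(trace_gap, det_gap)
```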
- Using Schur's theorem, show that if A is any complex n × n matrix having eigenvalues {λ_{i}} listed according to multiplicity, then ∑_{i,j}|a_{ij}|^{2} ≥ ∑_{i=1}^{n}|λ_{i}|^{2}. Show that equality holds if and only if A is normal. This inequality is called Schur's inequality. [20]
- Here is a matrix.
I know this matrix has an inverse before doing any computations. How do I know?

- Show the critical points of the following function are
and classify them as local minima, local maxima or saddle points.
f(x,y,z) = −x^{4} + 6x^{3} − 6x^{2} + zx^{2} − 2zx − 2y^{2} − 12y − 18 − z^{2}.
- Here is a function of three variables.
Change the variables so that in the new variables there are no mixed terms (terms involving xy, yz, etc.). Two eigenvalues are 12 and 18.
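The quadratic forms in these change-of-variables exercises are not reproduced in this copy, but the technique is orthogonal diagonalization of the symmetric coefficient matrix; the sketch below uses a hypothetical sample form.

```python
import numpy as np

# Hypothetical symmetric coefficient matrix of a quadratic form
# q(x) = x^T M x (the exercise's form is not shown here).
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Orthogonal diagonalization: columns of P are orthonormal eigenvectors.
eigvals, P = np.linalg.eigh(M)

# In the new variables y = P^T x, the form becomes sum_i eigvals[i]*y_i^2:
# P^T M P is diagonal, so no mixed terms remain.
D = P.T @ M @ P
off_diagonal = np.max(np.abs(D - np.diag(np.diag(D))))
print(eigvals, off_diagonal)
```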

- Here is a function of three variables.
Change the variables so that in the new variables there are no mixed terms (terms involving xy, yz, etc.). The eigenvalues of the matrix which you will work with are −, , −1.
- Here is a function of three variables.
Change the variables so that in the new variables there are no mixed terms (terms involving xy, yz, etc.).

- Show the critical points of the function
are points of the form
for t ∈ ℝ and classify them as local minima, local maxima or saddle points.

- Show the critical points of the function
are
, , and
and classify them as local minima, local maxima or saddle points.
- Let f(x,y) = 3x^{4} − 24x^{2} + 48 − yx^{2} + 4y. Find and classify the critical points using the second derivative test.
- Let f(x,y) = 3x^{4} − 5x^{2} + 2 − y^{2}x^{2} + y^{2}. Find and classify the critical points using the second derivative test.
- Let f(x,y) = 5x^{4} − 7x^{2} − 2 − 3y^{2}x^{2} + 11y^{2} − 4y^{4}. Find and classify the critical points using the second derivative test.
- Let f(x,y,z) = −2x^{4} − 3yx^{2} + 3x^{2} + 5x^{2}z + 3y^{2} − 6y + 3 − 3zy + 3z + z^{2}. Find and classify the critical points using the second derivative test.
- Let f(x,y,z) = 3yx^{2} − 3x^{2} − x^{2}z − y^{2} + 2y − 1 + 3zy − 3z − 3z^{2}. Find and classify the critical points using the second derivative test.
- Let Q be orthogonal. Find the possible values of det(Q).
- Let U be unitary. Find the possible values of det(U).
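As a check on the first of the second-derivative-test exercises above, with f(x, y) = 3x^{4} − 24x^{2} + 48 − yx^{2} + 4y: setting f_{y} = −x^{2} + 4 = 0 forces x = ±2, and then f_{x} = 0 forces y = 0. The gradient and Hessian below are hand-derived for this particular f.

```python
import numpy as np

# f(x, y) = 3x^4 - 24x^2 + 48 - y x^2 + 4y
def grad(x, y):
    return np.array([12*x**3 - 48*x - 2*x*y, -x**2 + 4])

def hessian(x, y):
    return np.array([[36*x**2 - 48 - 2*y, -2*x],
                     [-2*x, 0.0]])

# f_y = 0 forces x = +-2, and then f_x = 0 forces y = 0.
critical_points = [(2.0, 0.0), (-2.0, 0.0)]

kinds = []
for (x, y) in critical_points:
    assert np.allclose(grad(x, y), 0)
    d = np.linalg.det(hessian(x, y))   # det H = -4x^2 = -16 < 0
    kinds.append("saddle" if d < 0 else "min/max")
print(kinds)
```

Since the Hessian determinant is negative at both points, the second derivative test classifies both as saddle points.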
- If a matrix is nonzero, can it have only zero for eigenvalues?
- A matrix A is called nilpotent if A^{k} = 0 for some positive integer k. Suppose A is a nilpotent matrix. Show it has only 0 for an eigenvalue.
- If A is a nonzero nilpotent matrix, show it must be defective.
- Suppose A is a nondefective n × n matrix and its eigenvalues are all either 0 or 1. Show A^{2} = A. Could you say anything interesting if the eigenvalues were all either 0, 1, or −1? By DeMoivre's theorem, an n^{th} root of unity is of the form cos(2kπ∕n) + i sin(2kπ∕n). Could you generalize the sort of thing just described to get A^{n} = A? Hint: Since A is nondefective, there exists S such that S^{−1}AS = D where D is a diagonal matrix.
- This and the following problems will present most of a differential equations course. Most of the explanations are given. You fill in any details needed. To begin with, consider the scalar initial value problem
y^{′} = ay, y(0) = y_{0}.
When a is real, show the unique solution to this problem is y(t) = y_{0}e^{at}. Next suppose
y^{′} = (a + ib)y, y(0) = y_{0} (6.21)
where y(t) = u(t) + iv(t). Show there exists a unique solution and it is given by
y(t) = y_{0}e^{at}(cos bt + i sin bt). (6.22)
Next show that for a real or complex, there exists a unique solution to the initial value problem
y^{′} = ay, y(t_{0}) = y_{0}
and it is given by
y(t) = y_{0}e^{a(t−t_{0})}.
Hint: For the first part, write the equation as y^{′} − ay = 0 and multiply both sides by e^{−at}. Then explain why you get (e^{−at}y(t))^{′} = 0. Now you finish the argument. To show uniqueness in the second part, suppose
y^{′} = (a + ib)y, y(0) = 0
and verify this requires y(t) = 0. To do this, note |y|^{2} = yȳ and that
(|y|^{2})^{′} = y^{′}ȳ + yȳ^{′} = (a + ib)yȳ + (a − ib)ȳy = 2a|y|^{2}.
Thus from the first part, |y(t)|^{2}e^{−2at} is constant, and so |y(t)|^{2} = |y(0)|^{2}e^{2at} = 0e^{2at} = 0. Finally observe by a simple computation that 6.21 is solved by 6.22. For the last part, write the equation as y^{′} − ay = 0 and multiply both sides by e^{−at} and then integrate from t_{0} to t using the initial condition.
- Now consider A an n × n matrix. By Schur’s theorem there exists unitary Q such
that Q^{∗}AQ = T where T is upper triangular. Now consider the first order initial value problem
x^{′} = Ax, x(t_{0}) = x_{0}.
Show there exists a unique solution to this first order system. Hint: Let y = Q^{−1}x and so the system becomes
y^{′} = Ty, y(t_{0}) = Q^{−1}x_{0}. (6.23)
Now letting y = (y_{1},⋯,y_{n})^{T}, the bottom equation becomes
y_{n}^{′} = t_{nn}y_{n}, y_{n}(t_{0}) given.
Then use the solution you get in this to get the solution to the initial value problem which occurs one level up, namely
y_{n−1}^{′} = t_{(n−1)(n−1)}y_{n−1} + t_{(n−1)n}y_{n}, y_{n−1}(t_{0}) given.
Continue doing this to obtain a unique solution to 6.23.
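The scalar solution formula from the problem above, y(t) = y_{0}e^{at}(cos bt + i sin bt) for y^{′} = (a + ib)y, can be spot-checked with a finite difference quotient; a, b, and y_{0} below are arbitrary sample values, not from the text.

```python
import numpy as np

a, b = 0.3, 1.7          # arbitrary sample values
y0 = 2.0 + 1.0j
lam = a + 1j * b

def y(t):
    # claimed solution of y' = (a + ib) y, y(0) = y0
    return y0 * np.exp(a * t) * (np.cos(b * t) + 1j * np.sin(b * t))

t, h = 0.8, 1e-6
deriv = (y(t + h) - y(t - h)) / (2 * h)   # central difference
residual = abs(deriv - lam * y(t))
print(residual)
```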

- Now suppose Φ(t) is an n × n matrix of the form
Φ(t) = (x_{1}(t) ⋯ x_{n}(t)) (6.24)
where
x_{k}^{′}(t) = Ax_{k}(t) for each k = 1,⋯,n.
Explain why Φ^{′}(t) = AΦ(t) if and only if Φ(t) is given in the form of 6.24. Also explain why if c ∈ F^{n}, then y(t) ≡ Φ(t)c solves the equation y^{′} = Ay.
- In the above problem, consider the question whether all solutions to
y^{′} = Ay (6.25)
are obtained in the form Φ(t)c for some choice of c ∈ F^{n}. In other words, is the general solution to this equation Φ(t)c for c ∈ F^{n}? Prove the following theorem using linear algebra.
Theorem 6.10.1 Suppose Φ(t) is an n × n matrix which satisfies Φ^{′}(t) = AΦ(t). Then the general solution to 6.25 is Φ(t)c if and only if Φ(t)^{−1} exists for some t. Furthermore, if Φ^{′}(t) = AΦ(t), then either Φ(t)^{−1} exists for all t or Φ(t)^{−1} never exists for any t.
(det Φ(t) is called the Wronskian and this theorem is sometimes called the Wronskian alternative.)
Hint: Suppose first the general solution is of the form Φ(t)c where c is an arbitrary constant vector in F^{n}. You need to verify Φ(t)^{−1} exists for some t. In fact, show Φ(t)^{−1} exists for every t. Suppose then that Φ(t_{0})^{−1} does not exist. Explain why there exists c ∈ F^{n} such that there is no solution x to the equation c = Φ(t_{0})x. By the existence part of Problem 38 there exists a solution to
x^{′} = Ax, x(t_{0}) = c,
but this cannot be in the form Φ(t)c. Thus for every t, Φ(t)^{−1} exists. Next suppose for some t_{0}, Φ(t_{0})^{−1} exists. Let z^{′} = Az and choose c such that
z(t_{0}) = Φ(t_{0})c.
Then both z(t), Φ(t)c solve
x^{′} = Ax, x(t_{0}) = z(t_{0}).
Apply uniqueness to conclude z = Φ(t)c. Finally, consider that Φ(t)c for c ∈ F^{n} either is the general solution or it is not the general solution. If it is, then Φ(t)^{−1} exists for all t. If it is not, then Φ(t)^{−1} cannot exist for any t, from what was just shown.
- Let Φ^{′}(t) = AΦ(t). Then Φ(t) is called a fundamental matrix if Φ(t)^{−1} exists for all t. Show there exists a unique solution to the equation
x^{′} = Ax + f, x(t_{0}) = x_{0} (6.26)
and it is given by the formula
x(t) = Φ(t)Φ(t_{0})^{−1}x_{0} + Φ(t)∫_{t_{0}}^{t}Φ(s)^{−1}f(s)ds.
Now these few problems have done virtually everything of significance in an entire undergraduate differential equations course, illustrating the superiority of linear algebra. The above formula is called the variation of constants formula.
Hint: Uniqueness is easy. If x_{1}, x_{2} are two solutions, then let u(t) = x_{1}(t) − x_{2}(t) and argue u^{′} = Au, u(t_{0}) = 0. Then use Problem 38. To verify there exists a solution, you could just differentiate the above formula using the fundamental theorem of calculus and verify it works. Another way is to assume the solution in the form
x(t) = Φ(t)c(t)
and find c(t) to make it all work out. This is called the method of variation of parameters.
- Show there exists a special Φ such that Φ
^{′}(t) = AΦ(t), Φ(0) = I, and suppose Φ(t)^{−1} exists for all t. Show using uniqueness that Φ(−t) = Φ(t)^{−1} and that for all t, s ∈ ℝ,
Φ(t + s) = Φ(t)Φ(s).
Explain why with this special Φ, the solution to 6.26 can be written as
x(t) = Φ(t − t_{0})x_{0} + ∫_{t_{0}}^{t}Φ(t − s)f(s)ds.
Hint: Let Φ(t) be such that the j^{th} column is x_{j}(t) where x_{j}^{′} = Ax_{j}, x_{j}(0) = e_{j}. Use uniqueness as required.
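This special Φ is the matrix exponential Φ(t) = e^{At}. A minimal sketch, using a truncated Taylor series (adequate only for moderate ‖At‖; a sample matrix, not from the text), checks Φ(0) = I and the property Φ(t + s) = Φ(t)Φ(s).

```python
import numpy as np

def expm_taylor(M, terms=30):
    # Truncated Taylor series for e^M; fine for small norm(M).
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[0.0,  1.0],
              [-2.0, -3.0]])   # sample matrix, eigenvalues -1 and -2
t, s = 0.4, 0.7

Phi = lambda u: expm_taylor(A * u)

# Phi(0) = I and the semigroup property Phi(t+s) = Phi(t) Phi(s)
gap0 = np.max(np.abs(Phi(0.0) - np.eye(2)))
gap = np.max(np.abs(Phi(t + s) - Phi(t) @ Phi(s)))
print(gap0, gap)
```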

- You can see more on this problem and the next one in the latest version of Horn and Johnson, [17]. Two n × n matrices A, B are said to be congruent if there is an invertible P such that
B = PAP^{∗}.
Let A be a Hermitian matrix. Thus it has all real eigenvalues. Let n_{+} be the number of positive eigenvalues, n_{−} the number of negative eigenvalues, and n_{0} the number of zero eigenvalues. For k a positive integer, let I_{k} denote the k × k identity matrix and O_{k} the k × k zero matrix. Then the inertia matrix of A is the following block diagonal n × n matrix:
diag(I_{n_{+}}, −I_{n_{−}}, O_{n_{0}}).
Show that A is congruent to its inertia matrix. Next show that congruence is an equivalence relation on the set of Hermitian matrices. Finally, show that if two Hermitian matrices have the same inertia matrix, then they must be congruent. Hint: First recall that there is a unitary matrix U such that
U^{∗}AU = diag(D_{n_{+}}, D_{n_{−}}, O_{n_{0}}),
where D_{n_{+}} is a diagonal matrix having the positive eigenvalues of A on its diagonal, D_{n_{−}} being defined similarly. Now let |D_{n_{−}}| denote the diagonal matrix which replaces each entry of D_{n_{−}} with its absolute value. Consider the two diagonal matrices
D = D^{∗} = diag(D_{n_{+}}^{−1∕2}, |D_{n_{−}}|^{−1∕2}, I_{n_{0}}).
Now consider D^{∗}U^{∗}AUD.
- Show that if A, B are two congruent Hermitian matrices, then they have the same inertia matrix. Hint: Let A = SBS^{∗} where S is invertible. Show that A, B have the same rank and this implies that they are each unitarily similar to a diagonal matrix which has the same number of zero entries on the main diagonal. Therefore, letting V_{A} be the span of the eigenvectors associated with positive eigenvalues of A and V_{B} being defined similarly, it suffices to show that these have the same dimensions. Show that ⟨Ax, x⟩ > 0 for all nonzero x ∈ V_{A}. Next consider S^{∗}V_{A}. For x ∈ V_{A}, explain why ⟨B(S^{∗}x), S^{∗}x⟩ = ⟨Ax, x⟩ > 0. Use this to argue that the dimension of V_{B} is at least as large as the dimension of V_{A}. Hence there are at least as many positive eigenvalues for B as there are for A. Switching A, B, you can turn the inequality around. Thus the two have the same inertia matrix.
- Let A be an m × n matrix. Then if you unraveled it, you could consider it as a vector in ℂ^{nm}. The Frobenius inner product on the vector space of m × n matrices is defined as
⟨A, B⟩ ≡ trace(AB^{∗}).
Show that this really does satisfy the axioms of an inner product space and that it also amounts to nothing more than considering m × n matrices as vectors in ℂ^{nm}.
- ↑Consider the n × n unitary matrices. Show that whenever U is such a matrix, it follows that
∑_{i,j}|U_{ij}|^{2} = n,
so in particular the entries of U are bounded. Next explain why if {U_{k}} is any sequence of unitary matrices, there exists a subsequence {U_{k_{m}}}_{m=1}^{∞} such that lim_{m→∞}U_{k_{m}} = U where U is unitary. Here the limit takes place in the sense that the entries of U_{k_{m}} converge to the corresponding entries of U.
- ↑Let A, B be two n × n matrices. Denote by σ(A) the set of eigenvalues of A. Define
dist(σ(A), σ(B)) ≡ max_{λ∈σ(A)} min{|λ − μ| : μ ∈ σ(B)}.
Explain why dist(σ(A), σ(B)) is small if and only if every eigenvalue of A is close to some eigenvalue of B. Now prove the following theorem using the above problem and Schur's theorem. This theorem says roughly that if A is close to B, then the eigenvalues of A are close to those of B in the sense that every eigenvalue of A is close to an eigenvalue of B.
Theorem 6.10.2 Suppose lim_{k→∞}A_{k} = A. Then
lim_{k→∞} dist(σ(A_{k}), σ(A)) = 0.
- Let A be a 2 × 2 matrix which is not a multiple of the identity. Show that A is similar to a 2 × 2 matrix which has at least one diagonal entry equal to 0. Hint: First note that there exists a vector a such that Aa is not a multiple of a. Then consider
B = S^{−1}AS where S ≡ (a Aa) is the matrix whose columns are a and Aa.
Show B has a zero on the main diagonal.
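The two congruence exercises above amount to Sylvester's law of inertia, which can be illustrated numerically; the sketch below uses random data, with a real symmetric matrix standing in for the Hermitian case.

```python
import numpy as np

rng = np.random.default_rng(2)

# A random symmetric matrix and a random, comfortably invertible S.
A = rng.standard_normal((4, 4))
A = (A + A.T) / 2
S = rng.standard_normal((4, 4)) + 4 * np.eye(4)

B = S @ A @ S.T          # congruent to A

def inertia(M, tol=1e-9):
    # (number of positive, negative, zero eigenvalues)
    e = np.linalg.eigvalsh(M)
    return (int(np.sum(e > tol)), int(np.sum(e < -tol)),
            int(np.sum(np.abs(e) <= tol)))

same = inertia(A) == inertia(B)
print(inertia(A), inertia(B), same)
```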

- ↑Let A be a complex n × n matrix which has trace equal to 0. Show that A is similar to a matrix which has all zeros on the main diagonal. Hint: Use Problem 30 on Page 366 to argue that a given matrix is similar to one which has its diagonal entries permuted in any order desired. Then use the above problem and block multiplication to show that if A has k nonzero entries, then it is similar to a matrix which has k − 1 nonzero entries. Finally, when A is similar to one which has at most one nonzero entry, this one must also be zero because of the condition on the trace.
- ↑An n × n matrix X is a commutator if there are n × n matrices A, B such that X = AB − BA. Show that the trace of any commutator is 0. Next show that if a complex matrix X has trace equal to 0, then it is in fact a commutator. Hint: Use the above problem to show that it suffices to consider X having all zero entries on the main diagonal. Then define
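The definition the hint calls for is elided in this copy; one standard choice, assuming it is along these lines, is to take D diagonal with distinct entries and, for zero-diagonal X, set A_{ij} = X_{ij}∕(d_{i} − d_{j}) for i ≠ j, since then (DA − AD)_{ij} = (d_{i} − d_{j})A_{ij} = X_{ij}. A numerical check on sample data:

```python
import numpy as np

# A sample trace-zero matrix with zero main diagonal (hypothetical data).
X = np.array([[0.0, 1.0, 2.0],
              [3.0, 0.0, 4.0],
              [5.0, 6.0, 0.0]])

d = np.array([1.0, 2.0, 3.0])     # distinct diagonal entries
D = np.diag(d)

# For i != j set A_ij = X_ij / (d_i - d_j); then (DA - AD)_ij
# = (d_i - d_j) A_ij = X_ij, and the diagonal of DA - AD is zero.
n = len(d)
A = np.zeros_like(X)
for i in range(n):
    for j in range(n):
        if i != j:
            A[i, j] = X[i, j] / (d[i] - d[j])

commutator_gap = np.max(np.abs(D @ A - A @ D - X))
print(commutator_gap)
```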
