- Let m < n and let A be an m×n matrix. Show that A is not one to one. Hint: Consider the n×n matrix A_{1} which is of the form

  $$A_{1} = \begin{pmatrix} A \\ 0 \end{pmatrix}$$

  where the 0 denotes an (n−m)×n matrix of zeros. Thus det A_{1} = 0 and so A_{1} is not one to one. Now observe that A_{1}x is the vector

  $$A_{1}x = \begin{pmatrix} Ax \\ 0 \end{pmatrix},$$

  which equals zero if and only if Ax = 0.
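As a quick numerical illustration of the hint (with a hypothetical 2×3 matrix of my own choosing), padding A with zero rows produces a singular square matrix, and a null vector of the padded matrix is also a null vector of A:

```python
import numpy as np

# Hypothetical example with m = 2 < n = 3.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# Pad with a row of zeros to form the square matrix A_1 from the hint.
A1 = np.vstack([A, np.zeros((1, 3))])
print(np.linalg.det(A1))   # ~0: A_1 is singular

# A nonzero vector in the null space of A_1 (right singular vector for the
# zero singular value); by the exercise it is also in the null space of A.
x = np.linalg.svd(A1)[2][-1]
print(np.allclose(A1 @ x, 0), np.allclose(A @ x, 0))
```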

- Let v_{1},…,v_{n} be vectors in F^{n} and let M(v_{1},…,v_{n}) denote the matrix whose i^{th} column equals v_{i}. Define

  $$d(v_{1},…,v_{n}) ≡ \det(M(v_{1},…,v_{n})).$$

  Prove that d is linear in each variable (multilinear), that

  (8.15) d(v_{1},…,v_{i},…,v_{j},…,v_{n}) = −d(v_{1},…,v_{j},…,v_{i},…,v_{n}),

  and

  (8.16) d(e_{1},…,e_{n}) = 1,

  where here e_{j} is the vector in F^{n} which has a zero in every position except the j^{th} position in which it has a one.
- If A, B are similar matrices, show that they have the same determinant. Also show that they have the same characteristic polynomial.
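A numerical spot check of the similarity invariants, with hypothetical example matrices A and S of my own choosing:

```python
import numpy as np

# A arbitrary, S invertible (det S = 1), so B is similar to A.
A = np.array([[2.0, 1.0], [0.0, 3.0]])
S = np.array([[1.0, 1.0], [1.0, 2.0]])
B = np.linalg.inv(S) @ A @ S

print(np.linalg.det(A), np.linalg.det(B))   # equal determinants
print(np.poly(A), np.poly(B))               # both ≈ [1, -5, 6]: same char poly
```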
- Suppose f : F^{n} ×⋯× F^{n} → F satisfies 8.15 and 8.16 and is linear in each variable. Show that f = d.
- Show that if you replace a row (column) of an n×n matrix A with itself added to some multiple of another row (column), then the new matrix has the same determinant as the original one. It was done in the chapter but go over it yourself.
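The row-operation fact in the last problem is easy to check numerically; a sketch with a hypothetical 3×3 matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [3.0, 1.0, 4.0],
              [2.0, 5.0, 1.0]])

B = A.copy()
B[1] = B[1] + 7.0 * B[0]   # replace row 2 with row 2 + 7*(row 1)

print(np.linalg.det(A), np.linalg.det(B))  # same determinant
```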
- Use the result of Problem 5 to evaluate by hand the determinant
- Find the inverse, if it exists, of the matrix
- Let Ly = y^{(n)} + a_{n−1}y^{(n−1)} + ⋯ + a_{1}y^{′} + a_{0}y where the a_{i} are given continuous functions defined on an interval, and y is some function which has n derivatives, so it makes sense to write Ly. Suppose Ly_{k} = 0 for k = 1,2,…,n. The Wronskian of these functions, y_{i}, is defined as

  $$W(y_{1},…,y_{n})(x) ≡ \det\begin{pmatrix} y_{1}(x) & ⋯ & y_{n}(x) \\ y_{1}^{′}(x) & ⋯ & y_{n}^{′}(x) \\ ⋮ & & ⋮ \\ y_{1}^{(n−1)}(x) & ⋯ & y_{n}^{(n−1)}(x) \end{pmatrix}.$$

  Show that, writing W = W(y_{1},…,y_{n})(x) to save space,

  $$W^{′} = \det\begin{pmatrix} y_{1}(x) & ⋯ & y_{n}(x) \\ ⋮ & & ⋮ \\ y_{1}^{(n−2)}(x) & ⋯ & y_{n}^{(n−2)}(x) \\ y_{1}^{(n)}(x) & ⋯ & y_{n}^{(n)}(x) \end{pmatrix}.$$

  Now use the differential equation, Ly = 0, which is satisfied by each of these functions y_{i}, and properties of determinants presented above to verify that W^{′} + a_{n−1}W = 0. Give an explicit solution of this linear differential equation, Abel’s formula, and use your answer to verify that the Wronskian of these solutions to the equation Ly = 0 either vanishes identically on the interval or never vanishes.
- Show that the identity matrix is not similar to any other matrix.
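Abel's formula can be sanity checked numerically. A minimal sketch, assuming the hypothetical equation y″ + 3y′ + 2y = 0 (so a_{n−1} = 3) with solutions e^{−t} and e^{−2t}; the Wronskian should satisfy W(t) = W(0)e^{−3t} and never vanish:

```python
import numpy as np

def W(t):
    # Wronskian det [[y1, y2], [y1', y2']] for y1 = e^{-t}, y2 = e^{-2t}.
    y1, y2 = np.exp(-t), np.exp(-2 * t)
    dy1, dy2 = -np.exp(-t), -2 * np.exp(-2 * t)
    return y1 * dy2 - dy1 * y2

t = np.linspace(0.0, 2.0, 5)
print(np.allclose(W(t), W(0.0) * np.exp(-3.0 * t)))  # True: Abel's formula
```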
- Two n×n matrices, A and B, are similar if B = S^{−1}AS for some invertible n×n matrix S. Prove a theorem which is illustrated by the following picture. Give an example of two matrices which are not similar but which have the same trace, characteristic polynomial, and determinant.
- Suppose the characteristic polynomial of an n × n matrix A is of the form

  $$λ^{n} + a_{n−1}λ^{n−1} + ⋯ + a_{1}λ + a_{0}$$

  and that a_{0} ≠ 0. Find a formula for A^{−1} in terms of powers of the matrix A. Show that A^{−1} exists if and only if a_{0} ≠ 0. In fact, show that a_{0} = (−1)^{n} det(A). Note how similar this is to what we did with algebraic numbers earlier on.
- ↑ Letting p(λ) denote the characteristic polynomial of A, show that p_{ε}(λ) ≡ p(λ − ε) is the characteristic polynomial of A + εI. Then show that if det(A) = 0, it follows that det(A + εI) ≠ 0 whenever ε ≠ 0 is sufficiently small.
- In constitutive modeling of the stress and strain tensors, one sometimes considers sums of the form
  $$\sum_{k=0}^{∞} a_{k}A^{k},$$

  where A is a 3×3 matrix. Show using the Cayley Hamilton theorem that if such a thing makes any sense, you can always obtain it as a finite sum having no more than 3 terms.
- Recall you can find the determinant by expanding along the j^{th} column:

  $$\det(A) = \sum_{i} A_{ij}\,\mathrm{cof}(A)_{ij}.$$

  Think of det(A) as a function of the entries A_{ij}. Explain why the ij^{th} cofactor is really just

  $$\mathrm{cof}(A)_{ij} = \frac{∂\det(A)}{∂A_{ij}}.$$

- Let U be an open set in ℝ^{n} and let g : U → ℝ^{n} be such that all the first partial derivatives of all components of g exist and are continuous. Under these conditions, form the matrix Dg(x) given by

  $$Dg(x)_{ij} ≡ \frac{∂g_{i}(x)}{∂x_{j}} ≡ g_{i,j}(x).$$

  The best kept secret in calculus courses is that the linear transformation determined by this matrix Dg(x) is called the derivative of g and is the correct generalization of the concept of derivative of a function of one variable. Suppose the second partial derivatives also exist and are continuous. Then show that

  $$\sum_{j} \mathrm{cof}(Dg)_{ij,j} = 0.$$

  Hint: First explain why

  $$\sum_{i} g_{i,k}\,\mathrm{cof}(Dg)_{ij} = δ_{jk}\det(Dg).$$

  Next differentiate with respect to x_{j} and sum on j, using the equality of mixed partial derivatives. Assume det(Dg) ≠ 0 to prove the identity in this special case. Then explain, using Problem 12, why there exists a sequence ε_{k} → 0 such that for g_{ε_{k}} ≡ g(x) + ε_{k}x, det(Dg_{ε_{k}}) ≠ 0, and so the identity holds for g_{ε_{k}}. Then take a limit to get the desired result in general. This is an extremely important identity which has surprising implications. One can build degree theory on it for example. It also leads to simple proofs of the Brouwer fixed point theorem from topology.
- A determinant of the form
  $$\begin{vmatrix} 1 & 1 & ⋯ & 1 \\ a_{0} & a_{1} & ⋯ & a_{n} \\ ⋮ & ⋮ & & ⋮ \\ a_{0}^{n} & a_{1}^{n} & ⋯ & a_{n}^{n} \end{vmatrix}$$

  is called a Vandermonde determinant. Show it equals ∏_{0≤i<j≤n}(a_{j} − a_{i}). By this is meant to take the product of all terms of the form (a_{j} − a_{i}) such that j > i. Hint: Show it works if n = 1, so you are looking at

  $$\begin{vmatrix} 1 & 1 \\ a_{0} & a_{1} \end{vmatrix} = a_{1} − a_{0}.$$

  Then suppose it holds for n − 1 and consider the case n. Consider the polynomial in t, p(t), which is obtained from the above determinant by replacing the last column with the column (1, t, …, t^{n})^{T}. Explain why p(a_{i}) = 0 for i = 0,…,n − 1. Explain why p(t) = c∏_{i=0}^{n−1}(t − a_{i}). Of course c is the coefficient of t^{n}. Find this coefficient from the above description of p(t) and the induction hypothesis. Then plug in t = a_{n} and observe you have the formula valid for n.
- The example in this exercise was shown to me by Marc van Leeuwen, and it helped to correct a misleading proof of the Cayley Hamilton theorem presented in this chapter. If p(λ) = q(λ) for all λ, or for all λ large enough, where p(λ), q(λ) are polynomials having matrix coefficients, then it is not necessarily the case that p(A) = q(A) for A a matrix of an appropriate size. The proof in question read as though it was using this incorrect argument. Let
Show that for all λ, the two displayed products both equal I. However, the identity fails when the matrix is substituted for λ. Explain why this can happen. In the proof of the Cayley-Hamilton theorem given in the chapter, show that the matrix A does commute with the matrices C_{i} in that argument. Hint: Multiply both sides out with N in place of λ. Does N commute with E_{i}?
- Explain why the proof of the Cayley-Hamilton theorem given in this chapter cannot possibly hold for arbitrary fields of scalars.
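Since the matrices of the van Leeuwen example are not reproduced here, the following sketch illustrates the same phenomenon with hypothetical matrices of my own choosing: two polynomial expressions in a scalar λ that agree for every λ, yet disagree when a matrix that fails to commute with the coefficients is substituted for λ.

```python
import numpy as np

# As polynomials in a scalar lam,
# (lam*I + E1)(lam*I + E2) = lam^2 I + lam (E1 + E2) + E1 E2,
# but substituting a matrix A for lam can break this when A E1 != E1 A.
E1 = np.array([[0.0, 1.0], [0.0, 0.0]])
E2 = np.array([[0.0, 0.0], [1.0, 0.0]])
A  = np.array([[1.0, 0.0], [0.0, 2.0]])   # does not commute with E1

I = np.eye(2)
lam = 3.7  # any scalar: both sides agree
lhs = (lam * I + E1) @ (lam * I + E2)
rhs = lam**2 * I + lam * (E1 + E2) + E1 @ E2
print(np.allclose(lhs, rhs))              # True for every scalar lam

# Substituting the matrix A for lam: the left side contains E1 @ A, the
# right side A @ E1, and these differ.
lhs_A = (A + E1) @ (A + E2)
rhs_A = A @ A + A @ (E1 + E2) + E1 @ E2
print(np.allclose(lhs_A, rhs_A))          # False
```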
- Suppose A is m × n and B is n × m. Letting I be the identity of the appropriate size, is it the case that det(I + AB) = det(I + BA)? Explain why or why not.
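Assuming the elided determinants are det(I + AB) and det(I + BA), this is Sylvester's determinant identity, and it does hold even though the two identity matrices have different sizes; a numerical check with hypothetical random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 5))   # m x n
B = rng.standard_normal((5, 2))   # n x m

# Sylvester's determinant identity: det(I_m + AB) = det(I_n + BA).
lhs = np.linalg.det(np.eye(2) + A @ B)
rhs = np.linalg.det(np.eye(5) + B @ A)
print(np.isclose(lhs, rhs))  # True
```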
- Suppose A is a linear transformation and let the characteristic polynomial be

  $$\prod_{j=1}^{q} ϕ_{j}(λ)^{n_{j}}$$

  where the ϕ_{j}(λ) are irreducible. Explain using Corollary 1.12.10 why the irreducible factors of the minimum polynomial are the ϕ_{j}(λ) and why the minimum polynomial is of the form ∏_{j=1}^{q} ϕ_{j}(λ)^{r_{j}} where r_{j} ≤ n_{j}. You can use the Cayley Hamilton theorem if you like.
- Use the existence of the Jordan canonical form for a linear transformation whose minimum polynomial factors completely to give a proof of the Cayley Hamilton theorem which is valid for any field of scalars. Hint: First assume the minimum polynomial factors completely into linear factors. In this case, note that the characteristic polynomial is of degree n and is the product of factors (λ − μ) where μ is an eigenvalue, listed according to algebraic multiplicity. However, if there are multiple blocks corresponding to some μ, then the minimum polynomial will have such terms but fewer of them. If the minimum polynomial does not split, consider a splitting field of the minimum polynomial. Then consider the minimum polynomial with respect to this larger field. How will the two minimum polynomials be related? The two characteristic polynomials will be exactly the same, being defined in terms of the determinant of λI − A. Show the minimum polynomial always divides the characteristic polynomial for any field F.
- Let q(λ) be a polynomial and C its companion matrix. Show the characteristic and minimum polynomials of C are the same and both equal q(λ).
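A numerical illustration, using the hypothetical cubic q(λ) = λ³ − 2λ² − 5λ + 6 = (λ − 1)(λ + 2)(λ − 3) and one common convention for its companion matrix:

```python
import numpy as np

# Companion matrix: subdiagonal of ones, last column -a_0, -a_1, -a_2.
a = [6.0, -5.0, -2.0]            # a_0, a_1, a_2 of q
C = np.zeros((3, 3))
C[1:, :-1] = np.eye(2)
C[:, -1] = [-ai for ai in a]

# Characteristic polynomial coefficients, leading coefficient first:
print(np.poly(C))                # ≈ [1, -2, -5, 6], i.e. q itself
```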
- ↑ Use the existence of the rational canonical form to give a proof of the Cayley Hamilton theorem valid for any field, even fields like the integers mod p for p a prime. The proof in this chapter on determinants was fine for fields like ℚ or ℝ where you could let λ → ∞, but it is not clear the same result holds in general. Hint: Recall that for a linear transformation, it has a rational canonical form M which is block diagonal. If M_{k} is one of the blocks, it corresponds to ker(ϕ_{k}(A)^{m_{k}}), and this M_{k} is itself block diagonal with the blocks given by companion matrices, and at least one of these must have size m_{k}d × m_{k}d. Thus ϕ_{k}(λ)^{m_{k}} must divide det(λI − M_{k}). Now use block multiplication to show that the minimum polynomial divides the characteristic polynomial. You might want to observe that the determinant of a block diagonal matrix is the product of the determinants of the blocks. To see this last thing, observe that

  $$\begin{pmatrix} B_{1} & & \\ & ⋱ & \\ & & B_{r} \end{pmatrix} = \prod_{k=1}^{r} \begin{pmatrix} I & & & \\ & B_{k} & & \\ & & ⋱ & \\ & & & I \end{pmatrix}.$$

  That is, for the terms of the product, you keep B_{k} in the k^{th} position and fill in the rest with identity matrices of the right size to correspond with the other blocks.
- Show that to find the eigenvalues of a matrix, it suffices to consider the roots of the characteristic polynomial. Hint: Use the Cayley Hamilton theorem. This gives another way to find eigenvalues.
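The last two problems lean on the Cayley Hamilton theorem; here is a numerical sanity check with a hypothetical symmetric 3×3 matrix, which also recovers the formula for A^{−1} as a polynomial in A from an earlier problem:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

c = np.poly(A)   # characteristic polynomial coefficients, leading first

# Cayley-Hamilton: A^3 + c[1] A^2 + c[2] A + c[3] I = 0.
p_A = A @ A @ A + c[1] * (A @ A) + c[2] * A + c[3] * np.eye(3)
print(np.allclose(p_A, 0))    # True

# Since a_0 = c[3] is nonzero, solving p(A) = 0 for the identity gives the
# inverse as a polynomial in A.
A_inv = -(A @ A + c[1] * A + c[2] * np.eye(3)) / c[3]
print(np.allclose(A_inv, np.linalg.inv(A)))   # True
```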
- Recall that a matrix is diagonalizable if it is similar to a diagonal matrix. Suppose you have a matrix A whose entries are in F and whose characteristic polynomial is the same as its minimum polynomial, but the characteristic polynomial of the matrix has a repeated root. Can you show that the matrix cannot be diagonalizable in any field containing F?
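A sketch of the obstruction in the smallest case, with the hypothetical matrix [[1, 1], [0, 1]], whose characteristic and minimal polynomials both equal (λ − 1)², a repeated root:

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
I = np.eye(2)

# Minimal polynomial is (lam - 1)^2, not lam - 1:
print(np.allclose(A - I, 0))              # False: A - I != 0
print(np.allclose((A - I) @ (A - I), 0))  # True: (A - I)^2 = 0

# If A were diagonalizable it would be similar to diag(1, 1) = I and hence
# equal to I; numerically, there is no basis of eigenvectors.
w, V = np.linalg.eig(A)
print(abs(np.linalg.det(V)))  # ~0: eigenvector matrix is singular
```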
- For W a subspace of V, W is said to have a complementary subspace [18] W^{′} if W ⊕ W^{′} = V. Suppose that both W, W^{′} are invariant with respect to A ∈ ℒ(V,V). Show that for any polynomial f(λ), if f(A)x ∈ W, then there exists w ∈ W such that f(A)x = f(A)w. A subspace W is called A admissible if it is A invariant and the condition of this problem holds.
- ↑ Return to Theorem 7.1.6 about the existence of a basis β for V where A ∈ ℒ(V,V). Adapt the statement and proof to show that if W is A admissible, then it has a complementary subspace which is also A invariant. Hint: The modified version of the theorem is: Suppose A ∈ ℒ(V,V) and the minimum polynomial of A is ϕ(λ)^{m} where ϕ(λ) is a monic irreducible polynomial. Also suppose that W is an A admissible subspace. Then there exists a basis for V which contains a basis of W, and the span of the remaining basis vectors is the A invariant complementary subspace for W. You may want to use the fact that ϕ(A)(V) ∩ W = ϕ(A)(W), which follows easily because W is A admissible. Then use this fact to show that ϕ(A)(W) is also A admissible.
- When you have an Abelian group V and a commutative ring with unity K such that the usual vector
space operations hold
then you call this V a K module. Thus, it is just a vector space except you have a ring of scalars rather than a field of scalars. Now suppose K = ℤ, the integers, and V = ℤ_{m} where m is some positive integer. Then if k ∈ K and [a] ∈ ℤ_{m}, you define k[a] in the usual way: just add [a] to itself k times, or if k is negative, you add [−a] = [m − a] to itself |k| times. Explain why this is a ℤ module. More generally, explain why an arbitrary Abelian group is a ℤ module. However, show that in general there is no linearly independent set of elements of ℤ_{m} which spans ℤ_{m}, although it is certainly true that {[1]} spans ℤ_{m}. Thus, when you replace a field with a ring, you lose the theorem that gives you a linearly independent subset of a spanning set. Hint: If [1] is in the span of your supposed basis, you have problems. If [1] is not in the span of your supposed basis, then you don’t have a spanning set.
- Now suppose you have K a commutative ring with unity and consider K^{n}. Show that the standard basis vectors e_{1},…,e_{n} span K^{n} and that if you have vectors v_{1},…,v_{m} for m < n, then v_{1},…,v_{m} do not span K^{n}. Hint: If they do span, then explain why you could get the following. Then consider something like this: Now consider Theorem 8.4.4, which still works if the entries of the matrix are from a commutative ring with unity. Is the set e_{1},…,e_{n} also linearly independent? By this is meant one of the definitions given earlier: if you have a linear combination of these vectors equal to 0, then all of the scalars are zero. Since the scalars only come from a ring, you can’t conclude that this is the same thing as saying that no vector is a linear combination of the others.
- If A(t) is an n × n matrix whose columns are a_{1}(t),…,a_{n}(t) and whose entries are differentiable,
show that

  $$\det(A(t))^{′} = \sum_{i=1}^{n} \det(A_{i}(t)),$$

  where A_{i} has the same columns as A except for the i^{th} column, which is a_{i}^{′}, the derivative of the i^{th} column.
- You have vectors x_{i}^{′} = Jx_{i} where J is a Jordan canonical form and is n×n. Form the matrix Φ ≡ (x_{1},…,x_{n}). Explain why Φ^{′} = JΦ. Now consider y_{i}^{T}, the i^{th} row of this matrix Φ. Explain why

  (*) y_{i}^{′T} = J_{ii}y_{i}^{T} + a_{i}y_{i+1}^{T},

  where a_{i} = 0 or 1 and a_{n} = 0. Now, using the result of Problem 30, explain why

  (**) det(Φ)^{′} = (J_{11} + ⋯ + J_{nn}) det(Φ).

  Next explain, using elementary ODE, why det(Φ(t)) = Ce^{(J_{11}+⋯+J_{nn})t} for some constant C, showing that det(Φ(t)) either vanishes for all t or for no t.
- Obtain exactly the same result as in the above Problem 31 for an arbitrary n×n matrix A. Use the result on the existence of the Jordan canonical form along with properties of determinants to make this easy. Recall also the earlier problem that the trace is an invariant, meaning that it does not change under similarity transformations. The formula you get is Abel’s formula for first order systems. A few other simple problems using Jordan form will wipe out almost the entire typical undergraduate differential equations course. There is actually an easier, but trickier, way to get the result of this problem.
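The systems version of Abel's formula from the last two problems, det Φ(t) = det Φ(0)·e^{t·tr A}, can be checked numerically; a sketch with a hypothetical diagonalizable A, building the fundamental matrix Φ(t) = e^{tA} from an eigendecomposition:

```python
import numpy as np

# Hypothetical 2x2 system x' = Ax with A diagonalizable (distinct eigenvalues).
A = np.array([[1.0, 2.0], [0.0, -3.0]])
w, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)

def Phi(t):
    # Fundamental matrix Phi(t) = exp(tA) = V diag(e^{w t}) V^{-1}.
    return (V * np.exp(w * t)) @ Vinv

# Abel's formula for systems: det Phi(t) = det Phi(0) * exp(t * tr A).
for t in (0.0, 0.5, 1.3):
    print(np.isclose(np.linalg.det(Phi(t)), np.exp(t * np.trace(A))))  # True each time
```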
