- Solve the system
using the Gauss Seidel method and the Jacobi method. Check your answer by also solving it using row operations.

- Solve the system
using the Gauss Seidel method and the Jacobi method. Check your answer by also solving it using row operations.

- Solve the system
using the Gauss Seidel method and the Jacobi method. Check your answer by also solving it using row operations.
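
The iterations named in the problems above can be sketched in Python. The system below is a hypothetical diagonally dominant example (the book's actual systems are not reproduced here), and `numpy.linalg.solve` stands in for the row-operations check:

```python
import numpy as np

def jacobi(A, b, x0, iters=100):
    """Jacobi iteration: x_{k+1} = D^{-1}(b - (A - D) x_k), D = diagonal of A."""
    D = np.diag(A)          # diagonal entries of A
    R = A - np.diagflat(D)  # off-diagonal part
    x = x0.astype(float)
    for _ in range(iters):
        x = (b - R @ x) / D
    return x

def gauss_seidel(A, b, x0, iters=100):
    """Gauss Seidel iteration: sweep the equations, always using the newest values."""
    n = len(b)
    x = x0.astype(float)
    for _ in range(iters):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

# Hypothetical strictly diagonally dominant system; both iterations converge for it.
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([6.0, 8.0, 8.0])
x0 = np.zeros(3)

x_j = jacobi(A, b, x0)
x_gs = gauss_seidel(A, b, x0)
x_direct = np.linalg.solve(A, b)   # stands in for solving by row operations
print(x_j, x_gs, x_direct)
```

Diagonal dominance here is what guarantees convergence (compare the later problem on diagonally dominant matrices); for a matrix without it, either iteration may diverge.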

- If you are considering a system of the form Ax = b and A^{−1} does not exist, will either the Gauss Seidel or Jacobi methods work? Explain. What does this indicate about finding eigenvectors for a given eigenvalue?
- For ∥x∥_{∞} ≡ max{|x_{j}| : j = 1,…,n}, the parallelogram identity does not hold. Explain.
- A norm is said to be strictly convex if whenever ∥x∥ = ∥y∥ = 1, x ≠ y, it follows
∥(x + y)∕2∥ < 1.
Show the norm ∥⋅∥ which comes from an inner product is strictly convex.
- A norm is said to be uniformly convex if whenever ∥x_{n}∥, ∥y_{n}∥ are equal to 1 for all n ∈ ℕ and lim_{n→∞} ∥x_{n} + y_{n}∥ = 2, it follows lim_{n→∞} ∥x_{n} − y_{n}∥ = 0. Show the norm ∥⋅∥ coming from an inner product is always uniformly convex. Also show that uniform convexity implies strict convexity which is defined in Problem 6.
- Suppose A : ℂ^{n} → ℂ^{n} is a one to one and onto matrix. Define ∥x∥_{A} ≡ ∥Ax∥. Show this is a norm.
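
The parallelogram identity and the norm ∥x∥_{A} ≡ ∥Ax∥ can both be spot-checked numerically. The matrix A below is an arbitrary invertible example chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def parallelogram_gap(norm, x, y):
    """|x+y|^2 + |x-y|^2 - 2|x|^2 - 2|y|^2; zero exactly when the identity holds."""
    return norm(x + y)**2 + norm(x - y)**2 - 2*norm(x)**2 - 2*norm(y)**2

two_norm = lambda v: np.linalg.norm(v, 2)
inf_norm = lambda v: np.linalg.norm(v, np.inf)

x, y = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(parallelogram_gap(two_norm, x, y))   # 0: this norm comes from an inner product
print(parallelogram_gap(inf_norm, x, y))   # -2: no inner product behind the max norm

# ||x||_A := ||Ax|| for invertible A is again a norm; spot-check the triangle inequality.
A = np.array([[2.0, 1.0], [0.0, 3.0]])     # hypothetical invertible matrix
norm_A = lambda v: np.linalg.norm(A @ v)
for _ in range(1000):
    u, v = rng.standard_normal(2), rng.standard_normal(2)
    assert norm_A(u + v) <= norm_A(u) + norm_A(v) + 1e-12
```

A random check like this of course proves nothing; it only illustrates what the problems ask you to prove.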

- If X is a finite dimensional normed vector space and A,B ∈ ℒ(X,X) such that ∥B∥ < ∥A∥, can it be concluded that ∥A^{−1}B∥ < 1?
- Let X be a vector space with a norm ∥⋅∥ and let V = span(v_{1},…,v_{m}) be a finite dimensional subspace of X such that {v_{1},…,v_{m}} is a basis for V. Show V is a closed subspace of X. This means that if w_{n} → w and each w_{n} ∈ V, then so is w. Next show that if w ∉ V,
dist(w,V) ≡ inf{∥w − v∥ : v ∈ V}
is a continuous function of w and dist(w,V) > 0. Next show that if w ∉ V, there exists z such that ∥z∥ = 1 and dist(z,V) > 1∕2. For those who know some advanced calculus, show that if X is an infinite dimensional vector space having norm ∥⋅∥, then the closed unit ball in X cannot be compact. Thus closed and bounded is never compact in an infinite dimensional normed vector space.
- Suppose ρ(A) < 1 for A ∈ ℒ(V,V) where V is a p dimensional vector space having a norm ∥⋅∥. You can use ℝ^{p} or ℂ^{p} if you like. Show there exists a new norm |||⋅||| such that with respect to this new norm, |||A||| < 1 where |||A||| denotes the operator norm of A taken with respect to this new norm on V,
|||A||| ≡ sup{|||Ax||| : |||x||| ≤ 1}.
Hint: You know from Gelfand’s theorem that
∥A^{n}∥^{1∕n} < r < 1
provided n is large enough, this operator norm taken with respect to ∥⋅∥. Show there exists 0 < λ < 1 such that ρ(A∕λ) < 1. You can do this by arguing the eigenvalues of A∕λ are the scalars μ∕λ where μ ∈ σ(A). Now let ℤ_{+} denote the nonnegative integers and define
|||x||| ≡ sup_{n∈ℤ_{+}} ∥(A∕λ)^{n}x∥.
First show this is actually a norm. Next explain why
|||Ax||| = λ|||(A∕λ)x||| ≤ λ|||x|||.
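Gelfand's theorem, which the hint above relies on, says ∥A^{n}∥^{1∕n} → ρ(A). A small numerical illustration with a non-normal matrix, where ∥A∥ > 1 yet ρ(A) < 1:

```python
import numpy as np

# Non-normal matrix: spectral radius 0.5, but operator 2-norm much larger than 1.
A = np.array([[0.5, 10.0],
              [0.0, 0.5]])
rho = max(abs(np.linalg.eigvals(A)))    # spectral radius = 0.5

# Gelfand: ||A^n||^{1/n} -> rho(A), so the quantity drops below 1 for large n
# even though ||A|| itself exceeds 1.
norms = [np.linalg.norm(np.linalg.matrix_power(A, n), 2)**(1.0/n)
         for n in (1, 5, 20, 100)]
print(rho, norms)
```

This is exactly why the problem needs a *new* norm: in the original norm A can expand vectors, but the spectral radius controls the long-run behavior of the powers.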
- Establish a similar result to Problem 11 without using Gelfand’s theorem. Use an argument which depends directly on the Jordan form or a modification of it.
- Using Problem 11 give an easier proof of Theorem 13.6.6 without having to use Corollary 13.6.5. It would suffice to use the norm of that problem and the contraction mapping principle of Lemma 13.6.4.
- A matrix A is diagonally dominant if |a_{ii}| > ∑_{j≠i} |a_{ij}|. Show that the Gauss Seidel method converges if A is diagonally dominant.
- Suppose f(λ) = ∑_{n=0}^{∞} a_{n}λ^{n} converges if |λ| < R. Show that if ρ(A) < R where A is an n × n matrix, then
f(A) ≡ ∑_{n=0}^{∞} a_{n}A^{n}
converges in ℒ(ℂ^{n},ℂ^{n}). Hint: Use Gelfand’s theorem and the root test.
- Referring to Corollary 13.4.4, for λ = a + ib show
e^{λt} = e^{at}(cos(bt) + i sin(bt)).
Hint: Let y(t) = e^{λt} and let z(t) = e^{−at}y(t). Show
z′′ + b^{2}z = 0, z(0) = 1, z′(0) = ib.
Now letting z = u + iv where u,v are real valued, show
u(t) = cos(bt) and v(t) = sin(bt)
work in the above and that there is at most one solution to
w′′ + b^{2}w = 0, w(0) = α, w′(0) = β.
Thus z(t) = cos(bt) + i sin(bt) and so y(t) = e^{at}(cos(bt) + i sin(bt)). To show there is at most one solution to the above problem, suppose you have two, w_{1},w_{2}. Subtract them. Let f = w_{1} − w_{2}. Thus
f′′ + b^{2}f = 0
and f is real valued. Multiply both sides by f′ and conclude
(d∕dt)((f′)^{2}∕2 + b^{2}f^{2}∕2) = 0.
Thus the expression in parenthesis is constant. Explain why this constant must equal 0.
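
The power-series problem above can be checked on the simplest example, f(λ) = 1∕(1 − λ) = ∑λ^{n} with R = 1: when ρ(A) < 1, the partial sums of the matrix series should converge to (I − A)^{−1}:

```python
import numpy as np

# f(lambda) = 1/(1 - lambda) = sum lambda^n has radius of convergence R = 1.
# With rho(A) < 1, the matrix series sum A^n should converge to (I - A)^{-1}.
A = np.array([[0.2, 0.7],
              [0.0, 0.4]])
assert max(abs(np.linalg.eigvals(A))) < 1.0    # rho(A) < R

S = np.zeros_like(A)
P = np.eye(2)
for _ in range(200):        # partial sums of the Neumann series sum A^n
    S = S + P
    P = P @ A

target = np.linalg.inv(np.eye(2) - A)
print(np.max(np.abs(S - target)))
```

Note the convergence rate is governed by ρ(A), not ∥A∥, which is the content of the Gelfand/root-test hint.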

- Let A ∈ ℒ(V,V) where V is a finite dimensional vector space. Show the following power series converges in ℒ(V,V):
Ψ(t) ≡ ∑_{k=0}^{∞} t^{k}A^{k}∕k!.
This was done in the chapter. Go over it and be sure you understand it. This is how you can define exp(tA). Next show that Ψ′(t) = AΨ(t), Ψ(0) = I. Next let Φ(t) ≡ ∑_{k=0}^{∞} (−t)^{k}A^{k}∕k!. Show that Φ(t) and Ψ(t) each commute with A. Next show that Φ(t)Ψ(t) = I for all t. Finally, solve the initial value problem
x′ = Ax + f(t), x(0) = x_{0}
in terms of Φ and Ψ. This yields most of the substance of a typical differential equations course.
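
A sketch of the series definition of Ψ and Φ, using a hypothetical 2 × 2 example and a finite-difference check of Ψ′ = AΨ:

```python
import numpy as np

def exp_series(A, t, terms=60):
    """Partial sum of Psi(t) = sum_k (tA)^k / k!."""
    S = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ (t * A) / k    # next term of the series
        S = S + term
    return S

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])   # hypothetical example; exp(tA) is a rotation matrix
t = 0.7

Psi = exp_series(A, t)        # Psi(t) = sum t^k A^k / k!
Phi = exp_series(-A, t)       # Phi(t) = sum (-t)^k A^k / k!

# Check Phi(t) Psi(t) = I, commutation with A, and Psi'(t) = A Psi(t) numerically.
h = 1e-6
dPsi = (exp_series(A, t + h) - exp_series(A, t - h)) / (2 * h)
print(np.max(np.abs(Phi @ Psi - np.eye(2))), np.max(np.abs(dPsi - A @ Psi)))
```

The identity Φ(t)Ψ(t) = I is what lets you write the solution of x′ = Ax + f as x(t) = Ψ(t)x_{0} + Ψ(t)∫_{0}^{t} Φ(s)f(s) ds, which is the variation-of-constants formula the problem is driving at.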

- In Problem 17 Ψ(t) is defined by the given series. Denote by exp(tσ(A)) the numbers exp(tλ) where λ ∈ σ(A). Show exp(tσ(A)) = σ(Ψ(t)). This is like Lemma 13.4.8. Letting J be the Jordan canonical form for A, A = SJS^{−1}, explain why
Ψ(t) = ∑_{k=0}^{∞} t^{k}A^{k}∕k! = S(∑_{k=0}^{∞} t^{k}J^{k}∕k!)S^{−1}
and you note that in J^{k}, the diagonal entries are of the form λ^{k} for λ an eigenvalue of A. Also J = D + N where N is nilpotent and commutes with D. Argue then that ∑_{k=0}^{∞} t^{k}J^{k}∕k! is an upper triangular matrix which has on the diagonal the expressions e^{λt} where λ ∈ σ(A). Thus conclude σ(Ψ(t)) ⊆ exp(tσ(A)). Next take e^{tλ} ∈ exp(tσ(A)) and argue it must be in σ(Ψ(t)). You can do this as follows:
Ψ(t) − e^{tλ}I = ∑_{k=0}^{∞} (t^{k}∕k!)(A^{k} − λ^{k}I) = (A − λI)B(t)
where the series defining B(t) converges to something in ℒ(V,V). To do this, use the ratio test and Lemma 13.4.2 after first using the triangle inequality. Since λ ∈ σ(A), Ψ(t) − e^{tλ}I is not one to one and so this establishes the other inclusion. You fill in the details. This theorem is a special case of theorems which go by the name “spectral mapping theorem”.
- Suppose Ψ(t) ∈ ℒ(V,W) where V,W are finite dimensional inner product spaces and t → Ψ(t) is continuous for t ∈ [a,b]: For every ε > 0 there exists δ > 0 such that if |s − t| < δ then ∥Ψ(t) − Ψ(s)∥ < ε. Show t → ⟨Ψ(t)u,v⟩ is continuous. Here it is the inner product in W. Also define what it means for t → Ψ(t)v to be continuous and show this is continuous. Do it all for differentiable in place of continuous. Next show t → ∥Ψ(t)∥ is continuous.
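
The spectral mapping claim σ(Ψ(t)) = exp(tσ(A)) is easy to illustrate numerically: compute Ψ(t) from its series and compare its eigenvalues with e^{tλ} for λ ∈ σ(A). The matrix is an arbitrary triangular example:

```python
import numpy as np

# Spectral mapping check: the eigenvalues of Psi(t) should be exp(t*lambda)
# for lambda in sigma(A).  Psi(t) is computed from the power series directly.
A = np.array([[1.0, 2.0],
              [0.0, -3.0]])   # sigma(A) = {1, -3}
t = 0.5

Psi = np.eye(2)
term = np.eye(2)
for k in range(1, 80):
    term = term @ (t * A) / k
    Psi = Psi + term

lhs = np.sort_complex(np.linalg.eigvals(Psi))
rhs = np.sort_complex(np.exp(t * np.linalg.eigvals(A)))
print(lhs, rhs)
```
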
- If z(t) ∈ W, a finite dimensional inner product space, what does it mean for t → z(t) to be continuous or differentiable? If z is continuous, define
∫_{a}^{b} z(t) dt ∈ W
as follows.
⟨w, ∫_{a}^{b} z(t) dt⟩ ≡ ∫_{a}^{b} ⟨w, z(t)⟩ dt
Show that this definition is well defined and furthermore the triangle inequality,
|∫_{a}^{b} z(t) dt| ≤ ∫_{a}^{b} |z(t)| dt,
and fundamental theorem of calculus,
(d∕dt)∫_{a}^{t} z(s) ds = z(t),
hold along with any other interesting properties of integrals which are true.
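
The triangle inequality for vector-valued integrals can be seen concretely, and seen to be strict, with Riemann sums and z(t) = (cos 2πt, sin 2πt), whose integral over [0,1] vanishes even though |z(t)| = 1:

```python
import numpy as np

# Check |∫ z dt| <= ∫ |z(t)| dt with componentwise Riemann sums on [0, 1].
ts = np.linspace(0.0, 1.0, 200_001)
dt = ts[1] - ts[0]
z = np.stack([np.cos(2 * np.pi * ts), np.sin(2 * np.pi * ts)])   # z(t) in R^2

integral = (z[:, :-1] * dt).sum(axis=1)          # left Riemann sum of ∫ z(t) dt
norm_of_integral = np.linalg.norm(integral)
integral_of_norm = (np.linalg.norm(z[:, :-1], axis=0) * dt).sum()

print(norm_of_integral, integral_of_norm)        # ~0 versus 1: strict inequality here
```

The gap is as large as possible here because z(t) sweeps through all directions, so cancellation inside the integral is total while |z(t)| never cancels.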

- For V,W two inner product spaces, define
∫_{a}^{b} Ψ(t) dt ∈ ℒ(V,W)
as follows.
⟨w, ∫_{a}^{b} Ψ(t) dt (v)⟩ ≡ ∫_{a}^{b} ⟨w, Ψ(t)v⟩ dt
Show this is well defined and does indeed give ∫_{a}^{b} Ψ(t) dt ∈ ℒ(V,W). Also show the triangle inequality
∥∫_{a}^{b} Ψ(t) dt∥ ≤ ∫_{a}^{b} ∥Ψ(t)∥ dt
where ∥⋅∥ is the operator norm and verify the fundamental theorem of calculus holds. Also verify the usual properties of integrals continue to hold such as the fact the integral is linear and
∫_{a}^{b} Ψ(t) dt + ∫_{b}^{c} Ψ(t) dt = ∫_{a}^{c} Ψ(t) dt
and similar things. Hint: On showing the triangle inequality, it will help if you use the fact that
|w|_{W} = sup{|⟨w,v⟩| : |v| ≤ 1}.
You should show this also.

- Prove Gronwall’s inequality. Suppose u(t) ≥ 0 and for all t ∈ [0,T],
u(t) ≤ u_{0} + ∫_{0}^{t} Ku(s) ds
where K is some nonnegative constant. Then
u(t) ≤ u_{0}e^{Kt}.
Hint: Let w(t) = ∫_{0}^{t} u(s) ds. Then using the fundamental theorem of calculus, w(t) satisfies the following.
w′(t) − Kw(t) ≤ u_{0}, w(0) = 0
Now use the usual techniques you saw in an introductory differential equations class. Multiply both sides of the above inequality by e^{−Kt} and note the resulting left side is now a total derivative. Integrate both sides from 0 to t and see what you have got. If you have problems, look ahead in the book. This inequality is proved later in Theorem D.4.3.
- With Gronwall’s inequality and the integral defined in Problem 21 with its properties listed there, prove there is at most one solution to the initial value problem
x′ = Ax + f(t), x(0) = x_{0}.
Hint: If there are two solutions, subtract them and call the result z. Then
z′ = Az, z(0) = 0.
It follows
z(t) = 0 + ∫_{0}^{t} Az(s) ds
and so
|z(t)| ≤ ∫_{0}^{t} ∥A∥|z(s)| ds.
Now consider Gronwall’s inequality of Problem 22.
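
The same Gronwall argument, run with a nonzero initial condition, gives the a priori bound |z(t)| ≤ |z(0)|e^{∥A∥t} for solutions of z′ = Az. A numerical check on a hypothetical matrix, with exp(tA) computed by its series:

```python
import numpy as np

# Gronwall bound for z' = Az: |z(t)| <= |z(0)| e^{||A|| t}.
# Check it on z(t) = exp(tA) z0 for a sample matrix A.
A = np.array([[0.0, 2.0],
              [-1.0, 1.0]])
z0 = np.array([1.0, -1.0])
opnorm = np.linalg.norm(A, 2)       # operator 2-norm of A

def exp_tA(t, terms=80):
    S, term = np.eye(2), np.eye(2)
    for k in range(1, terms):
        term = term @ (t * A) / k
        S = S + term
    return S

ok = all(np.linalg.norm(exp_tA(t) @ z0) <= np.linalg.norm(z0) * np.exp(opnorm * t) + 1e-9
         for t in np.linspace(0.0, 3.0, 31))
print(ok)
```
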

- Suppose A is a matrix which has the property that whenever μ ∈ σ(A), Re μ < 0. Consider the initial value problem
z′ = Az, z(0) = z_{0}.
The existence and uniqueness of a solution to this equation has been established above in preceding problems, Problems 17 to 23. Show that in this case where the real parts of the eigenvalues are all negative, the solution to the initial value problem satisfies
lim_{t→∞} z(t) = 0.
Hint: A nice way to approach this problem is to show you can reduce it to the consideration of the initial value problem
z′ = J_{ε}z, z(0) = z_{0}
where J_{ε} is the modified Jordan canonical form where instead of ones down the super diagonal, there are ε down the super diagonal (Problem 19). Then
J_{ε} = D + N_{ε}
where D is the diagonal matrix obtained from the eigenvalues of A and N_{ε} is a nilpotent matrix commuting with D which is very small provided ε is chosen very small. Now let Ψ(t) be the solution of
Ψ′ = −DΨ, Ψ(0) = I
described earlier as
Ψ(t) = ∑_{k=0}^{∞} (−t)^{k}D^{k}∕k!.
Thus Ψ(t) commutes with D and N_{ε}. Tell why. Next argue
(Ψ(t)z(t))′ = Ψ(t)N_{ε}z(t)
and integrate from 0 to t. Then
Ψ(t)z(t) − z_{0} = ∫_{0}^{t} Ψ(s)N_{ε}z(s) ds.
It follows
|Ψ(t)z(t)| ≤ |z_{0}| + ∫_{0}^{t} ∥N_{ε}∥|Ψ(s)z(s)| ds.
It follows from Gronwall’s inequality
|Ψ(t)z(t)| ≤ |z_{0}|e^{∥N_{ε}∥t}.
Now look closely at the form of Ψ(t) to get an estimate which is interesting. Explain why the diagonal entries of Ψ(t) are e^{−μt} for μ ∈ σ(A), so
|z_{i}(t)| ≤ |z_{0}|e^{(Re μ_{i} + ∥N_{ε}∥)t},
and now observe that if ε is chosen small enough, ∥N_{ε}∥ is so small that Re μ_{i} + ∥N_{ε}∥ < 0 and each component of z(t) converges to 0.
- Using Problem 24 show that if A is a matrix having the real parts of all eigenvalues less than 0 then if
Ψ′(t) = AΨ(t), Ψ(0) = I
it follows
lim_{t→∞} Ψ(t) = 0.
Hint: Consider the columns of Ψ(t).
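
The decay claimed in these two problems is visible numerically even for a nondiagonalizable example, where there is transient growth before the exponential factor wins. The matrix below is a hypothetical Jordan block with σ(A) = {−1}:

```python
import numpy as np

# If every eigenvalue of A has negative real part, z(t) = exp(tA) z0 should decay to 0.
A = np.array([[-1.0, 1.0],
              [0.0, -1.0]])     # Jordan block: sigma(A) = {-1}, not diagonalizable

def exp_tA(t, terms=100):
    S, term = np.eye(2), np.eye(2)
    for k in range(1, terms):
        term = term @ (t * A) / k
        S = S + term
    return S

z0 = np.array([1.0, 1.0])
norms = [np.linalg.norm(exp_tA(t) @ z0) for t in (0.0, 2.0, 5.0, 10.0)]
print(norms)     # here exp(tA) z0 = e^{-t} (1 + t, 1), which decays to 0
```

The ε-trick in the hint is exactly what controls the polynomial factor (1 + t) coming from the nilpotent part, since e^{(Re μ + ε′)t} dominates it for small ε′.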

- Let Ψ(t) be a fundamental matrix satisfying