using the Gauss Seidel method and the Jacobi method. Check your answer by also solving it using
row operations.
Solve the system
$$\begin{pmatrix} 5 & 1 & 1 \\ 1 & 7 & 2 \\ 0 & 2 & 4 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}$$
using the Gauss Seidel method and the Jacobi method. Check your answer by also solving it using
row operations.
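As an illustration of what the two iterations look like in practice, here is a minimal sketch in Python/NumPy (the function names, tolerance, and iteration cap are arbitrary choices, not anything fixed by the problem):

```python
import numpy as np

A = np.array([[5., 1., 1.],
              [1., 7., 2.],
              [0., 2., 4.]])
b = np.array([1., 2., 3.])

def jacobi(A, b, tol=1e-10, max_iter=500):
    # x_{k+1} = D^{-1} (b - (L + U) x_k), where D is the diagonal of A
    D = np.diag(A)
    R = A - np.diag(D)
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

def gauss_seidel(A, b, tol=1e-10, max_iter=500):
    # sweep through the equations, always using the newest components
    n = len(b)
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x
```

Both methods should converge here and agree with the answer found by row operations (equivalently, with `np.linalg.solve(A, b)`), since the coefficient matrix is strictly diagonally dominant.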
If you are considering a system of the form Ax = b and A⁻¹ does not exist, will either the Gauss Seidel or the Jacobi method work? Explain. What does this indicate about finding eigenvectors for a given eigenvalue?
For ||x||_∞ ≡ max{|x_j| : j = 1,2,···,n}, the parallelogram identity does not hold. Explain.
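To see the failure concretely, the pair x = (1,0), y = (0,1) already violates the identity ||x+y||² + ||x−y||² = 2||x||² + 2||y||²; a quick check (a sketch, any such pair works):

```python
import numpy as np

def inf_norm(v):
    # ||v||_inf = max |v_j|
    return np.max(np.abs(v))

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])

lhs = inf_norm(x + y)**2 + inf_norm(x - y)**2   # 1 + 1 = 2
rhs = 2 * inf_norm(x)**2 + 2 * inf_norm(y)**2   # 2 + 2 = 4
```

The failure of the identity is exactly the reason ||·||_∞ cannot come from any inner product.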
A norm ||·|| is said to be strictly convex if whenever ||x|| = ||y||, x ≠ y, it follows
$$\left\|\frac{x+y}{2}\right\| < \|x\| = \|y\|.$$
Show the norm |·| which comes from an inner product is strictly convex.
A norm ||·|| is said to be uniformly convex if whenever ||x_n||, ||y_n|| are equal to 1 for all n ∈ ℕ and lim_{n→∞} ||x_n + y_n|| = 2, it follows that lim_{n→∞} ||x_n − y_n|| = 0. Show the norm |·| coming from an inner product is always uniformly convex. Also show that uniform convexity implies strict convexity, which is defined in Problem 6.
Suppose A : ℂⁿ → ℂⁿ is a one to one and onto matrix. Define ||x|| ≡ |Ax|. Show this is a norm.
If X is a finite dimensional normed vector space and A, B ∈ ℒ(X,X) are such that ||B|| < ||A|| and A⁻¹ exists, can it be concluded that ||A⁻¹B|| < 1? Either give a counter example or a proof.
Let X be a vector space with a norm ||·|| and let V = span(v₁,···,v_m) be a finite dimensional subspace of X such that {v₁,···,v_m} is a basis for V. Show V is a closed subspace of X. This means that if w_n → w and each w_n ∈ V, then w ∈ V also. Next show that if w ∉ V, then
$$\operatorname{dist}(w,V) \equiv \inf\{\|w - v\| : v \in V\} > 0,$$
that w → dist(w,V) is a continuous function, and that
$$|\operatorname{dist}(w,V) - \operatorname{dist}(w_1,V)| \le \|w_1 - w\|.$$
Next show that if w ∉ V, there exists z such that ||z|| = 1 and dist(z,V) > 1/2. For those who know some advanced calculus, show that if X is an infinite dimensional vector space having norm ||·||, then the closed unit ball in X cannot be compact. Thus closed and bounded is never compact in an infinite dimensional normed vector space.
Suppose ρ(A) < 1 for A ∈ ℒ(V,V) where V is a p dimensional vector space having a norm ||·||. You can use ℝᵖ or ℂᵖ if you like. Show there exists a new norm |||·||| such that, with respect to this new norm, |||A||| < 1, where |||A||| denotes the operator norm of A taken with respect to this new norm on V:
$$|||A||| \equiv \sup\{|||Ax||| : |||x||| \le 1\}$$
Hint: You know from Gelfand’s theorem that ||Aⁿ||^{1/n} < r < 1 provided n is large enough, this operator norm taken with respect to ||·||. Show there exists 0 < λ < 1 such that
$$\rho\left(\frac{A}{\lambda}\right) < 1.$$
You can do this by arguing the eigenvalues of A/λ are the scalars μ/λ where μ ∈ σ(A). Now letting ℤ₊ denote the nonnegative integers, define
$$|||x||| \equiv \sup_{n \in \mathbb{Z}_+} \left\|\frac{A^n x}{\lambda^n}\right\|.$$
First show this is actually a norm. Next explain why |||A||| ≤ λ < 1.
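The construction in the hint can be tried numerically. Below is a sketch (the matrix and the choice ρ(A) < λ < 1 are arbitrary; the sup is truncated at a finite N, which is harmless here because ||Aⁿx||/λⁿ → 0):

```python
import numpy as np

A = np.array([[0.5, 1.0],
              [0.0, 0.6]])         # rho(A) = 0.6 < 1
lam = 0.8                          # any lambda with rho(A) < lambda < 1

def triple_norm(x, N=200):
    # |||x||| = sup_{n >= 0} ||A^n x|| / lam^n, truncated at n = N
    best, v = 0.0, np.array(x, dtype=float)
    for n in range(N + 1):
        best = max(best, np.linalg.norm(v) / lam**n)
        v = A @ v
    return best

x = np.array([1.0, 1.0])
# with respect to |||.|||, A is a strict contraction: |||A x||| <= lam * |||x|||
```

The key point the code illustrates: shifting the index in the sup pulls out one factor of λ, which is exactly the inequality |||Ax||| ≤ λ|||x|||.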
Establish a similar result to Problem 11 without using Gelfand’s theorem. Use an argument which
depends directly on the Jordan form or a modification of it.
Using Problem 11 give an easier proof of Theorem 15.4.6 without having to use Corollary 15.4.5. It
would suffice to use a different norm of this problem and the contraction mapping principle of Lemma
15.4.4.
A matrix A is diagonally dominant if |a_{ii}| > ∑_{j≠i} |a_{ij}|. Show that the Gauss Seidel method converges if A is diagonally dominant.
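One way to watch this happen numerically: writing A = D + L + U with D diagonal, L strictly lower, and U strictly upper triangular, the Gauss Seidel iteration is x_{k+1} = (D+L)⁻¹(b − Ux_k), and it converges exactly when the spectral radius of −(D+L)⁻¹U is below 1. A sketch for one diagonally dominant example (the matrix is an arbitrary choice):

```python
import numpy as np

A = np.array([[5., 1., 1.],
              [1., 7., 2.],
              [0., 2., 4.]])       # strictly diagonally dominant

L = np.tril(A, -1)                 # strictly lower part
U = np.triu(A, 1)                  # strictly upper part
D = np.diag(np.diag(A))

M = -np.linalg.solve(D + L, U)     # Gauss Seidel iteration matrix
rho = max(abs(np.linalg.eigvals(M)))
# rho < 1, so the iteration converges for every starting vector
```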
Suppose f(λ) = ∑_{n=0}^∞ a_n λⁿ converges if |λ| < R. Show that if ρ(A) < R, where A is an n × n matrix, then
$$f(A) \equiv \sum_{n=0}^{\infty} a_n A^n$$
converges in ℒ(Fⁿ, Fⁿ). Hint: Use Gelfand’s theorem and the root test.
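A concrete instance to experiment with: f(λ) = 1/(1−λ) = ∑_{n=0}^∞ λⁿ has R = 1, so for ρ(A) < 1 the series ∑ Aⁿ should converge to (I − A)⁻¹. A sketch (the matrix and truncation length are arbitrary choices):

```python
import numpy as np

A = np.array([[0.2, 0.5],
              [0.1, 0.3]])                  # rho(A) < 1 = R

S = np.zeros_like(A)
P = np.eye(2)                               # running power A^n, starting at A^0
for n in range(200):                        # partial sums of sum_n A^n
    S += P
    P = P @ A

# the partial sums converge to f(A) = (I - A)^{-1}
```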
Referring to Corollary 15.3.5, for λ = a + ib show
$$\exp(\lambda t) = e^{at}(\cos(bt) + i\sin(bt)).$$
Hint: Let y(t) = exp(λt) and let z(t) = e^{−at}y(t). Show
$$z'' + b^2 z = 0,\quad z(0) = 1,\quad z'(0) = ib.$$
Now letting z = u + iv where u, v are real valued, show
$$u'' + b^2 u = 0,\quad u(0) = 1,\quad u'(0) = 0$$
$$v'' + b^2 v = 0,\quad v(0) = 0,\quad v'(0) = b.$$
Next show u(t) = cos(bt) and v(t) = sin(bt) work in the above and that there is at most one solution to
$$w'' + b^2 w = 0,\quad w(0) = \alpha,\quad w'(0) = \beta.$$
Thus z(t) = cos(bt) + i sin(bt) and so y(t) = e^{at}(cos(bt) + i sin(bt)). To show there is at most one solution to the above problem, suppose you have two, w₁, w₂. Subtract them. Let f = w₁ − w₂.
Thus
$$f'' + b^2 f = 0$$
and f is real valued. Multiply both sides by f′ and conclude
$$\frac{d}{dt}\left(\frac{(f')^2}{2} + b^2\frac{f^2}{2}\right) = 0.$$
Thus the expression in parentheses is constant. Explain why this constant must equal 0.
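The computations asked for in the hint can be checked symbolically; a sketch using sympy (treating b as a real parameter):

```python
import sympy as sp

t, b = sp.symbols('t b', real=True)
u = sp.cos(b * t)
v = sp.sin(b * t)

# u and v satisfy the stated initial value problems
res_u = sp.simplify(sp.diff(u, t, 2) + b**2 * u)   # u'' + b^2 u
res_v = sp.simplify(sp.diff(v, t, 2) + b**2 * v)   # v'' + b^2 v

# the "energy" (f')^2/2 + b^2 f^2/2 is constant along any solution;
# check it for f = u by differentiating
E = sp.diff(u, t)**2 / 2 + b**2 * u**2 / 2
dE = sp.simplify(sp.diff(E, t))
```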
Let A ∈ ℒ(ℝⁿ, ℝⁿ). Show the following power series converges in ℒ(ℝⁿ, ℝⁿ):
$$\Psi(t) \equiv \sum_{k=0}^{\infty} \frac{t^k A^k}{k!}$$
This was done in the chapter. Go over it and be sure you understand it. This is how you can define exp(tA). Next show that Ψ′(t) = AΨ(t), Ψ(0) = I. Next let Φ(t) = ∑_{k=0}^∞ t^k(−A)^k/k!. Show that Φ(t) and Ψ(t) each commute with A. Next show that Φ(t)Ψ(t) = I for all t. Finally, solve the initial value problem
$$x' = Ax + f,\quad x(0) = x_0$$
in terms of Φ and Ψ. This yields most of the substance of a typical differential equations course.
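With Ψ(t) = exp(tA) and Φ(t) = exp(−tA), the variation of constants formula reads x(t) = Ψ(t)x₀ + Ψ(t)∫₀ᵗ Φ(s)f(s) ds. A numerical sketch for a constant forcing term, in which case the integral has the closed form A⁻¹(I − Φ(t))f (the matrix, x₀, and f are arbitrary choices, and A is assumed invertible):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0., 1.],
              [-2., -3.]])
x0 = np.array([1., 0.])
f = np.array([1., 1.])            # constant forcing, for simplicity

def x(t):
    # x(t) = Psi(t) x0 + Psi(t) * integral_0^t Phi(s) f ds; for constant f
    # the integral equals A^{-1} (I - Phi(t)) f, so everything is closed form
    Psi = expm(t * A)
    integral = np.linalg.solve(A, (np.eye(2) - expm(-t * A)) @ f)
    return Psi @ (x0 + integral)

# check that x' = Ax + f at t = 1 via a centered difference
h = 1e-6
residual = np.linalg.norm((x(1 + h) - x(1 - h)) / (2 * h) - (A @ x(1) + f))
```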
This is like Lemma 15.3.9. Letting J be the Jordan
canonical form for A, explain why
$$\Psi(t) \equiv \sum_{k=0}^{\infty} \frac{t^k A^k}{k!} = S\left(\sum_{k=0}^{\infty} \frac{t^k J^k}{k!}\right) S^{-1}$$
and you note that in J^k, the diagonal entries are of the form λ^k for λ an eigenvalue of A. Also J = D + N where N is nilpotent and commutes with D. Argue then that
$$\sum_{k=0}^{\infty} \frac{t^k J^k}{k!}$$
is an upper triangular matrix which has on the diagonal the expressions e^{λt} where λ ∈ σ(A). Thus conclude
$$\sigma(\Psi(t)) \subseteq \exp(t\sigma(A)).$$
Next take e^{tλ} ∈ exp(tσ(A)) and argue it must be in σ(Ψ(t)). You can do this as follows:
$$\Psi(t) - e^{t\lambda}I = \sum_{k=0}^{\infty} \frac{t^k A^k}{k!} - \sum_{k=0}^{\infty} \frac{t^k \lambda^k}{k!} I = \sum_{k=0}^{\infty} \frac{t^k}{k!}\left(A^k - \lambda^k I\right)$$
$$= \left(\sum_{k=0}^{\infty} \frac{t^k}{k!} \sum_{j=1}^{k} \lambda^{j-1} A^{k-j}\right)(A - \lambda I)$$
Now you need to argue
$$\sum_{k=0}^{\infty} \frac{t^k}{k!} \sum_{j=1}^{k} \lambda^{j-1} A^{k-j}$$
converges to something in ℒ(ℝⁿ, ℝⁿ). To do this, use the ratio test and Lemma 15.3.2 after first using the triangle inequality. Since λ ∈ σ(A), Ψ(t) − e^{tλ}I is not one to one and so this establishes the other inclusion. You fill in the details. This theorem is a special case of theorems which go by the name “spectral mapping theorem” which was discussed in the text. However, go through it yourself.
Suppose Ψ(t) ∈ ℒ(V,W) where V, W are finite dimensional inner product spaces and t → Ψ(t) is continuous for t ∈ [a,b]: for every ε > 0 there exists δ > 0 such that if |s − t| < δ then ||Ψ(t) − Ψ(s)|| < ε. Show t → (Ψ(t)v, w) is continuous, where (·,·) is the inner product in W. Also define what it means for t → Ψ(t)v to be continuous and show this is continuous. Do it all for differentiable in place of continuous. Next show t → ||Ψ(t)|| is continuous.
If z(t) ∈ W, a finite dimensional inner product space, what does it mean for t → z(t) to be continuous or differentiable? If z is continuous, define
$$\int_a^b z(t)\,dt \in W$$
as follows:
$$\left(w, \int_a^b z(t)\,dt\right) \equiv \int_a^b \left(w, z(t)\right)dt.$$
Show that this definition is well defined and furthermore that the triangle inequality,
$$\left|\int_a^b z(t)\,dt\right| \le \int_a^b |z(t)|\,dt,$$
and the fundamental theorem of calculus,
$$\frac{d}{dt}\left(\int_a^t z(s)\,ds\right) = z(t),$$
hold along with any other interesting properties of integrals which are true.
For V, W two inner product spaces, define
$$\int_a^b \Psi(t)\,dt \in \mathcal{L}(V,W)$$
as follows:
$$\left(w, \int_a^b \Psi(t)\,dt\,(v)\right) \equiv \int_a^b \left(w, \Psi(t)v\right)dt.$$
Show this is well defined and does indeed give ∫_a^b Ψ(t) dt ∈ ℒ(V,W). Also show the triangle inequality
$$\left\|\int_a^b \Psi(t)\,dt\right\| \le \int_a^b \|\Psi(t)\|\,dt,$$
where ||·|| is the operator norm, and verify the fundamental theorem of calculus holds:
$$\left(\int_a^t \Psi(s)\,ds\right)' = \Psi(t).$$
Also verify the usual properties of integrals continue to hold, such as the fact the integral is linear and
$$\int_a^b \Psi(t)\,dt + \int_b^c \Psi(t)\,dt = \int_a^c \Psi(t)\,dt,$$
and similar things. Hint: On showing the triangle inequality, it will help if you use the fact that
$$|w|_W = \sup_{|v| \le 1} |(w,v)|.$$
You should show this also.
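Since V and W here are finite dimensional, ∫_a^b Ψ(t) dt is just the entrywise integral of the matrix of Ψ(t), and the defining identity can be verified numerically. A sketch for one choice of Ψ (the entries are made-up smooth functions):

```python
import numpy as np
from scipy.integrate import quad

def Psi(t):
    # an arbitrary smooth operator-valued function of t (2x2 for concreteness)
    return np.array([[np.cos(t), t],
                     [np.exp(-t), t**2]])

a, b = 0.0, 1.0

# the integral of Psi over [a, b], taken entry by entry
I = np.array([[quad(lambda s: Psi(s)[i, j], a, b)[0] for j in range(2)]
              for i in range(2)])

v = np.array([1.0, -2.0])
w = np.array([0.5, 3.0])

lhs = w @ (I @ v)                                  # (w, (int Psi dt)(v))
rhs = quad(lambda s: w @ (Psi(s) @ v), a, b)[0]    # int (w, Psi(s) v) ds
```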
Prove Gronwall’s inequality: Suppose u(t) ≥ 0 and for all t ∈ [0,T],
$$u(t) \le u_0 + \int_0^t K u(s)\,ds$$
where K is some nonnegative constant. Then
$$u(t) \le u_0 e^{Kt}.$$
Hint: Let w(t) = ∫_0^t u(s) ds. Then using the fundamental theorem of calculus, w(t) satisfies the following:
$$u(t) - Kw(t) = w'(t) - Kw(t) \le u_0,\quad w(0) = 0.$$
Now use the usual techniques you saw in an introductory differential equations class. Multiply both
sides of the above inequality by e−Kt and note the resulting left side is now a total derivative.
Integrate both sides from 0 to t and see what you have got.
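A quick numerical illustration: in the borderline case u(t) = u₀e^{Kt}, the hypothesis holds with equality and the conclusion is attained, which shows the bound is sharp. A sketch (u₀, K, T, and the grid are arbitrary choices; the integral is a cumulative trapezoid rule):

```python
import numpy as np

u0, K, T = 2.0, 0.7, 3.0
t = np.linspace(0.0, T, 4001)
u = u0 * np.exp(K * t)            # extremal case: equality in Gronwall

# hypothesis: u(t) = u0 + int_0^t K u(s) ds (cumulative trapezoid rule)
dt = t[1] - t[0]
integral = np.concatenate(([0.0], np.cumsum((u[1:] + u[:-1]) / 2.0) * dt))
hypothesis_residual = np.max(np.abs(u - (u0 + K * integral)))

# conclusion: u(t) <= u0 * exp(K t), here attained with equality
```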
With Gronwall’s inequality and the integral defined in Problem 21 with its properties listed there, prove there is at most one solution to the initial value problem
$$y' = Ay,\quad y(0) = y_0.$$
Hint: If there are two solutions, subtract them and call the result z. Then z′ = Az, z(0) = 0, so |z(t)| ≤ ∫_0^t ||A|| |z(s)| ds, and Gronwall’s inequality gives z = 0.
Suppose A is a matrix which has the property that whenever μ ∈ σ(A), Re μ < 0. Consider the initial value problem
$$y' = Ay,\quad y(0) = y_0.$$
The existence and uniqueness of a solution to this equation have been established above in the preceding problems, Problems 17 to 23. Show that in this case, where the real parts of the eigenvalues are all negative, the solution to the initial value problem satisfies
$$\lim_{t\to\infty} y(t) = 0.$$
Hint: A nice way to approach this problem is to show you can reduce it to the consideration of the initial value problem
$$z' = J_\varepsilon z,\quad z(0) = z_0$$
where J_ε is the modified Jordan canonical form in which, instead of ones down the super diagonal, there are ε down the super diagonal (Problem 14). Then
$$z' = Dz + N_\varepsilon z$$
where D is the diagonal matrix obtained from the eigenvalues of A and N_ε is a nilpotent matrix commuting with D which is very small provided ε is chosen very small. Now let Ψ(t) be the solution of
$$\Psi' = -D\Psi,\quad \Psi(0) = I$$
described earlier as
$$\sum_{k=0}^{\infty} \frac{(-1)^k t^k D^k}{k!}.$$
Thus Ψ(t) commutes with D and N_ε. Tell why. Next argue
$$(\Psi(t)z)' = \Psi(t) N_\varepsilon z(t)$$
and integrate from 0 to t. Then
$$\Psi(t)z(t) - z_0 = \int_0^t \Psi(s) N_\varepsilon z(s)\,ds.$$
It follows that
$$\|\Psi(t)z(t)\| \le \|z_0\| + \int_0^t \|N_\varepsilon\| \|\Psi(s)z(s)\|\,ds.$$
It follows from Gronwall’s inequality that
$$\|\Psi(t)z(t)\| \le \|z_0\| e^{\|N_\varepsilon\| t}.$$
Now look closely at the form of Ψ(t) to get an estimate which is interesting. Explain why
$$\Psi(t) = \begin{pmatrix} e^{-\mu_1 t} & & 0 \\ & \ddots & \\ 0 & & e^{-\mu_n t} \end{pmatrix}$$
and now observe that if ε is chosen small enough, ||N_ε|| is so small that each component of z(t) converges to 0.
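The conclusion can be watched numerically: when every eigenvalue of A has negative real part, ||exp(tA)y₀|| → 0. A sketch (the matrix, which has eigenvalues −1 ± 2i, and the initial condition are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1., 2.],
              [-2., -1.]])          # eigenvalues -1 +/- 2i, real parts < 0
y0 = np.array([3., -4.])

# y(t) = exp(tA) y0 solves y' = Ay, y(0) = y0; its norm decays to 0
norms = [np.linalg.norm(expm(t * A) @ y0) for t in (0.0, 1.0, 5.0, 10.0)]
```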
Using Problem 24 show that if A is a matrix having the real parts of all eigenvalues less than 0 then
if