$$AP_{k}=A\,\frac{1}{2\pi i}\int_{\Gamma_{k}}\left(\lambda I-A\right)^{-1}d\lambda=\frac{1}{2\pi i}\int_{\Gamma_{k}}A\left(\lambda I-A\right)^{-1}d\lambda$$
$$=\frac{1}{2\pi i}\int_{\Gamma_{k}}\left(\lambda I-A\right)^{-1}A\,d\lambda=\frac{1}{2\pi i}\int_{\Gamma_{k}}\left(\lambda I-A\right)^{-1}d\lambda\,A=P_{k}A\qquad(18.9)$$
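Equation 18.9 says that A commutes with each spectral projection P_{k}. As a numerical sanity check (my own sketch, not part of the text; the matrix, the circular contour, and the trapezoid discretization are all assumptions), one can approximate the contour integral defining P_{k} for a small matrix and verify the commutation:

```python
import numpy as np

def riesz_projection(A, center, radius, n_points=400):
    """Approximate (1/2πi) ∮_Γk (λI - A)^{-1} dλ over a circle Γ_k
    by the trapezoid rule, which converges rapidly for this integrand."""
    n = A.shape[0]
    P = np.zeros((n, n), dtype=complex)
    for j in range(n_points):
        theta = 2 * np.pi * j / n_points
        lam = center + radius * np.exp(1j * theta)
        dlam = 1j * radius * np.exp(1j * theta) * (2 * np.pi / n_points)
        P += np.linalg.solve(lam * np.eye(n) - A, np.eye(n)) * dlam
    return P / (2j * np.pi)

# A has spectrum {1, 2, 5}; the circle of center 1.5, radius 1 encloses {1, 2}.
A = np.diag([1.0, 2.0, 5.0]) + 0.1 * np.triu(np.ones((3, 3)), 1)
P = riesz_projection(A, center=1.5, radius=1.0)

print(np.allclose(A @ P, P @ A))   # A commutes with P_k, as in (18.9)
print(np.allclose(P @ P, P))       # P_k is a projection
```

Here the trace of P is approximately 2, matching the two enclosed eigenvalues.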
With these introductory observations in hand, we can state the main result about invariant subspaces. First, some notation.
Definition 18.3.1 Let X be a vector space and let X_{k} be a subspace. Then X = ∑_{k=1}^{n}X_{k} means that every x ∈ X can be written in the form x = ∑_{k=1}^{n}x_{k}, x_{k}∈ X_{k}. We write
$$X=\bigoplus_{k=1}^{n}X_{k}$$
if whenever 0 = ∑_{k}x_{k}, it follows that each x_{k} = 0. In other words, we use the new notation when there is a unique way to write each vector in X as a sum of vectors in the X_{k}. When this uniqueness holds, the sum is called a direct sum. In case AX_{k}⊆ X_{k}, we say that X_{k} is A invariant and X_{k} is an invariant subspace.
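A concrete two-dimensional example (my own illustration, not from the text) may help distinguish a sum of subspaces from a direct sum:

```latex
\textbf{Example.} In $X=\mathbb{R}^{2}$, let
$X_{1}=\operatorname{span}\left(e_{1}\right)$,
$X_{2}=\operatorname{span}\left(e_{2}\right)$, and
$X_{3}=\operatorname{span}\left(e_{1}+e_{2}\right)$. Then
$X=X_{1}+X_{2}+X_{3}$, but the sum is not direct because
\[
0=e_{1}+e_{2}-\left(e_{1}+e_{2}\right)
\]
writes $0$ with nonzero summands. Dropping $X_{3}$ gives
$X=X_{1}\oplus X_{2}$, a direct sum.
```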
Theorem 18.3.2 Let σ(A) = ∪_{k=1}^{n}K_{k} where K_{j}∩ K_{i} = ∅ for i≠j, each K_{j} being compact. There exist P_{k}∈ℒ(X,X) for each k = 1,⋯,n such that

1. I = ∑_{k=1}^{n}P_{k}
2. P_{i}P_{j} = 0 if i≠j
3. P_{i}^{2} = P_{i} for each i
4. X = ⊕_{k=1}^{n}X_{k} where X_{k} = P_{k}X and each X_{k} is a Banach space.
5. AX_{k}⊆ X_{k}, which says that X_{k} is A invariant.
6. P_{k}x = x if x ∈ X_{k}. If x ∈ X_{j}, then P_{k}x = 0 if k≠j.
Proof: Consider 1.
$$\sum_{k=1}^{n}P_{k}=\frac{1}{2\pi i}\int_{\Gamma}\sum_{k=1}^{n}f_{k}\left(\lambda\right)\left(\lambda I-A\right)^{-1}d\lambda=\frac{1}{2\pi i}\int_{\Gamma}1\cdot\left(\lambda I-A\right)^{-1}d\lambda=I$$
from 18.6. Consider 2. Let λ be on Γ_{i} and μ be on Γ_{j}. Then from Theorem 18.2.4, P_{i}P_{j} = f_{i}(A)f_{j}(A) = (f_{i}f_{j})(A) = 0, since f_{i} and f_{j} are never nonzero at the same point. Part 3. is similar because f_{k}^{2} = f_{k}. As for 4., part 1. gives x = ∑_{k}P_{k}x for every x ∈ X, so X = ∑_{k=1}^{n}X_{k}
where X_{k}≡ P_{k}X. However, this is a direct sum because if 0 = ∑_{k}P_{k}x_{k}, then doing P_{j} to both sides and
using part 2.,
$$0=P_{j}^{2}x_{j}=P_{j}x_{j}$$
and so the summands are all 0 since j is arbitrary. As to X_{k} being a Banach space, suppose P_{k}x_{n}→ y. Is
y ∈ X_{k}? P_{k}x_{n} = P_{k}(P_{k}x_{n}) and so, by continuity of P_{k}, this converges to P_{k}y ∈ X_{k}. Thus P_{k}x_{n}→ y and
P_{k}x_{n}→ P_{k}y so y = P_{k}y and y ∈ X_{k}. Thus X_{k} is a closed subspace of a Banach space and must therefore
be a Banach space itself.
5. follows from 18.9. If P_{k}x ∈ X_{k}, then AP_{k}x = P_{k}Ax ∈ X_{k}. Hence A : X_{k}→ X_{k}.
Finally, suppose x ∈ X_{k}. Then x = P_{k}y. Then P_{k}x = P_{k}^{2}y = P_{k}y = x so for x ∈ X_{k},P_{k}x = x and P_{k}
restricted to X_{k} is just the identity. If P_{j}x is a vector of X_{j} and k≠j, then P_{k}P_{j}x = 0 by part 2. This shows 6.
■
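The conclusions of this theorem can be sanity-checked in finite dimensions. The following Python sketch (my own illustration; the matrix, the clusters, and the construction of P_{k} from an eigendecomposition, which agrees with the contour-integral definition for a diagonalizable matrix, are assumptions, not from the text) verifies conclusions 1., 2., 3., and 5. for a diagonalizable 4×4 matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.normal(size=(4, 4))              # generic, hence (almost surely) invertible
D = np.diag([1.0, 1.0, 3.0, 7.0])        # sigma(A) = {1} ∪ {3} ∪ {7}
A = V @ D @ np.linalg.inv(V)
Vinv = np.linalg.inv(V)

# Spectral projections P_k built from the eigendecomposition: for each
# cluster K_k, keep only the eigencolumns whose eigenvalues lie in K_k.
clusters = [[0, 1], [2], [3]]            # K_1 = {1}, K_2 = {3}, K_3 = {7}
P = [V[:, idx] @ Vinv[idx, :] for idx in clusters]

print(np.allclose(sum(P), np.eye(4)))                        # 1. I = sum_k P_k
print(np.allclose(P[0] @ P[1], 0))                           # 2. P_i P_j = 0, i != j
print(all(np.allclose(Pk @ Pk, Pk) for Pk in P))             # 3. P_k^2 = P_k
print(all(np.allclose(Pk @ (A @ Pk), A @ Pk) for Pk in P))   # 5. A X_k ⊆ X_k
```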
From the spectral mapping theorem, Theorem 18.2.6, σ(P_{k}) = σ(f_{k}(A)) = {0,1} because f_{k}(λ) has only these two values. Then the following is also obtained.
Theorem 18.3.3 Let n > 1 and
$$\sigma\left(A\right)=\cup_{k=1}^{n}K_{k}$$
where the K_{k} are compact and disjoint. Let P_{k} be the projection map defined above and X_{k}≡ P_{k}X. Then define A_{k}≡ AP_{k}. The following hold:

1. A_{k} : X_{k}→ X_{k}, A_{k}x = Ax for all x ∈ X_{k}, so A_{k} is just the restriction of A to X_{k}.
2. σ(A_{k}) = {0}∪ K_{k}.
3. A = ∑_{k=1}^{n}A_{k}.
4. If we regard A_{k} as a mapping A_{k} : X_{k}→ X_{k}, then σ(A_{k}) = K_{k}.
Proof: Letting f_{k}(λ) be the function in the above theorem and g(λ) = λ,
$$A_{k}=\frac{1}{2\pi i}\int_{\Gamma}f_{k}\left(\lambda\right)g\left(\lambda\right)\left(\lambda I-A\right)^{-1}d\lambda$$
and so, by the spectral mapping theorem,
$$\sigma\left(A_{k}\right)=\sigma\left(f_{k}\left(A\right)g\left(A\right)\right)=\left(f_{k}g\right)\left(\sigma\left(A\right)\right)=\{0\}\cup K_{k}$$
because the possible values of f_{k}(λ)g(λ) for λ ∈ σ(A) are 0 (attained since n > 1) and the points of K_{k}. This shows 2. Part 1. is obvious from Theorem 18.3.2. So is Part 3. Consider the last claim about A_{k}.
If μ ∉ K_{k}, then in all of the above, U_{k} could have excluded μ. Assume this is the case. Thus λ → 1/(μ − λ) is analytic on U_{k}. Therefore, using Theorem 18.2.4 applied to the Banach space X_{k},
$$\left(\mu I-A_{k}\right)^{-1}=\frac{1}{2\pi i}\int_{\Gamma_{k}}\frac{1}{\mu-\lambda}\left(\lambda I-A\right)^{-1}d\lambda$$
and so μ ∉ σ(A_{k}). Therefore, K_{k}⊇ σ(A_{k}). If μI − A fails to be onto, then this must be the case for some A_{k}. Here is why. If y ∉ (μI − A)(X), then there is no x such that (μI − A)x = y. However, y = ∑_{k}P_{k}y. If each μI − A_{k} were onto, then there would be x_{k}∈ X_{k} such that (μI − A_{k})x_{k} = P_{k}y. Recall that P_{k}x_{k} = x_{k}. Therefore,
$$\left(\mu P_{k}-AP_{k}\right)x_{k}=P_{k}y,\qquad\left(\mu I-A\right)\left(P_{k}x_{k}\right)=P_{k}y.$$
Summing this on k,
$$\left(\mu I-A\right)\sum_{k}x_{k}=\sum_{k}P_{k}y=y$$
and it would be the case that (μI − A) is onto. Thus if (μI − A) is not onto, then μI − A_{k} : X_{k}→ X_{k} is
not onto for some k. If μI − A fails to be one to one, then there exists x≠0 such that
(μI − A)x = 0.
However, x = ∑_{k}x_{k} where x_{k}∈ X_{k}. Then, since A_{k} is just the restriction of A to X_{k} and P_{k} is the
restriction of I to X_{k},
$$\sum_{k}\left(\mu P_{k}-A_{k}\right)x_{k}=0.$$
Each summand (μP_{k}− A_{k})x_{k} lies in X_{k}. Now recall that this is a direct sum. Hence, for each k,
$$\left(\mu P_{k}-A_{k}\right)x_{k}=\left(\mu I-A_{k}\right)x_{k}=0$$
where I refers to the identity on X_{k}.
If each μI −A_{k} is one to one, then each x_{k} = 0 and so it follows that x = 0 also, it being the sum of the x_{k}.
It follows that σ(A) ⊆∪_{k}σ(A_{k}) and so
$$\sigma\left(A\right)\subseteq\cup_{k}\sigma\left(A_{k}\right)\subseteq\cup_{k}K_{k}=\sigma\left(A\right),\qquad\sigma\left(A_{k}\right)\subseteq K_{k},$$
and so you cannot have σ(A_{k}) ⊊ K_{k}, proper inclusion, for any k, since otherwise the above could not hold.
■
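In finite dimensions, conclusions 2., 3., and 4. of Theorem 18.3.3 can be observed directly. The sketch below (my own illustration; the matrix and the eigendecomposition-based projection are assumptions, not from the text) computes A_{k} = AP_{k} for one cluster and compares its spectrum on all of X with its spectrum restricted to X_{k}:

```python
import numpy as np

rng = np.random.default_rng(1)
V = rng.normal(size=(3, 3))              # generic, hence (almost surely) invertible
D = np.diag([2.0, 5.0, 9.0])             # sigma(A) = {2} ∪ {5} ∪ {9}
A = V @ D @ np.linalg.inv(V)
Vinv = np.linalg.inv(V)

k = 1                                    # examine the cluster K_k = {5}
P_k = V[:, [k]] @ Vinv[[k], :]           # spectral projection onto X_k
A_k = A @ P_k

# 3. A is the sum of the A_j.
A_js = [A @ (V[:, [j]] @ Vinv[[j], :]) for j in range(3)]
print(np.allclose(sum(A_js), A))

# 2. On all of X, sigma(A_k) = {0} ∪ K_k = {0, 5}; the 0 comes from the
#    complementary subspace, which A_k annihilates.
print(np.allclose(sorted(np.linalg.eigvals(A_k).real), [0.0, 0.0, 5.0]))

# 4. Restricted to X_k (here one-dimensional), A_k acts as multiplication
#    by 5, so its spectrum is exactly K_k.
restricted = np.linalg.lstsq(V[:, [k]], A_k @ V[:, [k]], rcond=None)[0]
print(np.allclose(restricted, 5.0))
```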
It might be interesting to compare this with the algebraic approach to the same problem in the appendix, Section A.11. That approach has the advantage of dealing with arbitrary fields of scalars, being based on polynomials, the division algorithm, and the minimal polynomial, whereas the present approach is limited to the field of complex numbers. However, the approach in this chapter, based on complex analysis, applies to arbitrary Banach spaces, whereas the algebraic methods apply only to finite dimensional spaces. Isn’t it interesting how two totally different points of view lead to essentially the same result about a direct sum of invariant subspaces?