There is a really nice service on the web which will do all of these things very easily. It is www.bluebit.gr/matrix-calculator/. To get to it, you can type in the address or google "matrix calculator".
When you go to this site, you enter a matrix row by row, placing a space between each
number. When you come to the end of a row, you press enter on the keyboard to start the next
row. After entering the matrix, you select what you want it to do. You will see that it also solves
systems of equations.
Appendix B Positive Matrices
Earlier theorems about Markov matrices were presented. These were matrices in which all the
entries were nonnegative and either the columns or the rows added to 1. It turns out that many of
the theorems presented can be generalized to positive matrices. When this is done, the resulting
theory is mainly due to Perron and Frobenius. I will give an introduction to this theory here
following Karlin and Taylor [19].
Definition B.0.1 For A a matrix or vector, the notation A >> 0 will mean every entry of A is positive. By A > 0 is meant that every entry is nonnegative and at least one is positive. By A ≥ 0 is meant that every entry is nonnegative. Thus the matrix or vector consisting only of zeros is ≥ 0. An expression like A >> B will mean A − B >> 0 with similar modifications for > and ≥.
For the sake of this section only, define the following for x = (x_{1},⋅⋅⋅,x_{n})^{T}, a vector:

|x| ≡ (|x_{1}|,⋅⋅⋅,|x_{n}|)^{T}.

Thus |x| is the vector which results from replacing each entry of x with its absolute value. Also define, for x ∈ ℂ^{n},

||x||_{1} ≡ ∑_{k}|x_{k}|.
Lemma B.0.2 Let A >> 0 and let x > 0. Then Ax >> 0.

Proof: (Ax)_{i} = ∑_{j}A_{ij}x_{j} > 0 because all the A_{ij} > 0 and at least one x_{j} > 0. ■
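The lemma is easy to check numerically. The following sketch, in plain Python with an arbitrarily chosen A >> 0 and x > 0 (both invented for illustration), confirms that every entry of Ax is positive:

```python
# Numerical check of Lemma B.0.2: A >> 0 and x > 0 imply Ax >> 0.
# The matrix A and vector x below are arbitrary illustrative choices.

def matvec(A, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[1.0, 2.0],
     [3.0, 4.0]]   # A >> 0: every entry is positive
x = [0.0, 5.0]     # x > 0: nonnegative with at least one positive entry

y = matvec(A, x)
print(y)                          # [10.0, 20.0]
print(all(v > 0 for v in y))      # True, i.e. Ax >> 0
```

Note that x here has a zero entry, so x > 0 but not x >> 0; the strict positivity of A alone forces Ax >> 0.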
Lemma B.0.3 Let A >> 0. Define

S ≡ {λ : Ax > λx for some x >> 0},

and let

K ≡ {x ≥ 0 such that ||x||_{1} = 1}.

Now define

S_{1} ≡ {λ : Ax ≥ λx for some x ∈ K}.

Then sup(S) = sup(S_{1}).
Proof: Let λ ∈ S. Then there exists x >> 0 such that Ax > λx. Consider y ≡ x∕||x||_{1}. Then ||y||_{1} = 1 and Ay > λy. Therefore, λ ∈ S_{1} and so S ⊆ S_{1}. Therefore, sup(S) ≤ sup(S_{1}).
Now let λ ∈ S_{1}. Then there exists x ≥ 0 such that ||x||_{1} = 1, so x > 0 and Ax ≥ λx. Letting y ≡ Ax, it follows from Lemma B.0.2 that Ay >> λy and y >> 0. Thus λ ∈ S and so S_{1} ⊆ S, which shows that sup(S_{1}) ≤ sup(S). ■
This lemma is significant because the set K ≡ {x ≥ 0 such that ||x||_{1} = 1} is a compact set in ℝ^{n}. Define

λ_{0} ≡ sup(S) = sup(S_{1}). (2.1)
The following theorem is due to Perron.
Theorem B.0.4 Let A >> 0 be an n × n matrix and let λ_{0} be given in 2.1. Then λ_{0} > 0 and there exists x_{0} >> 0 such that Ax_{0} = λ_{0}x_{0}, so λ_{0} is an eigenvalue for A. If Ax = μx where x ≠ 0 and μ ≠ λ_{0}, then |μ| < λ_{0}. Finally, the eigenspace for λ_{0} has dimension 1.
Proof: To see λ_{0} > 0, consider the vector e ≡ (1,⋅⋅⋅,1)^{T}. Then

(Ae)_{i} = ∑_{j}A_{ij} > 0

and so λ_{0} is at least as large as min_{i}∑_{j}A_{ij}.
Let {λ_{k}} be an increasing sequence of numbers from S_{1} converging to λ_{0}. Letting x_{k} be the vector from K which occurs in the definition of S_{1}, these vectors are in a compact set. Therefore, there exists a subsequence, still denoted by x_{k}, such that x_{k} → x_{0} ∈ K and λ_{k} → λ_{0}. Then passing to the limit,

Ax_{0} ≥ λ_{0}x_{0}, x_{0} > 0.
If Ax_{0} > λ_{0}x_{0}, then letting y ≡ Ax_{0}, it follows from Lemma B.0.2 that Ay >> λ_{0}y and y >> 0. But this contradicts the definition of λ_{0} as the supremum of the elements of S because, since Ay >> λ_{0}y, it follows Ay >> (λ_{0} + ε)y for ε a small positive number. Therefore, Ax_{0} = λ_{0}x_{0}. It remains to verify that x_{0} >> 0. But this follows immediately from Lemma B.0.2: since x_{0} > 0 and A >> 0, Ax_{0} >> 0, and so x_{0} = λ_{0}^{−1}Ax_{0} >> 0. Thus x_{0} >> 0 and Ax_{0} = λ_{0}x_{0}.
Suppose now that Ax = μx for x ≠ 0 and μ ≠ λ_{0}, and suppose |μ| ≥ λ_{0}. Multiplying both sides by A, it follows A^{m}x = μ^{m}x and |μ^{m}| = |μ|^{m} ≥ λ_{0}^{m} = μ_{0}, where μ_{0} is the largest eigenvalue of A^{m} >> 0 with eigenvector y_{0} >> 0. Since |μ^{m}| ≥ μ_{0} and μ^{m} is an eigenvalue of A^{m}, it follows that μ^{m} = μ_{0}. But this implies x = cy_{0} for some scalar c and hence Ay_{0} = μy_{0}. Since y_{0} >> 0, it follows μ ≥ 0 and so μ = λ_{0}, a contradiction. Therefore, |μ| < λ_{0}.
Finally, if Ax = λ_{0}x, then A^{m}x = λ_{0}^{m}x and so x = cy_{0} for some scalar c. Consequently,

(A^{m−1} + λ_{0}A^{m−2} + ⋅⋅⋅ + λ_{0}^{m−1}I)x = c(A^{m−1} + λ_{0}A^{m−2} + ⋅⋅⋅ + λ_{0}^{m−1}I)y_{0} = cx_{0}.

Hence

mλ_{0}^{m−1}x = cx_{0},

which shows the dimension of the eigenspace for λ_{0} is one. ■
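The theorem can be observed numerically with power iteration, a standard method (not part of the theorem itself) for approximating the dominant eigenpair. The sketch below uses an arbitrarily chosen 2 × 2 matrix A >> 0 with eigenvalues 3 and 1, and recovers λ_{0} = 3 > 0 together with an eigenvector having all entries positive:

```python
# Power iteration sketch: approximate the Perron pair (lambda_0, x_0)
# of a positive matrix by repeatedly applying A and renormalizing.

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def power_iteration(A, steps=100):
    x = [1.0] * len(A)                  # start from a strictly positive vector
    for _ in range(steps):
        y = matvec(A, x)
        s = sum(abs(v) for v in y)      # ||y||_1
        x = [v / s for v in y]          # keep the iterate in K
    lam = sum(matvec(A, x)) / sum(x)    # estimate of lambda_0
    return lam, x

A = [[2.0, 1.0],
     [1.0, 2.0]]                        # A >> 0, eigenvalues 3 and 1
lam0, x0 = power_iteration(A)
print(round(lam0, 6))                   # 3.0, so lambda_0 > 0
print(all(v > 0 for v in x0))           # True, so x0 >> 0
```

Normalizing each iterate in the 1-norm keeps the iterates inside the compact set K used in the proof.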
The following corollary is an extremely interesting convergence result involving the powers of
positive matrices.
Corollary B.0.6 Let A > 0 and A^{m} >> 0 for some m ∈ ℕ. Then for λ_{0} given in 2.1, there exists a rank one matrix P such that

lim_{m→∞} ||(A∕λ_{0})^{m} − P|| = 0.
Proof: Considering A^{T}, and the fact that A and A^{T} have the same eigenvalues, Corollary B.0.5 implies the existence of a vector v >> 0 such that

A^{T}v = λ_{0}v.

Also let x_{0} denote the vector such that Ax_{0} = λ_{0}x_{0} with x_{0} >> 0. First note that x_{0}^{T}v > 0 because both these vectors have all entries positive. Therefore, v may be scaled such that

v^{T}x_{0} = x_{0}^{T}v = 1. (2.2)

Define P ≡ x_{0}v^{T}. Then P^{2} = P and P(A∕λ_{0}) = (A∕λ_{0})P = P, from which it follows by induction that

(A∕λ_{0} − P)^{m} = (A∕λ_{0})^{m} − P. (2.5)

Now suppose

(A∕λ_{0} − P)x = μx (2.6)

for some x ≠ 0 and μ ≠ 0. Since v^{T}(A∕λ_{0}) = v^{T} and v^{T}P = (v^{T}x_{0})v^{T} = v^{T}, it follows that μv^{T}x = v^{T}(A∕λ_{0} − P)x = 0 and so v^{T}x = 0. Hence Px = x_{0}v^{T}x = 0 and so Ax = λ_{0}μx, which implies λ_{0}μ is an eigenvalue of A. Therefore, by Corollary B.0.5 it follows that either λ_{0}μ = λ_{0}, in which case μ = 1, or λ_{0}|μ| < λ_{0}, which implies |μ| < 1. But if μ = 1, then x is a multiple of x_{0} and 2.6 would yield

(A∕λ_{0} − P)x_{0} = x_{0}

which says x_{0} − x_{0}v^{T}x_{0} = x_{0} and so by 2.2, x_{0} = 0, contrary to the property that x_{0} >> 0.
Therefore, |μ| < 1 and so this has shown that the absolute values of all eigenvalues of A∕λ_{0} − P are less than 1. By Gelfand's theorem, Theorem 13.3.3, it follows

||(A∕λ_{0} − P)^{m}||^{1∕m} < r < 1

whenever m is large enough. Now by 2.5 this yields

||(A∕λ_{0})^{m} − P|| = ||(A∕λ_{0} − P)^{m}|| ≤ r^{m}

whenever m is large enough. It follows

lim_{m→∞} ||(A∕λ_{0})^{m} − P|| = 0

as claimed. ■
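The convergence in Corollary B.0.6 is easy to see on a small example. In the sketch below (an arbitrary choice of matrix), A has λ_{0} = 3 with x_{0} = (1∕2, 1∕2)^{T} and v = (1, 1)^{T}, scaled so v^{T}x_{0} = 1, giving P = x_{0}v^{T}; the powers (A∕λ_{0})^{m} approach P:

```python
# Sketch of Corollary B.0.6: (A / lambda_0)^m converges to the rank one
# matrix P = x0 v^T, with A x0 = lambda_0 x0, A^T v = lambda_0 v, v^T x0 = 1.
# The matrix A is an arbitrary symmetric example with eigenvalues 3 and 1.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[2.0, 1.0],
     [1.0, 2.0]]
lam0 = 3.0
P = [[0.5, 0.5],
     [0.5, 0.5]]                 # x0 v^T with x0 = (1/2, 1/2)^T, v = (1, 1)^T

M = [[a / lam0 for a in row] for row in A]   # A / lambda_0
power = [[1.0, 0.0], [0.0, 1.0]]             # start from the identity
for _ in range(50):
    power = matmul(power, M)                 # now power = (A / lambda_0)^50

err = max(abs(power[i][j] - P[i][j]) for i in range(2) for j in range(2))
print(err < 1e-12)               # True: the powers are essentially P
```

For this example the error after m steps is (1∕3)^{m} times a fixed rank one matrix, matching the geometric rate r^{m} in the proof.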
What about the case when A > 0 but maybe it is not the case that A >> 0? As before,

K ≡ {x ≥ 0 such that ||x||_{1} = 1}.

Now define

S_{1} ≡ {λ : Ax ≥ λx for some x ∈ K}

and

λ_{0} ≡ sup(S_{1}). (2.7)
Theorem B.0.7 Let A > 0 and let λ_{0} be defined in 2.7. Then there exists x_{0} > 0 such that Ax_{0} = λ_{0}x_{0}.
Proof: Let E be the matrix which has a one in every entry. Then from Theorem B.0.4 it follows there exists x_{δ} >> 0, ||x_{δ}||_{1} = 1, such that (A + δE)x_{δ} = λ_{0δ}x_{δ} where

λ_{0δ} ≡ sup{λ : (A + δE)x ≥ λx for some x ∈ K}.
Now if α < δ,

{λ : (A + αE)x ≥ λx for some x ∈ K} ⊆ {λ : (A + δE)x ≥ λx for some x ∈ K}

and so λ_{0δ} ≥ λ_{0α} because λ_{0δ} is the sup of the second set and λ_{0α} is the sup of the first. It follows the limit λ_{1} ≡ lim_{δ→0+}λ_{0δ} exists. Using the compactness of K, there exists a subsequence, still denoted by δ, such that as δ → 0, x_{δ} → x ∈ K. Therefore,

Ax = λ_{1}x
and so, in particular, Ax ≥ λ_{1}x, which shows λ_{1} ≤ λ_{0}. But also, if λ ∈ S_{1} with corresponding x ∈ K, then

λx ≤ Ax < (A + δE)x,

showing that λ_{0δ} ≥ λ for all such λ. But then λ_{0δ} ≥ λ_{0} also. Hence λ_{1} ≥ λ_{0}, showing these two numbers are the same. Hence Ax = λ_{0}x. ■
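When A > 0 is not strictly positive, the eigenvector guaranteed by Theorem B.0.7 may genuinely fail to be >> 0. The following sketch (an arbitrary example, not from the text) shows this for a 2 × 2 upper triangular matrix, where λ_{0} = 1 and the eigenvector x_{0} = (1, 0)^{T} has a zero entry:

```python
# Theorem B.0.7 on a matrix that is > 0 but not >> 0: the Perron
# eigenvector is only > 0, not >> 0. Arbitrary illustrative example.

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[1.0, 1.0],
     [0.0, 1.0]]          # A > 0 (nonnegative, some positive entries)
lam0 = 1.0                # here lambda_0 = 1
x0 = [1.0, 0.0]           # x0 > 0, but x0 has a zero entry

print(matvec(A, x0) == [lam0 * v for v in x0])   # True: A x0 = lambda_0 x0
print(all(v > 0 for v in x0))                    # False: x0 is not >> 0
```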
If A^{m} >> 0 for some m and A > 0, it follows that the dimension of the eigenspace for λ_{0} is one and that the absolute value of every other eigenvalue of A is less than λ_{0}. If it is only assumed that A > 0, not necessarily A^{m} >> 0 for some m, this is no longer true. However, something very interesting can still be said. First, here is a useful lemma.
Lemma B.0.8 Let M be a matrix of the form

M = ( A 0 )
    ( B C )

or

M = ( A B )
    ( 0 C )

where A is an r × r matrix and C is an (n − r) × (n − r) matrix. Then det(M) = det(A)det(C) and σ(M) = σ(A) ∪ σ(C).
Proof: To verify the claim about the determinants, note

( A 0 )   ( A 0 ) ( I 0 )
( B C ) = ( 0 I ) ( B C ).

Therefore,

det ( A 0 ) = det ( A 0 ) det ( I 0 ).
    ( B C )       ( 0 I )       ( B C )

But it is clear from the method of Laplace expansion that

det ( A 0 ) = det A
    ( 0 I )

and from the multilinear properties of the determinant and row operations that

det ( I 0 ) = det ( I 0 ) = det C.
    ( B C )       ( 0 C )
The case where M is upper block triangular is similar. Applying the determinant identity to M − λI, which has the same block triangular form, gives det(M − λI) = det(A − λI)det(C − λI), and this immediately implies σ(M) = σ(A) ∪ σ(C). ■
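A quick numerical check of the lemma, using arbitrarily chosen diagonal blocks so the spectra are easy to read off:

```python
# Check of Lemma B.0.8 on a block lower triangular M = [[A, 0], [B, C]]:
# det(M) = det(A) det(C) and sigma(M) = sigma(A) ∪ sigma(C).

def det(M):
    """Determinant by Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

A = [[2.0, 0.0], [0.0, 3.0]]   # det(A) = 6,  sigma(A) = {2, 3}
C = [[4.0, 0.0], [0.0, 5.0]]   # det(C) = 20, sigma(C) = {4, 5}

M = [[2.0, 0.0, 0.0, 0.0],
     [0.0, 3.0, 0.0, 0.0],
     [1.0, 1.0, 4.0, 0.0],
     [1.0, 1.0, 0.0, 5.0]]     # [[A, 0], [B, C]] with B all ones

print(det(M) == det(A) * det(C))   # True: 120 = 6 * 20

# Each eigenvalue of A or C makes M - lambda*I singular:
print(all(abs(det([[M[i][j] - (lam if i == j else 0.0) for j in range(4)]
                   for i in range(4)])) < 1e-9
          for lam in (2.0, 3.0, 4.0, 5.0)))      # True
```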
Theorem B.0.9 Let A > 0 and let λ_{0} be given in 2.7. If λ is an eigenvalue for A such that |λ| = λ_{0}, then λ∕λ_{0} is a root of unity. Thus (λ∕λ_{0})^{m} = 1 for some m ∈ ℕ.
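For a concrete instance of the theorem (an arbitrary example, not from the text), consider a 2 × 2 permutation matrix: it is > 0 with λ_{0} = 1, it has the eigenvalue λ = −1 with |λ| = λ_{0}, and λ∕λ_{0} = −1 is a root of unity since (−1)^{2} = 1. A minimal sketch:

```python
# Example for Theorem B.0.9: a permutation matrix with lambda_0 = 1
# and a second eigenvalue lambda = -1 on the same circle |lambda| = lambda_0.

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[0.0, 1.0],
     [1.0, 0.0]]                      # A > 0 but not >> 0

lam0, x0 = 1.0, [0.5, 0.5]            # Perron pair: A x0 = x0
lam, x = -1.0, [1.0, -1.0]            # second eigenpair: A x = -x

print(matvec(A, x0) == [lam0 * v for v in x0])   # True
print(matvec(A, x) == [lam * v for v in x])      # True
print((lam / lam0) ** 2 == 1.0)                  # True: a root of unity
```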
Proof: Applying Theorem B.0.7 to A^{T}, there exists v > 0 such that A^{T}v = λ_{0}v. In the first part of the argument it is assumed v >> 0. Now suppose Ax = λx, x ≠ 0, and that