
30.2 First Order Systems, Constant Coefficients

You want to find a matrix valued function Φ(t) such that

$$\Phi'(t) = A\Phi(t),\qquad \Phi(0) = I,\qquad A \text{ is } p\times p \tag{30.3}$$

Such a matrix is called a fundamental matrix. It turns out that if you can find Φ(t), you can always solve the first order system

$$x' = Ax + f,\qquad x(0) = x_0 \tag{30.4}$$
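As a side note on what "you can always solve" means: the solution of (30.4) can be expressed in terms of Φ(t) by the standard variation of constants formula $x(t) = \Phi(t)x_0 + \Phi(t)\int_0^t \Phi(u)^{-1} f(u)\,du$, developed in detail later. The following Python/sympy sketch checks this formula symbolically for one made-up choice of initial condition and forcing term; the matrix happens to be the one from Example 30.2.2 below, and the use of Φ(t) = e^{At} for constant A is an assumption of the sketch, not something established at this point in the text.

```python
import sympy as sp

t, u = sp.symbols('t u', real=True)

A = sp.Matrix([[-1, 2], [-3, 4]])   # matrix of Example 30.2.2 (any constant A would do)
x0 = sp.Matrix([1, 0])              # made-up initial condition
f = sp.Matrix([1, 0])               # made-up constant forcing term

Phi = (A * t).exp()                 # assumes Phi(t) = e^{At} for constant A

# Candidate solution: x(t) = Phi(t) x0 + Phi(t) * integral_0^t Phi(u)^{-1} f du
x = Phi * x0 + Phi * ((A * u).exp().inv() * f).integrate((u, 0, t))

print((x.diff(t) - A * x - f).applyfunc(sp.simplify))   # zero vector: x' = Ax + f holds
print(x.subs(t, 0))                                     # equals x0
```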

I also want to have AΦ(t) = Φ(t)A.

What is meant by the above symbols? The idea is that Φ(t) is a matrix whose entries

are differentiable functions of t. The meaning of Φ′(t) is the matrix whose entries are the derivatives of the entries of Φ(t). For example, abusing notation slightly,
$$\begin{pmatrix} t & t^2 \\ \sin(t) & \tan(t) \end{pmatrix}' = \begin{pmatrix} 1 & 2t \\ \cos(t) & \sec^2(t) \end{pmatrix}.$$
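If you want to experiment with this entrywise derivative, here is a small sketch using Python's sympy library (not part of the text); Matrix.diff differentiates entry by entry, and sympy writes sec²(t) in the equivalent form tan(t)² + 1.

```python
import sympy as sp

t = sp.symbols('t')

# The matrix from the example above; diff(t) acts entry by entry
Phi = sp.Matrix([[t,         t**2],
                 [sp.sin(t), sp.tan(t)]])

print(Phi.diff(t))
# Matrix([[1, 2*t], [cos(t), tan(t)**2 + 1]])   (tan(t)**2 + 1 is sec^2(t))
```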

What are some properties of this derivative? Does the product rule hold, for example?

Lemma 30.2.1 Suppose Φ(t) is m × n and Ψ(t) is n × p and these are differentiable matrices. Then
$$(\Phi(t)\Psi(t))' = \Phi'(t)\Psi(t) + \Phi(t)\Psi'(t)$$

Proof: By definition,

$$\bigl((\Phi(t)\Psi(t))'\bigr)_{ij} = \bigl((\Phi(t)\Psi(t))_{ij}\bigr)' = \Bigl(\sum_k \Phi(t)_{ik}\,\Psi(t)_{kj}\Bigr)' = \sum_k \Phi'(t)_{ik}\,\Psi(t)_{kj} + \sum_k \Phi(t)_{ik}\,\Psi'(t)_{kj}$$
$$= \bigl(\Phi'(t)\Psi(t)\bigr)_{ij} + \bigl(\Phi(t)\Psi'(t)\bigr)_{ij}$$

and so the conclusion follows. ■
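As a quick sanity check of Lemma 30.2.1 (an illustration only, not a substitute for the proof), the following sympy sketch compares both sides of the product rule for two made-up differentiable matrix functions.

```python
import sympy as sp

t = sp.symbols('t')

# Two made-up differentiable 2x2 matrix functions, so the product is defined
Phi = sp.Matrix([[t,          sp.sin(t)],
                 [1,          sp.exp(t)]])
Psi = sp.Matrix([[t**2,       1],
                 [sp.cos(t),  t]])

lhs = (Phi * Psi).diff(t)                       # (Phi Psi)'
rhs = Phi.diff(t) * Psi + Phi * Psi.diff(t)     # Phi' Psi + Phi Psi'
print((lhs - rhs).applyfunc(sp.simplify))       # zero matrix
```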

Now consider how to find the fundamental matrix Φ(t) to begin with. I will illustrate with an example.

Example 30.2.2 Let $A = \begin{pmatrix} -1 & 2 \\ -3 & 4 \end{pmatrix}$. Find the fundamental matrix.

I want Φ′(t) = AΦ(t), Φ(0) = I. Take the Laplace transform of both sides. By this I mean replace each entry of the matrix with its Laplace transform. Then if F(s) is the name of the Laplace transform of Φ(t),

$$sF(s) - I = AF(s) \quad\text{so}\quad (sI - A)F(s) = I$$

and so $F(s) = (sI - A)^{-1}$. Now this is easy to find using the formula for the inverse presented earlier. Recall you took the transpose of the cofactor matrix and divided by the determinant to get the inverse. See Theorem 27.2.1. In this example,

$$F(s) = (sI - A)^{-1} = \left( s\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} - \begin{pmatrix} -1 & 2 \\ -3 & 4 \end{pmatrix} \right)^{-1} = \begin{pmatrix} \dfrac{s-4}{s^2-3s+2} & \dfrac{2}{s^2-3s+2} \\[2ex] \dfrac{-3}{s^2-3s+2} & \dfrac{s+1}{s^2-3s+2} \end{pmatrix}$$
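If you want to double check this inverse, the computation can be reproduced symbolically. The sketch below (Python/sympy, not part of the text) builds sI − A for the matrix of Example 30.2.2, inverts it, and compares with the matrix displayed above. From here, taking the inverse Laplace transform of each entry would recover Φ(t).

```python
import sympy as sp

s = sp.symbols('s')
A = sp.Matrix([[-1, 2], [-3, 4]])

F = (s * sp.eye(2) - A).inv()   # F(s) = (sI - A)^{-1}

# The matrix displayed above: adjugate of (sI - A) divided by det(sI - A) = s^2 - 3s + 2
expected = sp.Matrix([[s - 4,  2],
                      [-3,     s + 1]]) / (s**2 - 3*s + 2)

print((F - expected).applyfunc(sp.simplify))   # zero matrix
```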
