There is a popular algebraic technique which may be the fastest way to find closed form solutions to the initial value problem and to find the fundamental matrix. This method of Laplace^{1} transforms succeeds so well because of the algebraic technique of partial fractions and the fact that the Laplace transform is a linear mapping. This presentation will emphasize the algebraic procedures. The analytical questions are not trivial and are discussed in Section A.2 of the appendix.
Definition 24.2.1 Let f be a function defined on [0,∞) which has exponential growth, meaning that
$$|f(t)| \leq Ce^{\lambda t}$$
for some real λ and some constant C. Then the Laplace transform of f, denoted by ℒ(f), is defined as
$$\mathcal{L}f(s) = \int_0^\infty e^{-ts} f(t)\, dt$$
for all s sufficiently large. It is customary to write this transform as F(s) or ℒf(s) and the function as f(t) instead of f. In other words, t is considered a generic variable, as is s, and you tell the difference by whether it is t or s. It is sloppy but convenient notation.
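The defining integral is easy to evaluate with a computer algebra system. Here is a minimal sketch, assuming Python with the sympy library (the library choice is an assumption, not part of the text):

```python
import sympy as sp

t = sp.symbols('t', nonnegative=True)
s = sp.symbols('s', positive=True)   # "for all s sufficiently large"

def L(f):
    # Laplace transform computed directly from the defining integral
    return sp.integrate(sp.exp(-s*t) * f, (t, 0, sp.oo))

print(L(1))            # L(1)(s) = 1/s
print(L(t))            # L(t)(s) = 1/s^2
print(L(sp.exp(-t)))   # L(e^{-t})(s) = 1/(s + 1)
```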
Lemma 24.2.2 ℒ is a linear mapping in the sense that if f, g have exponential growth, then for all s large enough and a, b scalars,
$$\mathcal{L}(af(t) + bg(t))(s) = a\mathcal{L}f(s) + b\mathcal{L}g(s)$$
Proof: Let f, g be two functions having exponential growth. Then for s large enough,
$$\mathcal{L}(af(t) + bg(t)) \equiv \int_0^\infty e^{-ts}(af(t) + bg(t))\, dt = a\int_0^\infty e^{-ts}f(t)\, dt + b\int_0^\infty e^{-ts}g(t)\, dt = a\mathcal{L}f(s) + b\mathcal{L}g(s). \ \blacksquare$$
The usefulness of this method in solving differential equations comes from the following observation, obtained by integrating by parts:
$$\mathcal{L}(x'(t)) = \int_0^\infty x'(t)e^{-ts}\, dt = x(t)e^{-st}\Big|_0^\infty + \int_0^\infty s e^{-st}x(t)\, dt = -x(0) + s\mathcal{L}x(s).$$
In the table, Γ(p + 1) denotes the gamma function
$$\Gamma(p+1) = \int_0^\infty e^{-t} t^p\, dt$$
The function u_{c}(t) denotes the step function which equals 1 for t > c and 0 for t < c.
The expression in Formula 20.) is defined as follows:
$$\int \delta(t - c)\, f(t)\, dt = f(c)$$
It models an impulse and is sometimes called the Dirac delta function. There is no such function, but it is called this anyway. In the following, n will be a positive integer and
$$f * g(t) \equiv \int_0^t f(t - u)\, g(u)\, du.$$
Also, F(s) will denote ℒ{f(t)}, the Laplace transform of the function t → f(t).
Table of Laplace Transforms

 f(t)                        F(s)
 1.)  1                      1/s
 2.)  e^{at}                 1/(s − a)
 3.)  t^{n}                  n!/s^{n+1}
 4.)  t^{p}, p > −1          Γ(p + 1)/s^{p+1}
 5.)  sin at                 a/(s^{2} + a^{2})
 6.)  cos at                 s/(s^{2} + a^{2})
 7.)  e^{ibt}                (s + ib)/(s^{2} + b^{2})
 8.)  sinh at                a/(s^{2} − a^{2})
 9.)  cosh at                s/(s^{2} − a^{2})
 10.) e^{at} sin bt          b/((s − a)^{2} + b^{2})
 11.) e^{at} cos bt          (s − a)/((s − a)^{2} + b^{2})
 12.) e^{at} sinh bt         b/((s − a)^{2} − b^{2})
 13.) e^{at} cosh bt         (s − a)/((s − a)^{2} − b^{2})
 14.) t^{n}e^{at}            n!/(s − a)^{n+1}
 15.) u_{c}(t)               e^{−cs}/s
 16.) u_{c}(t)f(t − c)       e^{−cs}F(s)
 17.) e^{ct}f(t)             F(s − c)
 18.) f(ct)                  (1/c)F(s/c)
 19.) f ∗ g(t)               F(s)G(s)
 20.) δ(t − c)               e^{−cs}
 21.) f^{′}(t)               sF(s) − f(0)
 22.) (−t)^{n}f(t)           d^{n}F/ds^{n}(s)
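Several of these entries can be verified mechanically. A sketch assuming sympy, whose built-in laplace_transform reproduces entries 3.), 5.), and 14.):

```python
import sympy as sp

t, a = sp.symbols('t a', positive=True)
s = sp.symbols('s', positive=True)

def LT(f):
    # noconds=True drops the convergence conditions and keeps only F(s)
    return sp.laplace_transform(f, t, s, noconds=True)

print(LT(t**3))                 # entry 3.) with n = 3:  6/s^4
print(LT(sp.sin(a*t)))          # entry 5.):  a/(s^2 + a^2)
print(LT(t**2 * sp.exp(a*t)))   # entry 14.) with n = 2:  2/(s - a)^3
```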
You should verify the claims in this table. It is best if you do it yourself. The fundamental result in using Laplace transforms is this: if you have F(s) = G(s), then aside from finitely many jumps on each bounded interval, it follows that f(t) = g(t). Thus you just go backwards in the table to find the desired functions. For a proof, see Section A.2 on Page 1323. I will illustrate with a second order differential equation having constant coefficients. Of course you can change to a first order system, and this will be the emphasis next, but you can also use the method directly. Note that, integrating by parts twice,
$$\int_0^\infty y''(t)e^{-st}\, dt = y'(t)e^{-st}\Big|_0^\infty + s\int_0^\infty y'(t)e^{-st}\, dt$$
$$= -y'(0) + s\int_0^\infty y'(t)e^{-st}\, dt$$
$$= -y'(0) + s\left[\, y(t)e^{-st}\Big|_0^\infty + s\int_0^\infty y(t)e^{-st}\, dt \right]$$
$$= -y'(0) - sy(0) + s^2 Y(s)$$
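The same kind of check works for this second derivative formula. A sketch assuming sympy, taking y(t) = sin t + t so that y(0) = 0 and y′(0) = 2:

```python
import sympy as sp

t = sp.symbols('t', nonnegative=True)
s = sp.symbols('s', positive=True)

def L(f):
    return sp.integrate(sp.exp(-s*t) * f, (t, 0, sp.oo))

y = sp.sin(t) + t                        # y(0) = 0, y'(0) = 2
lhs = L(sp.diff(y, t, 2))                # transform of y''
rhs = s**2 * L(y) - s * y.subs(t, 0) - sp.diff(y, t).subs(t, 0)
print(sp.simplify(lhs - rhs))            # 0
```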
A similar formula holds for higher derivatives. You can also get this by iterating Formula 21.).
Example 24.2.3 Find all solutions to the equation y^{′′} − 2y^{′} + y = e^{−t}.
From the table, first go to y^{′′}. This gives −y^{′}(0) − sy(0) + s^{2}Y(s). Then the next term gives −2sY(s) + 2y(0), and finally you get Y(s) from the y. On the right, formula 2.) with a = −1 gives 1/(s + 1). Therefore, you have
$$s^2 Y(s) - 2sY(s) + Y(s) - y'(0) - sy(0) + 2y(0) = \frac{1}{s+1}$$
$$(s^2 - 2s + 1)\, Y(s) = y'(0) + (s-2)\, y(0) + \frac{1}{s+1}$$
Thus we find the Laplace transform of the desired function:
$$Y(s) = y'(0)\, \frac{1}{s^2 - 2s + 1} + y(0)\, \frac{s-2}{s^2 - 2s + 1} + \frac{1}{(s+1)(s^2 - 2s + 1)}$$
Now you go backwards in the table. This typically involves doing partial fractions to get something which is in the table. It may be tedious, but it is completely routine. You can also get this from a computer algebra system. More on this later. Thus, writing a = y′(0) and b = y(0) and noting that s^{2} − 2s + 1 = (s − 1)^{2}, we need
$$Y(s) = \frac{a}{(s-1)^2} + b\,\frac{s-2}{(s-1)^2} + \frac{1}{(s+1)(s-1)^2}$$
As before, say both a, b are 1. Then, using the partial fraction expansions
$$\frac{1}{(s+1)(s-1)^2} = \frac{1/4}{s+1} - \frac{1/4}{s-1} + \frac{1/2}{(s-1)^2}, \qquad \frac{s-2}{(s-1)^2} = \frac{1}{s-1} - \frac{1}{(s-1)^2},$$
this reduces to
$$y(t) = \frac{1}{4}\left( 3e^{t} + 2te^{t} + e^{-t} \right)$$
which is the same answer as before.
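As the text suggests, a computer algebra system will do this tedious work for you. A sketch assuming sympy, whose dsolve handles the example with the initial conditions y(0) = y′(0) = 1:

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# y'' - 2y' + y = e^{-t}, with y(0) = 1 and y'(0) = 1
ode = sp.Eq(y(t).diff(t, 2) - 2*y(t).diff(t) + y(t), sp.exp(-t))
sol = sp.dsolve(ode, y(t), ics={y(0): 1, y(t).diff(t).subs(t, 0): 1})
print(sol.rhs)   # equivalent to (3 e^t + 2 t e^t + e^{-t})/4
```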
Using the table as just described really is a pretty good way to solve these kinds of equations, but there is a much easier way to do it: you let the computer algebra system do the tedious work for you. Here is the general idea for a first order system. Be patient; I will consider specific examples a little later. However, if you are looking for something which will solve all first order systems in closed form using known elementary functions, then you are looking for something which is not there. You can indeed speak of it in general theoretical terms, but the only problems which are completely solvable in closed form are those for which you can exactly find the eigenvalues of the matrix. Unfortunately, this involves solving polynomial equations, and none of us can do these in general.
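To make the last point concrete: for a generic matrix of size 5 × 5 or larger, the characteristic polynomial has degree at least 5, and its roots have no general expression by radicals, so in practice eigenvalues are computed numerically. A minimal sketch assuming numpy:

```python
import numpy as np

# For a generic 5x5 matrix the characteristic polynomial has degree 5,
# so its roots have no general closed form; we compute the eigenvalues
# numerically instead.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
print(np.linalg.eigvals(A))
```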