24.2 Laplace Transform Methods
There is a popular algebraic technique which may be the fastest way to find closed-form
solutions to the initial value problem and to find the fundamental matrix. This method of
transforms succeeds so well because of the algebraic technique of partial fractions and the fact that the
Laplace transform is a linear mapping.
This presentation will emphasize the algebraic procedures. The analytical questions are not trivial and
are discussed in Section A.2 of the appendix.
Definition 24.2.1 Let f be a function defined on [0,∞) which has exponential growth, meaning
$$|f(t)| \le C e^{\lambda t}$$
for some real λ and constant C. Then the Laplace transform of f, denoted by ℒf,
is defined as
$$\mathcal{L}f(s) \equiv \int_0^\infty e^{-st} f(t)\, dt$$
for all s sufficiently large. It is customary to write this transform as F(s)
and the function as f(t)
instead of f. In other words, t is considered a generic variable as is s and you tell the difference by
whether it is t or s. It is sloppy but convenient notation.
Lemma 24.2.2 ℒ is a linear mapping in the sense that if f, g have exponential growth, then for all s
large enough and a, b scalars,
$$\mathcal{L}(af + bg)(s) = a\,\mathcal{L}f(s) + b\,\mathcal{L}g(s).$$
Proof: Let f, g be two functions having exponential growth. Then for s large enough,
$$\mathcal{L}(af + bg)(s) = \int_0^\infty e^{-st}\left(af(t) + bg(t)\right) dt
= a \int_0^\infty e^{-st} f(t)\, dt + b \int_0^\infty e^{-st} g(t)\, dt
= a\,\mathcal{L}f(s) + b\,\mathcal{L}g(s). \qquad \blacksquare$$
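This linearity is easy to spot-check symbolically. Here is a small sketch using Python's sympy library; the sample functions $f = e^{2t}$, $g = \sin t$ and the scalars 3, 5 are arbitrary choices for illustration, not taken from the text.

```python
import sympy as sp

t, s = sp.symbols("t s", positive=True)

# Arbitrary sample functions with exponential growth
f = sp.exp(2 * t)
g = sp.sin(t)

F = sp.laplace_transform(f, t, s, noconds=True)  # 1/(s - 2)
G = sp.laplace_transform(g, t, s, noconds=True)  # 1/(s**2 + 1)

# Linearity: L(3f + 5g) = 3 L(f) + 5 L(g)
lhs = sp.laplace_transform(3 * f + 5 * g, t, s, noconds=True)
assert sp.simplify(lhs - (3 * F + 5 * G)) == 0
```

The `noconds=True` flag drops the convergence conditions (the "for all s sufficiently large" part of the definition), which is exactly the bookkeeping the lemma suppresses.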
The usefulness of this method in solving differential equations comes from the following
table of Laplace transforms. In the table, Γ denotes the gamma function
$$\Gamma(p+1) \equiv \int_0^\infty e^{-t} t^p\, dt.$$
The function $u_c$ denotes the step function which equals 1 for t > c
and 0 for t < c.
The expression $\delta_c$ in Formula 20.) is defined as follows: for every continuous function f,
$$\int_0^\infty f(t)\,\delta_c(t)\, dt \equiv f(c).$$
It models an impulse and is sometimes called the Dirac delta function. There is no such function but it is
called this anyway. In the following, n will be a positive integer and $f * g$ denotes the convolution
$$f * g\,(t) \equiv \int_0^t f(t-u)\, g(u)\, du,$$
so Formula 19.) gives the Laplace transform of the function
$t \to f * g\,(t)$.
Table of Laplace Transforms
|1.) $1$ || $1/s$|
|2.) $e^{at}$ || $1/(s-a)$|
|3.) $t^n$ || $n!/s^{n+1}$|
|4.) $t^p,\ p > -1$ || $\Gamma(p+1)/s^{p+1}$|
|5.) $\sin at$ || $a/(s^2+a^2)$|
|6.) $\cos at$ || $s/(s^2+a^2)$|
|7.) $e^{ibt}$ || $1/(s-ib)$|
|8.) $\sinh at$ || $a/(s^2-a^2)$|
|9.) $\cosh at$ || $s/(s^2-a^2)$|
|10.) $e^{at}\sin bt$ || $b/((s-a)^2+b^2)$|
|11.) $e^{at}\cos bt$ || $(s-a)/((s-a)^2+b^2)$|
|12.) $e^{at}\sinh bt$ || $b/((s-a)^2-b^2)$|
|13.) $e^{at}\cosh bt$ || $(s-a)/((s-a)^2-b^2)$|
|14.) $t^n e^{at}$ || $n!/(s-a)^{n+1}$|
|15.) $u_c(t)$ || $e^{-cs}/s$|
|16.) $u_c(t) f(t-c)$ || $e^{-cs} F(s)$|
|17.) $e^{ct} f(t)$ || $F(s-c)$|
|18.) $f(ct)$ || $(1/c) F(s/c),\ c > 0$|
|19.) $f * g$ || $F(s) G(s)$|
|20.) $\delta_c(t)$ || $e^{-cs}$|
|21.) $f'(t)$ || $s F(s) - f(0)$|
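Several of the rows, including the convolution rule 19.), can be spot-checked with sympy. This is a sketch for illustration; the sample convolution pair $f = t$, $g = e^t$ is an arbitrary choice.

```python
import sympy as sp

t, s, u = sp.symbols("t s u", positive=True)
a = sp.Symbol("a", positive=True)

L = lambda h: sp.laplace_transform(h, t, s, noconds=True)

assert sp.simplify(L(sp.S(1)) - 1 / s) == 0                    # row 1.)
assert sp.simplify(L(sp.exp(a * t)) - 1 / (s - a)) == 0        # row 2.)
assert sp.simplify(L(t**3) - 6 / s**4) == 0                    # row 3.), n = 3
assert sp.simplify(L(sp.sin(a * t)) - a / (s**2 + a**2)) == 0  # row 5.)
assert sp.simplify(L(sp.cos(a * t)) - s / (s**2 + a**2)) == 0  # row 6.)

# Row 19.): L(f * g) = F(s) G(s), with f = t, g = e^t as a sample pair
f, g = t, sp.exp(t)
conv = sp.integrate(f.subs(t, t - u) * g.subs(t, u), (u, 0, t))  # (f*g)(t)
assert sp.simplify(L(conv) - L(f) * L(g)) == 0
```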
You should verify the claims in this table. It is best if you do it yourself. The fundamental result in
using Laplace transforms is this. If you have F(s) = ℒf(s), then f is determined by F, aside from its
values at finitely many jumps on each
bounded interval. Thus you just go backwards in the table to find the desired
functions. To see this shown, see Section A.2
on Page 1323. I will illustrate with a second order
differential equation having constant coefficients. Of course you can change to a first order
system and this will be the emphasis next, but you can also use the method directly. Note
$$\mathcal{L}(y'')(s) = s^2\,\mathcal{L}y(s) - s\, y(0) - y'(0).$$
A similar formula holds for higher derivatives. You can also get this by iterating 21.).
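The formula for ℒ(y'') can be checked on a concrete function with sympy; this sketch uses the arbitrary sample $f(t) = t^3 + e^{2t}$, not a function from the text.

```python
import sympy as sp

t, s = sp.symbols("t s", positive=True)

f = t**3 + sp.exp(2 * t)  # arbitrary sample with exponential growth
F = sp.laplace_transform(f, t, s, noconds=True)

# L(f'')(s) should equal s^2 F(s) - s f(0) - f'(0)
lhs = sp.laplace_transform(sp.diff(f, t, 2), t, s, noconds=True)
rhs = s**2 * F - s * f.subs(t, 0) - sp.diff(f, t).subs(t, 0)
assert sp.simplify(lhs - rhs) == 0
```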
Example 24.2.3 Find all solutions to the equation $y'' - 2y' + y = e^{-t}$.
From the table, first go to $y''$. This gives $s^2 Y(s) - s y(0) - y'(0)$, where $Y$ denotes $\mathcal{L}y$;
then you go to the next term, which gives $-2\left(s Y(s) - y(0)\right)$, and finally, you get $Y(s)$.
On the right you get $1/(s+1)$ from formula 2.).
Therefore, you have
$$\left(s^2 - 2s + 1\right) Y(s) - s y(0) + 2 y(0) - y'(0) = \frac{1}{s+1}.$$
Thus we find the Laplace transform of the desired function,
$$Y(s) = \frac{1}{(s+1)(s-1)^2} + \frac{s\, y(0) + y'(0) - 2 y(0)}{(s-1)^2}.$$
Now you go backwards in the table. This typically involves doing partial fractions to get something which is
in the table. It may be tedious, but it is completely routine. You can also get this from a computer algebra
system. More on this later. Thus we need
$$\frac{1}{(s+1)(s-1)^2} = \frac{1/4}{s+1} - \frac{1/4}{s-1} + \frac{1/2}{(s-1)^2}.$$
Now you go backwards in the table to find that this comes from
$$\frac{1}{4} e^{-t} - \frac{1}{4} e^{t} + \frac{1}{2} t e^{t}.$$
Next consider the other two terms. Since
$$\frac{s\, y(0)}{(s-1)^2} = \frac{y(0)}{s-1} + \frac{y(0)}{(s-1)^2},$$
these are in the table.
Therefore, our solution is
$$y(t) = \frac{1}{4} e^{-t} - \frac{1}{4} e^{t} + \frac{1}{2} t e^{t} + y(0)\left(e^{t} + t e^{t}\right) + \left(y'(0) - 2 y(0)\right) t e^{t}.$$
If you specify $y(0) = a$ and $y'(0) = b$, then you will find the unique solution to the differential equation with these
initial conditions. It is
$$y(t) = \frac{1}{4} e^{-t} + \left(a - \frac{1}{4}\right) e^{t} + \left(b - a + \frac{1}{2}\right) t e^{t}.$$
You can check that this satisfies the initial conditions and the equation.
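One quick way to do that check is to hand the initial value problem to a computer algebra system. Here is a sketch using sympy's ODE solver, with the illustrative initial conditions $y(0) = 1$, $y'(0) = 1$.

```python
import sympy as sp

t = sp.symbols("t")
y = sp.Function("y")

# y'' - 2y' + y = e^{-t} with y(0) = 1, y'(0) = 1
ode = sp.Eq(y(t).diff(t, 2) - 2 * y(t).diff(t) + y(t), sp.exp(-t))
sol = sp.dsolve(ode, y(t), ics={y(0): 1, y(t).diff(t).subs(t, 0): 1})

# With a = b = 1 the closed form is (1/4)e^{-t} + (3/4)e^{t} + (1/2)t e^{t}
expected = sp.exp(-t) / 4 + sp.Rational(3, 4) * sp.exp(t) + t * sp.exp(t) / 2
assert sp.simplify(sol.rhs - expected) == 0
```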
To present this in a unified manner, write the equation as a first order system as described above, with $x_1 = y$ and $x_2 = y'$:
$$\mathbf{x}' = \begin{pmatrix} 0 & 1 \\ -1 & 2 \end{pmatrix} \mathbf{x} + \begin{pmatrix} 0 \\ e^{-t} \end{pmatrix}, \qquad \mathbf{x}(0) = \begin{pmatrix} a \\ b \end{pmatrix}.$$
Taking the Laplace transform in terms of the entries of the matrices gives
$$s X(s) - \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 2 \end{pmatrix} X(s) + \begin{pmatrix} 0 \\ 1/(s+1) \end{pmatrix}.$$
We can solve for $X(s)$:
$$X(s) = \begin{pmatrix} s & -1 \\ 1 & s-2 \end{pmatrix}^{-1} \begin{pmatrix} a \\ b + \frac{1}{s+1} \end{pmatrix} = \frac{1}{(s-1)^2} \begin{pmatrix} s-2 & 1 \\ -1 & s \end{pmatrix} \begin{pmatrix} a \\ b + \frac{1}{s+1} \end{pmatrix}.$$
Now the solution to our problem is $x_1$ and so, we only need to go backwards in the table with the top line.
Now you use partial fractions. The top line equals
$$X_1(s) = \frac{(s-2)a + b}{(s-1)^2} + \frac{1}{(s+1)(s-1)^2} = \frac{a - 1/4}{s-1} + \frac{b - a + 1/2}{(s-1)^2} + \frac{1/4}{s+1}.$$
As before, say both $a, b$ are 1. Then this reduces to
$$x_1(t) = \frac{3}{4} e^{t} + \frac{1}{2} t e^{t} + \frac{1}{4} e^{-t},$$
which is the same answer as before.
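The matrix computation above can itself be carried out symbolically. The following sympy sketch forms $(sI - A)^{-1}\left(\mathbf{x}(0) + F(s)\right)$, takes partial fractions of the top line with $a = b = 1$, and inverts the transform. (sympy's inverse transform attaches a Heaviside(t) factor, which is removed here since only $t > 0$ matters.)

```python
import sympy as sp

s, t = sp.symbols("s t", positive=True)
a, b = sp.symbols("a b")

A = sp.Matrix([[0, 1], [-1, 2]])   # companion matrix of y'' - 2y' + y
x0 = sp.Matrix([a, b])             # initial condition (y(0), y'(0))
Fs = sp.Matrix([0, 1 / (s + 1)])   # Laplace transform of the forcing (0, e^{-t})

# X(s) = (sI - A)^{-1} (x(0) + F(s))
X = (s * sp.eye(2) - A).inv() * (x0 + Fs)

# Top line with a = b = 1, expanded in partial fractions
X1 = sp.apart(X[0].subs({a: 1, b: 1}), s)

x1 = sp.inverse_laplace_transform(X1, s, t).subs(sp.Heaviside(t), 1)
expected = sp.Rational(3, 4) * sp.exp(t) + t * sp.exp(t) / 2 + sp.exp(-t) / 4
assert sp.simplify(x1 - expected) == 0
```

This is precisely the "let the computer algebra system do the tedious work" approach discussed next: the partial fractions and the backwards table lookup both happen in one or two library calls.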
Using the table as just described really is a pretty good way to solve these kinds of equations, but there
is a much easier way to do it. You let the computer algebra system do the tedious work for you.
Here is the general idea for a first order system. Be patient. I will consider specific examples a
little later. However, if you are looking for something which will solve all first order systems in
closed form using known elementary functions, then you are looking for something which is not
there. You can indeed speak of it in general theoretical terms but the only problems which are
completely solvable in closed form are those for which you can exactly find the eigenvalues of the
matrix. Unfortunately, this involves solving polynomial equations and none of us can do these in