Definition 10.3.1 The Fourier transform is defined as follows for f ∈ L^{1}(ℝ).

Ff(t) ≡ (1∕√(2π)) ∫_{−∞}^{∞} e^{−itx} f(x) dx

where here I am using the usual notation from calculus to denote the Lebesgue integral; to be more precise, you would put dm_{1} in place of dx. The inverse Fourier transform is defined the same way except you delete the minus sign in the complex exponential.

F^{−1}f(t) ≡ (1∕√(2π)) ∫_{−∞}^{∞} e^{itx} f(x) dx
Does it deserve to be called the “inverse” Fourier transform? This question will be explored somewhat
below.
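As a quick sanity check on this definition (not from the text; the helper `fourier` and the truncation parameters are my own choices), one can approximate Ff numerically for the Gaussian f(x) = e^{−x^{2}∕2}, which is known to be its own Fourier transform under this normalization:

```python
import math, cmath

def fourier(f, t, L=15.0, n=100_000):
    """Midpoint rule for (1/sqrt(2*pi)) * integral of e^{-itx} f(x) over [-L, L].

    For rapidly decaying f, truncating the line to [-L, L] is harmless.
    """
    h = 2 * L / n
    s = 0j
    for k in range(n):
        x = -L + (k + 0.5) * h
        s += cmath.exp(-1j * t * x) * f(x)
    return s * h / math.sqrt(2 * math.pi)

gauss = lambda x: math.exp(-x * x / 2)
# The Gaussian e^{-x^2/2} is its own Fourier transform.
print(fourier(gauss, 1.3).real, gauss(1.3))
```

For real even f the imaginary part of the approximation cancels to roundoff, which is why only the real part is printed.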
In studying the Fourier transform, I will use some improper integrals.
Definition 10.3.2 Define ∫_{a}^{∞} f(t) dt ≡ lim_{r→∞} ∫_{a}^{r} f(t) dt. This coincides with the Lebesgue integral when f ∈ L^{1}(a,∞).
With this convention, there is a very important improper integral involving sin(x)∕x. You can show with a little estimating that x → sin(x)∕x is not in L^{1}(0,∞). Nevertheless, a lot can be said about improper integrals involving this function.
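This failure of integrability is easy to see numerically: the arch of |sin(u)|∕u over [kπ,(k+1)π] contributes about 2∕(kπ), so the partial integrals grow like a harmonic series. A small sketch (the helper name `abs_sinc_integral` is mine, not from the text):

```python
import math

def abs_sinc_integral(N, steps_per_pi=200):
    """Midpoint-rule approximation of the integral of |sin(u)|/u over (0, N*pi)."""
    n = N * steps_per_pi
    h = N * math.pi / n
    total = 0.0
    for k in range(n):
        u = (k + 0.5) * h  # midpoints, so u = 0 is never sampled
        total += abs(math.sin(u)) / u * h
    return total

# The partial integrals grow without bound, roughly like (2/pi)*ln(N).
for N in (10, 100, 1000):
    print(N, abs_sinc_integral(N))
```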
Theorem 10.3.3 The following hold.

1. ∫_{0}^{∞} (sin u∕u) du = π∕2.

2. lim_{r→∞} ∫_{δ}^{∞} (sin(ru)∕u) du = 0 whenever δ > 0.

3. If f ∈ L^{1}(ℝ), then lim_{r→∞} ∫_{ℝ} sin(ru) f(u) du = 0. This is called the Riemann–Lebesgue lemma.
Proof: You know 1∕u = ∫_{0}^{∞} e^{−ut} dt. Therefore, using Fubini's theorem,

∫_{0}^{r} (sin u∕u) du = ∫_{0}^{r} sin(u) ∫_{0}^{∞} e^{−ut} dt du = ∫_{0}^{∞} ∫_{0}^{r} e^{−ut} sin(u) du dt.

The inner integral can be computed explicitly, and as a function of t it is dominated for r ≥ 1 by 2∕(1 + t^{2}), which is in L^{1}. So one can apply the dominated convergence theorem and conclude that

lim_{r→∞} ∫_{0}^{r} (sin u∕u) du = ∫_{0}^{∞} 1∕(1 + t^{2}) dt = π∕2.
This shows part 1.
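Part 1 can be checked numerically (a sketch only; the helper `si` and the truncation point are my own choices, not from the text). The tail of the improper integral past R is of size O(1∕R), so truncating at R = 1000 should land within about 10^{−3} of π∕2:

```python
import math

def si(R, n=200_000):
    """Composite Simpson's rule for the integral of sin(u)/u over [0, R]; n must be even."""
    f = lambda u: math.sin(u) / u if u else 1.0  # extend continuously by 1 at u = 0
    h = R / n
    s = f(0.0) + f(R)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(k * h)
    return s * h / 3

print(si(1000.0), math.pi / 2)  # agreement to roughly three decimal places
```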
Now consider ∫_{δ}^{∞} (sin(ru)∕u) du. It equals ∫_{0}^{∞} (sin(ru)∕u) du − ∫_{0}^{δ} (sin(ru)∕u) du, which can be seen from the definition of what the improper integral means. Also, you can change the variable. Let ru = t so r du = dt and the above reduces to

∫_{0}^{∞} (sin(t)∕t) dt − ∫_{0}^{rδ} (sin(t)∕t) dt = ∫_{δ}^{∞} (sin(ru)∕u) du.

As r → ∞, both integrals on the left converge to π∕2 by part 1, and so the right side converges to 0. This shows part 2.
Suppose now that at some x, g is locally Hölder continuous from the right and from the left. This means there exist constants K, δ > 0 and r ∈ (0,1] such that for 0 < u < δ, |g(x + u) − g(x+)| ≤ Ku^{r} and |g(x − u) − g(x−)| ≤ Ku^{r}. For such g,

lim_{r→∞} ∫_{−∞}^{−δ} sin(ur) (g(x + u)∕u) du = lim_{r→∞} ∫_{δ}^{∞} sin(ur) (g(x + u)∕u) du = 0.
The first integral in 10.3 converges to 0 as r → ∞ because of the Riemann–Lebesgue lemma. Indeed, for 0 ≤ u ≤ δ,

|(g(x − u) − g(x−))∕(2u)| ≤ (K∕2)(1∕u^{1−r})

which is integrable on [0,δ]. The other quotient also is integrable by similar reasoning. ■
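The Riemann–Lebesgue lemma of part 3 can also be illustrated on a concrete f ∈ L^{1}(ℝ). For f(u) = e^{−u} on (0,∞) the integral is known in closed form, ∫_{0}^{∞} e^{−u} sin(ru) du = r∕(1 + r^{2}), so the decay to 0 as r → ∞ is visible directly. (A numerical sketch; the helper name and truncation point are my own.)

```python
import math

def oscillatory_integral(r, T=30.0, n=300_000):
    """Midpoint rule for the integral of e^{-u} sin(ru) over [0, T]; the tail past T is below e^{-T}."""
    h = T / n
    return sum(math.exp(-(k + 0.5) * h) * math.sin(r * (k + 0.5) * h)
               for k in range(n)) * h

for r in (1.0, 10.0, 100.0):
    print(r, oscillatory_integral(r), r / (1 + r * r))  # numeric vs exact; both shrink as r grows
```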
The next theorem justifies the terminology above which defines F^{−1} and calls it the inverse Fourier transform. Roughly it says that the inverse Fourier transform of the Fourier transform equals the midpoint of the jump. Thus if the original function is continuous, it restores the original value of this function. Surely this is what you would want from calling something the inverse Fourier transform. However, note that in this theorem, it is defined in terms of an improper integral. This is because there is no guarantee that the Fourier transform will end up being in L^{1}. Thus instead of ∫_{−∞}^{∞} we write lim_{R→∞}∫_{−R}^{R}. Of course, IF the Fourier transform ends up being in L^{1}, then this amounts to the same thing. The interesting thing is that even if this is not the case, the formula still works provided you consider an improper integral.
Now for certain special kinds of functions, the Fourier transform is indeed in L^{1} and one can
show that it maps this special kind of function to another function of the same sort. This can
be used as the basis for a general theory of Fourier transforms. However, the following does
indeed give adequate justification for the terminology that F^{−1} is called the inverse Fourier
transform.
Theorem 10.3.6 Let g ∈ L^{1}(ℝ) and suppose g is locally Hölder continuous from the right and from the left at x as in 10.1 and 10.2. Then

lim_{R→∞} (1∕(2π)) ∫_{−R}^{R} e^{ixt} ∫_{−∞}^{∞} e^{−ity} g(y) dy dt = (g(x+) + g(x−))∕2.
Proof: Consider the following manipulations.

(1∕(2π)) ∫_{−R}^{R} e^{ixt} ∫_{−∞}^{∞} e^{−ity} g(y) dy dt

= (1∕(2π)) ∫_{−∞}^{∞} ∫_{−R}^{R} e^{ixt−ity} g(y) dt dy = (1∕(2π)) ∫_{−∞}^{∞} ∫_{−R}^{R} e^{i(x−y)t} g(y) dt dy

= (1∕(2π)) ∫_{−∞}^{∞} g(y) ( ∫_{0}^{R} e^{i(x−y)t} dt + ∫_{0}^{R} e^{−i(x−y)t} dt ) dy

= (1∕(2π)) ∫_{−∞}^{∞} g(y) ( ∫_{0}^{R} 2 cos((x − y)t) dt ) dy

= (1∕π) ∫_{−∞}^{∞} g(y) (sin(R(x − y))∕(x − y)) dy = (1∕π) ∫_{−∞}^{∞} g(x − y) (sin(Ry)∕y) dy

= (1∕π) ∫_{0}^{∞} (g(x − y) + g(x + y)) (sin(Ry)∕y) dy

= (2∕π) ∫_{0}^{∞} ((g(x − y) + g(x + y))∕2) (sin(Ry)∕y) dy
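The effect of the last expression can be seen on a concrete example with a jump. Take g = 1_{[−1,1]}, which is in L^{1}(ℝ) and Hölder continuous from both sides at x = 1, where (g(1+) + g(1−))∕2 = 1∕2. For this g and x = 1 the last integral reduces to (1∕π)∫_{0}^{2} (sin(Ry)∕y) dy, and one can watch it approach 1∕2 as R grows (a numerical sketch; the helper name is mine, not from the text):

```python
import math

def truncated_inversion(R, n=200_000):
    """Simpson's rule for (1/pi) * integral of sin(R*y)/y over [0, 2]; integrand := R at y = 0."""
    f = lambda y: math.sin(R * y) / y if y else R
    h = 2.0 / n
    s = f(0.0) + f(2.0)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(k * h)
    return s * h / (3 * math.pi)

for R in (10.0, 100.0, 1000.0):
    print(R, truncated_inversion(R))  # tends to (g(1+) + g(1-))/2 = 1/2
```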
Does this situation ever occur? Yes, it does. This is discussed a little later.
How does the Fourier transform relate to the Laplace transform? This is considered next. Recall from Theorem 10.2.2 that if g has exponential growth |g(t)| ≤ Ce^{ηt}, then if Re(s) > η, one can define ℒg(s) as

ℒg(s) ≡ ∫_{0}^{∞} e^{−su} g(u) du
and also s → ℒg(s) is differentiable on Re(s) > η in the sense that if h ∈ ℂ and G(s) ≡ ℒg(s), then

lim_{h→0} (G(s + h) − G(s))∕h = G′(s) = −∫_{0}^{∞} u e^{−su} g(u) du.
This is an example of an analytic function of the complex variable s. It will follow from later theorems that the existence of one derivative implies the existence of all derivatives. This is of course very different from real analysis, where you may have only finitely many derivatives. The difference here is that h ∈ ℂ. You are saying much more by allowing h to be complex. Then the next theorem shows how to invert the Laplace transform. It is one of those results which says that you get the midpoint of the jump when you do a certain process. It is like what happens in Fourier series, where the Fourier series converges to the midpoint of the jump under suitable conditions. For a fairly elementary discussion of this kind of thing, see the single variable advanced calculus book on my web page.
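Both the definition of ℒg and the derivative formula can be checked numerically on a concrete g of exponential growth. For g(u) = e^{u} (so η = 1) and real s = 2 > η, the exact values are ℒg(2) = 1∕(s − 1) = 1 and G′(2) = −1∕(s − 1)^{2} = −1. (A sketch only; the helper `laplace` and the truncation point are my own choices, not from the text.)

```python
import math

def laplace(g, s, T=80.0, n=200_000):
    """Midpoint rule for the integral of e^{-su} g(u) over [0, T]; the tail past T is assumed negligible."""
    h = T / n
    return sum(math.exp(-s * (k + 0.5) * h) * g((k + 0.5) * h)
               for k in range(n)) * h

g = math.exp                              # g(u) = e^u, exponential growth with eta = 1
lap = laplace(g, 2.0)                     # Laplace transform at s = 2; exact value 1/(2-1) = 1
dlap = -laplace(lambda u: u * g(u), 2.0)  # G'(2); exact value -1/(2-1)^2 = -1
print(lap, dlap)
```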
Theorem 10.3.8 Let g be a measurable function defined on (0,∞) which has exponential growth |g(t)| ≤ Ce^{ηt} for some real η, and is Hölder continuous from the right and left as in 10.1 and 10.2. For Re