This remarkable theorem gives conditions under which a martingale is a Wiener process. The proof given here follows [?].
Definition 62.8.1 Let $W(t)$ be a stochastic process with the following properties: whenever $t_1 < t_2 < \cdots < t_m$, the increments $\{W(t_i) - W(t_{i-1})\}$ are independent; whenever $s < t$, the increment $W(t) - W(s)$ is normally distributed with mean $0$ and variance $t - s$; $t \to W(t)$ is Hölder continuous with every exponent $\gamma < 1/2$; and $W(0) = 0$. Such a process is called a Wiener process.
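The distributional requirements of this definition can be illustrated numerically. The following minimal Monte Carlo sketch (plain Python; the helper names are ours, not from the text) builds $W(t) - W(s)$ from independent $N(0,\delta)$ steps and checks that the increment has mean about $0$ and variance about $t - s$.

```python
import random
import statistics

def wiener_increments(n_steps, dt, rng):
    """Independent N(0, dt) increments of a discretized Wiener process."""
    return [rng.gauss(0.0, dt ** 0.5) for _ in range(n_steps)]

def wiener_value(increments):
    """W at the final grid point, starting from W(0) = 0."""
    return sum(increments)

rng = random.Random(0)
s, t, dt = 0.25, 1.0, 0.01
n_gap = round((t - s) / dt)   # grid steps between s and t

# Sample W(t) - W(s) many times; it should be close to N(0, t - s).
samples = [wiener_value(wiener_increments(n_gap, dt, rng))
           for _ in range(20000)]
mean = statistics.fmean(samples)
var = statistics.variance(samples)
print(round(mean, 2), round(var, 2))  # mean ≈ 0, variance ≈ t - s = 0.75
```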
First here is a lemma.
Lemma 62.8.2 Let $\{X(t)\}$ be a real martingale adapted to the filtration $\mathcal{F}_t$ for $t \in [a,b]$, some interval, such that $E\big(X(t)^2\big) < \infty$ for all $t \in [a,b]$. Then $\{X(t)^2 - t\}$ is also a martingale if and only if whenever $s < t$,
$$E\big((X(t) - X(s))^2 \mid \mathcal{F}_s\big) = t - s.$$
Proof: Suppose first $\{X(t)^2 - t\}$ is a martingale. Then since $\{X(t)\}$ is a martingale,
$$\begin{aligned}
E\big((X(t)-X(s))^2 \mid \mathcal{F}_s\big) &= E\big(X(t)^2 - 2X(t)X(s) + X(s)^2 \mid \mathcal{F}_s\big)\\
&= E\big(X(t)^2 \mid \mathcal{F}_s\big) - 2E\big(X(t)X(s) \mid \mathcal{F}_s\big) + X(s)^2\\
&= E\big(X(t)^2 \mid \mathcal{F}_s\big) - 2X(s)E\big(X(t) \mid \mathcal{F}_s\big) + X(s)^2\\
&= E\big(X(t)^2 \mid \mathcal{F}_s\big) - 2X(s)^2 + X(s)^2\\
&= E\big(X(t)^2 - t \mid \mathcal{F}_s\big) + t - X(s)^2\\
&= X(s)^2 - s + t - X(s)^2 = t - s.
\end{aligned}$$
Next suppose $E\big((X(t)-X(s))^2 \mid \mathcal{F}_s\big) = t - s$. Then since $\{X(t)\}$ is a martingale, the same expansion of $(X(t)-X(s))^2$ gives
$$t - s = E\big(X(t)^2 - X(s)^2 \mid \mathcal{F}_s\big) = E\big(X(t)^2 - t \mid \mathcal{F}_s\big) + t - X(s)^2$$
and so
$$0 = E\big(X(t)^2 - t \mid \mathcal{F}_s\big) - \big(X(s)^2 - s\big),$$
which proves the converse.
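For Brownian motion the identity in this lemma can be sanity-checked by simulation: the conditional second moment of $X(t)-X(s)$ should be $t-s$ no matter what $\mathcal{F}_s$ reveals. The sketch below (our names; the crude conditioning on the sign of $X(s)$ is only an illustration of $\mathcal{F}_s$-measurable information) groups paths by the sign of $X(s)$ and checks both groups give about $t - s$.

```python
import random
import statistics

rng = random.Random(2)
s, t = 0.5, 1.0
pos, neg = [], []
for _ in range(40000):
    xs = rng.gauss(0.0, s ** 0.5)          # X(s) ~ N(0, s)
    inc = rng.gauss(0.0, (t - s) ** 0.5)   # X(t) - X(s) ~ N(0, t - s), independent
    (pos if xs > 0 else neg).append(inc ** 2)

# Conditional second moments, approximated on {X(s) > 0} and {X(s) <= 0}.
m_pos = statistics.fmean(pos)
m_neg = statistics.fmean(neg)
print(round(m_pos, 2), round(m_neg, 2))  # both ≈ t - s = 0.5
```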
Theorem 62.8.3 Suppose $\{X(t)\}$ is a real stochastic process which satisfies all the conditions of a real Wiener process except the requirement that it be continuous. Then both $\{X(t)\}$ and $\{X(t)^2 - t\}$ are martingales.
Proof: First define the filtration to be
$$\mathcal{F}_t \equiv \sigma\big(X(s) - X(r) : r \le s \le t\big).$$
Claim: If $A \in \mathcal{F}_s$, then
$$\int_\Omega \mathcal{X}_A\,(X(t)-X(s))\,dP = P(A)\int_\Omega (X(t)-X(s))\,dP.$$
Proof of claim: Let $\mathcal{G}$ denote those sets of $\mathcal{F}_s$ for which the above formula holds. Then it is clear that $\mathcal{G}$ is closed with respect to countable unions of disjoint sets and with respect to complements. Let $\mathcal{K}$ denote those sets which are finite intersections of sets of the form $(X(u) - X(r))^{-1}(B)$ where $B$ is a Borel set and $r \le u \le s$. Say a set $A$ of $\mathcal{K}$ is of the form
$$\cap_{i=1}^m (X(u_i) - X(r_i))^{-1}(B_i).$$
Then since disjoint increments are independent, linear combinations of the random variables $X(u_i) - X(r_i)$ are normally distributed. Consequently,
$$\big(X(u_1) - X(r_1), \cdots, X(u_m) - X(r_m), X(t) - X(s)\big)$$
is multivariate normal. The covariance matrix is of the form
$$\begin{pmatrix} A & 0 \\ 0 & t-s \end{pmatrix}$$
and so the random vector $(X(u_1) - X(r_1), \cdots, X(u_m) - X(r_m))$ and the random variable $X(t) - X(s)$ are independent. Consequently, $\mathcal{X}_A$ is independent of $X(t) - X(s)$ for any $A \in \mathcal{K}$. Then by the lemma on $\pi$ systems, Lemma 10.12.3 on Page 923, $\mathcal{F}_s \supseteq \mathcal{G} \supseteq \sigma(\mathcal{K}) = \mathcal{F}_s$. This proves the claim.
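The block-diagonal covariance structure used in the claim can be illustrated numerically: increments over disjoint intervals should have (sample) covariance near zero, and being jointly Gaussian they are then independent. A small sketch, with hypothetical helper names:

```python
import random
import statistics

rng = random.Random(1)
r, u, s, t, dt = 0.0, 0.3, 0.5, 1.0, 0.01

def increment(a, b):
    """X(b) - X(a), assembled from independent N(0, dt) steps."""
    n = round((b - a) / dt)
    return sum(rng.gauss(0.0, dt ** 0.5) for _ in range(n))

# r <= u <= s < t: the two increments live on disjoint time intervals.
xs, ys = [], []
for _ in range(20000):
    xs.append(increment(r, u))
    ys.append(increment(s, t))

cov = (statistics.fmean(x * y for x, y in zip(xs, ys))
       - statistics.fmean(xs) * statistics.fmean(ys))
print(round(cov, 2))  # sample covariance ≈ 0
```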
Thus
$$\int_A (X(t)-X(s))\,dP = \int_\Omega (X(t)-X(s))\,\mathcal{X}_A\,dP = P(A)\int_\Omega (X(t)-X(s))\,dP = 0,$$
which shows, since $A \in \mathcal{F}_s$ was arbitrary, that
$$E\big(X(t) \mid \mathcal{F}_s\big) = X(s)$$
and $\{X(t)\}$ is a martingale.
Now consider whether $\{X(t)^2 - t\}$ is a martingale. By assumption,
$$\mathcal{L}(X(t) - X(s)) = \mathcal{L}(X(t-s)) = N(0, t-s).$$
Then for $A \in \mathcal{F}_s$, the independence of $\mathcal{X}_A$ and $X(t) - X(s)$ shows
$$\int_A E\big((X(t)-X(s))^2 \mid \mathcal{F}_s\big)\,dP = \int_A (X(t)-X(s))^2\,dP = P(A)(t-s) = \int_A (t-s)\,dP,$$
and since $A \in \mathcal{F}_s$ is arbitrary,
$$E\big((X(t)-X(s))^2 \mid \mathcal{F}_s\big) = t-s,$$
and so the result follows from Lemma 62.8.2. This proves the theorem.
The next lemma is the main result from which Lévy's theorem will be established.
Lemma 62.8.4 Let $\{X(t)\}$ be a real continuous martingale adapted to the filtration $\mathcal{F}_t$ for $t \in [a,b]$, some interval, such that $E\big(X(t)^2\big) < \infty$ for all $t \in [a,b]$. Suppose also that $\{X(t)^2 - t\}$ is a martingale. Then for $\lambda$ real,
$$E\big(e^{i\lambda X(b)}\big) = E\big(e^{i\lambda X(a)}\big)\,e^{-(b-a)\frac{\lambda^2}{2}}.$$
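Before the proof, the conclusion can be spot-checked for Brownian motion started at $X(a) = 0$ with $a = 0$, where it reduces to the characteristic function identity $E\big(e^{i\lambda X(b)}\big) = e^{-b\lambda^2/2}$. A Monte Carlo sketch (the setup values are our assumptions, not from the text):

```python
import cmath
import random

rng = random.Random(3)
a, b, lam = 0.0, 1.0, 1.5
n = 50000

# Empirical characteristic function of X(b) ~ N(0, b - a), with X(a) = 0.
cf = sum(cmath.exp(1j * lam * rng.gauss(0.0, (b - a) ** 0.5))
         for _ in range(n)) / n
target = cmath.exp(-(b - a) * lam ** 2 / 2)
print(round(abs(cf - target), 2))  # ≈ 0
```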
Proof: Let $\lambda \in [-p,p]$ where, for most of the proof, $p$ is fixed but arbitrary. Let $\{t_k^n\}_{k=0}^{2^n}$ be uniform partitions of $[a,b]$ such that $t_k^n - t_{k-1}^n = \delta_n \equiv (b-a)/2^n$. Now for $\varepsilon > 0$ define a stopping time $\tau_{\varepsilon,n}$ to be the first time $t$ such that there exist $s_1, s_2 \in [a,t]$ with $|s_1 - s_2| < \delta_n$ but $|X(s_1) - X(s_2)| = \varepsilon$. If no such time exists, then $\tau_{\varepsilon,n} \equiv b$.
Then $\tau_{\varepsilon,n}$ really is a stopping time because, from continuity of $X(t)$ and denoting by $r_1, r_2$ elements of $\mathbb{Q}$,
$$[\tau_{\varepsilon,n} > t] = \bigcup_{m=1}^{\infty}\ \bigcap_{a \le r_1, r_2 \le t,\ |r_1 - r_2| \le \delta_n} \Big[\,|X(r_1) - X(r_2)| \le \varepsilon - \frac{1}{m}\,\Big] \in \mathcal{F}_t,$$
because to be in $[\tau_{\varepsilon,n} > t]$ means that up to time $t$ these absolute differences must always stay strictly below $\varepsilon$. Hence $[\tau_{\varepsilon,n} \le t] = \Omega \setminus [\tau_{\varepsilon,n} > t] \in \mathcal{F}_t$.
Now consider $[\tau_{\varepsilon,n} = b]$ for various $n$. By uniform continuity of each path on $[a,b]$, it follows that for each $\omega \in \Omega$, $\tau_{\varepsilon,n}(\omega) = b$ for all $n$ large enough. Thus
$$\emptyset = \cap_{n=1}^{\infty} [\tau_{\varepsilon,n} < b],$$
the sets in the intersection decreasing. Thus there exists $n(\varepsilon)$ such that
$$P\big([\tau_{\varepsilon,n(\varepsilon)} < b]\big) < \varepsilon. \tag{62.8.42}$$
Denote $\tau_{\varepsilon,n(\varepsilon)}$ as $\tau_\varepsilon$ for short; it will always be assumed that $n(\varepsilon)$ is at least this large and that $\lim_{\varepsilon \to 0+} n(\varepsilon) = \infty$. In addition to this, $n(\varepsilon)$ will also be large enough that
$$1 - \frac{\lambda^2}{2}\,\delta_{n(\varepsilon)} > 0$$
for all $\lambda \in [-p,p]$. To save on notation, $t_j$ will take the place of $t_j^{n(\varepsilon)}$. Then consider the stopping times $\tau_\varepsilon \wedge t_j$ for $j = 0, 1, \cdots, 2^{n(\varepsilon)}$.
Let $y_j \equiv X(\tau_\varepsilon \wedge t_j) - X(\tau_\varepsilon \wedge t_{j-1})$. It follows from the definition of the stopping time that
$$|y_j| \le \varepsilon \tag{62.8.43}$$
because both $\tau_\varepsilon \wedge t_j$ and $\tau_\varepsilon \wedge t_{j-1}$ are no larger than $\tau_\varepsilon$ and closer together than $\delta_{n(\varepsilon)}$; if $|y_j| > \varepsilon$, then $\tau_\varepsilon \le t_{j-1} \le t_j$, so $\tau_\varepsilon \wedge t_j = \tau_\varepsilon \wedge t_{j-1} = \tau_\varepsilon$ and $y_j$ would need to equal $0$.
By the optional stopping theorem, $\{X(\tau_\varepsilon \wedge t_j)\}_j$ is a martingale, as is also
$$\{X(\tau_\varepsilon \wedge t_j)^2 - \tau_\varepsilon \wedge t_j\}_j.$$
Thus for $A \in \mathcal{F}_{\tau_\varepsilon \wedge t_{j-1}}$,
$$\begin{aligned}
\int_A E\big(y_j^2 \mid \mathcal{F}_{\tau_\varepsilon \wedge t_{j-1}}\big)\,dP
&= \int_A E\big((X(\tau_\varepsilon \wedge t_j) - X(\tau_\varepsilon \wedge t_{j-1}))^2 \mid \mathcal{F}_{\tau_\varepsilon \wedge t_{j-1}}\big)\,dP\\
&= \int_A E\big(X(\tau_\varepsilon \wedge t_j)^2 \mid \mathcal{F}_{\tau_\varepsilon \wedge t_{j-1}}\big) + X(\tau_\varepsilon \wedge t_{j-1})^2\\
&\qquad - 2X(\tau_\varepsilon \wedge t_{j-1})E\big(X(\tau_\varepsilon \wedge t_j) \mid \mathcal{F}_{\tau_\varepsilon \wedge t_{j-1}}\big)\,dP\\
&= \int_A E\big(X(\tau_\varepsilon \wedge t_j)^2 - \tau_\varepsilon \wedge t_j \mid \mathcal{F}_{\tau_\varepsilon \wedge t_{j-1}}\big)\,dP
+ \int_A E\big(\tau_\varepsilon \wedge t_j \mid \mathcal{F}_{\tau_\varepsilon \wedge t_{j-1}}\big)\,dP\\
&\qquad + \int_A X(\tau_\varepsilon \wedge t_{j-1})^2\,dP - 2\int_A X(\tau_\varepsilon \wedge t_{j-1})^2\,dP\\
&= \int_A X(\tau_\varepsilon \wedge t_{j-1})^2\,dP - \int_A \tau_\varepsilon \wedge t_{j-1}\,dP
+ \int_A E\big(\tau_\varepsilon \wedge t_j \mid \mathcal{F}_{\tau_\varepsilon \wedge t_{j-1}}\big)\,dP\\
&\qquad + \int_A X(\tau_\varepsilon \wedge t_{j-1})^2\,dP - 2\int_A X(\tau_\varepsilon \wedge t_{j-1})^2\,dP\\
&= \int_A E\big(\tau_\varepsilon \wedge t_j - \tau_\varepsilon \wedge t_{j-1} \mid \mathcal{F}_{\tau_\varepsilon \wedge t_{j-1}}\big)\,dP
\le (t_j - t_{j-1})\,P(A)
\end{aligned}$$
since $\tau_\varepsilon \wedge t_j - \tau_\varepsilon \wedge t_{j-1} \le t_j - t_{j-1}$ pointwise. As $A \in \mathcal{F}_{\tau_\varepsilon \wedge t_{j-1}}$ was arbitrary,
$$E\big((X(\tau_\varepsilon \wedge t_j) - X(\tau_\varepsilon \wedge t_{j-1}))^2 \mid \mathcal{F}_{\tau_\varepsilon \wedge t_{j-1}}\big) \le t_j - t_{j-1} = \delta_{n(\varepsilon)}. \tag{62.8.44}$$
Also,
$$E\big(y_j \mid \mathcal{F}_{\tau_\varepsilon \wedge t_{j-1}}\big) = E\big(X(\tau_\varepsilon \wedge t_j) - X(\tau_\varepsilon \wedge t_{j-1}) \mid \mathcal{F}_{\tau_\varepsilon \wedge t_{j-1}}\big) = 0. \tag{62.8.45}$$
Now it is time to find $E\big(e^{i\lambda X(\tau_\varepsilon \wedge t_j)}\big)$:
$$E\big(e^{i\lambda X(\tau_\varepsilon \wedge t_j)}\big)
= E\big(e^{i\lambda (X(\tau_\varepsilon \wedge t_{j-1}) + y_j)}\big)
= E\Big(e^{i\lambda X(\tau_\varepsilon \wedge t_{j-1})}\,E\big(e^{i\lambda y_j} \mid \mathcal{F}_{\tau_\varepsilon \wedge t_{j-1}}\big)\Big). \tag{62.8.46}$$
Now let $o(1)$ denote any quantity which converges to $0$ as $\varepsilon \to 0$ uniformly for $\lambda \in [-p,p]$, and let $O(1)$ denote a quantity which is bounded as $\varepsilon \to 0$. Then from 62.8.45, one can expand $e^{i\lambda y_j}$ in its power series, which converges uniformly due to 62.8.43, and write 62.8.46 as
$$E\Big(e^{i\lambda X(\tau_\varepsilon \wedge t_{j-1})}\Big(1 - \frac{\lambda^2}{2}\,\sigma_j^2\,(1 + o(1))\Big)\Big),$$
where $\sigma_j^2 \equiv E\big(y_j^2 \mid \mathcal{F}_{\tau_\varepsilon \wedge t_{j-1}}\big)$. Then, noting from 62.8.44 that $\sigma_j^2$ is $o(1)$, it is routine to verify
$$1 - \frac{\lambda^2}{2}\,\sigma_j^2\,(1 + o(1)) = e^{-\frac{\lambda^2}{2}\sigma_j^2(1+o(1))}.$$
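The "routine to verify" step is the elementary fact that $1 - x = e^{-x(1+o(1))}$ as $x \to 0^+$, applied with $x = \frac{\lambda^2}{2}\sigma_j^2(1+o(1))$. Writing $1 - x = e^{-x\,r(x)}$, what is needed is $r(x) \to 1$; a quick numerical check:

```python
import math

def r(x):
    """Exponent ratio in 1 - x = exp(-x * r(x)); should tend to 1 as x -> 0+."""
    return -math.log(1.0 - x) / x

print(round(r(1e-2), 3), round(r(1e-4), 3))  # both ≈ 1
```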
Now this shows
$$E\big(e^{i\lambda X(\tau_\varepsilon \wedge t_j)}\big) = E\Big(e^{i\lambda X(\tau_\varepsilon \wedge t_{j-1})}\,e^{-\frac{\lambda^2}{2}\sigma_j^2(1+o(1))}\Big)$$