This section concerns the quadratic variation of a martingale. In fact, one can also consider the quadratic variation of a local martingale, which is more general, so this concept is defined first. We will generally assume M(0) = 0 since there is no real loss of generality in doing so. One can simply subtract M(0) otherwise.
Definition 62.2.1 Let {M(t)} be adapted to the normal filtration ℱ_{t} for t > 0. Then {M(t)} is a local martingale (submartingale) if there exist stopping times τ_{n} increasing to infinity such that for each n, the process M^{τ_{n}}(t) ≡ M(t ∧ τ_{n}) is a martingale (submartingale) with respect to the given filtration. The sequence of stopping times is called a localizing sequence. The martingale M^{τ_{n}} is called the stopped martingale. Exactly the same convention applies to a localized submartingale.
Proposition 62.2.2 If M(t) is a continuous local martingale (submartingale) for a normal filtration as above, M(0) = 0, then there exists a localizing sequence τ_{n} such that for each n the stopped martingale (submartingale) M^{τ_{n}} is uniformly bounded. Also, if M is a martingale (submartingale) and τ is a stopping time, then M^{τ} is also a martingale (submartingale). If τ_{n} is an increasing sequence of stopping times such that lim_{n→∞}τ_{n} = ∞, and for each τ_{n} and real valued stopping time δ there exists a function X of τ_{n} ∧ δ such that X(τ_{n} ∧ δ) is ℱ_{τ_{n}∧δ} measurable, then lim_{n→∞}X(τ_{n} ∧ δ) ≡ X(δ) exists for each ω and X(δ) is ℱ_{δ} measurable.
Proof: First consider the claim about M^{τ} being a martingale (submartingale) when M is. By the optional sampling theorem,

E(M^{τ}(t)|ℱ_{s}) = E(M(τ ∧ t)|ℱ_{s}) = M(τ ∧ t ∧ s) = M^{τ}(s).

The case where M is a submartingale is similar.
Next suppose σ_{n} is a localizing sequence for the local martingale (submartingale) M. Then define

η_{n} ≡ inf{t > 0 : ||M(t)|| > n}.

Therefore, by continuity of M, ||M(η_{n})|| ≤ n. Now consider τ_{n} ≡ η_{n} ∧ σ_{n}. This is an increasing sequence of stopping times. By continuity of M, it must be the case that η_{n} → ∞. Hence σ_{n} ∧ η_{n} → ∞.
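For intuition, the truncation time η_{n} above can be mimicked on a discretely sampled path: find the first sample index at which the path norm exceeds the level n, and freeze the path there. This is only an illustrative sketch with hypothetical helper names, not anything from the text:

```python
# Sketch: discrete analogue of eta_n = inf{t > 0 : |M(t)| > n} for a sampled
# scalar path.  Returns the first index where |M| exceeds the level, or None,
# which plays the role of inf(emptyset) = infinity.

def first_exceedance(path, level):
    for i, value in enumerate(path):
        if abs(value) > level:
            return i
    return None

# Stopping the path at eta_n keeps it bounded by the level (up to one discrete
# overshoot; for a continuous path one has |M(eta_n)| <= n exactly).
def stopped_path(path, stop_index):
    if stop_index is None:
        return list(path)
    return [path[min(i, stop_index)] for i in range(len(path))]

path = [0.0, 0.5, -1.2, 2.3, -0.7, 3.1]
eta_2 = first_exceedance(path, 2)    # first index with |M| > 2
print(eta_2)                         # index 3, since |2.3| > 2
print(stopped_path(path, eta_2))     # frozen at M(eta_2) from index 3 onward
```

The stopped path never moves after the exceedance index, which is the discrete shadow of the uniform boundedness of M^{η_{n}}.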
Finally, consider the last claim. Pick ω. Then X(τ_{n}(ω) ∧ δ(ω))(ω) is eventually constant as n → ∞ because for all n large enough, τ_{n}(ω) > δ(ω), and so this sequence of functions converges pointwise. That which it converges to, denoted by X(δ), is ℱ_{δ} measurable because each function ω → X(τ_{n}(ω) ∧ δ(ω))(ω) is ℱ_{δ∧τ_{n}} ⊆ ℱ_{δ} measurable. ■
One can also give a generalization of Lemma 62.1.5 to conclude that a local martingale must be constant or else must fail to be of bounded variation.
Corollary 62.2.3 Let ℱ_{t} be a normal filtration, let A(t), B(t) be adapted to ℱ_{t}, continuous, and increasing with A(0) = B(0) = 0, and suppose A(t) − B(t) ≡ M(t) is a local martingale. Then M(t) = A(t) − B(t) = 0 a.e. for all t.
Proof: Let {τ_{n}} be a localizing sequence for M. For given n, consider the stopped martingale M^{τ_{n}} = A^{τ_{n}} − B^{τ_{n}}. It is continuous and, being the difference of increasing functions, of bounded variation, so by Lemma 62.1.5 it equals 0 off N_{n}, a set of measure 0. Let N = ∪_{n}N_{n}. Then for ω ∉ N, M(τ_{n}(ω) ∧ t)(ω) = 0. Let n → ∞ to conclude that M(t)(ω) = 0. Therefore, M(t)(ω) = 0 for all t. ■
Recall Example 61.7.7 on Page 7032. For convenience, here is a version of what it
says.
Lemma 62.2.4 Let X(t) be continuous and adapted to a normal filtration ℱ_{t} and let η be a stopping time. Then if K is a closed set with 0 ∉ K,

τ ≡ inf{t > η : X(t) ∈ K}

is also a stopping time.
Proof: First consider Y(t) = X(t ∨ η) − X(η). I claim that Y(t) is adapted to ℱ_{t}. Consider U an open set and [Y(t) ∈ U]. Is it in ℱ_{t}? We know it is in ℱ_{t∨η}. It equals

([Y(t) ∈ U] ∩ [η ≤ t]) ∪ ([Y(t) ∈ U] ∩ [η > t])

Consider the second of these sets. It equals

([X(η) − X(η) ∈ U] ∩ [η > t])

If 0 ∈ U, then it reduces to [η > t] ∈ ℱ_{t}. If 0 ∉ U, then it reduces to ∅, still in ℱ_{t}. Next consider the first set. It equals

[X(t ∨ η) − X(η) ∈ U] ∩ [η ≤ t] = [X(t ∨ η) − X(η) ∈ U] ∩ [t ∨ η ≤ t] ∈ ℱ_{t}

from the definition of ℱ_{t∨η}. (You know that [X(t ∨ η) − X(η) ∈ U] ∈ ℱ_{t∨η} and so when this is intersected with [t ∨ η ≤ t] one obtains a set in ℱ_{t}. This is what it means to be in ℱ_{t∨η}.) Now τ is just the first hitting time of Y(t) of the closed set. ■
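Purely for intuition, here is what a time of the form τ = inf{t > η : X(t) ∈ K} looks like on a sampled path, with K a closed interval. This is a minimal sketch with hypothetical names; it illustrates the definition only, not the measurability argument above:

```python
# Sketch: discrete analogue of tau = inf{t > eta : X(t) in K}, where K is the
# closed interval [a, b].  Returns the first index strictly after eta_index
# whose value lies in K, or None (playing the role of inf(emptyset) = infinity).

def hitting_after(path, eta_index, a, b):
    for i in range(eta_index + 1, len(path)):
        if a <= path[i] <= b:
            return i
    return None

path = [0.0, 0.8, 0.2, 1.1, 0.4, 1.5]
print(hitting_after(path, 1, 1.0, 2.0))   # 3: first index after 1 with value in [1, 2]
```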
Proposition 62.2.5 Let M(t) be a continuous local martingale for t ∈ [0,T] having values in H, a separable Hilbert space, adapted to the normal filtration {ℱ_{t}}, such that M(0) = 0. Then there exists a unique continuous, increasing, nonnegative, local submartingale [M](t), called the quadratic variation, such that

||M(t)||^{2} − [M](t)

is a real local martingale and [M](0) = 0. Here t ∈ [0,T]. If δ is any stopping time,

[M^{δ}] = [M]^{δ}
Proof: First it is necessary to define some stopping times. Define stopping times τ_{0}^{n} ≡ η_{0}^{n} ≡ 0,

η_{k+1}^{n} ≡ inf{s > η_{k}^{n} : ||M(s) − M(η_{k}^{n})|| = 2^{−n}},  τ_{k}^{n} ≡ η_{k}^{n} ∧ T,

where inf ∅ ≡ ∞. These are stopping times by Example 61.7.7 on Page 7032. See also Lemma 62.2.4. Then for t > 0 and δ any stopping time, and fixed ω,

t ∧ δ ∈ I_{k}(ω) for some k, where I_{0}(ω) ≡ [τ_{0}^{n}(ω),τ_{1}^{n}(ω)], I_{k}(ω) ≡ (τ_{k}^{n}(ω),τ_{k+1}^{n}(ω)].
Here is why. For fixed n, the sequence {τ_{k}^{n}(ω)}_{k=1}^{∞} eventually equals T for all k sufficiently large. This is because if it did not, it would converge, being bounded above by T, and then by continuity of M, {M(τ_{k}^{n}(ω))}_{k=1}^{∞} would be a Cauchy sequence, contrary to the requirement that ||M(η_{k+1}^{n}) − M(η_{k}^{n})|| = 2^{−n} whenever η_{k+1}^{n} < ∞. Note also that

||M(t ∧ δ ∧ τ_{k+1}^{n}) − M(t ∧ δ ∧ τ_{k}^{n})|| = ||M^{δ}(t ∧ τ_{k+1}^{n}) − M^{δ}(t ∧ τ_{k}^{n})|| ≤ 2^{−n}

You can see this is the case by considering the cases t ∧ δ ≥ τ_{k+1}^{n}, t ∧ δ ∈ [τ_{k}^{n},τ_{k+1}^{n}), and t ∧ δ < τ_{k}^{n}. It is only this approximation property and the fact that the τ_{k}^{n} partition [0,T] which are important in the following argument.
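On a sampled path, the times η_{k}^{n} can be generated recursively: η_{k+1}^{n} is the first time after η_{k}^{n} at which the path has moved distance 2^{−n} from its value at η_{k}^{n}. A sketch of this construction follows (on a discrete grid one tests ≥ 2^{−n} rather than exact equality, and the helper name is mine):

```python
# Sketch: the dyadic stopping times eta^n_k for a sampled scalar path.
# eta^n_{k+1} = first index after eta^n_k with |M(s) - M(eta^n_k)| >= 2^{-n};
# the recursion stops when no such index exists (inf of the empty set).

def dyadic_times(path, n):
    times = [0]                      # eta^n_0 = 0
    threshold = 2.0 ** (-n)
    while True:
        start = times[-1]
        nxt = None
        for i in range(start + 1, len(path)):
            if abs(path[i] - path[start]) >= threshold:
                nxt = i
                break
        if nxt is None:              # the path never moves that far again
            break
        times.append(nxt)
    return times

path = [0.0, 0.3, 0.6, 0.55, 1.1, 1.05]
print(dyadic_times(path, 1))         # [0, 2, 4] with threshold 2^{-1} = 0.5
```

Between consecutive times in the output, the path moves by less than the threshold, which is exactly the approximation property used in the argument.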
Now let α_{n} be a localizing sequence such that M^{α_{n}} is bounded as in Proposition 62.2.2. Thus M^{α_{n}}(t) ∈ L^{2}(Ω) and this is all that is needed. In what follows, let δ be a stopping time and denote M^{α_{p}∧δ} by M to save notation. Thus M will be uniformly bounded, and from the definition of the stopping times τ_{k}^{n}, for t ∈ [0,T],

M(t) ≡ ∑_{k≥0} M(t ∧ τ_{k+1}^{n}) − M(t ∧ τ_{k}^{n}),  (62.2.4)

and the terms of the series are eventually 0, as soon as η_{k}^{n} = ∞.
Therefore,

||M(t)||^{2} = ||∑_{k≥0} M(t ∧ τ_{k+1}^{n}) − M(t ∧ τ_{k}^{n})||^{2}

Then this equals

∑_{k≥0} ||M(t ∧ τ_{k+1}^{n}) − M(t ∧ τ_{k}^{n})||^{2}
+ ∑_{j≠k} ((M(t ∧ τ_{k+1}^{n}) − M(t ∧ τ_{k}^{n})),(M(t ∧ τ_{j+1}^{n}) − M(t ∧ τ_{j}^{n})))  (62.2.5)
Consider the second sum. It equals

2∑_{k≥0}∑_{j=0}^{k−1} ((M(t ∧ τ_{k+1}^{n}) − M(t ∧ τ_{k}^{n})),(M(t ∧ τ_{j+1}^{n}) − M(t ∧ τ_{j}^{n})))

= 2∑_{k≥0} ((M(t ∧ τ_{k+1}^{n}) − M(t ∧ τ_{k}^{n})), ∑_{j=0}^{k−1}(M(t ∧ τ_{j+1}^{n}) − M(t ∧ τ_{j}^{n})))

= 2∑_{k≥0} ((M(t ∧ τ_{k+1}^{n}) − M(t ∧ τ_{k}^{n})), M(t ∧ τ_{k}^{n}))
This last sum equals P_{n}(t), defined as

2∑_{k≥0} (M(τ_{k}^{n}),(M(t ∧ τ_{k+1}^{n}) − M(t ∧ τ_{k}^{n}))) ≡ P_{n}(t)  (62.2.6)

This is because in the k^{th} term, if t ≥ τ_{k}^{n}, then it reduces to

(M(τ_{k}^{n}),(M(t ∧ τ_{k+1}^{n}) − M(t ∧ τ_{k}^{n})))

while if t < τ_{k}^{n}, then the term reduces to 0, which is also the same as

(M(τ_{k}^{n}),(M(t ∧ τ_{k+1}^{n}) − M(t ∧ τ_{k}^{n}))).
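The algebra behind 62.2.5 and 62.2.6 is just the expansion of a squared telescoping sum. In the scalar case with M_{0} = 0 it reads M_{q}^{2} = ∑_{k}(M_{k+1} − M_{k})^{2} + 2∑_{k} M_{k}(M_{k+1} − M_{k}), which can be checked numerically. An illustrative sketch (the helper is mine, not part of the proof):

```python
# Numerical check of the identity behind 62.2.5 / 62.2.6 in the scalar case:
# with M_0 = 0,  M_q^2 = sum_k (M_{k+1} - M_k)^2 + 2 sum_k M_k (M_{k+1} - M_k).

def expand_square(values):
    """Return (sum of squared increments, twice the 'P_n'-style cross sum)."""
    squares = sum((b - a) ** 2 for a, b in zip(values, values[1:]))
    cross = 2 * sum(a * (b - a) for a, b in zip(values, values[1:]))
    return squares, cross

values = [0.0, 0.4, -0.1, 0.9, 0.7]   # a path sampled at the tau^n_k, M(0) = 0
squares, cross = expand_square(values)
print(abs(values[-1] ** 2 - (squares + cross)) < 1e-12)   # True: identity holds
```

The first return value corresponds to the sum of squared increments (the candidate for [M](t)) and the second to P_{n}(t).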
This is a finite sum because eventually, for large enough k, τ_{k}^{n} = T. However, the number of nonzero terms depends on ω. This is not a good thing. However, a little more can be said. In fact, the sum also converges in L^{2}(Ω). Say ||M(t,ω)|| ≤ C. Then

E(|∑_{k=p}^{q} (M(τ_{k}^{n}),(M(t ∧ τ_{k+1}^{n}) − M(t ∧ τ_{k}^{n})))|^{2})

= ∑_{k=p}^{q} E((M(τ_{k}^{n}),(M(t ∧ τ_{k+1}^{n}) − M(t ∧ τ_{k}^{n})))^{2}) + mixed terms  (62.2.7)
Letting Δ_{k} ≡ M(t ∧ τ_{k+1}^{n}) − M(t ∧ τ_{k}^{n}), a typical mixed term with j < k satisfies

E((M(τ_{j}^{n}),Δ_{j})(M(τ_{k}^{n}),Δ_{k})) = E(E((M(τ_{j}^{n}),Δ_{j})(M(τ_{k}^{n}),Δ_{k})|ℱ_{τ_{k}^{n}}))
= E((M(τ_{j}^{n}),Δ_{j})(M(τ_{k}^{n}),E(Δ_{k}|ℱ_{τ_{k}^{n}}))) = 0
Now since the mixed terms equal 0, it follows from 62.2.7 that the expression there is dominated by

C^{2}∑_{k=p}^{q} E(||M(t ∧ τ_{k+1}^{n}) − M(t ∧ τ_{k}^{n})||^{2})

Using a similar manipulation to what was just done to show the mixed terms equal 0, this equals

C^{2}∑_{k=p}^{q} (E(||M(t ∧ τ_{k+1}^{n})||^{2}) − E(||M(t ∧ τ_{k}^{n})||^{2}))
≤ C^{2}E(||M(t ∧ τ_{q+1}^{n})||^{2} − ||M(t ∧ τ_{p}^{n})||^{2})

The integrand converges to 0 as p,q → ∞, and the uniform bound on M allows a use of the dominated convergence theorem. Thus the partial sums of the series of 62.2.6 converge in L^{2}(Ω) as claimed.
By adding in the values of {τ_{k}^{n+1}}, P_{n}(t) can be written in the form

2∑_{k≥0} (M(τ_{k}^{n+1′}),(M(t ∧ τ_{k+1}^{n+1}) − M(t ∧ τ_{k}^{n+1})))

where τ_{k}^{n+1′} has some repeats. From the construction,

||M(τ_{k}^{n+1′}) − M(τ_{k}^{n+1})|| ≤ 2^{−(n+1)}

Thus

P_{n}(t) − P_{n+1}(t) = 2∑_{k≥0} ((M(τ_{k}^{n+1′}) − M(τ_{k}^{n+1})),(M(t ∧ τ_{k+1}^{n+1}) − M(t ∧ τ_{k}^{n+1})))

and so from Proposition 62.1.4 applied to ξ_{k} ≡ M(τ_{k}^{n+1′}) − M(τ_{k}^{n+1}),

E(|P_{n}(t) − P_{n+1}(t)|^{2}) ≤ 2^{−2n}E(||M(t)||^{2}).  (62.2.8)
Now t → P_{n}(t) is continuous because it is a finite sum of continuous functions. It is also the case that {P_{n}(t)} is a martingale. To see this, use Lemma 62.1.1. Let σ be a stopping time having two values. Then using Corollary 62.1.3 and the Doob optional sampling theorem, Theorem 61.7.11,

E(∑_{k=0}^{q} (M(τ_{k}^{n}),(M(σ ∧ τ_{k+1}^{n}) − M(σ ∧ τ_{k}^{n}))))
= ∑_{k=0}^{q} E((M(τ_{k}^{n}),(M(σ ∧ τ_{k+1}^{n}) − M(σ ∧ τ_{k}^{n}))))
= ∑_{k=0}^{q} E(E((M(τ_{k}^{n}),(M(σ ∧ τ_{k+1}^{n}) − M(σ ∧ τ_{k}^{n})))|ℱ_{τ_{k}^{n}}))
= ∑_{k=0}^{q} E((M(τ_{k}^{n}),E(M(σ ∧ τ_{k+1}^{n}) − M(σ ∧ τ_{k}^{n})|ℱ_{τ_{k}^{n}})))
= ∑_{k=0}^{q} E((M(τ_{k}^{n}),M(σ ∧ τ_{k+1}^{n} ∧ τ_{k}^{n}) − M(σ ∧ τ_{k}^{n}))) = 0

Note the Doob theorem applies because σ ∧ τ_{k+1}^{n} is a bounded stopping time due to the fact that σ has only two values. Similarly,
E(∑_{k=0}^{q} (M(τ_{k}^{n}),(M(t ∧ τ_{k+1}^{n}) − M(t ∧ τ_{k}^{n}))))
= ∑_{k=0}^{q} E((M(τ_{k}^{n}),(M(t ∧ τ_{k+1}^{n}) − M(t ∧ τ_{k}^{n}))))
= ∑_{k=0}^{q} E(E((M(τ_{k}^{n}),(M(t ∧ τ_{k+1}^{n}) − M(t ∧ τ_{k}^{n})))|ℱ_{τ_{k}^{n}}))
= ∑_{k=0}^{q} E((M(τ_{k}^{n}),E(M(t ∧ τ_{k+1}^{n}) − M(t ∧ τ_{k}^{n})|ℱ_{τ_{k}^{n}})))
= ∑_{k=0}^{q} E((M(τ_{k}^{n}),M(t ∧ τ_{k+1}^{n} ∧ τ_{k}^{n}) − M(t ∧ τ_{k}^{n}))) = 0
It follows each partial sum for P_{n}(t) is a martingale. As shown above, these partial sums converge in L^{2}(Ω), and so it follows that P_{n}(t) is also a martingale. Note the Doob theorem applies because t ∧ τ_{k+1}^{n} is a bounded stopping time.
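The object [M](t) being constructed is the L^{2} limit of sums of squared increments. For the standard example of real Brownian motion, [B](t) = t (a standard fact, not proved in this section), which a simulation can illustrate; the function name below is mine:

```python
import random

# Sketch: for Brownian motion B, sums of squared increments over a fine
# partition of [0, t] approximate the quadratic variation [B](t) = t.

def brownian_quadratic_variation(t, n_steps, seed=0):
    rng = random.Random(seed)            # fixed seed for reproducibility
    dt = t / n_steps
    increments = [rng.gauss(0.0, dt ** 0.5) for _ in range(n_steps)]
    return sum(db * db for db in increments)

qv = brownian_quadratic_variation(t=1.0, n_steps=20000)
print(qv)   # typically close to [B](1) = 1
```

The fluctuation of the sum around t has standard deviation of order (2/n_steps)^{1/2}, which mirrors the 2^{−2n} factor in the Cauchy estimate 62.2.8.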
I want to argue that P_{n} is a Cauchy sequence in ℳ_{T}^{2}