Of course there is no change in anything if $M$ has its values in a Hilbert space $W$ while $Y$ has its values in its dual space $W'$. Then one defines
$$\int_0^t \langle Y, dM\rangle_{W',W}$$
by analogy to the above for $Y$ an elementary function, that is, an adapted step function. We use the following definition.
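To recall what "by analogy to the above" refers to, here is a sketch of the elementary case; the partition points $t_k$ and the values $Y_k$ below are generic notation, not taken verbatim from the earlier development. For
$$
% sketch only: Y an adapted step function, each Y_k being F_{t_k} measurable and W' valued
Y(t) = \sum_{k=0}^{m-1} Y_k \, \mathcal{X}_{(t_k,t_{k+1}]}(t),
\qquad
\int_0^t \langle Y, dM\rangle_{W',W} \equiv \sum_{k=0}^{m-1} \left\langle Y_k,\, M(t\wedge t_{k+1}) - M(t\wedge t_k)\right\rangle_{W',W},
$$
exactly as in the real valued case, with the duality pairing in place of the product.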
Definition 66.0.18 Let $\tau_p$ be an increasing sequence of stopping times for which $M^{\tau_p}$ is an $L^2$ martingale. If $M$ is already an $L^2$ martingale, simply let $\tau_p \equiv \infty$. Let $\mathcal{G}$ denote those functions $Y$ which are adapted and for which there is a sequence of elementary functions $\{Y_n\}$ satisfying $\left\Vert Y_n(t)\right\Vert_{W'} M^* \in L^2(\Omega)$ for each $t$ with
$$\lim_{n\to\infty} E\left(\int_0^T \left\Vert Y - Y_n\right\Vert_{W'}^2 \, d[M]^{\tau_p}\right) = 0$$
for each $\tau_p$.
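In terms of the norm notation used in Lemma 66.0.20 below, this condition just says that the elementary functions approximate $Y$ in the indicated space; a restatement, assuming the usual meaning of that norm:
$$
\left\Vert Y - Y_n\right\Vert_{L^2\left(\Omega;\, L^2\left([0,T];\, W',\, d[M^{\tau_p}]\right)\right)}^2
= E\left(\int_0^T \left\Vert Y - Y_n\right\Vert_{W'}^2 \, d[M]^{\tau_p}\right) \to 0.
$$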
Then exactly the same arguments given above yield the following simple generalizations.
Definition 66.0.19 Let $Y \in \mathcal{G}$. Then
$$\int_0^t \langle Y, dM^{\tau_p}\rangle_{W',W} \equiv \lim_{n\to\infty} \int_0^t \langle Y_n, dM^{\tau_p}\rangle_{W',W} \text{ in } L^2(\Omega).$$
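That the limit exists follows from the same argument as before; a brief sketch, using only the inequality already available for elementary functions: by linearity,
$$
E\left(\left|\int_0^t \langle Y_n - Y_m, dM^{\tau_p}\rangle_{W',W}\right|^2\right)
\le E\left(\int_0^T \left\Vert Y_n - Y_m\right\Vert_{W'}^2 \, d[M]^{\tau_p}\right) \to 0
$$
as $n,m\to\infty$, so the integrals of the $Y_n$ form a Cauchy sequence in $L^2(\Omega)$, and the same estimate shows the limit does not depend on the choice of approximating sequence.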
Lemma 66.0.20 The above definition is well defined. Also, $\int_0^t \langle Y, dM^{\tau_p}\rangle_{W',W}$ is a continuous martingale. The inequality
$$
E\left(\left|\int_0^t \langle Y, dM^{\tau_p}\rangle_{W',W}\right|^2\right)
\le E\left(\int_0^t \left\Vert Y\right\Vert_{W'}^2 \, d[M]^{\tau_p}\right)
$$
is also valid. For any sequence of elementary functions $\{Y_n\}$, $\left\Vert Y_n(t)\right\Vert_{W'} M^* \in L^2(\Omega)$,
$$
\left\Vert Y_n - Y\right\Vert_{L^2\left(\Omega;\, L^2\left([0,T];\, W',\, d[M^{\tau_p}]\right)\right)} \to 0,
$$
there exists a subsequence, still denoted as $\{Y_n\}$, of elementary functions for which $\int_0^t \langle Y_n, dM^{\tau_p}\rangle_{W',W}$ converges uniformly to $\int_0^t \langle Y, dM^{\tau_p}\rangle_{W',W}$ on $[0,T]$ for $\omega$ off some set of measure zero. In addition, the quadratic variation satisfies the following inequality.
$$
\left[\int_0^{(\cdot)} \langle Y, dM^{\tau_p}\rangle_{W',W}\right](t)
\le \int_0^t \left\Vert Y\right\Vert_{W'}^2 \, d[M]^{\tau_p}
\le \int_0^t \left\Vert Y\right\Vert_{W'}^2 \, d[M]
$$
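The uniform convergence of a subsequence is obtained exactly as before; a sketch of the standard argument, applying Doob's maximal inequality to the continuous $L^2$ martingale $t \mapsto \int_0^t \langle Y_n - Y, dM^{\tau_p}\rangle_{W',W}$ together with the inequality of the lemma:
$$
E\left(\sup_{t\le T}\left|\int_0^t \langle Y_n - Y, dM^{\tau_p}\rangle_{W',W}\right|^2\right)
\le 4\, E\left(\int_0^T \left\Vert Y_n - Y\right\Vert_{W'}^2 \, d[M]^{\tau_p}\right).
$$
Choosing a subsequence for which the right side is no more than $2^{-n}$ and applying the Borel Cantelli lemma gives uniform convergence on $[0,T]$ for $\omega$ off a set of measure zero.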
As before, you can consider the case where you only know $\mathcal{X}_{[0,\tau_p]} Y \in \mathcal{G}$. This yields a local martingale as before.
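A sketch of one natural way to read "as before", following the same localization pattern as in the earlier development (the formula below is only a summary of that pattern, not new content): for $t \le \tau_p$ one defines
$$
\int_0^t \langle Y, dM\rangle_{W',W} \equiv \int_0^t \langle \mathcal{X}_{[0,\tau_p]} Y, dM^{\tau_p}\rangle_{W',W},
$$
the definitions for different $p$ agreeing on $[0,\tau_p \wedge \tau_q]$, so the resulting process stopped at $\tau_p$ is a martingale for each $p$ and hence the process is a local martingale with localizing sequence $\{\tau_p\}$.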