Now let $U$ be a real separable Hilbert space with orthonormal basis $\{g_i\}$. Take $H = L^2(0,\infty; U)$ in the above construction. For $h, g \in L^2(0,\infty; U)$,
\[
E(W(h)W(g)) = (h,g)_{L^2(0,\infty,U)} \equiv (h,g)_H.
\]
Here each $W(g)$ is a real valued normal random variable with mean $0$ and variance $|g|^2_{L^2(0,\infty,U)}$, every vector $(W(h_1), \cdots, W(h_n))$ being generalized multivariate normal. Let
\[
\psi_k(t) = W(\mathcal{X}_{(0,t)} g_k).
\]
Then this is a real valued random variable. Disjoint increments are independent in the same way as before. Also
\[
E(\psi_k(t)\psi_j(s)) = E\left(W(\mathcal{X}_{(0,t)} g_k)\, W(\mathcal{X}_{(0,s)} g_j)\right) \equiv \int_0^\infty \mathcal{X}_{(0,t\wedge s)}(\tau)\, (g_k, g_j)_U \, d\tau = 0 \quad (63.6.36)
\]
if $j \neq k$. Thus the random variables $\psi_k(t)$ and $\psi_j(s)$ are independent. This is because, from the construction, $(\psi_k(t), \psi_j(s))$ is normally distributed and its covariance matrix is diagonal. Also
\[
\psi_k(t) - \psi_k(s) = W(\mathcal{X}_{(0,t)} g_k) - W(\mathcal{X}_{(0,s)} g_k) = W(\mathcal{X}_{(s,t)} g_k),
\]
\[
\psi_k(t-s) \equiv W(\mathcal{X}_{(0,t-s)} g_k),
\]
so $\psi_k(t-s)$ has the same mean, $0$, and variance, $|t-s|$, as $\psi_k(t) - \psi_k(s)$. Thus these have the same distribution because both are normally distributed.
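The variance claim can be verified directly from the isometry $E(W(h)W(g)) = (h,g)_H$ above (a short check, immediate from the definitions): for $s < t$,
\[
E\left((\psi_k(t) - \psi_k(s))^2\right) = |\mathcal{X}_{(s,t)} g_k|^2_{L^2(0,\infty,U)} = \int_s^t |g_k|_U^2 \, d\tau = t - s,
\]
since $|g_k|_U = 1$.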
Now let $J$ be a Hilbert Schmidt map from $U$ to $H$. Then consider
\[
W(t) = \sum_k \psi_k(t) J g_k. \quad (63.6.37)
\]
This has values in $H$. It is shown below that the series converges in $L^2(\Omega; H)$. Recall the definition of a $Q$ Wiener process.
Definition 63.6.7 Let $W(t)$ be a stochastic process with values in $H$, a real separable Hilbert space, which has the properties that $t \to W(t,\omega)$ is continuous, whenever $t_1 < t_2 < \cdots < t_m$, the increments
\[
\{W(t_i) - W(t_{i-1})\}
\]
are independent, $W(0) = 0$, and whenever $s < t$,
\[
\mathcal{L}(W(t) - W(s)) = N(0, (t-s)Q)
\]
which means that whenever $h \in H$,
\[
\mathcal{L}((h, W(t) - W(s))) = N(0, (t-s)(Qh, h)).
\]
Also
\[
E((h_1, W(t) - W(s))(h_2, W(t) - W(s))) = (Qh_1, h_2)(t-s).
\]
Here $Q$ is a nonnegative trace class operator. Recall this means
\[
Q = \sum_{i=1}^\infty \lambda_i e_i \otimes e_i
\]
where $\{e_i\}$ is a complete orthonormal basis, $\lambda_i \geq 0$, and
\[
\sum_{i=1}^\infty \lambda_i < \infty.
\]
Such a stochastic process is called a $Q$ Wiener process. In the case where these have values in $\mathbb{R}^n$, $tQ$ ends up being the covariance matrix of $W(t)$.
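The finite dimensional case can be made concrete. The following is a minimal numerical sketch (not from the text; the matrix $J$ and all names are illustrative), taking $U = \mathbb{R}^m$ and $H = \mathbb{R}^n$ with $J$ an $n \times m$ matrix standing in for a Hilbert Schmidt map. It checks that $Q = JJ^*$ is symmetric, nonnegative, and has trace equal to the squared Hilbert Schmidt (Frobenius) norm of $J$, and that an increment $W(t) - W(s) \sim N(0, (t-s)Q)$ can be sampled as $\sqrt{t-s}\, J\xi$ with $\xi$ standard normal, since $\operatorname{Cov}(J\xi) = JJ^*$.

```python
import numpy as np

# Finite-dimensional stand-in: U = R^m, H = R^n, J an n x m matrix
# playing the role of a Hilbert-Schmidt map (illustrative choice).
rng = np.random.default_rng(0)
n, m = 4, 3
J = rng.standard_normal((n, m))

# Q = J J^T is symmetric nonnegative, and tr Q = sum_j |J g_j|^2,
# the squared Hilbert-Schmidt (Frobenius) norm of J.
Q = J @ J.T
assert np.allclose(Q, Q.T)
assert np.all(np.linalg.eigvalsh(Q) >= -1e-8)
assert np.isclose(np.trace(Q), np.linalg.norm(J, "fro") ** 2)

def sample_increment(t, s, size):
    """Sample W(t) - W(s) ~ N(0, (t-s) Q) as sqrt(t-s) * J @ xi,
    xi standard normal in R^m, since Cov(J xi) = J J^T = Q."""
    xi = rng.standard_normal((m, size))
    return np.sqrt(t - s) * (J @ xi)

X = sample_increment(t=2.0, s=1.0, size=200_000)
emp_cov = X @ X.T / X.shape[1]
# empirical covariance approaches (t - s) Q = Q for large samples
print(np.max(np.abs(emp_cov - Q)))
```

This mirrors the remark above that in $\mathbb{R}^n$, $tQ$ is the covariance matrix of $W(t)$.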
Proposition 63.6.8 The process defined in 63.6.37 is a $Q$ Wiener process in $H$ where $Q = JJ^*$.
Proof: First, why does the sum converge? Consider the sum for an increment in time; letting $t_{i-1} = 0$ then gives the convergence of the sum for a given $t$. Consider the difference of two partial sums. Since the increments $\psi_k(t_i) - \psi_k(t_{i-1})$ are independent with mean $0$ and variance $t_i - t_{i-1}$,
\[
E\left(\Big|\sum_{k=m}^n (\psi_k(t_i) - \psi_k(t_{i-1})) J g_k\Big|_H^2\right) = (t_i - t_{i-1}) \sum_{k=m}^n |J g_k|_H^2
\]
and this converges to $0$ as $m, n \to \infty$ since $J$ is Hilbert Schmidt. Thus the sum converges in $L^2(\Omega; H)$. Why are the disjoint increments independent?
Let $\lambda_k \in H$. Consider $t_0 < t_1 < \cdots < t_n$. Is it the case that
\[
E\left(\exp\Big(i \sum_{k=1}^n (\lambda_k, W(t_k) - W(t_{k-1}))\Big)\right) = \prod_{k=1}^n E\left(\exp\big(i(\lambda_k, W(t_k) - W(t_{k-1}))\big)\right)? \quad (63.6.38)
\]
Start with the left. There are finitely many increments involved and so it can be assumed that for each $k$ one can take $m \to \infty$ such that the partial sums up to $m$ in the definition of $W(t_k) - W(t_{k-1})$ converge pointwise a.e. Thus
\[
E\left(\exp\Big(i \sum_{k=1}^n (\lambda_k, W(t_k) - W(t_{k-1}))\Big)\right)
= \lim_{m\to\infty} E\left(\exp\Big(i \sum_{k=1}^n \Big(\lambda_k, \sum_{j=1}^m (\psi_j(t_k) - \psi_j(t_{k-1})) J g_j\Big)\Big)\right)
\]
\[
= \lim_{m\to\infty} E\left(\exp\Big(\sum_{k=1}^n \sum_{j=1}^m i\,(\lambda_k, (\psi_j(t_k) - \psi_j(t_{k-1})) J g_j)\Big)\right)
\]
and for each $h \in H$, the vector whose $k^{th}$ component is $\sum_{j=1}^m (\psi_j(t_k) - \psi_j(t_{k-1}))(J g_j, h)_H$ for $k = 1, 2, \cdots, n$ is normally distributed and its covariance matrix is diagonal.
Hence these are independent random variables, as hoped. Now one can pass to the limit as $m \to \infty$. Since the random variables $(W(t_k) - W(t_{k-1}), h)_H$ are independent for every $h \in H$, it follows that the random variables $W(t_k) - W(t_{k-1})$ are also independent.
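To make the limit explicit (a step compressed in the original argument): each factor in the limit is the characteristic function of a real Gaussian, so for a single increment,
\[
E\left(\exp\big(i(\lambda, W(t) - W(s))\big)\right) = \lim_{m\to\infty} \exp\Big(-\frac{1}{2}(t-s) \sum_{j=1}^m (\lambda, J g_j)_H^2\Big) = \exp\Big(-\frac{1}{2}(t-s)\, |J^*\lambda|_U^2\Big),
\]
since $(\lambda, J g_j)_H = (J^*\lambda, g_j)_U$ and $\{g_j\}$ is an orthonormal basis of $U$.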
What of the Holder continuity? In the above computation for independence, as a special case, for $\lambda \in H$,
\[
E(\exp(i(\lambda, W(t) - W(s)))) = \exp\Big(-\frac{1}{2} |J^*\lambda|_U^2 (t-s)\Big). \quad (63.6.40)
\]
In particular, replacing $\lambda$ with $r\lambda$ for $r$ real,
\[
E(\exp(ir(\lambda, W(t) - W(s)))) = \exp\Big(-\frac{1}{2} r^2 |J^*\lambda|_U^2 (t-s)\Big).
\]
Now differentiate with respect to $r$ and then take $r = 0$ as before to obtain finally that
\[
E\big((\lambda, W(t) - W(s))^{2m}\big) \leq C_m |J^*\lambda|_U^{2m} |t-s|^m = C_m (Q\lambda, \lambda)^m |t-s|^m.
\]
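The constant $C_m$ can be identified by the standard Gaussian moment computation, not spelled out here: if $X \sim N(0, \sigma^2)$, differentiating $E(e^{irX}) = e^{-r^2\sigma^2/2}$ a total of $2m$ times and setting $r = 0$ gives
\[
E(X^{2m}) = \frac{(2m)!}{2^m m!}\, \sigma^{2m} = (2m-1)!!\, \sigma^{2m},
\]
so with $\sigma^2 = |J^*\lambda|_U^2 (t-s)$ one may take $C_m = (2m-1)!!$.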
Then letting $\{h_k\}$ be an orthonormal basis for $H$, and using the above inequality with Minkowski's inequality,
\[
\Big(E\big(|W(t) - W(s)|^{2m}\big)\Big)^{1/m} = \Bigg(E\Bigg(\Big[\sum_{k=1}^\infty (W(t) - W(s), h_k)^2\Big]^m\Bigg)\Bigg)^{1/m}
\]
\[
\leq \sum_{k=1}^\infty \Big[E\big((W(t) - W(s), h_k)^{2m}\big)\Big]^{1/m} \leq \sum_{k=1}^\infty \big(C_m |t-s|^m |J^* h_k|_U^{2m}\big)^{1/m}
\]
\[
= C_m^{1/m} |t-s| \sum_{k=1}^\infty |J^* h_k|_U^2 = C_m^{1/m} |t-s| \sum_{k=1}^\infty \sum_{j=1}^\infty (J^* h_k, g_j)_U^2
\]
\[
= C_m^{1/m} |t-s| \sum_{j=1}^\infty \sum_{k=1}^\infty (h_k, J g_j)_H^2 = C_m^{1/m} |t-s| \sum_{j=1}^\infty |J g_j|_H^2
\]
and since $J$ is Hilbert Schmidt, modifying the constant yields
\[
E\big(|W(t) - W(s)|^{2m}\big) \leq C_m |t-s|^m.
\]
By the Kolmogorov Centsov theorem, Theorem 61.2.3,
\[
E\left(\sup_{0 \leq s < t \leq T} \frac{\|W(t) - W(s)\|}{(t-s)^\gamma}\right) \leq C_m.
\]
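The admissible exponent can be read off from the standard form of that theorem (a remark added here; the precise statement is Theorem 61.2.3): a bound $E(|W(t) - W(s)|^\beta) \leq C|t-s|^{1+\alpha}$ yields the above estimate for every $\gamma < \alpha/\beta$. Here $\beta = 2m$ and $1 + \alpha = m$, so
\[
\gamma < \frac{m-1}{2m} \uparrow \frac{1}{2} \quad \text{as } m \to \infty,
\]
and thus the paths of $W$ are Holder continuous of every order less than $1/2$.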