2220 CHAPTER 64. WIENER PROCESSES
What is the right side of 64.6.38? It is
\begin{align*}
\prod_{k=1}^{n} E\left( \exp\left( i\left( \lambda_k, W(t_k) - W(t_{k-1}) \right) \right) \right)
&= \prod_{k=1}^{n} E\left[ \exp\left( i\left( \lambda_k, \sum_{j=1}^{\infty} \left( \psi_j(t_k) - \psi_j(t_{k-1}) \right) Jg_j \right) \right) \right] \\
&= \lim_{m \to \infty} \prod_{k=1}^{n} E\left[ \exp\left( i\left( \lambda_k, \sum_{j=1}^{m} \left( \psi_j(t_k) - \psi_j(t_{k-1}) \right) Jg_j \right) \right) \right] \\
&= \lim_{m \to \infty} \prod_{k=1}^{n} E\left[ \exp\left( i \sum_{j=1}^{m} \left( Jg_j, \lambda_k \right) \left( \psi_j(t_k) - \psi_j(t_{k-1}) \right) \right) \right] \\
&= \lim_{m \to \infty} \prod_{k=1}^{n} E\left( \prod_{j=1}^{m} e^{i\left( J^* \lambda_k, g_j \right) \left( \psi_j(t_k) - \psi_j(t_{k-1}) \right)} \right)
\end{align*}
and by independence, 64.6.36, this equals
\begin{align*}
&\lim_{m \to \infty} \prod_{k=1}^{n} \prod_{j=1}^{m} E\left[ e^{i\left( J^* \lambda_k, g_j \right) \left( \psi_j(t_k) - \psi_j(t_{k-1}) \right)} \right]
= \lim_{m \to \infty} \prod_{k=1}^{n} \prod_{j=1}^{m} e^{-\frac{1}{2} \left( J^* \lambda_k, g_j \right)^2 (t_k - t_{k-1})} \\
&= \lim_{m \to \infty} \prod_{k=1}^{n} \exp\left( -\frac{1}{2} \sum_{j=1}^{m} \left( J^* \lambda_k, g_j \right)^2 (t_k - t_{k-1}) \right)
= \prod_{k=1}^{n} \exp\left( -\frac{1}{2} \sum_{j=1}^{\infty} \left( J^* \lambda_k, g_j \right)^2 (t_k - t_{k-1}) \right) \\
&= \exp\left( -\frac{1}{2} \sum_{k=1}^{n} \sum_{j=1}^{\infty} \left( J^* \lambda_k, g_j \right)^2 (t_k - t_{k-1}) \right)
\end{align*}
which is exactly the same thing as 64.6.39. Thus the disjoint increments are independent.
You could also do something like the following. Let $W_m(t)$ denote the partial sum for $W(t)$; since there are only finitely many increments, we can assume the partial sums converge a.e. Then we need to consider the random variables
\[
\left\{ \left( W_m(t_k) - W_m(t_{k-1}) \right) \right\}_{k=1}^{n}
= \left\{ \left( \sum_{i=1}^{m} \left( \psi_i(t_k) - \psi_i(t_{k-1}) \right) Jg_i \right) \right\}_{k=1}^{n}
\]
Then for any $h \in H$, you could consider
\[
\left\{ \left( \sum_{i=1}^{m} \left( \psi_i(t_k) - \psi_i(t_{k-1}) \right) \left( Jg_i, h \right)_H \right) \right\}_{k=1}^{n}
\]
and the vector whose $k^{th}$ component is $\sum_{i=1}^{m} \left( \psi_i(t_k) - \psi_i(t_{k-1}) \right) \left( Jg_i, h \right)_H$ for $k = 1, 2, \cdots, n$
is normally distributed and the covariance is a diagonal matrix. Hence these are independent random variables as hoped. Now you can pass to a limit as $m \to \infty$. Since this is true