This is the same as
$$\Bigl\{ \omega : \Bigl\Vert \sum_{j=k+1}^{l} X_j(\omega) \Bigr\Vert < \frac{1}{n},\ l > k \geq m \Bigr\} \in \sigma\Bigl( \cup_{j=p+1}^{\infty} \sigma(X_j) \Bigr).$$
Thus
$$\cup_{m=p}^{\infty} \cap_{l,k \geq m} \{ \omega : \Vert S_k(\omega) - S_l(\omega) \Vert < 1/n \} \in \sigma\Bigl( \cup_{j=p+1}^{\infty} \sigma(X_j) \Bigr)$$
and so this set belongs to $\sigma\bigl(\cup_{j=p+1}^{\infty}\sigma(X_j)\bigr)$ for every $p$; hence it is a tail event. Then the intersection over all $n$ of these tail events is also a tail event.
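Explicitly, since the Banach space is complete, the set where the partial sums converge is the set where they form a Cauchy sequence:
$$\Bigl\{ \omega : \{S_k(\omega)\}_{k=1}^{\infty} \text{ converges} \Bigr\} = \cap_{n=1}^{\infty} \cup_{m=1}^{\infty} \cap_{l,k \geq m} \{ \omega : \Vert S_k(\omega) - S_l(\omega) \Vert < 1/n \},$$
which, by the above, is a tail event.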
From this it can be concluded that if you have a sequence of independent random variables, $\{X_k\}$, the set where the sequence converges is either of probability 1 or probability 0. A similar conclusion holds for the set where the infinite sum of these random variables converges. This incredible assertion is the next corollary.
Corollary 59.6.6 Let $\{X_k\}$ be a sequence of independent random variables having values in a Banach space. Then
$$\lim_{n\to\infty} X_n(\omega)$$
either exists for a.e. $\omega$ or the convergence fails to take place for a.e. $\omega$. Also if
$$A \equiv \Bigl\{ \omega : \sum_{k=1}^{\infty} X_k(\omega) \text{ converges} \Bigr\},$$
then $P(A) = 0$ or $1$.
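For example, let $\{\varepsilon_k\}$ be independent random signs with $P(\varepsilon_k = 1) = P(\varepsilon_k = -1) = 1/2$ and let $X_k \equiv \varepsilon_k / k$. The corollary guarantees that
$$P\Bigl( \Bigl\{ \omega : \sum_{k=1}^{\infty} \frac{\varepsilon_k(\omega)}{k} \text{ converges} \Bigr\} \Bigr) \in \{0,1\};$$
in fact this probability equals 1 by Kolmogorov's three series theorem, even though $\sum_k 1/k$ diverges.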
59.7 Kolmogorov's Inequality

Kolmogorov's inequality is a very interesting inequality which depends on independence of a set of random vectors. The random vectors have values in $\mathbb{R}^n$ or more generally some real separable Hilbert space.
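In its classical scalar form, the inequality states that if $X_1, \ldots, X_n$ are independent with mean $0$ and finite second moments, and $S_k \equiv \sum_{j=1}^{k} X_j$, then for every $\varepsilon > 0$,
$$P\Bigl( \max_{1 \leq k \leq n} |S_k| \geq \varepsilon \Bigr) \leq \frac{1}{\varepsilon^2} \sum_{k=1}^{n} E\bigl(|X_k|^2\bigr).$$
The version for Hilbert space valued random vectors takes the same form, with $|\cdot|$ the Hilbert space norm.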
Lemma 59.7.1 If $X, Y$ are independent random variables having values in a real separable Hilbert space $H$, with $E\bigl(|X|^2\bigr), E\bigl(|Y|^2\bigr) < \infty$, then
$$\int_\Omega (X,Y)\, dP = \Bigl( \int_\Omega X\, dP, \int_\Omega Y\, dP \Bigr).$$
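In the scalar case $H = \mathbb{R}$ this is the familiar product rule for independent random variables, $E(XY) = E(X)E(Y)$; the proof below obtains the general case by expanding both vectors in an orthonormal basis and applying the scalar fact one coordinate at a time.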
Proof: Let $\{e_k\}$ be a complete orthonormal basis. Thus
$$\int_\Omega (X,Y)\, dP = \int_\Omega \sum_{k=1}^{\infty} (X,e_k)(Y,e_k)\, dP.$$
Now
$$\int_\Omega \sum_{k=1}^{\infty} |(X,e_k)(Y,e_k)|\, dP \leq \int_\Omega \Bigl( \sum_k |(X,e_k)|^2 \Bigr)^{1/2} \Bigl( \sum_k |(Y,e_k)|^2 \Bigr)^{1/2} dP
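by the Cauchy-Schwarz inequality for series. By Parseval's identity the right side equals
$$\int_\Omega |X|\, |Y|\, dP \leq \Bigl( \int_\Omega |X|^2\, dP \Bigr)^{1/2} \Bigl( \int_\Omega |Y|^2\, dP \Bigr)^{1/2} < \infty,$$
the inequality being Cauchy-Schwarz in $L^2(\Omega)$. Hence the sum is absolutely integrable, so Fubini's theorem justifies interchanging the sum and the integral above; independence of $(X,e_k)$ and $(Y,e_k)$ then yields the conclusion coordinatewise.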