This extension theorem is one of the most important theorems in probability theory. As an
example, one sometimes wants to consider infinitely many independent normally distributed
random variables. Is there a probability space such that this kind of thing even exists? The
answer is yes and one way to show this is through the use of the Kolmogorov extension
theorem. I am presenting the most general version of this theorem that I have seen. For
another proof see the book by Stroock [31]. What I am using here is a modification of the
one in Billingsley [5].
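As an illustration of the consistency condition appearing in the theorem below, consider the motivating example of infinitely many independent standard normal random variables: the finite-dimensional measures are products of one-dimensional Gaussian measures, and on rectangles the condition says exactly that padding with extra factors of M_{t} = ℝ (each of probability 1) changes nothing. A minimal numerical sketch, with function and variable names of my own choosing:

```python
import math

def Phi(x):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def normal_prob(a, b):
    """P(a < Z <= b) for a standard normal random variable Z."""
    return Phi(b) - Phi(a)

def nu(rectangle):
    """Finite-dimensional measure nu_J of a rectangle.

    `rectangle` maps an index t in J to an interval (a, b); for an i.i.d.
    normal family, the measure of a rectangle is the product of the
    one-dimensional probabilities.
    """
    p = 1.0
    for a, b in rectangle.values():
        p *= normal_prob(a, b)
    return p

# Consistency on rectangles: enlarging the index set while putting the
# whole space M_t = R (probability 1) in each new slot leaves the
# measure unchanged, as the consistency condition requires.
small = {1: (-1.0, 1.0), 3: (0.0, 2.0)}
big = {1: (-1.0, 1.0), 2: (-math.inf, math.inf), 3: (0.0, 2.0)}
print(abs(nu(small) - nu(big)) < 1e-15)  # True
```

This only checks the condition on rectangles, but by the usual uniqueness arguments, that is what determines the finite-dimensional measures.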
Let M_{t} be a complete separable metric space. This is called a Polish space. The symbol I
will denote a totally ordered index set (like ℝ) and the interest will be in building a measure
on the product space, ∏_{t∈I}M_{t}. If you like less generality, just think of M_{t} = ℝ^{k_t} or even
M_{t} = ℝ. By the well ordering principle, you can always put an order on any index set, so this
order is no restriction, but we do not insist on a well order and in fact, index sets of great
interest are ℝ or [0,∞). Also for X a topological space, ℬ(X) will denote the Borel
sets of X.
Notation 22.4.1 The symbol J will denote a finite subset of I, J = (t_{1},⋯,t_{n}), the t_{i} taken in
order. E_{J} will denote a set which has a set E_{t} of ℬ(M_{t}) in the t^{th} position for t ∈ J and for
t ∉ J, the set in the t^{th} position will be M_{t}. K_{J} will denote a set which has a compact set in
the t^{th} position for t ∈ J and for t ∉ J, the set in the t^{th} position will be M_{t}. Also denote
by ℛ_{J} the collection of sets E_{J} and by ℛ the union of all such ℛ_{J}. Let ℰ_{J} denote finite
disjoint unions of sets of ℛ_{J} and let ℰ denote finite disjoint unions of sets of ℛ. Thus if F is
a set of ℰ, there exists J such that F is a finite disjoint union of sets of ℛ_{J}. For
F = ∏_{t∈I}F_{t}, denote by π_{J}(F) the set ∏_{t∈J}F_{t}.
Lemma 22.4.2 The sets ℰ_{J} and ℰ defined above are algebras of sets of ∏_{t∈I}M_{t}.
Proof: First consider ℛ_{J}. If A, B ∈ ℛ_{J}, then A ∩ B ∈ ℛ_{J} also. Is A ∖ B a
finite disjoint union of sets of ℛ_{J}? It suffices to verify that π_{J}(A ∖ B) is a finite
disjoint union of sets of π_{J}(ℛ_{J}). Let |J| denote the number of indices in J. If
|J| = 1, this is obvious: letting J = (t) with the t^{th} entry of A equal to A and the t^{th}
entry of B equal to B, the t^{th} entry of A ∖ B is A ∖ B, a Borel set of M_{t} and hence
trivially a finite disjoint union of Borel sets of M_{t}.
Suppose then that for A, B sets of ℛ_{J}, π_{J}(A ∖ B) is a finite disjoint union of sets of
π_{J}(ℛ_{J}) for |J| ≤ n, and consider J = (t_{1},⋯,t_{n},t_{n+1}). Let the t_{i}^{th} entry of A and B be
respectively A_{i} and B_{i}. It follows that

π_{J}(A ∖ B) = ((∏_{i=1}^{n}A_{i} ∖ ∏_{i=1}^{n}B_{i}) × A_{n+1}) ∪ ((∏_{i=1}^{n}(A_{i} ∩ B_{i})) × (A_{n+1} ∖ B_{n+1})),

a disjoint union, and by induction, ∏_{i=1}^{n}A_{i} ∖ ∏_{i=1}^{n}B_{i} is the finite disjoint union of sets of ℛ_{(t1,⋯,tn)}. Therefore, the above is
the finite disjoint union of sets of ℛ_{J}. It follows that ℰ_{J} is an algebra.
Now suppose A, B ∈ ℛ. Then for some finite set J, both are in ℛ_{J}. Then from what was
just shown,

A ∖ B ∈ ℰ_{J} ⊆ ℰ,  A ∩ B ∈ ℛ.

By Lemma 22.3.5 on Page 1548 this shows ℰ is an algebra. ■
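The induction step above rests on the elementary identity decomposing a difference of products as the disjoint union of two products. A quick finite check of that identity, with small concrete sets of my own choosing:

```python
from itertools import product

# Two "rectangles" A1 x A2 and B1 x B2 in a two-index product space.
A1, A2 = {1, 2, 3}, {1, 2}
B1, B2 = {2, 3, 4}, {2, 3}

lhs = set(product(A1, A2)) - set(product(B1, B2))
piece1 = set(product(A1 - B1, A2))       # first coordinate already escapes B1
piece2 = set(product(A1 & B1, A2 - B2))  # first coordinate in B1, second escapes B2

print(lhs == piece1 | piece2)    # True: the two sides agree
print(piece1 & piece2 == set())  # True: the union is disjoint
```

The same decomposition, applied with ∏_{i=1}^{n}A_{i} in place of A1 and A_{n+1} in place of A2, is what drives the induction on |J|.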
With this preparation, here is the Kolmogorov extension theorem. In the statement and
proof of the theorem, F_{i}, G_{i}, and E_{i} will denote Borel sets. Any list of indices from I will
always be assumed to be taken in order. Thus, if J ⊆ I and J = (t_{1},⋯,t_{n}), it will always be
assumed that t_{1} < t_{2} < ⋯ < t_{n}.
Theorem 22.4.3 For each finite set

J = (t_{1},⋯,t_{n}) ⊆ I,

suppose there exists a Borel probability measure, ν_{J} = ν_{t1⋯tn} defined on the Borel sets of
∏_{t∈J}M_{t} such that the following consistency condition holds. If

(t_{1},⋯,t_{n}) ⊆ (s_{1},⋯,s_{p}),

then

ν_{t1⋯tn}(F_{t1} × ⋯ × F_{tn}) = ν_{s1⋯sp}(G_{s1} × ⋯ × G_{sp})

where if s_{i} = t_{j}, then G_{si} = F_{tj} and if s_{i} is not equal to any of the indices t_{k}, then
G_{si} = M_{si}. Then for ℰ defined in Notation 22.4.1, there exists a probability measure P and a
σ algebra ℱ = σ(ℰ) such that

(∏_{t∈I}M_{t}, P, ℱ)

is a probability space. Also there exist measurable functions, X_{s} : ∏_{t∈I}M_{t} → M_{s} defined as
X_{s}(x) ≡ x_{s} such that

ν_{t1⋯tn}(∏_{j=1}^{n}F_{tj}) = P([(X_{t1},⋯,X_{tn}) ∈ ∏_{j=1}^{n}F_{tj}]) = P(∏_{t∈I}F_{t})  (22.6)

where F_{t} = M_{t} for every t ∉ {t_{1},⋯,t_{n}} and F_{ti} is a Borel set. Also if f is a nonnegative
function of finitely many variables, x_{t1},⋯,x_{tn}, measurable with respect to ℬ(∏_{j=1}^{n}M_{tj}),
then f is also measurable with respect to ℱ and

∫_{Mt1×⋯×Mtn} f(x_{t1},⋯,x_{tn}) dν_{t1⋯tn} = ∫_{∏t∈I Mt} f(x_{t1},⋯,x_{tn}) dP  (22.7)
Proof: Let ℰ be the algebra of sets defined in the above notation. I want to define a
measure on ℰ. For F ∈ ℰ, there exists J such that F is the finite disjoint union of sets of ℛ_{J}.
Define

P_{0}(F) ≡ ν_{J}(π_{J}(F)).

Then P_{0} is well defined because of the consistency condition on the measures ν_{J}. P_{0} is
clearly finitely additive because the ν_{J} are measures and one can pick J as large as
desired to include all t where there may be something other than M_{t}. Also, from the
definition,

P_{0}(Ω) ≡ P_{0}(∏_{t∈I}M_{t}) = ν_{t1}(M_{t1}) = 1.

Next I will show P_{0} is a measure on ℰ. After this it is only a matter of using the
Caratheodory extension theorem to get the existence of the desired probability measure
P.
Claim: Suppose E^{n} is in ℰ and suppose E^{n} ↓ ∅. Then P_{0}(E^{n}) ↓ 0.

Proof of the claim: If not, there exists a sequence such that although E^{n} ↓ ∅,
P_{0}(E^{n}) ↓ ε > 0. Let E^{n} ∈ ℰ_{Jn}. Thus it is a finite disjoint union of sets of ℛ_{Jn}. By
regularity of the measures ν_{J}, which follows from Lemmas 7.5.3 and 7.5.4, there exists
K_{Jn} ⊆ E^{n} such that

ν_{Jn}(π_{Jn}(K_{Jn})) + ε/2^{n+2} > ν_{Jn}(π_{Jn}(E^{n})).

Thus

P_{0}(K_{Jn}) + ε/2^{n+2} ≡ ν_{Jn}(π_{Jn}(K_{Jn})) + ε/2^{n+2} > ν_{Jn}(π_{Jn}(E^{n})) ≡ P_{0}(E^{n}).
The interesting thing about these K_{Jn} is: they have the finite intersection property. Here is
why.

ε ≤ P_{0}(E^{m}) = P_{0}(∩_{k=1}^{m}K_{Jk}) + P_{0}(E^{m} ∖ ∩_{k=1}^{m}K_{Jk})
≤ P_{0}(∩_{k=1}^{m}K_{Jk}) + P_{0}(∪_{k=1}^{m}(E^{k} ∖ K_{Jk}))
< P_{0}(∩_{k=1}^{m}K_{Jk}) + ∑_{k=1}^{∞} ε/2^{k+2} < P_{0}(∩_{k=1}^{m}K_{Jk}) + ε/2,

and so P_{0}(∩_{k=1}^{m}K_{Jk}) > ε/2. In considering all the E^{n}, there are countably many entries in
the product space which have something other than M_{t} in them. Say these are {t_{1},t_{2},⋯}.
The compact sets in the t_{i} position must have the finite intersection property also, because
if not, the sets K_{Jn} could not have it. Thus there is a point p_{ti} in the intersection of the t_{i}
components of the sets K_{Jn}. As to the other positions, use the axiom of choice to pick
something in each of them. Thus the intersection of these K_{Jn} contains a point, which is
contrary to E^{n} ↓ ∅ because these sets are contained in the E^{n}. This proves the
claim.
With the claim, it follows P_{0} is a measure on ℰ. Here is why: if E = ∪_{k=1}^{∞}E^{k} where
E, E^{k} ∈ ℰ, then (E ∖ ∪_{k=1}^{n}E^{k}) ↓ ∅ and so

P_{0}(∪_{k=1}^{n}E^{k}) → P_{0}(E).

Hence if the E^{k} are disjoint, P_{0}(∪_{k=1}^{n}E^{k}) = ∑_{k=1}^{n}P_{0}(E^{k}) → P_{0}(E). Thus for disjoint E^{k}
having ∪_{k}E^{k} = E ∈ ℰ,

P_{0}(∪_{k=1}^{∞}E^{k}) = ∑_{k=1}^{∞}P_{0}(E^{k}).
Now to conclude the proof, apply the Caratheodory extension theorem to obtain a
probability measure P which extends P_{0} to a σ algebra containing σ(ℰ), the σ algebra
generated by ℰ, with P = P_{0} on ℰ. Thus for E_{J} ∈ ℰ, P(E_{J}) = P_{0}(E_{J}) = ν_{J}(π_{J}(E_{J})).
Next, let (∏_{t∈I}M_{t}, ℱ, P) be the probability space and for x ∈ ∏_{t∈I}M_{t} let X_{t}(x) = x_{t},
the t^{th} entry of x. It follows X_{t} is measurable (also continuous) because if U is open in M_{t},
then X_{t}^{−1}(U) has a U in the t^{th} slot and M_{s} everywhere else for s ≠ t. Thus inverse images of
open sets are measurable. Also, letting J be a finite subset of I with J = (t_{1},⋯,t_{n}), and
F_{t1},⋯,F_{tn} Borel sets in M_{t1},⋯,M_{tn} respectively, it follows F_{J}, where F_{J} has F_{ti} in the t_{i}^{th}
entry, is in ℰ and therefore,

P([X_{t1} ∈ F_{t1}] ∩ [X_{t2} ∈ F_{t2}] ∩ ⋯ ∩ [X_{tn} ∈ F_{tn}])
= P([(X_{t1},X_{t2},⋯,X_{tn}) ∈ F_{t1} × ⋯ × F_{tn}]) = P(F_{J}) = P_{0}(F_{J})
= ν_{t1⋯tn}(F_{t1} × ⋯ × F_{tn}).
Finally consider the claim about the integrals. Suppose f(x_{t1},⋯,x_{tn}) = 𝒳_{F} where F is a
Borel set of ∏_{t∈J}M_{t} where J = (t_{1},⋯,t_{n}). To begin with, suppose

F = F_{t1} × ⋯ × F_{tn}  (22.8)

where each F_{ti} is a Borel set. Then by the above,

∫_{∏t∈J Mt} 𝒳_{F} dν_{t1⋯tn} = ν_{t1⋯tn}(F_{t1} × ⋯ × F_{tn}) = P(∏_{t∈I}F_{t}) = ∫_{Ω} 𝒳_{∏t∈I Ft}(x) dP
= ∫_{Ω} 𝒳_{F}(x_{t1},⋯,x_{tn}) dP  (22.9)

where F_{t} = M_{t} if t ∉ J. Let K denote the sets F of the sort in 22.8. It is clearly a π system. Now
let G denote those sets F in ℬ(∏_{t∈J}M_{t}) such that 22.9 holds. Thus G ⊇ K. It is clear that G
is closed with respect to countable disjoint unions and complements. Hence G ⊇ σ(K) but
σ(K) = ℬ(∏_{t∈J}M_{t}) because every open set in ∏_{t∈J}M_{t} is the countable union
of rectangles like 22.8 in which each F_{ti} is open. Therefore, 22.9 holds for every
F ∈ ℬ(∏_{t∈J}M_{t}).
Passing to simple functions and then using the monotone convergence theorem yields the
final claim of the theorem. ■
As a special case, you can obtain a version of product measure for possibly infinitely many
factors. Suppose in the context of the above theorem that ν_{t} is a probability measure defined
on the Borel sets of M_{t} ≡ ℝ^{n_t} for n_{t} a positive integer, and let the measures ν_{t1⋯tn} be
defined on the Borel sets of ∏_{i=1}^{n}M_{ti} by

ν_{t1⋯tn}(F_{t1} × ⋯ × F_{tn}) ≡ ∏_{i=1}^{n}ν_{ti}(F_{ti}).
Then these measures satisfy the necessary consistency condition and so the Kolmogorov
extension theorem given above can be applied to obtain a measure P defined on
(∏_{t∈I}M_{t}, ℱ) and measurable functions X_{s} : ∏_{t∈I}M_{t} → M_{s} such that for F_{ti} a Borel set in
M_{ti},

P([X_{t1} ∈ F_{t1}] ∩ ⋯ ∩ [X_{tn} ∈ F_{tn}]) = ∏_{i=1}^{n}ν_{ti}(F_{ti}).

This measure will be denoted as ∏_{t∈I}ν_{t}. This proves the following theorem which describes an infinite
product measure.
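In the product-measure special case, the finite-dimensional measures are determined by the marginals alone, and the coordinate maps X_{t} come out independent. A discrete sketch of this, with biased coins standing in for the ν_{t} (all names and numbers are my own illustration, not from the text):

```python
from itertools import product as cartesian

def coin(p):
    """A Borel probability measure on M_t = {0, 1} with nu_t({1}) = p."""
    return {0: 1.0 - p, 1: p}

marginals = [coin(0.5), coin(0.3), coin(0.9)]

# The full product measure on {0,1}^3, built point by point:
# P({(x1,x2,x3)}) = nu_1({x1}) * nu_2({x2}) * nu_3({x3}).
P = {pt: marginals[0][pt[0]] * marginals[1][pt[1]] * marginals[2][pt[2]]
     for pt in cartesian((0, 1), repeat=3)}

def P_rect(F1, F2, F3):
    """P(F1 x F2 x F3), computed by summing the point masses."""
    return sum(p for pt, p in P.items()
               if pt[0] in F1 and pt[1] in F2 and pt[2] in F3)

# A rectangle's probability factors into marginal probabilities, which is
# exactly the product formula P([X_t1 in F_t1] ∩ ...) = prod nu_ti(F_ti).
lhs = P_rect({1}, {0}, {0, 1})
rhs = marginals[0][1] * marginals[1][0] * 1.0
print(abs(lhs - rhs) < 1e-12)  # True
```

The point of the theorem is that this construction survives the passage to infinitely many factors, where the product space no longer supports a pointwise formula like the dictionary above.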
Theorem 22.4.4 Let M_{t} for t ∈ I be given as in Theorem 22.4.3 and let ν_{t} be a Borel probability measure defined on the Borel sets of M_{t}. Then there exists a measure P and a σ algebra ℱ = σ