First suppose X is a random vector having values in ℝ^{n} whose distribution is N(m,Σ), where m is the mean and Σ is the covariance matrix. Then the characteristic function of X, or equivalently the characteristic function of its distribution, is
$$e^{it\cdot m}e^{-\frac{1}{2}t^{\ast}\Sigma t}.$$
What is the distribution of a ⋅ X where a ∈ ℝ^{n}? In other words, if you take a linear functional and apply it to X to get a scalar valued random variable, what is the distribution of this scalar valued random variable? Let Y = a ⋅ X. Then
$$E\left(e^{itY}\right) = E\left(e^{ita\cdot X}\right)$$
which from the above formula is
$$e^{ita\cdot m}e^{-\frac{1}{2}a^{\ast}\Sigma a\,t^{2}}$$
which is the characteristic function of a random variable whose distribution is N(a ⋅ m, a^{∗}Σa). In other words, it is normally distributed having mean equal to a ⋅ m and variance equal to a^{∗}Σa. Obviously such a concept generalizes to a Banach space in place of ℝ^{n} and this motivates the following definition.
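This finite-dimensional fact can be checked numerically. The following Monte Carlo sketch (the particular m, Σ, and a are arbitrary illustrative choices, not from the text) samples X ~ N(m, Σ) with NumPy and compares the sample mean and variance of a ⋅ X against a ⋅ m and a^{∗}Σa.

```python
import numpy as np

# Monte Carlo check that a . X is N(a . m, a* Sigma a) when X ~ N(m, Sigma).
# The vectors m, a and the covariance Sigma are illustrative choices.
rng = np.random.default_rng(0)

m = np.array([1.0, -2.0, 0.5])
A = rng.standard_normal((3, 3))
Sigma = A @ A.T                       # symmetric positive semidefinite covariance
a = np.array([0.3, 1.0, -0.7])

samples = rng.multivariate_normal(m, Sigma, size=200_000)
Y = samples @ a                       # realizations of Y = a . X

print(Y.mean(), a @ m)                # sample mean vs. a . m
print(Y.var(), a @ Sigma @ a)         # sample variance vs. a* Sigma a
```

With 200,000 samples the empirical moments agree with a ⋅ m and a^{∗}Σa to within Monte Carlo error.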
Definition 60.7.1 Let E be a real separable Banach space. A probability measure μ defined on ℬ(E) is called a Gaussian measure if for every h ∈ E^{′}, the law of h considered as a random variable defined on the probability space (E, ℬ(E), μ) is normal. That is, for A ⊆ ℝ a Borel set,
$$\lambda_{h}(A) \equiv \mu\left(h^{-1}(A)\right)$$
is given by
$$\int_{A}\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{1}{2\sigma^{2}}(x-m)^{2}}dx$$
for some σ and m. A Gaussian measure is called symmetric if m is always equal to 0.
There is another definition of symmetric. First here are a few simple conventions. For f ∈ E^{′}, x → f(x) is normally distributed. In particular,
$$\int_{E}\left|f(x)\right|d\mu < \infty$$
and so it makes sense to define
$$m_{\mu}(f) \equiv \int_{E}f(x)\,d\mu.$$
Thus m_{μ}(f) is the mean of the random variable x → f(x). It is obvious that f → m_{μ}(f) is linear. Also define the variance σ^{2}(f) by
$$\sigma^{2}(f) \equiv \int_{E}\left(f(x) - m_{\mu}(f)\right)^{2}d\mu.$$
This is finite because x → f(x) is normally distributed. The following lemma gives an equivalent condition for μ to be symmetric.
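A quick numerical sketch of the functionals m_μ and σ² (with an illustrative Gaussian measure on E = ℝ² and functionals f(x) = c ⋅ x, none of which come from the text): the Monte Carlo means below approximate the defining integrals, and the linearity of f → m_{μ}(f) shows up as m_{μ}(f₁ + f₂) = m_{μ}(f₁) + m_{μ}(f₂).

```python
import numpy as np

# Approximate m_mu(f) = int_E f dmu and sigma^2(f) = int_E (f - m_mu(f))^2 dmu
# by Monte Carlo for mu = N(m, Sigma) on R^2 and f(x) = c . x.
# m, Sigma, c1, c2 are illustrative choices.
rng = np.random.default_rng(1)

m = np.array([2.0, -1.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
c1 = np.array([1.0, 3.0])
c2 = np.array([-2.0, 0.5])

x = rng.multivariate_normal(m, Sigma, size=200_000)

def m_mu(c):
    return (x @ c).mean()             # approximates int_E f dmu for f = c . x

def var_f(c):
    return (x @ c).var()              # approximates sigma^2(f)

print(m_mu(c1), c1 @ m)               # ~ c1 . m = -1.0
print(var_f(c1), c1 @ Sigma @ c1)     # ~ c1* Sigma c1 = 14.6
print(m_mu(c1 + c2), m_mu(c1) + m_mu(c2))   # linearity of f -> m_mu(f)
```

The variance is finite here for the simple reason that c ⋅ x is itself normal, mirroring the remark above.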
Lemma 60.7.2 Let μ be a Gaussian measure defined on ℬ(E). Then μ(F) = μ(−F) for all F Borel if and only if m_{μ}(f) = 0 for all f ∈ E^{′}. Such a Gaussian measure is called symmetric.
Proof: Suppose first m_{μ}(f) = 0 for all f ∈ E^{′}. Let
$$G \equiv f_{1}^{-1}(F_{1}) \cap f_{2}^{-1}(F_{2}) \cap \cdots \cap f_{m}^{-1}(F_{m})$$
where F_{i} is a Borel set of ℝ and each f_{i} ∈ E^{′}. Since every linear combination of the f_{i} is in E^{′}, every such linear combination is normally distributed and so f ≡ (f_{1}, ⋯, f_{m}) is multivariate normal. That is, λ_{f}, the distribution measure, is multivariate normal. Since each m_{μ}(f_{i}) = 0, this multivariate normal distribution has mean 0 and is therefore invariant under x → −x, so it follows
$$\mu(G) = \lambda_{f}\left(\prod_{i=1}^{m}F_{i}\right) = \lambda_{f}\left(\prod_{i=1}^{m}(-F_{i})\right) = \mu(-G) \qquad (60.7.24)$$
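The symmetry used in 60.7.24 can be illustrated numerically: for a mean-zero multivariate normal on ℝ² (the covariance and rectangles below are illustrative choices), the empirical probability of a product of intervals matches that of its reflection.

```python
import numpy as np

# Empirical check of mu(G) = mu(-G) for a mean-zero Gaussian on R^2,
# with G = F1 x F2 a product of intervals (illustrative choices).
rng = np.random.default_rng(2)

Sigma = np.array([[1.0, 0.4],
                  [0.4, 2.0]])
x = rng.multivariate_normal(np.zeros(2), Sigma, size=500_000)

# G = [0.2, 1.5] x [-0.3, 2.0];  -G = [-1.5, -0.2] x [-2.0, 0.3]
in_G  = ((x[:, 0] >=  0.2) & (x[:, 0] <=  1.5) &
         (x[:, 1] >= -0.3) & (x[:, 1] <=  2.0)).mean()
in_mG = ((x[:, 0] >= -1.5) & (x[:, 0] <= -0.2) &
         (x[:, 1] >= -2.0) & (x[:, 1] <=  0.3)).mean()

print(in_G, in_mG)                    # approximately equal
```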
By Lemma 19.1.4 on Page 1990 there exists a countable subset D ≡ {f_{k}}_{k=1}^{∞} of the closed unit ball of E^{′} such that for every x ∈ E,
$$||x|| = \sup_{f\in D}\left|f(x)\right|.$$
Therefore, letting D(a,r) denote the closed ball centered at a having radius r, it follows
$$D(a,r_{1}) \cap D(b,r_{2}) = \cap_{k=1}^{\infty}f_{k}^{-1}\left(D(f_{k}(a),r_{1})\right) \cap \cap_{k=1}^{\infty}f_{k}^{-1}\left(D(f_{k}(b),r_{2})\right)$$
The intersection of these two closed balls is thus the decreasing limit of sets of the form
$$\cap_{k=1}^{n}f_{k}^{-1}\left(D(f_{k}(a),r_{1})\right) \cap \cap_{k=1}^{n}f_{k}^{-1}\left(D(f_{k}(b),r_{2})\right)$$
to which 60.7.24 applies. Letting n → ∞ and using continuity of μ from above, and then continuing this way, it follows that if G is any finite intersection of closed balls,
$$\mu(G) = \mu(-G).$$
Let K denote the set of finite intersections of closed balls, a π system. Thus for G ∈ K the above holds. Now let
$$\mathcal{G} \equiv \{F \in \sigma(K) : \mu(F) = \mu(-F)\}$$
Thus 𝒢 contains K and it is clearly closed with respect to complements and countable disjoint unions. By the π system lemma, 𝒢 ⊇ σ(K), but σ(K) clearly contains the open sets since every open ball is the countable union of closed balls and every open set is the countable union of open balls. Therefore, μ(F) = μ(−F) for every Borel set F.

Conversely, suppose μ(F) = μ(−F) for all Borel F and let f ∈ E^{′}. Then the law λ_{f} satisfies λ_{f}(A) = λ_{f}(−A) for every Borel set A ⊆ ℝ, so this normal distribution is symmetric about 0 and hence m_{μ}(f) = 0. ■

The next lemma characterizes Gaussian measures in terms of random variables.

Lemma 60.7.3 Let μ = ℒ(X) where X is a random variable defined on a probability space (Ω, ℱ, P) which has values in E, a Banach space. Suppose also that for all ϕ ∈ E^{′}, ϕ ∘ X is normally distributed. Then μ is a Gaussian measure. Conversely, suppose μ is a Gaussian measure on ℬ(E) and X is a random variable having values in E such that ℒ(X) = μ. Then for every h ∈ E^{′}, h ∘ X is normally distributed.
Proof: First suppose μ is a Gaussian measure and X is a random variable such that ℒ(X) = μ. Then if F is a Borel set in ℝ and h ∈ E^{′},
$$P\left((h\circ X)^{-1}(F)\right) = P\left(X^{-1}\left(h^{-1}(F)\right)\right) = \mu\left(h^{-1}(F)\right) = \frac{1}{\sqrt{2\pi}\sigma}\int_{F}e^{-\frac{|x-m|^{2}}{2\sigma^{2}}}dx$$
for some m and σ^{2}, showing that h ∘ X is normally distributed.
Next suppose h ∘ X is normally distributed whenever h ∈ E^{′} and ℒ(X) = μ. Then letting F be a Borel set in ℝ, I need to verify
$$\mu\left(h^{-1}(F)\right) = \frac{1}{\sqrt{2\pi}\sigma}\int_{F}e^{-\frac{|x-m|^{2}}{2\sigma^{2}}}dx.$$
However, this is easy because
$$\mu\left(h^{-1}(F)\right) = P\left(X^{-1}\left(h^{-1}(F)\right)\right) = P\left((h\circ X)^{-1}(F)\right)$$
which is given to equal
$$\frac{1}{\sqrt{2\pi}\sigma}\int_{F}e^{-\frac{|x-m|^{2}}{2\sigma^{2}}}dx$$
for some m and σ^{2}. This proves the lemma. ■
Here is another important observation. Suppose X is as just described, a random variable having values in E such that ℒ(X) = μ, and suppose h_{1}, ⋯, h_{n} are each in E^{′}. Then for scalars t_{1}, ⋯, t_{n},
$$t_{1}h_{1}\circ X + \cdots + t_{n}h_{n}\circ X = \left(t_{1}h_{1} + \cdots + t_{n}h_{n}\right)\circ X$$
and this last is assumed to be normally distributed because t_{1}h_{1} + ⋯ + t_{n}h_{n} ∈ E^{′}. Thus every linear combination of h_{1} ∘ X, ⋯, h_{n} ∘ X is normal, so (h_{1} ∘ X, ⋯, h_{n} ∘ X) is multivariate normal.
Obviously there exist examples of Gaussian measures defined on E, a Banach space. Here is why. Let ξ be a random variable defined on a probability space (Ω, ℱ, P) which is normally distributed with mean 0 and variance σ^{2}. Then let X(ω) ≡ ξ(ω)e where e ∈ E, and let μ ≡ ℒ(X). For A a Borel set of ℝ and h ∈ E^{′} with h(e) ≠ 0,
$$\mu\left(\left[h(x) \in A\right]\right) \equiv P\left(\left[X(\omega) \in \left[x : h(x) \in A\right]\right]\right) = P\left(\left[h\circ X \in A\right]\right) = P\left(\left[\xi(\omega)h(e) \in A\right]\right) = \frac{1}{|h(e)|\sigma\sqrt{2\pi}}\int_{A}e^{-\frac{1}{2|h(e)|^{2}\sigma^{2}}x^{2}}dx$$
because h(e)ξ is a random variable which has variance |h(e)|^{2}σ^{2} and mean 0. (If h(e) = 0, then h ∘ X is identically 0, a degenerate normal.) Thus μ is indeed a Gaussian measure. Similarly, one can consider finite sums of the form
$$\sum_{i=1}^{n}\xi_{i}(\omega)e_{i}$$
where the ξ_{i} are independent normal random variables having mean 0 for convenience. However, this is a rather trivial case.
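The rank-one construction X(ω) = ξ(ω)e can also be checked numerically. In this sketch E = ℝ², and the particular e, h, and σ are illustrative choices; the point is that h ∘ X has mean 0 and variance |h(e)|²σ².

```python
import numpy as np

# Rank-one Gaussian measure: X = xi * e with xi ~ N(0, sigma^2), e in R^2.
# Verify that h o X is (approximately) N(0, |h(e)|^2 sigma^2).
rng = np.random.default_rng(3)

sigma = 2.0
e = np.array([3.0, -1.0])                    # fixed vector e in E = R^2
xi = rng.normal(0.0, sigma, size=200_000)    # xi ~ N(0, sigma^2)
X = np.outer(xi, e)                          # rows are samples of X = xi * e

h = np.array([0.5, 2.0])                     # a linear functional h in E'
hX = X @ h                                   # h o X = xi * h(e)

print(hX.mean())                             # ~ 0
print(hX.var(), (h @ e) ** 2 * sigma ** 2)   # ~ |h(e)|^2 sigma^2 = 1.0
```

Since h ∘ X is just the scalar ξ scaled by h(e), its normality is immediate, which is why the text calls this a rather trivial case.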