
Hence $\ln f(x) = kx$, so $\ln f(t^2) = kt^2$ and therefore $\varphi(t) = f(t^2) = e^{kt^2}$ for all $t$. The constant $k$ must be nonpositive because $\varphi(t)$ is bounded due to its definition. Therefore, the characteristic function of $\nu$ is
$$\varphi_\nu(t) = e^{-\frac{1}{2}t^2\sigma^2}$$
for some $\sigma \geq 0$. That is, $\nu$ is the law of a generalized normal random variable.

Note the other direction of the implication is obvious. If $\xi, \zeta \sim N(0,\sigma^2)$ and they are independent, then if $\alpha^2 + \beta^2 = 1$, it follows that
$$\alpha\xi + \beta\zeta \sim N(0,\sigma^2)$$

because
$$E\left(e^{it(\alpha\xi + \beta\zeta)}\right) = E\left(e^{it\alpha\xi}\right)E\left(e^{it\beta\zeta}\right) = e^{-\frac{1}{2}(\alpha t)^2\sigma^2}e^{-\frac{1}{2}(\beta t)^2\sigma^2} = e^{-\frac{1}{2}t^2\sigma^2},$$
the characteristic function of a random variable which is $N(0,\sigma^2)$. This proves the theorem.

The next theorem is a useful gimmick for showing certain random variables are independent in the context of normal distributions.
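Before turning to that theorem, the stability computation just completed lends itself to a quick numerical check. The following Monte Carlo sketch is an illustration, not part of the text: it assumes NumPy is available, and the values of `sigma`, `alpha`, and `beta` are arbitrary choices with $\alpha^2 + \beta^2 = 1$. It compares the empirical characteristic function of $\alpha\xi + \beta\zeta$ with $e^{-\frac{1}{2}t^2\sigma^2}$.

```python
# Monte Carlo sanity check of the computation above: with xi, zeta
# i.i.d. N(0, sigma^2) and alpha^2 + beta^2 = 1, the empirical
# characteristic function of alpha*xi + beta*zeta should approximate
# exp(-t^2 sigma^2 / 2).  (Illustrative sketch; parameters are arbitrary.)
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.5
alpha, beta = 0.6, 0.8          # alpha^2 + beta^2 = 1
n = 200_000

xi = rng.normal(0.0, sigma, n)   # NumPy's normal takes the standard deviation
zeta = rng.normal(0.0, sigma, n)
s = alpha * xi + beta * zeta

for t in (0.5, 1.0, 2.0):
    empirical = np.mean(np.exp(1j * t * s))        # E exp(it(alpha xi + beta zeta))
    theoretical = np.exp(-0.5 * t**2 * sigma**2)   # exp(-t^2 sigma^2 / 2)
    print(t, abs(empirical - theoretical))         # small, shrinking as n grows
```

The discrepancy decreases at the usual Monte Carlo rate $O(n^{-1/2})$, consistent with the identity proved above.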

Theorem 61.6.9 Let $X$ and $Y$ be random vectors having values in $\mathbb{R}^p$ and $\mathbb{R}^q$ respectively. Suppose also that $(X,Y)$ is multivariate normally distributed and
$$E\left((X - E(X))(Y - E(Y))^*\right) = 0.$$
Then $X$ and $Y$ are independent random vectors.

Proof: Let $Z = (X,Y)$, a random vector with values in $\mathbb{R}^{p+q}$. Then by hypothesis, the characteristic function of $Z$ is of the form
$$E\left(e^{it\cdot Z}\right) = e^{it\cdot m}e^{-\frac{1}{2}t^*\Sigma t}$$
where $m = (m_X, m_Y) = E(Z) = E(X,Y)$ and
$$\Sigma = \begin{pmatrix} E\left((X - E(X))(X - E(X))^*\right) & 0 \\ 0 & E\left((Y - E(Y))(Y - E(Y))^*\right) \end{pmatrix} \equiv \begin{pmatrix} \Sigma_X & 0 \\ 0 & \Sigma_Y \end{pmatrix}.$$
Therefore, letting $t = (u,v)$ where $u \in \mathbb{R}^p$ and $v \in \mathbb{R}^q$,
$$E\left(e^{it\cdot Z}\right) = E\left(e^{i(u,v)\cdot(X,Y)}\right) = E\left(e^{i(u\cdot X + v\cdot Y)}\right) = e^{iu\cdot m_X}e^{-\frac{1}{2}u^*\Sigma_X u}\,e^{iv\cdot m_Y}e^{-\frac{1}{2}v^*\Sigma_Y v} = E\left(e^{iu\cdot X}\right)E\left(e^{iv\cdot Y}\right). \tag{61.6.23}$$
Since the characteristic function of $(X,Y)$ thus factors into the product of the characteristic functions of $X$ and $Y$, it follows that $X$ and $Y$ are independent.
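The factorization 61.6.23 can also be observed numerically. The following sketch is illustrative rather than part of the text: the dimensions, covariance factors, and test points `u`, `v` are arbitrary choices. It builds a mean-zero jointly normal $(X,Y)$ whose cross-covariance block is zero and checks that the joint characteristic function factors.

```python
# Numerical illustration of Theorem 61.6.9: for Z = (X, Y) multivariate
# normal with block-diagonal covariance Sigma = diag(Sigma_X, Sigma_Y),
# the characteristic function should satisfy
#   E exp(i t.Z) ~ E exp(i u.X) * E exp(i v.Y),  t = (u, v).
# (Illustrative sketch; all parameters are arbitrary.)
import numpy as np

rng = np.random.default_rng(1)
p, q, n = 2, 3, 200_000

# Positive definite diagonal blocks, zero cross-covariance.
A = rng.normal(size=(p, p)); Sigma_X = A @ A.T + np.eye(p)
B = rng.normal(size=(q, q)); Sigma_Y = B @ B.T + np.eye(q)
Sigma = np.block([[Sigma_X, np.zeros((p, q))],
                  [np.zeros((q, p)), Sigma_Y]])

Z = rng.multivariate_normal(np.zeros(p + q), Sigma, size=n)
X, Y = Z[:, :p], Z[:, p:]

u = np.array([0.3, -0.7])
v = np.array([0.5, 0.2, -0.4])

joint = np.mean(np.exp(1j * (X @ u + Y @ v)))                    # E exp(i(u.X + v.Y))
product = np.mean(np.exp(1j * (X @ u))) * np.mean(np.exp(1j * (Y @ v)))
print(abs(joint - product))                                      # near zero
```

Replacing the zero off-diagonal blocks with a nonzero cross-covariance makes the printed discrepancy visibly larger, in line with the theorem's hypothesis being necessary for this factorization argument.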
