28.2. LINEAR COMBINATIONS 763
Conversely, suppose every linear combination is normally distributed. In particular, suppose $\sum_{j=1}^{p} a_j X_j = a\cdot X$ is normally distributed with mean $\mu$ and variance $\sigma^2$, so that its characteristic function is $e^{it\mu}e^{-\frac{1}{2}t^2\sigma^2}$. I will now relate $\mu$ and $\sigma^2$ to various quantities involving the $X_j$. Letting $m_j = E(X_j)$, $m = (m_1,\cdots,m_p)^{\ast}$,
$$\mu = \sum_{j=1}^{p} a_j E(X_j) = \sum_{j=1}^{p} a_j m_j,$$
$$\sigma^2 = E\left(\sum_{j=1}^{p} a_j X_j - \sum_{j=1}^{p} a_j m_j\right)^2 = E\left(\sum_{j=1}^{p} a_j (X_j - m_j)\right)^2 = \sum_{j,k} a_j a_k E\left((X_j - m_j)(X_k - m_k)\right)$$
It follows that the mean of the random variable $a\cdot X$ is $\mu = \sum_j a_j m_j = a\cdot m$ and its variance is
$$\sigma^2 = a^{\ast} E\left((X-m)(X-m)^{\ast}\right) a.$$
Therefore,
$$E\left(e^{it\,a\cdot X}\right) = e^{it\mu} e^{-\frac{1}{2}t^2\sigma^2} = e^{it\,a\cdot m} e^{-\frac{1}{2}t^2 a^{\ast} E((X-m)(X-m)^{\ast}) a}.$$
Letting $s = ta$, this shows
$$E\left(e^{is\cdot X}\right) = e^{is\cdot m} e^{-\frac{1}{2} s^{\ast} E((X-m)(X-m)^{\ast}) s} = e^{is\cdot m} e^{-\frac{1}{2} s^{\ast} \Sigma s}$$
which is the characteristic function of a normally distributed random variable with $m$ given above and $\Sigma$ given by
$$\Sigma_{jk} = E\left((X_j - m_j)(X_k - m_k)\right).$$
By assumption, $a$ is completely arbitrary and so it follows that $s$ is also. Hence $X$ is normally distributed, as claimed. ■
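The computation above can be checked numerically. The following sketch (not part of the text; the particular $m$, $\Sigma$, $a$, and sample size are illustrative choices) draws samples of a multivariate normal $X$ and confirms that $a\cdot X$ has mean $a\cdot m$ and variance $a^{\ast}\Sigma a$:

```python
import numpy as np

# Monte Carlo check: for X ~ N(m, Sigma), the linear combination a . X
# should have mean a . m and variance a* Sigma a, as in the derivation.
# The values of m, Sigma, a below are illustrative, not from the text.
rng = np.random.default_rng(0)
m = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.3, 0.1],
                  [0.3, 1.0, 0.2],
                  [0.1, 0.2, 1.5]])
a = np.array([0.5, -1.0, 2.0])

X = rng.multivariate_normal(m, Sigma, size=200_000)  # rows are samples of X
lin = X @ a                                          # samples of a . X

mu_theory = a @ m           # mean a . m from the derivation
var_theory = a @ Sigma @ a  # variance a* Sigma a from the derivation

print(abs(lin.mean() - mu_theory))  # small: empirical mean matches
print(abs(lin.var() - var_theory))  # small: empirical variance matches
```

The empirical mean and variance agree with the closed forms up to the usual $O(1/\sqrt{n})$ Monte Carlo error.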
Corollary 28.2.2 Let $X = (X_1,\cdots,X_p)$, $Y = (Y_1,\cdots,Y_p)$ where each $X_i, Y_i$ is a real valued random variable. Suppose also that for every $a \in \mathbb{R}^p$, $a\cdot X$ and $a\cdot Y$ are both normally distributed with the same mean and variance. Then $X$ and $Y$ are both multivariate normal random vectors with the same mean and covariance.
Proof: The proof of Theorem 28.2.1 shows that the characteristic functions of $a\cdot X$ and $a\cdot Y$ are both of the form $e^{itm}e^{-\frac{1}{2}\sigma^2 t^2}$. Then, as in the proof of that theorem, it must be the case that $m = \sum_{j=1}^{p} a_j m_j$ where $E(X_i) = m_i = E(Y_i)$ and
$$\sigma^2 = a^{\ast} E\left((X-m)(X-m)^{\ast}\right) a = a^{\ast} E\left((Y-m)(Y-m)^{\ast}\right) a$$
and this last equation must hold for every a. Therefore,
$$E\left((X-m)(X-m)^{\ast}\right) = E\left((Y-m)(Y-m)^{\ast}\right) \equiv \Sigma$$
and so the characteristic function of both $X$ and $Y$ is $e^{is\cdot m}e^{-\frac{1}{2}s^{\ast}\Sigma s}$, as in the proof of Theorem 28.2.1. ■
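The characteristic function identity that drives both proofs can also be verified by simulation. This sketch (again illustrative; $m$, $\Sigma$, $s$, and the sample size are assumptions, not from the text) compares the empirical value of $E(e^{is\cdot X})$ with the closed form $e^{is\cdot m}e^{-\frac{1}{2}s^{\ast}\Sigma s}$:

```python
import numpy as np

# Compare the empirical characteristic function E[exp(i s . X)] of
# simulated multivariate normal data with the closed form
# exp(i s . m - (1/2) s* Sigma s).  m, Sigma, s are illustrative values.
rng = np.random.default_rng(1)
m = np.array([0.5, -1.0])
Sigma = np.array([[1.0, 0.4],
                  [0.4, 2.0]])
s = np.array([0.3, -0.7])

X = rng.multivariate_normal(m, Sigma, size=400_000)
empirical = np.mean(np.exp(1j * (X @ s)))                   # E[e^{i s . X}]
closed_form = np.exp(1j * (s @ m) - 0.5 * (s @ Sigma @ s))  # e^{i s.m - s*Sigma s/2}

print(abs(empirical - closed_form))  # near zero when the formula holds
```

Since the characteristic function determines the distribution, agreement at every $s$ is exactly what the corollary exploits.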
Theorem 28.2.3 Suppose $X = (X_1,\cdots,X_p)$ is normally distributed with mean $m$ and covariance $\Sigma$. Then if $X_1$ is uncorrelated with each of the other $X_j$, meaning
$$E\left((X_1 - m_1)(X_j - m_j)\right) = 0 \text{ for } j > 1,$$
then $X_1$ and $(X_2,\cdots,X_p)$ are both normally distributed and the two random vectors are independent. Here $m_j \equiv E(X_j)$. More generally, if the covariance matrix is a diagonal matrix, the random variables $\{X_1,\cdots,X_p\}$ are independent.