which is the characteristic function of a random variable which is $N(-m,\Sigma)$. Theorem 27.1.4 again implies $-X \sim N(-m,\Sigma)$. Finally, consider the last claim. You apply what is known about $X$ with $t$ replaced with $at$ and then massage things. This gives that the characteristic function of $aX$ is
$$E\left(\exp\left(it\cdot aX\right)\right) = \exp\left(it\cdot am\right)\exp\left(-\frac{1}{2}t^{*}\Sigma a^{2}t\right)$$
which is the characteristic function of a normal random vector having mean $am$ and covariance $a^{2}\Sigma$. ■
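For instance, in the scalar case (an illustrative special case, not part of the proof above), if $X \sim N(\mu,\sigma^{2})$ and $a$ is a real number, the same substitution $t \mapsto at$ gives
$$E\left(e^{itaX}\right) = e^{i(at)\mu}e^{-\frac{1}{2}(at)^{2}\sigma^{2}} = e^{it(a\mu)}e^{-\frac{1}{2}t^{2}a^{2}\sigma^{2}},$$
which is the characteristic function of $N(a\mu,a^{2}\sigma^{2})$, consistent with the mean $am$ and covariance $a^{2}\Sigma$ just obtained.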
28.2 Linear Combinations

Following [44], a random vector has a generalized normal distribution if its characteristic function is given as $e^{it\cdot m}e^{-\frac{1}{2}t^{*}\Sigma t}$ where $\Sigma$ is symmetric and has nonnegative eigenvalues.
For a real valued random variable, $m$ is a scalar and so is $\Sigma$, so the characteristic function of such a generalized normally distributed random variable is $e^{it\mu}e^{-\frac{1}{2}t^{2}\sigma^{2}}$. These generalized normal distributions do not require $\Sigma$ to be invertible, only that the eigenvalues be nonnegative. In one dimension, the case $\sigma^{2}=0$ would correspond to the characteristic function of a Dirac measure having point mass 1 at $\mu$. In higher dimensions, it could be a mixture of such things with more familiar things. I won't try very hard to distinguish between generalized normal distributions and normal distributions in which the covariance matrix has all positive eigenvalues. These generalized normal distributions are discussed more a little later.
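To see why the degenerate one-dimensional case is a point mass (a brief check, not taken from [44]), suppose $X = \mu$ with probability 1. Then
$$E\left(e^{itX}\right) = e^{it\mu} = e^{it\mu}e^{-\frac{1}{2}t^{2}\cdot 0},$$
which is exactly the generalized normal characteristic function with $\sigma^{2} = 0$.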
Here are some other interesting results about normal distributions found in [44]. The next theorem has to do with the question whether a random vector is normally distributed in the above generalized sense.
Theorem 28.2.1 Let $X = (X_1, \cdots, X_p)$ where each $X_i$ is a real valued random variable. Then $X$ is normally distributed in the above generalized sense if and only if every linear combination $\sum_{j=1}^{p}a_{j}X_{j}$ is normally distributed. In this case the mean of $X$ is
$$m = \left(E(X_1), \cdots, E(X_p)\right)$$
and the covariance matrix for $X$ is
$$\Sigma_{jk} = E\left(\left(X_j - m_j\right)\left(X_k - m_k\right)^{*}\right).$$
Proof: Suppose first $X$ is normally distributed. Then its characteristic function is of the form
$$\phi_X(t) = E\left(e^{it\cdot X}\right) = e^{it\cdot m}e^{-\frac{1}{2}t^{*}\Sigma t}.$$
Then letting $a = (a_1, \cdots, a_p)$,
$$E\left(e^{it\sum_{j=1}^{p}a_{j}X_{j}}\right) = E\left(e^{ita\cdot X}\right) = e^{ita\cdot m}e^{-\frac{1}{2}a^{*}\Sigma a\, t^{2}}$$
which is the characteristic function of a normally distributed random variable with mean $a\cdot m$ and variance $\sigma^{2} = a^{*}\Sigma a$. This proves half of the theorem: if $X$ is normally distributed, then every linear combination is normally distributed.
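As a concrete illustration (the particular numbers here are chosen only for this example), take $p = 2$, $m = (1,2)$, $a = (1,1)$, and $\Sigma$ the $2\times 2$ matrix with ones on the diagonal and $\rho$ off the diagonal, where $|\rho| \le 1$ so the eigenvalues $1 \pm \rho$ are nonnegative. Then $a\cdot m = 3$ and $a^{*}\Sigma a = 2 + 2\rho$, so the computation above says $X_1 + X_2$ is normal with mean $3$ and variance $2 + 2\rho$. When $\rho = -1$ the variance is $0$ and $X_1 + X_2$ is a generalized normal, namely a point mass at $3$.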