59.16. THE MULTIVARIATE NORMAL DISTRIBUTION 1913
which is the characteristic function of a random vector distributed as $N_p\left(m_1+m_2,\Sigma_1+\Sigma_2\right)$. Now it follows that $X_1+X_2\sim N_p\left(m_1+m_2,\Sigma_1+\Sigma_2\right)$ by Theorem 59.8.4. This proves 59.16.31.
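The additivity of means and covariances under independent sums can be sanity-checked numerically by multiplying characteristic functions. The following sketch is not part of the text; the parameters $m_1,m_2,\Sigma_1,\Sigma_2$ and the point $t$ are illustrative choices.

```python
import numpy as np

# If X1 ~ N_p(m1, S1) and X2 ~ N_p(m2, S2) are independent, then
# phi_{X1+X2}(t) = phi_{X1}(t) * phi_{X2}(t), which is the
# characteristic function of N_p(m1+m2, S1+S2).

def mvn_cf(t, m, S):
    """Characteristic function exp(i t.m - (1/2) t* S t) of N_p(m, S)."""
    return np.exp(1j * t @ m - 0.5 * t @ S @ t)

# Illustrative parameters (not from the text).
m1, m2 = np.array([1.0, -2.0]), np.array([0.5, 3.0])
S1 = np.array([[2.0, 0.3], [0.3, 1.0]])
S2 = np.array([[1.0, -0.2], [-0.2, 4.0]])
t = np.array([0.7, -1.3])

lhs = mvn_cf(t, m1, S1) * mvn_cf(t, m2, S2)   # independence: cfs multiply
rhs = mvn_cf(t, m1 + m2, S1 + S2)             # cf of the claimed sum law
assert np.isclose(lhs, rhs)
```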
The assertion about $-X$ is also easy to see because
\begin{align*}
E\left(e^{it\cdot(-X)}\right) &= E\left(e^{i(-t)\cdot X}\right)\\
&=\frac{1}{\left(2\pi\right)^{p/2}\left(\det\Sigma\right)^{1/2}}\int_{\mathbb{R}^{p}}e^{i(-t)\cdot x}e^{-\frac{1}{2}\left(x-m\right)^{\ast}\Sigma^{-1}\left(x-m\right)}dx\\
&=\frac{1}{\left(2\pi\right)^{p/2}\left(\det\Sigma\right)^{1/2}}\int_{\mathbb{R}^{p}}e^{it\cdot x}e^{-\frac{1}{2}\left(x+m\right)^{\ast}\Sigma^{-1}\left(x+m\right)}dx,
\end{align*}
the last equality coming from the change of variables $x\to -x$. This is the characteristic function of a random vector which is $N\left(-m,\Sigma\right)$, and Theorem 59.8.4 again implies $-X\sim N\left(-m,\Sigma\right)$. Finally, consider the last claim. Apply what is known about $X$ with $t$ replaced with $at$ and then simplify. This shows the characteristic function of $aX$ is given by
$$E\left(\exp\left(it\cdot aX\right)\right)=\exp\left(it\cdot am\right)\exp\left(-\frac{1}{2}a^{2}t^{\ast}\Sigma t\right)$$
which is the characteristic function of a normal random vector having mean $am$ and covariance $a^{2}\Sigma$. This proves the theorem.
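Both of the last two claims can be verified numerically at the level of characteristic functions: $\varphi_{-X}(t)=\varphi_{X}(-t)$ equals the characteristic function of $N(-m,\Sigma)$, and $\varphi_{aX}(t)=\varphi_{X}(at)$ equals that of $N(am,a^{2}\Sigma)$. The parameters $m,\Sigma,t,a$ below are arbitrary illustrative choices, not from the text.

```python
import numpy as np

def mvn_cf(t, m, S):
    """Characteristic function exp(i t.m - (1/2) t* S t) of N_p(m, S)."""
    return np.exp(1j * t @ m - 0.5 * t @ S @ t)

# Illustrative parameters (not from the text).
m = np.array([1.0, -0.5])
S = np.array([[1.5, 0.4], [0.4, 2.0]])
t = np.array([0.3, 1.1])
a = -2.5

# -X: note (-t)* S (-t) = t* S t, so only the mean changes sign.
neg_ok = np.isclose(mvn_cf(-t, m, S), mvn_cf(t, -m, S))

# aX: the scalar a^2 factors out of (at)* S (at).
scale_ok = np.isclose(mvn_cf(a * t, m, S), mvn_cf(t, a * m, a**2 * S))

assert neg_ok and scale_ok
```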
Following [103], a random vector has a generalized normal distribution if its characteristic function is given as
$$e^{it\cdot m}e^{-\frac{1}{2}t^{\ast}\Sigma t}\quad (59.16.33)$$
where $\Sigma$ is symmetric and has nonnegative eigenvalues. For a real valued random variable, $m$ is a scalar and so is $\Sigma$, so the characteristic function of such a generalized normally distributed random variable is
$$e^{it\mu}e^{-\frac{1}{2}t^{2}\sigma^{2}}\quad (59.16.34)$$
These generalized normal distributions do not require $\Sigma$ to be invertible, only that the eigenvalues be nonnegative. In one dimension, the case $\sigma^{2}=0$ would correspond to the characteristic function of a Dirac measure having point mass 1 at $\mu$. In higher dimensions, it could be a mixture of such degenerate distributions with more familiar ones. I won't try very hard to distinguish between generalized normal distributions and normal distributions in which the covariance matrix has all positive eigenvalues.
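The singular case can be illustrated concretely: with $\Sigma=vv^{\ast}$ of rank one, the vector $X=m+Zv$ for a scalar standard normal $Z$ has exactly the characteristic function 59.16.33, even though $\Sigma$ has a zero eigenvalue and $X$ has no density on $\mathbb{R}^{2}$. The vectors $v,m$ and the point $t$ below are illustrative choices, not from the text.

```python
import numpy as np

# A generalized normal in R^2 supported on the line m + span(v):
# X = m + Z v with Z ~ N(0,1) scalar, so Cov(X) = v v^T is singular.
v = np.array([1.0, 2.0])
S = np.outer(v, v)          # eigenvalues 5 and 0 -> not invertible
m = np.array([0.0, 1.0])
t = np.array([0.4, -0.9])

# cf of X computed from the construction:
# E[e^{i t.(m + Z v)}] = e^{i t.m} E[e^{i (t.v) Z}] = e^{i t.m} e^{-(t.v)^2/2}
cf_construction = np.exp(1j * t @ m) * np.exp(-0.5 * (t @ v) ** 2)

# cf from the generalized-normal formula, using t* S t = (t.v)^2:
cf_formula = np.exp(1j * t @ m - 0.5 * t @ S @ t)

assert np.isclose(cf_construction, cf_formula)
```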
Here are some other interesting results about normal distributions found in [103]. The next theorem has to do with the question of whether a random vector is normally distributed in the above generalized sense.
Theorem 59.16.4 Let $X=\left(X_{1},\cdots,X_{p}\right)$ where each $X_{i}$ is a real valued random variable. Then $X$ is normally distributed in the above generalized sense if and only if every linear combination $\sum_{j=1}^{p}a_{j}X_{j}$ is normally distributed. In this case the mean of $X$ is
$$m=\left(E\left(X_{1}\right),\cdots,E\left(X_{p}\right)\right)$$
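The "only if" direction can be checked at the level of characteristic functions: for $X\sim N_{p}\left(m,\Sigma\right)$, the scalar $a\cdot X$ has characteristic function $s\mapsto\varphi_{X}\left(sa\right)$, which is the characteristic function of the one-dimensional normal $N\left(a\cdot m,a^{\ast}\Sigma a\right)$. A small numerical sketch, with all parameters my own illustrative choices:

```python
import numpy as np

def mvn_cf(t, m, S):
    """Characteristic function exp(i t.m - (1/2) t* S t) of N_p(m, S)."""
    return np.exp(1j * t @ m - 0.5 * t @ S @ t)

# Illustrative parameters (not from the text).
m = np.array([1.0, -2.0, 0.5])
S = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.0, -0.1],
              [0.0, -0.1, 3.0]])
a = np.array([0.5, -1.0, 2.0])   # coefficients of the linear combination
s = 0.7                          # real argument of the 1-D cf

# cf of a.X at s equals phi_X(s a) ...
lhs = mvn_cf(s * a, m, S)
# ... which matches the cf of the scalar normal N(a.m, a^T S a).
rhs = np.exp(1j * s * (a @ m) - 0.5 * s**2 * (a @ S @ a))
assert np.isclose(lhs, rhs)
```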