for μ a given vector and Σ a given positive definite symmetric matrix.
Theorem 58.16.2 For X ∼ N_p(m, Σ), m = E(X) and
$$\Sigma = E\left((X - m)(X - m)^{\ast}\right). \tag{$\ast$}$$
Moreover, for a scalar a, aX is normally distributed with mean am and covariance a²Σ.
Proof: Let R be an orthogonal transformation such that
$$R\Sigma R^{\ast} = D = \operatorname{diag}\left(\sigma_1^2, \cdots, \sigma_p^2\right).$$
Changing the variable by x − m = R^∗y,
$$E(X) \equiv \int_{\mathbb{R}^p} x e^{-\frac{1}{2}(x-m)^{\ast}\Sigma^{-1}(x-m)}\,dx \left(\frac{1}{(2\pi)^{p/2}\det(\Sigma)^{1/2}}\right)$$
$$= \int_{\mathbb{R}^p} \left(R^{\ast}y + m\right) e^{-\frac{1}{2}y^{\ast}D^{-1}y}\,dy \left(\frac{1}{(2\pi)^{p/2}\prod_{i=1}^p \sigma_i}\right)$$
$$= m \int_{\mathbb{R}^p} e^{-\frac{1}{2}y^{\ast}D^{-1}y}\,dy \left(\frac{1}{(2\pi)^{p/2}\prod_{i=1}^p \sigma_i}\right) = m$$
by Fubini's theorem and the easy to establish formula
$$\frac{1}{\sqrt{2\pi}\sigma}\int_{\mathbb{R}} e^{-\frac{y^2}{2\sigma^2}}\,dy = 1.$$
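As a numeric sanity check (with hypothetical parameters, not from the text), the identity E(X) = m can be illustrated by sampling a diagonal-covariance normal vector componentwise and comparing the sample mean to m:

```python
import random

# Hypothetical Monte Carlo check: for diagonal Sigma = diag(sig_i^2),
# the components of X ~ N_p(m, Sigma) are independent N(m_i, sig_i^2),
# and the sample mean should approach m = E(X).
random.seed(0)
m = [1.0, -2.0, 0.5]        # assumed mean vector (illustrative only)
sig = [1.0, 2.0, 0.5]       # assumed standard deviations
n = 200_000

sums = [0.0, 0.0, 0.0]
for _ in range(n):
    for i in range(3):
        sums[i] += random.gauss(m[i], sig[i])
emp_mean = [s / n for s in sums]
print(emp_mean)
```

With 200,000 samples the estimates typically agree with m to within a few hundredths.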
Next let M ≡ E((X − m)(X − m)^∗). Thus, changing the variable as above by x − m = R^∗y,
$$M = \int_{\mathbb{R}^p} (x-m)(x-m)^{\ast} e^{-\frac{1}{2}(x-m)^{\ast}\Sigma^{-1}(x-m)}\,dx \left(\frac{1}{(2\pi)^{p/2}\det(\Sigma)^{1/2}}\right)$$
$$= R^{\ast}\left(\int_{\mathbb{R}^p} yy^{\ast} e^{-\frac{1}{2}y^{\ast}D^{-1}y}\,dy \left(\frac{1}{(2\pi)^{p/2}\prod_{i=1}^p \sigma_i}\right)\right) R.$$
Therefore, for i ≠ j,
$$(RMR^{\ast})_{ij} = \int_{\mathbb{R}^p} y_i y_j e^{-\frac{1}{2}y^{\ast}D^{-1}y}\,dy \left(\frac{1}{(2\pi)^{p/2}\prod_{i=1}^p \sigma_i}\right) = 0,$$
so RMR^∗ is a diagonal matrix, while
$$(RMR^{\ast})_{ii} = \int_{\mathbb{R}^p} y_i^2 e^{-\frac{1}{2}y^{\ast}D^{-1}y}\,dy \left(\frac{1}{(2\pi)^{p/2}\prod_{i=1}^p \sigma_i}\right).$$
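The vanishing of the off-diagonal entries and the value of the diagonal entries can be illustrated numerically (hypothetical σ values; independent Gaussians stand in for the diagonalized y variables):

```python
import random

# Hypothetical check: with independent y_i ~ N(0, sig_i^2), the sample
# averages of y_i*y_j (i != j) vanish while those of y_i^2 approach
# sig_i^2, mirroring (RMR*)_{ij} = 0 and (RMR*)_{ii} = sig_i^2.
random.seed(1)
sig = [1.0, 2.0]            # assumed sigma_1, sigma_2
n = 200_000

s11 = s22 = s12 = 0.0
for _ in range(n):
    y1 = random.gauss(0.0, sig[0])
    y2 = random.gauss(0.0, sig[1])
    s11 += y1 * y1
    s22 += y2 * y2
    s12 += y1 * y2
print(s11 / n, s22 / n, s12 / n)
```

The first two averages approach 1 and 4 while the cross term stays near 0.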
Using Fubini's theorem and the easy to establish equations
$$\frac{1}{\sqrt{2\pi}\sigma}\int_{\mathbb{R}} e^{-\frac{y^2}{2\sigma^2}}\,dy = 1, \qquad \frac{1}{\sqrt{2\pi}\sigma}\int_{\mathbb{R}} y^2 e^{-\frac{y^2}{2\sigma^2}}\,dy = \sigma^2,$$
it follows that (RMR^∗)_{ii} = σ_i², so RMR^∗ = D and hence M = R^∗DR = Σ. Finally consider the last claim. Applying what is known about X with t replaced with at, the characteristic function of aX is given by
$$E(\exp(it\cdot aX)) = \exp(it\cdot am)\exp\left(-\frac{1}{2}t^{\ast}a^2\Sigma t\right)$$
which is the characteristic function of a normal random vector having mean am and
covariance a^{2}Σ. This proves the theorem.
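A sketch of the last claim in one dimension, with assumed values for μ, σ, and a (illustrative only): the sample mean and variance of aX should approach aμ and a²σ².

```python
import random

# Hypothetical one-dimensional check: if X ~ N(mu, s^2), then aX should
# behave like N(a*mu, a^2*s^2); compare sample mean and variance of a*X
# against those targets.
random.seed(2)
mu, s, a = 1.5, 2.0, -3.0   # assumed parameters
n = 200_000

xs = [a * random.gauss(mu, s) for _ in range(n)]
mean = sum(xs) / n
var = sum((x - mean) ** 2 for x in xs) / n
print(mean, var)
```

Here the targets are aμ = −4.5 and a²σ² = 36.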
Following [?], a random vector has a generalized normal distribution if its characteristic function is given as
$$e^{it\cdot m}e^{-\frac{1}{2}t^{\ast}\Sigma t} \tag{58.16.33}$$
where Σ is symmetric and has nonnegative eigenvalues. For a real valued random variable, m is a scalar and so is Σ, so the characteristic function of such a generalized normally distributed random variable is
$$e^{it\mu - \frac{1}{2}t^2\sigma^2} \tag{58.16.34}$$
These generalized normal distributions do not require Σ to be invertible, only that the eigenvalues be nonnegative. In one dimension, the case σ² = 0 corresponds to the characteristic function of a Dirac measure having point mass 1 at μ. In higher dimensions, it could be a mixture of such degenerate distributions with more familiar nondegenerate ones. I won't try very hard to distinguish between generalized normal distributions and normal distributions in which the covariance matrix has all positive eigenvalues.
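As an illustration of a generalized normal with singular covariance (a hypothetical example, not from the text): X = (Z, Z) with Z ∼ N(0,1) has covariance matrix [[1,1],[1,1]], with eigenvalues 2 and 0, so it has no density, yet formula 58.16.33 still describes its characteristic function.

```python
import cmath
import random

# Hypothetical check: for X = (Z, Z), Z ~ N(0,1), the covariance
# Sigma = [[1,1],[1,1]] is singular, but the empirical characteristic
# function E(e^{i s.X}) should still match e^{- s* Sigma s / 2}.
random.seed(3)
s = (0.6, -0.3)             # assumed evaluation point
n = 200_000

acc = 0 + 0j
for _ in range(n):
    z = random.gauss(0.0, 1.0)
    acc += cmath.exp(1j * (s[0] * z + s[1] * z))
emp = acc / n

quad = (s[0] + s[1]) ** 2   # s* Sigma s for Sigma = [[1,1],[1,1]]
formula = cmath.exp(-0.5 * quad)
print(abs(emp - formula))
```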
Here are some other interesting results about normal distributions found in [?]. The next theorem addresses the question of whether a random vector is normally distributed in the above generalized sense.
Theorem 58.16.4 Let X = (X_1, ⋯, X_p) where each X_i is a real valued random variable. Then X is normally distributed in the above generalized sense if and only if every linear combination ∑_{j=1}^p a_j X_j is normally distributed. In this case the mean of X is
$$m = (E(X_1), \cdots, E(X_p))$$
and the covariance matrix for X is
$$\Sigma_{jk} = E((X_j - m_j)(X_k - m_k)).$$
Proof: Suppose first X is normally distributed. Then its characteristic function is of the form
$$\phi_X(t) = E\left(e^{it\cdot X}\right) = e^{it\cdot m}e^{-\frac{1}{2}t^{\ast}\Sigma t}.$$
Then letting a = (a_1, ⋯, a_p),
$$E\left(e^{it\sum_{j=1}^p a_j X_j}\right) = E\left(e^{ita\cdot X}\right) = e^{ita\cdot m}e^{-\frac{1}{2}a^{\ast}\Sigma a\, t^2},$$
which is the characteristic function of a normally distributed random variable with mean a · m and variance σ² = a^∗Σa. This proves half of the theorem: if X is normally distributed, then every linear combination is normally distributed.
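The forward direction can be illustrated numerically with a hypothetical correlated pair (parameters assumed for illustration): the sample variance of a · X should approach a^∗Σa.

```python
import math
import random

# Hypothetical check: X = (Z1, rho*Z1 + sqrt(1-rho^2)*Z2) has covariance
# Sigma = [[1, rho], [rho, 1]]; verify Var(a.X) is close to a* Sigma a.
random.seed(4)
rho = 0.8
a = (2.0, -1.0)             # assumed coefficient vector
n = 200_000

vals = []
for _ in range(n):
    z1 = random.gauss(0.0, 1.0)
    z2 = random.gauss(0.0, 1.0)
    x = (z1, rho * z1 + math.sqrt(1 - rho * rho) * z2)
    vals.append(a[0] * x[0] + a[1] * x[1])
mean = sum(vals) / n
var = sum((v - mean) ** 2 for v in vals) / n

target = a[0] ** 2 + 2 * rho * a[0] * a[1] + a[1] ** 2   # a* Sigma a
print(var, target)
```

Here a^∗Σa = 4 − 3.2 + 1 = 1.8, which the sample variance reproduces closely.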
Next suppose ∑_{j=1}^p a_j X_j = a · X is normally distributed with mean μ and variance σ², so that its characteristic function is given in 58.16.34. I will now relate μ and σ² to various quantities involving the X_j. Letting m_j = E(X_j), it follows the mean of the normally distributed random variable a · X is
$$\mu = \sum_j a_j m_j = a\cdot m$$
and its variance is
$$\sigma^2 = a^{\ast}E\left((X - m)(X - m)^{\ast}\right)a.$$
Therefore,
$$E\left(e^{ita\cdot X}\right) = e^{it\mu}e^{-\frac{1}{2}t^2\sigma^2} = e^{ita\cdot m}e^{-\frac{1}{2}t^2 a^{\ast}E((X-m)(X-m)^{\ast})a}.$$
Then letting s = ta, this shows
$$E\left(e^{is\cdot X}\right) = e^{is\cdot m}e^{-\frac{1}{2}s^{\ast}E((X-m)(X-m)^{\ast})s} = e^{is\cdot m}e^{-\frac{1}{2}s^{\ast}\Sigma s},$$
which is the characteristic function of a normally distributed random variable with m given above and Σ given by
$$\Sigma_{jk} = E((X_j - m_j)(X_k - m_k)).$$
By assumption, a is completely arbitrary and so it follows that s is also. Hence, X is normally distributed as claimed. ■
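A hypothetical end-to-end check of this converse argument (assumed parameters throughout): the empirical characteristic function of a correlated normal vector should match e^{is·m}e^{−½ s^∗Σs}.

```python
import cmath
import math
import random

# Hypothetical check: X = (Z1 + 1, rho*Z1 + sqrt(1-rho^2)*Z2) has mean
# m = (1, 0) and covariance Sigma = [[1, rho], [rho, 1]]; compare the
# empirical E(e^{i s.X}) with e^{i s.m} e^{- s* Sigma s / 2}.
random.seed(5)
rho = 0.5
s = (0.4, 0.7)              # assumed evaluation point
n = 200_000

acc = 0 + 0j
for _ in range(n):
    z1 = random.gauss(0.0, 1.0)
    z2 = random.gauss(0.0, 1.0)
    x1 = z1 + 1.0
    x2 = rho * z1 + math.sqrt(1 - rho * rho) * z2
    acc += cmath.exp(1j * (s[0] * x1 + s[1] * x2))
emp = acc / n

quad = s[0] ** 2 + 2 * rho * s[0] * s[1] + s[1] ** 2     # s* Sigma s
formula = cmath.exp(1j * s[0] * 1.0) * cmath.exp(-0.5 * quad)
print(abs(emp - formula))
```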
Corollary 58.16.5 Let X = (X_1, ⋯, X_p), Y = (Y_1, ⋯, Y_p) where each X_i, Y_i is a real valued random variable. Suppose also that for every a ∈ ℝ^p, a · X and a · Y are both normally distributed with the same mean and variance. Then X and Y are both multivariate normal random vectors with the same mean and covariance matrix.
Proof: The proof of Theorem 58.16.4 shows that the characteristic functions of a · X and a · Y are both of the form
$$e^{itm}e^{-\frac{1}{2}\sigma^2 t^2}.$$
Then as in the proof of that theorem, it must be the case that
$$m = \sum_{j=1}^p a_j m_j$$
where E(X_i) = m_i = E(Y_i), and
$$\sigma^2 = a^{\ast}E\left((X - m)(X - m)^{\ast}\right)a = a^{\ast}E\left((Y - m)(Y - m)^{\ast}\right)a,$$
and this last equation must hold for every a. Therefore,
$$E\left((X - m)(X - m)^{\ast}\right) = E\left((Y - m)(Y - m)^{\ast}\right) \equiv \Sigma$$
and so the characteristic function of both X and Y is $e^{is\cdot m}e^{-\frac{1}{2}s^{\ast}\Sigma s}$.