TA session #4
Jun Sakamoto
May 17, 2016
Contents
1 Properties of normal distribution
2 Reproducing property of normal distribution
3 Multivariate normal distribution
1 Properties of normal distribution
The normal distribution is a very common probability distribution. Its density function is written as below.

f(x) = 1/(√(2π)σ) · exp[−(x − µ)²/(2σ²)]

If a random variable X follows a normal distribution, we write X ∼ N(µ, σ²).

Property 1.1
X is normally distributed with mean µ and variance σ². Y is a variable defined as below.

Y = aX + b

a and b are non-stochastic. Then Y follows a normal distribution as below.

Y ∼ N(aµ + b, a²σ²)

Proof
We compute the moment generating function of Y.
[Figure 1: density function. Figure 2: distribution function.]
M_Y(t) = E(e^{tY})
= E(e^{t(aX+b)})
= e^{bt} E(e^{atX})
= e^{bt} M_X(at)
= e^{bt} exp[aµt + (1/2)σ²(at)²]
= exp[(aµ + b)t + (1/2)(a²σ²)t²]

This is the moment generating function of N(aµ + b, a²σ²).
Q.E.D.

Standard normal distribution
If Y is normally distributed with mean 0 and variance 1, then Y is said to follow the standard normal distribution. An X that follows a normal distribution can be transformed into a Y that follows the standard normal distribution as follows.

Y = (X − µ)/σ
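As a quick numerical check of Property 1.1 and of standardization, here is a simulation sketch in NumPy (the constants mu, sigma, a, b are arbitrary choices for illustration).

```python
import numpy as np

# Sketch: if X ~ N(mu, sigma^2), then Y = a*X + b ~ N(a*mu + b, a^2*sigma^2),
# and Z = (X - mu)/sigma ~ N(0, 1). Parameters below are arbitrary.
rng = np.random.default_rng(0)
mu, sigma, a, b = 2.0, 3.0, 4.0, 5.0

x = rng.normal(mu, sigma, size=1_000_000)
y = a * x + b                 # should be N(a*mu + b, a^2*sigma^2) = N(13, 144)
z = (x - mu) / sigma          # should be N(0, 1)

print(y.mean(), y.var())      # close to 13 and 144
print(z.mean(), z.var())      # close to 0 and 1
```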
The moment generating function of the standard normal distribution is written as below.

M(t) = e^{t²/2}

Log normal distribution
The log normal distribution is very important in financial economics. For example, the stock price is assumed to follow a log normal distribution in many cases. Also, the log normal distribution is used in the Black-Scholes equation. If log X follows a normal distribution, then X follows a log normal distribution.
Proposition 1.2
The density function of the log normal distribution is written as below.

[Figure 3: density function. Figure 4: distribution function.]

f(x) = 1/(√(2π)σx) · exp[−(log x − µ)²/(2σ²)]
Proof
Let Y = log X, let f_X be the density function of X, and let f_Y be the density function of Y. Y follows a normal distribution.

f_X(x) = d/dx P(X < x) = d/dx P(e^Y < x) = d/dx P(Y < log x) = (dy/dx) · d/dy P(Y < log x) = (1/x) f_Y(log x)
= 1/(√(2π)σx) · exp[−(log x − µ)²/(2σ²)]
Q.E.D
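As a sanity check, the derived density should integrate to 1 over (0, ∞). A sketch in NumPy (the grid bounds and the choice µ = 0, σ = 1 are arbitrary):

```python
import numpy as np

# Sketch: the log normal density for mu = 0, sigma = 1 should integrate
# to ~1 over (0, infinity); we approximate with a Riemann sum on a grid.
def lognormal_pdf(x, mu=0.0, sigma=1.0):
    return np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma * x)

grid = np.linspace(1e-6, 60.0, 2_000_000)   # covers essentially all the mass
dx = grid[1] - grid[0]
total = lognormal_pdf(grid).sum() * dx
print(total)                                 # close to 1
```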
2 Reproducing property of normal distribution
If random variables X and Y are independent, then the covariance of X and Y is 0. Thus E[XY] = E[X]E[Y].

Proposition 2.1
If X and Y are independent, then the moment generating function of X + Y is M_X(t)M_Y(t).

Proof
M_{X+Y}(t) = E(e^{t(X+Y)})
= E(e^{tX} e^{tY})
= E(e^{tX}) E(e^{tY})
= M_X(t) M_Y(t)
Q.E.D.

Proposition 2.2
Let X ∼ N(µ_X, σ_X²) and Y ∼ N(µ_Y, σ_Y²), with X and Y independent. Then X + Y follows

X + Y ∼ N(µ_X + µ_Y, σ_X² + σ_Y²)

Proof
By independence of X and Y ,
M_{X+Y}(t) = M_X(t) M_Y(t)
= exp[µ_X t + (1/2)σ_X² t²] · exp[µ_Y t + (1/2)σ_Y² t²]
= exp[(µ_X + µ_Y)t + (1/2)(σ_X² + σ_Y²)t²]
Q.E.D.

Proposition 2.3
If X = (X_1, X_2, ..., X_n) and the X_i are mutually independent and identically distributed as N(µ, σ²), then the sample mean satisfies (X_1 + X_2 + ... + X_n)/n ∼ N(µ, σ²/n).
Proof
X_1/n ∼ N(µ/n, σ²/n²)

X_1/n + X_2/n ∼ N(2µ/n, 2σ²/n²)

Repeating this for all n terms, we get

Σ_{i=1}^{n} X_i/n ∼ N(nµ/n, nσ²/n²) = N(µ, σ²/n)
Q.E.D
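Both Propositions 2.2 and 2.3 can be checked by simulation. A sketch in NumPy (all distribution parameters below are arbitrary choices):

```python
import numpy as np

# Sketch checking the reproducing property and the sample-mean distribution.
rng = np.random.default_rng(1)

# Proposition 2.2: sum of independent normals N(1, 4) and N(-0.5, 2.25).
x = rng.normal(1.0, 2.0, size=1_000_000)
y = rng.normal(-0.5, 1.5, size=1_000_000)
s = x + y
print(s.mean(), s.var())          # close to 0.5 and 4 + 2.25 = 6.25

# Proposition 2.3: sample mean of n = 100 i.i.d. N(2, 9) draws.
n = 100
means = rng.normal(2.0, 3.0, size=(100_000, n)).mean(axis=1)
print(means.mean(), means.var())  # close to 2 and 9/100 = 0.09
```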
3 Multivariate normal distribution
Let X = (X_1, X_2, ..., X_n), where the X_i are independent and standard normally distributed. The joint density function can be written as the product of the individual density functions. So,

f(x_1, x_2, ..., x_n) = f(x_1)f(x_2)···f(x_n)
= Π_{i=1}^{n} 1/(2π)^{1/2} · e^{−x_i²/2}
= 1/(2π)^{n/2} · e^{−(1/2)x′x}
We assume X = (X_1, X_2, ..., X_n)′ are mutually independent and follow the standard normal distribution, and that the random vector Y = (Y_1, Y_2, ..., Y_n)′ is given as below.

Y = a + BX

where a is a non-stochastic n × 1 vector and B is a non-stochastic nonsingular n × n matrix. The mean and variance of Y are
E[Y] = a, V[Y] = BB′
Define
µ≡ a, Σ ≡ BB′
Also, the density function of the multivariate normal distribution is written as below.

f(x) = 1/((2π)^{n/2} |Σ|^{1/2}) · exp[−(1/2)(x − µ)′Σ^{−1}(x − µ)]

where µ is the mean vector and Σ is the variance-covariance matrix:

µ = (µ_1, µ_2, ..., µ_n)′

Σ =
| V(x_1)         Cov(x_1, x_2)  ···  Cov(x_1, x_n) |
| Cov(x_2, x_1)  V(x_2)         ···  ···           |
| ···            ···            ···  ···           |
| Cov(x_n, x_1)  ···            ···  V(x_n)        |
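As a numerical check of the density formula, here is a sketch in NumPy for n = 2: the density, summed over a wide grid, should be close to 1. The particular µ and Σ below are arbitrary (Σ must be positive definite).

```python
import numpy as np

# Sketch: integrate the bivariate normal density over a grid; should be ~1.
mu = np.array([0.5, -1.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])

def mvn_pdf(xs, mu, Sigma):
    """Multivariate normal density at each row of xs (shape (m, n))."""
    n = len(mu)
    inv = np.linalg.inv(Sigma)
    norm = 1.0 / ((2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(Sigma)))
    d = xs - mu
    quad = np.einsum('ij,jk,ik->i', d, inv, d)  # (x - mu)' Sigma^{-1} (x - mu)
    return norm * np.exp(-0.5 * quad)

g = np.linspace(-9.0, 9.0, 601)
dx = g[1] - g[0]
U, V = np.meshgrid(g, g)
pts = np.column_stack([U.ravel(), V.ravel()])
total = mvn_pdf(pts, mu, Sigma).sum() * dx ** 2
print(total)  # close to 1
```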
Let u′ = t′B. The moment generating function of the multivariate normal distribution becomes

M_Y(t) = E[e^{t′Y}] = E[e^{t′(BX+µ)}]
= exp[t′µ] E[exp[t′BX]] = exp[t′µ] E[exp[u′X]]
= exp[t′µ] Π_{i=1}^{n} E[exp[u_i X_i]] = exp[t′µ] Π_{i=1}^{n} m_{x_i}(u_i)
= exp[t′µ] Π_{i=1}^{n} exp[(1/2)u_i²] = exp[t′µ + (1/2)u′u]
= exp[t′µ + (1/2)t′BB′t] = exp[t′µ + (1/2)t′Σt]

Proposition 3.1
If X_1, X_2, ..., X_n follow a multivariate normal distribution, then the random variables are mutually independent if and only if Cov[x_i, x_j] = 0 for all i ≠ j.
Proof
If Cov[x_i, x_j] = 0 for i ≠ j, then Σ is a diagonal matrix, so |Σ| = σ_1²σ_2²···σ_n². Thus the density function is written as below.

Π_{i=1}^{n} 1/(2πσ_i²)^{1/2} · exp[−(x_i − µ_i)²/(2σ_i²)]

This has the shape of a product of one-dimensional normal density functions.
Q.E.D.

Example
Let Y = a + BX with

a = (2, 1)′, B =
| 2 3 |
| 4 5 |

X is a multivariate standard normal vector. Then

Y_1 = 2X_1 + 3X_2 + 2, Y_2 = 4X_1 + 5X_2 + 1

and

E[Y] = (2, 1)′, V[Y] = BB′ =
| 13 23 |
| 23 41 |
[Figure 5: Cov[x_1, x_2] = 0. Figure 6: Cov[x_1, x_2] = 0.6.]
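The example can be verified directly: BB′ gives V(Y_2) = 4² + 5² = 41, and a simulation reproduces the mean vector and covariance matrix. A sketch in NumPy:

```python
import numpy as np

# Sketch: Y = a + BX with X standard normal, so E[Y] = a and V[Y] = BB'.
rng = np.random.default_rng(2)
a = np.array([2.0, 1.0])
B = np.array([[2.0, 3.0],
              [4.0, 5.0]])

print(B @ B.T)                # [[13, 23], [23, 41]]

X = rng.standard_normal((1_000_000, 2))
Y = a + X @ B.T               # each row is one draw of Y = a + BX
print(Y.mean(axis=0))         # close to [2, 1]
print(np.cov(Y.T))            # close to B @ B.T
```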
Proposition 3.2
If the error term of OLS follows a normal distribution, then the OLS estimator follows a normal distribution.

Proof
Consider the regression model

y = Xβ + u, u ∼ N(0, σ²I_n)

The moment generating function of u is given by:
φ_u(t_u) = E[exp(t_u′u)] = exp((σ²/2) t_u′t_u)

From β̂ = β + (X′X)^{−1}X′u, the moment generating function of β̂ is:

φ_β̂(t_β̂) = E[exp(t_β̂′β̂)]
= E[exp(t_β̂′β + t_β̂′(X′X)^{−1}X′u)]
= exp(t_β̂′β) E[exp(t_β̂′(X′X)^{−1}X′u)]
= exp(t_β̂′β) φ_u[X(X′X)^{−1}t_β̂]
= exp(t_β̂′β + (1/2) t_β̂′[σ²(X′X)^{−1}] t_β̂)
which is the moment generating function of a N(β, σ²(X′X)^{−1}) random vector. Thus β̂ ∼ N(β, σ²(X′X)^{−1}).
Q.E.D
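Proposition 3.2 can also be checked by Monte Carlo: holding the design matrix X fixed and redrawing u many times, the simulated β̂ should have mean β and covariance σ²(X′X)^{−1}. A sketch in NumPy (the design and parameters below are arbitrary choices):

```python
import numpy as np

# Sketch: with fixed X and u ~ N(0, sigma^2 I), the OLS estimator
# beta_hat = beta + (X'X)^{-1} X'u has mean beta and cov sigma^2 (X'X)^{-1}.
rng = np.random.default_rng(3)
n, sigma = 50, 2.0
beta = np.array([1.0, -2.0])
X = np.column_stack([np.ones(n), rng.uniform(-1.0, 1.0, n)])  # fixed design

XtX_inv = np.linalg.inv(X.T @ X)
U = rng.normal(0.0, sigma, size=(100_000, n))   # one row of errors per replication
B_hat = beta + U @ X @ XtX_inv                  # row-wise (X'X)^{-1} X'u, shifted by beta

print(B_hat.mean(axis=0))                       # close to beta
print(np.cov(B_hat.T))                          # close to sigma^2 * (X'X)^{-1}
```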
Appendix
Absence of correlation does not imply independence. Such cases often arise, as below.

Example
X and Y are lotteries as below.

(X, Y) ∈ {(100, 0), (0, 100), (−100, 0), (0, −100)}

Each of the above outcomes has probability 25%. Thus E[X] = 0, E[Y] = 0, and E[XY] = 0. So E[XY] = E[X]E[Y], and X and Y are uncorrelated.
On the other hand, the joint probability and the product of the marginal probabilities differ:

P(X = 100, Y = 0) = 25% ≠ P(X = 100)P(Y = 0) = 12.5%.

So X and Y are not independent. Absence of correlation only means there is no linear relation between the variables. So independence is a stronger concept than absence of correlation.
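The lottery example is small enough to compute exhaustively. A sketch in Python:

```python
# Sketch of the appendix example: four equally likely lottery outcomes
# that are uncorrelated but not independent.
outcomes = [(100, 0), (0, 100), (-100, 0), (0, -100)]
p = 0.25

E_X = sum(x * p for x, _ in outcomes)
E_Y = sum(y * p for _, y in outcomes)
E_XY = sum(x * y * p for x, y in outcomes)
print(E_X, E_Y, E_XY)        # all 0, so Cov(X, Y) = E[XY] - E[X]E[Y] = 0

# Joint probability vs product of marginals for (X, Y) = (100, 0):
p_joint = sum(p for x, y in outcomes if x == 100 and y == 0)  # 0.25
p_x = sum(p for x, _ in outcomes if x == 100)                 # P(X = 100) = 0.25
p_y = sum(p for _, y in outcomes if y == 0)                   # P(Y = 0) = 0.5
print(p_joint, p_x * p_y)    # 0.25 vs 0.125, so X and Y are not independent
```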