Econometrics I 2016 TA session

TA session #4

Jun Sakamoto

May 17, 2016

Contents

1 Properties of normal distribution
2 Reproducing property of normal distribution
3 Multivariate normal distribution

1 Properties of normal distribution

The normal distribution is a very common probability distribution. Its density function is written as below.

f(x) = (1/(√(2π)σ)) exp[−(x − µ)²/(2σ²)]

If a random variable X follows a normal distribution, we write X ∼ N(µ, σ²).

Property 1.1

X is normally distributed with mean µ and variance σ². Y is a variable as below.

Y = aX + b

a and b are non-stochastic. Then Y follows a normal distribution as below.

Y ∼ N(aµ + b, a²σ²)

Proof

We derive the moment generating function of Y.

Figure 1: density function. Figure 2: distribution function.

M_Y(t) = E(e^{tY})
= E(e^{t(aX+b)})
= e^{bt} E(e^{atX})
= e^{bt} M_X(at)
= exp[(aµ + b)t + (1/2)a²σ²t²]

Q.E.D

Standard normal distribution

If Y is normally distributed with mean 0 and variance 1, then Y is said to follow the standard normal distribution. An X that follows a normal distribution can be transformed into a Y that follows the standard normal distribution as follows.

Y = (X − µ)/σ

The moment generating function of the standard normal distribution is written as below.

M_Y(t) = e^{t²/2}

Log normal distribution

The log normal distribution is very important in financial economics. For example, the stock price is assumed to follow a log normal distribution in many cases. The log normal distribution is also used in the Black-Scholes equation. If log X follows a normal distribution, then X follows a log normal distribution.

Proposition 1.2

The density function of the log normal distribution is written as below.

Figure 3: density function. Figure 4: distribution function.

f(x) = (1/(√(2π)σx)) exp[−(log x − µ)²/(2σ²)]

Proof

Let Y = log X, and let f(x) be the density function of X. Y follows a normal distribution with density f_Y(y).

f(x) = (d/dx)P(X < x) = (d/dx)P(e^Y < x) = (d/dx)P(Y < log x) = (dy/dx)(d/dy)P(Y < log x) = (1/x)f_Y(log x) = (1/(√(2π)σx)) exp[−(log x − µ)²/(2σ²)]

Q.E.D
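As a quick numerical sketch of Proposition 1.2 (not part of the original notes): if Y ∼ N(µ, σ²), then X = e^Y should satisfy the log-normal distribution function P(X ≤ c) = Φ((log c − µ)/σ). The parameter values and sample size below are illustrative choices.

```python
import math
import numpy as np

# Check X = exp(Y), Y ~ N(mu, sigma^2), against the log-normal CDF
# P(X <= c) = Phi((log c - mu) / sigma).  mu, sigma are illustrative.
rng = np.random.default_rng(0)
mu, sigma = 0.0, 0.5
x = np.exp(rng.normal(mu, sigma, size=500_000))

def lognormal_cdf(c, mu, sigma):
    # Standard normal CDF evaluated at the standardized log point.
    z = (math.log(c) - mu) / sigma
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

for c in (0.5, 1.0, 2.0):
    empirical = np.mean(x <= c)       # empirical P(X <= c)
    print(abs(empirical - lognormal_cdf(c, mu, sigma)) < 0.01)
```

With half a million draws the empirical and closed-form probabilities agree to well under one percentage point at each test point.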

2 Reproducing property of normal distribution

If random variables X and Y are independent, then the covariance of X and Y is 0, and thus E[XY] = E[X]E[Y].

Proposition 2.1

If X and Y are independent, then the moment generating function of X + Y is M_X(t)M_Y(t).

Proof

M_{X+Y}(t) = E(e^{t(X+Y)})
= E(e^{tX}e^{tY})
= E(e^{tX})E(e^{tY})
= M_X(t)M_Y(t)

Q.E.D

Proposition 2.2

Let X ∼ N(µ_X, σ_X²) and Y ∼ N(µ_Y, σ_Y²), and let X and Y be independent. Then X + Y follows


X + Y ∼ N(µ_X + µ_Y, σ_X² + σ_Y²)

Proof

By the independence of X and Y,

M_{X+Y}(t) = M_X(t)M_Y(t)
= exp[µ_X t + (1/2)σ_X²t²] exp[µ_Y t + (1/2)σ_Y²t²]
= exp[(µ_X + µ_Y)t + (1/2)(σ_X² + σ_Y²)t²]

Q.E.D

Proposition 2.3

If X = (X1, X2, ..., Xn) and the Xi are mutually independent and identically distributed as N(µ, σ²), then

(X1 + X2 + ... + Xn)/n ∼ N(µ, σ²/n)

Proof

By Property 1.1,

X1/n ∼ N(µ/n, σ²/n²)

By Proposition 2.2,

X1/n + X2/n ∼ N(2µ/n, 2σ²/n²)

Repeating this argument over all n terms, we get

Σ_{i=1}^n Xi/n ∼ N(nµ/n, nσ²/n²) = N(µ, σ²/n)

Q.E.D
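Propositions 2.2 and 2.3 can be checked by simulation (an illustrative sketch, not from the original notes; all parameter values below are arbitrary choices):

```python
import numpy as np

# Proposition 2.2: the sum of independent normals has mean mu_X + mu_Y
# and variance sigma_X^2 + sigma_Y^2.  Parameters are illustrative.
rng = np.random.default_rng(0)
mu_x, sx = 1.0, 2.0
mu_y, sy = -3.0, 1.5
n = 400_000

x = rng.normal(mu_x, sx, n)
y = rng.normal(mu_y, sy, n)
s = x + y

print(abs(s.mean() - (mu_x + mu_y)) < 0.02)   # mean  mu_X + mu_Y
print(abs(s.var() - (sx**2 + sy**2)) < 0.05)  # var   sigma_X^2 + sigma_Y^2

# Proposition 2.3: the mean of m iid N(mu, sigma^2) draws has variance sigma^2/m.
m = 10
means = rng.normal(mu_x, sx, size=(50_000, m)).mean(axis=1)
print(abs(means.var() - sx**2 / m) < 0.02)
```

Each printed check passes: the simulated moments of X + Y and of the sample mean match the stated normal parameters to within Monte Carlo error.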

3 Multivariate normal distribution

Let X = (X1, X2, ..., Xn)′, where the Xi are independent and standard normally distributed. The joint density function can be written as the product of the density functions. So,

f(x1, x2, ..., xn) = f(x1)f(x2)...f(xn)
= Π_{i=1}^n (1/(2π)^{1/2}) e^{−xi²/2}
= (1/(2π)^{n/2}) e^{−(1/2)x′x}

We assume X = (X1, X2, ..., Xn)′ is a vector of mutually independent standard normal random variables, and the random vector Y = (Y1, Y2, ..., Yn)′ is given as below.

Y = a + BX

where a is a non-stochastic n × 1 vector and B is a non-stochastic regular n × n matrix. The mean and variance of Y are

E[Y] = a, V[Y] = BB′


Define

µ ≡ a, Σ ≡ BB′

The density function of the multivariate normal distribution is written as below.

f(x) = (1/((2π)^{n/2}|Σ|^{1/2})) exp[−(1/2)(x − µ)′Σ⁻¹(x − µ)]

where µ is the mean vector and Σ is the variance-covariance matrix.

µ = (µ1, µ2, ..., µn)′

Σ =
| V(x1)        Cov(x1, x2)  ···  Cov(x1, xn) |
| Cov(x2, x1)  V(x2)             ...         |
| ...                       ...  ...         |
| Cov(xn, x1)  ···               V(xn)       |

Let u = B′t. The moment generating function of the multivariate normal distribution becomes

M_Y(t) = E[e^{t′Y}] = E[e^{t′(BX+µ)}]
= exp(t′µ)E[exp(t′BX)] = exp(t′µ)E[exp(u′X)]
= exp(t′µ)Π_{i=1}^n E[exp(u_i x_i)] = exp(t′µ)Π_{i=1}^n m_{x_i}(u_i)
= exp(t′µ)Π_{i=1}^n exp[(1/2)u_i²] = exp[t′µ + (1/2)u′u]
= exp[t′µ + (1/2)t′BB′t] = exp[t′µ + (1/2)t′Σt]

Proposition 3.1

If X1, X2, ..., Xn follow a multivariate normal distribution, then the random variables are mutually independent if and only if Cov[xi, xj] = 0 for all i ≠ j.

Proof

If Cov[xi, xj] = 0 for all i ≠ j, then Σ is a diagonal matrix, so |Σ| = σ1²σ2²···σn². Thus the density function is written as below.

Π_{i=1}^n (1/(2πσi²)^{1/2}) exp[−(xi − µi)²/(2σi²)]

This is a product of one-dimensional normal density functions, so the Xi are mutually independent. The converse holds because independence always implies zero covariance.

Q.E.D

Example

Let Y = a + BX, with

a = (2, 1)′, B = | 2 3 |
                 | 4 5 |

X is a multivariate standard normal vector. Then

Y1 = 2X1 + 3X2 + 2, Y2 = 4X1 + 5X2 + 1

and

E[Y] = (2, 1)′, V[Y] = BB′ = | 13 23 |
                              | 23 41 |

Figure 5: Cov[x1, x2] = 0. Figure 6: Cov[x1, x2] = 0.6.

Proposition 3.2

If the error term of OLS follows a normal distribution, then the OLS estimator follows a normal distribution.

Proof

A regression model:

y = Xβ + u, u ∼ N(0, σ²I_n)

The moment generating function of u is given by:

φ_u(t_u) = E[exp(t_u′u)] = exp[(σ²/2)t_u′t_u]

From β̂ = β + (X′X)⁻¹X′u, the moment generating function of β̂ is:

φ_β̂(t_β̂) = E[exp(t_β̂′β̂)]
= E[exp(t_β̂′β + t_β̂′(X′X)⁻¹X′u)]
= exp(t_β̂′β)E[exp(t_β̂′(X′X)⁻¹X′u)]
= exp(t_β̂′β)φ_u[X(X′X)⁻¹t_β̂]
= exp[t_β̂′β + (1/2)t_β̂′σ²(X′X)⁻¹t_β̂]

which corresponds to the moment generating function of a normal distribution. Thus

β̂ ∼ N(β, σ²(X′X)⁻¹)

Q.E.D
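Proposition 3.2 can be checked by simulation (an illustrative sketch, not from the original notes; the design matrix, β, σ, and sample sizes are arbitrary choices):

```python
import numpy as np

# With normal errors, beta_hat = (X'X)^{-1} X'y has mean beta and
# variance sigma^2 (X'X)^{-1}.  All parameter values are illustrative.
rng = np.random.default_rng(0)
n_obs, sigma = 50, 1.0
X = np.column_stack([np.ones(n_obs), rng.uniform(-1, 1, n_obs)])  # fixed regressors
beta = np.array([1.0, 2.0])

reps = 20_000
estimates = np.empty((reps, 2))
for r in range(reps):
    u = rng.normal(0, sigma, n_obs)                    # u ~ N(0, sigma^2 I)
    y = X @ beta + u
    estimates[r] = np.linalg.solve(X.T @ X, X.T @ y)   # OLS estimate

theory_cov = sigma**2 * np.linalg.inv(X.T @ X)
print(np.allclose(estimates.mean(axis=0), beta, atol=0.01))   # mean beta
print(np.allclose(np.cov(estimates.T), theory_cov, atol=0.005))  # var sigma^2 (X'X)^{-1}
```

The empirical mean and covariance of the 20,000 OLS estimates match β and σ²(X′X)⁻¹ up to Monte Carlo error.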


Appendix

Random variables can be uncorrelated without being independent. Such cases often exist, as below.

Example

X and Y are lotteries as below.

(X, Y) ∈ {(100, 0), (0, 100), (−100, 0), (0, −100)}

Each of the outcomes above has probability 25%. Thus

E[X] = 0, E[Y] = 0, and E[XY] = 0. So E[XY] = E[X]E[Y], and X and Y are uncorrelated.

On the other hand, the joint probability and the product of the marginal probabilities are

P(X = 100, Y = 0) = 25% ≠ P(X = 100)P(Y = 0) = 12.5%

So X and Y are not independent. Zero correlation only says that there is no linear relation, so independence is a stronger concept than zero correlation.
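The lottery example is small enough to compute exactly (a short sketch of the calculation above):

```python
import numpy as np

# Exact computation for the lottery example: X and Y are uncorrelated
# (E[XY] = E[X]E[Y]) yet clearly not independent.
outcomes = np.array([(100, 0), (0, 100), (-100, 0), (0, -100)], dtype=float)
p = 0.25  # each outcome has probability 25%
X, Y = outcomes[:, 0], outcomes[:, 1]

E_X = np.sum(p * X)
E_Y = np.sum(p * Y)
E_XY = np.sum(p * X * Y)
print(E_X, E_Y, E_XY)  # 0.0 0.0 0.0  -> uncorrelated

# Joint probability vs product of marginals:
P_joint = 0.25                                 # P(X = 100, Y = 0)
P_prod = np.mean(X == 100) * np.mean(Y == 0)   # P(X = 100) * P(Y = 0)
print(P_joint, P_prod)  # 0.25 0.125 -> not independent
```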

