Econometrics Ⅰ 2016 TA session

TA session #2

Jun Sakamoto

April 26, 2016

Contents

1 Matrix differentiation
2 Rank
3 Inverse matrix
4 Eigenvalue and Eigenvector

1 Matrix differentiation

Proposition 2.1

Let \(a\) and \(\beta\) be \(k \times 1\) vectors. Then
\[
\frac{\partial a'\beta}{\partial \beta} = a.
\]

Proof

We define \(a = (a_1, \ldots, a_k)'\) and \(\beta = (\beta_1, \ldots, \beta_k)'\). Then \(a'\beta = a_1\beta_1 + a_2\beta_2 + \cdots + a_k\beta_k\). Thus,
\[
\frac{\partial a'\beta}{\partial \beta}
= \begin{pmatrix}
\partial(a_1\beta_1 + \cdots + a_k\beta_k)/\partial\beta_1 \\
\vdots \\
\partial(a_1\beta_1 + \cdots + a_k\beta_k)/\partial\beta_k
\end{pmatrix}
= \begin{pmatrix} a_1 \\ \vdots \\ a_k \end{pmatrix}
= a.
\]
Q.E.D.

Proposition 2.2

Let \(a\) and \(\beta\) be \(k \times 1\) vectors. Then
\[
\frac{\partial \beta' a}{\partial \beta} = a.
\]

Proof

We define \(a = (a_1, \ldots, a_k)'\) and \(\beta = (\beta_1, \ldots, \beta_k)'\). Then \(\beta' a = \beta_1 a_1 + \beta_2 a_2 + \cdots + \beta_k a_k\). Thus,
\[
\frac{\partial \beta' a}{\partial \beta}
= \begin{pmatrix}
\partial(\beta_1 a_1 + \cdots + \beta_k a_k)/\partial\beta_1 \\
\vdots \\
\partial(\beta_1 a_1 + \cdots + \beta_k a_k)/\partial\beta_k
\end{pmatrix}
= \begin{pmatrix} a_1 \\ \vdots \\ a_k \end{pmatrix}
= a.
\]
Q.E.D.

Proposition 2.3

Let \(A\) be a \(k \times k\) matrix and \(\beta\) a \(k \times 1\) vector. Then
\[
\frac{\partial \beta' A \beta}{\partial \beta} = (A + A')\beta.
\]

Proof

We only consider the case where \(\beta\) is a \(2 \times 1\) vector and \(A\) is a \(2 \times 2\) matrix. Let
\(A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}\) and \(\beta = \begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix}\). Then \(\beta' A \beta = a\beta_1^2 + b\beta_1\beta_2 + c\beta_1\beta_2 + d\beta_2^2\). Thus,
\[
\frac{\partial \beta' A \beta}{\partial \beta}
= \begin{pmatrix}
\partial(a\beta_1^2 + b\beta_1\beta_2 + c\beta_1\beta_2 + d\beta_2^2)/\partial\beta_1 \\
\partial(a\beta_1^2 + b\beta_1\beta_2 + c\beta_1\beta_2 + d\beta_2^2)/\partial\beta_2
\end{pmatrix}
= \begin{pmatrix} 2a\beta_1 + (b+c)\beta_2 \\ (b+c)\beta_1 + 2d\beta_2 \end{pmatrix}
= \begin{pmatrix} 2a & b+c \\ b+c & 2d \end{pmatrix}
\begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix}
\]
\[
= \left( \begin{pmatrix} a & b \\ c & d \end{pmatrix}
+ \begin{pmatrix} a & c \\ b & d \end{pmatrix} \right)
\begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix}
= (A + A')\beta.
\]
Q.E.D.

Especially, when \(A\) is symmetric (i.e., \(A = A'\)),
\[
\frac{\partial \beta' A \beta}{\partial \beta} = 2A\beta.
\]
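The differentiation rules above can be checked numerically. The sketch below (using numpy; not part of the original notes) compares each analytic gradient with a central-difference approximation:

```python
import numpy as np

# Numerical check of the vector-differentiation rules:
#   d(a'b)/db = a   and   d(b'Ab)/db = (A + A')b.
rng = np.random.default_rng(0)
k = 4
a = rng.normal(size=k)
A = rng.normal(size=(k, k))
beta = rng.normal(size=k)

def num_grad(f, x, h=1e-6):
    """Central-difference gradient of a scalar-valued f at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

print(np.allclose(num_grad(lambda b: a @ b, beta), a, atol=1e-5))   # True
print(np.allclose(num_grad(lambda b: b @ A @ b, beta),
                  (A + A.T) @ beta, atol=1e-4))                     # True
```

Both identities hold for any choice of \(a\), \(A\), and \(\beta\); the random draws are only for illustration.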


2 Rank

We consider an \(n \times k\) matrix \(X\). The rank of a matrix is defined as the maximum number of linearly independent column vectors (or row vectors) in the matrix.

*Example* Let
\[
A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 5 \end{pmatrix}.
\]
\(A\) consists of the column vectors \((1, 2)'\), \((2, 4)'\), and \((3, 5)'\), and of the row vectors \((1, 2, 3)\) and \((2, 4, 5)\). The rank of this matrix is 2: the column vectors \((1, 2)'\) and \((2, 4)'\) are linearly dependent, so the maximum number of linearly independent column vectors is 2 (for instance \((1, 2)'\) and \((3, 5)'\)). Moreover, the row vectors \((1, 2, 3)\) and \((2, 4, 5)\) are linearly independent, so the number of linearly independent row vectors is also 2. Then rank(A) = 2.

Definition 2.1

(1) If rank(X) = min(n, k), then X is a full rank matrix.
(2) If X is square (i.e., n = k) and of full rank, then X is a regular matrix.
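The rank computation in the example above can be verified with numpy (a sketch, not part of the original notes; `numpy.linalg.matrix_rank` computes the rank numerically via the SVD):

```python
import numpy as np

# The matrix from the rank example above.
A = np.array([[1, 2, 3],
              [2, 4, 5]])
# Rank is 2: columns (1,2)' and (2,4)' are linearly dependent.
print(np.linalg.matrix_rank(A))  # 2
```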

3 Inverse matrix

Definition 2.2

Let X and Y be \(n \times n\) matrices. If \(XY = YX = I_n\), then the matrix X is the inverse matrix of Y. Moreover, if a matrix has an inverse matrix, it is said to be regular.

Definition 2.3

A matrix X has an inverse matrix if and only if the determinant of X is not 0.

Proposition 2.4

(1) If X is a regular matrix, then \((X^{-1})^{-1} = X\).
(2) If X and Y are regular matrices, then XY is a regular matrix and \((XY)^{-1} = Y^{-1}X^{-1}\).

Proof

Let X and Y be regular matrices.
(1) Obvious by the definition of the inverse matrix.
(2) \((XY)(Y^{-1}X^{-1}) = X(YY^{-1})X^{-1} = XIX^{-1} = XX^{-1} = I\).
Then XY is a regular matrix and \((XY)(Y^{-1}X^{-1}) = I\), so \(Y^{-1}X^{-1}\) is equal to \((XY)^{-1}\). Q.E.D.
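Proposition 2.4(2) can be checked numerically (a numpy sketch, not part of the original notes; the two matrices are arbitrary regular matrices chosen for illustration):

```python
import numpy as np

# Two small regular (invertible) matrices, each with determinant 1.
X = np.array([[2.0, 1.0],
              [1.0, 1.0]])
Y = np.array([[1.0, 2.0],
              [0.0, 1.0]])
# Check (XY)^{-1} = Y^{-1} X^{-1}.
lhs = np.linalg.inv(X @ Y)
rhs = np.linalg.inv(Y) @ np.linalg.inv(X)
print(np.allclose(lhs, rhs))  # True
```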

4 Eigenvalue and Eigenvector

Definition 2.4

Let a matrix A be square. If a scalar \(\lambda\) and a nonzero vector \(x\) satisfy the following equation, then they are called an eigenvalue and an eigenvector, respectively:
\[
Ax = \lambda x.
\]
The above equation can be written as
\[
Ax - \lambda x = 0, \qquad (A - \lambda I)x = 0.
\]
The condition for this equation to have a non-trivial solution is \(\det(A - \lambda I) = 0\). Moreover, the solutions of this equation are the eigenvalues, and vice versa. The eigenvalues \((\lambda_1, \lambda_2, \ldots, \lambda_n)\) correspond to the eigenvectors \((x_1, x_2, \ldots, x_n)\).

*Example* Let a matrix A be
\[
A = \begin{pmatrix} 8 & 1 \\ 4 & 5 \end{pmatrix}.
\]
Then,
\[
\det(A - \lambda I)
= \begin{vmatrix} 8-\lambda & 1 \\ 4 & 5-\lambda \end{vmatrix}
= (8-\lambda)(5-\lambda) - 4
= \lambda^2 - 13\lambda + 36
= (\lambda - 9)(\lambda - 4) = 0.
\]
Thus, \(\lambda_1 = 9\) and \(\lambda_2 = 4\). The eigenvector corresponding to \(\lambda_1\) satisfies
\[
\begin{pmatrix} -1 & 1 \\ 4 & -4 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = 0,
\]
thus \(x_1 = x_2\). Then, \(x_1 = t \begin{pmatrix} 1 \\ 1 \end{pmatrix}\) (t is an arbitrary nonzero constant). Also, the eigenvector corresponding to \(\lambda_2\) satisfies
\[
\begin{pmatrix} 4 & 1 \\ 4 & 1 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = 0,
\]
thus \(4x_1 = -x_2\). Then, \(x_2 = t \begin{pmatrix} 1 \\ -4 \end{pmatrix}\).


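As a quick check of this example (a numpy sketch, not part of the original notes; note that `numpy.linalg.eig` returns unit-length eigenvectors, so the columns of V are scalar multiples of \((1, 1)'\) and \((1, -4)'\)):

```python
import numpy as np

# The matrix from the eigenvalue example above.
A = np.array([[8.0, 1.0],
              [4.0, 5.0]])
lam, V = np.linalg.eig(A)                     # eigenvalues and eigenvectors
print(np.allclose(sorted(lam), [4.0, 9.0]))   # True: lambda = 9 and 4
print(np.allclose(A @ V, V @ np.diag(lam)))   # True: A v_i = lambda_i v_i
```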

Proposition 2.5

(1) The eigenvalues of a real symmetric matrix are real (and its eigenvectors can be chosen real).
(2) Eigenvectors \(x_i\) and \(x_j\) of a symmetric matrix corresponding to distinct eigenvalues \(\lambda_i \neq \lambda_j\) are orthogonal.

Proof

(1) Let A be a real symmetric matrix of order n. We take a solution of \(\det(A - \lambda I) = 0\): an eigenvalue \(\lambda_c\) with eigenvector \(x_c\), perhaps both with complex entries. By Definition 2.4,
\[
A x_c = \lambda_c x_c.
\]
Multiplying by the conjugate transpose \(\bar{x}_c'\) from the left, we have
\[
\bar{x}_c' A x_c = \lambda_c \bar{x}_c' x_c.
\]
Taking the conjugate transpose of both sides (using that A is real and symmetric),
\[
\bar{x}_c' A x_c = \bar{\lambda}_c \bar{x}_c' x_c.
\]
Therefore,
\[
(\lambda_c - \bar{\lambda}_c)(\bar{x}_c' x_c) = 0.
\]
Since \(\bar{x}_c' x_c \neq 0\), we have \(\lambda_c = \bar{\lambda}_c\). Thus \(\lambda_c\) is a real number. Q.E.D.

(2) Let \(\lambda_1\) and \(\lambda_2\) be distinct eigenvalues with eigenvectors \(x_1\) and \(x_2\). Multiplying \(A x_2 = \lambda_2 x_2\) by \(x_1'\) and \(A x_1 = \lambda_1 x_1\) by \(x_2'\) from the left,
\[
x_1' A x_2 = \lambda_2 x_1' x_2, \qquad x_2' A x_1 = \lambda_1 x_2' x_1.
\]
Therefore, since A is symmetric,
\[
\lambda_2 x_1' x_2 = x_1' A x_2 = (x_2' A x_1)' = \lambda_1 (x_2' x_1)' = \lambda_1 x_1' x_2.
\]
Hence,
\[
(\lambda_2 - \lambda_1) x_1' x_2 = 0.
\]
Thus, if \(\lambda_2 \neq \lambda_1\), then \(x_1' x_2 = 0\). Q.E.D.

Definition 2.5

Let a matrix A be square. If it satisfies the following equation, then it is called an idempotent matrix:
\[
A^2 = A.
\]

Proposition 2.6

(1) The eigenvalues of an idempotent matrix are 0 or 1.
(2) The rank of an idempotent matrix is equal to its trace.

Proof

(1) Let A be an idempotent matrix. By the definition of eigenvalues,
\[
\lambda x = Ax = AAx = \lambda Ax = \lambda^2 x.
\]
Therefore,
\[
\lambda x = \lambda^2 x, \qquad (\lambda - \lambda^2) x = 0, \qquad \lambda(1 - \lambda) x = 0.
\]
Since \(x \neq 0\), \(\lambda\) is 1 or 0. Q.E.D.

(2) We consider the case where A is nonnull. If the idempotent matrix A is \(n \times n\) with rank(A) = r, then there exist an \(n \times r\) matrix B and an \(r \times n\) matrix L such that \(A = BL\), with rank(B) = rank(L) = r. We have
\[
BLBL = A^2 = A = BL = B I_r L,
\]
and since B has full column rank and L has full row rank, it follows that
\[
LB = I_r.
\]
Thus, we find that
\[
\operatorname{trace}(A) = \operatorname{trace}(BL) = \operatorname{trace}(LB) = \operatorname{trace}(I_r) = r.
\]
Q.E.D.

Definition 2.6

Let A be an \(n \times n\) matrix. A is said to be diagonalizable if \(P^{-1} A P = D\), where P and D are an \(n \times n\) regular matrix and an \(n \times n\) diagonal matrix, respectively.

Proposition 2.7

Let A have linearly independent eigenvectors \(x_1, x_2, \ldots, x_n\), and let \(P = (x_1, x_2, \ldots, x_n)\). Then \(P^{-1} A P = D\) is a diagonal matrix whose nonzero elements are the eigenvalues of A.

Proof

\[
P^{-1} A P = P^{-1} A (x_1, x_2, \ldots, x_n)
= (P^{-1} A x_1, P^{-1} A x_2, \ldots, P^{-1} A x_n)
= (\lambda_1 P^{-1} x_1, \lambda_2 P^{-1} x_2, \ldots, \lambda_n P^{-1} x_n).
\]
Also, \(P^{-1} x_i\) is the i-th column vector of \(P^{-1} P = I\), so \(P^{-1} x_i\) is the unit vector \(e_i\). Then \((\lambda_1 P^{-1} x_1, \ldots, \lambda_n P^{-1} x_n) = (\lambda_1 e_1, \lambda_2 e_2, \ldots, \lambda_n e_n)\). Therefore,
\[
P^{-1} A P = \begin{pmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_n \end{pmatrix}.
\]

Q.E.D.

*Example*
\[
A = \begin{pmatrix} 1 & 3 \\ -2 & -4 \end{pmatrix}.
\]
So,
\[
\lambda_1 = -1, \quad x_1 = t \begin{pmatrix} 3 \\ -2 \end{pmatrix}
\qquad \text{and} \qquad
\lambda_2 = -2, \quad x_2 = t \begin{pmatrix} 1 \\ -1 \end{pmatrix}.
\]
Then a matrix P is
\[
P = \begin{pmatrix} 3 & 1 \\ -2 & -1 \end{pmatrix}, \qquad
P^{-1} = \begin{pmatrix} 1 & 1 \\ -2 & -3 \end{pmatrix}.
\]
Thus,
\[
P^{-1} A P = \begin{pmatrix} -1 & 0 \\ 0 & -2 \end{pmatrix}.
\]


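The diagonalization in this example can be verified with numpy (a sketch, not part of the original notes):

```python
import numpy as np

# The matrix from the diagonalization example above.
A = np.array([[1.0, 3.0],
              [-2.0, -4.0]])
# Columns of P are the eigenvectors x1 = (3, -2)' and x2 = (1, -1)'.
P = np.array([[3.0, 1.0],
              [-2.0, -1.0]])
D = np.linalg.inv(P) @ A @ P
# P^{-1} A P is diagonal with the eigenvalues -1 and -2 on the diagonal.
print(np.allclose(D, np.diag([-1.0, -2.0])))  # True
```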
