
Chapter 5: Parameterized Expectations

Contents

I Characterization of Approximate Solutions
  I.1 An Illustrative Example
  I.2 A General Framework
  I.3 Adaptive Learning

II Computation of the Approximate Solution
  II.1 Choice of T and ψ
  II.2 Iterative Computation of the Fixed Point
  II.3 Direct Computation of the Fixed Point
  II.4 Starting Points

III Applications
  III.1 Stochastic Growth with Non-Negative Investment
  III.2 The Benchmark Model
  III.3 Limited Participation Model of Money

Overview

• In the rational expectations equilibrium of a recursive DGE model, the agents' conditional expectations are time-invariant functions of the model's state variables.

• The PEA is an approach that approximates these conditional expectations by methods of functional approximation, using simple functions and employing Monte Carlo techniques.

• Some advantages:

1. The PEA does not suffer from the curse of dimensionality.
2. It deals easily with binding constraints.


I Characterization of Approximate Solutions

I.1 An Illustrative Example

The Model. The dynamics of the stochastic Ramsey model are governed by

K_{t+1} = Z_t f(K_t) + (1 - δ)K_t - C_t,
u'(C_t) = β E_t[u'(C_{t+1})(1 - δ + Z_{t+1} f'(K_{t+1}))],
Z_t = Z_{t-1}^ρ e^{ε_t}.

Conditional Expectations. The policy function for next-period capital can be written as a function of the model's state variables:

K_{t+1} = h^K(K_t, Z_t).

The conditional expectation is also a time-invariant function E of the model's state variables. Define the policy function for consumption:

C_t = h^C(K_t, Z_t) ≡ Z_t f(K_t) + (1 - δ)K_t - h^K(K_t, Z_t).

Therefore C_{t+1} = h^C(h^K(K_t, Z_t), Z_{t+1}). The expression inside the conditional expectation can then be written as

φ(K_t, Z_t, ε_{t+1}) ≡ u'( h^C( h^K(K_t, Z_t), Z_t^ρ e^{ε_{t+1}} ) ) × (1 - δ + Z_t^ρ e^{ε_{t+1}} f'( h^K(K_t, Z_t) )),

where h^K(K_t, Z_t) = K_{t+1} and the argument of u' is C_{t+1}.

Since the innovations are normally distributed, the conditional expectation can be written as

E(K_t, Z_t) ≡ ∫_{-∞}^{∞} φ(K_t, Z_t, ε_{t+1}) (1/√(2πσ²)) e^{-ε_{t+1}²/(2σ²)} dε_{t+1}.

Suppose we know E(K_t, Z_t). Given K_t and Z_t, we obtain the policy functions for all t as follows:

u'(C_t) = β E(K_t, Z_t),
K_{t+1} = Z_t f(K_t) + (1 - δ)K_t - C_t,
Z_t = Z_{t-1}^ρ e^{ε_t}.

Approximation of E. The purpose of the PEA is to approximate E(K_t, Z_t) by a simple function ψ(K_t, Z_t). For instance, Den Haan and Marcet (1990) use ψ(K_t, Z_t) = γ_1 K_t^{γ_2} Z_t^{γ_3}. The idea behind the choice of the parameters is as follows.

• Let y denote a random variable that we wish to forecast using observations on (x_1, x_2, ..., x_n).

• We seek a function h that minimizes the expected quadratic error E[(y - h(x_1, x_2, ..., x_n))²].

• The solution is the conditional expectation (see Sargent [1987]):

E[y | x_1, x_2, ..., x_n] = arg min_h E[(y - h(x_1, x_2, ..., x_n))²].
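This minimum-MSE characterization can be checked numerically. A minimal Python sketch (our own construction; the data-generating process E[y | x] = x² is an illustrative assumption):

```python
import numpy as np

# The conditional expectation E[y | x] = x^2 attains the smallest
# expected squared forecast error among all forecast functions h(x).
rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = x ** 2 + rng.normal(size=x.size)      # E[y | x] = x**2, unit noise variance

def mse(h):
    """Sample analogue of E[(y - h(x))^2]."""
    return np.mean((y - h(x)) ** 2)

# any other forecast function does worse than the conditional mean
assert mse(lambda x: x ** 2) < mse(lambda x: np.abs(x))
assert mse(lambda x: x ** 2) < mse(lambda x: x ** 2 + 0.5)
```

The sample MSE of the conditional mean approaches the noise variance, while any other function adds the expected squared deviation from E[y | x].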


The parameter choice mimics this definition.

• Some additional notation:

s_t = [C_t, K_t, K_{t+1}, Z_t] (the model's variables),
w_t = [K_t, Z_t] (the state variables),
γ = [γ_1, γ_2, ..., γ_p] (p-vector of parameters of the function that approximates E).

• Given K_0, Z_0, and {ε_t}_{t=0}^T, we obtain a time path {s_t}_{t=0}^T. Since the time path depends on γ, we write s_t(γ) and w_t(γ).

• Let φ(s_{t+1}(γ)) = u'(C_{t+1})(1 - δ + Z_{t+1} f'(K_{t+1})). Define the map Γ : R^p → R^p by

Γ(γ) := arg min_{γ'} (1/T) Σ_{t=0}^{T-1} [φ(s_{t+1}(γ)) - ψ(γ', w_t(γ))]².

• At the fixed point γ_{p,T} of this mapping, γ_{p,T} = Γ(γ_{p,T}), the function ψ(γ_{p,T}, ·) is an approximation of the model's conditional expectation E_t[φ(·)].

I.2 A General Framework

• The vector of the model's variables: s_t ∈ U ⊂ R^{n(s)}.

• Consider two further subsets of the variables in s_t:

1. All exogenous stochastic processes with the Markov property: z_t ∈ Z ⊂ R^{n(z)}.

2. The model's state variables: w_t ∈ X ⊂ R^{n(w)}, w_t = [x_t, z_t], where x_t denotes the model's endogenous state variables.

• Two vector-valued functions:

1. φ : U → V ⊂ R^k, where k is the size of x_t.

2. The function g collects the model's equilibrium conditions (e.g., Euler equations, resource constraints, and so on):

g(E_t[φ(s_{t+1})], s_t) = 0.

• Due to the recursive nature of the model, there is a time-invariant conditional expectations function E:

E := arg min_{ψ: X→V} E[(φ(s_{t+1}) - ψ(w_t))'(φ(s_{t+1}) - ψ(w_t))].

• E(w_t) is part of the solution of g(E(w_t), s_t) = 0.


Algorithm 5.1.1 (PEA)

Parameterized Expectations Algorithm: Approximation of the solution of g(·, st) = 0.

Step 1: Choose a function ψ(γ, ·) : X → V that depends on the vector of parameters γ ∈ R^p.

Step 2: Draw a sequence of shocks {z_t}_{t=0}^T.

Step 3: Solve the equation g(ψ(γ, w_t(γ)), s_t(γ)) = 0 for the time path {s_t(γ)}_{t=0}^T.

Step 4: Find the fixed point γ_{p,T} = Γ(γ_{p,T}) of the map Γ defined by

Γ(γ) := arg min_{γ'} (1/T) Σ_{t=0}^{T-1} ‖φ(s_{t+1}(γ)) - ψ(γ', w_t(γ))‖²,

where ‖·‖ denotes the Euclidean norm.

Step 5: Decide whether ψ(γ_{p,T}, ·) is close to the true but unknown solution E. If not, change either T or ψ(·) and return to Step 1.
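Steps 1 to 5 can be sketched for the Ramsey model of Section I.1 in the special case δ = 1 with log utility, where the exact solution K' = αβZK^α is known and ψ(γ, K, Z) = exp(γ_1 + γ_2 ln K + γ_3 ln Z) nests the true conditional expectation. The Python sketch below is our own illustration; the parameter values, the damping weight λ, and the shortcut of running the least-squares step in logs (exact here, because ψ is log-linear) are assumptions, not from the text:

```python
import numpy as np

# PEA sketch for the Ramsey model with delta = 1 and log utility. The
# exact coefficients of psi(gamma,K,Z) = exp(g1 + g2 ln K + g3 ln Z)
# are known, so the fixed point can be checked against them.
alpha, beta, rho, sigma = 0.36, 0.96, 0.95, 0.01
T, lam = 2000, 0.5                         # sample size, damping weight

rng = np.random.default_rng(1)
eps = rng.normal(0.0, sigma, T + 1)        # one fixed draw of shocks

def simulate(gamma):
    """Step 3: given gamma, solve for the path {C_t, K_t, Z_t}."""
    K = np.empty(T + 2); Z = np.empty(T + 2); C = np.empty(T + 1)
    K[0], Z[0] = (alpha * beta) ** (1.0 / (1.0 - alpha)), 1.0
    for t in range(T + 1):
        psi = np.exp(gamma @ [1.0, np.log(K[t]), np.log(Z[t])])
        C[t] = 1.0 / (beta * psi)          # u'(C_t) = beta * psi
        K[t + 1] = Z[t] * K[t] ** alpha - C[t]
        assert K[t + 1] > 0, "path exploded; try a better gamma"
        Z[t + 1] = Z[t] ** rho * np.exp(eps[t])
    return K, Z, C

def Gamma(gamma):
    """One evaluation of the map: regress ln phi(s_{t+1}) on x_t.
    Exact NLLS here, because psi is log-linear in gamma."""
    K, Z, C = simulate(gamma)
    phi = (1.0 / C[1:]) * alpha * Z[1:T + 1] * K[1:T + 1] ** (alpha - 1.0)
    X = np.column_stack([np.ones(T), np.log(K[:T]), np.log(Z[:T])])
    return np.linalg.lstsq(X, np.log(phi), rcond=None)[0]

# Step 4: damped iteration gamma <- (1-lam)*gamma + lam*Gamma(gamma)
gamma_true = np.array([-np.log(beta * (1.0 - alpha * beta)), -alpha, -1.0])
gamma = gamma_true + 0.05                  # perturbed starting point
for _ in range(200):
    gamma = (1.0 - lam) * gamma + lam * Gamma(gamma)
```

At the fixed point the recovered coefficients coincide with the analytic ones, since in this special case the conditional expectation is exactly log-linear in the states. The damping (discussed in Section II.2) is needed because the undamped iteration need not converge.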

I.3 Adaptive Learning

Models of Learning

• The rational expectations equilibrium presupposes two requirements:

– individual rationality,
– mutual consistency of perceptions of the environment.

The agents in the model use the true conditional expectations function for their forecasts.

• Models of “learning” depict economic agents as econometricians that use current and past observations to estimate the parameters of the economy’s law of motion.

• Agents that act like econometricians are not as smart as those that populate the rational expectations equilibrium.

• We sketch an adaptive learning process whose stationary point is the approximate solution.

Recursive Least Squares. Assume that we want to estimate the linear equation

y_i = γ'x_i + ε_i,  i = 1, 2, ..., t,

where γ is a p-dimensional column vector of parameters. Put y = [y_1, y_2, ..., y_t]' and X = [x_1, x_2, ..., x_t]'. The least squares estimator is

γ_t = (X'X)^{-1}X'y = (Σ_{i=1}^t x_i x_i')^{-1} (Σ_{i=1}^t x_i y_i).


Suppose we have estimated γ_{t-1} and obtain the additional observation (y_t, x_{t1}, ..., x_{tp}). We update the estimate as follows:

γ_t = γ_{t-1} + (1/t) R_t^{-1} x_t (y_t - γ_{t-1}'x_t),
R_t = R_{t-1} + (1/t)(x_t x_t' - R_{t-1}),

where R_t = (1/t) Σ_{i=1}^t x_i x_i'.

Learning Dynamics and the PEA.

• Suppose the agents in our model economy were not able to compute the true conditional expectations function E.

• Let ψ(γt,·) denote the agents’ forecast of φ(·) using recent estimates γt. Note that the policy function depends on t.

• The entire history of the economy, ˜s_t and ˜w_t, depends upon the sequence of estimates {γ_τ}_{τ=0}^t.

• Assume that agents use non-linear least squares to estimate γ: at period t they choose γ_t to minimize

(1/t) Σ_{i=0}^{t-1} [φ(˜s_{i+1}) - ψ(γ_t, ˜w_i)]².

• Linearize ψ(γ_t, ·) at the previous estimate γ_{t-1}:

ψ(γ_t, ·) ≈ ψ(γ_{t-1}, ·) + ∇ψ(γ_{t-1}, ·)(γ_t - γ_{t-1}).

Then the previous problem can be written as

min_{γ_t} (1/t) Σ_{i=0}^{t-1} [y_i - γ_t'w_i]², where
y_i := φ(˜s_{i+1}) - ψ(γ_{t-1}, ˜w_i) + ∇ψ(γ_{t-1}, ˜w_i)γ_{t-1},
w_i := ∇ψ(γ_{t-1}, ˜w_i)'.

• We are thus able to apply the recursive formulas to formulate the dynamics of our model under non-linear least squares learning¹:

γ_t = γ_{t-1} + (1/t) R_t^{-1} ∇ψ(γ_{t-1}, ˜w_{t-1})'(φ(˜s_t) - ψ(γ_{t-1}, ˜w_{t-1})),
R_t = R_{t-1} + (1/t)(∇ψ(γ_{t-1}, ˜w_{t-1})'∇ψ(γ_{t-1}, ˜w_{t-1}) - R_{t-1}),
0 = g(ψ(γ_t, ˜w_t), ˜s_t).

¹Marcet and Marshall (1994) show that this process converges to γ_{p,T} for t → ∞ when the absolute values of the eigenvalues of the Jacobian of Γ are less than one in a neighborhood of γ_{p,T}.


II Computation of the Approximate Solution

This section considers each step of the PEA algorithm in turn.

II.1 Choice of T and ψ

Sample Size. The accuracy of the approximation increases with T.

• Suppose Ω_T := {s_t}_{t=0}^T were generated from the true function E. Then Ω_T is a sample drawn from the model's ergodic distribution.

• The larger Ω_T is, the better it represents the properties of the underlying distribution.

• Duffy and McNelis (2000) use T = 2000; Den Haan and Marcet (1990) T = 2500; Marcet and Lorenzoni (1999) T = 10000; Christiano and Fisher (2000) T = 100000.

• To eliminate the influence of the initial value w_0, one can disregard the first 0.5 or 1.0 percent of the data points.

Function Approximation (choice of ψ).

• ψ : R^{n(w)} → R^k is a vector-valued function. Its j-th coordinate is a map ψ_j : X ⊂ R^{n(w)} → R.

• In our applications, we use a complete set of polynomials of degree p in the n(w) variables (w_{1t}, w_{2t}, ..., w_{n(w)t}) to build ψ_j. The members of the set are either products of monomials (w_{1t}^{k_1} w_{2t}^{k_2} ··· w_{n(w)t}^{k_{n(w)}}) or Chebyshev polynomials, where Σ_{i=1}^{n(w)} k_i ≤ p.

• Usually we do not know the boundaries of X. Monomials are easy to use, since their domain is the entire real line, but they suffer from multicollinearity.

• The domains of orthogonal families of polynomials (e.g., Chebyshev polynomials) are certain compact intervals of the real line, so we must specify a compact region X. Note that the orthogonality of Chebyshev polynomials in discrete applications pertains to the zeros of these polynomials.

II.2 Iterative Computation of the Fixed Point

Convergence. Consider the actual computation of the parameters of expectations function ψ(γ, ·).

• Finding the fixed point by iterating on the mapping Γ defined in Step 4 of Algorithm 5.1.1, γs+1= Γ(γs), s = 0, 1, . . . ,

starting with an arbitrary γ0.

• However, this procedure need not converge even if the fixed point exists.


• Den Haan and Marcet (1990) as well as Marcet and Marshall (1994) propose to iterate on

γ_{s+1} = (1 - λ)γ_s + λΓ(γ_s),  s = 0, 1, ...,

for some λ ∈ (0, 1] to foster convergence.

• Indeed, if the related adaptive learning model is locally stable, there are starting values γ_0 such that the iterations converge for a sufficiently small λ.

Non-Linear Least Squares.

• In the iterating procedure, we have to solve

min_γ (1/T) Σ_{t=0}^{T-1} ‖φ(s_{t+1}(γ)) - ψ(γ, w_t(γ))‖².

• This breaks down into k separate non-linear least squares problems:

min_γ (1/T) Σ_{t=0}^{T-1} Σ_{j=1}^{k} [φ_j(s_{t+1}(γ)) - ψ_j(γ_j, w_t(γ))]²
≡ min_γ Σ_{j=1}^{k} (1/T) Σ_{t=0}^{T-1} [φ_j(s_{t+1}(γ)) - ψ_j(γ_j, w_t(γ))]²
= Σ_{j=1}^{k} min_{γ_j} (1/T) Σ_{t=0}^{T-1} [φ_j(s_{t+1}(γ)) - ψ_j(γ_j, w_t(γ))]²,

where γ_j := [γ_{1j}, ..., γ_{pj}], j = 1, 2, ..., k.

• The damped Gauss-Newton method can be applied to this problem.

• In the early stages of the iterations it is not necessary to compute the minimum with great accuracy. Choosing generous stopping criteria can speed up the algorithm considerably.

II.3 Direct Computation of the Fixed Point

The fixed point of the PEA can also be computed directly, as the solution of a complicated system of k × p non-linear equations.

• The k × p first-order conditions for the minimization problem in Step 4 of Algorithm 5.1.1 may be written as

0 = -(2/T) Σ_{t=0}^{T-1} [φ_j(s_{t+1}(γ)) - ψ_j(γ'_j, w_t(γ))] (∂ψ_j/∂γ_{ij})(γ'_j, w_t(γ))

for all i = 1, 2, ..., p and j = 1, ..., k, where γ' denotes the minimizer given the path simulated under γ.


• The iterative procedure of the previous subsection solves this problem for γ' and stops when γ' = γ. Therefore, provided w_t(γ) is easy to obtain, the zeros of the following non-linear system of equations in γ are an equivalent characterization of the approximate model solution:

0 = -(2/T) Σ_{t=0}^{T-1} [φ_j(s_{t+1}(γ)) - ψ_j(γ_j, w_t(γ))] (∂ψ_j/∂γ_{ij})(γ_j, w_t(γ)) for all i = 1, 2, ..., p and j = 1, ..., k.

• It is difficult to find good starting points γ0.

II.4 Starting Points

Good starting values are needed. There are several ways to obtain them.

Homotopy.

• In mathematics, two vector-valued functions f : X → Y and g : X → Y are said to be homotopic if f can be continuously deformed into g.

• A function h(x, s) that equals f for s = 0 and g for s = 1 is called a homotopy. For instance, h(x, s) := (1 - s)f(x) + sg(x).

• The idea behind homotopy methods is to construct a path in X × R that takes us from a known solution to the solution of the problem of interest.

• Suppose we want to solve g(x) = 0 and know the solution x_0 of f(x) = 0. Simple continuation methods use the linear homotopy: form an increasing sequence 0 < s_1 < s_2 < ··· < 1 and solve the related sequence of problems h(x, s_i) = 0. If the homotopy path in X × R has peaks and troughs along the s dimension, simple continuation methods can fail (see Judd [1998]).
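A simple continuation method along the linear homotopy can be sketched as follows (the functions f and g, the step grid, and the use of Newton's method with a numerical derivative as the inner solver are hypothetical illustrations):

```python
import numpy as np

# Simple continuation: solve g(x) = 0 by deforming from f(x) = 0 with a
# known root, via the linear homotopy h(x, s) = (1 - s) f(x) + s g(x).
f = lambda x: x - 1.0                 # known solution x0 = 1
g = lambda x: x**3 + 2.0 * x - 5.0    # problem of interest

def newton(h, x, tol=1e-12):
    """Newton's method with a central-difference derivative."""
    for _ in range(100):
        dh = (h(x + 1e-7) - h(x - 1e-7)) / 2e-7
        x_new = x - h(x) / dh
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("Newton failed")

x = 1.0                               # start at the root of f
for s in np.linspace(0.1, 1.0, 10):   # increasing 0 < s1 < ... < 1
    x = newton(lambda t: (1 - s) * f(t) + s * g(t), x)

assert abs(g(x)) < 1e-8               # x now solves g(x) = 0
```

Each sub-problem starts from the solution of the previous one, so Newton's method always operates near a root; this is exactly what makes the continuation robust when the direct problem has no good starting point.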

• In the case of DGE models, we have an analytical solution for the simple stochastic growth model with δ = 1 and a log utility function. We can set this solution as f(·) in order to solve the model with δ ∈ [0, 1) and a CRRA utility function.

• For more complicated models, it is less obvious where to start. There are search methods from automatic programming, machine learning, game theory, and numerical optimization.

Genetic Algorithms. (Omitted.)

Using the Linear Policy Functions.

• One can use the linear policy function (from Chapter 2) to trace out a path for the vector s_t and solve the non-linear regression problem

min_{γ_0} (1/T) Σ_{t=0}^{T-1} [φ(s_{t+1}) - ψ(γ_0, w_t)]².

• At this point one can apply econometric tests to check whether the chosen degree of ψ is appropriate. For example, t-tests.


III Applications

III.1 Stochastic Growth with Non-Negative Investment

The Model. The model's F.O.C.s are as follows:

0 = C_t^{-η} - μ_t - βE_t[C_{t+1}^{-η}(1 - δ + αZ_{t+1}K_{t+1}^{α-1}) - μ_{t+1}(1 - δ)],
0 = Z_tK_t^α + (1 - δ)K_t - C_t - K_{t+1},
0 = μ_t[K_{t+1} - (1 - δ)K_t],
0 ≤ μ_t,
0 ≤ K_{t+1} - (1 - δ)K_t.

Implementation.

• In the case of δ = 1 and η = 1, the analytic solutions for the policy functions are K_{t+1} = αβZ_tK_t^α and C_t = (1 - αβ)Z_tK_t^α. The conditional expectation then has the closed form

E(K_t, Z_t) = 1/((1 - αβ)βK_t^αZ_t).

• Consider now the general case δ ∈ [0, 1] and η > 0.

• Use a complete set of base functions (monomials or Chebyshev polynomials).

– The first-degree complete polynomial with monomial base functions is

ψ(γ, K_t, Z_t) := exp(γ_1 + γ_2 ln K_t + γ_3 ln Z_t).

– The second-degree polynomial is

ψ(γ, K_t, Z_t) := exp(γ_1 + γ_2 ln K_t + γ_3 ln Z_t + γ_4 (ln K_t)² + γ_5 ln K_t ln Z_t + γ_6 (ln Z_t)²).

– In the case of a base of Chebyshev polynomials,

ψ(γ, K_t, Z_t) := Σ_{i=0}^{n_1} Σ_{j=0}^{n_2} γ_{ij} T_i(X_1(ln K_t)) T_j(X_2(ln Z_t)),

where X_1 : [a, b] → [-1, 1] and X_2 : [c, d] → [-1, 1].
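Evaluating the Chebyshev parameterization requires mapping ln K_t and ln Z_t into [-1, 1] and applying the recurrence T_{k+1}(x) = 2xT_k(x) - T_{k-1}(x). A minimal sketch (function names and the bounds a, b, c, d are our assumptions; in practice the bounds must cover the region the simulation visits):

```python
import numpy as np

# Tensor-product Chebyshev approximation
# psi(gamma, K, Z) = sum_ij gamma_ij T_i(X1(ln K)) T_j(X2(ln Z)).

def to_unit(x, lo, hi):
    """Linear map X : [lo, hi] -> [-1, 1]."""
    return 2.0 * (x - lo) / (hi - lo) - 1.0

def cheb(n, x):
    """T_0(x), ..., T_n(x) via T_{k+1} = 2*x*T_k - T_{k-1}."""
    T = np.empty(n + 1)
    T[0] = 1.0
    if n > 0:
        T[1] = x
    for k in range(1, n):
        T[k + 1] = 2.0 * x * T[k] - T[k - 1]
    return T

def psi(gamma, K, Z, a, b, c, d):
    n1, n2 = gamma.shape[0] - 1, gamma.shape[1] - 1
    TK = cheb(n1, to_unit(np.log(K), a, b))
    TZ = cheb(n2, to_unit(np.log(Z), c, d))
    return TK @ gamma @ TZ                # sum_ij gamma_ij T_i T_j

gamma = np.zeros((3, 3)); gamma[1, 1] = 1.0
# with only gamma_11 set, psi reduces to X1(ln K) * X2(ln Z)
val = psi(gamma, np.exp(0.5), np.exp(-0.5), -1.0, 1.0, -1.0, 1.0)
assert np.isclose(val, 0.5 * (-0.5))
```

The same evaluator works for the monomial bases as well, once the basis vector TK, TZ is replaced by powers of ln K_t and ln Z_t.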

• The Kuhn-Tucker conditions are implemented in the simulation as follows:

1. Given (K_t, Z_t), we assume μ_t = 0, which means the non-negativity constraint is not binding. Then

C_t = (βψ(γ, K_t, Z_t))^{-1/η},
K_{t+1} = Z_tK_t^α + (1 - δ)K_t - C_t.


2. We test the non-negativity constraint:

– If K_{t+1} - (1 - δ)K_t < 0, we set

C_t = Z_tK_t^α (< the C_t from step 1),
K_{t+1} = (1 - δ)K_t,
μ_t = C_t^{-η} - βψ(γ, K_t, Z_t).
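The two-step implementation can be sketched as a single simulation step (the parameter values and the helper name `step` are illustrative assumptions; `psi_val` stands for ψ(γ, K_t, Z_t)):

```python
import numpy as np

# One simulation step for the model with non-negative investment: try
# mu_t = 0 first, and activate the constraint when investment would be
# negative.

def step(K, Z, psi_val, alpha=0.36, beta=0.96, delta=0.1, eta=2.0):
    C = (beta * psi_val) ** (-1.0 / eta)     # from u'(C_t) = beta*psi
    K_next = Z * K ** alpha + (1.0 - delta) * K - C
    mu = 0.0
    if K_next - (1.0 - delta) * K < 0.0:     # investment negative: bind
        C = Z * K ** alpha                   # consume all of output
        K_next = (1.0 - delta) * K
        mu = C ** (-eta) - beta * psi_val    # multiplier turns positive
    return C, K_next, mu

# a low expectations value makes consumption so high that the
# constraint binds
C, K_next, mu = step(1.0, 1.0, psi_val=0.2)
assert K_next == 0.9 and mu > 0.0
```

With a larger `psi_val`, consumption is lower, investment is positive, and the step returns μ_t = 0.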

• If K_{t+1} < 0, terminate the simulation and look for a better starting value or use a genetic algorithm.

Results.


III.2 The Benchmark Model

Results.


III.3 Limited Participation Model of Money

Motivation.

• Liquidity effect: in the textbook IS-LM model, an expansionary monetary policy shock lowers the nominal rate R
⇒ the real rate r falls, since inflationary expectations do not adjust immediately
⇒ investment I rises
⇒ aggregate spending rises.

• Most monetary DGE models do not reproduce this liquidity effect.

• Christiano, Eichenbaum, and Evans (1997). Limited participation model.

– A monetary model with the liquidity and the inflationary expectations effects. – A rudimentary banking sector.

– Households with CIA constraint can lend part of the financial wealth Mt to the banks at the gross nominal interest rate qt.

– The firms pay wages to households before they sell their output. To finance them, they borrow money form the banks.

– The government injects money into the economy via banks.

– The crucial assumption is that banks receive the monetary transfer after households have decided about the volume of their banking deposits (limitation to participate). The Banking Sector.

• Bt: deposit from households at the beginning of period t.

• Mt+1− Mt: Government transfers, where Mt are beginning-of-period money balances.

• B_t + (M_{t+1} - M_t): funds available for lending to firms.

• Bank profits:

D_t^B = q_t(B_t + M_{t+1} - M_t)/P_t - q_tB_t/P_t = q_t(M_{t+1} - M_t)/P_t.

Producers.

• Y_t = Z_t(A_tN_t)^{1-α}K_t^α.

• A_t grows deterministically at the rate a - 1 ≥ 0.

• Z_t = Z_{t-1}^{ρ_Z} e^{ε_t^Z}, ε_t^Z ~ N(0, σ_Z).

• Producers hire workers at the nominal wage rate W_t and capital services at the real rental rate r_t. Since they have to pay workers in advance, they borrow W_tN_t at the nominal rate of interest q_t - 1 from the banks. Their profits are

D_t^P = Y_t - q_t(W_t/P_t)N_t - r_tK_t.


• The F.O.C.s, with w_t := W_t/(A_tP_t) and k_t := K_t/A_t, are

q_tw_t = (1 - α)Z_tN_t^{-α}k_t^α,
r_t = αZ_tN_t^{1-α}k_t^{α-1}.

Money Supply. The money supply rule, with μ_t := M_{t+1}/M_t, is

μ_t = μ^{1-ρ_μ}μ_{t-1}^{ρ_μ}e^{ε_t^μ},  ε_t^μ ~ N(0, σ_μ).

Households.

• Households maximize their lifetime utility subject to the budget and cash-in-advance constraints:

max_{C_t, N_t, K_{t+1}, X_{t+1}}  E_0 Σ_{t=0}^∞ β^t (1/(1 - η)) [(C_t - θA_tN_t^ν)^{1-η} - 1],

s.t.  K_{t+1} - K_t + (X_{t+1} - X_t + B_{t+1} - B_t)/P_t ≤ (W_t/P_t)N_t + (r_t - δ)K_t + (q_t - 1)(B_t/P_t) + D_t^B - C_t,

C_t ≤ (X_t + W_tN_t)/P_t.

• The F.O.C.s in stationarized form are as follows:

λ_t + ξ_t = (c_t - θN_t^ν)^{-η},
N_t = (w_t/(θν))^{1/(ν-1)},
λ_t = βa^{-η} E_t[λ_{t+1}(1 - δ + αZ_{t+1}N_{t+1}^{1-α}k_{t+1}^{α-1})],
λ_t = βa^{-η} E_t[λ_{t+1}q_{t+1}/π_{t+1}],
λ_t = βa^{-η} E_t[(λ_{t+1} + ξ_{t+1})/π_{t+1}],
0 = ξ_t(x_t/(aπ_t) + w_tN_t - c_t),

where y_t := Y_t/A_t, c_t := C_t/A_t, k_t := K_t/A_t, w_t := W_t/(A_tP_t), π_t := P_t/P_{t-1}, λ_t := Λ_tA_t^η, x_t := X_t/(A_{t-1}P_{t-1}), m_t := M_t/(A_{t-1}P_{t-1}), and ξ_t := Ξ_tA_t^η. Λ_t and Ξ_t are the Lagrange multipliers on the budget and CIA constraints, respectively.

• Since w_tN_t = (B_t + M_{t+1} - M_t)/(A_tP_t) = m_{t+1} - x_t/(aπ_t), the CIA constraint can be described as follows:

c_t = m_{t+1}, if ξ_t > 0,
c_t ≤ m_{t+1}, if ξ_t = 0,
m_{t+1} = μ_t m_t/(aπ_t).

• In equilibrium, the household's budget constraint reads

a k_{t+1} = Z_tN_t^{1-α}k_t^α + (1 - δ)k_t - c_t.


Stationary Equilibrium.

• Z_t = 1 and μ_t = μ for all t in the stationary equilibrium, so that π = μ/a.

• From the Euler equation for capital,

1 = βa^{-η}(1 - δ + α y/k),  i.e.  y/k = (a^η - β(1 - δ))/(αβ).

• The Fisher equation is implied: q = π(1 - δ + r).

• From the Euler equations for money holdings, ξ = λ(q - 1). Accordingly, the CIA constraint binds in equilibrium if the nominal interest rate is positive, q - 1 > 0. This condition holds if μ > βa^{1-η}.

• Finally,

N^{ν-1} = (1/q)((1 - α)/(νθ))(y/N).
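The stationary equilibrium relations can be evaluated directly; a small sketch (the parameter values are illustrative assumptions) confirming that the CIA constraint binds exactly when μ > βa^{1-η}:

```python
import numpy as np

# Stationary equilibrium of the limited participation model: compute
# y/k, pi, and q, then check the binding condition for the CIA
# constraint.
alpha, beta, delta, eta, a, mu = 0.36, 0.99, 0.025, 1.5, 1.005, 1.02

pi = mu / a                                   # steady-state inflation
yk = (a**eta - beta * (1 - delta)) / (alpha * beta)
r = alpha * yk                                # real rental rate
q = pi * (1 - delta + r)                      # Fisher equation
# CIA binds iff q > 1, which is equivalent to mu > beta * a**(1 - eta)
assert (q > 1) == (mu > beta * a ** (1 - eta))
```

The equivalence holds identically: substituting 1 - δ + r = a^η/β and π = μ/a into the Fisher equation gives q = μa^{η-1}/β, so q > 1 reduces to μ > βa^{1-η}.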

The PEA Solution.

• If we knew the conditional expectations, λ_t would appear in three equations of the equilibrium system, so one of them looks redundant.

• The following parameterization is the one that really works:

– Parameterize the right-hand side of the Euler equation for capital:

λ_t = βa^{-η}ψ_1(γ_1, k_t, m_t, x_t, Z_t, μ_t).

– Since m_{t+1} > 0, we can rewrite the second Euler equation as

m_{t+1}λ_t = βa^{-η}ψ_2(γ_2, k_t, m_t, x_t, Z_t, μ_t).

– Analogously, we can rewrite the third Euler equation as

x_{t+1}λ_t = βa^{-η}ψ_3(γ_3, k_t, m_t, x_t, Z_t, μ_t).

• Therefore, given the ψ's, we obtain λ_t, m_{t+1}, and x_{t+1}. From w_tN_t = (μ_tm_t - x_t)/(aπ_t) and N_t = (w_t/(θν))^{1/(ν-1)}, we obtain N_t and w_t.


• From q_tw_t = (1 - α)Z_tN_t^{-α}k_t^α, we obtain q_t.

• Finally, we check the Kuhn-Tucker conditions: assume ξ_t = 0. This implies c_t = λ_t^{-1/η} + θN_t^ν.

– If c_t < m_{t+1}, we accept this solution.
– Otherwise, ˜c_t = m_{t+1} and ξ_t = (˜c_t - θN_t^ν)^{-η} - λ_t.

• k_{t+1} is computed from

k_{t+1} = (Z_tN_t^{1-α}k_t^α + (1 - δ)k_t - c_t)/a.

Implementation.

• The Fortran program LP.for obtains starting values from the solution of the log-linearized model. Since the model has five state variables and three conditional expectations, the potential number of coefficients becomes large, and monomials would suffer from multicollinearity. (A complete second-degree polynomial in five variables has 21 coefficients.)

• Given the log-linear solution, we compute time paths for the relevant variables.

1. We look at the correlation matrix of the potential regressors and exclude regressors that are highly correlated with others.

2. We regress the error terms from the log-linear solution on the remaining regressors to obtain initial values. Given these initial values, we compute the parameters using either a non-linear solver or iterations.

3. We reduce the set of regressors further: we exclude regressors whose t-ratios are smaller than one in absolute value.

Results.

