Homework 2
Masaru Inaba
July 2, 2010
Submit this assignment at the lecture on July 15. Write your answers as carefully as possible and show the intermediate steps of your calculations; answers that report only final results will not be graded. Use A4 paper and staple the pages together. If anything about the problems is unclear, do not hesitate to ask by e-mail or other means.
e-mail: big.rice.plant.leaf@gmail.com
1 Applications
1.1 The Japanese Saving Rate
Read “The Japanese Saving Rate” (AER, December 2006), downloadable from Selo Imrohoroglu’s homepage or our lecture page:
http://www-rcf.usc.edu/~simrohor/research/published/aer.96.5.pdf
Summarize what the authors do with their model and what their results are, in at most 3 pages of A4 paper. (20 points)
1.2 The 1990s in Japan: A Lost Decade
Read “The 1990s in Japan: A Lost Decade” (August 2003 version), downloadable from Fumio Hayashi’s homepage:
http://fhayashi.fc2web.com/Prescott1/Postscript_2003/hayashi-prescott.pdf
Summarize what the authors do with their model and what their results are, in at most 3 pages of A4 paper. (20 points)
2 Dynamic Optimization
2.1 Direct Method
Consider the Brock-Mirman problem of maximizing

\[
\max_{\{c_t,\, k_{t+1}\}} \; \sum_{t=0}^{\infty} \beta^{t} \ln c_t,
\qquad \text{s.t.} \quad c_t + k_{t+1} \leq A k_t^{\alpha}, \qquad k_0 \text{ given},
\]

where 0 < β < 1, 0 < α < 1, and A > 0 are constants. We consider the social planner’s problem.
(a) Derive the equilibrium conditions (the first-order conditions and the resource constraint) by the Lagrange-multiplier method.
(b) Define the decentralized-economy problem as we learned in the lecture. A decentralized economy is one in which consumers and firms participate in markets and optimize their utility and profits, respectively. (2 points)
(c) Define the competitive equilibrium. (2 points)
(d) Derive the equilibrium conditions and show that they achieve the same allocations as those of the social planner’s problem. (2 points)
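For part (a), a common way to organize the derivation is to attach a multiplier to each period’s resource constraint. The sketch below uses λ_t for the period-t multiplier; this notation is an assumption of the sketch, not prescribed by the problem:

```latex
\mathcal{L} \;=\; \sum_{t=0}^{\infty} \beta^{t} \ln c_t
  \;+\; \sum_{t=0}^{\infty} \lambda_t \bigl( A k_t^{\alpha} - c_t - k_{t+1} \bigr)
```

Differentiating with respect to c_t and k_{t+1} and combining the results with the resource constraint then yields the equilibrium conditions requested in part (a).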
2.2 Dynamic Programming
Consider the Brock-Mirman problem of maximizing

\[
\max_{\{c_t,\, k_{t+1}\}} \; \sum_{t=0}^{\infty} \beta^{t} \ln c_t,
\qquad \text{s.t.} \quad c_t + k_{t+1} \leq A k_t^{\alpha}, \qquad k_0 \text{ given},
\]

where 0 < β < 1, 0 < α < 1, and A > 0 are constants. We consider the social planner’s problem.
(a) Derive Bellman’s equation. (2 points)
(b) (I did not explain the following material in the lecture, but please try to solve it by consulting the lecture slides or Sargent’s textbook.) Guess and Verify: Guess that the value function takes the form V(k) = constant + v1 ln(k). Derive the value function by the method of guess and verify. For now, derive only v1; you do not have to derive the constant. (3 points)
(c) Value Function Iteration: Use recursions on the Bellman equation above, starting from V0(k) ≡ 0, to obtain the same value function as derived by guess and verify, i.e. a value function with the same coefficient v1. (Hint: Compute V1(k) analytically using V0(k) ≡ 0; then compute V2(k) analytically using V1(k), V3(k) using V2(k), and V4(k) using V3(k). From the sequence of functions {Vj(·)}, j = 1, 2, 3, 4, …, conjecture the general form of Vj(·). Letting j → ∞ then gives V = lim_{j→∞} Vj. For now, all we need to consider is the coefficient of ln(k).) (3 points)
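As a numerical sanity check of the hint (not a substitute for the analytic iteration), one can iterate the recursion that the Bellman updates imply for the coefficient of ln(k). Writing Vj(k) = e_j + f_j ln(k), maximizing ln(A k^α − k′) + β Vj(k′) gives f_{j+1} = α(1 + β f_j); the parameter values below are illustrative assumptions.

```python
# Sketch: iterate the coefficient of ln(k) implied by value function iteration,
# assuming V_j(k) = e_j + f_j * ln(k) and V_0(k) = 0 (so f_0 = 0).
# Parameter values are illustrative assumptions, not given in the problem.
alpha, beta = 0.36, 0.95

f = 0.0
for _ in range(200):
    # Maximizing ln(A k^alpha - k') + beta * (e_j + f_j * ln k') gives
    # k' = beta*f_j / (1 + beta*f_j) * A k^alpha, hence f_{j+1} = alpha * (1 + beta*f_j).
    f = alpha * (1.0 + beta * f)

v1 = alpha / (1.0 - alpha * beta)  # fixed point of the recursion
print(abs(f - v1) < 1e-10)  # prints True
```

The recursion is a contraction with factor αβ < 1, so the iterates converge to the fixed point, which is the coefficient v1 obtained by guess and verify.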
(d) Optimal Policy Functions: Derive the optimal policy functions for {ct} and {kt+1}. (2 points)
(e) Euler Equation: Derive the Euler equation. (2 points)
2.3 Habit Persistence (similar to Exercise 1.4 of Sargent (1987), Dynamic Macroeconomic Theory)
Consider the problem of choosing a time path of consumption {ct} to maximize
\[
\max_{\{c_t,\, k_{t+1}\}} \; \sum_{t=0}^{\infty} \beta^{t} \bigl( \ln(c_t) + \gamma \ln(c_{t-1}) \bigr),
\qquad \text{s.t.} \quad c_t + k_{t+1} \leq A k_t^{\alpha},
\quad c_t \geq 0, \; k_{t+1} \geq 0,
\]
\[
k_0,\, c_{-1} \text{ given positive constants},
\]

where A > 0, 0 < β < 1, γ > 0, and 0 < α < 1. Here c_t is consumption at t, and k_t is the capital stock at the beginning of period t. The current utility ln(c_t) + γ ln(c_{t−1}) represents habit persistence in consumption. This optimization problem can be reformulated to fit the standard dynamic-programming format. Introduce a new state variable x_t and rewrite the problem as maximizing
\[
\max_{\{c_t,\, k_{t+1}\}} \; \sum_{t=0}^{\infty} \beta^{t} \bigl( \ln(c_t) + \gamma \ln(x_t) \bigr),
\qquad \text{s.t.} \quad c_t + k_{t+1} \leq A k_t^{\alpha},
\quad x_{t+1} = c_t,
\quad c_t \geq 0, \; k_{t+1} \geq 0,
\]
\[
k_0,\, x_0 \text{ given positive constants}.
\]

Now there are two state variables, k_t and x_t, and two transition equations, k_{t+1} = A k_t^α − c_t and x_{t+1} = c_t. The return function (or payoff function) depends on a state variable, x_t, as well as on the control c_t.
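To see the two transition equations at work, the short simulation below feeds an arbitrary consumption rule through them. The constant saving rate s and all parameter values are hypothetical choices made only to illustrate the state dynamics; they are not part of the problem and the rule is not the optimal policy.

```python
# Sketch: simulate the transitions k_{t+1} = A k_t^alpha - c_t and x_{t+1} = c_t
# under a hypothetical constant-saving rule c_t = (1 - s) * A * k_t^alpha.
# All parameters and initial conditions are illustrative assumptions.
A, alpha, s = 1.0, 0.36, 0.3
k, x = 0.5, 0.5  # k_0, x_0: given positive constants

for t in range(5):
    y = A * k ** alpha   # output this period
    c = (1.0 - s) * y    # hypothetical consumption rule
    k, x = y - c, c      # the two transition equations
    print(t, round(k, 4), round(x, 4))
```

Note that x simply carries last period's consumption forward, which is exactly what makes the habit-persistence problem recursive in the pair (k, x).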
(a) (2 points) Let v(k0, x0) be the maximized value of \(\sum_{t=0}^{\infty} \beta^{t} (\ln(c_t) + \gamma \ln(x_t))\). Formulate the Bellman functional equation for the value function v(k, x).
(b) (3 points) Prove that the solution of the Bellman functional equation is of the form v(k, x) = E + F ln k + G ln x. Give explicit formulas for the constants E, F, and G.
(c) (3 points) Prove that the optimal policy is of the form ln k_{t+1} = I + H ln k_t, where H and I are constants. Give explicit formulas for the constants H and I.