EC004 OutputDynamics - Microfoundation 2022 Lecture5
Mausumi Das
24 May, 2022
Cake Eating Problem: Infinite Horizon
If we extend the time horizon of the cake eating problem to infinity, the corresponding dynamic optimization problem will be specified as:

$$\max_{\{c_t\}_{t=0}^{\infty},\,\{W_{t+1}\}_{t=0}^{\infty}} \sum_{t=0}^{\infty} \beta^t u(c_t); \quad u' > 0,\; u'' < 0,\; 0 < \beta < 1$$

subject to
(i) $c_t \leq W_t$ for all $t \geq 0$;
(ii) $W_{t+1} = W_t - c_t$; $c_t \geq 0$, $W_{t+1} \geq 0$ for all $t \geq 0$; $W_0$ given.
We will use the dynamic programming technique to solve this in…nite
horizon problem.
We have already discussed the concepts of value function and the
Bellman equation that gives the relationship between the value
functions at two consecutive time periods.
Let’s now apply these concepts to find the solution paths of the above problem.
Solving the Cake Eating problem through Dynamic
Programming:
Recall that the Bellman equation for the Cake-eating problem at time 0 is given by:

$$V(W_0) \equiv \max_{\{W_1\}} \; u(W_0 - W_1) + \beta V(W_1).$$

The corresponding first order condition (FOC) with respect to $W_1$ is:

$$u'(W_0 - W_1) = \beta V'(W_1).$$
Cake Eating: Dynamic Programming (Contd.)
It seems that from the above FOC we would be able to easily find the optimal $c_0$ once we know the exact specification of $u(c)$ and the given value of $W_0$.
The matter is not that simple though. There is a catch!
Recall that, unlike the direct approach, where we had first solved for the optimal consumption stream and then calculated the value function by plugging these optimal values into the objective function, here we have just used the knowledge that such a value function exists without explicitly deriving it.
Hence we do not know the exact form of $V(W_0)$ or $V(W_1)$ or, for that matter, $V'(W_1)$!
Is there a way to derive the value of $V'(W_1)$ without actually solving the entire problem?
(If not, then we are back to the direct approach; the dynamic programming approach will then have no special appeal!)
Cake Eating: Dynamic Programming (Contd.)
Now consider the Bellman equation relevant for the next period (i.e., at time period 1, not at t = 0):

$$V(W_1) \equiv \max_{\{W_2\}} \; u(W_1 - W_2) + \beta V(W_2).$$
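This is exactly what resolves the catch above. Differentiating the time-1 Bellman equation with respect to $W_1$ and using the time-1 FOC, the indirect effect through the optimal choice of $W_2$ drops out, leaving only the direct effect. A minimal sketch of this standard envelope step (the generic derivation follows in the digression below):

$$V'(W_1) = u'(W_1 - W_2),$$

so that the time-0 FOC becomes

$$u'(W_0 - W_1) = \beta\, u'(W_1 - W_2),$$

an equation that involves only the known function $u$ and no longer the unknown value function $V$.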
A Digression: Dynamic Programming - Generic Case
Consider the following canonical discrete-time, stationary, dynamic
optimization problem:
$$\max_{\{x_{t+1}\}_{t=0}^{\infty},\,\{y_t\}_{t=0}^{\infty}} \sum_{t=0}^{\infty} \beta^t \tilde{U}(x_t, y_t)$$

subject to
(i) $y_t \in \tilde{G}(x_t)$ for all $t \geq 0$;
(ii) $x_{t+1} = \tilde{f}(x_t, y_t)$; $x_t \in X$ for all $t \geq 0$; $x_0$ given.
Here $y_t$ is the control variable (main choice variable); $x_t$ is the state variable (the auxiliary variable which co-moves with the main choice variable); $\tilde{U}$ represents the instantaneous payoff function.
(i) specifies what values the control variable $y_t$ is allowed to take (the feasible set), given the value of $x_t$ at time $t$;
(ii) specifies the evolution of the state variable as a function of the previous period’s state and control variables (the state transition equation).
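For concreteness, the infinite-horizon cake eating problem fits this canonical form under the re-labelling below (nothing new is assumed; this is just the earlier problem in the generic notation):

$$x_t = W_t, \quad y_t = c_t, \quad \tilde{U}(x_t, y_t) = u(c_t), \quad \tilde{G}(x_t) = [0, W_t], \quad \tilde{f}(x_t, y_t) = W_t - c_t.$$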
Dynamic Programming : Generic Case (Contd.)
By using the state transition equation to substitute out the control variable $y_t$, the problem can be written in terms of the state variable alone:

$$\max_{\{x_{t+1}\}_{t=0}^{\infty}} \sum_{t=0}^{\infty} \beta^t U(x_t, x_{t+1})$$

subject to
(i) $x_{t+1} \in G(x_t)$ for all $t \geq 0$; $x_0$ given.

Let $\{x_{t+1}^*\}_{t=0}^{\infty}$ denote the corresponding solution.
Then we write the value function of the above problem at time 0 as a function of $x_0$:

$$V(x_0) \equiv \max_{\{x_{t+1}\}_{t=0}^{\infty}} \sum_{t=0}^{\infty} \beta^t U(x_t, x_{t+1}); \quad x_{t+1} \in G(x_t) \text{ for all } t \geq 0;\; x_0 \text{ given}.$$
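In the cake eating example this reduced form is obtained by substituting $c_t = W_t - W_{t+1}$, which gives

$$U(x_t, x_{t+1}) = u(W_t - W_{t+1}), \qquad G(x_t) = [0, W_t],$$

so next period's cake size $W_{t+1}$ becomes the only choice variable.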
Dynamic Programming : Generic Case (Contd.)
Using the same logic as before, we can write the Bellman equation of the above problem at time 0 in the following way:

$$V(x_0) = \max_{x_1 \in G(x_0)} \; \left[ U(x_0, x_1) + \beta V(x_1) \right].$$
Dynamic Programming : Generic Case (Contd.)
Likewise, we can write down the Bellman equation for any two consecutive time periods t and t + 1 as:

$$V(x_t) = \max_{x_{t+1} \in G(x_t)} \; \left[ U(x_t, x_{t+1}) + \beta V(x_{t+1}) \right].$$

Or equivalently:

$$V(x) = \max_{\tilde{x} \in G(x)} \; \left[ U(x, \tilde{x}) + \beta V(\tilde{x}) \right],$$

where $x$ and $\tilde{x}$ denote the values of the state variable in any two consecutive time periods.
The maximizer of the right hand side of the equation above is called a policy function:

$$\tilde{x} = \pi(x),$$

which solves the RHS of the Bellman equation above.
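Although $V$ is typically not known in closed form, the Bellman equation suggests a numerical way to approximate both $V$ and the policy function $\pi$: guess a value function, apply the maximization on the right hand side repeatedly, and read off the maximizer. Below is a minimal sketch in Python for the cake eating special case; the concrete choices $u(c) = \sqrt{c}$, $\beta = 0.95$ and the grid are illustrative assumptions, not part of the lecture.

import numpy as np

# Value function iteration (VFI) for the cake eating problem on a grid.
# Illustrative assumptions (not from the lecture): u(c) = sqrt(c), beta = 0.95,
# and cake sizes W discretised on a uniform grid over [0, 1].
beta = 0.95
grid = np.linspace(0.0, 1.0, 401)        # candidate values of W (and of W')
V = np.zeros(len(grid))                   # initial guess for the value function
policy = np.zeros(len(grid), dtype=int)   # index of the chosen W' for each W

for _ in range(2000):
    V_new = np.empty_like(V)
    for i, W in enumerate(grid):
        c = W - grid                      # consumption implied by each choice of W'
        # infeasible choices (c < 0) are assigned a value of minus infinity
        values = np.where(c >= 0, np.sqrt(np.maximum(c, 0.0)), -np.inf) + beta * V
        policy[i] = int(np.argmax(values))            # maximiser of the RHS
        V_new[i] = values[policy[i]]
    if np.max(np.abs(V_new - V)) < 1e-8:              # stop once V has converged
        break
    V = V_new

# For u(c) = sqrt(c) the exact policy is W' = beta**2 * W, so at W = 1 the
# computed choice of W' should be close to 0.9025.
print(grid[policy[-1]])

The maximizer found at each grid point is exactly the policy function $\pi$ evaluated on the grid.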
Dynamic Programming : Generic Case (Contd.)
If we knew the exact form of the value function $V(\cdot)$ and were it differentiable, we could have easily found the policy function by solving the following FONC (the Euler Equation):

$$\tilde{x}: \quad \frac{\partial U(x, \tilde{x})}{\partial \tilde{x}} + \beta V'(\tilde{x}) = 0. \tag{4}$$
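For instance, in the cake eating reduced form above, $U(x, \tilde{x}) = u(x - \tilde{x})$, so condition (4) at time 0 is exactly the FOC obtained earlier:

$$-u'(W_0 - W_1) + \beta V'(W_1) = 0.$$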
Dynamic Programming : Generic Case (Contd.)
By the Envelope Theorem we can evaluate $V'(\tilde{x})$ without knowing the exact form of $V$. This gives the Envelope Condition:

$$V'(\tilde{x}) = \frac{\partial U(\tilde{x}, \hat{x})}{\partial \tilde{x}}, \tag{5}$$

where $\hat{x}$ denotes the optimally chosen value of the state variable in the period that follows $\tilde{x}$.
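A minimal sketch of the standard argument behind (5): evaluate the Bellman equation at $\tilde{x}$, substitute the policy function $\hat{x} = \pi(\tilde{x})$, and differentiate with respect to $\tilde{x}$; the indirect effect through $\pi$ vanishes because of the FOC (4):

$$V(\tilde{x}) = U\big(\tilde{x}, \pi(\tilde{x})\big) + \beta V\big(\pi(\tilde{x})\big)$$

$$\Rightarrow\; V'(\tilde{x}) = \frac{\partial U(\tilde{x}, \hat{x})}{\partial \tilde{x}} + \left[\frac{\partial U(\tilde{x}, \hat{x})}{\partial \hat{x}} + \beta V'(\hat{x})\right] \pi'(\tilde{x}) = \frac{\partial U(\tilde{x}, \hat{x})}{\partial \tilde{x}},$$

since the bracketed term is zero by (4).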
Combining the Euler Equation and the Envelope Condition, we get
the following equation:
$$\frac{\partial U(x, \tilde{x})}{\partial \tilde{x}} + \beta \frac{\partial U(\tilde{x}, \hat{x})}{\partial \tilde{x}} = 0.$$
Dynamic Programming : Generic Case (Contd.)
Replacing x, x̃, x̂ by their suitable time subscripts:
$$\frac{\partial U(x_t, x_{t+1})}{\partial x_{t+1}} + \beta \frac{\partial U(x_{t+1}, x_{t+2})}{\partial x_{t+1}} = 0; \quad x_0 \text{ given}. \tag{6}$$
Equation (6) is a difference equation which we should be able to solve to derive the time path of the state variable $x_t$.
Notice that (6) is a difference equation of order 2. To solve this equation, we need two boundary conditions.
One boundary condition is specified by the given initial value $x_0$.
But we need another boundary condition to precisely pin down the solution path.
Typically in a Dynamic Programming problem such a boundary
condition is provided by the following Transversality condition
(TVC):
$$\lim_{t \to \infty} \beta^t\, \frac{\partial U(x_t, x_{t+1})}{\partial x_t}\, x_t = 0. \tag{7}$$
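As a worked illustration (the specific utility function $u(c) = \ln c$ is assumed here purely for tractability; it is not given in the lecture), take the cake eating reduced form $U(x_t, x_{t+1}) = u(W_t - W_{t+1})$. Equation (6) becomes

$$-u'(W_t - W_{t+1}) + \beta\, u'(W_{t+1} - W_{t+2}) = 0 \;\Rightarrow\; \frac{1}{c_t} = \frac{\beta}{c_{t+1}} \;\Rightarrow\; c_{t+1} = \beta c_t.$$

The path $W_{t+1} = \beta W_t$, i.e. $c_t = (1-\beta)\beta^t W_0$, satisfies (6) and the initial condition, and also satisfies the TVC (7), since

$$\beta^t\, u'(c_t)\, W_t = \beta^t \frac{W_t}{c_t} = \frac{\beta^t}{1-\beta} \to 0,$$

so it is the optimal solution path. (The same steps with $u(c) = \sqrt{c}$ give $W_{t+1} = \beta^2 W_t$, which is what the numerical sketch above recovers.)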
Transversality Condition and its Interpretation:
The term $\beta^t\, \partial U(x_t, x_{t+1})/\partial x_t$ measures the discounted marginal value of an extra unit of the state variable at time $t$. The TVC (7) therefore requires that the discounted value of the stock of the state variable carried forward goes to zero as $t \to \infty$: it cannot be optimal to hold on forever to a stock that still has positive present value (in the cake eating problem, to leave part of the cake uneaten in present-value terms).
Back to Household’s Choice Problem: Infinite Horizon
Let us now go back to the optimization problem of household $h$ under infinite horizon, given by:

$$\max_{\{c_t^h\}_{t=0}^{\infty},\,\{a_{t+1}^h\}_{t=0}^{\infty}} \sum_{t=0}^{\infty} \beta^t u(c_t^h); \quad u' > 0,\; u'' < 0$$

subject to
(i) $c_t^h \leq w_t + r_t a_t^h + (1-\delta)a_t^h$ for all $t \geq 0$;
(ii) $a_{t+1}^h = w_t + (1 + r_t - \delta)a_t^h - c_t^h$; $a_t^h \geq 0$ for all $t \geq 0$; $a_0^h$ given.
Notice that this is not a "stationary" dynamic programming problem because the wage rate ($w_t$) and the rental rate ($r_t$) keep changing over time, which means the nature of the state transition equation will keep changing over time.
You could still use the dynamic programming technique developed earlier (with some caveats). But let’s first solve a stationary counterpart of the problem by assuming that $w_t$ and $r_t$ remain unchanged at some given values: $w$ and $r$.
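Under this stationarity assumption the generic results above apply directly. As a sketch (substituting the budget constraint to obtain the reduced form, a step not spelled out on the slide), write

$$U(a_t^h, a_{t+1}^h) = u\big(w + (1 + r - \delta)a_t^h - a_{t+1}^h\big),$$

so that equation (6) delivers the household's Euler equation:

$$u'(c_t^h) = \beta (1 + r - \delta)\, u'(c_{t+1}^h).$$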
Back to Household’s Choice Problem: Infinite Horizon, No Borrowing

$$\max_{\{c_t^h\}_{t=0}^{\infty},\,\{a_{t+1}^h\}_{t=0}^{\infty}} \sum_{t=0}^{\infty} \beta^t u(c_t^h); \quad u' > 0,\; u'' < 0$$
subject to
(i) $c_t^h \leq w + r a_t^h + (1-\delta)a_t^h$ for all $t \geq 0$;
(ii) $a_{t+1}^h = w + (1 + r - \delta)a_t^h - c_t^h$; $a_t^h \geq 0$ for all $t \geq 0$; $a_0^h$ given.