Chapter 6: Dynamic Optimization (Math Econ, 3rd Year)
Optimization is the unifying paradigm in almost all economic analysis. An optimization problem has an objective function, constraints and choice variables. There are many different types of optimization problems, and how you solve them depends on the branch of economics in which you find yourself. For example, before turning to a two-period consumption model, consider the simple consumer's optimization problem
$\max U(y_1, y_2)$ subject to $p^{T} y \le x$,
where $y$ is the vector of choice variables, $p$ is the price vector and $x$ is the consumer's exogenously determined income.
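As a side illustration of this static benchmark, the consumer problem above can be solved with a computer algebra system. The sketch below assumes a log utility $U(y_1,y_2)=\ln y_1+\ln y_2$, prices $p=(1,2)$ and income $x=12$; these specific functional forms and numbers are illustrative assumptions, not taken from the chapter.

```python
import sympy as sp

# Static consumer problem: max U(y1, y2) subject to p1*y1 + p2*y2 = x.
# Utility, prices and income below are illustrative assumptions.
y1, y2, lam = sp.symbols('y1 y2 lam', positive=True)
p1, p2, x = 1, 2, 12                       # assumed prices and income
U = sp.log(y1) + sp.log(y2)                # assumed log utility
L = U + lam * (x - p1*y1 - p2*y2)          # Lagrangian of the problem

foc = [sp.diff(L, v) for v in (y1, y2, lam)]   # first-order conditions
print(sp.solve(foc, [y1, y2, lam], dict=True)) # [{y1: 6, y2: 3, lam: 1/6}]
```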
What happens if the consumer lives for two periods but has to survive off the income endowment provided at the beginning of the first period? In real-world practice, an action taken in one period has consequences in other periods and links them together. For example, all else equal, the more that is consumed in the first period, the less the budget constraint permits in the second period. Most households living on a monthly salary or wage face exactly the same problem. What this type of framework requires, therefore, is that we find an optimal time path for each variable; finding this optimal path is dynamic optimization.
The dynamic optimization problem poses the question of the optimal magnitude of a choice variable in each period of the planning period (discrete time) or at each point of time in a given time interval, say $0$ to $T$ (continuous time). The time horizon may even extend to infinity ($T \to \infty$). The solution of a dynamic optimization problem therefore takes the form of an optimal time path for every choice variable, detailing the best value of the variable today, tomorrow and so forth until the end of the planning period.
The technique for solving continuous-time dynamic optimization problems is called optimal control theory, which allows consideration of the optimal time path of a set of variables rather than just the identification of a stationary equilibrium.
As in static optimization, in order to solve the optimization problem we first need to find the first-order conditions.
6.1.1. How to derive the first-order conditions (FOCs):

Step 1. Form the Hamiltonian
$H = f(x, y, t) + \lambda_t\, g(x, y, t)$,
where $f$ is the integrand of the objective functional and $g$ is the right-hand side of the equation of motion $\dot{x} = g(x, y, t)$.

Step 2. Take the derivative of the Hamiltonian with respect to the control variable and set it equal to zero:
$\dfrac{\partial H}{\partial y} = \dfrac{\partial f}{\partial y} + \lambda\dfrac{\partial g}{\partial y} = 0$

Step 3. Take the derivative of the Hamiltonian with respect to the state variable and set it equal to the negative of the derivative of the costate variable (the Lagrange multiplier) with respect to time:
$\dfrac{\partial H}{\partial x} = \dfrac{\partial f}{\partial x} + \lambda\dfrac{\partial g}{\partial x} = -\dfrac{d\lambda_t}{dt}$

Step 4. Impose the transversality (boundary) condition appropriate to the problem: for a fixed endpoint, $x(T) = x_T$; for a free endpoint, $\lambda(T) = 0$; and for an infinite horizon, $\lim_{t\to\infty} H_t = 0$ (the precise infinite-horizon condition depends on the form of $f$).
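The four steps can also be generated mechanically with a computer algebra system. The sketch below (SymPy) keeps $f$ and $g$ as unspecified symbolic functions, forms the Hamiltonian of Step 1 and prints the conditions of Steps 2 and 3 together with the equation of motion; it is only a generic illustration of the recipe above.

```python
import sympy as sp

# Generic sketch: form H = f + lam*g and print the first-order conditions.
t = sp.symbols('t')
x, y, lam = sp.Function('x')(t), sp.Function('y')(t), sp.Function('lam')(t)
f = sp.Function('f')(x, y, t)        # integrand of the objective functional
g = sp.Function('g')(x, y, t)        # right-hand side of the equation of motion

H = f + lam * g                      # Step 1: the Hamiltonian

sp.pprint(sp.Eq(sp.diff(H, y), 0))                 # Step 2: dH/dy = 0
sp.pprint(sp.Eq(sp.diff(lam, t), -sp.diff(H, x)))  # Step 3: dlam/dt = -dH/dx
sp.pprint(sp.Eq(sp.diff(x, t), sp.diff(H, lam)))   # equation of motion: dx/dt = dH/dlam = g
```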
The first-order conditions with more than one state and/or control variable: the same conditions apply variable by variable. Each control variable $y_i$ has its own maximum condition $\partial H/\partial y_i = 0$, and each state variable $x_j$ has its own costate equation $\dot{\lambda}_j = -\partial H/\partial x_j$ together with its own equation of motion.
Example 1. Maximize $\displaystyle\int_0^T -(1+y_t^2)^{1/2}\,dt$, subject to $\dot{x}_t = y_t$, $x(0) = A$, $x(T)$ free.

Step 1. The Hamiltonian: $H = -(1+y_t^2)^{1/2} + \lambda_t y_t$.

Step 2. Maximum condition:
$\dfrac{\partial H}{\partial y_t} = -\tfrac{1}{2}(1+y_t^2)^{-1/2}\,2y_t + \lambda_t = 0$ ……………(II)

Step 3. For the state variable $x_t$:
$\dfrac{\partial H}{\partial x_t} = 0 = -\dot{\lambda}_t$, which means $\lambda_t$ is constant ……………(III)

Step 4. Transversality condition (free endpoint):
$\lambda_T = 0$ ……………(IV)

Solving:
1. From (III), $\lambda_t$ is constant.
2. From (IV), since $\lambda_t$ is constant and $\lambda_T = 0$, we have $\lambda_t = 0$ for all $t$.
3. To find $y_t$, substitute $\lambda_t = 0$ into (II): $-y_t(1+y_t^2)^{-1/2} = 0$, so $y_t^* = 0$.
4. From the equation of motion, $\dot{x}_t = y_t = 0$, so $x_t^* = A$ (a constant) for all $t \in [0, T]$.

Thus the optimal control path is $y_t^* = 0$ and the optimal state path is $x_t^* = A$.
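A two-line symbolic check of Example 1 (a sketch, using the Hamiltonian just derived): with $\lambda = 0$ the maximum condition has the single root $y = 0$, and $H$ contains no $x$, so $\dot{\lambda} = -\partial H/\partial x = 0$.

```python
import sympy as sp

# Example 1 check: H = -(1 + y^2)^(1/2) + lam*y.
y, lam, x = sp.symbols('y lam x', real=True)
H = -sp.sqrt(1 + y**2) + lam * y

print(sp.solve(sp.Eq(sp.diff(H, y).subs(lam, 0), 0), y))  # [0]  ->  y* = 0
print(sp.diff(H, x))                                      # 0   ->  lam is constant
```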
Example 2. Maximize $\displaystyle\int_0^1 \ln(4 y_t x_t)\,dt$, subject to $\dot{x}_t = 4x_t(1-y_t)$, $x(0) = 1$, $x(1) = e^2$.

The Hamiltonian:
$H = \ln(4 y_t x_t) + \lambda_t\, 4x_t(1-y_t)$

Maximum conditions:
1. $\dfrac{\partial H}{\partial y_t} = \dfrac{1}{y_t} - 4\lambda_t x_t = 0$
2. $\dot{\lambda}_t = -\dfrac{\partial H}{\partial x_t} = -\left[\dfrac{1}{x_t} + 4\lambda_t(1-y_t)\right]$
3. $\dot{x}_t = \dfrac{\partial H}{\partial \lambda_t} = 4x_t(1-y_t)$
4. Boundary conditions: $x(0) = 1$, $x(1) = e^2$.

Simplifying the 1st equation:
5. $y_t = \dfrac{1}{4\lambda_t x_t}$

Substituting (5) into (2): $\dot{\lambda}_t = -\dfrac{1}{x_t} - 4\lambda_t\left(1 - \dfrac{1}{4\lambda_t x_t}\right)$, so
6. $\dot{\lambda}_t = -4\lambda_t$

Substituting (5) into (3): $\dot{x}_t = 4x_t\left(1 - \dfrac{1}{4\lambda_t x_t}\right)$, so
7. $\dot{x}_t = 4x_t - \dfrac{1}{\lambda_t}$

8. Solving (6): $\lambda_t = \lambda_0 e^{-4t}$

Substituting (8) into (7) gives the linear first-order differential equation $\dot{x}_t - 4x_t = -\dfrac{1}{\lambda_0}e^{4t}$. Multiply through by the integrating factor $e^{-4t}$:
$e^{-4t}\dot{x}_t - 4e^{-4t}x_t = -\dfrac{1}{\lambda_0}$
$\dfrac{d}{dt}\left(e^{-4t}x_t\right) = -\dfrac{1}{\lambda_0}$
Integrate: $e^{-4t}x_t = A - \dfrac{t}{\lambda_0}$

9. $x_t = Ae^{4t} - \dfrac{t}{\lambda_0}e^{4t}$

At $t = 0$, $x_0 = 1$: $1 = Ae^{0} - 0$, so $A = 1$.
At $t = 1$, $x_1 = e^2$: $e^2 = e^4 - \dfrac{1}{\lambda_0}e^4$, so $\dfrac{1}{\lambda_0} = 1 - e^{-2} \approx 0.865$ and $\lambda_0 \approx 1.156$.

Therefore:
$\lambda_t^* = 1.156\,e^{-4t}$, the costate variable
$x_t^* = e^{4t} - 0.865\,t\,e^{4t}$, the state variable

Substituting into (5):
$y_t^* = \dfrac{1}{4\lambda_t x_t} = \dfrac{1}{4(1.156)e^{-4t}\,e^{4t}(1-0.865t)} = \dfrac{1}{4.624 - 4t}$, the control variable.
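As a numerical sanity check (a sketch using the rounded coefficients 1.156 and 0.865 derived above), the paths can be verified against the boundary conditions and the equation of motion:

```python
import numpy as np

# Spot-check of Example 2's solution (1.156 and 0.865 are the rounded values above).
lam = lambda t: 1.156 * np.exp(-4 * t)                       # costate path
x   = lambda t: np.exp(4 * t) - 0.865 * t * np.exp(4 * t)    # state path
y   = lambda t: 1.0 / (4.624 - 4 * t)                        # control path

print(x(0.0), x(1.0), np.exp(2.0))    # x(0) = 1 and x(1) is close to e^2

t = np.linspace(0.0, 1.0, 11)
h = 1e-6
x_dot = (x(t + h) - x(t - h)) / (2 * h)                # numerical derivative of x
print(np.max(np.abs(x_dot - 4 * x(t) * (1 - y(t)))))   # tiny residual: x' = 4x(1 - y)
```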
Optimal control theory, as we mentioned above, is the technique for solving continuous-time dynamic optimization problems, and it allows consideration of the optimal time path of a set of variables rather than just the identification of a stationary equilibrium.
The aim here is to find the optimal time path for a control variable, which is selected in each period. The state variable, whose value depends on the control variable, is always in motion: its equation of motion (transition equation) is set equal to the derivative of the Hamiltonian with respect to the Lagrange multiplier, i.e. to the constraint. Hence the goal of optimal control theory is to choose a path of values over time for the control (choice) variable that optimizes the functional subject to the constraint set on the state variable.
$\max_{y} V = \displaystyle\int_0^T f(x_t, y_t, t)\,dt$
subject to $\dot{x}_t = g(x_t, y_t, t)$
$x(0) = x_0;\quad x(T) = x_T$

where $V$ is the value of the functional to be optimized; $y_t$ is the control variable, which is selected or controlled so as to optimize $V$; $x_t$ is the state variable, which changes over time according to the differential equation set equal to $\dot{x}$ in the constraint and whose value is indirectly determined by the control variable through that constraint; and $t$ is time.
6.1.2. The Hamiltonian and the necessary conditions for maximization in optimal control theory

The Hamiltonian is $H = f(x_t, y_t, t) + \lambda_t\, g(x_t, y_t, t)$, and the necessary conditions for maximization are:
(i) $\dfrac{\partial H}{\partial y} = 0$
(ii) $\dot{\lambda} = -\dfrac{\partial H}{\partial x}$
(iii) $\dot{x} = \dfrac{\partial H}{\partial \lambda} = g(x_t, y_t, t)$
(iv) $x(0) = x_0,\; x(T) = x_T$

Here, the first three conditions constitute the maximum principle (the Pontryagin maximum principle) and the fourth is called the boundary condition. The two equations of motion in the 2nd and 3rd conditions form the Hamiltonian system. For minimization, the objective functional can simply be multiplied by $-1$, as in concave programming.
Example 2. Maximize $\displaystyle\int_0^1 (5x + 3y - 2y^2)\,dt$, subject to $\dot{x} = 6y$, $x(0) = 7$, $x(1) = 70$.

a) The Hamiltonian is $H = 5x + 3y - 2y^2 + \lambda(6y)$.

b) The maximum principle conditions are:
i) $\dfrac{\partial H}{\partial y} = 3 - 4y + 6\lambda = 0$
   $y = 0.75 + 1.5\lambda$ ……………(I)
ii) $\dot{\lambda} = -\dfrac{\partial H}{\partial x} = -5$ ……………(II)
iii) $\dot{x} = \dfrac{\partial H}{\partial \lambda} = 6y$ ……………(III)

c) Integrating (II): $\lambda = k_1 - 5t$. Substituting into (I):
$y = 0.75 + 1.5(k_1 - 5t) = 0.75 + 1.5k_1 - 7.5t$ ……………(IV)
Substituting (IV) into (III) and integrating: $\dot{x} = 4.5 + 9k_1 - 45t$, so
$x = 4.5t - 22.5t^2 + 9k_1 t + k_2$ ……………(V)

d) Applying the boundary conditions $x(0) = 7$ and $x(1) = 70$:
$x(0) = k_2 = 7$, so $k_2 = 7$
$x(1) = 4.5 - 22.5 + 9k_1 + 7 = 70$, so $k_1 = 9$

Substituting $k_1 = 9$ and $k_2 = 7$ into (IV) and (V):
$\lambda^* = 9 - 5t$, $y^* = 14.25 - 7.5t$ and $x^* = 85.5t - 22.5t^2 + 7$
e) Sufficiency check. The objective functional is $f = 5x + 3y - 2y^2$, with discriminant
$|D| = \begin{vmatrix} f_{xx} & f_{xy}\\ f_{yx} & f_{yy}\end{vmatrix} = \begin{vmatrix} 0 & 0\\ 0 & -4\end{vmatrix}$, $|D_1| = 0$, $|D_2| = 0$

$|D|$ fails the strict negative-definiteness criterion but proves to be negative semidefinite, with $|D_1| \le 0$ and $|D_2| \ge 0$. However, for the semidefinite test we must also test the variables in reverse order:
$|D| = \begin{vmatrix} f_{yy} & f_{yx}\\ f_{xy} & f_{xx}\end{vmatrix} = \begin{vmatrix} -4 & 0\\ 0 & 0\end{vmatrix}$, $|D_1| = -4$, $|D_2| = 0$

With both discriminant tests indicating negative semidefiniteness, the objective functional $f$ is jointly concave in $x$ and $y$. Since the constraint is linear, it is also jointly concave and does not need testing. We conclude that the functional is indeed maximized.
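The same example can be re-derived symbolically; the sketch below follows conditions (I)–(III) and the boundary conditions exactly as above:

```python
import sympy as sp

# Re-derive the example: lam' = -5, y = 0.75 + 1.5*lam, x' = 6y,
# with boundary conditions x(0) = 7 and x(1) = 70.
t, k1, k2 = sp.symbols('t k1 k2')
lam = k1 - 5*t                                    # from lam' = -dH/dx = -5
y = sp.Rational(3, 4) + sp.Rational(3, 2) * lam   # from dH/dy = 3 - 4y + 6*lam = 0
x = sp.integrate(6 * y, t) + k2                   # from x' = 6y

consts = sp.solve([sp.Eq(x.subs(t, 0), 7), sp.Eq(x.subs(t, 1), 70)], [k1, k2])
print(consts)                     # {k1: 9, k2: 7}
print(sp.expand(x.subs(consts)))  # -45*t**2/2 + 171*t/2 + 7, i.e. 85.5t - 22.5t^2 + 7
```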
Example 3. Solve $\max_y V = \displaystyle\int_0^3 (4x - 5y^2)\,dt$, subject to $\dot{x} = 8y$, $x(0) = 2$, $x(3) = 117.2$.

a) The Hamiltonian is $H = 4x - 5y^2 + \lambda(8y)$.

b) The maximum principle conditions:
i. $\dfrac{\partial H}{\partial y} = -10y + 8\lambda = 0$, so $y = 0.8\lambda$ ……………(I)
ii. $\dot{\lambda} = -\dfrac{\partial H}{\partial x} = -4$ ……………(II)
iii. $\dot{x} = \dfrac{\partial H}{\partial \lambda} = 8y$ ……………(III)

c) Integrating (II): $\lambda = k_1 - 4t$ ……………(IV)
Substituting (I) into (III): $\dot{x} = 8(0.8\lambda) = 6.4\lambda = 6.4k_1 - 25.6t$
Integrating: $x = 6.4k_1 t - 12.8t^2 + k_2$ ……………(V)

d) Applying the boundary conditions $x(0) = 2$ and $x(3) = 117.2$:
$x(0) = k_2 = 2$
$x(3) = 19.2k_1 - 115.2 + 2 = 117.2$, so $k_1 = 12$

Substituting $k_1 = 12$ and $k_2 = 2$ into (IV) and (V):
$\lambda^* = 12 - 4t$, $x^* = 76.8t - 12.8t^2 + 2$, and $\dot{x}^* = 76.8 - 25.6t$

Substituting $\dot{x}^*$ into the equation of motion in the constraint, we can find the control variable:
$\dot{x} = 8y \;\Rightarrow\; 76.8 - 25.6t = 8y \;\Rightarrow\; y^* = 9.6 - 3.2t$
e) Sufficiency check. The objective functional is $f = 4x - 5y^2$, with discriminant
$|D| = \begin{vmatrix} f_{xx} & f_{xy}\\ f_{yx} & f_{yy}\end{vmatrix} = \begin{vmatrix} 0 & 0\\ 0 & -10\end{vmatrix}$, $|D_1| = 0$, $|D_2| = 0$

$|D|$ fails the strict negative-definiteness criterion but proves to be negative semidefinite, with $|D_1| \le 0$ and $|D_2| \ge 0$. However, for the semidefinite test we must also test the variables in reverse order:
$|D| = \begin{vmatrix} f_{yy} & f_{yx}\\ f_{xy} & f_{xx}\end{vmatrix} = \begin{vmatrix} -10 & 0\\ 0 & 0\end{vmatrix}$, $|D_1| = -10$, $|D_2| = 0$

With both discriminant tests indicating negative semidefiniteness, the objective functional $f$ is jointly concave in $x$ and $y$. Since the constraint is linear, it is also jointly concave and does not need testing. We conclude that the functional is indeed maximized.
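A quick numerical spot-check of the paths just derived:

```python
import numpy as np

# Spot-check of Example 3: x* = 76.8t - 12.8t^2 + 2 and y* = 9.6 - 3.2t.
x = lambda t: 76.8 * t - 12.8 * t**2 + 2
y = lambda t: 9.6 - 3.2 * t

print(x(0.0), x(3.0))                      # 2.0 and 117.2: the boundary conditions hold

t = np.linspace(0.0, 3.0, 7)
x_dot = 76.8 - 25.6 * t                    # derivative of x* taken by hand
print(np.max(np.abs(x_dot - 8 * y(t))))    # 0.0: the equation of motion x' = 8y holds
```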
So far we have seen the necessary conditions for maximization. The sufficiency conditions will be fulfilled if:
i. both the objective functional and the constraint are differentiable and jointly concave in $x$ (the state variable) and $y$ (the control variable); and
ii. $\lambda_t \ge 0$ if the constraint is nonlinear in $x$ or $y$. If the constraint is linear, $\lambda_t$ may assume any sign.
For nonlinear functions, a simple test for joint concavity is the discriminant test, based on the discriminant of the second-order derivatives of the function; linear functions are always both concave and convex.

$|D| = \begin{vmatrix} f_{xx} & f_{xy}\\ f_{yx} & f_{yy}\end{vmatrix}$

If $|D_1| = f_{xx} < 0$ and $|D_2| = |D| > 0$, the function is strictly concave (negative definite), which indicates a global maximum. If $|D_1| = f_{xx} \le 0$ and $|D_2| = |D| \ge 0$, the function is concave (negative semidefinite), which is indicative of a local maximum and is sufficient for a maximum if the test is employed for every possible ordering of the variables with similar results. See point (e) of Examples 2 and 3 above, where the sufficiency conditions are evaluated.
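The discriminant test is easy to automate. The sketch below computes $|D_1|$ and $|D_2|$ for the two objective functionals used in the examples above, in both variable orderings, reproducing the semidefiniteness results stated there:

```python
import sympy as sp

# Discriminant (Hessian) test for joint concavity, applied to the objective
# functionals of the two examples above, in both variable orderings.
x, y = sp.symbols('x y')

def minors(f, order):
    H = sp.hessian(f, order)          # matrix of second-order partial derivatives
    return H[0, 0], H.det()           # |D1| and |D2| = |D|

for f in (5*x + 3*y - 2*y**2, 4*x - 5*y**2):
    print(minors(f, (x, y)), minors(f, (y, x)))   # e.g. (0, 0) and (-4, 0)
```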
An optimal control problem involving continuous time with a finite time horizon and a free endpoint can be expressed as:

$\max_y V = \displaystyle\int_0^T f(x_t, y_t, t)\,dt$, subject to $\dot{x} = g(x_t, y_t, t)$, $x(0) = x_0$, $x(T)$ free.

The necessary conditions for maximization are:
(i) $\dfrac{\partial H}{\partial y} = 0$
(ii) $\dot{\lambda} = -\dfrac{\partial H}{\partial x}$
(iii) $\dot{x} = \dfrac{\partial H}{\partial \lambda}$
(iv) $x(0) = x_0,\;\lambda(T) = 0$

Here, the first three conditions for maximization, comprising the maximum principle, remain the same, but the last or boundary condition changes. This last condition is called the transversality condition for a free endpoint: if the value of $x$ at $T$ is free to vary, the constraint must be nonbinding and the shadow price $\lambda$ evaluated at $T$ must equal zero.
Example 4. Maximize $\displaystyle\int_0^2 (6x - 4y^2)\,dt$, subject to $\dot{x} = 16y$, $x(0) = 8$, $x(2) = 40$.

a) The Hamiltonian is $H = 6x - 4y^2 + \lambda(16y)$.

b) The necessary conditions are:
(i) $\dfrac{\partial H}{\partial y} = -8y + 16\lambda = 0$, so $y = 2\lambda$ ……………(I)
(ii) $\dot{\lambda} = -\dfrac{\partial H}{\partial x} = -6$ ……………(II)
(iii) $\dot{x} = \dfrac{\partial H}{\partial \lambda} = 16y$ ……………(III)

c) Integrating (II): $\lambda = k_1 - 6t$ ……………(IV)
Substituting (I) into (III): $\dot{x} = 16(2\lambda) = 32(k_1 - 6t) = 32k_1 - 192t$
Integrating: $x = 32k_1 t - 96t^2 + k_2$ ……………(V)

d) Applying the boundary conditions $x(0) = 8$ and $x(2) = 40$:
$x(0) = k_2 = 8$
$x(2) = 64k_1 - 384 + 8 = 40$, so $k_1 = 6.5$

Substituting $k_1 = 6.5$ and $k_2 = 8$ into (IV) and (V):
$\lambda^* = 6.5 - 6t$ ……………(VI)
$x^* = 208t - 96t^2 + 8$ ……………(VII)

From (I), $y^* = 2\lambda^* = 13 - 12t$; equivalently, take the derivative of (VII) and substitute it into the constraint $\dot{x} = 16y$: $208 - 192t = 16y$, so $y^* = 13 - 12t$.
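The constants of integration can be confirmed symbolically; the sketch below re-solves Example 4 from conditions (I)–(III) and the boundary conditions, confirming $k_1 = 6.5$ and $k_2 = 8$:

```python
import sympy as sp

# Re-solve Example 4: lam' = -6, y = 2*lam, x' = 16y, with x(0) = 8 and x(2) = 40.
t, k1, k2 = sp.symbols('t k1 k2')
lam = k1 - 6*t                        # from lam' = -dH/dx = -6
y = 2 * lam                           # from dH/dy = -8y + 16*lam = 0
x = sp.integrate(16 * y, t) + k2      # from x' = 16y

consts = sp.solve([sp.Eq(x.subs(t, 0), 8), sp.Eq(x.subs(t, 2), 40)], [k1, k2])
print(consts)                         # {k1: 13/2, k2: 8}
print(sp.expand(x.subs(consts)))      # -96*t**2 + 208*t + 8
print(sp.expand(y.subs(consts)))      # 13 - 12*t
```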
Example 5. Maximize $\displaystyle\int_0^1 (4y - 2x^2 - x - y^2)\,dt$, subject to $\dot{x} = x + y$, $x(0) = 6$, $x(1)$ free.

A) The Hamiltonian is $H = 4y - 2x^2 - x - y^2 + \lambda(x + y)$.

B) The necessary conditions for the maximum principle are:
(i) $\dfrac{\partial H}{\partial y} = 4 - 2y + \lambda = 0$
    $y = 0.5\lambda + 2$ ……………(I)
(ii) $\dot{\lambda} = -\dfrac{\partial H}{\partial x} = -(-4x - 1 + \lambda) = 1 + 4x - \lambda$ ……………(II)
(iii) $\dot{x} = \dfrac{\partial H}{\partial \lambda} = x + y$ ……………(III)

C) Substituting $y$ from (I) into (III): $\dot{x} = x + 0.5\lambda + 2$.

In matrix form ($\dot{Y} = AY + B$):
$\begin{bmatrix}\dot{\lambda}\\ \dot{x}\end{bmatrix} = \begin{bmatrix}-1 & 4\\ 0.5 & 1\end{bmatrix}\begin{bmatrix}\lambda\\ x\end{bmatrix} + \begin{bmatrix}1\\ 2\end{bmatrix}$

Characteristic roots:
$r_{1,2} = \dfrac{\operatorname{Tr}(A) \pm \sqrt{[\operatorname{Tr}(A)]^2 - 4|A|}}{2} = \dfrac{0 \pm \sqrt{0 - 4(-3)}}{2} = \dfrac{\pm 3.464}{2} = \pm 1.732$

The eigenvector corresponding to $r_1 = 1.732$:
$(A - r_1 I)K_1 = \begin{bmatrix}-2.732 & 4\\ 0.5 & -0.732\end{bmatrix}\begin{bmatrix}k_1\\ k_2\end{bmatrix} = \begin{bmatrix}0\\ 0\end{bmatrix}$
$-2.732k_1 + 4k_2 = 0$, so $k_1 = 1.464k_2$, giving the eigenvector $\begin{bmatrix}1.464\\ 1\end{bmatrix}$.

The eigenvector corresponding to $r_2 = -1.732$:
$(A - r_2 I)K_2 = \begin{bmatrix}0.732 & 4\\ 0.5 & 2.732\end{bmatrix}\begin{bmatrix}k_1\\ k_2\end{bmatrix} = \begin{bmatrix}0\\ 0\end{bmatrix}$
$0.732k_1 + 4k_2 = 0$, so $k_1 = -5.464k_2$, giving the eigenvector $\begin{bmatrix}-5.464\\ 1\end{bmatrix}$.

For the particular (steady-state) solution, $\bar{Y} = -A^{-1}B$: solving $-\bar{\lambda} + 4\bar{x} + 1 = 0$ and $0.5\bar{\lambda} + \bar{x} + 2 = 0$ gives $\bar{\lambda} = -2.33$ and $\bar{x} = -0.83$.

The general solution is therefore
$\lambda_t = 1.464A_1 e^{1.732t} - 5.464A_2 e^{-1.732t} - 2.33$ ……………(IV)
$x_t = A_1 e^{1.732t} + A_2 e^{-1.732t} - 0.83$ ……………(V)

Applying the boundary condition $x(0) = 6$ and the transversality condition $\lambda(1) = 0$:
$x(0) = A_1 + A_2 - 0.83 = 6$
$\lambda(1) = 1.464A_1 e^{1.732} - 5.464A_2 e^{-1.732} - 2.33 = 0$
Solving simultaneously gives $A_1 \approx 0.967$ and $A_2 \approx 5.863$, so

$\lambda_t^* = 1.415e^{1.732t} - 32.04e^{-1.732t} - 2.33$ ……………(VI), the costate variable
$x_t^* = 0.967e^{1.732t} + 5.863e^{-1.732t} - 0.83$ ……………(VII), the state variable

D) Finally, substituting (VI) into (I), $y = 0.5\lambda + 2$, we obtain
$y_t^* = 0.708e^{1.732t} - 16.02e^{-1.732t} + 0.835$, the control variable.

E) Sufficiency check. The objective functional is $f = 4y - 2x^2 - x - y^2$, with discriminant
$|D| = \begin{vmatrix} f_{xx} & f_{xy}\\ f_{yx} & f_{yy}\end{vmatrix} = \begin{vmatrix} -4 & 0\\ 0 & -2\end{vmatrix}$, $|D_1| = -4 < 0$, $|D_2| = |D| = 8 > 0$

so $f$ is strictly concave (negative definite). Since the constraint $\dot{x} = x + y$ is linear, it is jointly concave as well, and the sufficiency conditions for a maximum are satisfied.
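Because Example 5 involves several rounded intermediate quantities, the final paths are worth verifying numerically. The sketch below checks the boundary condition $x(0) = 6$, the transversality condition $\lambda(1) = 0$ and both equations of motion; the small residuals reflect the three-decimal rounding of the coefficients.

```python
import numpy as np

# Check of Example 5's solution paths (coefficients rounded to three decimals).
r = np.sqrt(3.0)                                  # the characteristic root 1.732
lam = lambda t: 1.415 * np.exp(r*t) - 32.04 * np.exp(-r*t) - 7.0/3.0
x   = lambda t: 0.967 * np.exp(r*t) + 5.863 * np.exp(-r*t) - 5.0/6.0
y   = lambda t: 0.5 * lam(t) + 2.0                # the control, from condition (I)

print(x(0.0), lam(1.0))    # approx 6 (boundary condition) and approx 0 (transversality)

t = np.linspace(0.0, 1.0, 11)
h = 1e-6
x_dot   = (x(t + h) - x(t - h)) / (2*h)
lam_dot = (lam(t + h) - lam(t - h)) / (2*h)
print(np.max(np.abs(x_dot - (x(t) + y(t)))))            # x' = x + y, small residual
print(np.max(np.abs(lam_dot - (1 + 4*x(t) - lam(t)))))  # lam' = 1 + 4x - lam, small residual
```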
yx yy