
Chapter 6: Dynamic optimization

6.1. Overview of optimization

Optimization is the unifying paradigm in almost all economic analysis. An optimization problem has an objective function, constraints and choice variables. There are many different types of optimization problems, and how you solve them depends on the branch of economics in which you find yourself. For example, consider a simple two-period consumption model, where the consumer's optimization problem is

max_y U(y_1, y_2)   subject to:   p^T y ≤ x,

where y is the vector of choice variables and x is the consumer's exogenously determined income.
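For intuition, this static problem can be solved numerically. The following sketch is purely illustrative: the log utility function, the prices p = (1, 1) and the income x = 10 are hypothetical choices, not data from the text.

import numpy as np
from scipy.optimize import minimize

# Static two-period consumer problem: max U(y1, y2) subject to p'y <= x,
# with hypothetical log utility U = ln(y1) + ln(y2), p = (1, 1) and x = 10.
p = np.array([1.0, 1.0])
x = 10.0

def neg_utility(y):
    return -(np.log(y[0]) + np.log(y[1]))               # minimize the negative of U

budget = {'type': 'ineq', 'fun': lambda y: x - p @ y}    # budget constraint p'y <= x
res = minimize(neg_utility, x0=[1.0, 1.0], bounds=[(1e-6, None)] * 2, constraints=[budget])
print(res.x)   # approximately [5, 5]: income is split evenly across the two periods

With log utility and equal prices the optimum splits the endowment evenly between the two periods; the dynamic problems below replace this single budget constraint with an equation of motion linking the periods.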

What happens if the consumer lives for two periods but has to survive off the income endowment provided at the beginning of the first period? In practice, an action taken in one period has consequences in, and links to, other periods. For example, all else equal, the budget constraint requires that higher consumption in the first period be offset by lower consumption in the second, a trade-off familiar to most households living on a monthly salary or wage. Hence, what this type of framework requires is that we find an optimal time path for each variable, and finding this optimal path is dynamic optimization.

The dynamic optimization problem poses the question of what the optimal magnitude of a choice variable is in each period of the planning period (the discrete-time case, DTP) or at each point of time in a given time interval, say 0 to T (the continuous-time case, CTP).

The time horizon may extend to infinity (∞). So the solution of a dynamic optimization problem takes the form of an optimal time path for every choice variable, detailing the best value of the variable today, tomorrow and so forth until the end of the planning period.

In general, a dynamic optimization problem contains the following basic elements:

i) a given initial point and a given terminal point;
ii) a set of admissible paths from the initial point to the terminal point;
iii) a set of path values serving as performance (economic) indices, such as cost, profit, etc., associated with the various paths;
iv) a specific objective, either to maximize or minimize the path value or performance index by choosing the optimal path.

The technique for solving continuous-time dynamic optimization problems is called optimal control theory, which allows consideration of the optimal time path of a set of variables rather than just the identification of a stationary equilibrium.

As in static optimization, in order to solve the optimization problem we first need to find the first-order conditions.

6.1.1. How to derive the first-order conditions (FOCs)

Step 1. Construct the Hamiltonian function:

H = f(x_t, y_t, t) + λ_t g(x_t, y_t, t),   where λ_t is the Lagrange multiplier (costate variable).

Step 2. Take the derivative of the Hamiltonian with respect to the control variable and set it equal to zero:

dH/dy = df/dy + λ_t dg/dy = 0

Step 3. Take the derivative of the Hamiltonian with respect to the state variable and set it equal to the negative of the time derivative of the Lagrange multiplier:

dH/dx = df/dx + λ_t dg/dx = -dλ_t/dt

Step 4. Impose the transversality condition.

Case 1. Finite horizon:  λ(T) x(T) = 0

Case 2. Infinite horizon with f of the form f(x_t, y_t, t) = e^{-ρt} U(x_t, y_t):  lim_{t→∞} λ_t x_t = 0

Case 3. Infinite horizon with f in a form different from that specified in Case 2:  lim_{t→∞} H_t = 0
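These four steps can be reproduced symbolically. The sketch below (Python with SymPy, used here purely as an illustration) builds the Hamiltonian for a generic problem and writes out the conditions of Steps 2 and 3; xdot and lamdot are placeholder symbols standing in for the time derivatives dx/dt and dλ/dt.

import sympy as sp

# state x, control y, time t, multiplier lam; xdot and lamdot stand in for dx/dt and dlambda/dt
x, y, t, lam, xdot, lamdot = sp.symbols('x y t lam xdot lamdot')
f = sp.Function('f')(x, y, t)          # objective integrand f(x, y, t)
g = sp.Function('g')(x, y, t)          # equation of motion: dx/dt = g(x, y, t)

H = f + lam * g                        # Step 1: the Hamiltonian

foc_control = sp.Eq(sp.diff(H, y), 0)          # Step 2: dH/dy = 0
costate_eq  = sp.Eq(sp.diff(H, x), -lamdot)    # Step 3: dH/dx = -dlambda/dt
motion_eq   = sp.Eq(xdot, sp.diff(H, lam))     # dx/dt = dH/dlambda = g

for eq in (foc_control, costate_eq, motion_eq):
    print(eq)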

The first-order conditions with more than one state and/or control variable follow the same steps.

Example 1.  max_{y_t} ∫₀^T -(1 + y_t²)^{1/2} dt,   subject to:  x'_t = y_t,  x(0) = A, x(T) free

The objective function is concave, continuous and differentiable.

Step 1. Construct the Hamiltonian function:

H = -(1 + y_t²)^{1/2} + λ_t y_t ................................................... (I)

Step 2. The control variable is y_t:

dH/dy = -(1/2)(1 + y_t²)^{-1/2}(2y_t) + λ_t = 0 ................................................... (II)

Step 3. The state variable is x_t:

dH/dx = 0 = -λ'_t, which means λ is constant ................................................... (III)

Step 4. The transversality condition is:

λ(T) = 0 ................................................... (IV)

1. (III) implies that λ is constant.
2. (IV), with λ constant, implies λ_t = 0 for all t.
3. To find y_t, solve (II):

-(1/2)(1 + y_t²)^{-1/2}(2y_t) + λ_t = 0;  with λ_t = 0 this reduces to -(1 + y_t²)^{-1/2} y_t = 0, so

y_t = 0

x'_t = y_t = 0

x_t = A for all t ≥ 0, i.e. x_t = A, a constant.

Hence the optimal paths are y*_t = 0 and x*_t = A.
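A quick numerical illustration of Example 1 (a sketch; the horizon T = 1, the initial value A = 0 and the candidate paths are arbitrary choices made only for illustration): among admissible state paths starting at x(0) = A with x(T) free, the constant path, generated by the control y = x' = 0, gives the largest value of the objective, in line with y*_t = 0.

import numpy as np

# Discretize [0, T] and evaluate the objective integral of -(1 + y^2)^(1/2) for a few
# admissible state paths x(t) with x(0) = A = 0 and x(T) free (T = 1 assumed here).
T, n = 1.0, 100_000
t = np.linspace(0.0, T, n + 1)
dt = T / n

def objective(x_path):
    y = np.diff(x_path) / dt                   # implied control y = x'
    return -np.sum(np.sqrt(1.0 + y**2)) * dt   # Riemann sum of -(1 + y^2)^(1/2)

candidates = {"constant x(t) = 0": 0.0 * t, "linear x(t) = 0.5t": 0.5 * t, "quadratic x(t) = t^2": t**2}
for name, path in candidates.items():
    print(name, objective(path))
# The constant path attains the largest value, -1 (= -T), consistent with y*_t = 0.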

Example 2.  max_{y_t} ∫₀¹ ln(4 y_t x_t) dt,   subject to:  x'_t = 4x_t(1 - y_t),  x(0) = 1, x(1) = e²

The Hamiltonian:

H = ln(4 y_t x_t) + λ_t [4x_t(1 - y_t)]

Maximum conditions:

1. dH/dy_t = 1/y_t - 4λ_t x_t = 0

2. λ'_t = -dH/dx_t = -[1/x_t + 4λ_t(1 - y_t)]

3. x'_t = dH/dλ_t = 4x_t(1 - y_t)

4. x(1) = e²

Simplifying the 1st equation:  y_t = 1/(4λ_t x_t)

Substituting for y_t in 2:

λ'_t = -[1/x_t + 4λ_t(1 - 1/(4λ_t x_t))] = -[1/x_t + 4λ_t - 1/x_t]

λ'_t = -4λ_t, a differential equation you can solve.

5. y_t = 1/(4λ_t x_t)

6. λ'_t = -4λ_t

7. x'_t = 4x_t - 1/λ_t

8. Take 6, λ'_t = -4λ_t, and solve it:  λ_t = λ_0 e^{-4t}

Substitute in 7:  x'_t = 4x_t - 1/(λ_0 e^{-4t}) = 4x_t - e^{4t}/λ_0

Solve the linear first-order differential equation x'_t - 4x_t = -e^{4t}/λ_0 by multiplying through by the integrating factor e^{-4t}:

e^{-4t} x'_t - 4e^{-4t} x_t = -1/λ_0

Integrate:  ∫ (e^{-4t} x'_t - 4e^{-4t} x_t) dt = ∫ (-1/λ_0) dt

x_t e^{-4t} = A - t/λ_0,  where A is the constant of integration, and therefore

9. x_t = A e^{4t} - (t e^{4t})/λ_0

At t = 0:  x(0) = 1, so 1 = A e^0 - 0 and A = 1.

At t = 1:  x(1) = e², so e² = e^4 - e^4/λ_0, hence e^4/λ_0 = e^4 - e² and

λ_0 = e^4/(e^4 - e²) ≈ 1.156

From 8 & 9:

λ_t = λ_0 e^{-4t}  ⇒  λ_t = 1.156 e^{-4t}

x_t = A e^{4t} - (t e^{4t})/λ_0  ⇒  x_t = e^{4t} - 0.865 t e^{4t}

Substitute in 5:

y_t = 1/(4λ_t x_t) = 1/[4(1.156 e^{-4t})(e^{4t} - 0.865 t e^{4t})]

y_t = 1/(4.624 - 4t)

So this is the solution to the problem, and the paths can be graphed.
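As a check on the algebra (a numerical sketch, assuming SciPy is available), the state equation x' = 4x(1 - y) can be integrated forward from x(0) = 1 using the control path y_t = 1/(4.624 - 4t) found above; the terminal value should come out at x(1) = e² ≈ 7.389.

import numpy as np
from scipy.integrate import solve_ivp

lam0 = np.exp(4) / (np.exp(4) - np.exp(2))    # lambda_0 = e^4/(e^4 - e^2) = 1.156...

def y_opt(t):
    return 1.0 / (4.0 * lam0 - 4.0 * t)       # the control path 1/(4.624 - 4t)

def state_eq(t, x):
    return 4.0 * x * (1.0 - y_opt(t))         # equation of motion x' = 4x(1 - y)

sol = solve_ivp(state_eq, (0.0, 1.0), [1.0], rtol=1e-10, atol=1e-12)
print(sol.y[0, -1], np.exp(2))                # both approximately 7.389, i.e. x(1) = e^2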

10.1. Optimal control theory (OCT)

Optimal control theory, as mentioned above, is the technique for solving continuous-time dynamic optimization problems; it allows consideration of the optimal time path of a set of variables rather than just the identification of a stationary equilibrium.

So here the aim is to find the optimal time path for the control variable, which is selected in each period. The state variable, whose value depends on the control variable and which is always in motion, has an equation of motion (transition equation) set equal to the derivative of the Hamiltonian function with respect to the Lagrange multiplier attached to the constraint. Hence the goal of optimal control theory is to choose a path of values over time for the control (choice) variable that will optimize the functional subject to the constraint set on the state variable.
max_y V = ∫₀^T f(x_t, y_t, t) dt,   subject to:  x' = g(x_t, y_t, t),  x(0) = x_0, x(T) = x_T

where V is the value of the functional to be optimized; y(t) is the control variable, which is selected or controlled so as to optimize V; x(t) is the state variable, which changes over time according to the differential equation set equal to x' in the constraint and whose value is indirectly determined by the control variable; and t is time.

10.1.1. The Hamiltonian and the necessary conditions for maximization in optimal control theory

In dynamic optimization with OCT, a functional (an objective function) subject to a constraint on the state variable can be handled using the Hamiltonian function (H). Hence H is defined as

H(x_t, y_t, λ_t, t) = f(x_t, y_t, t) + λ_t g(x_t, y_t, t),

where λ(t) is the costate variable. Like the Lagrangian multiplier, the costate variable λ(t) measures the marginal value, or shadow price, of the associated state variable x(t). The necessary conditions for maximization are then expressed as:

(i) dH/dy = 0
(ii) λ' = dλ/dt = -dH/dx
(iii) x' = dx/dt = dH/dλ
(iv) x(0) = x_0,  x(T) = x_T

Here, the first three conditions make up the maximum principle (Pontryagin's maximum principle) and the fourth is called the boundary condition. The two equations of motion in the 2nd and 3rd conditions form the Hamiltonian system. For minimization, the objective functional can simply be multiplied by (-1), as in concave programming.

Example 2. Solve the following optimal control problem:

max_y V = ∫₀¹ (5x - 3y - 2y²) dt,   subject to:  x' = 6y,  x(0) = 7, x(1) = 70

a) The Hamiltonian is H = 5x - 3y - 2y² + λ(6y)
b) The maximum principle conditions are:

i) dH/dy = -3 - 4y + 6λ = 0

   y = -0.75 + 1.5λ ................................................... (I)

ii) λ' = -dH/dx = -5 ................................................... (II)

iii) x' = dH/dλ = 6y ................................................... (III)

From (I),  x' = 6(-0.75 + 1.5λ) = -4.5 + 9λ ................................................... (III')

Integrating (II),  λ = -5t + k_1 ................................................... (IV)

Substituting in (III') and then integrating:

x' = -4.5 + 9(-5t + k_1) = -4.5 - 45t + 9k_1

x = -4.5t - 22.5t² + 9k_1 t + k_2 ................................................... (V)

c) Applying the boundary conditions:

x(0) = 7,  x(1) = 70

x(0) = k_2 = 7, so k_2 = 7

x(1) = -4.5 - 22.5 + 9k_1 + 7 = 70, so 9k_1 = 90 and k_1 = 10

Substituting k_1 = 10 and k_2 = 7 in (IV) and (V),

λ = -5t + 10, the costate variable ................................................... (VI)

x = -22.5t² + 85.5t + 7, the state variable ................................................... (VII)

d) For the final solution, we then simply substitute (VI) into (I):

y = -0.75 + 1.5(-5t + 10)

y = -7.5t + 14.25, the control variable

e) For the sufficiency conditions, with

f = 5x - 3y - 2y²

f_x = 5 and f_y = -3 - 4y, we have

|D| = |f_xx  f_xy; f_yx  f_yy| = |0  0; 0  -4|,   |D_1| = 0,  |D_2| = |D| = 0

|D| fails the strict negative definiteness criterion but proves to be negative semidefinite, with |D_1| ≤ 0 and |D_2| = |D| ≥ 0.

However, for the semidefinite test we must also test the variables in reverse order:

|D| = |f_yy  f_yx; f_xy  f_xx| = |-4  0; 0  0|,   |D_1| = -4 ≤ 0,  |D_2| = |D| = 0 ≥ 0

With both discriminant tests negative semidefinite, the objective functional f is jointly concave in x and y. Since the constraint is linear, it is also jointly concave and does not need testing. We conclude that the functional is indeed maximized.
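The same answer can be recovered numerically. The sketch below (using SciPy's solve_bvp, an implementation choice not taken from the text) treats the canonical system implied by the maximum principle, x' = -4.5 + 9λ and λ' = -5 with x(0) = 7 and x(1) = 70, as a two-point boundary value problem and compares the result with the analytical paths (VI) and (VII).

import numpy as np
from scipy.integrate import solve_bvp

def odes(t, z):
    x, lam = z                                    # z = [state, costate]
    return np.vstack((-4.5 + 9.0 * lam,           # x'      = -4.5 + 9*lambda
                      -5.0 * np.ones_like(lam)))  # lambda' = -5

def bc(z0, z1):
    return np.array([z0[0] - 7.0,                 # x(0) = 7
                     z1[0] - 70.0])               # x(1) = 70

t = np.linspace(0.0, 1.0, 11)
sol = solve_bvp(odes, bc, t, np.zeros((2, t.size)))

print(sol.sol(0.5)[0], -22.5 * 0.5**2 + 85.5 * 0.5 + 7)   # state:   both about 44.125
print(sol.sol(0.5)[1], -5 * 0.5 + 10)                     # costate: both about 7.5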
Example 3. Solve max_y V = ∫₀³ (4x - 5y²) dt,   subject to:  x' = 8y,  x(0) = 2, x(3) = 117.2

a) The Hamiltonian is:  H = 4x - 5y² + λ(8y)
b) The maximum principle conditions are:

i. dH/dy = -10y + 8λ = 0

   y = 0.8λ ................................................... (I)

ii. λ' = -dH/dx = -4 ................................................... (II)

iii. x' = dH/dλ = 8y ................................................... (III)

Substituting (I) into (III) we get:  x' = 8(0.8λ) = 6.4λ ................................................... (III')

Integrating (II):  λ = ∫ λ' dt = ∫ (-4) dt = -4t + k_1 ................................................... (IV)

Substituting (IV) in (III'):

x' = 6.4(-4t + k_1) = -25.6t + 6.4k_1 ................................................... (III'')

Integrating (III''):  x = ∫ (-25.6t + 6.4k_1) dt

x = -12.8t² + 6.4k_1 t + k_2 ................................................... (V)
c) Applying the boundary conditions:

x(0) = -12.8(0)² + 6.4k_1(0) + k_2 = 2, so k_2 = 2

x(3) = -12.8(3)² + 6.4k_1(3) + 2 = 117.2, so 19.2k_1 = 230.4 and k_1 = 12

Substituting k_1 = 12 and k_2 = 2 in (IV) and (V),

λ = -4t + 12, the costate variable ................................................... (VI)

x = -12.8t² + 76.8t + 2, the state variable ................................................... (VII)

d) For the final solution, we then simply substitute (VI) into (I):

y = 0.8(-4t + 12) = -3.2t + 9.6

y = -3.2t + 9.6, the control variable

Alternatively, take (III'') with k_1 = 12,

x' = -25.6t + 6.4k_1 = -25.6t + 76.8,

and substitute x' into the equation of motion in the constraint to find the control variable:

x' = 8y

-25.6t + 76.8 = 8y

y = -3.2t + 9.6, the control variable

Evaluated at the end points:

y(0) = -3.2(0) + 9.6 = 9.6

y(3) = -3.2(3) + 9.6 = 0

e) For the sufficiency conditions, with

f = 4x - 5y²

f_x = 4 and f_y = -10y, we have

|D| = |f_xx  f_xy; f_yx  f_yy| = |0  0; 0  -10|,   |D_1| = 0,  |D_2| = |D| = 0

|D| fails the strict negative definiteness criterion but proves to be negative semidefinite, with |D_1| ≤ 0 and |D_2| = |D| ≥ 0.

However, for the semidefinite test we must also test the variables in reverse order:

|D| = |f_yy  f_yx; f_xy  f_xx| = |-10  0; 0  0|,   |D_1| = -10 ≤ 0,  |D_2| = |D| = 0 ≥ 0

With both discriminant tests negative semidefinite, the objective functional f is jointly concave in x and y. Since the constraint is linear, it is also jointly concave and does not need testing. We conclude that the functional is indeed maximized.
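A short symbolic check of Example 3 (a sketch using SymPy; exact rationals are used so that the residuals come out exactly zero): the paths (VI) and (VII) and the control found above should satisfy the optimality condition -10y + 8λ = 0, the equation of motion x' = 8y, and the boundary values x(0) = 2 and x(3) = 117.2.

import sympy as sp

t = sp.symbols('t')
lam = -4*t + 12                                            # costate (VI)
x = sp.Rational(-64, 5)*t**2 + sp.Rational(384, 5)*t + 2   # state (VII): -12.8t^2 + 76.8t + 2
y = sp.Rational(-16, 5)*t + sp.Rational(48, 5)             # control:     -3.2t + 9.6

print(sp.simplify(-10*y + 8*lam))        # 0: optimality condition dH/dy = 0 holds
print(sp.simplify(sp.diff(x, t) - 8*y))  # 0: equation of motion x' = 8y holds
print(x.subs(t, 0), x.subs(t, 3))        # 2 and 586/5 (= 117.2): boundary conditions hold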

10.1.2. The Hamiltonian and sufficient conditions for maximization in optimal control

So far we have seen the necessary conditions for maximization. The sufficiency conditions will be fulfilled if:

i. both the objective functional and the constraint are differentiable and jointly concave in x (the state variable) and y (the control variable); and
ii. λ(t) ≥ 0 if the constraint is non-linear in x or y. If the constraint is linear, λ(t) may assume any sign.

For non-linear functions, a simple test for joint concavity is the discriminant test, applied to the discriminant of the second-order derivatives of the function; linear functions are always both concave and convex.
|D| = |f_xx  f_xy; f_yx  f_yy|

If |D_1| = f_xx < 0 and |D_2| = |D| > 0, the function is strictly concave and |D| is negative definite; this indicates a global maximum and is therefore always sufficient for a maximum.

If |D_1| = f_xx ≤ 0 and |D_2| = |D| ≥ 0, the function is concave and |D| is negative semidefinite; this is indicative of a local maximum and is sufficient for a maximum only if the test is employed for every possible ordering of the variables with similar results. See point (e) of Examples (2) and (3) above, where the sufficiency conditions are evaluated.
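The discriminant test is easy to automate. The sketch below (SymPy, purely illustrative) forms the Hessian of the integrand of Example 2 above, f = 5x - 3y - 2y², and prints the leading principal minors for both orderings of the variables, reproducing the semidefinite test used in point (e) of that example.

import sympy as sp

x, y = sp.symbols('x y')
f = 5*x - 3*y - 2*y**2

H = sp.hessian(f, (x, y))         # matrix of second-order partials in the order (x, y)
print(H)                          # Matrix([[0, 0], [0, -4]])
print(H[0, 0], H.det())           # |D_1| = 0,  |D_2| = |D| = 0

Hr = sp.hessian(f, (y, x))        # reverse ordering (y, x), needed for the semidefinite test
print(Hr[0, 0], Hr.det())         # |D_1| = -4, |D_2| = |D| = 0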

Optimal control theory with a free end point

An optimal control problem involving continuous time with a finite time horizon and a free end point can be expressed as:

Maximize V = ∫₀^T f(x_t, y_t, t) dt,   subject to:  x' = g(x_t, y_t, t),  x(0) = x_0, x(T) free

where the terminal value of the state variable, x(T), is free and unrestricted. Assuming an interior solution, then:

H xt , yt ,  t , t   f xt , yt , t    t gxt , yt , t 

dH
(i) 0
dy
d dH
 
,
(ii)
dt dx
dx dH
x 
,
(iii)
dt d

x0  x0 , xT   0
here, the 1st three conditions for maximization, comprising the maximum condition, remain the
same but the last or boundary condition changes. This last condition is called transversality
condition for free end point. If the value of x at T is free to vary, the constraint must be non-
binding and the shadow price
 evaluated at T must equal zero (0).

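Numerically, the only thing a free end point changes is the second boundary residual. As a purely hypothetical illustration (a variant not worked in the text), take the problem of Example 2 above, max ∫₀¹ (5x - 3y - 2y²) dt with x' = 6y and x(0) = 7, but leave x(1) free: the canonical system is unchanged, and the boundary function now imposes λ(1) = 0 instead of x(1) = 70.

import numpy as np
from scipy.integrate import solve_bvp

def odes(t, z):
    x, lam = z
    return np.vstack((-4.5 + 9.0 * lam,           # x'      = -4.5 + 9*lambda
                      -5.0 * np.ones_like(lam)))  # lambda' = -5

def bc(z0, z1):
    return np.array([z0[0] - 7.0,   # x(0) = 7
                     z1[1]])        # lambda(1) = 0: transversality for the free end point

t = np.linspace(0.0, 1.0, 11)
sol = solve_bvp(odes, bc, t, np.zeros((2, t.size)))
print(sol.sol(1.0)[1])   # approximately 0: the shadow price vanishes at T
print(sol.sol(0.0)[1])   # lambda(0) = 5, so the control starts at y(0) = -0.75 + 1.5*5 = 6.75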
Example 4.  max_y ∫₀² (6x - 4y²) dt,   subject to:  x' = 16y,  x(0) = 8, x(2) = 40

A) The Hamiltonian is:  H = 6x - 4y² + λ(16y)

B) The necessary conditions for the maximum principle are:

(i) dH/dy = -8y + 16λ = 0

    y = 2λ ................................................... (I)

(ii) λ' = dλ/dt = -dH/dx = -6 ................................................... (II)

(iii) x' = dx/dt = dH/dλ = 16y

From (I):  x' = 16(2λ) = 32λ ................................................... (III)

Integrating (II):  λ(t) = -6t + k_1 ................................................... (IV)

x' = 32(-6t + k_1) = -192t + 32k_1

x(t) = -96t² + 32k_1 t + k_2 ................................................... (V)

C) Applying the boundary conditions:

x(0) = 8,  x(2) = 40

x(0) = k_2 = 8, so k_2 = 8

x(2) = -96(2)² + 32k_1(2) + k_2 = 40;  with k_2 = 8,

x(2) = -384 + 64k_1 + 8 = 40, so 64k_1 = 416 and k_1 = 6.5

Substituting k_1 = 6.5 and k_2 = 8 into (IV) and (V),

λ(t) = -6t + 6.5, the costate variable ................................................... (VI)

x(t) = -96t² + 208t + 8, the state variable ................................................... (VII)

D) For the control variable solution, either substitute (VI) into (I),

y(t) = 2λ = 2(-6t + 6.5) = -12t + 13, the control variable,

or take the derivative of (VII) and substitute it in the constraint x' = 16y:

x' = -192t + 208 = 16y

y(t) = -12t + 13, the control variable

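A quick symbolic check of Example 4 (a SymPy sketch): the paths (VI) and (VII) and the control found above should satisfy the boundary values x(0) = 8 and x(2) = 40, the equation of motion x' = 16y, and the optimality condition -8y + 16λ = 0.

import sympy as sp

t = sp.symbols('t')
lam = -6*t + sp.Rational(13, 2)     # costate (VI): -6t + 6.5
x = -96*t**2 + 208*t + 8            # state (VII)
y = -12*t + 13                      # control

print(x.subs(t, 0), x.subs(t, 2))          # 8 and 40: boundary conditions hold
print(sp.simplify(sp.diff(x, t) - 16*y))   # 0: equation of motion x' = 16y holds
print(sp.simplify(-8*y + 16*lam))          # 0: optimality condition dH/dy = 0 holds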
Example 5.  max_y ∫₀¹ (4y - y² - x - 2x²) dt,   subject to:  x' = x + y,  x(0) = 6, x(1) free

A) The Hamiltonian is:  H = 4y - y² - x - 2x² + λ(x + y)
B) The necessary conditions for the maximum principle are:

(i) dH/dy = 4 - 2y + λ = 0

    y = 0.5λ + 2 ................................................... (I)

(ii) λ' = dλ/dt = -dH/dx = -(-1 - 4x + λ) = 1 + 4x - λ ................................................... (II)

(iii) x' = dH/dλ = x + y ................................................... (III)

Substituting y from (I) into (III):  x' = x + 0.5λ + 2

In matrix form,

[λ'; x'] = A [λ; x] + B,  where A = [-1  4; 0.5  1] and B = [1; 2],  i.e. Y' = AY + B.

Characteristic roots:

r_1,2 = [Tr(A) ± √(Tr(A)² - 4|A|)]/2 = [0 ± √(0 - 4(-3))]/2 = ±3.464/2 = ±1.732

r_1 = 1.732,  r_2 = -1.732
The eigenvector corresponding to r_1 = 1.732 is found from (A - r_1 I)K = 0:

[-1 - 1.732   4;  0.5   1 - 1.732] K = [-2.732   4;  0.5   -0.732] K = 0

-2.732k_1 + 4k_2 = 0, so k_1 = 1.464k_2, and the first complementary solution is

y_c1 = k_1 [1.464; 1] e^{1.732t}

The eigenvector corresponding to r_2 = -1.732 is found from (A - r_2 I)K = 0:

[-1 + 1.732   4;  0.5   1 + 1.732] K = [0.732   4;  0.5   2.732] K = 0

0.732k_1 + 4k_2 = 0, so k_1 = -5.464k_2, and the second complementary solution is

y_c2 = k_2 [-5.464; 1] e^{-1.732t}
For the particular (steady-state) solution, y_p = -A⁻¹B. With |A| = -3 and adj(A) = [1  -4; -0.5  -1],

[λ_p; x_p] = -A⁻¹B = (1/3) [1  -4; -0.5  -1] [1; 2] = (1/3) [1 - 8; -0.5 - 2] = [-2.33; -0.83],

so λ_p = -2.33 and x_p = -0.83.

Combining the complementary and particular solutions,

λ(t) = 1.464k_1 e^{1.732t} - 5.464k_2 e^{-1.732t} - 2.33 ................................................... (IV)

x(t) = k_1 e^{1.732t} + k_2 e^{-1.732t} - 0.83 ................................................... (V)

C) Applying the transversality condition for a free end point, λ(1) = 0:

λ(1) = 1.464k_1 e^{1.732} - 5.464k_2 e^{-1.732} - 2.33 = 0

λ(1) = 8.2744k_1 - 0.9667k_2 - 2.33 = 0

From the initial condition x(0) = 6:

x(0) = k_1 + k_2 - 0.83 = 6

Solving the two equations

8.2744k_1 - 0.9667k_2 - 2.33 = 0
k_1 + k_2 - 6.83 = 0

simultaneously, we get k_1 = 0.9666 and k_2 = 5.8634.

Substituting in (IV) and (V),

λ(t) = 1.415 e^{1.732t} - 32.038 e^{-1.732t} - 2.33 ................................................... (VI), the costate variable

x(t) = 0.9666 e^{1.732t} + 5.8634 e^{-1.732t} - 0.83 ................................................... (VII), the state variable
D) Finally, substituting (VI) into (I), y = 0.5λ + 2, we obtain:

y(t) = 0.708 e^{1.732t} - 16.019 e^{-1.732t} + 0.835, the control variable

E) For the sufficiency condition, with

f = 4y - y² - x - 2x²,  f_x = -1 - 4x, and f_y = 4 - 2y, we have

|D| = |f_xx  f_xy; f_yx  f_yy| = |-4  0; 0  -2|,   |D_1| = -4 < 0,  |D_2| = |D| = 8 > 0

Therefore, |D| is negative definite and f is strictly concave. The constraint x' = x + y is linear and hence also concave, so the sufficiency conditions for a global maximum in optimal control theory are satisfied.
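Finally, a numerical sketch checking the Example 5 solution (NumPy; the small residuals simply reflect rounding of the eigenvector entries and steady-state values to a few decimals): the constants k_1 and k_2 should satisfy the initial condition x(0) = 6 and the transversality condition λ(1) = 0, and the paths should satisfy the canonical system λ' = 1 + 4x - λ and x' = x + y.

import numpy as np

r = np.sqrt(3)                      # 1.732..., the characteristic root
k1, k2 = 0.9666, 5.8634

lam = lambda t: 1.464*k1*np.exp(r*t) - 5.464*k2*np.exp(-r*t) - 2.33
x   = lambda t: k1*np.exp(r*t) + k2*np.exp(-r*t) - 0.83
y   = lambda t: 0.5*lam(t) + 2.0

print(x(0.0))      # approximately 6: initial condition holds
print(lam(1.0))    # approximately 0: transversality condition at the free end point holds

# Residuals of the equations of motion at a test point (small, of order 1e-2,
# because 1.464, 5.464, 2.33 and 0.83 are rounded values).
t0, h = 0.5, 1e-6
print((lam(t0 + h) - lam(t0 - h)) / (2*h) - (1 + 4*x(t0) - lam(t0)))
print((x(t0 + h) - x(t0 - h)) / (2*h) - (x(t0) + y(t0)))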
