Chapter 2 Constrained Optimization Mat Econ 3rd y
Constrained Optimization
Aims and objectives:
Upon successful completion of this chapter, students will be able to solve:
- one-variable constrained optimization problems;
- two-variable problems with an equality constraint;
- problems with inequality constraints, using the theorem of Kuhn and Tucker.
Introduction
In economic optimization problems, the variables involved are often required to satisfy certain
constraints. In unconstrained optimization problems, no restrictions are placed on the values of
the choice variables. But in reality, optimization of an economic function must respect certain
resource requirements or availability; this emanates from the problem of scarcity of resources.
For example, maximization of production should be subject to the availability of inputs, and
minimization of cost should still satisfy a certain level of output. The other common constraint
in economics is the non-negativity restriction: although negative values are sometimes
admissible, most functions in economics are meaningful only in the first quadrant. So these
constraints should be considered in optimization problems.
Constrained optimization deals with optimization of the objective function (the function to be
optimized) subject to constraints (restrictions). When both the objective and constraint
functions are linear, the problem is solved using a linear programming model. But when we face a
non-linear function, we use the concept of derivatives for optimization. This chapter focuses on
optimization of non-linear constrained functions.
[Figure: graphs (a) and (b) of y = f(x), showing an unconstrained maximum at point a (x < 0) and the corresponding optima when x is restricted to be non-negative]
In the above graph, y attains its unconstrained maximum at point a (x < 0). But when a
non-negativity restriction is imposed on the choice variable, the maximum point will be at
x = 0; at x = 0, f'(x) < 0.
In general, the optimum values of a function subject to the non-negativity constraint x >= 0
can be summarized as follows.
i) Max:  f'(x) = 0 if the maximum occurs at an interior point (x > 0);
         f'(x) <= 0 if the maximum occurs at the boundary (x = 0), i.e. y* = f(0).
   Compactly: f'(x) <= 0, x >= 0, and x·f'(x) = 0.
ii) Min: f'(x) = 0 if x > 0;
         f'(x) >= 0 if x = 0.
   Compactly: f'(x) >= 0, x >= 0, and x·f'(x) = 0.
Example 1:
Max y = -3x² + 7x - 2, subject to x >= 0
In unconstrained optimization:
F.O.C.: f'(x) = -6x + 7 = 0
x* = 7/6
Since x* = 7/6 > 0, the non-negativity constraint is not binding, so the constrained maximum coincides with the unconstrained one.
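A quick numerical check of this example can be sketched in Python (the grid search below is illustrative only, not part of the original derivation):

```python
# Numerical check of Example 1: max y = -3x^2 + 7x - 2 subject to x >= 0.
# A grid search over non-negative x should land on the analytical
# solution x* = 7/6 ≈ 1.167.
def f(x):
    return -3 * x**2 + 7 * x - 2

xs = [i / 1000 for i in range(5001)]  # grid on [0, 5]
x_best = max(xs, key=f)

print(round(x_best, 3))    # 1.167, i.e. x* = 7/6
print(round(f(7 / 6), 4))  # maximum value 25/12 ≈ 2.0833
```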
In the case of two choice variables, an optimization problem with an equality constraint takes
the form Max/Min: y = f(x1, x2), subject to: g(x1, x2) = c. This type of optimization problem is
commonly used in economics because, for the purpose of simplification, two-variable cases
are assumed in finding optimum values. For example, in maximization of utility using the
indifference-curve approach, the consumer is assumed to consume bundles of two goods.
Example: Max u(x1, x2), subject to p1x1 + p2x2 = M
In this section, we will see two methods of solving two-variable optimization problems with
equality constraints.
i) Direct Substitution Method:
The direct substitution method is used for a two-variable optimization problem with only
one constraint. It is a relatively simple method: one variable is eliminated by
substitution before calculating the first-order condition. Consider the
consumer problem in the above example.
p2x2 = M - p1x1
x2 = M/p2 - (p1/p2)x1
Now x2 is expressed as a function of x1. Substituting this value, we can eliminate x2 from the
equation:
Max u = u(x1, x2(x1))
du/dx1 = ∂u/∂x1 + (∂u/∂x2)(dx2/dx1) = 0
Mu1 - Mu2·(p1/p2) = 0
Mu1 = Mu2·(p1/p2)
Mu1/Mu2 = p1/p2
Example: Max u = x1x2, subject to x1 + 4x2 = 120
4x2 = 120 - x1
x2 = 30 - (1/4)x1
u = x1[30 - (1/4)x1] = 30x1 - (1/4)x1²
F.O.C. (First-Order Condition): du/dx1 = 30 - (1/2)x1 = 0
x1* = 60, and x2* = 15
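The same answer can be confirmed numerically; the short sketch below (illustrative only) substitutes the constraint into u and scans a grid:

```python
# Check of the direct-substitution example: max u = x1*x2
# subject to x1 + 4*x2 = 120. Substituting x2 = 30 - x1/4 turns it
# into a one-variable problem in x1.
def u(x1):
    x2 = 30 - x1 / 4      # x2 eliminated via the constraint
    return x1 * x2

xs = [i / 10 for i in range(1201)]  # grid on [0, 120]
x1_best = max(xs, key=u)

print(x1_best)             # 60.0
print(30 - x1_best / 4)    # x2* = 15.0
print(u(x1_best))          # u* = 900.0
```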
We can express the optimal choices λ*, x1*, and x2* as implicit functions of the
parameter c:
x1* = x1*(c)
x2* = x2*(c)
λ* = λ*(c)
Note: If λ < 0, it means that for every one-unit increase (decrease) in the constant of the
constraining function, the objective function will decrease (increase) by a value approximately
equal to λ.
If λ > 0, a unit increase (decrease) in the constant of the constraint will lead to an increase
(decrease) in the optimal value of the objective function by approximately λ.
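This interpretation of λ can be illustrated with the earlier example u = x1x2 under the constraint x1 + 4x2 = c (a sketch; the closed form u*(c) = c²/16 follows from solving x1* = c/2, x2* = c/8, and the F.O.C. gives λ = x2* = c/8):

```python
# Interpretation of λ: du*/dc ≈ λ. For u = x1*x2 subject to
# x1 + 4*x2 = c, the optimum is x1* = c/2, x2* = c/8, so
# u*(c) = c**2 / 16 and λ = du*/dc = c/8.
def u_star(c):
    return c**2 / 16   # optimal utility as a function of c

c = 120
lam = c / 8                           # λ* = 15 at c = 120
marginal = u_star(c + 1) - u_star(c)  # gain from one extra unit of c

print(lam)       # 15.0
print(marginal)  # 15.0625, approximately equal to λ
```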
Lxx = fxx - λgxx,  Lxy = fxy - λgxy,  Lyy = fyy - λgyy
In matrix form, the bordered Hessian is:

        | 0    gx   gy  |
|H̄| =  | gx   Lxx  Lxy |
        | gy   Lyx  Lyy |

The bordered Hessian |H̄| is simply the plain Hessian
| Lxx  Lxy |
| Lyx  Lyy |
bordered by the first-order derivatives of the constraint, with zero on the principal diagonal.
Determinant criterion for the sign definiteness of d²z:
d²z is positive definite subject to dg = 0 iff |H̄| < 0 → minimum;
d²z is negative definite subject to dg = 0 iff |H̄| > 0 → maximum.
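As an illustration, the criterion can be evaluated for the earlier example u = x1x2 subject to x1 + 4x2 = 120, where g1 = 1, g2 = 4, L11 = L22 = 0, and L12 = L21 = 1 (a sketch, not part of the original text):

```python
# Bordered-Hessian check for L = x1*x2 + λ(120 - x1 - 4*x2):
# the borders are the constraint derivatives g1 = 1, g2 = 4, and the
# inner block holds L11 = 0, L12 = L21 = 1, L22 = 0.
def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

H_bar = [[0, 1, 4],
         [1, 0, 1],
         [4, 1, 0]]

print(det3(H_bar))  # 8 > 0, so d2z is negative definite: a maximum
```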
For n variables, subject to g(x1, x2, ..., xn) = c:

        | 0    g1   g2  ...  gn  |
        | g1   L11  L12 ...  L1n |
|H̄| =  | g2   L21  L22 ...  L2n |
        | ...                    |
        | gn   Ln1  Ln2 ...  Lnn |

Its bordered leading principal minors can be defined as:

         | 0   g1   g2  |           | 0   g1   g2   g3  |
|H̄2| =  | g1  L11  L12 |,  |H̄3| = | g1  L11  L12  L13 |,  etc.
         | g2  L21  L22 |           | g2  L21  L22  L23 |
                                    | g3  L31  L32  L33 |

S.O.C. (given the F.O.C. Lλ = L1 = L2 = ... = Ln = 0):
- Maximum: |H̄2| > 0, |H̄3| < 0, |H̄4| > 0, ..., (-1)ⁿ·|H̄n| > 0 (alternating signs);
- Minimum: |H̄2| < 0, |H̄3| < 0, ..., |H̄n| < 0.
Example: Max z = 2xy, subject to 3x + 4y = 90
L = 2xy + λ(90 - 3x - 4y)
The resulting first-order equations can be solved by any of the following methods:
a) Direct substitution
b) Inverse method
c) Cramer's rule
d) Gauss-Jordan elimination method
F.O.C.: Lλ = 90 - 3x - 4y = 0 → x* = 15
        Lx = 2y - 3λ = 0      → y* = 11.25
        Ly = 2x - 4λ = 0      → λ* = 7.5
S.O.C.:
        | 0   g1   g2  |   | 0  3  4 |
|H̄| =  | g1  Lxx  Lxy | = | 3  0  2 | = 48 > 0, so z is at a maximum.
        | g2  Lyx  Lyy |   | 4  2  0 |
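As a sketch of method (c), the F.O.C. system of this example can be solved with Cramer's rule (the small determinant helper below is illustrative, not from the original text):

```python
# Solving the F.O.C. system of L = 2xy + λ(90 - 3x - 4y) by Cramer's
# rule. Unknown order (x, y, λ): 3x + 4y = 90, 2y - 3λ = 0, 2x - 4λ = 0.
def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

A = [[3, 4, 0],
     [0, 2, -3],
     [2, 0, -4]]
b = [90, 0, 0]

D = det3(A)
sol = []
for j in range(3):
    Aj = [row[:] for row in A]   # replace column j by the constants b
    for i in range(3):
        Aj[i][j] = b[i]
    sol.append(det3(Aj) / D)

print(sol)  # [15.0, 11.25, 7.5] → x* = 15, y* = 11.25, λ* = 7.5
```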
For three variables and two constraints, g¹(x1, x2, x3) = c1 and g²(x1, x2, x3) = c2:
L = f(x1, x2, x3) + λ1[c1 - g¹(x1, x2, x3)] + λ2[c2 - g²(x1, x2, x3)]
F.O.C.: L1 = f1 - λ1g¹1 - λ2g²1 = 0
        L2 = f2 - λ1g¹2 - λ2g²2 = 0
        L3 = f3 - λ1g¹3 - λ2g²3 = 0
        Lλ1 = c1 - g¹(x1, x2, x3) = 0
        Lλ2 = c2 - g²(x1, x2, x3) = 0
S.O.C.:
        | 0    0    g¹1  g¹2  g¹3 |
        | 0    0    g²1  g²2  g²3 |
|H̄| =  | g¹1  g²1  L11  L12  L13 |
        | g¹2  g²2  L21  L22  L23 |
        | g¹3  g²3  L31  L32  L33 |
|H̄2| is the minor that contains L22 as the last element of its principal diagonal, and
|H̄3| is the one that contains L33 as the last element of its principal diagonal.
A Jacobian Determinant
A Jacobian determinant permits testing for functional dependence among linear and non-linear
functions. It is composed of all the first-order partial derivatives of a
system of equations, arranged in ordered sequence. For a system
y1 = f¹(x1, x2, x3)
y2 = f²(x1, x2, x3)
y3 = f³(x1, x2, x3)
if |J| = |∂yi/∂xj| = 0 for all values of the variables, the functions are functionally
dependent; if |J| ≠ 0, they are independent and a unique solution exists.
E.g., y1 = 4x1 + x2 and y2 = 16x1² + 8x1x2 + x2²:
y11 = ∂y1/∂x1 = 4 and y12 = ∂y1/∂x2 = 1
∂y2/∂x1 = 32x1 + 8x2
∂y2/∂x2 = 8x1 + 2x2

|J| = | 4            1         | = 4(8x1 + 2x2) - (32x1 + 8x2) = 0
      | 32x1 + 8x2   8x1 + 2x2 |

Since |J| = 0, the two functions are functionally dependent: indeed, y2 = (4x1 + x2)² = y1².
As stated in the preceding paragraphs, partial derivatives thus provide a means of testing
whether there exists functional (linear or non-linear) dependence among a set of n functions in
n variables; this is the role of the Jacobian determinant.
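A numeric version of this test can be sketched as follows (central differences stand in for the analytical partials; the tolerance 1e-4 is an arbitrary choice):

```python
# Numerical check that |J| = 0 for y1 = 4*x1 + x2 and
# y2 = 16*x1^2 + 8*x1*x2 + x2^2, i.e. the functions are dependent
# (y2 = y1^2). Partials are approximated by central differences.
def y1(x1, x2): return 4*x1 + x2
def y2(x1, x2): return 16*x1**2 + 8*x1*x2 + x2**2

def jac_det(x1, x2, h=1e-5):
    j11 = (y1(x1 + h, x2) - y1(x1 - h, x2)) / (2*h)
    j12 = (y1(x1, x2 + h) - y1(x1, x2 - h)) / (2*h)
    j21 = (y2(x1 + h, x2) - y2(x1 - h, x2)) / (2*h)
    j22 = (y2(x1, x2 + h) - y2(x1, x2 - h)) / (2*h)
    return j11*j22 - j12*j21

for point in [(1.0, 2.0), (3.0, -1.0), (0.5, 0.5)]:
    print(abs(jac_det(*point)) < 1e-4)  # True at every point tested
```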
Exercise
1. Optimize y = 3x1² + 5x1 + x1x2 + 6x2² + 4x2 + 2x2x3 + 4x3² + 2x3 + 3x1x3.
2. Q2 = 75 + 0.5p1 - p2
3. y = 110 - 2px + 6py
4. A firm's total cost function is TC = 0.5x² + xy + y². What is the profit-maximizing level of
   output if the firm has to meet a production quota of 3x + 4y = 350?
5. Maximize utility: u(x, y) = x^0.25·y^0.4, subject to 2x + 8y = 104 (use any method).
Example 1 (maximization), subject to:
6x1 + 2x2 <= 36
5x1 + 5x2 <= 40
2x1 + 4x2 <= 28
x1, x2 >= 0
Here we add slack variables to make each constraint binding (active).
Example 2: Min C = 2x1 + 4x2, subject to:
2x1 + x2 >= 14
x1 + x2 >= 12
x1 + 3x2 >= 18
x1, x2 >= 0
Here we subtract surplus variables.
Note: If at optimality the slack/surplus of a constraint is zero, then the constraint is said to
be active or binding; otherwise, it is inactive.
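The slack computation can be sketched for Example 1 at a hypothetical candidate point (the point (4, 4) is chosen only for illustration):

```python
# Computing slack values of '<=' constraints to see which are binding.
def slack(lhs, rhs):
    """Slack of a '<=' constraint: rhs - lhs (binding when zero)."""
    return rhs - lhs

# Example 1 constraints at the hypothetical point (x1, x2) = (4, 4):
x1, x2 = 4, 4
print(slack(6*x1 + 2*x2, 36))  # 4 -> inactive
print(slack(5*x1 + 5*x2, 40))  # 0 -> binding (active)
print(slack(2*x1 + 4*x2, 28))  # 4 -> inactive
```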
Effects of Non-negativity Restrictions
i. Single-variable case
Max: π = f(x1), subject to x1 >= 0
In view of the restriction x1 >= 0, three possible situations may arise.
[Figure: three panels of π = f(x1) (points A to E), showing an interior maximum at x1 > 0 with f'(x1) = 0, a boundary maximum at x1 = 0 with f'(x1) = 0, and a boundary maximum at x1 = 0 with f'(x1) < 0]
All three cases are covered by the conditions f'(x1) <= 0, x1 >= 0, and x1·f'(x1) = 0.
For the maximization problem Max f(x1, x2), subject to g(x1, x2) <= r and x1, x2 >= 0, with
L = f(x1, x2) + λ[r - g(x1, x2)], the Kuhn-Tucker conditions are:
∂L/∂x1 = f1 - λg1 <= 0, x1 >= 0, and x1·(∂L/∂x1) = 0
∂L/∂x2 = f2 - λg2 <= 0, x2 >= 0, and x2·(∂L/∂x2) = 0
∂L/∂λ = r - g(x1, x2) >= 0, λ >= 0, and λ·[r - g(x1, x2)] = 0
- Kuhn-Tucker (K-T) conditions for a minimization problem with an inequality constraint:
Lx1 = f1 - λg1 >= 0, x1 >= 0, and x1·Lx1 = 0
Lx2 = f2 - λg2 >= 0, x2 >= 0, and x2·Lx2 = 0
Lλ = r - g(x1, x2) <= 0, λ >= 0, and λ·Lλ = 0
Substituting (4) in (3), we have x1 = -5/2 (not feasible, because the x's should be non-
negative).
Hence, setting x1* = 0, we have x2* = 80
and λ* = 20.
Step 3: Check whether the constraints are satisfied:
i) non-negativity constraints: x1* = 0 >= 0
   x2* = 80 >= 0
   λ* = 20 >= 0
ii) inequality constraint: x1 + x2 <= 80, and 0 + 80 = 80 (satisfied)
Step 4: Check the complementary slackness conditions:
x1·(∂L/∂x1) = 0: since x1 = 0, x1·(∂L/∂x1) = 0.
x2·(∂L/∂x2) = 0: since x2* = 80 ≠ 0, we need ∂L/∂x2 = 0:
180 - 2x2 - λ = 0
180 - 2(80) - 20 = 0
0 = 0
λ·(∂L/∂λ) = 0: since λ* = 20 ≠ 0, we need ∂L/∂λ = 0:
80 - x1 - x2 = 0
80 - 0 - 80 = 0
0 = 0
Since all the K-T conditions are satisfied, the function is maximized at x1* = 0 and x2* = 80.
Example 2:
Minimize C = (x1 - 4)² + (x2 - 4)²
Subject to: 2x1 + 3x2 >= 6
            3x1 + 2x2 <= 12
            x1, x2 >= 0
Solution:
Step 1: Formulate the Lagrangian
L = (x1 - 4)² + (x2 - 4)² + λ1(6 - 2x1 - 3x2) + λ2(12 - 3x1 - 2x2)
To solve such a multivariable non-linear programming problem, let us use the iteration
(trial-and-error) method.
Iteration 1: assume λ1 ≠ 0 and λ2 ≠ 0, i.e. both constraints binding.
Thus we have ∂L/∂λ1 = 0 and ∂L/∂λ2 = 0, i.e. 2x1 + 3x2 = 6 and 3x1 + 2x2 = 12.
Solving simultaneously yields x1 = 24/5 and x2 = -6/5 (this is not feasible, since x2 < 0).
Iteration 2: assume λ1 = 0 and λ2 ≠ 0.
From equations (1) and (2), the first-order conditions for x1 and x2, we have:
2x1 - 2λ1 - 3λ2 = 8    (5)
2x2 - 3λ1 - 2λ2 = 8    (6)
And ∂L/∂λ2 = 0 gives 3x1 + 2x2 = 12    (7)
Solving equations (5), (6), and (7) with λ1 = 0 yields:
x1* = 28/13
x2* = 36/13
λ1 = 0
λ2 = -16/13
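A numerical sanity check of Example 2 can be sketched as a grid search (illustrative only; the constraint directions 2x1 + 3x2 >= 6 and 3x1 + 2x2 <= 12 are inferred from the worked solution):

```python
# Grid-search check of Example 2: minimize C = (x1-4)^2 + (x2-4)^2
# over the feasible region 2*x1 + 3*x2 >= 6, 3*x1 + 2*x2 <= 12,
# x1, x2 >= 0 (constraint directions inferred from the worked answer).
def C(x1, x2):
    return (x1 - 4)**2 + (x2 - 4)**2

best = None
N = 600
for i in range(N + 1):
    for j in range(N + 1):
        x1, x2 = 6 * i / N, 6 * j / N       # grid on [0, 6] x [0, 6]
        if 2*x1 + 3*x2 >= 6 and 3*x1 + 2*x2 <= 12:
            if best is None or C(x1, x2) < C(*best):
                best = (x1, x2)

print(best)  # close to (28/13, 36/13) ≈ (2.154, 2.769)
```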