
Chapter 2

Constrained Optimization
Aims and objectives:
Upon successful completion of this chapter, students will be able to solve
- One-variable constrained optimization problems
- Two-variable problems with equality constraints
- Inequality constraints and the theorem of Kuhn and Tucker
Introduction

In economic optimization problems, the variables involved are often required to satisfy certain
constraints. In unconstrained optimization problems, no restrictions are placed on the values of
the choice variables. In reality, however, optimization of an economic function must respect
some resource requirement or availability; this stems from the scarcity of resources. For example,
maximization of production is subject to the availability of inputs, and minimization of cost
must still deliver a required level of output. The other common constraint in economics is the
non-negativity restriction: although negative values are sometimes admissible, most economic
functions are meaningful only in the first quadrant. These constraints should therefore be built
into the optimization problem.
Constrained optimization deals with optimizing the objective function (the function to be
optimized) subject to constraints (restrictions). When both the objective and the constraint
functions are linear, the problem is solved with a linear programming model. When a nonlinear
function is involved, we use the concept of derivatives. This chapter focuses on optimization of
nonlinear constrained functions.

2.1. One Variable constrained optimization

i) With equality constraint: optimization of a one-variable function subject to an equality
constraint takes the form

Max (min): y = f(x), subject to: x = x̄

The solution for this type of problem is y* = f(x̄).


ii) Non-negativity constraint:
a) Max (min): y = f(x), subject to: x ≥ 0. Consider the following graph.

[Figure: y = f(x) with its unconstrained maximum at point a, where x < 0, and point b on the vertical axis at x = 0]

In the above graph, y attains its unconstrained maximum at point a (x < 0). But when a non-
negativity restriction is imposed on the choice variable, the maximum will be at x = 0, so that
y* = f(0), with f'(x) < 0 at x = 0.

b) [Figure: y = f(x) with its maximum at an interior point a, where x > 0]

Here both the constrained and the unconstrained maximum are at point a, where x > 0 and
f'(x) = 0, so y* = f(x*).

c) [Figure: y = f(x) with its maximum at point a on the vertical axis, where x = 0]

Here the constrained and unconstrained maxima coincide at x = 0, and f'(x) = 0.
Minimum: minimization of functions with a non-negativity constraint can be viewed graphically
in the same way.

[Figures: three cases — the unconstrained minimum at x < 0 (so the constrained minimum falls at x = 0), a minimum exactly at x = 0, and an interior minimum at x > 0]
In general, the optimum of a function with a non-negativity constraint can be summarized
as follows.

i) Max: y* = f(x*) with f'(x*) = 0 if x* > 0
        y* = f(0) with f'(0) ≤ 0 if x* = 0
   i.e. f'(x*) ≤ 0, x* ≥ 0, and x* · f'(x*) = 0.

ii) Min: f'(x*) = 0 if x* > 0
         f'(0) ≥ 0 if x* = 0
   i.e. f'(x*) ≥ 0, x* ≥ 0, and x* · f'(x*) = 0.
Example 1
Max y = -3x² - 7x + 2, subject to x ≥ 0
In unconstrained operation:
F.O.C.: f'(x) = -6x - 7 = 0

x = -7/6

Since this violates the non-negativity constraint (the unconstrained maximum lies at x < 0),
we impose x ≥ 0 and obtain:

x* = 0, and f(0) = 2
f'(0) = -7 ≤ 0
2.2. Two variable problems with equality constraint

In the case of two choice variables, an optimization problem with an equality constraint takes
the form

Max/Min: y = f(x1, x2), subject to: g(x1, x2) = c

This type of problem is common in economics because, for simplicity, optimum values are
usually found for the two-variable case. For example, in maximizing utility with the
indifference-curve approach, the consumer is assumed to consume bundles of two goods:

Example: Max u(x1, x2), subject to p1x1 + p2x2 = M

In this section, we will see two methods of solving two-variable optimization problems with
equality constraints.
i) Direct Substitution Method:
Direct substitution is used for a two-variable optimization problem with only one constraint.
It is a relatively simple method: one variable is eliminated by substitution before applying
the first-order condition. Consider the consumer problem in the above example.

p2x2 = M - p1x1
x2 = M/p2 - (p1/p2)x1

Now x2 is expressed as a function of x1. Substituting this value, we can eliminate x2 from the
objective:

⇒ Max u = u(x1, x2(x1))

du/dx1 = ∂u/∂x1 + (∂u/∂x2)(dx2/dx1) = 0

⇒ Mu1 + Mu2(-p1/p2) = 0
⇒ Mu1 = Mu2 · (p1/p2)
⇒ Mu1/Mu2 = p1/p2
Example: u = x1x2, subject to x1 + 4x2 = 120
4x2 = 120 - x1
x2 = 30 - (1/4)x1
⇒ u = x1[30 - (1/4)x1] = 30x1 - (1/4)x1²

F.O.C. (First-Order Condition): du/dx1 = 30 - (1/2)x1 = 0

⇒ x1* = 60, and x2* = 15

ii) Lagrange Multiplier Method

When the constraint is a complicated function, or when there are several constraints, we resort
to the method of Lagrange.

Max: f(x1, x2), subject to g(x1, x2) = c

L = f(x1, x2) + λ[c - g(x1, x2)], where L denotes the Lagrangian function and λ the Lagrange
multiplier.

An Interpretation of the Lagrange Multiplier
The Lagrange multiplier, λ, measures the effect on the objective function of a one-unit change
in the constant of the constraint function (the stock of a resource). To show this, given the
Lagrangian function

L = f(x1, x2) + λ[c - g(x1, x2)]

F.O.C.
L1 = f1(x1, x2) - λg1(x1, x2) = 0
L2 = f2(x1, x2) - λg2(x1, x2) = 0
Lλ = c - g(x1, x2) = 0

We can express the optimal choices λ*, x1* and x2* as implicit functions of the parameter c:
x1* = x1*(c)
x2* = x2*(c)
λ* = λ*(c)

Note: If λ > 0, then for every one-unit increase (decrease) in the constant of the constraint
function, the objective function will increase (decrease) by a value approximately equal to λ.

If λ < 0, a unit increase (decrease) in the constant of the constraint will lead to a decrease
(increase) in the value of the objective function by approximately |λ|.

S.O.C. (Second-Order Conditions) for a constrained optimization problem
Max: f(x, y), subject to g(x, y) = c
L = f(x, y) + λ[c - g(x, y)]
F.O.C.
Lλ = c - g(x, y) = 0
Lx = fx - λgx = 0
Ly = fy - λgy = 0

S.O.C.: Lλλ = 0, Lλx = -gx, Lλy = -gy
Lxx = fxx - λgxx, Lxy = fxy - λgxy
Lyy = fyy - λgyy

In matrix form (the borders may be written with positive sign, since changing the sign of both
the border row and the border column leaves the determinant unchanged):

| 0   gx   gy  |
| gx  Lxx  Lxy |
| gy  Lyx  Lyy |

This matrix is called the Bordered Hessian and its determinant is denoted by |H̄|.

The bordered Hessian |H̄| is simply the plain Hessian

| Lxx  Lxy |
| Lyx  Lyy |

bordered by the first-order derivatives of the constraint, with zero on the principal diagonal.
Determinant criterion for the sign definiteness of d²z:

d²z is positive definite subject to dg = 0 iff |H̄| < 0  ⇒ minimum
d²z is negative definite subject to dg = 0 iff |H̄| > 0  ⇒ maximum

where |H̄| is the determinant of the bordered Hessian above.

The n-variable case

Given f(x1, x2, …, xn), subject to g(x1, x2, …, xn) = c,

      | 0   g1   g2   …  gn  |
      | g1  L11  L12  …  L1n |
|H̄| = | g2  L21  L22  …  L2n |
      | …   …    …       …   |
      | gn  Ln1  Ln2  …  Lnn |

Its bordered leading principal minors can be defined as:

        | 0   g1   g2  |          | 0   g1   g2   g3  |
|H̄2| =  | g1  L11  L12 | , |H̄3| = | g1  L11  L12  L13 | , etc.
        | g2  L21  L22 |          | g2  L21  L22  L23 |
                                  | g3  L31  L32  L33 |

Conditions   Maximization                          Minimization

F.O.C.       Lλ = L1 = L2 = … = Ln = 0             The same, i.e.
                                                   Lλ = L1 = L2 = … = Ln = 0

S.O.C.       |H̄2| > 0, |H̄3| < 0, |H̄4| > 0, …,     |H̄2|, |H̄3|, …, |H̄n| < 0
             (-1)ⁿ |H̄n| > 0

Example: Max z(x, y) = 2xy subject to 3x + 4y = 90

L = 2xy + λ(90 - 3x - 4y)

The resulting F.O.C. equations can be solved by any of the following ways:
a) Direct substitution
b) Inverse method
c) Cramer's rule
d) Gauss-Jordan elimination method

F.O.C.: Lλ = 90 - 3x - 4y = 0      x* = 15
        Lx = 2y - 3λ = 0           y* = 11.25
        Ly = 2x - 4λ = 0           λ* = 7.5

       | 0   g1   g2  |   | 0  3  4 |
S.O.C: | g1  Lxx  Lxy | = | 3  0  2 | = |H̄|
       | g2  Lyx  Lyy |   | 4  2  0 |

|H̄| = |H̄2| = 48 > 0 ⇒ negative definite (Max)
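The F.O.C. system of this example is linear in (x, y, λ), so it can be solved with Cramer's rule, which the text lists among the solution methods. A minimal pure-Python sketch:

```python
# Solve the F.O.C. of L = 2xy + λ(90 - 3x - 4y) by Cramer's rule.
# Written as a linear system in (x, y, λ):
#   -3x - 4y       = -90    (Lλ = 0, rearranged)
#         2y - 3λ  = 0      (Lx = 0)
#    2x      - 4λ  = 0      (Ly = 0)

def det3(m):
    # determinant of a 3x3 matrix by cofactor expansion along row 0
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[-3, -4, 0],
     [0, 2, -3],
     [2, 0, -4]]
rhs = [-90, 0, 0]

D = det3(A)
sol = []
for j in range(3):                      # replace column j with the RHS
    Aj = [row[:] for row in A]
    for i in range(3):
        Aj[i][j] = rhs[i]
    sol.append(det3(Aj) / D)

x, y, lam = sol
print(x, y, lam)   # 15.0 11.25 7.5
```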

More than one equality constraint

Max/Min: f(x1, x2, x3), subject to g¹(x1, x2, x3) = c1 and g²(x1, x2, x3) = c2

L = f(x1, x2, x3) + λ1[c1 - g¹(x1, x2, x3)] + λ2[c2 - g²(x1, x2, x3)]

F.O.C.: L1 = f1 - λ1g¹1 - λ2g²1 = 0
        L2 = f2 - λ1g¹2 - λ2g²2 = 0
        L3 = f3 - λ1g¹3 - λ2g²3 = 0
        Lλ1 = c1 - g¹(x1, x2, x3) = 0
        Lλ2 = c2 - g²(x1, x2, x3) = 0

S.O.C. (writing gʲi for ∂gʲ/∂xi):

      | 0    0    g¹1  g¹2  g¹3 |
      | 0    0    g²1  g²2  g²3 |
|H̄| = | g¹1  g²1  L11  L12  L13 |
      | g¹2  g²2  L21  L22  L23 |
      | g¹3  g²3  L31  L32  L33 |

|H̄2| is the bordered leading principal minor that contains L22 as the last element of its
principal diagonal.

|H̄3| is the one that contains L33 as the last element of its principal diagonal.

A Jacobian Determinant
A Jacobian determinant permits testing for functional dependence among both linear and
non-linear functions. It is composed of all the first-order partial derivatives of a system of
equations, arranged in ordered sequence.

Example: given y1 = f1(x1, x2, x3)
               y2 = f2(x1, x2, x3)
               y3 = f3(x1, x2, x3)

      | ∂y1/∂x1  ∂y1/∂x2  ∂y1/∂x3 |
|J| = | ∂y2/∂x1  ∂y2/∂x2  ∂y2/∂x3 |
      | ∂y3/∂x1  ∂y3/∂x2  ∂y3/∂x3 |

- If |J| = 0, then there is functional dependence among the equations and hence no unique
solution exists.
y1 y
E.g y1  4 x1  x2  y11  4 & y12  1 , y 12 means and y11 means 1
x2 x1
y2
y2  16 x12  8 x1 x2  x22   32 x1  8 x2
x1
y 2
  8 x1  2 x 2
x 2

4 1
J   64 x1  16 x2  0, thus, the system has a solution because the value
32 x1  8 x2 8 x12 x2

of the Jacobian determinant is non-singular, i.e., different from zero  J  0.

As stated in the preceding paragraphs, partial derivatives can provide a means of testing
whether there exists functional (linear or nonlinear) dependence among a set of n functions in
n variables. This is related to the notion of Jacobian determinant.
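The dependence in the example can be checked numerically: evaluating the 2×2 Jacobian determinant at several (arbitrary) sample points should give zero everywhere. A minimal sketch:

```python
# y1 = 4*x1 + x2 and y2 = 16*x1**2 + 8*x1*x2 + x2**2 = (4*x1 + x2)**2
# are functionally dependent, so |J| should vanish at every point.
def jac_det(x1, x2):
    dy1 = (4, 1)                               # partials of y1
    dy2 = (32 * x1 + 8 * x2, 8 * x1 + 2 * x2)  # partials of y2
    return dy1[0] * dy2[1] - dy1[1] * dy2[0]   # 2x2 determinant

samples = [(0, 0), (1, 2), (-3, 5), (10, -7)]
print([jac_det(a, b) for a, b in samples])   # [0, 0, 0, 0]
```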
Exercise
1. Optimize y = 3x1² - 5x1 - x1x2 + 6x2² - 4x2 + 2x2x3 + 4x3² + 2x3 - 3x1x3, using:
a) Cramer's rule for the first-order condition
b) The Hessian for the S.O.C.
2. A monopolistic firm produces two related goods. The demand and total cost functions are:
Q1 = 100 - 2p1 + 2p2
Q2 = 75 + 0.5p1 - p2
TC = Q1² + 2Q1Q2 + Q2²
Optimize the firm's profit function by:
a) Finding the inverse demand functions
b) Using Cramer's rule for the F.O.C.
c) Using the Hessian for the S.O.C.
3. Minimize a firm's total cost:
c = 45x² + 90xy + 90y², when the firm has to meet a production quota 2x + 3y = 60, by:
a) finding the critical values
b) using the Bordered Hessian to test the S.O.C.
4. A monopolist firm produces two substitute goods and faces the following demand functions:
x = 80 - px + 2py
y = 110 + 2px - 6py
Its total cost function is TC = 0.5x² + xy + y². What is the profit-maximizing level of
output if the firm has to meet a production quota of 3x + 4y = 350?
5. Maximize utility u(x, y) = x^0.25 · y^0.4, subject to 2x + 8y = 104 (use any method).

2.3. Inequality Constraints & The theorem of Kuhn and Tucker


One common kind of inequality constraint requires certain variables to be non-negative; such
restrictions often have to be imposed so that the solution makes economic sense. In addition,
bounds on the availability of resources are often expressed as inequalities rather than
equalities. Consider the following examples.
Ex-1: Max π = 5x1 + 3x2 (π stands for profit)

            6x1 + 2x2 ≤ 36
subject to  5x1 + 5x2 ≤ 40    Here we add slack variables to convert each "≤" constraint into
            2x1 + 4x2 ≤ 28    an equality.
            x1, x2 ≥ 0

                                          2x1 + x2 ≥ 14
Example-2: Min C = 2x1 + 4x2, subject to  x1 + x2 ≥ 12    Here we subtract surplus variables to
                                          x1 + 3x2 ≥ 18   convert each "≥" constraint into an
                                          x1, x2 ≥ 0      equality. C denotes cost.

Note: If at the optimum the slack/surplus of a constraint is zero, then the constraint is said to
be active or binding. Otherwise, it is inactive.
Effects of Non-negativity Restrictions
i. Single-variable case
Max: π = f(x1), subject to x1 ≥ 0
In view of the restriction x1 ≥ 0, three possible situations may arise.

[Figures: π = f(x1) with (A) an interior maximum at x1 > 0; (B) a maximum exactly at x1 = 0; (C, D, E) a maximum at x1 = 0 with the curve falling for x1 > 0]

f'(x1) = 0 and x1 > 0   (point A)
f'(x1) = 0 and x1 = 0   (point B)
f'(x1) < 0 and x1 = 0   (points C & D)

These three conditions can be consolidated into a single statement:
f'(x1) ≤ 0, x1 ≥ 0, and x1 · f'(x1) = 0 (for max), and
f'(x1) ≥ 0, x1 ≥ 0, and x1 · f'(x1) = 0 for minimum.

- The equation x1 · f'(x1) = 0 is referred to as the complementary slackness condition between
x1 and f'(x1).
ii. Two-variable, one-constraint case
max π = f(x1, x2), subject to g(x1, x2) ≤ r
x1, x2 ≥ 0

Add a slack variable to change the constraint into a strict equality, thus:
Max π = f(x1, x2), subject to g(x1, x2) + s = r
x1, x2, s ≥ 0

Step 1: Form the Lagrangian function

L = f(x1, x2) + λ[r - g(x1, x2) - s]

Step 2: Take the first-order partials with respect to each of the choice variables and apply the
single-variable conditions above:

∂L/∂x1 ≤ 0, x1 ≥ 0, and x1(∂L/∂x1) = 0 ……(1)
∂L/∂x2 ≤ 0, x2 ≥ 0, and x2(∂L/∂x2) = 0 ……(2)
∂L/∂s ≤ 0, s ≥ 0, and s(∂L/∂s) = 0 ……(3)
∂L/∂λ = r - g(x1, x2) - s = 0 ……(4)

Now we can consolidate equations (3) and (4) and, in the process, eliminate the dummy
variable s.
Since ∂L/∂s = -λ, equation (3) tells us that -λ ≤ 0, s ≥ 0, and -λs = 0, or equivalently
λ ≥ 0, s ≥ 0, and λs = 0 ……(5)
From equation (4) we have
s = r - g(x1, x2)
Substituting (4) into (5), we have
λ ≥ 0, r - g(x1, x2) ≥ 0, and λ[r - g(x1, x2)] = 0

- Thus, the Kuhn-Tucker (K-T) conditions for a maximization problem with an inequality
constraint are:
∂L/∂x1 = f1 - λg1 ≤ 0, x1 ≥ 0, and x1(∂L/∂x1) = 0

∂L/∂x2 = f2 - λg2 ≤ 0, x2 ≥ 0, and x2(∂L/∂x2) = 0

∂L/∂λ = r - g(x1, x2) ≥ 0, λ ≥ 0, and λ(∂L/∂λ) = 0

- The Kuhn-Tucker (K-T) conditions for a minimization problem with an inequality constraint
(g(x1, x2) ≥ r) are:
Lx1 = f1 - λg1 ≥ 0, x1 ≥ 0, and x1 · Lx1 = 0

Lx2 = f2 - λg2 ≥ 0, x2 ≥ 0, and x2 · Lx2 = 0

Lλ = r - g(x1, x2) ≤ 0, λ ≥ 0, and λ · Lλ = 0


[Students' activity: For the n-variable, m-constraint case, please read A.C. Chiang.]
Example-1
max y = 10x1 - x1² + 180x2 - x2², subject to x1 + x2 ≤ 80
x1, x2 ≥ 0
Step 1: Form the Lagrangian function
L = 10x1 - x1² + 180x2 - x2² + λ(80 - x1 - x2)
Step 2: Solve the first-order condition (F.O.C.), first trying the partials with respect to each
of the choice variables equal to zero:
Lx1 = 10 - 2x1 - λ = 0 ……(1)
Lx2 = 180 - 2x2 - λ = 0 ……(2)
Lλ = 80 - x1 - x2 = 0 ……(3)

From (1) & (2) we have x2 - x1 = 85 ……(4)

Substituting (4) into (3) gives x1 = -5/2 (not feasible), because the x's should be non-
negative.
Hence, setting x1* = 0, we have x2* = 80

and λ* = 20
Step 3: Check whether the constraints are satisfied
i) Non-negativity constraints: x1* = 0
                               x2* = 80
                               λ* = 20
ii) Inequality constraint: x1 + x2 ≤ 80
                           0 + 80 ≤ 80 ✓
Step 4: Check the complementary slackness conditions
- x1(∂L/∂x1) = 0: since x1* = 0, this holds, and ∂L/∂x1 = 10 - 0 - 20 = -10 ≤ 0 ✓

- x2(∂L/∂x2) = 0 with x2* = 80 > 0 requires ∂L/∂x2 = 0:
  180 - 2x2 - λ = 180 - 2(80) - 20 = 0 ✓

- λ(∂L/∂λ) = 0 with λ* = 20 > 0 requires ∂L/∂λ = 0:
  ∂L/∂λ = 80 - x1 - x2 = 80 - 0 - 80 = 0 ✓

* Since all the K-T conditions are satisfied, the function is maximized at x1* = 0 and x2* = 80.
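The K-T check in Steps 3–4 can be scripted directly. A minimal sketch evaluating every condition at the candidate point (x1*, x2*, λ*) = (0, 80, 20):

```python
# Kuhn-Tucker check for Example 1:
#   max y = 10*x1 - x1**2 + 180*x2 - x2**2
#   s.t. x1 + x2 <= 80, x1, x2 >= 0
x1, x2, lam = 0.0, 80.0, 20.0

L_x1 = 10 - 2 * x1 - lam       # dL/dx1
L_x2 = 180 - 2 * x2 - lam      # dL/dx2
L_lam = 80 - x1 - x2           # dL/dλ

# marginal conditions plus complementary slackness
assert L_x1 <= 0 and x1 >= 0 and x1 * L_x1 == 0
assert L_x2 <= 0 and x2 >= 0 and x2 * L_x2 == 0
assert L_lam >= 0 and lam >= 0 and lam * L_lam == 0

y = 10 * x1 - x1**2 + 180 * x2 - x2**2
print(y)   # 8000.0
```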
Example-2

Minimize C = (x1 - 4)² + (x2 - 4)²

            2x1 + 3x2 ≥ 6
Subject to: 3x1 + 2x2 ≤ 12
            x1, x2 ≥ 0
Solution:
Step 1: Formulate the Lagrangian, writing each constraint so that its multiplier is
non-negative:
L = (x1 - 4)² + (x2 - 4)² + λ1(6 - 2x1 - 3x2) + λ2(3x1 + 2x2 - 12)

F.O.C. (First-Order Conditions)

L1 = 2(x1 - 4) - 2λ1 + 3λ2 = 0 ……(1)
L2 = 2(x2 - 4) - 3λ1 + 2λ2 = 0 ……(2)
Lλ1 = 6 - 2x1 - 3x2 ≤ 0, λ1 ≥ 0, λ1 · Lλ1 = 0 ……(3)
Lλ2 = 3x1 + 2x2 - 12 ≤ 0, λ2 ≥ 0, λ2 · Lλ2 = 0 ……(4)

To solve such a multivariable nonlinear programming problem, let us use the iteration (trial
and error) method.
Iteration 1: try λ1 > 0 and λ2 > 0, so both constraints are binding.
Thus we have 2x1 + 3x2 = 6 and 3x1 + 2x2 = 12.

Solving simultaneously yields x1 = 24/5 and x2 = -6/5 (not feasible, since x2 < 0).

Iteration 2: try λ1 = 0 and λ2 > 0, so only the second constraint is binding.
From equations (1) & (2), with λ1 = 0, we have:
2x1 + 3λ2 = 8 ……(5)
2x2 + 2λ2 = 8 ……(6)
and the binding constraint
3x1 + 2x2 = 12 ……(7)

Solving equations (5), (6) and (7) yields:

x1* = 28/13
x2* = 36/13
λ1* = 0
λ2* = 16/13

Check whether the constraints are satisfied:

i) Non-negativity constraints: satisfied
ii) Inequality constraints:
2(28/13) + 3(36/13) = 164/13 ≥ 6 ✓ (slack, consistent with λ1 = 0)
3(28/13) + 2(36/13) = 156/13 = 12 ✓ (binding)

Check the complementary slackness conditions: these are satisfied, since λ1 = 0 on the slack
first constraint and the second constraint holds with equality.
* Therefore, since the solution values for the four variables satisfy all the K-T conditions,
they are acceptable as the final solution.
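The candidate point of Example 2 can be verified exactly with rational arithmetic. A minimal sketch, assuming the reconstructed constraint directions (2x1 + 3x2 ≥ 6, 3x1 + 2x2 ≤ 12) and the Lagrangian form used above:

```python
from fractions import Fraction as F

# K-T check at the candidate point of Example 2:
#   min C = (x1-4)^2 + (x2-4)^2
#   s.t. 2*x1 + 3*x2 >= 6, 3*x1 + 2*x2 <= 12, x1, x2 >= 0
#   L = C + λ1*(6 - 2*x1 - 3*x2) + λ2*(3*x1 + 2*x2 - 12)
x1, x2 = F(28, 13), F(36, 13)
l1, l2 = F(0), F(16, 13)

L1 = 2 * (x1 - 4) - 2 * l1 + 3 * l2     # dL/dx1
L2 = 2 * (x2 - 4) - 3 * l1 + 2 * l2     # dL/dx2
g1_slack = 2 * x1 + 3 * x2 - 6          # must be >= 0
g2_slack = 12 - 3 * x1 - 2 * x2         # must be >= 0

assert L1 == 0 and L2 == 0                         # stationarity
assert g1_slack >= 0 and g2_slack >= 0             # feasibility
assert l1 * g1_slack == 0 and l2 * g2_slack == 0   # compl. slackness

cost = (x1 - 4) ** 2 + (x2 - 4) ** 2
print(cost)   # minimized cost, 64/13 ≈ 4.923
```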
