
Multivariable and Constrained Optimization

Mathematical Economics

Vilen Lipatov

Fall 2014
Outline

I multi-variable optimization
I constrained optimization
Reading: Sydsaeter and Hammond, chapters 13-14
Unconstrained Optimization

I Objective: Find the maximum of the function z = f (x, y ).


I That is, find a pair (x ∗ , y ∗ ) such that f (x ∗ , y ∗ ) ≥ f (x, y )
holds for all pairs (x, y ).
I All pairs of numbers (x, y ) (including (x ∗ , y ∗ ) ) must be in
the domain of the function.
I For the minimum, analogously, we require f (x ∗ , y ∗ ) ≤ f (x, y ).
I In the optimization context, f (x, y ) is called the objective
function.
I Show that the function

z = f (x, y ) = −(x − 2)² − (y − 3)²

has a maximum at (x ∗ , y ∗ ) = (2, 3).
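
The claim can be checked directly: f (x, y ) ≤ 0 everywhere and f (2, 3) = 0. Below is a minimal symbolic sketch of the same check (using sympy; not part of the original slides):

    import sympy as sp

    x, y = sp.symbols("x y", real=True)
    f = -(x - 2)**2 - (y - 3)**2

    # First-order conditions: both partial derivatives vanish.
    stationary = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y])
    print(stationary)          # {x: 2, y: 3}
    print(f.subs(stationary))  # 0; since f <= 0 everywhere, (2, 3) is the maximum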


First-order conditions

I If we keep the variable y at the optimal value y ∗ and vary


only x, then the function of one variable

g (x) = f (x, y ∗ )

must have a maximum at x ∗ :


dg/dx (x∗) = 0.
I Thus we obtain the first-order conditions necessary for a
maximum:
∂f/∂x (x∗, y∗) = 0,    ∂f/∂y (x∗, y∗) = 0.
I These conditions determine the stationary points of f (x, y ).
Exercise

I Consider a firm with a production function Y = F (K , L),


where Y is the number of units produced, K is the capital
invested and L is the labor input.
I The profit function is given by
Π(K , L) = PF (K , L) − rK − wL, where P is the price of one
unit of output, r is the cost of capital (the interest rate), and
w is the wage rate.
I Find the first-order conditions for the profit maximum.
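
A hedged sketch of these FOCs in code (not from the slides): the production function is left unspecified above, so we assume, purely for illustration, the Cobb-Douglas form F (K , L) = K^(1/3) L^(1/3) together with hypothetical values P = 9, r = 1, w = 1.

    import sympy as sp

    K, L = sp.symbols("K L", positive=True)
    P, r, w = 9, 1, 1                                # hypothetical numbers
    F = K**sp.Rational(1, 3) * L**sp.Rational(1, 3)  # assumed technology
    profit = P * F - r * K - w * L

    foc_K = sp.diff(profit, K)                       # P * dF/dK - r
    foc_L = sp.diff(profit, L)                       # P * dF/dL - w
    print(foc_K, foc_L, sep="\n")

    # The FOCs say "value of the marginal product = factor price"; with these
    # numbers they reduce to 3*K**(-2/3)*L**(1/3) = 1 and
    # 3*K**(1/3)*L**(-2/3) = 1, whose solution is K = L = 27.
    print(foc_K.subs({K: 27, L: 27}), foc_L.subs({K: 27, L: 27}))   # 0 0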
Exercise

I A monopolist with total cost function C (Q) = Q² sells her


product in two different countries.
I When she wants to sell QA units of the good in country A,
she cannot charge a price higher than PA = 22 − 3QA for
each unit.
I When she wants to sell QB units of the good in country B,
she cannot charge a price higher than PB = 34 − 4QB .
I How much should the monopolist sell in the two countries in
order to maximize profits?
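
One possible route to the answer, sketched with sympy: write profit as revenue in the two countries minus total cost and solve the two first-order conditions.

    import sympy as sp

    QA, QB = sp.symbols("Q_A Q_B", nonnegative=True)
    # Profit = PA*QA + PB*QB - C(QA + QB), with C(Q) = Q^2.
    profit = (22 - 3*QA)*QA + (34 - 4*QB)*QB - (QA + QB)**2

    sol = sp.solve([sp.diff(profit, QA), sp.diff(profit, QB)], [QA, QB], dict=True)[0]
    print(sol)               # {Q_A: 2, Q_B: 3}
    print(profit.subs(sol))  # maximized profit: 73

    # Profit is a concave quadratic, so this stationary point is indeed the
    # maximum (compare the Hessian test on the following slides).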
SOC: a maximum

(0, 0) is a stationary point of f (x, y ) = −x² − y².


SOC: a minimum

(0, 0) is a stationary point of f (x, y ) = x² + y².


SOC: a saddle point

(0, 0) is a stationary point of f (x, y ) = x² − y².


Types of extrema and Hessian: a Theorem

I Suppose that the function z = f (x, y ) has a stationary point


at (x ∗ , y ∗ ).
I Consider the determinant of the Hessian matrix
det H = (∂²f/∂x²)(∂²f/∂y²) − (∂²f/∂x∂y)²,

where H is the 2×2 matrix of second-order partial derivatives, with first
row (∂²f/∂x², ∂²f/∂y∂x) and second row (∂²f/∂x∂y, ∂²f/∂y²).

I If det H < 0 at (x ∗ , y ∗ ), then (x ∗ , y ∗ ) is a saddle point.


I If det H > 0 at (x ∗ , y ∗ ), then (x ∗ , y ∗ ) is a maximum or a
minimum.
I When det H > 0, the signs of ∂²f/∂x² and ∂²f/∂y² are the
same.
I If both signs are positive, then (x ∗ , y ∗ ) is a minimum.
I If both signs are negative, then (x ∗ , y ∗ ) is a maximum.
Exercises

I Classify the stationary points of f (x, y ) = −x² − y².
I Classify the stationary points of f (x, y ) = x² + y².
I Classify the stationary points of f (x, y ) = x² − y².
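
A short sketch that runs the Hessian test of the previous slide on these three functions (each has its only stationary point at (0, 0)):

    import sympy as sp

    x, y = sp.symbols("x y", real=True)
    for f in (-x**2 - y**2, x**2 + y**2, x**2 - y**2):
        H = sp.hessian(f, (x, y))        # matrix of second partial derivatives
        d = sp.det(H)
        fxx = sp.diff(f, x, 2)
        if d < 0:
            kind = "saddle point"
        elif fxx > 0:
            kind = "minimum"
        else:
            kind = "maximum"
        print(f, "-> (0, 0) is a", kind)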
Constrained optimization

[Figure: the half-plane of points satisfying the constraint y ≥ 1]
Constrained optimization
I Objective: Find the maximum of the function z = f (x, y )
subject to the inequality constraints

g1 (x, y ) ≥ 0,

g2 (x, y ) ≥ 0,
...
gK (x, y ) ≥ 0.
I That is, find a pair (x ∗ , y ∗ ) satisfying the constraints, such
that f (x ∗ , y ∗ ) ≥ f (x, y ) holds for all pairs (x, y ) that also
satisfy the constraints.
I All pairs of numbers (x, y ) (including (x ∗ , y ∗ ) ) must be in
the domain of the function.
I For the minimum, we require f (x ∗ , y ∗ ) ≤ f (x, y ).
I Just as in the unconstrained case, f (x, y ) is called the objective
function.
Geometry of constraints

[Figure: a budget set]
Typical examples

I A consumer maximizes her utility subject to a budget


constraint.
I A producer minimizes costs subject to the constraint that a
certain amount is produced.
I Asymmetric information setting: An insurer selects an
insurance contract that maximizes profits subject to the
constraint that the insurance is valuable for the consumer
(participation constraint) and that the consumer has an
incentive to be careful (incentive compatibility constraint).
Exercise: Utility maximization

I A consumer maximizes her utility

u(x, y ) = (x + 1)(y + 1)

subject to her budget constraint

px x + py y ≤ B

and the non-negativity constraints

x ≥ 0, y ≥ 0.

I Find the optimal amounts of the goods consumed and the optimal
value of utility that the consumer attains.
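
A symbolic sketch of one way to solve this (not part of the slides), assuming an interior solution in which the budget constraint binds and both non-negativity constraints are slack (this holds whenever B ≥ |px − py|):

    import sympy as sp

    x, y, lam, px, py, B = sp.symbols("x y lambda p_x p_y B", positive=True)
    # Lagrangian with the budget constraint assumed to bind.
    Lagr = (x + 1)*(y + 1) + lam*(B - px*x - py*y)

    sol = sp.solve(
        [sp.diff(Lagr, x), sp.diff(Lagr, y), B - px*x - py*y],
        [x, y, lam], dict=True,
    )[0]
    print(sol[x])   # (B + p_y - p_x) / (2*p_x)
    print(sol[y])   # (B + p_x - p_y) / (2*p_y)
    print(sp.expand((sol[x] + 1) * (sol[y] + 1)))   # the attained utility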
Exercise: Cost minimization

I A producer in a perfectly competitive market has a production
function Y = F (K , L) = K^(1/6) L^(1/2), where Y is the number of
units produced, K is the capital invested and L is the labor input.
I She wants to minimize costs subject to producing at least Y0
units.
I The optimization problem is equivalent to

max(−rK − wL)

subject to
F (K , L) ≥ Y0 ,
K ≥ 0, L ≥ 0.
I Find the optimal amounts of inputs and the total cost of
production.
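
A numeric sketch (not from the slides), keeping the problem in the stated "maximize −rK − wL subject to F (K , L) ≥ Y0" form but plugging in the hypothetical values r = 1, w = 2, Y0 = 2 and using scipy's SLSQP solver:

    from scipy.optimize import minimize

    r, w, Y0 = 1.0, 2.0, 2.0                     # hypothetical parameter values

    def cost(z):
        K, L = z
        return r * K + w * L                     # minimizing rK + wL = maximizing -rK - wL

    constraints = [{"type": "ineq", "fun": lambda z: z[0]**(1/6) * z[1]**0.5 - Y0}]
    bounds = [(1e-9, None), (1e-9, None)]        # K >= 0, L >= 0

    res = minimize(cost, x0=[10.0, 10.0], bounds=bounds,
                   constraints=constraints, method="SLSQP")
    print(res.x, res.fun)                        # optimal (K, L) and the total cost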
Exercise: swimming

I A swimmer who is currently at the coordinates (a, b) wants to


swim along the shortest route to the square island with corner
points (−1, 1), (1, −1), (−1, −1), (1, 1).
I Instead of minimizing the distance, we can maximize the
negative of the square of the distance

−(x − a)² − (y − b)²,

subject to the constraints

−1 ≤ x ≤ 1,

−1 ≤ y ≤ 1.
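
Because all four constraints are simple bounds, the solution has a closed form: the nearest point of the island is obtained by clamping each coordinate of (a, b) to [−1, 1]. A minimal sketch (the sample positions are hypothetical):

    def nearest_point_on_island(a, b):
        """Return the point of the square [-1, 1] x [-1, 1] closest to (a, b)."""
        x = max(-1.0, min(1.0, a))
        y = max(-1.0, min(1.0, b))
        return x, y

    print(nearest_point_on_island(3.0, 0.5))    # (1.0, 0.5): only x <= 1 binds
    print(nearest_point_on_island(2.0, -4.0))   # (1.0, -1.0): two constraints bind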
[Figures: the square island and the swimmer's position, illustrating the shortest-route problem]
Binding constraints

I A constraint is binding at the optimum if it holds with


equality in the optimum.
I In the swimmer example pictured above, only two of the four
constraints are binding. All non-binding constraints can be ignored,
because the optimal solution does not change if we leave them out.
The Lagrangian approach

I The Lagrangian approach transforms a constrained optimization
problem into
I an unconstrained optimization problem and
I a pricing problem.
I The new function to be optimized is called the Lagrangian.
I For each constraint, a shadow price is introduced, called a
Lagrange multiplier.
I In the new unconstrained optimization problem a constraint
can be violated, but only at a cost.
I The pricing problem is to find shadow prices for the
constraints such that the solutions to the new and the original
optimization problem are identical.
I L(x, y ) = f (x, y ) + λ1 g1 (x, y ) + λ2 g2 (x, y ) + ... + λK gK (x, y ).
The method
1. Make an informed guess about which constraints are binding
at the optimum.
2. Suppose there are k∗ such constraints. Set the Lagrange
multipliers of all other K − k∗ constraints to zero, i.e. ignore
these constraints.
3. Solve the first-order conditions

∂L/∂x = 0, ∂L/∂y = 0,

together with the conditions that the k ∗ constraints hold with


equality. Note that we obtain a system of k ∗ + 2 equations for
the same number of unknowns.
4. Check whether the solution is indeed an unconstrained
optimum of the Lagrangian. This may be difficult.
5. Check that the Lagrange multipliers are all non-negative and
that the solution (x ∗ , y ∗ ) satisfies all the constraints.
6. If the check in step 4 or 5 fails, start again at step 1 with a new guess.
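
A sketch of steps 1-5 for the swimmer problem with a hypothetical position (a, b) = (3, 1/2): we guess (step 1) that only the constraint 1 − x ≥ 0 binds and set the other three multipliers to zero (step 2).

    import sympy as sp

    x, y, lam = sp.symbols("x y lambda", real=True)
    a, b = 3, sp.Rational(1, 2)                  # hypothetical swimmer position
    f = -(x - a)**2 - (y - b)**2                 # objective
    g = 1 - x                                    # the guessed binding constraint
    Lagr = f + lam * g

    # Step 3: the two FOCs plus the binding constraint, solved for (x, y, lambda).
    eqs = [sp.diff(Lagr, x), sp.diff(Lagr, y), g]
    sol = sp.solve(eqs, [x, y, lam], dict=True)[0]
    print(sol)                                   # {x: 1, y: 1/2, lambda: 4}

    # Step 4: Lagr is concave in (x, y), so (1, 1/2) is its unconstrained maximum.
    # Step 5: lambda = 4 >= 0 and (1, 1/2) satisfies the remaining constraints,
    # so the guess is confirmed and (1, 1/2) solves the constrained problem.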
The constrained optimum theorem

I Suppose we are given numbers λ1 , λ2 , ...λK and a pair of


numbers (x ∗ , y ∗ ) such that
I the Lagrange multipliers λ1 , λ2 , ..., λK are nonnegative;
I (x ∗ , y ∗ ) satisfies all the constraints, i.e.
gk (x ∗ , y ∗ ) ≥ 0, ∀k = 1, 2, ..., K ;
I (x ∗ , y ∗ ) is an unconstrained maximum of the Lagrangian L;
I The complementary slackness conditions λk gk (x ∗ , y ∗ ) = 0 are
satisfied, i.e. either the kth Lagrange multiplier is zero or the
kth constraint binds.
I Then (x ∗ , y ∗ ) is a maximum for the constrained maximization
problem.
Constrained optimum: comments

I The Lagrangian approach does not immediately tell us which
constraints are binding at the optimum. We have to start with
an informed guess using all problem-specific information.
I We write down the Lagrangian assuming that only certain
constraints bind.
I We solve the system of simultaneous equations consisting of
the FOCs and the binding constraints.
I We check if the solution satisfies the other constraints and
that Lagrange multipliers are nonnegative.
I We check if the solution found is an unconstrained optimum
of the Lagrangian.
Single binding constraint

I Suppose only constraint g1 is binding.


I The FOCs are then
∂f/∂x = −λ1 ∂g1/∂x,
∂f/∂y = −λ1 ∂g1/∂y.
I We can get rid of λ1 to obtain the following system to solve:

(∂f/∂x) / (∂f/∂y) = (∂g1/∂x) / (∂g1/∂y),

g1 (x, y ) = 0.
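
As an illustration (a sketch, not part of the slides), this two-equation system can be applied to the earlier utility-maximization exercise, where the single binding constraint is g1 (x, y ) = B − px x − py y:

    import sympy as sp

    x, y, px, py, B = sp.symbols("x y p_x p_y B", positive=True)
    f = (x + 1) * (y + 1)                        # objective (utility)
    g1 = B - px * x - py * y                     # binding budget constraint

    tangency = sp.Eq(sp.diff(f, x) / sp.diff(f, y), sp.diff(g1, x) / sp.diff(g1, y))
    sol = sp.solve([tangency, sp.Eq(g1, 0)], [x, y], dict=True)[0]
    print(sol)   # x = (B + p_y - p_x)/(2*p_x),  y = (B + p_x - p_y)/(2*p_y)

    # This reproduces the demands found with the full Lagrangian earlier.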
