
APPLICATION OF NON-LINEAR PROGRAMMING AS

A MATHEMATICAL METHOD IN PROFIT MAXIMIZATION

NNAMDI AZIKIWE UNIVERSITY

BY

UKORAH CHINEDU PRINCE


REG NO: 2020524013

A PROJECT SUBMITTED TO THE DEPARTMENT OF MATHEMATICS


FACULTY OF PHYSICAL SCIENCE
NNAMDI AZIKIWE UNIVERSITY, AWKA

IN PARTIAL FULFILMENT OF THE REQUIREMENTS


FOR THE AWARD OF THE BACHELOR OF SCIENCE (BSc)
DEGREE IN MATHEMATICS

October 15, 2024


CERTIFICATION

This is to certify that UKORAH CHINEDU PRINCE, an undergraduate student in the Department of Mathematics
with registration number 2020524013, has satisfactorily fulfilled the requirements for research work for the
award of the degree of Bachelor of Science in Mathematics. To the best of my knowledge, the research conducted
and the findings presented in this project are genuine and original. This work has not been submitted, either
in whole or in part, for any diploma or degree at this university or any other institution of higher learning.

Ukorah Chinedu Prince Date

APPROVAL

This dissertation, written by Ukorah Chinedu Prince with Reg. No. 2020524013, has been examined and approved
for the award of the degree of Bachelor of Science in Mathematics of Nnamdi Azikiwe University, Awka.

Mrs. J.M. Ikpegbu (Date)


(Project Supervisor)

Dr. Chinedu G. Ezea, Ph.D. (Date)


(Head of Department)

External Examiner (Date)

Prof. E.K. Anakwuba (Date)


(Dean of Faculty)

DEDICATION

I dedicate this dissertation to my beloved family, whose unwavering support, encouragement, and love have been
my greatest strength. To my parents, who have always believed in me and provided me with the foundation
for success, and to my siblings, who have been my constant source of inspiration. I also dedicate this work to
my friends and mentors who have guided me throughout my academic journey. Your wisdom, patience, and
encouragement have been invaluable. Lastly, I dedicate this dissertation to everyone who believed in me and
motivated me to strive for excellence. Thank you for your endless support and encouragement.

ACKNOWLEDGMENT

I would like to express my deepest gratitude to everyone who has supported me throughout the journey of
completing this dissertation. Firstly, I am profoundly grateful to my supervisor, Mrs. J.M. Ikpegbu, for her invaluable guidance,
insightful feedback, and continuous support. Your expertise and encouragement have been instrumental in
shaping this research. I would also like to thank the faculty and staff of the Department of Mathematics at
Nnamdi Azikiwe University, Awka, for providing a conducive learning environment and for their unwavering
support throughout my academic journey. My heartfelt appreciation goes to my family, whose love, patience,
and sacrifices have been my driving force. To my parents, thank you for your endless encouragement and for
instilling in me the value of education. To my siblings, thank you for your constant motivation and support. I
am also thankful to my friends and colleagues who provided me with moral support, valuable suggestions, and
a sense of camaraderie. Your friendship has been a source of strength and inspiration. Lastly, I would like to
thank everyone who contributed to this dissertation, directly or indirectly. Your support and encouragement
have been crucial in helping me reach this milestone.

ABSTRACT

This project explores the application of nonlinear programming as a mathematical method in profit maximiza-
tion. Unlike linear programming, which deals with linear relationships, nonlinear programming addresses more
complex scenarios where the relationship between variables and the objective function is nonlinear. This re-
search aims to demonstrate how nonlinear programming can be effectively utilized to optimize profit in various
business contexts.
The study begins with an introduction to the fundamental concepts of nonlinear programming, including ob-
jective functions, constraints, and the distinction between linear and nonlinear functions. It then delves into
specific methodologies and algorithms used in solving nonlinear programming problems, highlighting their ad-
vantages and limitations.
A significant portion of this research is dedicated to practical applications of nonlinear programming in profit
maximization. Case studies from different industries are analyzed to illustrate how businesses can employ these
mathematical techniques to identify optimal solutions for maximizing profit. The study also considers the im-
plications of various constraints and real-world factors that may influence the outcome of these optimization
efforts.
The findings of this research indicate that nonlinear programming is a powerful tool for decision-making in
complex scenarios where traditional linear models fall short. By applying these methods, businesses can achieve
higher efficiency and profitability, overcoming challenges posed by nonlinear relationships in their operations.
Overall, this project underscores the importance of nonlinear programming in modern business practices and
provides a comprehensive framework for its application in profit maximization.

Contents

Certification i

Approval ii

Dedication iii

Acknowledgement iv

Abstract v

Table of Contents vii

1 INTRODUCTION 1
1.1 Background of study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Terminologies and Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Aim and objectives of the study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4 Limitation of study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

2 Literature Review 9

3 Nonlinear Programming 11
3.1 Definition and Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.2 Basic Formulation of Nonlinear Programming Problem . . . . . . . . . . . . . . . . . . . . . . . . 12
3.3 Some Nonlinear Programming Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.3.1 Portfolio Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.3.2 Water Resources Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.3.3 Constrained Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

4 NONLINEAR PROGRAMMING PROBLEMS 15


4.1 Karush-Kuhn-Tucker Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.2 Problem with One Inequality Constraint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.3 Problem with Two Inequality Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

5 Conclusion 25

Chapter 1

INTRODUCTION

1.1 Background of study

Nonlinear programming is a powerful mathematical technique used in optimization problems where the relation-
ship between the variables and the objective function is nonlinear. Unlike linear programming, which deals with
linear relationships, nonlinear programming handles situations where the problem is more complex and involves
nonlinear functions. Understanding nonlinear programming involves delving into mathematical optimization,
calculus, and various mathematical methods. It is a crucial tool for decision-making in real-world scenarios
where the relationship between variables is not simple and linear, enabling us to find optimal solutions that
would otherwise be challenging or impossible to derive.

Linear Programming (LP)

Linear programming (LP) is a method for optimizing a linear objective function, subject to linear equality
and inequality constraints. It is widely used in industries like manufacturing, logistics, and finance for resource
allocation, profit maximization, or cost minimization.

Characteristics of Linear Programming

• Objective Function: The objective function is linear, taking the form:

Z = c1 x1 + c2 x2 + · · · + cn xn

where Z is the total cost or profit, x1 , x2 , . . . , xn are decision variables, and c1 , c2 , . . . , cn are coefficients.

• Constraints: The constraints are also linear, such as:

a1 x1 + a2 x2 + · · · + an xn ≤ b

These constraints represent resource limits or requirements, where a1 , a2 , . . . , an are constants, and b is
the resource cap.

• Feasible Region: The constraints define a feasible region, often a polygon or polyhedron, where the
solution lies.

• Optimal Solution: According to the fundamental theorem of LP, the optimal solution is always at one
of the vertices of the feasible region.

Simple Example

Imagine you’re planning a school fundraiser and you’re selling cookies and cakes. You want to make as much
profit as possible, but there are some restrictions:

• Ingredients: You have a limited amount of flour and sugar.

• Time: You can only bake for a certain number of hours.

You want to figure out how many cookies and how many cakes you should make to get the most profit,
without running out of ingredients or time.

Steps in Linear Programming

1. Decision Variables

These are the things you’re trying to figure out. In this case, how many cookies and how many cakes to make.
Let’s say:
x = number of cookies, y = number of cakes

2. Objective Function

This is what you want to maximize or minimize. In our example, you want to maximize profit. If you make
a profit of $2 for each cookie and $5 for each cake, your total profit Z would be:

Z = 2x + 5y

Here, you’re trying to make Z as large as possible.

3. Constraints

These are the limits or rules you have to follow. For example:

• You have only 10 cups of flour; each cookie uses 1 cup and each cake uses 2 cups. This gives the
constraint:
x + 2y ≤ 10

• You have only 8 cups of sugar; each cookie uses 1 cup and each cake uses 1 cup, so:

x+y ≤8

• Also, you can't make a negative number of cookies or cakes, so:

x ≥ 0, y≥0

4. Solving the Problem

Using linear programming, you find the combination of x (cookies) and y (cakes) that gives the highest profit,
while staying within your flour and sugar limits. This can be done using a graph or mathematical methods like
the Simplex Method.
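As a sketch of how this cookie-and-cake problem could be solved in practice (assuming Python with SciPy is available; the variable names are my own), note that scipy.optimize.linprog minimizes, so the profit coefficients are negated:

```python
from scipy.optimize import linprog

# Maximize Z = 2x + 5y  <=>  minimize -2x - 5y
c = [-2, -5]                       # negated profit per cookie and per cake
A_ub = [[1, 2],                    # flour:  x + 2y <= 10
        [1, 1]]                    # sugar:  x + y  <= 8
b_ub = [10, 8]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
x_cookies, y_cakes = res.x
print(x_cookies, y_cakes, -res.fun)
```

For these numbers the best plan turns out to be 0 cookies and 5 cakes, for a profit of 25.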

Example of Linear Programming

Suppose a company produces two products, A and B, with the goal of maximizing profit. Let x1 and x2 denote
the number of units of A and B produced. The profit per unit is 40,000 for A and 30,000 for B. The
company faces the following constraints on working hours and preparation time:
x1 + x2 ≤ 12 (working hours)

2x1 + x2 ≤ 16 (preparation time)

Objective function (profit in thousands):
Z = 40x1 + 30x2

The solution can be found by graphing the constraints and evaluating the vertices of the feasible region.
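The vertex-evaluation approach can be sketched directly in Python (a minimal illustration with my own variable names, relying on the fact that an LP optimum lies at a vertex of the feasible region):

```python
# Vertices of the region x1 + x2 <= 12, 2x1 + x2 <= 16, x1, x2 >= 0:
# the origin, the axis intercepts, and the intersection of the two lines.
vertices = [(0, 0), (8, 0), (0, 12), (4, 8)]

def Z(x1, x2):
    # Objective: profit (in thousands)
    return 40 * x1 + 30 * x2

best = max(vertices, key=lambda v: Z(*v))
print(best, Z(*best))
```

Evaluating Z at each vertex gives the maximum Z = 400 at (x1, x2) = (4, 8).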

Nonlinear Programming (NLP)

Nonlinear programming (NLP) deals with optimization problems where the objective function or some
constraints are nonlinear. Unlike LP, nonlinear programming has more complex relationships and is often used
in engineering, economics, and energy systems.

Characteristics of Nonlinear Programming

• Objective Function: The objective function is nonlinear, such as:

Z = x1² + x2² + x1x2

or
Z = e^(x1) + log(x2)

• Constraints: The constraints can also be nonlinear:

x1² + x2² ≤ 100

or
sin(x1) + x2³ ≥ 5

• Feasible Region: The feasible region may not be a convex shape, making the problem harder to solve.

• Optimal Solution: Nonlinear problems can have multiple local optima, so advanced techniques like
gradient descent or genetic algorithms are often needed to find the global optimum.
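The multiple-local-optima point can be seen in a minimal gradient-descent sketch (the toy function and all names below are my own, not from the text): started from two different points, plain gradient descent settles into two different local minima, which is why global techniques are needed.

```python
import math

def f(x):
    # Toy multimodal function with several local minima
    return math.sin(3 * x) + 0.1 * x * x

def grad(x):
    # Derivative of f
    return 3 * math.cos(3 * x) + 0.2 * x

def gradient_descent(x, lr=0.01, steps=2000):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

a = gradient_descent(-2.0)  # converges to one local minimum
b = gradient_descent(3.0)   # converges to a different local minimum
print(a, b)
```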

Simple Example: Organizing a Concert

Imagine you’re organizing a school concert. You want to decide the ticket price to make the most money.
However, the relationship between the price and the number of tickets sold is not simple.
For example:

• If the price is too low, lots of people will come, but you won't make much money.

• If the price is too high, only a few people will come, and you still won't make much money.

The profit you make depends on the ticket price in a curved relationship, not a straight line. This makes it
a nonlinear problem.

Steps in Nonlinear Programming

1. Decision Variable:

Just like in LP, you have to decide certain values. Here, the decision variable is:

p = the ticket price you want to set.

2. Objective Function:

The objective is to maximize profit, but the relationship between profit and price is complex. If the profit
function is:
Profit = −p² + 10p

This function is nonlinear because of the p² term. It means that as you increase the price, the profit goes
up at first, but eventually starts going down (because fewer people will buy tickets at higher prices).

3. Constraints:

You might also have limits, like:


0 ≤ p ≤ 20

where the ticket price cannot be less than 0 or more than 20.

4. Solving the Problem:

In nonlinear programming, you can use calculus or numerical techniques to find the maximum point on
the curve to figure out the best ticket price to maximize your profit.
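The calculus above can be cross-checked numerically (a sketch assuming SciPy is available; minimize_scalar minimizes, so the profit is negated):

```python
from scipy.optimize import minimize_scalar

def profit(p):
    # Profit = -p**2 + 10p from the example above
    return -p**2 + 10 * p

# Maximize profit for 0 <= p <= 20 by minimizing its negative
res = minimize_scalar(lambda p: -profit(p), bounds=(0, 20), method="bounded")
print(res.x, profit(res.x))
```

Setting the derivative −2p + 10 to zero gives p = 5 with profit 25, which the numerical search reproduces.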

1.2 Terminologies and Definitions

Definition 1.2.1. Linear Function: A function f : R → R is a linear function if it can be expressed in the
form
y = a + bx

where a, b ∈ R. The graph of a linear function is a straight line.

Definition 1.2.2. Linear system: It is a system of equations where each equation represents a straight-line
relationship between variables. The variables are only raised to the first power, and there are no products or
non-linear functions involved.

Example:

2x + 3y = 5

x−y =1

This is a linear system because both equations represent straight lines.

Definition 1.2.3. Nonlinear System: It is a system of equations where at least one equation involves
variables that are not linear (e.g., variables are squared, multiplied together, or involve functions like sine or
exponentials).

Example:

x² + y² = 4

x+y =2

This is a nonlinear system because the first equation represents a circle (due to the squared terms).

Definition 1.2.4. Programming: Programming can be said to be coding, modeling, simulating, or presenting
the solution to a problem by representing facts, data, or information using pre-defined rules and semantics on
a computer or any other device for automation. In the term "mathematical programming," however, the word
historically means planning or scheduling rather than computer coding.

Definition 1.2.5. Optimization: Optimization is the process of finding the maximum or minimum value of
an objective function f (x) subject to a set of constraints. Formally, it is represented as:

Maximize or Minimize f (x) subject to gi (x) ≤ 0 and hj (x) = 0

where gi (x) are inequality constraints and hj (x) are equality constraints.

Definition 1.2.6. Constraints: Constraints are limitations or restrictions imposed on decision variables in
an optimization problem. Mathematically, constraints can be represented as:

gi (x) ≤ 0 and hj (x) = 0

where gi (x) are inequality constraints and hj (x) are equality constraints.

Definition 1.2.7. Profit Maximization: Profit maximization is the process of finding the highest possible
profit, which is represented by the profit function P (x). The goal is to maximize P (x), where:

P (x) = R(x) − C(x)

• R(x) is the revenue function.

• C(x) is the cost function.

The process involves finding the value of x that maximizes P (x) by solving P ′ (x) = 0 and confirming the
maximum with P ′′ (x) < 0.
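As a worked instance of this definition (the revenue and cost functions below are hypothetical, chosen only for illustration): take R(x) = 100x and C(x) = x² + 10, so P(x) = 100x − x² − 10, P′(x) = 100 − 2x, and P″(x) = −2 < 0.

```python
def P(x):
    # Hypothetical profit: R(x) = 100x, C(x) = x**2 + 10
    return 100 * x - x**2 - 10

# P'(x) = 100 - 2x = 0  gives  x = 50; P''(x) = -2 < 0 confirms a maximum
x_star = 100 / 2
print(x_star, P(x_star))
```

Neighbouring values confirm the maximum: P(49) and P(51) are both 2489, below P(50) = 2490.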

Definition 1.2.8. Minimization of Losses: Minimization of losses refers to the process of reducing or avoiding
negative financial outcomes, represented by a loss function L(x). The objective is to minimize L(x), where:

L(x) = C(x) − R(x)

• C(x) is the cost function.

• R(x) is the revenue function.

Minimization involves finding the value of x that minimizes L(x) by solving L′ (x) = 0 and confirming the
minimum with L′′ (x) > 0.

Definition 1.2.9. Objective Function: The objective function in an optimization problem is the function
P (x) that needs to be optimized (maximized or minimized). It is formally represented as:

Optimize P (x)

Definition 1.2.10. Equality Constraints: An equality constraint in an optimization problem is a condition


that specifies that two expressions must be equal. Formally, it is represented as:

hj (x) = 0

Definition 1.2.11. Inequality Constraints: An inequality constraint in an optimization problem is a con-


dition that restricts the values that decision variables can take. Formally, it is represented as:

gi (x) ≤ 0 or gi (x) ≥ 0

Definition 1.2.12. Feasible Problem: A problem is said to be feasible if there exists at least one set of
values for the decision variables that satisfies all constraints. Formally, it is represented as:

∃x such that gi (x) ≤ 0 and hj (x) = 0

where x is a vector of values for the decision variables.

Definition 1.2.13. Infeasible Problem: A problem is said to be infeasible if no set of values for the decision
variables satisfies all constraints. Formally, it is represented as:

∄x such that gi (x) ≤ 0 and hj (x) = 0

where x is a vector of values for the decision variables.

Definition 1.2.14. Unbounded Problem: A problem is said to be unbounded if it is feasible and the
objective function can be made infinitely large (in the case of maximization) or infinitely small (in the case of
minimization). Formally, it is represented as:

sup f (x) = ∞ or inf f (x) = −∞

subject to the constraints gi (x) ≤ 0 and hj (x) = 0.

1.3 Aim and objectives of the study

The overall aim of the study is to investigate and understand the application of nonlinear programming as
a mathematical method in profit maximization. Standing on this point, this study is aimed at achieving the
following objectives:

• Introducing a mathematical method for solving nonlinear programming problems, allowing us to better
understand nonlinear programming.

• Examining the implications of the method in solving nonlinear programming problems, helping us identify
the factors that contribute to solving such problems.

1.4 Limitation of study

• The time constraints of this project.

• The compilation and computational resources available to me.

• The availability of data on the application of nonlinear programming as a mathematical method in profit
maximization.

• The accuracy of the mathematical method used in solving the nonlinear programming problem.

Chapter 2

Literature Review

Although the linear programming model works fine for many situations, some problems cannot be modeled ac-
curately without including nonlinear components. One example would be the isoperimetric problem: determine
the shape of the closed plane curve having a given length and enclosing the maximum area. The solution, but
not a proof, was known by Pappus of Alexandria c. 340 CE.
The branch of mathematics known as the Calculus of Variations began with efforts to prove this solution, to-
gether with the challenge in 1696 by the Swiss mathematician Johann Bernoulli to find the curve that minimizes
the time it takes an object to slide, under only the force of gravity, between two nonvertical points (the solution
is the Brachistochrone). In addition to Johann Bernoulli, his brother Jacob Bernoulli, the German Gottfried
Wilhelm Leibniz, and the Englishman Isaac Newton all supplied correct solutions. In particular, Newton’s
approach to the solution plays a fundamental role in many nonlinear algorithms.
Other influences on the development of nonlinear programming, such as convex analysis, duality theory, and
control theory, developed largely after 1940. For problems that involve constraints as well as objective functions,
the optimality conditions discovered by the American mathematician William Karush and others in the late
1940s became an essential tool for recognizing solutions and for driving the behavior of algorithms.
An important early algorithm for solving nonlinear programs was given by the Nobel Prize-winning Norwegian
economist Ragnar Frisch in the mid-1950s. Curiously, his approach fell out of favor for some decades, reemerg-
ing as a viable and competitive approach only in the 1990s. Other important algorithmic approaches include
sequential quadratic programming, in which an approximate problem with a quadratic objective and linear
constraints is solved to obtain each search step; and penalty methods, including the “method of multipliers,” in
which points that do not satisfy the constraints incur penalty terms in the objective to discourage algorithms
from visiting them.
The Nobel Prize-winning American economist Harry M. Markowitz provided a boost for nonlinear optimization
in 1958 when he formulated the problem of finding an efficient investment portfolio as a nonlinear optimization
problem with a quadratic objective function. Nonlinear optimization techniques are now widely used in finance,
economics, manufacturing, control, weather modeling, and all branches of engineering.

Profit maximization is the act of achieving the highest possible profit. The sales level at which profit is highest
is a strategic benchmark: it is typically used to represent the best attainable situation and for planning purposes.
Profit maximization can be pursued in a variety of ways, but it usually requires a high level of specialization and
knowledge, because minimizing costs and maximizing revenues are the two key concepts that must be addressed
for it to occur.
A common related benchmark is the breakeven point, the sales level at which total revenue equals total cost. If
a company can increase sales above this point, it begins to earn a profit and creates an opportunity to grow in
the future.

Chapter 3

Nonlinear Programming

Nonlinear programming (NLP) is the process of solving an optimization problem where some of the constraints
or the objective function are nonlinear. An optimization problem involves calculating the extrema (maxima,
minima, or stationary points) of an objective function over a set of unknown real variables, conditional on
the satisfaction of a system of equalities and inequalities, collectively termed constraints. It is the sub-field of
mathematical optimization that deals with problems that are not linear.

3.1 Definition and Discussion

Let n, m, and p be positive integers, and let X be a subset of Rn (usually box-constrained). Let f , gi , and hj
be real-valued functions on X for each i ∈ {1, 2, 3, . . . , m} and each j ∈ {1, 2, 3, . . . , p}, with at least one of f ,
gi , or hj being nonlinear. A nonlinear programming problem is an optimization problem of the form:

Minimize f (x)

subject to gi (x) ≤ 0, i ∈ {1, 2, 3, . . . , m},

hj (x) = 0, j ∈ {1, 2, 3, . . . , p}, x ∈ X,

where:

• f (x) is the objective function to be minimized,

• x ∈ X is the vector of decision variables,

• gi (x) represents the inequality constraints,

• hj (x) represents the equality constraints,

• m is the number of inequality constraints,

• p is the number of equality constraints.

3.2 Basic Formulation of Nonlinear Programming Problem

A general nonlinear programming problem is to select n decision variables x1 , x2 , x3 , . . . , xn from a given feasible
region in such a way as to optimize (minimize or maximize) a given objective function f (x1 , x2 , x3 , . . . , xn ) of
the decision variables. The problem is called a nonlinear programming problem (NLP) if the objective function
is nonlinear and the feasible region is determined by nonlinear constraints. Thus, in maximization form, the
general nonlinear problem is stated as follows:

Maximize f (x1 , x2 , x3 , . . . , xn ),

subject to:

g1 (x1 , x2 , x3 , . . . , xn ) ≤ b1 ,
..
.

gm (x1 , x2 , x3 , . . . , xn ) ≤ bm ,

where each of the constraint functions g1 through gm is given. A special case is the linear program. The
obvious association for this case is:
f (x1 , x2 , x3 , . . . , xn ) = ∑_{j=1}^{n} cj xj ,

and

gi (x1 , x2 , x3 , . . . , xn ) = ∑_{j=1}^{n} aij xj , (i = 1, 2, 3, . . . , m).

Note that nonnegativity restrictions on variables can be included simply by appending the additional constraints:

gm+i (x1 , x2 , x3 , . . . , xn ) = −xi ≤ 0, (i = 1, 2, 3, . . . , n).

Sometimes these constraints will be treated explicitly, just like any other problem constraints. At other times,
it will be convenient to consider them implicitly in the same way that nonnegativity constraints are handled
implicitly in the simplex method. For notational convenience, we usually let x denote the vector of n decision
variables x1 , x2 , x3 , . . . , xn . That is, x = (x1 , x2 , x3 , . . . , xn ), and write the problem more concisely as:

Maximize f (x),

Subject to:

gi (x) ≤ bi , (i = 1, 2, 3, . . . , m).

As in linear programming, we are not restricted to this formulation. To minimize f (x), we can of course
maximize −f (x). Equality constraints h(x) = b can be written as two inequality constraints h(x) ≤ b and
−h(x) ≤ −b. In addition, if we introduce a slack variable, each inequality constraint is transformed to an
equality constraint. Thus, sometimes we will consider an alternative equality form:

Maximize f (x),

Subject to:

hi (x) = bi , (i = 1, 2, 3, . . . , m),

xj ≥ 0, (j = 1, 2, 3, . . . , n).

3.3 Some Nonlinear Programming Problems

Usually, the problem context suggests either an equality or inequality formulation (or a formulation with both
types of constraints), and we will not wish to force the problem into either form. The following three simplified
examples illustrate how nonlinear programs can arise in practice.

3.3.1 Portfolio Selection

Portfolio Optimization with Nonlinear Risk and Return

An investor is allocating wealth between two assets to maximize return while penalizing risk, where θ > 0 is a
risk-aversion parameter. Let x1 represent the proportion invested in Asset 1 and x2 the proportion invested in
Asset 2.
The objective function is:

Maximize: f (x1 , x2 ) = 12x1 + 9x2 − θ(4x1² + 2x2²)

The constraints are:


x1 + x2 ≤ 1

x1 ≥ 0, x2 ≥ 0
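A numerical sketch of this portfolio model (assuming SciPy is available, and setting θ = 2 purely for illustration, since the text leaves θ unspecified):

```python
from scipy.optimize import minimize

theta = 2.0  # assumed risk-aversion value, not given in the text

def neg_f(x):
    x1, x2 = x
    # Negated objective: f = 12x1 + 9x2 - theta*(4x1**2 + 2x2**2)
    return -(12 * x1 + 9 * x2 - theta * (4 * x1**2 + 2 * x2**2))

cons = [{"type": "ineq", "fun": lambda x: 1 - x[0] - x[1]}]  # x1 + x2 <= 1
res = minimize(neg_f, x0=[0.5, 0.5], bounds=[(0, 1), (0, 1)], constraints=cons)
print(res.x, -res.fun)
```

For θ = 2 the budget constraint binds and the optimal proportions come out at x1 = 11/24 and x2 = 13/24.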

3.3.2 Water Resources Planning

In regional water planning, sources emitting pollutants might be required to remove waste from the water
system. Let xj be the pounds of Biological Oxygen Demand (an often-used measure of pollution) to be removed
at source j. One model might be to minimize total costs to the region to meet specified pollution standards:

Minimize ∑_{j=1}^{n} dj (xj ),

Subject to:
∑_{j=1}^{n} aij xj ≥ bi , (i = 1, 2, 3, . . . , m),

0 ≤ xj ≤ uj , (j = 1, 2, 3, . . . , n),

where:

• dj (xj ) = Cost of removing xj pounds of Biological Oxygen Demand at source j,

• bi = Minimum desired improvement in water quality at point i in the system,

• aij = Quality response, at point i in the water system, caused by removing one pound of Biological Oxygen
Demand at source j,

• uj = Maximum pounds of Biological Oxygen Demand that can be removed at source j.
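To make the formulation concrete, here is a small numerical sketch (all data below, including the cost coefficients, response matrix aij, targets bi, and caps uj, are invented for illustration; quadratic removal costs dj(xj) = cj xj² are an assumed functional form):

```python
import numpy as np
from scipy.optimize import minimize

c = np.array([1.0, 2.0])        # hypothetical cost coefficients
A = np.array([[1.0, 0.5],
              [0.3, 1.0]])      # a_ij: quality response at point i per pound removed at source j
b = np.array([4.0, 3.0])        # b_i: minimum quality improvements
u = np.array([10.0, 10.0])      # u_j: removal caps

def total_cost(x):
    # Sum of d_j(x_j) = c_j * x_j**2 over sources
    return float(np.sum(c * x**2))

cons = [{"type": "ineq", "fun": lambda x: A @ x - b}]  # A x >= b
res = minimize(total_cost, x0=[1.0, 1.0],
               bounds=list(zip([0.0, 0.0], u)), constraints=cons)
print(res.x, res.fun)
```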

3.3.3 Constrained Regression

A university wishes to assess the job placements of its graduates. For simplicity, it assumes that each graduate
accepts either a government, industrial, or academic position. Let Nj denote the number of graduates in year
j (j = 1, 2, 3, . . . , n), and let Gj , Ij , and Aj denote the number entering government, industry, and academia,
respectively, in year j (Gj + Ij + Aj = Nj ).
One model being considered assumes that a given fraction of the student population joins each job category
each year. If these fractions are denoted as λ1 , λ2 , and λ3 , then the predicted number entering the job categories
in year j is given by the expressions:
Ĝj = λ1 Nj ,

Îj = λ2 Nj ,

Âj = λ3 Nj .

A reasonable performance measure of the model’s validity might be the difference between the actual number
of graduates Gj , Ij , and Aj entering the three job categories and the predicted numbers Ĝj , Îj , and Âj , as in
the least-squares estimate:

Minimize ∑_{j=1}^{n} [ (Gj − Ĝj )² + (Ij − Îj )² + (Aj − Âj )² ],

Subject to:

λ1 + λ2 + λ3 = 1,

λ1 ≥ 0, λ2 ≥ 0, λ3 ≥ 0.

This is a nonlinear program in three variables λ1 , λ2 , and λ3 .
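A runnable sketch of this constrained least-squares model (assuming SciPy is available; the enrollment numbers below are invented and generated from known fractions, so the fit should recover them):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data: N_j graduates per year, split by fractions 0.5 / 0.3 / 0.2
N = np.array([100.0, 120.0, 90.0, 110.0])
G, I, A = 0.5 * N, 0.3 * N, 0.2 * N

def sse(lam):
    # Sum of squared gaps between actual and predicted counts
    return float(np.sum((G - lam[0] * N) ** 2
                        + (I - lam[1] * N) ** 2
                        + (A - lam[2] * N) ** 2))

cons = [{"type": "eq", "fun": lambda lam: lam.sum() - 1}]  # fractions sum to 1
res = minimize(sse, x0=[1/3, 1/3, 1/3], bounds=[(0, 1)] * 3, constraints=cons)
print(res.x)
```

Because the data were generated exactly from (0.5, 0.3, 0.2), the solver recovers those fractions.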

Chapter 4

NONLINEAR PROGRAMMING
PROBLEMS

4.1 Karush-Kuhn-Tucker Method

In mathematical optimization, the Karush–Kuhn–Tucker (KKT) conditions, also known as the Kuhn–Tucker
conditions, are first derivative tests (sometimes called first-order necessary conditions) for a solution in nonlinear
programming to be optimal, provided that some regularity conditions are satisfied.
Allowing inequality constraints, the KKT approach to nonlinear programming generalizes the method of
Lagrange multipliers, which allows only equality constraints. Similar to the Lagrange approach, the constrained
maximization (minimization) problem is rewritten as a Lagrange function whose optimal point is a saddle point.
The KKT conditions are the necessary conditions for optimality for a given general constrained
nonlinear programming problem:

Maximize f (x)

subject to gi (x) ≤ bi for i = 1, 2, . . . , m and x ≥ 0

4.2 Problem with One Inequality Constraint

The solution of a nonlinear programming problem with one inequality constraint must satisfy the following
Kuhn-Tucker conditions:
Let:

• K be the optimization function,

• f be the objective function,

• g be the constraint, and

• λ be the Lagrangian multiplier.

Then, let K = f − λ · g.
We find x1 , x2 , and λ by solving the equations:

∂K/∂x1 = 0, ∂K/∂x2 = 0, ∂K/∂λ = 0

Hence, the Karush-Kuhn-Tucker (KKT) conditions are:

Kx1 = fx1 − λ · gx1 = 0

Kx2 = fx2 − λ · gx2 = 0

λ·g =0

g≤0

λ≥0 (for Zmax ) and λ ≤ 0 (for Zmin )

x1 , x2 ≥ 0

Note: in condition 3, either λ = 0 or g = 0; each case is solved separately and then checked against all of the
above conditions.

Example 1. Solve the nonlinear programming problem with one inequality constraint using the Karush-Kuhn-
Tucker (KKT) method.

Maximize Z = 2x1² − 7x2² + 12x1x2

subject to 2x1 + 5x2 ≤ 98, x1 , x2 ≥ 0

Solution:
Let f = 2x1² − 7x2² + 12x1x2 and g = 2x1 + 5x2 − 98. Recall that K = f − λ · g.
Then,
K = 2x1² − 7x2² + 12x1x2 − λ(2x1 + 5x2 − 98)

We form the Kuhn-Tucker conditions. Differentiating K with respect to x1:

∂K/∂x1 = 4x1 + 12x2 − 2λ = 0 ⇒ 2x1 + 6x2 − λ = 0 ——(1)

Differentiate K with respect to x2 :

∂K/∂x2 = −14x2 + 12x1 − 5λ = 0 ⇒ 12x1 − 14x2 − 5λ = 0 ——(2)

Then we get:
λ · g = λ · (2x1 + 5x2 − 98) = 0 ——(3)

The constraint:
g≤0 ⇒ (2x1 + 5x2 − 98) ≤ 0 ——(4)

Finally,
λ≥0 (for Zmax ) and x1 , x2 ≥ 0

Now, from equation (3):


λ · g = λ · (2x1 + 5x2 − 98) = 0 ——(3)

Case 1: If λ = 0
Recall equation (1) and equation (2):

2x1 + 6x2 − λ = 0 ——(1)

12x1 − 14x2 − 5λ = 0 ——(2)

Setting λ = 0 in equations (1) and (2), we have:

2x1 + 6x2 = 0 ——(7)

12x1 − 14x2 = 0 ——(8)

From equation (7), making x1 the subject of the formula:

x1 = −3x2 ——(9)

Substitute equation (9) into equation (8):

12(−3x2 ) − 14x2 = 0

−36x2 − 14x2 = 0

−50x2 = 0

x2 = 0

Substitute x2 = 0 into equation (9):

x1 = −3(0) ⇒ x1 = 0

Hence, x1 = 0, x2 = 0.
Therefore, from the given problem when x1 = 0, x2 = 0:

Z = 2x1² − 7x2² + 12x1x2

Z=0

Hence, this case yields Z = 0, which is clearly not optimal. Since the assumption that λ = 0 does not lead to
the maximum, we REJECT these values.
Case 2: If 2x1 + 5x2 − 98 = 0
From equation (1):
2x1 + 6x2 − λ = 0

Multiply equation (1) through by 5:

10x1 + 30x2 − 5λ = 0

Now, subtract equation (2) from this:

10x1 + 30x2 − 5λ = 0

12x1 − 14x2 − 5λ = 0

−2x1 + 44x2 = 0 ——(3)

Now, solve the active constraint 2x1 + 5x2 = 98 together with equation (3):

2x1 + 5x2 = 98

−2x1 + 44x2 = 0

Adding: 49x2 = 98 ⇒ x2 = 2

Substitute x2 = 2 into equation (3):


−2x1 + 44(2) = 0

−2x1 = −88 ⇒ x1 = 44

Hence, x1 = 44 and x2 = 2.
From equation (1):
2(44) + 6(2) = λ

88 + 12 = λ

λ = 100

The constraint g ≤ 0 is satisfied with equality (the constraint is active):
2(44) + 5(2) − 98 = 0

Hence,
Z = 2(44)² − 7(2)² + 12(44)(2)

This gives:
Z = 4900

So, Zmax = 4900 at x1 = 44 and x2 = 2.
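The KKT point found above can be cross-checked numerically. The following Python snippet (an added verification sketch, not part of the original worked solution) evaluates Z over a fine grid of the feasible region and confirms that no feasible point beats the candidate (x1, x2) = (44, 2). The check is worthwhile here because the objective is not concave (its Hessian [[4, 12], [12, −14]] is indefinite), so the KKT conditions alone are necessary but not sufficient for a maximum.

```python
import numpy as np

def Z(x1, x2):
    # Objective of the example above: Z = 2*x1^2 - 7*x2^2 + 12*x1*x2
    return 2 * x1**2 - 7 * x2**2 + 12 * x1 * x2

# Value at the KKT candidate found analytically
z_star = Z(44, 2)
print(z_star)  # 4900

# Brute-force scan of the feasible region 2*x1 + 5*x2 <= 98, x1, x2 >= 0
# (grid step 0.05 in each variable)
x1g, x2g = np.meshgrid(np.linspace(0, 49, 981), np.linspace(0, 19.6, 393))
mask = 2 * x1g + 5 * x2g <= 98
best = Z(x1g, x2g)[mask].max()
print(best)  # close to, and never above, 4900
```

The grid scan confirms that the KKT point is the global constrained maximum, not merely a stationary point.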

4.3 Problem with Two Inequality Constraints

A nonlinear programming problem with two inequality constraints must satisfy the following Kuhn-Tucker
conditions. Let:

• K – the optimization function,

• f – the objective function,

• g1, g2 – the constraints,

• λ1, λ2 – the Lagrange multipliers.

Let:
K = f − λ1 g1 − λ2 g2

Then x1, x2, λ1, λ2 are found from the stationarity conditions

∂K/∂x1 = 0, ∂K/∂x2 = 0

together with the complementary-slackness conditions λ1 g1 = 0 and λ2 g2 = 0.

The Karush-Kuhn-Tucker (KKT) conditions are:

Kx1 = fx1 − λ1 (g1)x1 − λ2 (g2)x1 = 0

Kx2 = fx2 − λ1 (g1)x2 − λ2 (g2)x2 = 0

λ1 g1 = 0, λ2 g2 = 0

g1 ≤ 0, g2 ≤ 0

λ1 , λ2 ≥ 0 (for maximizing Z)

x1 , x2 ≥ 0
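In practice, these conditions are applied by enumerating the complementary-slackness cases: each λi is either zero or its constraint gi is active. For a quadratic objective the stationarity equations are linear, so every case reduces to a square linear system. The sketch below is an illustrative addition (not part of the original text) that enumerates all four cases mechanically for the data of Example 4.2, keeping only the case whose solution is feasible with nonnegative multipliers.

```python
import itertools
import numpy as np

# Case enumeration for Example 4.2:
# maximize f = -2*x1^2 - 2*x2^2 + 12*x1 + 21*x2 + 2*x1*x2
# subject to g1 = x2 - 8 <= 0 and g2 = x1 + x2 - 10 <= 0.
# Unknown vector u = [x1, x2, lam1, lam2].

def solve_case(active):
    """active[i] = True enforces g_i = 0; otherwise lam_i = 0."""
    A = [[-4.0, 2.0, 0.0, -1.0],   # dK/dx1 = -4x1 + 2x2 + 12 - lam2 = 0
         [2.0, -4.0, -1.0, -1.0]]  # dK/dx2 = 2x1 - 4x2 + 21 - lam1 - lam2 = 0
    b = [-12.0, -21.0]
    A.append([0, 1, 0, 0] if active[0] else [0, 0, 1, 0])
    b.append(8.0 if active[0] else 0.0)    # g1 active: x2 = 8, else lam1 = 0
    A.append([1, 1, 0, 0] if active[1] else [0, 0, 0, 1])
    b.append(10.0 if active[1] else 0.0)   # g2 active: x1 + x2 = 10, else lam2 = 0
    return np.linalg.solve(np.array(A), np.array(b))

for case in itertools.product([False, True], repeat=2):
    x1, x2, l1, l2 = solve_case(case)
    feasible = x2 <= 8 + 1e-9 and x1 + x2 <= 10 + 1e-9 and min(x1, x2) >= -1e-9
    valid = feasible and min(l1, l2) >= -1e-9
    print(case, (x1, x2), (l1, l2), "accepted" if valid else "rejected")
```

Only the case with g2 active and λ1 = 0 is accepted, which matches the hand computation of Example 4.2 below.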

Example 4.2

Solve the following nonlinear programming problem using the KKT method.
Maximize:
Z = −2x1² − 2x2² + 12x1 + 21x2 + 2x1x2

subject to:
x2 ≤ 8

x1 + x2 ≤ 10

x1 , x2 ≥ 0

Solution

Let:
f = −2x1² − 2x2² + 12x1 + 21x2 + 2x1x2

g1 = x2 − 8, g2 = x1 + x2 − 10

The Lagrangian function K is given by:


K = f − λ1 g1 − λ2 g2

K = −2x1² − 2x2² + 12x1 + 21x2 + 2x1x2 − λ1(x2 − 8) − λ2(x1 + x2 − 10)

Differentiating K with respect to x1 and x2 :

∂K/∂x1 = −4x1 + 2x2 + 12 − λ2 = 0 (1)

∂K/∂x2 = 2x1 − 4x2 + 21 − λ1 − λ2 = 0 (2)
Now using the complementary slackness conditions:

λ1 g1 = λ1 (x2 − 8) = 0 (3)

λ2 g2 = λ2 (x1 + x2 − 10) = 0 (4)

From the constraints:


g1 ≤ 0 =⇒ (x2 − 8) ≤ 0 (5)

g2 ≤ 0 =⇒ (x1 + x2 − 10) ≤ 0 (6)

And the non-negativity conditions for the Lagrange multipliers:

λ1 , λ2 ≥ 0 (7)

Finally, non-negativity for x1 and x2 :


x1 , x2 ≥ 0 (8)

Case 1: λ1 = 0 and λ2 = 0

Substitute λ1 = 0 and λ2 = 0 into equations (1) and (2):

−4x1 + 2x2 + 12 = 0 (9)

2x1 − 4x2 + 21 = 0 (10)

From equation (9), solve for x2 :


x2 = 2x1 − 6 (11)

Substitute equation (11) into equation (10):

2x1 − 4(2x1 − 6) + 21 = 0

2x1 − 8x1 + 24 + 21 = 0

−6x1 + 45 = 0
x1 = 15/2
Substitute x1 = 15/2 into equation (11):

x2 = 2 × (15/2) − 6 = 9
Thus, x1 = 15/2 and x2 = 9. However, checking conditions (5) and (6):

g1 ≤ 0 =⇒ (9 − 8) = 1 ≰ 0

g2 ≤ 0 =⇒ (15/2 + 9 − 10) = 13/2 ≰ 0
Since g1 and g2 are not less than or equal to 0, we reject λ1 = 0 and λ2 = 0.

Case 2

If λ1 ̸= 0 and λ2 ̸= 0, we recall equations (1) and (2):

−4x1 + 2x2 + 12 − λ2 = 0 (1)

2x1 − 4x2 + 21 − λ1 − λ2 = 0 (2)

Since λ1 ̸= 0, from equation (3) we have:

λ1 g1 = λ1 (x2 − 8) = 0

⇒ x2 − 8 = 0

Thus, x2 = 8.
Again, since λ2 ̸= 0, from equation (4) we get:

λ2 g2 = λ2 (x1 + x2 − 10) = 0

⇒ x1 + x2 − 10 = 0

But x2 = 8, so:
x1 + 8 − 10 = 0

Thus, x1 = 2.
Hence, x1 = 2 and x2 = 8. We now substitute x1 and x2 into equations (1) and (2):

−4x1 + 2x2 + 12 − λ2 = 0

−4(2) + 2(8) + 12 − λ2 = 0

⇒ λ2 = 20 ≥ 0

Also,
2x1 − 4x2 + 21 − λ1 − λ2 = 0

2(2) − 4(8) + 21 − λ1 − 20 = 0

⇒ λ1 = −27 < 0

Since λ2 = 20 ≥ 0 but λ1 = −27 < 0, condition (7) is violated:

λ1, λ2 ≥ 0 (for Zmax)

Therefore, the assumption that λ1 ̸= 0 and λ2 ̸= 0 is not correct, and we reject this case.

Case 3

If λ1 ̸= 0 and λ2 = 0, we recall equation (3) and obtain:

λ1 g1 = λ1 (x2 − 8) = 0

⇒ x2 − 8 = 0

Thus, x2 = 8.
Now, recall equation (1) and substitute λ2 = 0 as well as x2 = 8:

−4x1 + 2x2 + 12 − λ2 = 0

−4x1 + 2(8) + 12 − (0) = 0

⇒ x1 = 7

Hence, x1 = 7 and x2 = 8. Now, recall equation (2) with x1 = 7, x2 = 8, and λ2 = 0:

2x1 − 4x2 + 21 − λ1 − λ2 = 0

2(7) − 4(8) + 21 − λ1 − (0) = 0

⇒ λ1 = 3 ≥ 0

But from equation (6), we see that:

g2 = (x1 + x2 − 10) ≤ 0

⇒ x1 + x2 − 10 = 5 ≰ 0

Therefore, the assumption that λ1 ̸= 0 and λ2 = 0 does not satisfy all the necessary conditions, and we reject this case.

Case 4

If λ1 = 0 and λ2 ̸= 0, we recall equation (4) and get:

λ2 g2 = λ2 (x1 + x2 − 10) = 0

⇒ x1 + x2 − 10 = 0

That is,

x1 + x2 = 10 (i)
From equation (2), with λ1 = 0, we have:

2x1 − 4x2 + 21 − λ1 − λ2 = 0

So that,
2x1 − 4x2 + 21 − λ2 = 0 (ii)

Also, equation (1) says:


−4x1 + 2x2 + 12 − λ2 = 0 (1)

Now, subtracting equation (1) from equation (ii) to eliminate λ2:

(2x1 − 4x2 + 21 − λ2) − (−4x1 + 2x2 + 12 − λ2) = 0

6x1 − 6x2 + 9 = 0

⇒ 2x1 − 2x2 = −3 (iii)

From equation (i), making x2 the subject of the formula, we have:

x2 = 10 − x1 (iv)

Substituting x2 into equation (iii):


2x1 − 2(10 − x1 ) = −3

2x1 − 20 + 2x1 = −3

4x1 = 17

x1 = 17/4

Now, substituting x1 into equation (iv):

x2 = 10 − 17/4

x2 = 23/4

Hence, x1 = 17/4 and x2 = 23/4. Recall equation (1) and substitute x1 and x2:

−4(17/4) + 2(23/4) + 12 − λ2 = 0

We see that λ2 = 13/2 ≥ 0.

Since λ2 is greater than 0 and the values satisfy all the necessary conditions when λ1 = 0 and λ2 ̸= 0, the optimal solution is x1 = 17/4 and x2 = 23/4.

Since:
Z = −2x1² − 2x2² + 12x1 + 21x2 + 2x1x2

We have:

Z = −2(17/4)² − 2(23/4)² + 12(17/4) + 21(23/4) + 2(17/4)(23/4)

Zmax = 1894/16 = 947/8
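Because the objective here is concave (its Hessian [[−4, 2], [2, −4]] is negative definite), the KKT conditions are sufficient, so this point is the global maximum. As an added sanity check (not part of the original solution), the optimal value and the multiplier can be recomputed in exact rational arithmetic so that any slip in the fraction work is caught:

```python
from fractions import Fraction

# Evaluate Z at the KKT point x1 = 17/4, x2 = 23/4 exactly.
x1, x2 = Fraction(17, 4), Fraction(23, 4)
Z = -2 * x1**2 - 2 * x2**2 + 12 * x1 + 21 * x2 + 2 * x1 * x2
print(Z)  # 947/8, i.e. 1894/16 = 118.375

# The multiplier from equation (1): lam2 = -4*x1 + 2*x2 + 12
lam2 = -4 * x1 + 2 * x2 + 12
print(lam2)  # 13/2
```

The exact computation agrees with the optimal value Zmax = 947/8 and multiplier λ2 = 13/2.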

Chapter 5

Conclusion

This study has demonstrated how nonlinear programming methods can be used to solve optimization problems. Nonlinear programming is a mathematical technique for optimizing an objective function when the objective or the constraints are nonlinear, and working with it draws on mathematical optimization, calculus, and a range of related methods, as illustrated in Examples 4.1 and 4.2.
Nonlinear constraints are mathematical expressions that involve nonlinear functions of the decision variables. They are commonly encountered in optimization problems, where the goal is to find values of the decision variables that optimize (maximize or minimize) an objective function subject to certain conditions, and they add considerable complexity compared with linear constraints.
Finally, some problems require approaches other than classical nonlinear optimization. If the objective and all constraints can be formulated as linear functions of the decision variables, one should resort to linear programming. Conversely, nonconvex, multimodal, nondifferentiable, and multi-objective problems present challenges that classical methods handle poorly; for such problems, swarm and evolutionary computing methods are usually effective.

References

• Anitescu, M. (2005). On solving mathematical programs with complementarity constraints as nonlinear programs. SIAM Journal on Optimization, 1203–1236.

• Brent, R. P. (1973). Algorithms for minimization without derivatives. Prentice Hall.

• Ciarlet, P. G. (1994). Introduction to numerical matrix analysis and optimization. Cambridge University Press. (Original work published 1989).

• Demmel, J. W. (1997). Applied numerical linear algebra. SIAM Publications.

• Dennis, J. E., & Schnabel, R. B. (1989). A view of unconstrained optimization. In P. T. Harker (Ed.), Optimization (Vol. 1, pp. 1–72). Elsevier Science Publishers.

• Strang, G. (1986). Introduction to applied mathematics. Wellesley-Cambridge Press.

• Hassell, C., & Rees, E. (1993). The index of a constrained critical point. The American Mathematical Monthly, 772–777.

• Hillier, F. S., & Lieberman, G. J. (1967). Introduction to operations research. Tata McGraw-Hill Education.

• Katherine, W., Carol, W., & Joseph, S. (2003). Profit maximization: Definition, formula, & theory. Study.com. https://www.study.com

• Morgan, P. B. (2015). An explanation of constrained optimization for economists. University of Toronto Press.

• Boyd, S., & Vandenberghe, L. (2004). Convex optimization. Cambridge University Press.

• Johnson, S. G. (2008). A brief overview of optimization problems. MIT course 18-335.

• Trefethen, L. N., & Bau III, D. (1997). Numerical linear algebra. SIAM Publications.

• Van Der Waerden, B. L. (2003). Algebra (Vol. 1, 7th ed.). Springer. (Original work published 1930).

• Vanderbei, R. J., & Shanno, D. F. (1999). An interior point algorithm for nonconvex nonlinear programming. Computational Optimization and Applications, 231–252.

• Vardi, A. (1985). A trust region algorithm for equality constrained minimization: Convergence properties and implementation. SIAM Journal on Numerical Analysis, 22, 575–591.

• Ruszczyński, A. (2006). Nonlinear optimization. Princeton University Press.

• Patil, V. P. (2022). Nonlinear programming problem — Karush-Kuhn-Tucker (KKT) [Video]. YouTube. https://www.youtube.com/themathvirtuoso
