
Optimality conditions

Bang-Shien Chen∗

When it comes to optimality conditions, we often discuss well-known conditions such as the
Karush-Kuhn-Tucker (KKT) conditions and Fritz John (FJ) conditions. However, it is helpful
to start with a simpler case, unconstrained optimization problems, and derive necessary and
sufficient conditions for optimality in that setting.

1 Optimality Conditions for Unconstrained Optimization


In this section, we consider the optimization problem where X ⊆ Rn :

min_x f (x), x ∈ X.      (1.1)

In addition, we often make some basic assumptions:


• f is continuously differentiable (for first-order condition).
• f is twice continuously differentiable (for second-order condition).
The definitions of local and global minima are as follows:
• x∗ is a local minimum if there exists an ε > 0 such that f (x∗) ≤ f (x) for all x ∈ X with ∥x − x∗∥ < ε.
• x∗ is a global minimum if f (x∗) ≤ f (x) for all x ∈ X.
We say a local or global minimum is strict if the corresponding inequality is strict for x ̸= x∗.

1.1 Necessary Conditions for Optimality


If the objective function is differentiable, we can use gradients and Taylor approximations
to compare the function value at x∗ with the values at its close neighbors.

Theorem 1.1: First-order Necessary Condition

Let x∗ be a local minimum of Problem 1.1, and suppose that f is continuously differentiable
at x∗ . Then
∇f (x∗ ) = 0.

Proof. Given some direction d ∈ Rn and scalar α, we have the first-order approximation
f (x∗ + αd) ≈ f (x∗) + ∇f (x∗)⊤ ((x∗ + αd) − x∗) = f (x∗) + α∇f (x∗)⊤ d.
Since x∗ is a local minimum, for sufficiently small |α| we have the inequality
f (x∗) ≤ f (x∗) + α∇f (x∗)⊤ d.
Dividing by α > 0 gives d⊤ ∇f (x∗) ≥ 0 for all d, and dividing by α < 0 gives d⊤ ∇f (x∗) ≤ 0
for all d. Together, d⊤ ∇f (x∗) = 0 for all d, thus ∇f (x∗) = 0.

∗ https://dgbshien.com/

Similar to the first-order condition, we can also derive a second-order condition using the
second-order approximation.

Theorem 1.2: Second-order Necessary Condition

Let x∗ be a local minimum of Problem 1.1, and suppose that f is twice continuously
differentiable at x∗ . Then
∇2 f (x∗ ) ⪰ 0.

Proof. Given some direction d ∈ Rn and scalar α, we have the second-order approximation

f (x∗ + αd) ≈ f (x∗) + α∇f (x∗)⊤ d + (α2/2) d⊤ ∇2 f (x∗) d.

Since x∗ is a local minimum, we have the inequality

f (x∗) ≤ f (x∗) + α∇f (x∗)⊤ d + (α2/2) d⊤ ∇2 f (x∗) d.

By Theorem 1.1, ∇f (x∗) = 0, and since α2 > 0, we have d⊤ ∇2 f (x∗) d ≥ 0 for all d, i.e., ∇2 f (x∗) ⪰ 0.
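As a quick numerical sketch of Theorems 1.1 and 1.2, the snippet below checks both necessary conditions at a stationary point of a hypothetical smooth test function (the function, the point, and the tolerance are illustrative choices only):

import numpy as np

# Hypothetical smooth test function f(x) = x1^2 + x2^2 + exp(x1) - x1,
# whose stationary point is x* = (0, 0) since 2*0 + exp(0) - 1 = 0.
def grad(x):
    return np.array([2 * x[0] + np.exp(x[0]) - 1.0, 2 * x[1]])

def hess(x):
    return np.array([[2.0 + np.exp(x[0]), 0.0],
                     [0.0, 2.0]])

x_star = np.array([0.0, 0.0])

# First-order necessary condition: the gradient vanishes at x*.
print("||grad f(x*)|| =", np.linalg.norm(grad(x_star)))          # ~0

# Second-order necessary condition: the Hessian is positive semidefinite at x*.
eigs = np.linalg.eigvalsh(hess(x_star))
print("Hessian eigenvalues:", eigs, "PSD:", bool(np.all(eigs >= 0)))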

1.2 Sufficient Conditions for Optimality


Theorem 1.3: Second-order Sufficient Condition

Let f : Rn → R be twice continuously differentiable. Suppose that a vector x∗ satisfies the
conditions
∇f (x∗) = 0, ∇2 f (x∗) ≻ 0.
Then x∗ is a strict local minimum of f . In particular, there exist scalars γ > 0 and ε > 0
such that

f (x) ≥ f (x∗) + (γ/2) ∥x − x∗∥2 for all x with ∥x − x∗∥ ≤ ε.

Proof. Given some direction d ∈ Rn , by the second-order approximation and the first condition,

f (x∗ + d) = f (x∗) + ∇f (x∗)⊤ d + (1/2) d⊤ ∇2 f (x∗) d + o(∥d∥2)
           = f (x∗) + (1/2) d⊤ ∇2 f (x∗) d + o(∥d∥2).

Since ∇2 f (x∗) is real symmetric, we have the spectral decomposition ∇2 f (x∗) = QΛQ⊤ , where
Λ is a diagonal matrix of eigenvalues and Q is orthogonal whose columns are the corresponding
eigenvectors. Let z = Q⊤ d; then ∥z∥ = ∥Q⊤ d∥ = ∥d∥ since Q is orthogonal, and

d⊤ ∇2 f (x∗) d = d⊤ QΛQ⊤ d = z⊤ Λz = λ1 z12 + · · · + λn zn2 ≥ λmin (z12 + · · · + zn2) = λmin ∥z∥2 = λmin ∥d∥2 .

Substituting this inequality back into the second-order approximation, we have

f (x∗ + d) = f (x∗) + (1/2) d⊤ ∇2 f (x∗) d + o(∥d∥2)
           ≥ f (x∗) + (λmin /2) ∥d∥2 + o(∥d∥2)
           = f (x∗) + ( λmin /2 + o(∥d∥2)/∥d∥2 ) ∥d∥2 .

Since ∇2 f (x∗) ≻ 0 gives λmin > 0 and o(∥d∥2)/∥d∥2 → 0 as ∥d∥ → 0, for any γ with 0 < γ < λmin
there exists ε > 0 such that f (x∗ + d) ≥ f (x∗) + (γ/2)∥d∥2 whenever ∥d∥ = ∥x − x∗∥ ≤ ε, which
completes the proof.
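Continuing the same hypothetical test function from the earlier sketch, the snippet below checks the quadratic growth bound of Theorem 1.3 for small random perturbations; the choice of γ, the perturbation scale, and the sample size are arbitrary illustrative values:

import numpy as np

# Hypothetical example f(x) = x1^2 + x2^2 + exp(x1) - x1 with x* = (0, 0):
# its Hessian at x* has eigenvalues {3, 2}, so it is positive definite and Theorem 1.3
# predicts f(x* + d) >= f(x*) + (gamma/2) * ||d||^2 near x*.
def f(x):
    return x[0] ** 2 + x[1] ** 2 + np.exp(x[0]) - x[0]

x_star = np.array([0.0, 0.0])
lam_min = 2.0                    # smallest Hessian eigenvalue at x*
gamma = 0.9 * lam_min            # any gamma slightly below lam_min works locally

rng = np.random.default_rng(0)
for _ in range(5):
    d = 1e-2 * rng.standard_normal(2)          # small random direction
    lhs = f(x_star + d) - f(x_star)
    rhs = 0.5 * gamma * np.dot(d, d)
    print(f"growth {lhs:.3e} >= bound {rhs:.3e}: {lhs >= rhs}")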

1.3 Existence of Optimal Solutions
In general, a minimum need not exist. For example, f (x) = x and f (x) = ex have no global
minimum. To derive existence, we first review some definitions. Let X ⊆ Rn ,

• f is continuous at x if limy→x f (y) = f (x).

• f is continuous on X if f is continuous at every x ∈ X.

• f : X → R is lower semi-continuous at x ∈ X if f (x) ≤ lim inf k→∞ f (xk) for every
sequence {xk} of X converging to x.

• f : X → R ∪ {±∞} (extended-valued function) is coercive if lim∥x∥→∞ f (x) = ∞.

Proposition 1.4: Weierstrass’ Theorem

Let X ⊆ Rn be nonempty and let f : Rn → R be lower semi-continuous at all points of X.
Suppose that one of the following conditions holds:

1. X is compact (closed and bounded).

2. X is closed and f is coercive.

3. There exists a scalar γ such that the level set {x ∈ X | f (x) ≤ γ} is nonempty and
compact.

Then there exists a vector x∗ ∈ X such that f (x∗ ) = inf x∈X f (x), i.e., x∗ is a global
minimum.
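As a small computational sketch of condition 2 (a closed set and a coercive objective), the script below minimizes a hypothetical coercive function over R2 with a standard solver; the function and the starting point are illustrative choices only:

import numpy as np
from scipy.optimize import minimize

# Hypothetical coercive function on the closed set X = R^2:
# f(x) = ||x||^2 + exp(-x1) tends to +infinity as ||x|| -> infinity,
# so by Proposition 1.4 (condition 2) a global minimum exists.
f = lambda x: x[0] ** 2 + x[1] ** 2 + np.exp(-x[0])

res = minimize(f, x0=np.array([5.0, -3.0]))
print("minimizer:", res.x, "minimum value:", res.fun)
# In contrast, f(x) = exp(x) on R is not coercive and attains no minimum.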

When f is continuous and X is compact, this reduces to the well-known Weierstrass extreme
value theorem, which states that every continuous function on a nonempty compact set attains
its extreme values on that set, including a global minimum. We next give an example of using
optimality conditions to prove a well-known inequality.

Example 1.5: Arithmetic-Geometric Mean Inequality

Show the AM-GM inequality: for x1 , . . . , xn > 0,

ⁿ√(x1 x2 · · · xn) ≤ (x1 + x2 + · · · + xn)/n.

Let yi = ln(xi) for all i; we can rewrite the inequality as

ey1 + ey2 + · · · + eyn ≥ n e(y1 +y2 +···+yn )/n .

Let y1 + y2 + · · · + yn = s and consider the optimization problem

min_y   ey1 + ey2 + · · · + eyn
s.t.    y1 + y2 + · · · + yn = s.

We aim to show the optimal value is nes/n . We rewrite an equivalent unconstrained problem

min_y   f (y1 , y2 , . . . , yn−1 ) = ey1 + · · · + eyn−1 + es−y1 −···−yn−1 .
Note that since f is coercive, by Proposition 1.4, there exists a global minimum. Let
(y1∗ , y2∗ , . . . , yn−1∗ ) be the global minimum; by Theorem 1.1, we have

∂f/∂yi = eyi − es−y1 −···−yn−1 = 0   for i = 1, . . . , n − 1,

which implies yi∗ = s − y1∗ − · · · − yn−1∗ for i = 1, . . . , n − 1. The system has only one solution,
yi∗ = s/n for all i, which is therefore the unique global minimum, with optimal value
(n − 1)es/n + es/n = nes/n . This proves the inequality.
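A quick numerical sanity check of this example can be done by minimizing the reduced unconstrained objective directly; the values n = 4 and s = 2, the solver, and the starting point are arbitrary illustrative choices:

import numpy as np
from scipy.optimize import minimize

# Check Example 1.5 for hypothetical values n = 4, s = 2:
# minimize f(y1,...,y_{n-1}) = sum_i exp(y_i) + exp(s - sum_i y_i).
n, s = 4, 2.0

def f(y):
    return np.sum(np.exp(y)) + np.exp(s - np.sum(y))

res = minimize(f, x0=np.zeros(n - 1))
print("minimizer:", res.x)                  # ~ s/n = 0.5 in every coordinate
print("minimum:", res.fun, "vs n*exp(s/n) =", n * np.exp(s / n))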

2 Lagrange Duality
In the previous section, we derived optimality conditions for unconstrained problems. We
will next show the optimality conditions for constrained problems, such as the Fritz John con-
ditions and Karush-Kuhn-Tucker conditions. However, since it is easier to understand these
conditions with a basic knowledge of Lagrange duality, we first give a brief introduction to
Lagrange duality.

2.1 Lagrange Dual Problem


Consider the optimization problem

min_x   f0 (x)
s.t.    fi (x) ≤ 0,  i = 1, . . . , m      (2.1)
        hi (x) = 0,  i = 1, . . . , l,

with domain D = (∩_{i=0}^{m} dom fi) ∩ (∩_{i=1}^{l} dom hi), and denote the optimal value by p∗ . The
idea of Lagrangian duality is to leverage the constraints in Problem (2.1) into the objective
function, thereby transforming a constrained problem into an unconstrained one. We define
the Lagrangian L : Rn × Rm × Rl → R by

L(x, λ, ν) = f0 (x) + ∑_{i=1}^{m} λi fi (x) + ∑_{i=1}^{l} νi hi (x),

where λ and ν are called the Lagrange multipliers or dual variables. We then define the (La-
grange) dual function g : Rm × Rl → R by

g(λ, ν) = inf_{x∈D} L(x, λ, ν) = inf_{x∈D} ( f0 (x) + ∑_{i=1}^{m} λi fi (x) + ∑_{i=1}^{l} νi hi (x) ).

Note that the dual function is the point-wise infimum of a family of affine functions of (λ, ν),
and is therefore always concave even if Problem (2.1) is not convex. Now suppose that x∗ is an
optimal solution. For any λ ≥ 0 and any ν, we have

L(x∗ , λ, ν) = f0 (x∗) + ∑_{i=1}^{m} λi fi (x∗) + ∑_{i=1}^{l} νi hi (x∗) ≤ f0 (x∗),

since λi fi (x∗) ≤ 0 and hi (x∗) = 0, which implies

g(λ, ν) = inf_{x∈D} L(x, λ, ν) ≤ L(x∗ , λ, ν) ≤ f0 (x∗) = p∗ .      (2.2)

That is, the optimal value p∗ is an upper bound of the dual function g, which is also the
main idea of weak duality. Also, the optimal value p∗ is a lower bound of the objective function
f0 over the feasible set. Figure 2.1 illustrates this property.

Example 2.1

Consider the optimization problem

min_x   x3 + 2x2 − x + 1
s.t.    x2 ≤ 1  ⇐⇒  −1 ≤ x ≤ 1.

The dual function is

g(λ) = inf_x ( x3 + 2x2 − x + 1 + λ(x2 − 1) ).

We can observe from Figure 2.1 that the optimal value is a lower bound of the objective
function over the feasible set, and an upper bound of the dual function.

[Figure: left panel plots the objective function f0 (x) together with the optimal value; right panel plots the dual function g(λ) together with the optimal value.]

Figure 2.1: Lower and upper bound in Example 2.1.

Since the optimal value is an upper bound of the dual function, the (Lagrange) dual problem
aims to maximize the dual function:

max_{λ,ν}   g(λ, ν)
s.t.        λ ⪰ 0.      (2.3)

The dual problem is always convex whether the primal problem is convex or not.

2.2 Weak Duality and Strong Duality


Denote d∗ as the optimal value of the dual problem. Since the optimal value of the primal
problem p∗ is an upper bound of the dual function, we have the weak duality:

d∗ ≤ p∗ .

We define p∗ − d∗ as the duality gap. We say that strong duality holds if

d∗ = p∗ .

While strong duality does not hold in general, there are results that establish conditions
on the problem under which strong duality holds. These conditions are called constraint
qualifications. Here we give a simple and widely used constraint qualification in the context of
convex optimization, Slater's condition: there exists an x ∈ relint D such that fi (x) < 0
for all i = 1, . . . , m and hi (x) = 0 for all i = 1, . . . , l, i.e., there exists a strictly feasible point in
the relative interior of the feasible set.
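As a small sketch of weak and strong duality on a hypothetical one-dimensional convex problem (minimize x2 subject to 1 − x ≤ 0, chosen here only because its dual function has a closed form), the script below checks g(λ) ≤ p∗ on a grid and maximizes g numerically; since a strictly feasible point exists, Slater's condition suggests d∗ = p∗:

import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical convex problem: minimize x^2 subject to f1(x) = 1 - x <= 0.
# Primal optimum: p* = 1 at x* = 1.  The Lagrangian is L(x, lam) = x^2 + lam*(1 - x),
# so g(lam) = inf_x L(x, lam) = lam - lam^2 / 4 (attained at x = lam/2).
def g(lam):
    return lam - lam ** 2 / 4.0

p_star = 1.0
lams = np.linspace(0.0, 5.0, 51)
print("weak duality g(lam) <= p* on the grid:", bool(np.all(g(lams) <= p_star + 1e-12)))

# Slater's condition holds (x = 2 is strictly feasible), so strong duality is expected.
res = minimize_scalar(lambda lam: -g(lam), bounds=(0.0, 10.0), method="bounded")
print("d* =", -res.fun, " p* =", p_star)     # d* = 1, attained at lam* = 2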

2.3 Complementary Slackness
Suppose that the primal optimal value p∗ and dual optimal value d∗ are attained, where x∗
is the primal optimal solution and (λ∗ , ν ∗ ) is the dual optimal solution. This means that

g(λ∗ , ν ∗ ) ≤ f0 (x∗) + ∑_{i=1}^{m} λ∗i fi (x∗) + ∑_{i=1}^{l} νi∗ hi (x∗) ≤ f0 (x∗).

The first inequality is because the infimum is a lower bound, and the second inequality is because
x∗ is feasible, so that fi (x∗) ≤ 0 for i = 1, . . . , m and hi (x∗) = 0 for i = 1, . . . , l. Now suppose
that strong duality holds, i.e., g(λ∗ , ν ∗ ) = f0 (x∗). Then the two inequalities in the chain hold
with equality. This implies the complementary slackness condition:

λ∗i fi (x∗) = 0,  i = 1, . . . , m.      (2.4)
We say an inequality constraint is binding or active if fi (x∗) = 0. An important consequence of
complementary slackness is that the i-th inequality constraint is binding whenever λ∗i is nonzero,
and λ∗i is zero whenever the i-th inequality constraint is non-binding:
• λ∗i > 0 =⇒ fi (x∗ ) = 0.
• fi (x∗ ) < 0 =⇒ λ∗i = 0.
Roughly speaking, only the constraints that are binding at the optimal point have a direct
impact on the optimal solution through their Lagrange multipliers, while constraints that are
non-binding do not affect the solution, as their Lagrange multipliers are zero. Complementary
slackness allows us to identify which constraints are active and further simplify an optimization
problem, or identify which constraints are critical and provide insights for decision-making.
Another result of complementary slackness is that since

f0 (x∗) = g(λ∗ , ν ∗ ) = inf_x L(x, λ∗ , ν ∗ ) ≤ L(x∗ , λ∗ , ν ∗ ) = f0 (x∗) + ∑_{i=1}^{m} λ∗i fi (x∗) + ∑_{i=1}^{l} νi∗ hi (x∗) = f0 (x∗),

where the last equality uses feasibility and (2.4), the chain holds with equality and x∗ minimizes
L(x, λ∗ , ν ∗ ). Therefore, by Theorem 1.1, it follows that

∇f0 (x∗) + ∑_{i=1}^{m} λ∗i ∇fi (x∗) + ∑_{i=1}^{l} νi∗ ∇hi (x∗) = 0.      (2.5)

Equation (2.5) is also known as the stationary condition.
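The conditions (2.4) and (2.5) are easy to verify numerically for a candidate primal-dual pair. The sketch below is a generic checker; the helper name, the tolerance, and the toy problem in the usage example are hypothetical illustrative choices:

import numpy as np

def check_kkt_equalities(x, lam, nu, grad_f0, grads_f, grads_h, fs, tol=1e-8):
    """Check stationarity (2.5) and complementary slackness (2.4) for a candidate
    primal point x with multipliers lam (inequalities) and nu (equalities).
    grad_f0, grads_f[i], grads_h[i] return gradients; fs[i] returns f_i(x)."""
    r = grad_f0(x)
    for lam_i, gf in zip(lam, grads_f):
        r = r + lam_i * gf(x)
    for nu_i, gh in zip(nu, grads_h):
        r = r + nu_i * gh(x)
    stationary = np.linalg.norm(r) <= tol
    slack = all(abs(lam_i * fi(x)) <= tol for lam_i, fi in zip(lam, fs))
    return stationary, slack

# Hypothetical usage: min x1^2 + x2^2 s.t. x1 + x2 - 2 <= 0, whose constraint is
# non-binding at the unconstrained minimum, so the candidate is x = (0, 0), lam = (0,).
print(check_kkt_equalities(
    x=np.array([0.0, 0.0]), lam=[0.0], nu=[],
    grad_f0=lambda x: 2 * x,
    grads_f=[lambda x: np.array([1.0, 1.0])],
    grads_h=[],
    fs=[lambda x: x[0] + x[1] - 2.0],
))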

3 Fritz John Conditions


In this section, we state the Fritz John (FJ) necessary and sufficient conditions. These
necessary conditions, including the FJ necessary conditions and the KKT necessary conditions,
basically involve a stationary condition, complementary slackness, primal feasibility, and dual
feasibility. The intuition behind these conditions is quite straightforward, so we do not include
the proofs here; please refer to Bazaraa [1] for the detailed proofs.
On the other hand, these conditions rely on different assumptions, which involve conti-
nuity, differentiability, and convexity of the objective function, binding/non-binding inequality
constraints, and equality constraints. We point out that in most textbooks, it is more common
to assume that all functions are continuously differentiable. However, Bazaraa [1] generalizes
these assumptions by distinguishing between the regularity requirements for different types of
constraints. Therefore, we focus on clearly stating these assumptions in this and the following
section.

3.1 Fritz John Necessary Conditions
The FJ necessary conditions are for local minima. We need some assumptions:

• Objective function and binding inequality constraints are differentiable.

• Non-binding inequality constraints are continuous.

• Equality constraints are continuously differentiable.

Theorem 3.1: Fritz John Necessary Conditions

Let x be a feasible solution and I = {i | fi (x) = 0} be the set of binding constraints.
Suppose that f0 and fi for i ∈ I are differentiable at x, fi for i ̸∈ I are continuous at x, and
hi for i = 1, . . . , l are continuously differentiable at x. If x is a local minimum, then there
exist λ0 , λi for i ∈ I, and νi for i = 1, . . . , l such that

1. Stationary condition: λ0 ∇f0 (x) + ∑_{i∈I} λi ∇fi (x) + ∑_{i=1}^{l} νi ∇hi (x) = 0.

2. Primal feasibility: fi (x) ≤ 0 for i = 1, . . . , m and hi (x) = 0 for i = 1, . . . , l.

3. Dual feasibility: λ0 ≥ 0 and λi ≥ 0 for i ∈ I.

4. Lagrange multipliers are not all zero: (λ0 , λ, ν) ̸= 0.

This can be reduced to a more commonly used form of the FJ necessary conditions [4], with
complementary slackness, under stronger assumptions:

• Objective function and inequality constraints are differentiable.

• Equality constraints are continuously differentiable.

Corollary 3.2

Let x be a feasible solution. Suppose that f0 and fi for i = 1, . . . , m are differentiable at x,
and hi for i = 1, . . . , l are continuously differentiable at x. If x is a local minimum, then
there exist λ0 , λi for i = 1, . . . , m, and νi for i = 1, . . . , l such that

1. Stationary condition: λ0 ∇f0 (x) + ∑_{i=1}^{m} λi ∇fi (x) + ∑_{i=1}^{l} νi ∇hi (x) = 0.

2. Complementary slackness: λi fi (x) = 0 for i = 1, . . . , m.

3. Primal feasibility: fi (x) ≤ 0 for i = 1, . . . , m and hi (x) = 0 for i = 1, . . . , l.

4. Dual feasibility: λ0 ≥ 0 and λi ≥ 0 for i = 1, . . . , m.

5. Lagrange multipliers are not all zero: (λ0 , λ, ν) ̸= 0.

We say (x, λ, ν) is an FJ point if it satisfies the FJ necessary conditions.

3.2 Fritz John Sufficient Conditions

The FJ sufficient conditions are also for local minima. Again, we need some assumptions:

• Objective function is pseudoconvex.

• Binding inequality constraints are strictly pseudoconvex.

• Equality constraints are affine.

• Gradients of the equality constraints are linearly independent.

Theorem 3.3: Fritz John Sufficient Conditions

Let I = {i | fi (x) = 0} be the set of binding constraints and define S = {x | fi (x) ≤ 0
for i ∈ I, hi (x) = 0 for i = 1, . . . , l}. Suppose that x is an FJ point, hi for i = 1, . . . , l are
affine, and ∇hi for i = 1, . . . , l are linearly independent. If there exists some neighborhood
Nε (x) such that f0 is pseudoconvex on S ∩ Nε (x) and fi for i ∈ I are strictly pseudoconvex
on S ∩ Nε (x), then x is a local minimum.

4 Karush-Kuhn-Tucker Conditions
The FJ conditions provide a general set of necessary conditions for optimality, which do not
require the Lagrange multiplier associated with the objective function to be positive; that is,
λ0 = 0 is allowed. This makes the FJ conditions more general but less informative. For example,
if λ0 = 0, the stationary condition no longer reflects a balance between the objective and the
constraints. The main difference between the FJ conditions and the KKT conditions is that in
the KKT conditions the Lagrange multiplier λ0 cannot be zero, i.e., λ0 > 0 (so it can be
normalized to λ0 = 1). Thus, the KKT conditions can be seen as a special case of the FJ
conditions, where the presence of regularity in the problem ensures meaningful Lagrange
multipliers. We can derive the KKT conditions from the FJ conditions if some constraint
qualification holds.
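For instance, a standard textbook illustration of the degenerate case is the problem min_x x subject to x2 ≤ 0. The only feasible point is x = 0, so it is the global minimum, yet ∇f1 (0) = 2 · 0 = 0, and the FJ stationary condition λ0 · 1 + λ1 · 0 = 0 forces λ0 = 0 (with any λ1 > 0). No KKT multiplier exists at x = 0, since no standard constraint qualification (e.g., Slater's condition or linear independence) holds there.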

4.1 Karush-Kuhn-Tucker Necessary Conditions


Under appropriate assumptions, including a valid constraint qualification, the KKT neces-
sary conditions hold for a local minimum.

• Objective function and binding inequality constraints are differentiable.

• Non-binding constraints are continuous.

• Equality constraints are continuously differentiable.

• Constraint qualification.

Theorem 4.1: Karush-Kuhn-Tucker Necessary Conditions

Let x be a feasible solution and I = {i | fi (x) = 0} be the set of binding constraints.
Suppose that f0 and fi for i ∈ I are differentiable at x, fi for i ̸∈ I are continuous at x, and hi
for i = 1, . . . , l are continuously differentiable at x. Suppose some constraint qualification
holds (e.g., ∇fi (x) for i ∈ I and ∇hi (x) for i = 1, . . . , l are linearly independent). If x is a
local minimum, then there exist unique λi for i ∈ I and νi for i = 1, . . . , l such that

1. Stationary condition: ∇f0 (x) + ∑_{i∈I} λi ∇fi (x) + ∑_{i=1}^{l} νi ∇hi (x) = 0.

2. Primal feasibility: fi (x) ≤ 0 for i = 1, . . . , m and hi (x) = 0 for i = 1, . . . , l.

3. Dual feasibility: λi ≥ 0 for i ∈ I.

Similar to the FJ necessary conditions, this can be reduced to a more commonly used form of
the KKT necessary conditions with complementary slackness under stronger assumptions:
• Objective function and inequality constraints are differentiable.
• Equality constraints are continuously differentiable.
• Constraint qualification.

Corollary 4.2

Let x be a feasible solution. Suppose that f0 and fi for i = 1, . . . , m are differentiable at
x, and hi for i = 1, . . . , l are continuously differentiable at x. Suppose some constraint
qualification holds (e.g., Slater's condition). If x is a local minimum, then there exist λi
for i = 1, . . . , m and νi for i = 1, . . . , l such that

1. Stationary condition: ∇f0 (x) + ∑_{i=1}^{m} λi ∇fi (x) + ∑_{i=1}^{l} νi ∇hi (x) = 0.

2. Complementary slackness: λi fi (x) = 0 for i = 1, . . . , m.

3. Primal feasibility: fi (x) ≤ 0 for i = 1, . . . , m and hi (x) = 0 for i = 1, . . . , l.

4. Dual feasibility: λi ≥ 0 for i = 1, . . . , m.

We say (x, λ, ν) is a KKT point if it satisfies the KKT necessary conditions.

4.2 Karush-Kuhn-Tucker Sufficient Conditions


In a similar manner to the FJ sufficient conditions, the KKT sufficient conditions ensure
global optimality when the problem satisfies specific convexity properties.
• Objective is pseudoconvex.
• Binding inequality constraints are quasiconvex.
• Equality constraints are quasiconvex/quasiconcave.

Theorem 4.3: Karush-Kuhn-Tucker Sufficient Conditions

Suppose that x is a KKT point, and let I = {i | fi (x) = 0} be the set of binding constraints.
If f0 is pseudoconvex, fi for i ∈ I are quasiconvex, and hi is quasiconvex if νi > 0 and
quasiconcave if νi < 0, then x is a global minimum.

For convex optimization problems, the KKT conditions are sufficient. Here is an example
of solving a convex program with the KKT conditions.
Example 4.4

Consider the convex program

min_x   f0 (x1 , x2 ) = (x1 − 1)2 + (x2 − 2)2
s.t.    f1 (x1 , x2 ) = x1 + 3x2 − 1 ≤ 0.

We first check Slater's condition: the point (0, 0) is strictly feasible since f1 (0, 0) = −1 < 0
(and both the objective and the constraint are differentiable).
Then by the KKT conditions, there exists a unique λ such that the stationary condition holds,

∇f0 (x1 , x2 ) + λ∇f1 (x1 , x2 ) = 0  =⇒  2x1 − 2 + λ = 0,  2x2 − 4 + 3λ = 0,

complementary slackness holds,

λf1 (x1 , x2 ) = 0  =⇒  λ(x1 + 3x2 − 1) = 0,

and primal/dual feasibility holds. We first solve the system

2x1 − 2 + λ = 0
2x2 − 4 + 3λ = 0
λ(x1 + 3x2 − 1) = 0,

then check the primal/dual feasibility. Suppose that the constraint is binding; the system becomes

2x1 − 2 + λ = 0
2x2 − 4 + 3λ = 0        =⇒  (x1 , x2 , λ) = (2/5, 1/5, 6/5),
x1 + 3x2 − 1 = 0

where (x1 , x2 , λ) indeed satisfies primal and dual feasibility.
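The binding-constraint case can also be solved numerically as a 3 × 3 linear system; the sketch below (a generic linear solve over the same equations as above) reproduces (2/5, 1/5, 6/5):

import numpy as np

# Binding-case KKT system of Example 4.4 as a linear system in (x1, x2, lam):
#   2*x1       + lam   = 2
#         2*x2 + 3*lam = 4
#   x1  + 3*x2         = 1
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 3.0],
              [1.0, 3.0, 0.0]])
b = np.array([2.0, 4.0, 1.0])
x1, x2, lam = np.linalg.solve(A, b)
print(x1, x2, lam)                      # 0.4, 0.2, 1.2, i.e. (2/5, 1/5, 6/5)

# Check primal and dual feasibility of the candidate KKT point.
print("primal feasible:", x1 + 3 * x2 - 1 <= 1e-12, " dual feasible:", lam >= 0)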

5 Constraint Qualifications
The FJ necessary conditions are a general set of conditions that hold for a local minimum of
constrained optimization problems. The KKT necessary conditions refine the FJ conditions by
requiring that the Lagrange multiplier associated with the objective function is positive, which
in turn requires that a constraint qualification (regularity condition) holds. We list some
constraint qualifications:

• Slater’s condition.

• Linearity constraint qualification: fi for i = 1, . . . , m and hi for i = 1, . . . , l are affine.

• Linear independence constraint qualification: ∇fi (x) for i ∈ I and ∇hi (x) for i = 1, . . . , l
are linearly independent.

For more information about constraint qualifications, please refer to Bazaraa [1].

References
[1] M. S. Bazaraa, H. D. Sherali, and C. M. Shetty. Nonlinear Programming: Theory and
Algorithms. John Wiley & Sons, 2006.

[2] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, 1999.

[3] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.

[4] G. Giorgi. Remarks on Fritz John conditions for problems with inequality and equality
constraints. International Journal of Pure and Applied Mathematics, 71:643–657, 2011.

