Penalty and Barrier

The document discusses two methods for approximating constrained optimization problems: the penalty method and barrier method. The penalty method replaces the constrained problem with an unconstrained problem by adding a penalty term for infeasible solutions. The penalty term is multiplied by a penalty parameter c, which is gradually increased to drive solutions toward feasibility. The barrier method instead approaches feasibility from the interior of the feasible region using a "barrier" function B(x) that goes to infinity at the boundary. This transforms the problem to minimizing f(x) + (1/c)B(x) within the interior of the feasible set.

Uploaded by

Burak Yüksel
Copyright
© Attribution Non-Commercial (BY-NC)

PENALTY AND BARRIER METHODS

A) Penalty Method
Penalty and barrier methods are procedures for approximating constrained optimization
problems by unconstrained problems.

This is achieved by either
adding a penalty for infeasibility and forcing the solution to feasibility and
subsequent optimum, or
adding a barrier to ensure that a feasible solution never becomes infeasible.

Consider the problem

minimize f(x)
subject to x ∈ S        (1)

where f is a continuous function on R^n and S is a constraint set in R^n. In most applications S is defined explicitly by a number of functional constraints, but in this section the more general description in (1) can be handled. The idea of a penalty function method is to replace problem (1) by an unconstrained approximation of the form

minimize q(c, x) = f(x) + cP(x)        (2)

where c is a positive constant and P is a function on R^n satisfying:

1) P is continuous,
2) P(x) ≥ 0 for all x ∈ R^n,
3) P(x) = 0 if and only if x ∈ S.

EXAMPLE 1:

Suppose S is defined by a number of inequality constraints: S = {x : g_i(x) ≤ 0, i = 1,..., m}. A very useful penalty function in this case is

P(x) = Σ (max{0, g_i(x)})², the sum taken over i = 1,..., m,

which gives the quadratic augmented objective function

q(c, x) = f(x) + c Σ (max{0, g_i(x)})².



Here, each unsatisfied constraint influences x by assessing a penalty equal to the square of the violation. These influences are summed and multiplied by c, the penalty parameter. Of course, this influence is counterbalanced by f(x). Therefore, if the magnitude of the penalty term is small relative to the magnitude of f(x), minimization of q(c, x) will almost certainly not result in an x that is feasible for the original problem. However, if the value of c is made suitably large, the penalty term will exact such a heavy cost for any constraint violation that minimization of the augmented objective function will yield a feasible solution.
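The quadratic penalty and the augmented objective above can be sketched directly in code. The following is a minimal illustration of my own (the one-dimensional constraint set 0 ≤ x ≤ 1 and the objective are hypothetical, chosen only to show P vanishing on S and growing quadratically outside it):

```python
# Quadratic penalty for inequality constraints g_i(x) <= 0:
#   P(x) = sum_i max(0, g_i(x))^2,   q(c, x) = f(x) + c * P(x).

def quadratic_penalty(x, constraints):
    """P(x): zero on the feasible set, positive outside it."""
    return sum(max(0.0, g(x)) ** 2 for g in constraints)

def augmented_objective(x, f, constraints, c):
    """q(c, x) = f(x) + c * P(x)."""
    return f(x) + c * quadratic_penalty(x, constraints)

# Hypothetical constraint set S = {x : x - 1 <= 0, -x <= 0}, i.e. 0 <= x <= 1.
constraints = [lambda x: x - 1.0, lambda x: -x]
f = lambda x: (x - 2.0) ** 2     # unconstrained minimum at x = 2, outside S

print(quadratic_penalty(0.5, constraints))   # 0.0  (feasible point)
print(quadratic_penalty(2.0, constraints))   # 1.0  (violation of x <= 1 squared)
```

Minimizing q(c, ·) for small c yields points near x = 2; as c grows, the minimizer is pushed toward the boundary point x = 1 of S.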



The function cP(x) is illustrated below for the one-dimensional case.




For large c it is clear that the minimum point of problem (2) will be in a region where P is small. Thus for increasing c it is expected that the corresponding solution points will approach the feasible region S and, subject to being close, will minimize f. Ideally then, as c → ∞, the solution point of the penalty problem will converge to a solution of the constrained problem.



Figure 1: Illustration of the Penalty function
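This limiting behavior can be checked on a small one-dimensional example of my own (not from the text): minimize f(x) = x² subject to x ≥ 1, i.e. g(x) = 1 − x ≤ 0. For x < 1 the augmented objective is q(c, x) = x² + c(1 − x)², and setting q′ = 0 gives the closed-form penalty minimizer x*(c) = c/(1 + c):

```python
# Penalty minimizers for: minimize x^2 subject to x >= 1.
# q(c, x) = x^2 + c * max(0, 1 - x)^2; for x < 1, q'(x) = 0 gives
# x*(c) = c / (1 + c), which approaches the constrained solution x* = 1.

def penalty_minimizer(c):
    return c / (1.0 + c)

for c in [1, 10, 100, 1000]:
    print(c, penalty_minimizer(c))
# Every x*(c) is infeasible (x*(c) < 1), but x*(c) -> 1 as c -> infinity.
```

Note that the approach to the solution is from outside the feasible region, which is why penalty methods are also called exterior methods.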



EXAMPLE 2

In order to get some understanding of how to select the penalty parameter c, let us consider the following problem.













The feasible region is shown graphically in Fig. 2 along with several isovalue contours for the
objective function. The problem is a quadratic program and the isovalue contours are
concentric circles centered at (6,7), the unconstrained minimum of f(x).















Figure 2: Feasible Region


Using the quadratic penalty function, the augmented objective function is









The first step in the solution process is to select a starting point. A good rule of thumb is to start at an infeasible point. By design, then, every trial point except the last one will be infeasible (exterior to the feasible region). A reasonable place to start is at the unconstrained minimum, so we set x^0 = (6, 7). Since only constraint 3 is violated at this point, we have



























Figure 3: Feasible Region and starting point for iteration


Assuming that in the neighborhood of x^0 the max operator returns the constraint, the gradient with respect to x is






Setting the elements of the gradient to zero, we obtain the solution





as a function of c. For any positive value of c, q(c, x) is a strictly convex function (the Hessian of q(c, x) is positive definite for all c > 0), so x*_1(c) and x*_2(c) determine a global minimum. It turns out for this example that the minima will continue to satisfy all but the third constraint for all positive values of c. If we take the limit of x*_1(c) and x*_2(c) as c → ∞, we obtain x*_1 = 3 and x*_2 = 4, the constrained global minimum for the original problem.
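The original problem data are not reproduced above, but the stated facts (a quadratic objective with unconstrained minimum at (6, 7) and a limit point of (3, 4)) are consistent with, for example, f(x) = (x1 − 6)² + (x2 − 7)² and a violated constraint g3(x) = x1 + x2 − 7 ≤ 0. Treating that as an assumed reconstruction of my own, the limiting behavior can be checked in closed form:

```python
# Assumed reconstruction (not from the text): f(x) = (x1-6)^2 + (x2-7)^2
# with violated constraint g3(x) = x1 + x2 - 7 <= 0, so that
#   q(c, x) = f(x) + c*(x1 + x2 - 7)^2.
# Setting the gradient to zero gives x2 = x1 + 1 and x1 = (6 + 6c)/(1 + 2c).

def x_star(c):
    x1 = (6 + 6 * c) / (1 + 2 * c)
    return x1, x1 + 1

for c in [1, 10, 100, 10000]:
    print(c, x_star(c))
# As c -> infinity, x*(c) -> (3, 4), matching the limit reported above.
```

Each x*(c) violates the assumed constraint slightly (x1 + x2 > 7), so the iterates approach the boundary from the exterior, as the method prescribes.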


PENALTY PARAMETER SELECTION

Because the above approach seems to work so well, it is natural to conjecture that all we have to do is set c to a very large number and then optimize the resulting augmented objective function q(c, x) to obtain the solution to the original problem. Unfortunately, this conjecture is not correct. First, "large" depends on the particular model. It is almost always impossible to tell how large c must be to provide a solution to the problem without creating numerical difficulties in the computations. Second, in a very real sense, the problem is dynamically changing with the relative position of the current value of x and the subset of the constraints that are violated. The third reason why the conjecture is not correct is that large values of c create enormously steep valleys at the constraint boundaries. Steep valleys will often present formidable, if not insurmountable, convergence difficulties for all preferred search methods unless the algorithm starts at a point extremely close to the minimum being sought.

Fortunately, there is a direct and sound strategy that will overcome each of the difficulties mentioned above. All that needs to be done is to start with a relatively small value of c and an infeasible (exterior) point. This will assure that no steep valleys are present in the initial optimization of q(c, x). Subsequently, we solve a sequence of unconstrained problems with monotonically increasing values of c chosen so that the solution to each new problem is close to the previous one. This precludes any major difficulties in finding the minimum of q(c, x) from one iteration to the next.

CONVERGENCE


Lemma 1

The following lemma gives a set of inequalities that follow directly from the definition of x_k and the inequality c_{k+1} > c_k:

q(c_k, x_k) ≤ q(c_{k+1}, x_{k+1})        (3)
P(x_{k+1}) ≤ P(x_k)        (4)
f(x_k) ≤ f(x_{k+1})        (5)

PROOF

Since x_k minimizes q(c_k, x) and x_{k+1} minimizes q(c_{k+1}, x),

f(x_k) + c_k P(x_k) ≤ f(x_{k+1}) + c_k P(x_{k+1})        (6)
f(x_{k+1}) + c_{k+1} P(x_{k+1}) ≤ f(x_k) + c_{k+1} P(x_k)        (7)

Adding (6) and (7) yields

(c_{k+1} − c_k) P(x_{k+1}) ≤ (c_{k+1} − c_k) P(x_k),

which proves (4) since c_{k+1} > c_k. Also,

q(c_{k+1}, x_{k+1}) = f(x_{k+1}) + c_{k+1} P(x_{k+1}) ≥ f(x_{k+1}) + c_k P(x_{k+1}) ≥ f(x_k) + c_k P(x_k) = q(c_k, x_k),

which proves (3). Using (4) in (6) we obtain the proof of (5):

f(x_k) ≤ f(x_{k+1}) + c_k [P(x_{k+1}) − P(x_k)] ≤ f(x_{k+1}).
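As an illustrative check of my own (not from the text), the monotonicity in (3), (4), and (5) can be observed on the one-dimensional problem minimize x² subject to x ≥ 1, where the penalty minimizer x_k = c_k/(1 + c_k) is available in closed form:

```python
# Check Lemma 1 on: minimize f(x) = x^2 s.t. x >= 1, P(x) = max(0, 1-x)^2.
# For this problem the minimizer of q(c, x) is x(c) = c / (1 + c).

def f(x): return x * x
def P(x): return max(0.0, 1.0 - x) ** 2
def q(c, x): return f(x) + c * P(x)

cs = [1.0, 2.0, 4.0, 8.0, 16.0]          # increasing penalty parameters
xs = [c / (1.0 + c) for c in cs]         # corresponding minimizers

qs = [q(c, x) for c, x in zip(cs, xs)]
Ps = [P(x) for x in xs]
fs = [f(x) for x in xs]

print(all(a <= b for a, b in zip(qs, qs[1:])))   # True: q(c_k, x_k) nondecreasing (3)
print(all(a >= b for a, b in zip(Ps, Ps[1:])))   # True: P(x_k) nonincreasing (4)
print(all(a <= b for a, b in zip(fs, fs[1:])))   # True: f(x_k) nondecreasing (5)
```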

Lemma 2

Let x* be a solution to problem (1). Then for each k,

f(x*) ≥ q(c_k, x_k) ≥ f(x_k).

PROOF

Since P(x*) = 0 and x_k minimizes q(c_k, x),

f(x*) = f(x*) + c_k P(x*) ≥ f(x_k) + c_k P(x_k) ≥ f(x_k).

Theorem

Let {x_k} be a sequence generated by the penalty method. Then any limit point of the sequence is a solution to (1).

PROOF

Suppose {x_k} has a convergent subsequence with limit x̄. By the continuity of f,

lim f(x_k) = f(x̄).        (8)

Let f* be the optimal value associated with problem (1). Then, according to Lemmas 1 and 2, the sequence of values q(c_k, x_k) is nondecreasing and bounded above by f*. Thus,

lim q(c_k, x_k) = q* ≤ f*.        (9)

Subtracting (8) from (9) yields

lim c_k P(x_k) = q* − f(x̄).        (10)

Since P(x_k) ≥ 0 and c_k → ∞, (10) implies

lim P(x_k) = 0.

Using the continuity of P, this implies P(x̄) = 0, so the limit point x̄ is feasible for (1).

To show that x̄ is optimal, note from Lemma 2 that f(x_k) ≤ f* for each k, and hence f(x̄) = lim f(x_k) ≤ f*. Since x̄ is feasible, it follows that x̄ is a solution to (1).


Sequential unconstrained minimization technique (SUMT)

1. Initialization step:
Select a growth parameter β > 1 and a stopping parameter ε > 0.
Select an initial value of the penalty parameter, c_0.
Choose a starting point x_0 that violates at least one constraint, and formulate the augmented objective function q(c_0, x). Let k = 1.

2. Iterative step:
Starting from x_{k-1}, use an unconstrained search technique to find the point that minimizes q(c_{k-1}, x). Call it x_k and determine which constraints are violated at this point.

3. Stopping criteria:
If the distance between successive points, ||x_k − x_{k-1}||, is less than ε, or if the penalty term c_{k-1}P(x_k) is less than ε, then stop with x_k, an estimate of the optimal solution.
Otherwise, replace c_{k-1} with c_k = β·c_{k-1}, formulate the new q(c_k, x) based on which constraints are violated at x_k, set k = k + 1, and repeat the iterative step.
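The procedure can be sketched in Python. This is a minimal illustration of my own: the inner unconstrained solver is a crude shrinking-step coordinate search standing in for any preferred method, and the usage problem (separable, with known solution (3, 4)) is hypothetical:

```python
# Sequential unconstrained minimization technique (SUMT) sketch.
# q(c, x) = f(x) + c * sum_i max(0, g_i(x))^2 is minimized for an
# increasing sequence c_0, beta*c_0, beta^2*c_0, ... from an infeasible start.

def minimize_unconstrained(q, x0, step=1.0, tol=1e-8):
    """Crude shrinking-step coordinate search (any unconstrained method works)."""
    x = list(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                if q(trial) < q(x):
                    x = trial
                    improved = True
        if not improved:
            step *= 0.5
    return x

def sumt(f, constraints, x0, c0=1.0, beta=10.0, eps=1e-6, max_outer=12):
    x, c = list(x0), c0
    for _ in range(max_outer):
        q = lambda y, c=c: f(y) + c * sum(max(0.0, g(y)) ** 2 for g in constraints)
        x_new = minimize_unconstrained(q, x)
        if max(abs(a - b) for a, b in zip(x_new, x)) < eps:
            return x_new            # successive solutions within eps: stop
        x, c = x_new, beta * c      # grow the penalty parameter and repeat
    return x

# Hypothetical usage: minimize (x1-6)^2 + (x2-7)^2 s.t. x1 <= 3, x2 <= 4,
# starting at the infeasible unconstrained minimum (6, 7).
f = lambda x: (x[0] - 6) ** 2 + (x[1] - 7) ** 2
g = [lambda x: x[0] - 3, lambda x: x[1] - 4]
print(sumt(f, g, [6.0, 7.0]))       # close to [3, 4]
```

Warm-starting each subproblem from the previous solution is what keeps the steep valleys created by large c from derailing the inner search.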


B) Barrier Method


Penalty Functions
We present the penalty functions for the solution of constrained NLP problems.
Basic principle: the constrained problem is transformed to an unconstrained problem by adding
a penalty term to the objective function. Successive unconstrained problems are solved,
adjusting the penalty parameters so that convergence is attained.
Denote by x*(r) the solution to the modified unconstrained problem, which depends on the penalty parameter r, and by x* the optimal solution to the original constrained problem.
Goal for the penalty function methods: x*(r) → x*.
Barrier Functions
There are two types of penalty functions: interior (commonly called barrier) and exterior (commonly called penalty).
Barrier methods are applicable to problems of the form

minimize f(x)
subject to x ∈ S,

where the constraint set S has a nonempty interior.
In interior point or "barrier" methods, we approach the optimum from the interior of the feasible region. Only inequality constraints are permitted in this class of methods.
Thus, we consider the feasible region to be defined by S = {x : g_i(x) ≤ 0, i = 1, 2, ..., p}.
Interior penalty functions (or barrier functions), B(x), are defined with the properties:
(i) B is continuous,
(ii) B(x) ≥ 0,
(iii) B(x) → ∞ as x approaches the boundary of S.
There are two popular choices. Let g_i, i = 1, 2, ..., p, be continuous functions on E^n, and suppose S = {x : g_i(x) ≤ 0, i = 1, 2, ..., p}. Then the barrier function is defined on the interior of S as

B(x) = −Σ 1/g_i(x)    or
B(x) = −Σ log[−g_i(x)],

the sums taken over i = 1, 2, ..., p.

Using the barrier function we obtain the approximate problem

minimize f(x) + (1/c)B(x)
subject to x ∈ interior of S,

where c is a positive constant.
This problem can be solved by using an unconstrained search technique.
To find the solution, one starts at an initial interior point and then searches from that point using steepest descent or some other iterative descent method applicable to unconstrained problems.
Method
The procedure for solving the problem by the barrier function method is analogous to the penalty function procedure.
For c_k > 0, c_{k+1} > c_k (k = 1, 2, ...), define the function

r(c, x) = f(x) + (1/c)B(x).

For each k, solve the problem

minimize r(c_k, x)
subject to x ∈ interior of S,

obtaining a solution point x_k.
Theorem: Any limit point of a sequence {x_k} generated by the barrier method is a solution to the original problem.
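The outer loop just described can be sketched as follows. This is a minimal illustration of my own: numerical gradients with backtracking stand in for a proper descent method, and the one-dimensional usage problem is hypothetical:

```python
# Barrier method sketch: minimize T(x) = f(x) + (1/c)*B(x) over the interior
# of S, with log barrier B(x) = -sum(log(-g_i(x))) and c increased each
# outer iteration.
import math

def log_barrier(x, constraints):
    vals = [g(x) for g in constraints]
    if any(v >= 0 for v in vals):
        return float("inf")     # +inf outside the interior keeps iterates feasible
    return -sum(math.log(-v) for v in vals)

def barrier_method(f, constraints, x0, c0=1.0, beta=10.0, outer=6, inner=500):
    x, c = list(x0), c0
    for _ in range(outer):
        def T(y):
            return f(y) + log_barrier(y, constraints) / c
        for _ in range(inner):
            h = 1e-6                     # central-difference gradient
            grad = []
            for i in range(len(x)):
                xp, xm = list(x), list(x)
                xp[i] += h
                xm[i] -= h
                grad.append((T(xp) - T(xm)) / (2 * h))
            step = 1.0                   # backtracking line search
            cand = [xi - step * gi for xi, gi in zip(x, grad)]
            while T(cand) >= T(x) and step > 1e-12:
                step *= 0.5
                cand = [xi - step * gi for xi, gi in zip(x, grad)]
            if T(cand) < T(x):
                x = cand
        c *= beta
    return x

# Hypothetical 1-D usage: minimize x subject to 1 - x <= 0. The barrier
# minimizer is x(c) = 1 + 1/c, approaching x* = 1 from the interior.
f = lambda x: x[0]
g = [lambda x: 1.0 - x[0]]
print(barrier_method(f, g, [2.0]))      # slightly above 1.0
```

Note the contrast with the penalty method: every iterate here is strictly feasible, and the solution is approached from inside S.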
Example: Solve the NLP problem

minimize f(x) = −x1·x2
subject to g1(x) = x1 + x2² − 1 ≤ 0
           g2(x) = −x1 − x2 ≤ 0

Use the method of barrier functions by employing the log function. Carry out two iterations of the barrier function and, in solving each subproblem, carry out three iterations of the steepest descent method. Let the starting solution be x(0) = [0.5, 0.5].
The log barrier function B(x) for this problem is

B(x) = −log(1 − x1 − x2²) − log(x1 + x2).

The composite function becomes

T(x) = −x1·x2 + (1/r)[−log(1 − x1 − x2²) − log(x1 + x2)].

The gradient of T(x) is given by

∂T/∂x1 = −x2 + (1/r)[1/(1 − x1 − x2²) − 1/(x1 + x2)]
∂T/∂x2 = −x1 + (1/r)[2x2/(1 − x1 − x2²) − 1/(x1 + x2)]

For every iteration of the barrier method, there are three iterations of the steepest descent method. We will use the following notation to differentiate between the outer and the inner iterations:

x_s(t) = the solution computed in iteration t of the steepest descent method (inner loop)
x_p(k) = the solution computed in iteration k of the barrier method (outer loop)
Iteration 1

Start with r_0 = 1 and x_p(0) = [0.5, 0.5].

Iteration 1 (Steepest Descent):
To find the step length α, solve min over α ≥ 0 of T(x − α∇T(x)).
The optimal solution is α = 0.053589839.

Iteration 2 (Steepest Descent):
To find the step length α, solve min over α ≥ 0 of T(x − α∇T(x)).
The optimal solution is α = 0.329838461.

Iteration 3 (Steepest Descent):
α = 0.08878

This completes three iterations of the steepest descent method and gives the solution after one iteration of the barrier method.

Iteration 2

We start with the previous optimal solution and set r_1 = 10.

Iteration 1 (Steepest Descent):
α = 0.572927856

Iteration 2 (Steepest Descent):

Iteration 3 (Steepest Descent):

The optimal solution after the second iteration of the barrier function is obtained from these three steepest descent steps.
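Many of the intermediate values in this worked example were lost in extraction, but the computation itself can be reproduced with a short script. This is a sketch of my own: the log barrier and gradient follow directly from the stated problem, while the golden-section line search is my stand-in for the exact step-length minimization:

```python
# Reproducing the subproblem computation for the example above:
#   min f(x) = -x1*x2  s.t.  g1 = x1 + x2^2 - 1 <= 0,  g2 = -x1 - x2 <= 0,
# with T(x) = f(x) + (1/r)*B(x) and the log barrier B.
import math

def T(x, r):
    x1, x2 = x
    s1, s2 = 1.0 - x1 - x2 ** 2, x1 + x2     # -g1 and -g2: positive inside S
    if s1 <= 0 or s2 <= 0:
        return float("inf")
    return -x1 * x2 + (1.0 / r) * (-math.log(s1) - math.log(s2))

def grad_T(x, r):
    x1, x2 = x
    s1, s2 = 1.0 - x1 - x2 ** 2, x1 + x2
    return [-x2 + (1.0 / r) * (1.0 / s1 - 1.0 / s2),
            -x1 + (1.0 / r) * (2.0 * x2 / s1 - 1.0 / s2)]

def line_search(x, d, r, hi=1.0, iters=100):
    """Golden-section search for the step length minimizing T(x + a*d), 0 <= a <= hi."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    lo = 0.0
    for _ in range(iters):
        a, b = hi - phi * (hi - lo), lo + phi * (hi - lo)
        fa = T([xi + a * di for xi, di in zip(x, d)], r)
        fb = T([xi + b * di for xi, di in zip(x, d)], r)
        if fa <= fb:
            hi = b
        else:
            lo = a
    return (lo + hi) / 2.0

x = [0.5, 0.5]
for r in [1, 10]:                   # two barrier (outer) iterations
    for _ in range(3):              # three steepest-descent (inner) steps
        d = [-gi for gi in grad_T(x, r)]
        alpha = line_search(x, d, r)
        x = [xi + alpha * di for xi, di in zip(x, d)]
    print(r, x)
# The first computed step length is about 0.05359, matching the
# alpha = 0.053589839 reported in the text.
```

At x = [0.5, 0.5] with r = 1, the gradient is [2.5, 2.5], so the first search direction is (−2.5, −2.5), and the line search along it recovers the step length quoted above.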
