
Al-Farabi Kazakh National University

S. A. Aisagaliev, Zh. Kh. Zhunussova

MATHEMATICAL
PROGRAMMING
Textbook

Almaty
"Kаzаkh university"
2011
ББК 22. 1
А 36

Recommended for publication by the Academic Council of the Faculty of Mechanics and Mathematics
and the Editorial and Publishing Council (RISO) of Al-Farabi KazNU
(Minutes No. 1 of 19 October 2010)

Reviewers:
doctor of physical and mathematical sciences, professor M.Akhmet;
candidate of philological sciences, professor A.A.Moldagalieva

Aisagaliev S.А., Zhunussova Zh.Kh.


А 36 Mathematical programming: textbook. – Almaty: Kаzаkh
university, 2011. - 208 p.
ISBN 9965-29-630-8
The textbook expounds some theoretical foundations of mathematical programming: elements of convex analysis and convex, nonlinear, and linear programming, as required for planning and production control and for solving topical problems of controlled processes in the natural sciences, technology, and economics.
The appendix contains tasks for independent work with concrete examples, brief theory and solution algorithms for the problems, and term tasks on the sections of mathematical programming.
It is intended as a textbook for university students specializing in applied mathematics, mathematics, mechanics, economic cybernetics, and informatics. It will also be useful for post-graduate students and researchers in economic, mathematical, and natural-science and technical specialties.

ББК 22. 1

© Aisagaliev S.А., Zhunussova Zh.Kh., 2011


ISBN 9965-29-630-8 © Al-Farabi Kazakh National University, 2011
FOREWORD

The textbook expounds the main sections of mathematical programming and the numerical methods for minimizing functions of a finite number of variables. It is written on the basis of the lectures on optimization methods which the authors have delivered at Al-Farabi Kazakh National University.
In connection with the transition to credit-based education, the book is written in the manner of a scholastic-methodical complex: alongside the lectures it contains problems for independent work with solutions of concrete examples, brief theory and algorithms for the sections of the course, as well as term tasks for mastering the main methods of solving optimization problems.
In the second half of the twentieth century, the needs of practice gave rise to a new direction in mathematics, "mathematical control theory", which includes the following sections: mathematical programming, optimal control of processes, the theory of extremal problems, differential and matrix games, controllability and observability theory, and stochastic programming.
Mathematical control theory was formed in a period of rapid development: the creation of new technology and spacecraft, the development of mathematical methods in economics, and the control of various processes in the natural sciences. The new problems that arose could not be solved by the classical methods of mathematics and required new approaches and theories. Many research and production problems were solved thanks to mathematical control theory, in particular: organizing production to achieve maximum profit under scarce resources; optimal control of nuclear and chemical reactors, electric power and robotic systems; and control of the motion of ballistic rockets, spacecraft, and satellites. The methods of mathematical control theory also proved useful for the development of mathematics itself: classical boundary value problems for differential equations, problems of best function approximation, optimal choice of parameters in iterative processes, and minimization of residuals of equations all reduce to the study of extremal problems.
The theoretical foundations and solution algorithms of convex, nonlinear, and linear programming are expounded in Lectures 1-17. Three term tasks for individual student work are provided for these sections.
Solution methods for extremal problems belong to one of the most rapidly developing directions of mathematics, which is why producing a textbook that is complete and free of shortcomings is very difficult. The authors will be grateful for critical remarks concerning the textbook.
INTRODUCTION

Lecture 1
THE MAIN DEFINITIONS.
STATEMENT OF THE PROBLEM

Let $E^n$ be the Euclidean space of vectors $u = (u_1, u_2, \ldots, u_n)$, and let $J(u) = J(u_1, \ldots, u_n)$ be a scalar function defined on a set $U$ of the space $E^n$. It is necessary to find the maximum (minimum) of the function $J(u)$ on the set $U$, where $U \subseteq E^n$.
Production problem. An enterprise manufactures products of five types. The cost of a unit of product of each type is $c_1, c_2, c_3, c_4, c_5$ respectively; in particular $c_1 = 3$, $c_2 = 5$, $c_3 = 10$, $c_4 = 1$, $c_5 = 8$. For fabricating these products the enterprise has certain resources, expressed by the following normative data:

Type of product | Materials    | Energy       | Labor expenses | Resources
1               | $a_{11} = 3$ | $a_{21} = 6$ | $a_{31} = 1$   | $b_1 = 50$
2               | $a_{12} = 1$ | $a_{22} = 3$ | $a_{32} = 3$   | $b_2 = 120$
3               | $a_{13} = 4$ | $a_{23} = 1$ | $a_{33} = 1$   | $b_3 = 20$
4               | $a_{14} = 2$ | $a_{24} = 4$ | $a_{34} = 2$   |
5               | $a_{15} = 1$ | $a_{25} = 5$ | $a_{35} = 4$   |

Here $a_{11}$ is the amount of material required to fabricate a unit of product of type 1; $a_{12}$ is the consumption of material to fabricate a unit of product of type 2, and so on; $a_{ij}$, $i = 1, 2, 3$, $j = 1, 2, 3, 4, 5$, is the expense of material ($i = 1$), energy ($i = 2$), or labor ($i = 3$) for fabricating a unit of product of type $j$; $b_1, b_2, b_3$ are the available amounts of these three resources.
It is required to find a production plan that provides the maximal profit. Since the profit is proportional to the total cost of the marketed commodities, hereinafter it is identified with that total cost.
Let $u_1, u_2, u_3, u_4, u_5$ be the output amounts of the five products. The mathematical formalization of the profit-maximization problem has the form

$J(u) = J(u_1, u_2, u_3, u_4, u_5) = c_1 u_1 + c_2 u_2 + c_3 u_3 + c_4 u_4 + c_5 u_5 = 3u_1 + 5u_2 + 10u_3 + u_4 + 8u_5 \to \max$  (1)

under the conditions (restrictions on the resources)

$g_1(u) = a_{11}u_1 + a_{12}u_2 + a_{13}u_3 + a_{14}u_4 + a_{15}u_5 \le b_1,$
$g_2(u) = a_{21}u_1 + a_{22}u_2 + a_{23}u_3 + a_{24}u_4 + a_{25}u_5 \le b_2,$  (2)
$g_3(u) = a_{31}u_1 + a_{32}u_2 + a_{33}u_3 + a_{34}u_4 + a_{35}u_5 \le b_3,$

$u_1 \ge 0, \; u_2 \ge 0, \; u_3 \ge 0, \; u_4 \ge 0, \; u_5 \ge 0,$  (3)

where $a_{ij}$, $i = \overline{1,3}$, $j = \overline{1,5}$, are the normative coefficients whose values are presented above, and $b_1, b_2, b_3$ are the resources of the enterprise. Since the amounts of products are nonnegative numbers, condition (3) is necessary.
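The concrete data of problem (1)–(3) can also be cross-checked numerically with an off-the-shelf LP solver. The sketch below assumes NumPy and SciPy are available; it is only an illustration, since the textbook develops its own solution algorithms for such problems in the later lectures:

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([3, 5, 10, 1, 8])      # unit costs c1..c5 from the table
A = np.array([[3, 1, 4, 2, 1],      # materials row: a11..a15, b1 = 50
              [6, 3, 1, 4, 5],      # energy row:    a21..a25, b2 = 120
              [1, 3, 1, 2, 4]])     # labor row:     a31..a35, b3 = 20
b = np.array([50, 120, 20])

# linprog minimizes, so we maximize c'u by minimizing -c'u.
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 5, method="highs")
u_star = res.x        # optimal production plan
J_star = -res.fun     # maximal profit
print(u_star, J_star)
```

The solver returns the plan $u_* = (0, 0, 12, 0, 2)$ with profit $J_* = 136$; substituting into (2) shows that the material and labor resources are used completely ($4 \cdot 12 + 2 = 50$ and $12 + 4 \cdot 2 = 20$), while the energy constraint is slack.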
If we introduce the notation

$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14} & a_{15} \\ a_{21} & a_{22} & a_{23} & a_{24} & a_{25} \\ a_{31} & a_{32} & a_{33} & a_{34} & a_{35} \end{pmatrix}; \quad c = \begin{pmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \\ c_5 \end{pmatrix}; \quad u = \begin{pmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \\ u_5 \end{pmatrix}; \quad b = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix}; \quad g = \begin{pmatrix} g_1 \\ g_2 \\ g_3 \end{pmatrix},$

then the optimization problem (1)–(3) can be written as

$J(u) = c'u \to \max;$  (1′)
$g(u) = Au \le b;$  (2′)
$u \ge 0,$  (3′)

where $c' = (c_1, c_2, c_3, c_4, c_5)$ is a row vector and $'$ is the sign of transposition.
Let the vector function be $\bar g(u) = g(u) - b = Au - b$, and let the set $U_0 = \{u \in E^5 \mid u \ge 0\}$. Then problem (1)–(3) is written as:

$J(u) = c'u \to \max;$  (4)
$u \in U = \{u \in E^5 \mid u \in U_0, \; \bar g(u) \le 0\} \subset E^5.$  (5)

Problem (1)–(3) (or (1′)–(3′), or (4), (5)) belongs to the class of so-called linear programming problems, since $J(u)$ is a linear function of $u$ and the vector function $\bar g(u)$ is also linear in $u$.
In the case considered above, the vector $u$ has five components, the vector function $g(u)$ has three, and the matrix $A$ has order $3 \times 5$. In the case when the vector $u$ has dimension $n$, $g(u)$ is an $m$-dimensional function, the matrix $A$ has order $m \times n$, and there are also restrictions of equality type (for instance $g_1(u) = A_1 u - b_1$, where $A_1$ is a matrix of order $m_1 \times n$ and $b_1 \in E^{m_1}$), problem (4), (5) is written as:

$J(u) = c'u \to \max;$  (6)
$u \in U,$
$U = \{u \in E^n \mid u \in U_0, \; g(u) = Au - b \le 0, \; g_1(u) = A_1 u - b_1 = 0\},$  (7)

where the set $U_0 = \{u \in E^n \mid u \ge 0\}$, $c \in E^n$. Problem (6), (7) belongs to the class of general linear programming problems.
Suppose now that $J(u)$ is a convex function defined on the convex set $U_0$ (not necessarily linear) and $g(u)$ is a convex function defined on the convex set $U_0$. Now problem (6), (7) is written as

$J(u) \to \max;$  (8)
$u \in U = \{u \in E^n \mid u \in U_0, \; g(u) \le 0, \; g_1(u) = A_1 u - b_1 = 0\}.$  (9)

Problem (8), (9) belongs to the class of so-called convex programming problems.
Let $J(u)$, $g(u)$, $g_1(u)$ be arbitrary functions defined on the convex set $U_0$. In this case problem (8), (9) can be written as

$J(u) \to \max;$  (10)
$u \in U = \{u \in E^n \mid u \in U_0, \; g(u) \le 0, \; g_1(u) = 0\}.$  (11)

Optimization problem (10), (11) belongs to the class of so-called nonlinear programming problems. All the presented problems of linear, convex, and nonlinear programming specify concrete ways of defining the set $U$ in $E^n$. If we abstract from the concrete way of defining the set $U$ in $E^n$, the optimization problem in a finite-dimensional space can be written as

$J(u) \to \max; \quad u \in U, \quad U \subseteq E^n.$
Finally, we note that the problem of maximizing the function $J(u)$ on a set $U$ is equivalent to the problem of minimizing the function $-J(u)$ on the set $U$. So further we may restrict ourselves to consideration of the problem

$J(u) \to \min; \quad u \in U, \quad U \subseteq E^n;$  (12)

for instance, in problem (1)–(3), instead of maximizing $J(u)$ we minimize the function $-3u_1 - 5u_2 - 10u_3 - u_4 - 8u_5 \to \min$. The first part of the course "Methods of optimization" is devoted to solution methods for convex, nonlinear, and linear programming.
The question arises: is the problem statement (12) correct in the general case? To answer this, the following definitions from mathematical analysis are necessary.
Definition 1. The point $u_* \in U$ is called a minimum point of the function $J(u)$ on a set $U$ if the inequality $J(u_*) \le J(u)$ holds for all $u \in U$. The value $J(u_*)$ is called the least, or minimum, value of the function $J(u)$ on the set $U$.
The set $U_* = \{u_* \in U \mid J(u_*) = \min_{u \in U} J(u)\}$ contains all minimum points of the function $J(u)$ on the set $U$.
It follows from the definition that the global (or absolute) minimum of the function $J(u)$ on the set $U$ is attained on the set $U_* \subseteq U$. We recall that a point $u_{**} \in U$ is called a local minimum point of the function $J(u)$ if the inequality $J(u_{**}) \le J(u)$ is valid for all $u \in o(u_{**}, \varepsilon) \cap U$, where the set

$o(u_{**}, \varepsilon) = \{u \in E^n \mid |u - u_{**}| < \varepsilon\} \subset E^n$

is an open sphere with centre $u_{**}$ and radius $\varepsilon > 0$, and $|a|$ is the Euclidean norm of the vector $a = (a_1, \ldots, a_n) \in E^n$, i.e. $|a| = \sqrt{\sum_{i=1}^n a_i^2}$.

, and U  u  E 1 1 2  u  1.
Example 1. Let J (u )  cos 2
u

Then set U *  2 3 ; b) U  u  E 1 1 4  u  2 , then set 
U *  2 / 7, 2 / 5, 2 / 3, 2 ; c) U  u  E 1 / 2  u  ,
then set
U *   , where  - empty set.
Example 2. The function J (u )  ln u , set U  u  E 1 0  u  1.
The set U *   .
Example 3. The function J (u )  J 1 (u )  c, c  const , where
function
 | u  a |, если u  a;

J 1 (u )   u  b, если u  b;
c, если a  u  b,

and set U  E 1 . The set U *  a, b  .
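Example 1 is easy to check numerically: the function $J(u) = \cos^2(\pi/u)$ vanishes exactly at the points $u = 2/(2k+1)$, while on the set of case c) the infimum $0$ is approached but never attained. A quick sanity check, not a proof:

```python
import math

def J(u):
    # Example 1: J(u) = cos^2(pi / u)
    return math.cos(math.pi / u) ** 2

# Case b): U = [1/4, 2]; the minimum points are 2/7, 2/5, 2/3, 2.
for u in (2/7, 2/5, 2/3, 2.0):
    print(u, J(u))            # each value is 0 up to rounding error

# Case c): U = (2, +inf); J > 0 everywhere there, J -> 0 only as u -> 2+,
# so the set of minimum points U* is empty.
print(J(2.001), J(100.0))
```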


Definition 2. The function $J(u)$ is said to be bounded below on the set $U$ if there exists a number $M$ such that $J(u) \ge M$ for all $u \in U$. The function $J(u)$ is not bounded below on the set $U$ if there exists a sequence $\{u_k\} \subset U$ such that $\lim_{k \to \infty} J(u_k) = -\infty$.
The function $J(u)$ is bounded below on the set $U$ in Examples 1 and 3, but it is not bounded below on $U$ in Example 2.
Definition 3. Let the function $J(u)$ be bounded from below on the set $U$. Then the value $J_* = \inf_{u \in U} J(u)$ is called the lower bound of the function $J(u)$ on the set $U$ if: 1) $J_* \le J(u)$ for all $u \in U$; 2) for an arbitrarily small number $\varepsilon > 0$ a point $u(\varepsilon) \in U$ is found such that the value $J(u(\varepsilon)) \le J_* + \varepsilon$. When the function $J(u)$ is not bounded from below on the set $U$, the lower bound $J_* = -\infty$.
We notice that for Example 1 the value $J_* = 0$, while for Examples 2 and 3 the values are $J_* = -\infty$ and $J_* = c$ respectively. If the set $U_* \ne \varnothing$, then $J_* = \min_{u \in U} J(u)$ (refer to the examples). The value $J_*$ always exists, but $\min_{u \in U} J(u)$ does not always exist. Since the lower bound $J_*$ of a function $J(u)$ defined on a set $U$ always exists, independently of whether the set $U_*$ is empty or nonempty, problem (12) can be written in the form

$J(u) \to \inf, \quad u \in U, \quad U \subseteq E^n.$  (13)

Since the value $\min_{u \in U} J(u)$ does not exist when the set $U_* = \varnothing$, the correct form of the optimization problem in a finite-dimensional space is (13).
Definition 4. The sequence $\{u_k\} \subset U$ is called minimizing for the function $J(u)$ defined on the set $U$ if $\lim_{k \to \infty} J(u_k) = \inf_{u \in U} J(u) = J_*$.
As follows from the definition of the lower bound, in the case $\varepsilon_k = 1/k$, $k = 1, 2, 3, \ldots$, there are sequences $\{u(\varepsilon_k)\} = \{u(1/k)\} = \{u_k\} \subset U$, $k = 1, 2, \ldots$, for which $J(u_k) \le J_* + 1/k$. Hence the limit $\lim_{k \to \infty} J(u_k) = J_*$. Thus a minimizing sequence $\{u_k\} \subset U$ always exists.

Definition 5. The sequence $\{u_k\} \subset U$ is said to converge to the set $U_* \subseteq U$ if $\lim_{k \to \infty} \rho(u_k, U_*) = 0$, where $\rho(u_k, U_*) = \inf_{u_* \in U_*} |u_k - u_*|$ is the distance from the point $u_k \in U$ to the set $U_*$.
It should be noted that if the set $U_* \ne \varnothing$, then there always exists a minimizing sequence which converges to the set $U_*$. However, the statement that any minimizing sequence converges to the set $U_*$ is, in the general case, untrue.
Example 4. Let the function $J(u) = u^4/(1 + u^6)$, $U = E^1$. For this example the set $U_* = \{0\}$, and the sequences $u_k = 1/k$, $k = 1, 2, \ldots$, and $u_k = k$, $k = 1, 2, \ldots$, both lying in $U$, are minimizing. The first sequence converges to the set $U_*$; the second moves infinitely far away from it. Finally, the source optimization problem has the form (13). We consider the following variants of its solution:
1st problem. Find the value $J_* = \inf_{u \in U} J(u)$. In this case, independently of whether the set $U_*$ is empty or nonempty, problem (13) has a solution.
2nd problem. Find the value $J_* = \inf_{u \in U} J(u)$ and a point $u_* \in U_*$. In order for problem (13) to have a solution it is necessary that the set $U_* \ne \varnothing$.
3rd problem. Find a minimizing sequence $\{u_k\} \subset U$ which converges to the set $U_*$. In this case it is necessary that the set $U_* \ne \varnothing$.
Most often in practice the solution of the 2nd problem is required (refer to the production problem).
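The two minimizing sequences of Example 4 can be tabulated; both drive $J(u_k)$ to $J_* = 0$, but only the first approaches the minimum set $U_* = \{0\}$. A numerical sketch of Definition 5:

```python
def J(u):
    # Example 4: J(u) = u^4 / (1 + u^6), with U* = {0}
    return u ** 4 / (1 + u ** 6)

for k in (1, 10, 100, 1000):
    u1, u2 = 1 / k, float(k)     # the two minimizing sequences
    # J(u1) and J(u2) both tend to 0, but the distance |u2 - 0| grows,
    # so the second sequence does not converge to U*.
    print(k, J(u1), J(u2), abs(u1 - 0), abs(u2 - 0))
```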

Lecture 2
WEIERSTRASS’S THEOREM

We consider the optimization problem

$J(u) \to \inf, \quad u \in U, \quad U \subseteq E^n.$  (1)

It is necessary to find a point $u_* \in U_*$ and the value $J_* = \inf_{u \in U} J(u)$. We notice that if the set $U_* \ne \varnothing$, then $J_* = J(u_*) = \min_{u \in U} J(u)$.
The question arises: what requirements must be imposed on the function $J(u)$ and on the set $U$ so that the set $U_* = \{u_* \in U \mid J(u_*) = \min_{u \in U} J(u)\}$ is nonempty? To answer this, it is necessary to introduce the notions of a compact set and of lower semicontinuity (semicontinuity from below) of the function $J(u)$ on the set $U$.
Compact sets. Let $\{u_k\} \subset E^n$ be a certain sequence. We recall that: a) a point $v \in E^n$ is called a limit point of the sequence $\{u_k\}$ if there exists a subsequence $\{u_{k_m}\}$ for which $\lim_{m \to \infty} u_{k_m} = v$; b) the sequence $\{u_k\} \subset E^n$ is called bounded if there exists a number $M > 0$ such that the norm $|u_k| \le M$ for all $k = 1, 2, 3, \ldots$; c) a set $U \subseteq E^n$ is called bounded if there exists a number $R > 0$ such that the norm $|u| \le R$ for all $u \in U$; d) a point $v \in E^n$ is called a limit point of the set $U$ if any of its $\varepsilon$-neighborhoods $o(v, \varepsilon)$ contains points from $U$ different from $v$; e) for any limit point $v$ of the set $U$ there is a sequence $\{u_k\} \subset U$ for which $\lim_{k \to \infty} u_k = v$; f) a set $U \subseteq E^n$ is called closed if it contains all of its own limit points.
Definition 1. A set $U \subset E^n$ is called compact if any sequence $\{u_k\} \subset U$ has at least one limit point $v$, and moreover $v \in U$.
It is easy to make sure that this definition is equivalent to the statement, known from the course of mathematical analysis, that any bounded and closed set is compact. In fact, according to the Bolzano-Weierstrass theorem any bounded sequence has at least one limit point (the set $U$ is bounded), while from the inclusion $v \in U$ the closedness of the set $U$ follows.
Lower semicontinuity. Let $\{u_k\} \subset E^n$ be a sequence. Then $\{J_k\} = \{J(u_k)\}$ is a number sequence. We notice that: a) the numerical set $\{J_k\}$ is bounded from below if there exists a number $\alpha$ such that $J_k \ge \alpha$, $k = 1, 2, 3, \ldots$; b) the numerical set $\{J_k\}$ is not bounded from below if there exists a subsequence $\{J_{k_m}\}$ such that the limit $\lim_{m \to \infty} J_{k_m} = -\infty$.
Definition 2. The lower limit of a bounded-from-below numeric sequence $\{J_k\}$ is the value $a$, denoted $a = \varliminf_{k \to \infty} J_k$, if: 1) there exists a subsequence $\{J_{k_m}\}$ for which $\lim_{m \to \infty} J_{k_m} = a$; 2) all other limit points of the sequence $\{J_k\}$ are not less than the value $a$. If the numeric sequence $\{J_k\}$ is not bounded from below, then the value $a = -\infty$.
Example 1. Let $J_k = 1 + (-1)^k$, $k = 0, 1, 2, \ldots$. The value $a = 0$.

Definition 3. The function $J(u)$ defined on a set $U \subseteq E^n$ is said to be lower semicontinuous at a point $u \in U$ if for any sequence $\{u_k\} \subset U$ for which $\lim_{k \to \infty} u_k = u$, the inequality $\varliminf_{k \to \infty} J(u_k) \ge J(u)$ holds. The function $J(u)$ is lower semicontinuous on the set $U$ if it is lower semicontinuous at each point of the set $U$.
Example 2. Let the set $U = \{u \in E^1 \mid -1 \le u \le 1\}$, and let the function $J(u) = u^2$ for $0 < |u| \le 1$, $J(0) = -1$. The function $J(u)$ is continuous on the set $0 < |u| \le 1$; consequently it is lower semicontinuous on that set. We show that the function $J(u)$ is lower semicontinuous at the point $u = 0$ as well. In fact, the sequence $u_k = 1/k$, $k = 1, 2, \ldots$, belongs to the set $U$ and its limit equals zero. The numeric sequence $J(u_k) = 1/k^2$, and the limit $\varliminf_{k \to \infty} J(u_k) = \lim_{k \to \infty} J(u_k) = 0$. Consequently, $\varliminf_{k \to \infty} J(u_k) = 0 \ge -1 = J(0)$. It means the function $J(u)$ is lower semicontinuous on the set $U$.
Similarly, one can introduce the notion of upper semicontinuity of the function $J(u)$ on the set $U$. We notice that if the function $J(u)$ is continuous at a point $u \in U$, then it is semicontinuous at this point both from below and from above.
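Definition 3 for Example 2 can be probed in code by sampling sequences that converge to $u = 0$ and comparing the limit of the values $J(u_k)$ with $J(0) = -1$. This is an illustration over a few sequences, not a proof:

```python
def J(u):
    # Example 2: J(u) = u^2 for 0 < |u| <= 1, J(0) = -1
    return u * u if u != 0 else -1.0

# Sequences converging to 0 from the right and from the left.
for seq in ([1 / k for k in range(1, 1001)],
            [-1 / k for k in range(1, 1001)]):
    tail = [J(u) for u in seq[-10:]]   # values J(u_k) for large k
    # The tail values approach 0, and 0 >= J(0) = -1, which is
    # exactly the lower-semicontinuity inequality at u = 0.
    print(min(tail), J(0))
```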
The most convenient way to check the lower semicontinuity of the function $J(u)$ on the set $U$ is given by the following lemma.
Lemma. Let the function $J(u)$ be defined on a closed set $U \subseteq E^n$. In order for the function $J(u)$ to be lower semicontinuous on the set $U$, it is necessary and sufficient that the Lebesgue set $M(c) = \{u \in E^n \mid u \in U, \; J(u) \le c\}$ be closed for all $c \in E^1$.
Proof. Necessity. Let the function $J(u)$ be lower semicontinuous on the closed set $U$. We show that the set $M(c)$ is closed for all $c \in E^1$. We notice that the empty set is considered closed. Let $v$ be any limit point of the set $M(c)$. From the definition of a limit point follows the existence of a sequence $\{u_k\} \subset M(c)$ which converges to the point $v$. From the inclusion $\{u_k\} \subset M(c)$ it follows that the value $J(u_k) \le c$, $k = 1, 2, \ldots$. Taking into account that $\{u_k\} \subset U$ and the lower semicontinuity of $J(u)$ on $U$, we get $J(v) \le \varliminf_{k \to \infty} J(u_k) \le c$. Consequently, the point $v \in M(c)$. The closedness of the set $M(c)$ is proved.
Sufficiency. Let $U$ be a closed set, and let the set $M(c)$ be closed for all $c \in E^1$. We show that the function $J(u)$ is lower semicontinuous on $U$. Let $\{u_k\} \subset U$ be a sequence which converges to a point $u \in U$. We consider the numeric sequence $\{J(u_k)\}$ and let the value $a = \varliminf_{k \to \infty} J(u_k)$. By the definition of the lower limit there exists a subsequence $\{J(u_{k_m})\}$ for which $\lim_{m \to \infty} J(u_{k_m}) = a$. Consequently, for any sufficiently small $\varepsilon > 0$ a number $N = N(\varepsilon)$ is found such that $J(u_{k_m}) \le a + \varepsilon$ for $m \ge N$. Hence, taking into account that $\lim_{m \to \infty} u_{k_m} = u$ and $u_{k_m} \in M(a + \varepsilon)$, we have $J(u) \le a + \varepsilon$. Then, since $\varepsilon > 0$ is arbitrary, we can write the inequality $J(u) \le \varliminf_{k \to \infty} J(u_k) = a$, i.e. the function $J(u)$ is lower semicontinuous at the point $u \in U$. Since the point $u \in U$ was arbitrary, the function $J(u)$ is lower semicontinuous at every point of the set $U$. The lemma is proved.
We notice, in particular, that for $c = J_*$ it follows from the lemma that the set $M(J_*) = \{u \in E^n \mid u \in U, \; J(u) \le J_*\} = U_*$ is closed.
Theorem 1. Let the function $J(u)$ be defined, finite, and lower semicontinuous on a compact set $U \subset E^n$. Then $J_* = \inf_{u \in U} J(u) > -\infty$, the set

$U_* = \{u_* \in E^n \mid u_* \in U, \; J(u_*) = \min_{u \in U} J(u)\}$

is nonempty and compact, and any minimizing sequence converges to the set $U_*$.
Proof. Let $\{u_k\} \subset U$ be any minimizing sequence, i.e. $\lim_{k \to \infty} J(u_k) = J_*$. We notice that such a minimizing sequence always exists. Let $u_*$ be any limit point of the minimizing sequence; consequently, there exists a subsequence $\{u_{k_m}\} \subset U$ for which $\lim_{m \to \infty} u_{k_m} = u_*$. Owing to the compactness of the set $U$, all limit points of the minimizing sequence belong to the set $U$.
As follows from the definition of the lower bound and the lower semicontinuity of the function $J(u)$ on the set $U$, the following inequalities hold:

$J_* \le J(u_*) \le \varliminf_{m \to \infty} J(u_{k_m}) = \lim_{k \to \infty} J(u_k) = J_*.$  (2)

Since the sequence $\{J(u_k)\}$ converges to the value $J_*$, any of its subsequences also converges to the value $J_*$. From inequality (2) we get $J(u_*) = J_*$. Consequently, the set $U_* \ne \varnothing$ and $J_* = J(u_*) > -\infty$. Since this statement is valid for any limit point of the minimizing sequence, we may confirm that any minimizing sequence from $U$ converges to the set $U_*$.
We show that the set $U_* \subseteq U$ is compact. Let $\{w_k\}$ be any sequence taken from the set $U_*$. From the inclusion $\{w_k\} \subset U_*$ it follows that $\{w_k\} \subset U$. Then, by the compactness of the set $U$, there exists a subsequence $\{w_{k_m}\}$ which converges to a point $w_* \in U$. Since $\{w_k\} \subset U_*$, the values $J(w_k) = J_*$, $k = 1, 2, \ldots$. Consequently, the sequence $\{w_k\} \subset U$ is minimizing. Then, by what was proved above, the limit point of this sequence $w_* \in U_*$. Thus the closedness of the set $U_*$ is proved. The boundedness of the set $U_*$ follows from the inclusion $U_* \subseteq U$. The compactness of the set $U_*$ is proved. Theorem is proved.
The set $U$ is often unbounded in applied problems. In such cases the following theorems are useful.
Theorem 2. Let the function $J(u)$ be defined, finite, and lower semicontinuous on a nonempty closed set $U \subseteq E^n$. Let for a certain point $v \in U$ the Lebesgue set

$M(v) = \{u \in E^n \mid u \in U, \; J(u) \le J(v)\}$

be bounded. Then $J_* > -\infty$, the set $U_*$ is nonempty and compact, and any minimizing sequence $\{u_k\} \subset M(v)$ converges to the set $U_*$.
Proof. Since all conditions of the lemma are satisfied, the set $M(v)$ is closed. From the boundedness and closedness of the set $M(v)$ follows its compactness. The set $U = M(v) \cup M_1(v)$, where the set $M_1(v) = \{u \in E^n \mid u \in U, \; J(u) > J(v)\}$; moreover, on the set $M_1(v)$ the function $J(u)$ does not attain its lower bound $J_*$. The rest repeats the proof of Theorem 1 with the set $U$ replaced by the compact set $M(v)$. Theorem is proved.
Theorem 3. Let the function $J(u)$ be defined, finite, and lower semicontinuous on a nonempty closed set $U \subseteq E^n$. Let for any sequence $\{v_k\} \subset U$ with $|v_k| \to \infty$ as $k \to \infty$ the equality $\lim_{k \to \infty} J(v_k) = +\infty$ hold. Then $J_* > -\infty$, the set $U_*$ is nonempty and compact, and any minimizing sequence $\{u_k\} \subset U$ converges to the set $U_*$.
Proof. Since $\lim_{k \to \infty} J(v_k) = +\infty$, there exists a point $v \in U$ such that $J(v) \ge J_*$ and $J(v) < +\infty$. We introduce the Lebesgue set $M(v) = \{u \in E^n \mid u \in U, \; J(u) \le J(v)\}$. By the lemma the set $M(v)$ is closed. It is easy to show that the set $M(v)$ is bounded. In fact, if the set $M(v)$ were not bounded, there would exist a sequence $\{w_k\} \subset M(v)$ such that $|w_k| \to \infty$ as $k \to \infty$. By the condition of the theorem, for such a sequence the value $J(w_k) \to +\infty$ as $k \to \infty$, which is impossible, since $J(w_k) \le J(v) < +\infty$. Finally, the set $M(v)$ is bounded and closed, and consequently compact.
The rest repeats the proof of Theorem 1 for the set $M(v)$. Theorem is proved.
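Theorem 3 is the variant used most often on unbounded sets. A sketch for $U = E^1$ with the coercive function $J(u) = u^4 - 3u^2$ (our own illustrative choice: $J(v_k) \to +\infty$ whenever $|v_k| \to \infty$, so $U_* \ne \varnothing$ by Theorem 3); a crude grid search plays the role of a minimizing sequence:

```python
def J(u):
    # Coercive on U = E^1: J(u) -> +inf as |u| -> inf, so Theorem 3 applies.
    return u ** 4 - 3 * u ** 2

# The Lebesgue set M(0) = {u : J(u) <= J(0) = 0} lies inside [-3, 3],
# so it is enough to search that segment with finer and finer grids.
n = 6
best = 0.0
for _ in range(12):
    grid = [i / n for i in range(-3 * n, 3 * n + 1)]
    best = min(grid, key=J)
    n *= 2
print(best, J(best))   # approaches a minimum point, J* = -9/4
```

Here the minimum points are $u_* = \pm\sqrt{3/2}$ with $J_* = -9/4$, which the grid values approach as the step is refined.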

Chapter I. CONVEX PROGRAMMING.
ELEMENTS OF CONVEX ANALYSIS

Among the methods for solving optimization problems in a finite-dimensional space, the most complete in nature is the method for solving convex programming problems. Convex analysis, which has been developing intensively in recent years, studies those characteristics of convex sets and functions which allow generalizing the known methods for solving problems on a conditional extremum.

Lecture 3
CONVEX SETS

In applied research one often meets convex programming problems of the following type:

$J(u) \to \inf,$
$u \in U = \{u \in E^n \mid u \in U_0, \; g_i(u) \le 0, \; i = \overline{1,m}; \; g_i(u) = \langle a_i, u \rangle - b_i = 0, \; i = \overline{m+1,s}\},$  (1)

where $J(u)$, $g_i(u)$, $i = \overline{1,s}$, are convex functions defined on a convex set $U_0 \subseteq E^n$.

Definition 1. A set $U \subseteq E^n$ is called convex if for any $u \in U$, $v \in U$ and all $\alpha$, $0 \le \alpha \le 1$, the point $u_\alpha = \alpha u + (1 - \alpha)v = v + \alpha(u - v) \in U$.
Example 1. We show that the closed sphere (ball) $S(u_0, R) = \{u \in E^n \mid |u - u_0| \le R\}$ is a convex set. Let the points $u \in S(u_0, R)$, $v \in S(u_0, R)$, i.e. the norms $|u - u_0| \le R$, $|v - u_0| \le R$. We take a number $\alpha \in [0, 1]$ and define the point $u_\alpha = \alpha u + (1 - \alpha)v$. The norm

$|u_\alpha - u_0| = |\alpha u + (1 - \alpha)v - u_0| = |\alpha(u - u_0) + (1 - \alpha)(v - u_0)| \le \alpha|u - u_0| + (1 - \alpha)|v - u_0| \le \alpha R + (1 - \alpha)R = R.$

Consequently, the point $u_\alpha \in S(u_0, R)$, and the set $S(u_0, R)$ is convex.


Example 2. We show that the hyperplane $\Gamma = \{u \in E^n \mid \langle c, u \rangle = \gamma\}$ is a convex set, where $c \in E^n$ is a vector and $\gamma$ is a number. Let the points $u \in \Gamma$, $v \in \Gamma$; consequently, the scalar products $\langle c, u \rangle = \gamma$, $\langle c, v \rangle = \gamma$. Let $u_\alpha = \alpha u + (1 - \alpha)v$, $\alpha \in [0, 1]$. Then the scalar product

$\langle c, u_\alpha \rangle = \langle c, \alpha u + (1 - \alpha)v \rangle = \alpha\langle c, u \rangle + (1 - \alpha)\langle c, v \rangle = \alpha\gamma + (1 - \alpha)\gamma = \gamma.$

Hence it follows that the point $u_\alpha \in \Gamma$ and the set $\Gamma$ is convex.



Example 3. We show that the affine set $M = \{u \in E^n \mid Au = b\}$ is convex, where $A$ is a constant matrix of order $m \times n$ and $b \in E^m$ is a vector. For the point $u_\alpha = \alpha u + (1 - \alpha)v$, $u \in M$, $v \in M$, $\alpha \in [0, 1]$, we have

$Au_\alpha = A(\alpha u + (1 - \alpha)v) = \alpha Au + (1 - \alpha)Av = \alpha b + (1 - \alpha)b = b.$

Hence it follows that $u_\alpha \in M$ and the set $M$ is convex.


Let $u^1, u^2, \ldots, u^{n-r}$ be linearly independent solutions of the linear homogeneous system $Au = 0$, where $r$ is the rank of the matrix $A$. Then the set $M$ can be represented in the form

$M = \{u \in E^n \mid u = u_0 + v, \; v \in L\}, \qquad L = \left\{v \in E^n \mid v = \sum_{i=1}^{n-r} \alpha_i u^i\right\},$

where the vector $u_0 \in E^n$ is a particular solution of the nonhomogeneous system $Au = b$, $L$ is the subspace of dimension $n - r$ spanned by the vectors $u^1, u^2, \ldots, u^{n-r}$, $\alpha_i$, $i = \overline{1, n-r}$, are numbers, and the dimension of the affine set $M$ is taken equal to the dimension of the space $L$.
Definition 2. The affine hull of an arbitrary set $U \subseteq E^n$ is the intersection of all affine sets containing the set $U$; it is denoted by $\mathrm{aff}\,U$. The dimension of the set $U$ is the dimension of its affine hull; it is denoted by $\dim U$.
Since the intersection of any number of convex sets is a convex set, the set $\mathrm{aff}\,U$ is convex. Instead of the source problem $J(u) \to \inf$, $u \in U$, where $U$ is an arbitrary set, one considers the approximate problem $J(u) \to \inf$, $u \in \mathrm{aff}\,U$, since the solution of the latter problem is in many cases simpler than that of the source one.
Definition 3. The set $A$ is called the sum of the sets $A_1, A_2, \ldots, A_m$, i.e. $A = A_1 + A_2 + \cdots + A_m$, if it contains those and only those points $a = \sum_{i=1}^m a_i$, $a_i \in A_i$, $i = \overline{1,m}$. The set $A$ is called the difference of the sets $B$ and $C$, i.e. $A = B - C$, if it contains those and only those points $a = b - c$, $b \in B$, $c \in C$. The set $A = \lambda D$, where $\lambda$ is a real number, if it contains the points $a = \lambda d$, $d \in D$.
Theorem 1. If the sets $A_1, A_2, \ldots, A_m, B, C, D$ are convex, then the sets $A = A_1 + A_2 + \cdots + A_m$, $A = B - C$, $A = \lambda D$ are convex.
Proof. Let the points $a = \sum_{i=1}^m a_i \in A$ and $e = \sum_{i=1}^m e_i \in A$, where $a_i, e_i \in A_i$, $i = \overline{1,m}$. Then the point $u_\alpha = \alpha a + (1 - \alpha)e = \sum_{i=1}^m [\alpha a_i + (1 - \alpha)e_i]$, $\alpha \in [0, 1]$. Since the sets $A_i$, $i = \overline{1,m}$, are convex, $\alpha a_i + (1 - \alpha)e_i \in A_i$, $i = \overline{1,m}$. Consequently, the point $u_\alpha = \sum_{i=1}^m \bar a_i$, where $\bar a_i = \alpha a_i + (1 - \alpha)e_i \in A_i$, $i = \overline{1,m}$. Hence the point $u_\alpha \in A$. The convexity of the set $A$ is proved.
We show that the set $A = B - C$ is convex. Let the points $a = b - c \in A$ and $e = b_1 - c_1 \in A$, where $b, b_1 \in B$, $c, c_1 \in C$. The point $a_\alpha = \alpha a + (1 - \alpha)e = \alpha(b - c) + (1 - \alpha)(b_1 - c_1) = [\alpha b + (1 - \alpha)b_1] - [\alpha c + (1 - \alpha)c_1]$, $\alpha \in [0, 1]$. Since the sets $B$ and $C$ are convex, the points $\bar b = \alpha b + (1 - \alpha)b_1 \in B$ and $\bar c = \alpha c + (1 - \alpha)c_1 \in C$. Then the point $a_\alpha = \bar b - \bar c \in A$, $\bar b \in B$, $\bar c \in C$. It follows that the set $A$ is convex.
From $a = \lambda d_1 \in A$, $e = \lambda d_2 \in A$, $d_1, d_2 \in D$ it follows that $a_\alpha = \alpha a + (1 - \alpha)e = \alpha\lambda d_1 + (1 - \alpha)\lambda d_2 = \lambda[\alpha d_1 + (1 - \alpha)d_2]$. Then $a_\alpha = \lambda\bar d \in A$, where $\bar d = \alpha d_1 + (1 - \alpha)d_2 \in D$. Theorem is proved.
Let $U$ be a set from $E^n$, and let $v \in E^n$ be a certain point. Exactly one of the following possibilities holds:
a) A number $\varepsilon > 0$ is found such that the set $o(v, \varepsilon) \subset U$. In this case the point $v$ is an interior point of the set $U$. We denote by $\mathrm{int}\,U$ the set of all interior points of the set $U$.
b) For some $\varepsilon > 0$ the set $o(v, \varepsilon)$ contains not a single point of the set $U$. The point $v$ is called exterior with respect to $U$.
c) For every $\varepsilon > 0$ the set $o(v, \varepsilon)$ contains both points from the set $U$ and points from the set $E^n \setminus U$. The point $v$ is called a boundary point of the set $U$. The set of all boundary points of the set $U$ is denoted by $\partial U$.
d) The point $v \in U$, but for some $\varepsilon > 0$ the set $o(v, \varepsilon)$ contains no point from the set $U$ except the point $v$ itself. The point $v$ is called an isolated point of the set $U$. A convex set containing more than one point has no isolated points.
Definition 4. A point $v \in U$ is called a relatively interior point of the set $U$ if for some $\varepsilon > 0$ the intersection $o(v, \varepsilon) \cap \mathrm{aff}\,U \subset U$. We denote by $\mathrm{ri}\,U$ the set of all relatively interior points of the set $U$.
Example 4. Let the set $U = \{u \in E^1 \mid 0 \le u \le 1 \text{ or } 2 \le u \le 3\} \subset E^1$. For this example

$\mathrm{int}\,U = \{u \in E^1 \mid 0 < u < 1 \text{ or } 2 < u < 3\}, \quad \mathrm{aff}\,U = E^1, \quad \mathrm{ri}\,U = \mathrm{int}\,U$

(the affine hull of any subset of $E^1$ containing at least two points is the whole line $E^1$).
The following statements are valid:
1) If $A$ is a convex set, then its closure $\bar A$ is convex too. In fact, if $a, b \in \bar A$, then there exist sequences $\{a_k\} \subset A$, $\{b_k\} \subset A$ such that $a_k \to a$, $b_k \to b$ as $k \to \infty$. By the convexity of the set $A$ the point $\alpha a_k + (1 - \alpha)b_k \in A$, $k = 1, 2, 3, \ldots$. Then the limit point $\lim_{k \to \infty} [\alpha a_k + (1 - \alpha)b_k] = \alpha a + (1 - \alpha)b \in \bar A$ for all $\alpha \in [0, 1]$. Hence the convexity of the set $\bar A$ follows.
2) If $U$ is a convex set and $\mathrm{int}\,U \ne \varnothing$, then the point $v_\alpha = v + \alpha(u_0 - v) \in \mathrm{int}\,U$ for $u_0 \in \mathrm{int}\,U$, $v \in \bar U$, $0 < \alpha \le 1$. (Prove this.)
3) If $U$ is a convex set, then $\mathrm{int}\,U$ is convex too. (Prove this.)
4) If $U$ is a convex nonempty set, then $\mathrm{ri}\,U \ne \varnothing$ and $\mathrm{ri}\,U$ is convex. (Prove this.)
5) If $U$ is a convex set and $\mathrm{ri}\,U \ne \varnothing$, then the point $v_\alpha = v + \alpha(u_0 - v) \in \mathrm{ri}\,U$ for $u_0 \in \mathrm{ri}\,U$, $v \in \bar U$, $0 < \alpha \le 1$. (Prove this.)
Definition 5. A point $u \in E^n$ is called a convex combination of the points $u^1, u^2, \ldots, u^m$ from $E^n$ if it can be represented in the form $u = \sum_{i=1}^m \alpha_i u^i$, where the numbers $\alpha_i \ge 0$, $i = \overline{1,m}$, and their sum $\alpha_1 + \alpha_2 + \cdots + \alpha_m = 1$.
Theorem 2. A set $U$ is convex if and only if it contains all convex combinations of any finite number of its own points.
Proof. Necessity. Let $U$ be a convex set. We show that it contains all convex combinations of a finite number of its own points, using the method of induction. From the definition of a convex set the statement holds for any two points from $U$. Suppose that the set $U$ contains the convex combinations of $m - 1$ of its own points, i.e. the point $v = \sum_{i=1}^{m-1} \beta_i u^i \in U$ whenever $\beta_i \ge 0$, $i = \overline{1,m-1}$, $\beta_1 + \cdots + \beta_{m-1} = 1$. We prove that it contains the convex combinations of $m$ of its own points. In fact, for $\alpha_m < 1$ (the case $\alpha_m = 1$ is trivial) we have

$u = \alpha_1 u^1 + \cdots + \alpha_m u^m = (1 - \alpha_m)\sum_{i=1}^{m-1} \frac{\alpha_i}{1 - \alpha_m}\, u^i + \alpha_m u^m.$

We denote $\beta_i = \alpha_i/(1 - \alpha_m)$. We notice that $\beta_i \ge 0$, $i = \overline{1,m-1}$, and the sum $\beta_1 + \cdots + \beta_{m-1} = 1$, since $\alpha_1 + \cdots + \alpha_{m-1} = 1 - \alpha_m$, $\alpha_i \ge 0$, $\alpha_1 + \alpha_2 + \cdots + \alpha_m = 1$. Then the point $u = (1 - \alpha_m)v + \alpha_m u^m$, $v \in U$, $u^m \in U$. Hence, by the convexity of the set $U$, the point $u \in U$. Necessity is proved.
Sufficiency. Let the set $U$ contain all convex combinations of any finite number of its own points. We show that $U$ is a convex set. In particular, for $m = 2$ we have $u_\alpha = \alpha u^1 + (1 - \alpha)u^2 \in U$ for any $u^1, u^2 \in U$ and all $\alpha \in [0, 1]$. This inclusion means that $U$ is a convex set. Theorem is proved.
Definition 6. The convex hull of an arbitrary set U ⊂ E^n is the intersection of all convex sets containing the set U; it is denoted by Co U.
From this definition it follows that Co U is the smallest convex set containing the set U. We note that the original problem J(u) → inf, u ∈ U, with an arbitrary set U ⊂ E^n can be replaced by the approximate problem J(u) → inf, u ∈ Co U. We also note that if U is a closed and bounded (compact) set, then Co U is also bounded and closed (compact).
Theorem 3. The set Co U consists of exactly those points which are convex combinations of a finite number of points from U.
Proof. For the proof of the theorem it suffices to show that Co U = W, where W is the set of all points which are convex combinations of a finite number of points of the set U. The inclusion W ⊂ Co U follows from Theorem 2, since U ⊂ Co U and the set Co U is convex. On the other hand, if the points u, v ∈ W, i.e.

u = Σ_{i=1}^m α_i u^i, α_i ≥ 0, i = 1,...,m, Σ_{i=1}^m α_i = 1; v = Σ_{i=1}^p β_i v^i, β_i ≥ 0, i = 1,...,p, Σ_{i=1}^p β_i = 1,

then for all α ∈ [0,1] the point

u_α = αu + (1−α)v = Σ_{i=1}^m γ_i u^i + Σ_{i=1}^p δ_i v^i, where γ_i = αα_i ≥ 0, i = 1,...,m, δ_i = (1−α)β_i ≥ 0, i = 1,...,p,

and moreover the sum Σ_{i=1}^m γ_i + Σ_{i=1}^p δ_i = 1. Hence it follows that u_α ∈ W; consequently, the set W is convex. Since U ⊂ W, the
Theorem 4. Any point u  Co U can be presented as convex
combination no more than n  1 points from U .
Proof. As follows from theorem 3, the point u  Co U is
m m
represented in the manner of  u ,
i 1
i
i
 i  0, i 1
i  1 . We

suppose, that number m  n  1 . Then n  1  dimensional vectors


 
ui  u i ,1 , i  1, m, m  n  1 are linearly dependent. Consequently,
there are the numbers  1 ,...,  m , not all equal to zero, such that sum
m m m
  i ui  0 . Thence follows, that   u i
i
 0,  i  0 . Using
i 1 i 1 i 1

first equality, the point u we present in the manner of


m m m m m
u   i u i   i u i  t   i u i 
i 1 i 1 i 1
  i  t i u i    i u i ,
i 1 i 1
m
where  i   i  t i  0, 
i 1
i  1 under enough small t . Let

 i*  min  i , amongst  i  0 , where index i, 1  i  m . We


choose the number t from the condition  i* /  i*  t . Since
m


i 1
i  0 , that such  i*  0 always exists. Now the point

m m m
u    i u i    i u i , i  0,  i  1. (2)
i 1 i 1 i 1
i  i* i  i*

Specified technique are used for any m  n  1 . Iterating the


given process, eventually get m  n  1 . Theorem is proved.
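The reduction step used in the proof of Theorem 4 can be sketched numerically (an illustrative implementation with naming of our own; the dependence vector γ is obtained as a null-space direction of the matrix with rows (u^i, 1) via SVD):

```python
import numpy as np

# Sketch of the Caratheodory reduction from the proof of Theorem 4: while a
# convex combination uses m > n + 1 points, find gamma with
# sum_i gamma_i u^i = 0 and sum_i gamma_i = 0, then shift the weights by
# t = alpha_{i*}/gamma_{i*} so that one weight vanishes.
def caratheodory_reduce(points, alphas):
    points = np.asarray(points, dtype=float)   # shape (m, n)
    alphas = np.asarray(alphas, dtype=float)   # shape (m,)
    while len(points) > points.shape[1] + 1:
        # columns (u^i, 1) are linearly dependent for m > n + 1
        A = np.hstack([points, np.ones((len(points), 1))]).T
        _, _, vt = np.linalg.svd(A)
        gamma = vt[-1]                          # null-space direction, A @ gamma ~ 0
        pos = gamma > 1e-12                     # gamma has entries of both signs
        t = np.min(alphas[pos] / gamma[pos])    # t = alpha_{i*}/gamma_{i*}
        beta = alphas - t * gamma               # new weights; one of them is zero
        keep = beta > 1e-12
        points, alphas = points[keep], beta[keep] / beta[keep].sum()
    return points, alphas

pts = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0], [1.0, 1.0]])
w = np.full(5, 0.2)
u = w @ pts                                     # the represented point
new_pts, new_w = caratheodory_reduce(pts, w)    # at most n + 1 = 3 points remain
```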
Definition 7. The convex hull spanned by the points u⁰, u¹, ..., u^m from E^n is called an m-dimensional simplex if the vectors u^i − u⁰, i = 1,...,m, are linearly independent; it is denoted by S_m. The points u⁰, u¹, ..., u^m are called the vertices of the simplex.
The set S_m is a convex polyhedron of dimension m, and by Theorem 3 it is represented in the form

S_m = {u ∈ E^n : u = Σ_{i=0}^m α_i u^i, α_i ≥ 0, i = 0,...,m, Σ_{i=0}^m α_i = 1}.
Lecture 4
CONVEX FUNCTIONS

The convex programming problem in its general form is formulated as follows: J(u) → inf, u ∈ U, U ⊂ E^n, where U is a convex set from E^n and J(u) is a convex function defined on the convex set U.
Definition 1. Let the function J(u) be defined on a convex set U from E^n. The function J(u) is called convex on the set U if for any points u, v ∈ U and all α, 0 ≤ α ≤ 1, the inequality

J(αu + (1−α)v) ≤ αJ(u) + (1−α)J(v) (1)

holds. If for u ≠ v equality in (1) is possible only for α = 0 and α = 1, then the function J(u) is called strictly convex on the convex set U. The function J(u) is concave (strictly concave) if the function −J(u) is convex (strictly convex) on the set U.
Definition 2. Let the function J(u) be defined on a convex set U. The function J(u) is called strongly convex on the set U if there exists a constant κ > 0 such that for any points u, v ∈ U and all α, 0 ≤ α ≤ 1, the inequality

J(αu + (1−α)v) ≤ αJ(u) + (1−α)J(v) − κα(1−α)|u − v|² (2)

holds.

Example 1. Let the function J(u) = ⟨c, u⟩ be defined on a convex set U from E^n. We show that J(u) = ⟨c, u⟩ is a convex function on U. In fact, let u, v ∈ U be arbitrary points and let the number α ∈ [0,1]; then the point u_α = αu + (1−α)v ∈ U by the convexity of the set U. Then

J(u_α) = ⟨c, u_α⟩ = α⟨c, u⟩ + (1−α)⟨c, v⟩ = αJ(u) + (1−α)J(v).

In this case relation (1) holds with the equality sign. The symbol ⟨·,·⟩ denotes a scalar product.
Example 2. Let the function J(u) = |u|² be defined on a convex set U ⊂ E^n. We show that J(u) is a strongly convex function on the set U. In fact, for u, v ∈ U and all α ∈ [0,1] the point u_α = αu + (1−α)v ∈ U. The value

J(u_α) = |αu + (1−α)v|² = ⟨αu + (1−α)v, αu + (1−α)v⟩ = α²|u|² + 2α(1−α)⟨u, v⟩ + (1−α)²|v|². (3)

The scalar product

⟨u − v, u − v⟩ = |u − v|² = |u|² − 2⟨u, v⟩ + |v|².

Hence 2⟨u, v⟩ = |u|² + |v|² − |u − v|². Substituting this equality into the right-hand side of expression (3), we get

J(u_α) = α|u|² + (1−α)|v|² − α(1−α)|u − v|² = αJ(u) + (1−α)J(v) − α(1−α)|u − v|².

Finally, relation (2) holds with the equality sign, and moreover with the constant κ = 1.
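The identity obtained in Example 2 can be checked numerically; the following sketch (not part of the original text) verifies that relation (2) holds with the equality sign and κ = 1 on random points:

```python
import numpy as np

# Check of Example 2: for J(u) = |u|^2,
#   J(alpha*u + (1-alpha)*v)
#     = alpha*J(u) + (1-alpha)*J(v) - alpha*(1-alpha)*|u - v|^2.
rng = np.random.default_rng(0)
J = lambda u: float(u @ u)
for _ in range(100):
    u, v = rng.normal(size=5), rng.normal(size=5)
    alpha = rng.uniform()
    lhs = J(alpha * u + (1 - alpha) * v)
    rhs = alpha * J(u) + (1 - alpha) * J(v) - alpha * (1 - alpha) * J(u - v)
    assert abs(lhs - rhs) < 1e-10
print("relation (2) holds with kappa = 1 for J(u) = |u|^2")
```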
Example 3. The function J(u) = |u|², defined on a convex set U ⊂ E^n, is strictly convex. In fact, by the identity of Example 2, the value

J(αu + (1−α)v) = αJ(u) + (1−α)J(v) − α(1−α)|u − v|² ≤ αJ(u) + (1−α)J(v),

and for u ≠ v equality is possible only for α = 0 and α = 1, since α(1−α)|u − v|² > 0 for 0 < α < 1.
In the general case, checking the convexity or strong convexity of a function J(u) on a convex set U directly by Definitions 1, 2 is rather complicated. In such cases the following theorems are useful.
Convexity criteria for smooth functions. Let C¹(U), C²(U) be, respectively, the spaces of continuously differentiable and twice continuously differentiable functions J(u) defined on the set U. We note that the gradient

J′(u) = (∂J(u)/∂u₁, ..., ∂J(u)/∂u_n) ∈ E^n

for any fixed u ∈ U.
Theorem 1. For a function J(u) ∈ C¹(U) to be convex on the convex set U ⊂ E^n it is necessary and sufficient that the inequality

J(u) − J(v) ≥ ⟨J′(v), u − v⟩, ∀u, v ∈ U (4)

hold.

Proof. Necessity. Let the function J(u) ∈ C¹(U) be convex on U. We show that inequality (4) holds. From inequality (1) we have

J(v + α(u − v)) − J(v) ≤ α[J(u) − J(v)], α ∈ [0,1], u, v ∈ U.

Hence, on the basis of the formula of finite increments, we get

α⟨J′(v + θα(u − v)), u − v⟩ ≤ α[J(u) − J(v)], 0 ≤ θ ≤ 1.

Dividing both sides of this inequality by α > 0 and passing to the limit as α → 0, with J(u) ∈ C¹(U) taken into account, we get inequality (4). Necessity is proved.
Sufficiency. Let inequality (4) hold for a function J(u) ∈ C¹(U) and a convex set U. We show that J(u) is convex on U. Since the set U is convex, the point u_α = αu + (1−α)v ∈ U, ∀u, v ∈ U, α ∈ [0,1]. Then from inequality (4) it follows that

J(u) − J(u_α) ≥ ⟨J′(u_α), u − u_α⟩, J(v) − J(u_α) ≥ ⟨J′(u_α), v − u_α⟩, ∀u, u_α, v ∈ U.

We multiply the first inequality by α and the second by the number 1 − α and add them; since α(u − u_α) + (1−α)(v − u_α) = 0, as a result we get αJ(u) + (1−α)J(v) ≥ J(u_α). This implies the convexity of the function J(u) on the set U. The theorem is proved.
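Inequality (4) of Theorem 1 is easy to illustrate numerically; the following sketch uses a smooth convex function of our own choosing, J(u) = Σ_i e^{u_i}, with gradient J′(u) = (e^{u_1}, ..., e^{u_n}):

```python
import numpy as np

# Spot-check of inequality (4): J(u) - J(v) >= <J'(v), u - v> must hold
# for every u, v when J is convex and continuously differentiable.
J = lambda u: np.exp(u).sum()
grad_J = lambda u: np.exp(u)

rng = np.random.default_rng(1)
for _ in range(200):
    u, v = rng.normal(size=4), rng.normal(size=4)
    assert J(u) - J(v) >= grad_J(v) @ (u - v) - 1e-12
print("inequality (4) confirmed on random samples")
```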
Theorem 2. For a function J(u) ∈ C¹(U) to be convex on the convex set U it is necessary and sufficient that the inequality

⟨J′(u) − J′(v), u − v⟩ ≥ 0, ∀u, v ∈ U (5)

hold.

Proof. Necessity. Let the function J(u) ∈ C¹(U) be convex on the convex set U. We show that inequality (5) holds. Since inequality (4) is valid for any u, v ∈ U, we have, in particular, J(v) − J(u) ≥ ⟨J′(u), v − u⟩. Adding this inequality to inequality (4), we get inequality (5). Necessity is proved.
Sufficiency. Let inequality (5) hold for the function J(u) ∈ C¹(U), U a convex set. We show that J(u) is convex on U. For the proof it suffices to show that

αJ(u) + (1−α)J(v) − J(u_α) ≥ 0, ∀u, v ∈ U, u_α = αu + (1−α)v.

Since J(u) ∈ C¹(U), the following equalities are valid:

J(u + h) − J(u) = ⟨J′(u + θ₁h), h⟩ = ∫₀¹ ⟨J′(u + th), h⟩ dt, ∀u, u + h ∈ U.

The first equality follows from the formula of finite increments, and the second from the Newton-Leibniz formula, 0 ≤ θ₁ ≤ 1. Then the difference

αJ(u) + (1−α)J(v) − J(u_α) = α[J(u) − J(u_α)] + (1−α)[J(v) − J(u_α)] =
= α ∫₀¹ ⟨J′(u_α + t(u − u_α)), u − u_α⟩ dt + (1−α) ∫₀¹ ⟨J′(u_α + t(v − u_α)), v − u_α⟩ dt.

Let z₁ = u_α + t(u − u_α) = u_α + t(1−α)(u − v), z₂ = u_α + t(v − u_α) = u_α − tα(u − v). Then z₁ − z₂ = t(u − v), and the differences u − u_α = (1−α)(z₁ − z₂)/t, v − u_α = −α(z₁ − z₂)/t. Now the previous equality is written as

αJ(u) + (1−α)J(v) − J(u_α) = α(1−α) ∫₀¹ (1/t) ⟨J′(z₁) − J′(z₂), z₁ − z₂⟩ dt.

Since the points z₁, z₂ ∈ U, according to inequality (5) we have αJ(u) + (1−α)J(v) − J(u_α) ≥ 0.
The theorem is proved.
Theorem 3. For a function J(u) ∈ C²(U) to be convex on the convex set U, int U ≠ ∅, it is necessary and sufficient that the inequality

⟨J″(u)ξ, ξ⟩ ≥ 0, ∀ξ ∈ E^n, ∀u ∈ U (6)

hold.

Proof. Necessity. Let the function J(u) ∈ C²(U) be convex on U. We show that inequality (6) holds. If the point u ∈ int U, then there is a number ε₀ > 0 such that the point u + εξ ∈ U for any ξ ∈ E^n and all ε, |ε| ≤ ε₀. Since all conditions of Theorem 2 hold, the inequality

⟨J′(u + εξ) − J′(u), εξ⟩ ≥ 0, ∀ξ ∈ E^n, |ε| ≤ ε₀,

is valid. Hence, taking into account that

⟨J′(u + εξ) − J′(u), εξ⟩ = ⟨J″(u + θεξ)ξ, ξ⟩ε², 0 ≤ θ ≤ 1,

dividing by ε² and passing to the limit as ε → 0, we get inequality (6). If u ∈ U is a boundary point, then there exists a sequence {u_k} ⊂ int U such that u_k → u as k → ∞. By what has already been proved, ⟨J″(u_k)ξ, ξ⟩ ≥ 0, ∀ξ ∈ E^n, u_k ∈ int U.
Passing to the limit and taking into account that lim_{k→∞} J″(u_k) = J″(u), since J(u) ∈ C²(U), we get inequality (6). Necessity is proved.
Sufficiency. Let inequality (6) hold for the function J(u) ∈ C²(U), U a convex set, int U ≠ ∅. We show that J(u) is convex on U. Since the equality

⟨J′(u) − J′(v), u − v⟩ = ⟨J″(v + θ(u − v))(u − v), u − v⟩, 0 ≤ θ ≤ 1,

holds, then, denoting ξ = u − v and taking into account that u_θ = v + θ(u − v) ∈ U, we get

⟨J′(u) − J′(v), u − v⟩ = ⟨J″(u_θ)ξ, ξ⟩ ≥ 0, ∀ξ ∈ E^n, u_θ ∈ U.

Hence, by Theorem 2, the convexity of the function J(u) on U follows. The theorem is proved.
We note that J″(u) is the symmetric matrix of second derivatives

J″(u) = ( ∂²J(u)/∂u₁²      ∂²J(u)/∂u₁∂u₂   ...  ∂²J(u)/∂u₁∂u_n )
        ( ∂²J(u)/∂u₂∂u₁    ∂²J(u)/∂u₂²     ...  ∂²J(u)/∂u₂∂u_n )
        ( .................................................... )
        ( ∂²J(u)/∂u_n∂u₁   ∂²J(u)/∂u_n∂u₂  ...  ∂²J(u)/∂u_n²   ),

and ⟨J″(u)ξ, ξ⟩ = ξ′J″(u)ξ is the corresponding scalar product.
Theorems 1-3 for a strongly convex function J(u) on U are formulated as indicated below and are proved in a similar way.
Theorem 4. For a function J(u) ∈ C¹(U) to be strongly convex on the convex set U it is necessary and sufficient that the inequality

J(u) − J(v) ≥ ⟨J′(v), u − v⟩ + κ|u − v|², ∀u, v ∈ U (7)

hold.
Theorem 5. For a function J(u) ∈ C¹(U) to be strongly convex on the convex set U it is necessary and sufficient that the inequality

⟨J′(u) − J′(v), u − v⟩ ≥ μ|u − v|², μ = 2κ = const > 0, ∀u, v ∈ U (8)

hold.
Theorem 6. For a function J(u) ∈ C²(U) to be strongly convex on the convex set U, int U ≠ ∅, it is necessary and sufficient that the inequality

⟨J″(u)ξ, ξ⟩ ≥ μ|ξ|², ∀ξ ∈ E^n, ∀u ∈ U, μ = 2κ = const > 0 (9)

hold.
Formulas (4)-(9) can be applied to establish the convexity and strong convexity of smooth functions J(u) defined on a convex set U ⊂ E^n.
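Criteria (6) and (9) can be checked in practice through the eigenvalues of the Hessian; the following sketch does this for a quadratic function of our own choosing:

```python
import numpy as np

# For J(u) = (1/2)<Qu, u> the Hessian is J''(u) = Q, so J is convex iff
# the smallest eigenvalue of Q is >= 0 (criterion (6)) and strongly convex
# iff it is > 0 (criterion (9), with mu = lambda_min(Q)).
Q = np.array([[2.0, 1.0],
              [1.0, 2.0]])              # symmetric matrix chosen for the example
lam_min = np.linalg.eigvalsh(Q).min()   # eigvalsh: eigenvalues of a symmetric matrix
convex = lam_min >= 0                   # criterion (6)
strongly_convex = lam_min > 0           # criterion (9), mu = lam_min
print(lam_min, convex, strongly_convex)  # 1.0 True True
```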
For studying the convergence of successive approximation methods the following lemma is useful.
Definition 3. The gradient J′(u) of a function J(u) ∈ C¹(U) is said to satisfy a Lipschitz condition on the set U if

|J′(u) − J′(v)| ≤ L|u − v|, ∀u, v ∈ U, L = const > 0. (10)

The space of such functions is denoted by C^{1,1}(U).
Lemma. If J(u) ∈ C^{1,1}(U) and U is a convex set, then the inequality

|J(u) − J(v) − ⟨J′(v), u − v⟩| ≤ L|u − v|²/2 (11)

holds.

Proof. From the equality

J(u) − J(v) − ⟨J′(v), u − v⟩ = ∫₀¹ ⟨J′(v + t(u − v)) − J′(v), u − v⟩ dt

it follows that

|J(u) − J(v) − ⟨J′(v), u − v⟩| ≤ ∫₀¹ |J′(v + t(u − v)) − J′(v)| |u − v| dt.

Hence, taking inequality (10) into account and integrating with respect to t, we get formula (11). The lemma is proved.
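Inequality (11) can be verified numerically; the sketch below uses a quadratic function of our own choosing, for which the Lipschitz constant in (10) is L = λ_max(Q):

```python
import numpy as np

# For J(u) = (1/2)<Qu, u> the gradient J'(u) = Qu satisfies (10) with
# L = lambda_max(Q), so |J(u) - J(v) - <J'(v), u - v>| <= L|u - v|^2 / 2.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
J = lambda u: 0.5 * u @ Q @ u
grad_J = lambda u: Q @ u
L = np.linalg.eigvalsh(Q).max()

rng = np.random.default_rng(2)
for _ in range(200):
    u, v = rng.normal(size=2), rng.normal(size=2)
    lhs = abs(J(u) - J(v) - grad_J(v) @ (u - v))
    assert lhs <= L * ((u - v) @ (u - v)) / 2 + 1e-12
print("inequality (11) confirmed on random samples")
```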
Properties of convex functions. The following statements hold.
1) If J_i(u), i = 1,...,m, are convex functions on the convex set U, then the function J(u) = Σ_{i=1}^m λ_i J_i(u), λ_i ≥ 0, i = 1,...,m, is convex on the set U.
In fact, for u_α = αu + (1−α)v,

J(u_α) = Σ_{i=1}^m λ_i J_i(u_α) ≤ Σ_{i=1}^m λ_i [αJ_i(u) + (1−α)J_i(v)] =
= α Σ_{i=1}^m λ_i J_i(u) + (1−α) Σ_{i=1}^m λ_i J_i(v) = αJ(u) + (1−α)J(v), ∀u, v ∈ U.
2) If J_i(u), i ∈ I, is some family of convex functions on the convex set U, then the function J(u) = sup_{i∈I} J_i(u), provided it is finite, is convex on the set U.
In fact, by the definition of the least upper bound, for any ε > 0 there exists an index i* ∈ I such that J(u_α) − ε ≤ J_{i*}(u_α), where u_α = αu + (1−α)v. Hence we have

J(u_α) − ε ≤ αJ_{i*}(u) + (1−α)J_{i*}(v) ≤ αJ(u) + (1−α)J(v).

Consequently, J(u_α) ≤ αJ(u) + (1−α)J(v), since ε > 0 is arbitrary.
3) Let J(u) be a convex function defined on the convex set U ⊂ E^n. Then the inequality (Jensen's inequality)

J(Σ_{i=1}^m α_i u^i) ≤ Σ_{i=1}^m α_i J(u^i), α_i ≥ 0, Σ_{i=1}^m α_i = 1

holds. Prove this by the method of mathematical induction.
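Jensen's inequality from statement 3 is easy to check numerically for a particular convex function (a sketch; the function J(u) = |u|² and the sample points are our own choice):

```python
import numpy as np

# Spot-check of Jensen's inequality:
#   J(sum_i alpha_i u^i) <= sum_i alpha_i J(u^i)
# for the convex function J(u) = |u|^2 and random convex weights.
rng = np.random.default_rng(3)
J = lambda u: u @ u
for _ in range(100):
    pts = rng.normal(size=(6, 3))             # six points u^i in E^3
    alphas = rng.uniform(size=6)
    alphas /= alphas.sum()                    # alpha_i >= 0, sum = 1
    lhs = J(alphas @ pts)
    rhs = sum(a * J(p) for a, p in zip(alphas, pts))
    assert lhs <= rhs + 1e-12
print("Jensen's inequality confirmed on random samples")
```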


4) If function f t , t  a, b is convex and it is not decrease,
but function F u  is convex on the convex set U  E n , moreover the
values F u   a, b , then function J u   f F u  is convex on U .
In fact, the value

J u   f F u   f  F u   1   F v  
  f F u   1    f F v    J u   1   J v  .

We notice, that t1  F u   a, b, t 2  F v   a, b and

f  F u   1    F v   f  t1  1   t 2  
  f t1   1    f t 2 , t1 , t 2  [a, b] ,

with consideration of convexity of the function f t  on segment


a, b .
Lecture 5
WAYS OF DEFINING CONVEX SETS.
THEOREM ON THE GLOBAL MINIMUM.
OPTIMALITY CRITERIA.
PROJECTION OF A POINT ONTO A SET

Some properties of convex functions and convex sets required for the solution of the convex programming problem, as well as other questions important for the solution of optimization problems in finite-dimensional space, were studied in the previous lectures.
Ways of defining convex sets. When studying the properties of a convex function J(u) and a convex set U, no answer was given to the question of how a convex set U is specified in the space E^n.
Definition 1. Let J(u) be some function defined on a set U from E^n. The set

epi J = {(u, γ) ∈ E^{n+1} : u ∈ U ⊂ E^n, γ ≥ J(u)} ⊂ E^{n+1} (1)

is called the epigraph of the function J(u) on the set U.
Theorem 1. For a function J(u) to be convex on the convex set U it is necessary and sufficient that the set epi J be convex.
Proof. Necessity. Let the function J(u) be convex on the convex set U. We show that the set epi J is convex. It suffices to make sure that for any z = (u, γ₁) ∈ epi J, w = (v, γ₂) ∈ epi J and all α, 0 ≤ α ≤ 1, the point z_α = αz + (1−α)w ∈ epi J. In fact, z_α = (αu + (1−α)v, αγ₁ + (1−α)γ₂), where αu + (1−α)v ∈ U by the convexity of the set U, and the value αγ₁ + (1−α)γ₂ ≥ αJ(u) + (1−α)J(v) ≥ J(αu + (1−α)v). Finally, the point z_α = (u_α, γ_α), where u_α = αu + (1−α)v ∈ U and the value γ_α ≥ J(u_α). Consequently, the point z_α ∈ epi J. Necessity is proved.
Sufficiency. Let the set epi J be convex and U a convex set. We show that the function J(u) is convex on U. If the points u, v ∈ U, then z = (u, J(u)) ∈ epi J, w = (v, J(v)) ∈ epi J. For any α ∈ [0,1] the point z_α = (u_α, αJ(u) + (1−α)J(v)) ∈ epi J by the convexity of the set epi J. From this inclusion it follows that the value γ_α = αJ(u) + (1−α)J(v) ≥ J(u_α). This means that the function J(u) is convex on U. The theorem is proved.
Theorem 2. If the function J(u) is convex on the convex set U ⊂ E^n, then the set

M(C) = {u ∈ E^n : u ∈ U, J(u) ≤ C}

is convex for every C ∈ E¹.
Proof. Let the points u, v ∈ M(C), i.e. J(u) ≤ C, J(v) ≤ C, u, v ∈ U. The point u_α = αu + (1−α)v ∈ U for all α ∈ [0,1] by the convexity of the set U. Since the function J(u) is convex on U, the value J(u_α) ≤ αJ(u) + (1−α)J(v), u, v ∈ U. Hence, taking into account J(u) ≤ C, J(v) ≤ C, we get J(u_α) ≤ αC + (1−α)C = C. In the end, the point u_α ∈ U and the value J(u_α) ≤ C. Consequently, for any u, v ∈ M(C) the point u_α ∈ M(C) for every C ∈ E¹. This means that the set M(C) is convex. The theorem is proved.
As an application we consider the following optimization problem:

J(u) → inf, (2)

u ∈ U = {u ∈ E^n : u ∈ U₀, g_i(u) ≤ 0, i = 1,...,m;
g_i(u) = ⟨a_i, u⟩ − b_i = 0, i = m+1,...,s}, (3)

where J(u), g_i(u), i = 1,...,m, are convex functions defined on a convex set U₀; a_i ∈ E^n, i = m+1,...,s, are given vectors; b_i, i = m+1,...,s, are given numbers. We introduce the following sets:

U_i = {u ∈ E^n : u ∈ U₀, g_i(u) ≤ 0}, i = 1,...,m,

U_{m+1} = {u ∈ E^n : ⟨a_i, u⟩ − b_i = 0, i = m+1,...,s}.

The sets U_i, i = 1,...,m, are convex, since g_i(u), i = 1,...,m, are convex functions defined on the convex set U₀ (see Theorem 2 with C = 0), and the set

U_{m+1} = {u ∈ E^n : Au = b} = {u ∈ E^n : ⟨a_i, u⟩ − b_i = 0, i = m+1,...,s},

where A is a matrix of order (s − m) × n whose rows are the vectors a_i, i = m+1,...,s, and b = (b_{m+1}, ..., b_s) ∈ E^{s−m}, is an affine set.
Now problem (2), (3) can be written as

J(u) → inf, u ∈ U = ∩_{i=0}^{m+1} U_i. (4)

Thus problem (2), (3) is a convex programming problem, since the intersection of any number of convex sets is a convex set.
The theorem on the global minimum. We consider the convex programming problem (4), where J(u) is a convex function defined on a convex set U from E^n.
Theorem 3. If J(u) is a convex function defined on a convex set U, J_* = inf_{u∈U} J(u) > −∞ and U_* = {u_* ∈ E^n : u_* ∈ U, J(u_*) = min_{u∈U} J(u)} ≠ ∅, then any point of local minimum of the function J(u) on U is simultaneously a point of its global minimum on U, and the set U_* is convex. If J(u) is strongly convex on U, then the set U_* contains at most one point.
Proof. Let u_* ∈ U be a point of local minimum of the function J(u) on the set U, i.e. J(u_*) ≤ J(u) for all u ∈ O(u_*, ε) ∩ U. Let v ∈ U be an arbitrary point. Then the point w = u_* + α(v − u_*) ∈ O(u_*, ε) ∩ U by the convexity of the set U, provided that α|v − u_*| < ε. Consequently, the inequality J(u_*) ≤ J(w) holds. On the other hand, since J(u) is a convex function on U,

J(w) = J(u_* + α(v − u_*)) = J(αv + (1−α)u_*) ≤ αJ(v) + (1−α)J(u_*),

where α > 0 is a sufficiently small number, i.e. α ∈ (0,1). Now the inequality J(u_*) ≤ J(w) can be written as J(u_*) ≤ J(w) ≤ αJ(v) + (1−α)J(u_*). This implies that 0 ≤ α[J(v) − J(u_*)]. Consequently, J(u_*) ≤ J(v), ∀v ∈ U. This means that at the point u_* ∈ U the global minimum of the function J(u) on U is attained.
We show that the set U_* is convex. In fact, for any u, v ∈ U_*, i.e. J(u) = J(v) = J_*, and for all α ∈ [0,1] the value J(u_α) = J(αu + (1−α)v) ≤ αJ(u) + (1−α)J(v) = J_*. Hence J(u_α) = J_*, and consequently the point u_α ∈ U_*. The convexity of the set U_* is proved. We note that U_* ≠ ∅ implies J_* = J(u_*).
If J(u) is a strongly convex function and u, v ∈ U_*, u ≠ v, then we would have J(u_α) < αJ(u) + (1−α)J(v) = J_*, 0 < α < 1, which is impossible. The contradiction is removed if u = v, i.e. the set U_* contains at most one point. The theorem is proved.
Thus, in convex programming problems any point of local minimum of the function J(u) on U is at the same time a point of its global minimum on U, i.e. a solution of the problem.
Optimality criteria. Once again we consider the convex programming problem (4) for the case when J(u) ∈ C¹(U).
Theorem 4. If J(u) ∈ C¹(U) is an arbitrary function, U is a convex set and the set U_* ≠ ∅, then at any point u_* ∈ U_* the inequality

⟨J′(u_*), u − u_*⟩ ≥ 0, ∀u ∈ U (5)

necessarily holds. If J(u) ∈ C¹(U) is a convex function, U is a convex set and U_* ≠ ∅, then for a point u_* ∈ U condition (5) is necessary and sufficient for u_* ∈ U_*.
Proof. Necessity. Let the point u_* ∈ U_*. We show that inequality (5) holds for any function J(u) ∈ C¹(U), U a convex set (in particular, for a convex function J(u) on U). Let u ∈ U be an arbitrary point and let the number α ∈ [0,1]. Then the difference J(u_α) − J(u_*) ≥ 0, where u_α = αu + (1−α)u_* ∈ U. It follows that

0 ≤ J(αu + (1−α)u_*) − J(u_*) = J(u_* + α(u − u_*)) − J(u_*) = α⟨J′(u_*), u − u_*⟩ + o(α),

where o(α)/α → 0 as α → 0. Dividing both sides by α > 0 and passing to the limit as α → 0, we get inequality (5). Necessity is proved.
Sufficiency. Let J(u) ∈ C¹(U) be a convex function, U a convex set, U_* ≠ ∅, and let inequality (5) hold at the point u_* ∈ U. We show that the point u_* ∈ U_*. Let u ∈ U be an arbitrary point. Then, by inequality (4) of Theorem 1 (see the section "Convexity criteria for smooth functions"), we have J(u) − J(u_*) ≥ ⟨J′(u_*), u − u_*⟩ ≥ 0 for all u ∈ U. This implies that J(u_*) ≤ J(u), ∀u ∈ U. Consequently, the point u_* ∈ U_*. The theorem is proved.
Corollary. If J(u) ∈ C¹(U), U is a convex set, U_* ≠ ∅ and the point u_* ∈ U_*, u_* ∈ int U, then necessarily the equality J′(u_*) = 0 holds.
In fact, if u_* ∈ int U, then there is a number ε₀ > 0 such that for any e ∈ E^n the point u = u_* + εe ∈ U for all ε, |ε| ≤ ε₀. Then from (5) it follows that ⟨J′(u_*), εe⟩ ≥ 0, ∀e ∈ E^n, for all ε, |ε| ≤ ε₀. Since ε may take both signs, we get J′(u_*) = 0.
Formula (5), as a necessary condition of optimality for the nonlinear programming problem and as a necessary and sufficient condition of optimality for the convex programming problem, will find application in the following lectures.
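Condition (5) is easy to illustrate numerically; the sketch below (with data of our own choosing) minimizes J(u) = |u − p|² over a box, where the minimizer is the coordinatewise clipping of p, and spot-checks (5) at u_*:

```python
import numpy as np

# Minimize J(u) = |u - p|^2 over the box U = [0,1]^2 with p outside U.
# The minimizer is u* = clip(p, 0, 1), and <J'(u*), u - u*> >= 0 must hold
# for every u in U.
p = np.array([2.0, -0.5])
u_star = np.clip(p, 0.0, 1.0)        # u* = (1, 0)
grad = 2 * (u_star - p)              # J'(u) = 2(u - p)

rng = np.random.default_rng(4)
for _ in range(500):
    u = rng.uniform(0.0, 1.0, size=2)    # random point of U
    assert grad @ (u - u_star) >= -1e-12
print("condition (5) holds at u* =", u_star)
```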
Projection of a point onto a set. As an application of Theorems 1, 2 we consider the projection of a point onto a convex closed set.
Definition 2. Let U be some set from E^n and let the point v ∈ E^n. A point w ∈ U is called the projection of the point v ∈ E^n onto the set U if the norm |v − w| = inf_{u∈U} |v − u|; it is denoted by w = P_U(v).
Example 1. Let the set

U = S(u₀, R) = {u ∈ E^n : |u − u₀| ≤ R},

and let the point v ∈ E^n, v ∉ U. Then the point w = P_U(v) = u₀ + R(v − u₀)/|v − u₀|, which follows from the geometric interpretation.
Example 2. If the set

U = Γ = {u ∈ E^n : ⟨c, u⟩ = γ}, c ≠ 0,

then the projection of a point v ∈ E^n onto U is defined by the formula

w = P_U(v) = v + [(γ − ⟨c, v⟩)/|c|²] c.

The norm of the second summand is equal to the distance from the point v to U.


Theorem 5. Any point v ∈ E^n has a unique projection w onto a convex closed set U ⊂ E^n; moreover, for the point w = P_U(v) the condition

⟨w − v, u − w⟩ ≥ 0, ∀u ∈ U (6)

is necessary and sufficient. In particular, if U is an affine set, then condition (6) can be written as

⟨w − v, u − w⟩ = 0, ∀u ∈ U. (7)

Proof. The square of the distance from the point v to u ∈ U equals J(u) = |u − v|². For a fixed v ∈ E^n the function J(u) is strongly convex on the convex closed set U (see Example 2); being continuous and coercive on the closed set U, it attains its lower bound at some point w ∈ U, and by Theorem 3 this point is unique. Consequently, the point w = P_U(v). Since the point w ∈ U_* ⊂ U and the gradient J′(w) = 2(w − v), the necessary and sufficient optimality condition (5) for the point w (in the role of u_*) is written in the form ⟨2(w − v), u − w⟩ ≥ 0, ∀u ∈ U. This implies inequality (6).
If U = {u ∈ E^n : Au = b} is an affine set, then from u₀, u ∈ U, u₀ ≠ u, it follows that 2u₀ − u ∈ U. In fact, A(2u₀ − u) = 2Au₀ − Au = 2b − b = b. In particular, if u₀ = w = P_U(v), then the point 2w − u ∈ U. Substituting it into formula (6) in place of the point u ∈ U, we get

⟨w − v, w − u⟩ ≥ 0, ∀u ∈ U. (8)

From inequalities (6) and (8) follows relation (7). The theorem is proved.

 
Example 3. Let the set U = {u ∈ E^n : u ≥ 0} and let the point v ∈ E^n. Then the projection w = P_U(v) = (v₁⁺, ..., v_n⁺), where v_i⁺ = max(0, v_i), i = 1,...,n. Prove this.
Example 4. Let the set

U = {u = (u₁, ..., u_n) ∈ E^n : α_i ≤ u_i ≤ β_i, i = 1,...,n}

be an n-dimensional parallelepiped, and let the point v ∈ E^n. Then the components of the vector w = P_U(v) are defined by the formula

w_i = α_i, if v_i < α_i;
w_i = β_i, if v_i > β_i;
w_i = v_i, if α_i ≤ v_i ≤ β_i, i = 1,...,n.

In fact, from formula (6) it follows that the sum Σ_{i=1}^n (w_i − v_i)(u_i − w_i) ≥ 0 for all u with α_i ≤ u_i ≤ β_i, i = 1,...,n. If v_i < α_i, then w_i = α_i, u_i ≥ α_i, and consequently the product (w_i − v_i)(u_i − w_i) = (α_i − v_i)(u_i − α_i) ≥ 0. Similarly, if v_i > β_i, then w_i = β_i, u_i ≤ β_i, so (w_i − v_i)(u_i − w_i) = (v_i − β_i)(β_i − u_i) ≥ 0. Finally, if α_i ≤ v_i ≤ β_i, then w_i = v_i, so (w_i − v_i)(u_i − w_i) = 0. Thus, for the point w = (w₁, ..., w_n) = P_U(v) inequality (6) holds.
Lecture 6, 7
SEPARATION OF CONVEX SETS

Problems on conditional extrema and the method of indefinite Lagrange multipliers were considered, as an application of the theory of implicit functions, in the course of mathematical analysis. In recent years this branch of mathematics has received essential development, and a new theory called "convex analysis" has appeared. The key point of this theory is the theory of separation of convex sets.
Definition 1. The hyperplane ⟨c, u⟩ = γ with normal vector c, c ≠ 0, is said to divide (separate) the sets A and B from E^n if the inequalities

sup_{b∈B} ⟨c, b⟩ ≤ γ ≤ inf_{a∈A} ⟨c, a⟩ (1)

hold. If sup_{b∈B} ⟨c, b⟩ < inf_{a∈A} ⟨c, a⟩, then the sets A and B are said to be strongly separated, and if ⟨c, b⟩ < ⟨c, a⟩, ∀a ∈ A, ∀b ∈ B, they are said to be strictly separated.
We note that if the hyperplane ⟨c, u⟩ = γ separates the sets A and B, then the hyperplane ⟨λc, u⟩ = λγ, where λ > 0 is any number, also separates them; therefore, when necessary, one may assume that the norm |c| = 1.
Theorem 1. If U is a convex set from E^n and the point v ∉ int U, then there exists a hyperplane ⟨c, u⟩ = γ dividing the set U and the point v. If U is a convex set and the point v ∉ Ū, then the set U and the point v are strongly separable.
Proof. We consider the case when the point v ∉ Ū. In this case, by Theorem 5 of Lecture 5, the point v ∈ E^n has a unique projection w = P_Ū(v) onto the convex closed set Ū, and ⟨w − v, u − w⟩ ≥ 0, ∀u ∈ Ū. Let the vector c = w − v ≠ 0. Then we have

⟨c, u − v⟩ = ⟨w − v, u − v⟩ = ⟨w − v, u − w⟩ + ⟨w − v, w − v⟩ ≥ |c|² > 0, ∀u ∈ Ū.

It follows that ⟨c, u⟩ ≥ ⟨c, v⟩ + |c|² > ⟨c, v⟩, ∀u ∈ U. Consequently, the set U and the point v ∈ E^n are strongly separated.
We note that if v ∉ int U but v ∈ Ū, then v ∈ ∂U. Then, by the definition of a boundary point, there exists a sequence {v_k}, v_k ∉ Ū, such that v_k → v as k → ∞. Since the point v_k ∉ Ū, by what has been proved the inequality ⟨c_k, u⟩ ≥ ⟨c_k, v_k⟩, ∀u ∈ U, |c_k| = 1, holds. By the Bolzano-Weierstrass theorem, from the bounded sequence {c_k} one can select a subsequence {c_{k_m}} such that c_{k_m} → c as m → ∞ and |c| = 1. For the elements of this subsequence the previous inequality is written as ⟨c_{k_m}, u⟩ ≥ ⟨c_{k_m}, v_{k_m}⟩, ∀u ∈ U. Passing to the limit as m → ∞ and taking into account that v_{k_m} → v, we get ⟨c, u⟩ ≥ ⟨c, v⟩, ∀u ∈ U. The theorem is proved.
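The construction used in the proof of Theorem 1 (c = w − v with w = P_U(v)) can be illustrated numerically for the unit ball (a sketch with data of our own choosing):

```python
import numpy as np

# For the closed convex set U = {u : |u| <= 1} and a point v outside U,
# the vector c = w - v with w = P_U(v) strongly separates U and v:
# <c, u> >= <c, w> > <c, v> for every u in U.
v = np.array([3.0, 4.0])
w = v / np.linalg.norm(v)          # projection of v onto the unit ball (|v| > 1)
c = w - v

rng = np.random.default_rng(6)
for _ in range(500):
    u = rng.normal(size=2)
    u = u / max(1.0, np.linalg.norm(u))   # random point of U
    assert c @ u >= c @ w - 1e-12
assert c @ w > c @ v
print("U and v are strongly separated by the hyperplane <c, u> =", c @ w)
```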
From the proof of Theorem 1 it follows that through any boundary point v of a convex set U one can draw a hyperplane for which ⟨c, u⟩ ≥ ⟨c, v⟩, ∀u ∈ U. The hyperplane ⟨c, u⟩ = γ, where γ = ⟨c, v⟩, v ∈ ∂U, is called a supporting hyperplane to the set U at the point v, and the vector c ∈ E^n is called a supporting vector of the set U at the point v ∈ ∂U.
Theorem 2. If convex sets A and B from E^n have no points in common, then there exists a hyperplane ⟨c, u⟩ = γ separating the sets A and B, as well as their closures Ā and B̄; in the case of a point y ∈ Ā ∩ B̄ the number γ = ⟨c, y⟩.
Proof. We denote U = A − B. As was proved earlier (see Theorem 1 of Lecture 3), the set U is convex. Since the intersection A ∩ B = ∅, the point 0 ∉ U. Then, by Theorem 1, there exists a hyperplane ⟨c, u⟩ = γ₀, γ₀ = ⟨c, 0⟩ = 0, separating the set U and the point 0, i.e. ⟨c, u⟩ ≥ 0, ∀u ∈ U. Hence, taking into account that u = a − b ∈ U, a ∈ A, b ∈ B, we get ⟨c, a⟩ ≥ ⟨c, b⟩, ∀a ∈ A, ∀b ∈ B. Finally, the hyperplane ⟨c, u⟩ = γ, where the number γ satisfies the inequalities inf_{a∈A} ⟨c, a⟩ ≥ γ ≥ sup_{b∈B} ⟨c, b⟩, separates the sets A and B. Let the points a ∈ Ā, b ∈ B̄. Then there exist sequences {a_k} ⊂ A, {b_k} ⊂ B such that a_k → a, b_k → b as k → ∞, and by what has been proved above the inequalities ⟨c, a_k⟩ ≥ γ ≥ ⟨c, b_k⟩ hold. Passing to the limit as k → ∞, we get ⟨c, a⟩ ≥ γ ≥ ⟨c, b⟩, ∀a ∈ Ā, ∀b ∈ B̄.
In particular, if the point y ∈ Ā ∩ B̄, then γ = ⟨c, y⟩. The theorem is proved.
Theorem 3. If convex closed sets A and B have no points in common and one of them is bounded, then there exists a hyperplane strongly separating the sets A and B.
Proof. Let the set U = A − B. We note that the point 0 ∉ U, since the intersection A ∩ B = ∅, and U is a convex set. We show that the set U is closed. Let u be a limit point of the set U. Then there exists a sequence {u_k} ⊂ U such that u_k → u as k → ∞, where u_k = a_k − b_k, a_k ∈ A, b_k ∈ B. If the set A is bounded, then the sequence {a_k} ⊂ A is bounded; consequently, there exists a subsequence {a_{k_m}} converging to some point a, and by the closedness of the set A the point a ∈ A. We consider the sequence {b_k} ⊂ B, where b_k = a_k − u_k. Since a_{k_m} → a and u_{k_m} → u as m → ∞, we have b_{k_m} → a − u = b as m → ∞. By the closedness of B the point b ∈ B. Finally, the point u = a − b, a ∈ A, b ∈ B; consequently, u ∈ U. The closedness of the set U is proved.
Since the point 0 ∉ U and U = Ū, by Theorem 1 there exists a hyperplane ⟨c, u⟩ = γ strongly separating the set U and the point 0, i.e. inf_{u∈U} ⟨c, u⟩ > 0. This implies inf_{a∈A} ⟨c, a⟩ > sup_{b∈B} ⟨c, b⟩, i.e. the sets A and B are strongly separated. The theorem is proved.
Theorem 4. If the intersection of the nonempty convex sets A_0, A_1, …, A_m is empty, i.e. A_0 ∩ A_1 ∩ … ∩ A_m = ∅, then there necessarily exist vectors c_0, c_1, …, c_m from E^n, not all equal to zero, and numbers γ_0, γ_1, …, γ_m such that

⟨c_i, a_i⟩ ≥ γ_i, ∀a_i ∈ A_i, i = 0, 1, …, m;  (2)

c_0 + c_1 + … + c_m = 0;  (3)

γ_0 + γ_1 + … + γ_m = 0.  (4)

Proof. We introduce the set A = A_0 × A_1 × … × A_m, the direct product of the sets A_i, i = 0, …, m, with elements a = (a_0, a_1, …, a_m) ∈ A, where a_i ∈ A_i, i = 0, …, m. Note that A ⊂ E, where E = E^{n(m+1)}. It is easy to show that the set A is convex. In fact, if a¹ = (a_0¹, a_1¹, …, a_m¹) ∈ A and a² = (a_0², a_1², …, a_m²) ∈ A, then for all α ∈ [0, 1] the point a = αa¹ + (1 − α)a² = (a_0, a_1, …, a_m) ∈ A, since a_i = αa_i¹ + (1 − α)a_i² ∈ A_i, i = 0, …, m. We introduce the diagonal set

B = {b = (b_0, b_1, …, b_m) ∈ E | b_0 = b_1 = … = b_m = b, b ∈ E^n}.

The set B is convex. It is not difficult to verify that the intersection A_0 ∩ A_1 ∩ … ∩ A_m = ∅ if and only if the intersection A ∩ B = ∅.

Thus, under the hypothesis of the theorem the sets A and B are convex and A ∩ B = ∅, i.e. all the conditions of Theorem 2 are fulfilled. Consequently, there exists a vector c = (c_0, c_1, …, c_m) ∈ E, c ≠ 0, such that ⟨c, a⟩ ≥ ⟨c, b⟩, ∀a ∈ A, ∀b ∈ B. This inequality can be written in the form

Σ_{i=0}^m ⟨c_i, a_i⟩ ≥ Σ_{i=0}^m ⟨c_i, b_i⟩ = ⟨Σ_{i=0}^m c_i, b⟩, ∀a_i ∈ A_i, ∀b ∈ E^n.  (5)

As follows from inequality (5), the linear function J(b) = ⟨Σ_{i=0}^m c_i, b⟩, b ∈ E^n, is bounded above, since in (5) the points a_i ∈ A_i may be fixed. A linear function J(b) is bounded on E^n if and only if Σ_{i=0}^m c_i = 0. This proves the validity of formula (3).

Now inequality (5) is written as Σ_{i=0}^m ⟨c_i, a_i⟩ ≥ 0, ∀a_i ∈ A_i, i = 0, …, m. Fix the vectors a_i = ā_i ∈ A_i for i ≠ k; then ⟨c_k, a_k⟩ ≥ −Σ_{i≠k} ⟨c_i, ā_i⟩ = const, ∀a_k ∈ A_k. Consequently, the value γ_k = inf_{a_k ∈ A_k} ⟨c_k, a_k⟩ > −∞, k = 1, …, m. Denote γ_0 = −(γ_1 + γ_2 + … + γ_m). Then

⟨c_0, a_0⟩ ≥ −Σ_{k=1}^m inf_{a_k ∈ A_k} ⟨c_k, a_k⟩ = −Σ_{k=1}^m γ_k = γ_0, ∀a_0 ∈ A_0.

Thereby ⟨c_i, a_i⟩ ≥ γ_i, ∀a_i ∈ A_i, i = 0, …, m, and γ_0 + γ_1 + … + γ_m = 0. The validity of relations (2), (4) is proved. The theorem is proved.
Convex cones. Theorems 1–5 were formulated and proved for convex sets from E^n. In the theory of extremal problems, separation theorems are often applied in the case when the convex sets are convex cones.

Definition 2. A set K from E^n is called a cone with vertex at zero if together with any point u ∈ K it contains the points αu ∈ K for all α > 0. If the set K is convex, it is called a convex cone; if K is closed, it is called a closed cone; if K is open, it is called an open cone.

Example 1. The set K = {u ∈ E^n | u ≥ 0} is a convex closed cone. In fact, if u ∈ K, then the point αu ∈ K for all α > 0, since αu ≥ 0.

Example 2. The set K = {u ∈ E^n | ⟨a, u⟩ ≤ 0} is a convex closed cone, since for any α > 0 the scalar product ⟨a, αu⟩ = α⟨a, u⟩ ≤ 0, u ∈ K; consequently, the vector αu ∈ K.

Example 3. The set K = {u ∈ E^n | ⟨a, u⟩ < 0} is an open convex cone; K = {u ∈ E^n | ⟨a, u⟩ = 0} is a closed convex cone; K = E^n is a convex cone.

We introduce the set

K* = {c ∈ E^n | ⟨c, u⟩ ≥ 0, ∀u ∈ K}.  (6)

Note that for c = u ∈ K we have ⟨c, u⟩ = |c|² ≥ 0. The set K* ≠ ∅, since it contains the element c = 0. The condition ⟨c, u⟩ ≥ 0, ∀u ∈ K, means that a vector c ∈ K* forms an acute angle (including π/2) with the vectors u ∈ K; this gives the geometric interpretation of the set K*.

Finally, the set K* from E^n is a cone: if c ∈ K*, i.e. ⟨c, u⟩ ≥ 0, ∀u ∈ K, then for the vector αc, α > 0, we have ⟨αc, u⟩ = α⟨c, u⟩ ≥ 0. Consequently, the vector αc ∈ K*.

Definition 3. The set K* determined by formula (6) is called the dual (or conjugate) cone to the cone K.

It is easy to verify that for the cone K from Example 1 the dual cone is K* = {c ∈ E^n | c ≥ 0}; for the cone K from Example 2 the dual cone is K* = {c ∈ E^n | c = −λa, λ ≥ 0}; for the cones of Example 3 the dual cones are K* = {c ∈ E^n | c = −λa, λ ≥ 0}, K* = {c ∈ E^n | c = λa, λ ∈ E^1} and K* = {0}, respectively.
Theorem 5. If the intersection of the nonempty convex cones K_0, K_1, …, K_m with vertex at zero is empty, then there necessarily exist vectors c_i ∈ K_i*, i = 0, …, m, not all equal to zero, such that c_0 + c_1 + … + c_m = 0.

Proof. Since all the conditions of Theorem 4 are fulfilled, relations (2)–(4) are valid. Note that K_i, i = 0, …, m, are cones; hence from a_i ∈ K_i it follows that αa_i ∈ K_i for all α > 0. Then inequality (2) is written as ⟨c_i, αa_i⟩ ≥ γ_i, ∀a_i ∈ K_i, i = 0, …, m, for any α > 0. Hence ⟨c_i, a_i⟩ ≥ γ_i/α, i = 0, …, m. In particular, letting α → ∞, we get ⟨c_i, a_i⟩ ≥ 0, i = 0, …, m, ∀a_i ∈ K_i. On the other hand, γ_i = inf_{a_i ∈ K_i} ⟨c_i, a_i⟩ ≤ 0, i = 0, …, m. From the condition ⟨c_i, a_i⟩ ≥ 0, ∀a_i ∈ K_i, i = 0, …, m, it follows that the vectors c_i ∈ K_i*, i = 0, …, m. The theorem is proved.
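The dual-cone formula of Example 2 can be spot-checked numerically. Below is a minimal Python sketch (the vector a, the sample size and the two test vectors are chosen only for illustration): vectors of the form c = −λa, λ ≥ 0, satisfy ⟨c, u⟩ ≥ 0 on sampled points of K = {u | ⟨a, u⟩ ≤ 0}, while a vector not of this form fails.

```python
import random

random.seed(1)

a = (1.0, -2.0)
dot = lambda p, q: p[0] * q[0] + p[1] * q[1]

# Sample points of the half-space cone K = {u | <a, u> <= 0}.
K = []
while len(K) < 1000:
    u = (random.uniform(-1, 1), random.uniform(-1, 1))
    if dot(a, u) <= 0:
        K.append(u)

lam = 0.7
c = (-lam * a[0], -lam * a[1])      # c = -lambda*a, claimed to lie in K*
ok = all(dot(c, u) >= 0 for u in K)

c_bad = (1.0, 0.0)                  # not of the form -lambda*a
# u = (-1, 0) lies in K since <a, u> = -1 <= 0, yet <c_bad, u> = -1 < 0
bad_witness = dot(c_bad, (-1.0, 0.0))
print(ok, bad_witness < 0)
```

The check for c rests on the identity ⟨−λa, u⟩ = −λ⟨a, u⟩ ≥ 0 whenever ⟨a, u⟩ ≤ 0, which is exactly the argument used in the text.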
Theorem 6 (Dubovitskii–Milyutin's theorem). In order that the intersection of the nonempty convex cones K_0, K_1, …, K_m with vertex at zero, all of which, except possibly one, are open, be empty, it is necessary and sufficient that there exist vectors c_i ∈ K_i*, i = 0, …, m, not all equal to zero, such that c_0 + c_1 + … + c_m = 0.

Proof. Sufficiency. Suppose the contrary, i.e. that there are vectors c_i ∈ K_i*, i = 0, …, m, not all zero, with c_0 + c_1 + … + c_m = 0, yet K_0 ∩ K_1 ∩ … ∩ K_m ≠ ∅. Then there exists a point w ∈ K_0 ∩ K_1 ∩ … ∩ K_m. Since the vectors c_i ∈ K_i*, i = 0, …, m, the inequalities ⟨c_i, w⟩ ≥ 0, i = 0, …, m, hold. Since the sum c_0 + c_1 + … + c_m = 0 and ⟨c_i, w⟩ ≥ 0, i = 0, …, m, from the equality ⟨c_0 + c_1 + … + c_m, w⟩ = 0 it follows that ⟨c_i, w⟩ = 0, i = 0, …, m. By the hypothesis of the theorem not all c_i, i = 0, …, m, are equal to zero; consequently, at least two vectors c_i, c_j, i ≠ j, differ from zero. Since all the cones, except possibly one, are open, we may in particular suppose that the cone K_i with c_i ≠ 0 is an open set. Then K_i contains the ball O(w, 2ε) = {u ∈ E^n | |u − w| < 2ε}, ε > 0. In particular, the point u = w − εc_i/|c_i| ∈ K_i. Since ⟨c_i, u⟩ ≥ 0, ∀u ∈ K_i, c_i ∈ K_i*, we would have ⟨c_i, w − εc_i/|c_i|⟩ = ⟨c_i, w⟩ − ε|c_i| = −ε|c_i| ≥ 0, where ε > 0, c_i ≠ 0. This is impossible. Consequently, the intersection K_0 ∩ K_1 ∩ … ∩ K_m = ∅. Necessity follows directly from Theorem 5. The theorem is proved.

Lecture 8
LAGRANGE'S FUNCTION.
SADDLE POINT

The search for the least value (the global minimum) of a function J(u) defined on a set U from E^n can be reduced to finding a saddle point of the Lagrange function. With such an approach to the solution of extremal problems, the necessity arises of proving the existence of a saddle point of the Lagrange function.

We consider the following nonlinear programming problem:

J(u) → inf,  (1)

u ∈ U = {u ∈ E^n | u ∈ U₀, g_i(u) ≤ 0, i = 1, …, m; g_i(u) = 0, i = m+1, …, s},  (2)

where U₀ is a given convex set from E^n, and J(u), g_i(u), i = 1, …, s, are functions defined on the set U₀. In particular, when J(u), g_i(u), i = 1, …, m, are convex functions defined on the convex set U₀ and g_i(u) = ⟨a_i, u⟩ − b_i, where a_i ∈ E^n, i = m+1, …, s, are vectors and b_i, i = m+1, …, s, are given numbers, problem (1), (2) belongs to the convex programming problems. To begin with, we consider a particular form of the Lagrange function for problem (1), (2).

Lagrange function. Saddle point. The function

L(u, λ) = J(u) + Σ_{i=1}^s λ_i g_i(u), u ∈ U₀, λ ∈ Λ₀ = {λ ∈ E^s | λ_1 ≥ 0, …, λ_m ≥ 0}  (3)

is called the Lagrange function of problem (1), (2).
Definition 1. A pair (u*, λ*) ∈ U₀ × Λ₀, i.e. u* ∈ U₀, λ* ∈ Λ₀, is called a saddle point of the Lagrange function (3) if the inequalities

L(u*, λ) ≤ L(u*, λ*) ≤ L(u, λ*), ∀u ∈ U₀, ∀λ ∈ Λ₀  (4)

hold.

We note that at the point u* ∈ U₀ the minimum of the function L(u, λ*) on the set U₀ is attained, while at the point λ* ∈ Λ₀ the maximum of the function L(u*, λ) on the set Λ₀ is attained. The domain of definition of the Lagrange function is the set U₀ × Λ₀.

The main lemma. In order that a pair (u*, λ*) ∈ U₀ × Λ₀ be a saddle point of the Lagrange function (3), it is necessary and sufficient that the following conditions hold:

L(u*, λ*) ≤ L(u, λ*), ∀u ∈ U₀;  (5)

λ_i* g_i(u*) = 0, i = 1, …, s, u* ∈ U.  (6)
Proof. Necessity. Let the pair (u*, λ*) ∈ U₀ × Λ₀ be a saddle point. We show that conditions (5), (6) are fulfilled. Since the pair (u*, λ*) ∈ U₀ × Λ₀ is a saddle point of the Lagrange function (3), inequalities (4) hold. The right inequality then yields condition (5). It remains to prove the validity of equality (6). The left inequality from (4) is written as

J(u*) + Σ_{i=1}^s λ_i g_i(u*) ≤ J(u*) + Σ_{i=1}^s λ_i* g_i(u*), ∀λ = (λ_1, …, λ_s) ∈ Λ₀ ⊂ E^s.

Consequently, the inequality

Σ_{i=1}^s (λ_i* − λ_i) g_i(u*) ≥ 0, ∀λ ∈ Λ₀  (7)

holds. At first we show that u* ∈ U, i.e. g_i(u*) ≤ 0, i = 1, …, m, g_i(u*) = 0, i = m+1, …, s. It is easy to verify that the vector

λ = (λ_1, …, λ_s): λ_i = λ_i*, i = 1, …, s, i ≠ j; λ_j = λ_j* + 1, for j with 1 ≤ j ≤ m,  (8)

belongs to the set Λ₀ = {λ ∈ E^s | λ_1 ≥ 0, …, λ_m ≥ 0}. Substituting the value λ ∈ Λ₀ from (8) into inequality (7), we get (−1)·g_j(u*) ≥ 0. It follows that g_j(u*) ≤ 0, j = 1, …, m (since this holds for any j with 1 ≤ j ≤ m). Similarly, the vector

λ = (λ_1, …, λ_s): λ_i = λ_i*, i = 1, …, s, i ≠ j; λ_j = λ_j* + g_j(u*), for j with m+1 ≤ j ≤ s,

also belongs to the set Λ₀. Then from inequality (7) we have −g_j(u*)² ≥ 0, j = m+1, …, s. Consequently, the values g_j(u*) = 0, j = m+1, …, s. From the relations g_j(u*) ≤ 0, j = 1, …, m, and g_j(u*) = 0, j = m+1, …, s, it follows that the point u* ∈ U.

We choose the vector λ ∈ Λ₀ as follows: λ_i = λ_i*, i = 1, …, s, i ≠ j; λ_j = 0, for j with 1 ≤ j ≤ m. In this case inequality (7) is written as λ_j* g_j(u*) ≥ 0, j = 1, …, m. Since λ_j* ≥ 0, j = 1, …, m, and, as proved above, g_j(u*) ≤ 0, j = 1, …, m, we also have λ_j* g_j(u*) ≤ 0; consequently, the equalities λ_j* g_j(u*) = 0, j = 1, …, m, hold. The equalities λ_j* g_j(u*) = 0, j = m+1, …, s, follow from g_j(u*) = 0, j = m+1, …, s. Necessity is proved.
Sufficiency. Suppose that conditions (5), (6) are fulfilled for a certain pair (u*, λ*) ∈ U₀ × Λ₀. We show that (u*, λ*) is a saddle point of the Lagrange function (3). It is easy to verify that the product (λ_i* − λ_i) g_i(u*) ≥ 0 for any i, 1 ≤ i ≤ m, and any λ ∈ Λ₀. In fact, from the condition u* ∈ U it follows that g_i(u*) ≤ 0 for any i, 1 ≤ i ≤ m. If g_i(u*) = 0, then (λ_i* − λ_i) g_i(u*) = 0. In the case g_i(u*) < 0 the value λ_i* = 0, since the product λ_i* g_i(u*) = 0; consequently, (λ_i* − λ_i) g_i(u*) = −λ_i g_i(u*) ≥ 0, λ_i ≥ 0. Then the sum

Σ_{i=1}^s (λ_i* − λ_i) g_i(u*) = Σ_{i=m+1}^s (λ_i* − λ_i) g_i(u*) + Σ_{i=1}^m (λ_i* − λ_i) g_i(u*) = Σ_{i=1}^m (λ_i* − λ_i) g_i(u*) ≥ 0,

since g_i(u*) = 0, i = m+1, …, s, for u* ∈ U. It follows that

Σ_{i=1}^s λ_i* g_i(u*) ≥ Σ_{i=1}^s λ_i g_i(u*),

J(u*) + Σ_{i=1}^s λ_i* g_i(u*) ≥ J(u*) + Σ_{i=1}^s λ_i g_i(u*).

The second inequality can be written in the form L(u*, λ) ≤ L(u*, λ*). This inequality together with (5) gives the condition

L(u*, λ) ≤ L(u*, λ*) ≤ L(u, λ*), ∀u ∈ U₀, ∀λ ∈ Λ₀.

This means that the pair (u*, λ*) ∈ U₀ × Λ₀ is a saddle point of the Lagrange function (3). The lemma is proved.
The main theorem. If a pair (u*, λ*) ∈ U₀ × Λ₀ is a saddle point of the Lagrange function (3), then the vector u* ∈ U is a solution of problem (1), (2), i.e.

u* ∈ U* = {u* ∈ E^n | u* ∈ U, J(u*) = min_{u∈U} J(u)}.

Proof. As follows from the main lemma, for the pair (u*, λ*) ∈ U₀ × Λ₀ conditions (5), (6) are fulfilled. Then the value L(u*, λ*) = J(u*) + Σ_{i=1}^s λ_i* g_i(u*) = J(u*), since the point u* ∈ U. Now inequality (5) is written as

J(u*) ≤ J(u) + Σ_{i=1}^s λ_i* g_i(u), ∀u ∈ U₀.  (9)

Since the set U ⊂ U₀, inequality (9) is valid, in particular, for any u ∈ U ⊂ U₀, i.e.

J(u*) ≤ J(u) + Σ_{i=1}^s λ_i* g_i(u), ∀u ∈ U, λ* ∈ Λ₀.  (10)

As follows from condition (2), if u ∈ U, then g_i(u) ≤ 0, i = 1, …, m, and g_i(u) = 0, i = m+1, …, s; consequently, the product λ_i* g_i(u) ≤ 0 for any i, 1 ≤ i ≤ s. We note that λ* = (λ_1*, …, λ_s*) ∈ Λ₀, where λ_1* ≥ 0, …, λ_m* ≥ 0. Then from (10) it follows that J(u*) ≤ J(u), ∀u ∈ U. This means that at the point u* ∈ U the global (or absolute) minimum of the function J(u) on the set U is attained. The theorem is proved.
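The main lemma and the main theorem can be illustrated on a one-dimensional convex problem solved by hand (the problem J(u) = u² → inf with the single constraint g(u) = 1 − u ≤ 0 and its saddle point u* = 1, λ* = 2 are not from the text; they serve only as an illustration). The Python sketch below checks inequalities (4) on grids and the complementary-slackness condition (6):

```python
# Problem: J(u) = u^2 -> inf subject to g(u) = 1 - u <= 0, U0 = R.
# By hand: u* = 1 and lambda* = 2 (from 2u - lambda = 0 at u = 1).
J = lambda u: u * u
g = lambda u: 1.0 - u
L = lambda u, lam: J(u) + lam * g(u)

u_star, lam_star = 1.0, 2.0

us = [-3 + 0.01 * k for k in range(601)]    # grid over U0
lams = [0.01 * k for k in range(401)]       # grid over Lambda0 = {lam >= 0}

# Right inequality of (4): L(u, lam*) >= L(u*, lam*) on U0 (small tolerance
# for floating-point rounding on the grid).
right = all(L(u, lam_star) >= L(u_star, lam_star) - 1e-9 for u in us)
# Left inequality of (4): L(u*, lam) <= L(u*, lam*) on Lambda0.
left = all(L(u_star, lam) <= L(u_star, lam_star) for lam in lams)
slack = lam_star * g(u_star)                # condition (6): lambda* g(u*) = 0
print(right, left, slack == 0.0)
```

Here L(u, λ*) = (u − 1)² + 1, so the right inequality holds with minimum value 1 at u* = 1, and g(u*) = 0 makes L(u*, λ) constant in λ, so the left inequality holds with equality.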
We note the following:

1) The main lemma and the main theorem were proved for the nonlinear programming problem (1), (2); in particular, they are also true for the convex programming problem.

2) A relationship between solutions of problem (1), (2) and saddle points of the Lagrange function of the form (3) has been established. In the general case the Lagrange function for problem (1), (2) is defined by the formula

L(u, λ) = λ_0 J(u) + Σ_{i=1}^s λ_i g_i(u), u ∈ U₀, λ ∈ Λ₀ = {λ = (λ_0, λ_1, …, λ_s) ∈ E^{s+1} | λ_0 ≥ 0, …, λ_m ≥ 0}.  (11)

If the value λ_0 > 0, then the Lagrange function (11) can be represented in the form L(u, λ) = λ_0 L(u, μ), where μ_i = λ_i/λ_0, i = 1, …, s, and the function L(u, μ) is defined by formula (3). In this case the main lemma and the main theorem remain true for the Lagrange function of the form (11).

3) Consider the following optimization problems: J(u) → inf, u ∈ U, and J_1(u) → inf, u ∈ U, where the function J_1(u) = J(u) + Σ_{i=1}^s λ_i* g_i(u). As follows from the proof of the main theorem, the conditions J_1(u) ≤ J(u), ∀u ∈ U, and J_1(u*) = J(u*), u* ∈ U, are fulfilled.

4) For the existence of a saddle point of the Lagrange function it is necessary that the set U* ≠ ∅. However, this condition does not guarantee the existence of a saddle point of the Lagrange function for the problem J(u) → inf, u ∈ U. Additional requirements must be imposed on the function J(u) and the set U, which reduces the efficiency of the method of Lagrange multipliers. We note that the method of Lagrange multipliers is an artificial technique for solving optimization problems which requires extra conditions on the problem.
Kuhn–Tucker's theorem. The question arises: what additional requirements must be imposed on the function J(u) and on the set U so that the Lagrange function (3) has a saddle point?

Theorems which establish under what conditions the Lagrange function has a saddle point are called Kuhn–Tucker theorems (Kuhn and Tucker are American mathematicians).

It can turn out that the source problem J(u) → inf, u ∈ U, U* ≠ ∅, has a solution, i.e. a point u* ∈ U* exists, yet the Lagrange function for the given problem has no saddle point.

Example. Let J(u) = u − 1 and U = {u ∈ E^1 | 0 ≤ u ≤ 1; (u − 1)² ≤ 0}. Here the function g(u) = (u − 1)², the set U₀ = {u ∈ E^1 | 0 ≤ u ≤ 1}, and the functions J(u) and g(u) are convex on the set U₀. Since the set U consists of a single element, i.e. U = {1}, the set U* = {1}; consequently, the point u* = 1. The Lagrange function L(u, λ) = (u − 1) + λ(u − 1)², λ ≥ 0, u ∈ U₀, of this problem has no saddle point. In fact, from formula (4) it would follow that

L(u*, λ) = 0 ≤ L(u*, λ*) = 0 ≤ L(u, λ*) = (u − 1) + λ*(u − 1)², ∀λ ≥ 0, 0 ≤ u ≤ 1.

Hence, for u = 1 − ε, 0 < ε ≤ 1, we would have 0 ≤ −ε + λ*ε². For sufficiently small ε > 0 there exists no number λ* ≥ 0 for which this inequality holds.
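The failure can also be seen numerically: for each candidate λ* the point u = 1 − 1/(2λ*) ∈ U₀ makes L(u, λ*) negative, while L(u*, λ*) = 0, so the right inequality of (4) is violated. A short Python sketch of this check (the grid of λ values is arbitrary):

```python
J = lambda u: u - 1.0
g = lambda u: (u - 1.0) ** 2        # U = {u in [0,1] : g(u) <= 0} = {1}
L = lambda u, lam: J(u) + lam * g(u)

u_star = 1.0                        # the unique feasible (hence optimal) point
violations = []
for lam in [0.0, 1.0, 10.0, 100.0, 1000.0]:
    # A point of U0 = [0,1] at which L(., lam) drops below L(u_star, lam) = 0:
    u = max(0.0, 1.0 - 1.0 / (2.0 * lam)) if lam > 0 else 0.0
    violations.append(L(u, lam) < L(u_star, lam))
print(all(violations))
```

For λ > 0 the chosen point gives L = −1/(4λ) < 0, so no matter how large λ* is taken, the saddle-point inequality fails, in agreement with the argument above.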


1st case. We consider the following convex programming problem:

J(u) → inf;  (12)

u ∈ U = {u ∈ E^n | u ∈ U₀, g_i(u) ≤ 0, i = 1, …, m},  (13)

where J(u), g_i(u), i = 1, …, m, are convex functions defined on the convex set U₀.

Definition 2. If there exists a point u_i ∈ U₀ such that the value g_i(u_i) < 0, then the constraint g_i(u) ≤ 0 is said to be regular on the set U₀. The set U is said to be regular if all the constraints g_i(u) ≤ 0, i = 1, …, m, from condition (13) are regular on the set U₀.

Suppose there exists a point ū ∈ U₀ such that

g_i(ū) < 0, i = 1, …, m.  (14)

Condition (14) is called Slater's condition.

Let all the constraints be regular at the points u¹, u², …, u^k ∈ U₀, i.e. g_j(u^i) < 0, j = 1, …, m, i = 1, …, k. Then at the point u = Σ_{i=1}^k α_i u^i ∈ U₀, α_i ≥ 0, i = 1, …, k, α_1 + … + α_k = 1, all the constraints g_j(u) ≤ 0, j = 1, …, m, are regular. In fact,

g_j(u) = g_j(Σ_{i=1}^k α_i u^i) ≤ Σ_{i=1}^k α_i g_j(u^i) < 0, j = 1, …, m.
Theorem 1. If J(u), g_i(u), i = 1, …, m, are convex functions defined on the convex set U₀, the set U is regular and U* ≠ ∅, then for each point u* ∈ U* there necessarily exist Lagrange multipliers λ* = (λ_1*, …, λ_m*) ∈ Λ₀ = {λ ∈ E^m | λ_1 ≥ 0, …, λ_m ≥ 0} such that the pair (u*, λ*) ∈ U₀ × Λ₀ forms a saddle point of the Lagrange function

L(u, λ) = J(u) + Σ_{i=1}^m λ_i g_i(u), u ∈ U₀, λ ∈ Λ₀.

As follows from the statement of the theorem, for the convex programming problem (12), (13) the Lagrange function has a saddle point if the set U is regular (provided that the condition U* ≠ ∅ holds). We note that in the example presented above the set U is not regular, since at the single feasible point u = 1 ∈ U₀ the constraint takes the value g(1) = 0, and there is no point of U₀ with g(u) < 0. We also note that if the set U is regular, then, as follows from Slater's condition (14), the point ū ∈ U ⊂ U₀. The proof of Theorem 1 is given in the following lectures.
Lectures 9, 10
KUHN-TUCKER'S THEOREM

For the convex programming problem, additional conditions imposed on the convex set U under which the Lagrange function has a saddle point are obtained. We recall that for solving optimization problems in a finite-dimensional space by the method of Lagrange multipliers it is necessary, besides fulfilment of the condition U* ≠ ∅, to have the existence of a saddle point of the Lagrange function. In this case the application of the main theorem is correct.

Proof of Theorem 1. The proof is based on the separation theorem for convex sets A and B in the space E^{m+1}. We define the sets A and B as follows:

A = {a = (a_0, a_1, …, a_m) ∈ E^{m+1} | a_0 ≥ J(u), a_1 ≥ g_1(u), …, a_m ≥ g_m(u), u ∈ U₀};  (15)

B = {b = (b_0, b_1, …, b_m) ∈ E^{m+1} | b_0 < J*, b_1 < 0, …, b_m < 0},  (16)

where J* = inf_{u∈U} J(u) = min_{u∈U} J(u), since the set U* ≠ ∅.

a) We show that the sets A and B have no points in common. In fact, from the inclusion a ∈ A it follows that a_0 ≥ J(u), a_1 ≥ g_1(u), …, a_m ≥ g_m(u) for some u ∈ U₀. If the point u ∈ U ⊂ U₀, then the inequality a_0 ≥ J(u) ≥ J* holds; consequently, the point a ∉ B. If the point u ∈ U₀ \ U, then for a certain index i, 1 ≤ i ≤ m, the inequality g_i(u) > 0 holds. Then a_i ≥ g_i(u) > 0 and once again the point a ∉ B. Finally, the intersection A ∩ B = ∅.
b) We show that the sets A and B are convex. Take two arbitrary points a ∈ A, d ∈ A and a number α ∈ [0, 1]. From the inclusion a ∈ A it follows that there is a point u ∈ U₀ such that a_0 ≥ J(u), a_i ≥ g_i(u), i = 1, …, m, a = (a_0, a_1, …, a_m). Similarly, from d ∈ A we have d_0 ≥ J(v), d_i ≥ g_i(v), i = 1, …, m, where v ∈ U₀, d = (d_0, d_1, …, d_m). Then the point ā = αa + (1 − α)d = (αa_0 + (1 − α)d_0, αa_1 + (1 − α)d_1, …, αa_m + (1 − α)d_m), α ∈ [0, 1]; moreover αa_0 + (1 − α)d_0 ≥ αJ(u) + (1 − α)J(v) ≥ J(ū) with consideration of the convexity of J(u) on U₀, where ū = αu + (1 − α)v. Similarly

αa_i + (1 − α)d_i ≥ αg_i(u) + (1 − α)g_i(v) ≥ g_i(ū), i = 1, …, m,

with consideration of the convexity of the functions g_i(u), i = 1, …, m, on the set U₀. Finally, the point ā = (ā_0, ā_1, …, ā_m) satisfies ā_0 ≥ J(ū), ā_i ≥ g_i(ū), i = 1, …, m, with ū ∈ U₀. This implies that the point ā ∈ A for all α, 0 ≤ α ≤ 1. Consequently, the set A is convex. In a similar way it is possible to prove the convexity of the set B.

c) Since the sets A and B are convex and A ∩ B = ∅, by Theorem 2 (Lecture 6) there exists a hyperplane ⟨c, w⟩ = γ with normal vector c = (λ_0*, λ_1*, …, λ_m*) ∈ E^{m+1}, c ≠ 0, w ∈ E^{m+1}, separating the sets A and B as well as their closures Ā ⊃ A and B̄ = {b = (b_0, b_1, …, b_m) ∈ E^{m+1} | b_0 ≤ J*, b_1 ≤ 0, …, b_m ≤ 0}. Consequently, the inequalities

⟨c, b⟩ = Σ_{i=0}^m λ_i* b_i ≤ γ ≤ ⟨c, a⟩ = Σ_{i=0}^m λ_i* a_i, ∀a ∈ Ā, ∀b ∈ B̄  (17)

are fulfilled. We note that if the point u* ∈ U*, then J(u*) = J*, g_i(u*) ≤ 0, i = 1, …, m; consequently, the vector y = (J*, 0, …, 0) ∈ Ā ∩ B̄ and the value γ = ⟨c, y⟩ = λ_0* J*.

Now inequalities (17) are written as

Σ_{i=0}^m λ_i* b_i ≤ λ_0* J* ≤ Σ_{i=0}^m λ_i* a_i, ∀a ∈ Ā, ∀b ∈ B̄.  (18)

Hence, in particular, for the vector b = (J* − 1, 0, …, 0) ∈ B̄ the left inequality gives λ_0*(J* − 1) ≤ λ_0* J*. Consequently, the value λ_0* ≥ 0. Similarly, choosing the vector b = (J*, 0, …, 0, −1, 0, …, 0) ∈ B̄ (with −1 in the i-th position), from the left inequality we get λ_0* J* − λ_i* ≤ λ_0* J*. This implies that λ_i* ≥ 0, i = 1, …, m.

d) Take any point u* ∈ U*. It is easy to verify that the point b̄ = (J*, 0, …, 0, g_i(u*), 0, …, 0) ∈ Ā ∩ B̄, since the value g_i(u*) ≤ 0. Substituting this point into the left and right inequalities (18), we get λ_0* J* + λ_i* g_i(u*) ≤ λ_0* J* ≤ λ_0* J* + λ_i* g_i(u*). Hence λ_i* g_i(u*) ≤ 0 ≤ λ_i* g_i(u*). Consequently, the values λ_i* g_i(u*) = 0, i = 1, …, m.
f) We show that the value λ_0* ≠ 0. In item c) it was shown that λ_0* ≥ 0. By the hypothesis of the theorem the set U is regular, i.e. Slater's condition (14) is fulfilled. Consequently, there exists a point ū ∈ U ⊂ U₀ such that g_i(ū) < 0, i = 1, …, m. We note that the point ā = (J(ū), g_1(ū), …, g_m(ū)) ∈ A. Then from the right inequality (18) we have

λ_0* J* ≤ λ_0* J(ū) + λ_1* g_1(ū) + … + λ_m* g_m(ū).

Suppose the contrary, i.e. λ_0* = 0. Since the vector c = (λ_0*, λ_1*, …, λ_m*) ≠ 0, at least one of the numbers λ_i*, 1 ≤ i ≤ m, differs from zero. Then from the previous inequality we have 0 ≤ Σ_{i=1}^m λ_i* g_i(ū) < 0, which is impossible, since λ_i* ≥ 0 and g_i(ū) < 0, i = 1, …, m, with not all λ_i* equal to zero. Finally, the number λ_0* is not equal to zero; consequently, λ_0* > 0. Without loss of generality it is possible to put λ_0* = 1.

g) Let u ∈ U₀ be an arbitrary point. Then the vector ā = (J(u), g_1(u), …, g_m(u)) ∈ A, and from the right inequality (18) we get

J* ≤ J(u) + Σ_{i=1}^m λ_i* g_i(u) = L(u, λ*), ∀u ∈ U₀.

Since the products λ_i* g_i(u*) = 0, i = 1, …, m, we have J* = J(u*) + Σ_{i=1}^m λ_i* g_i(u*) = L(u*, λ*) ≤ L(u, λ*), ∀u ∈ U₀. From this inequality and from the equalities λ_i* g_i(u*) = 0, i = 1, …, m, u* ∈ U, where λ_i* ≥ 0, i = 1, …, m, it follows that the pair (u*, λ*) ∈ U₀ × Λ₀ is a saddle point of the Lagrange function. The theorem is proved.
2nd case. We now consider the following convex programming problem:

J(u) → inf,  (19)

u ∈ U = {u ∈ E^n | u_j ≥ 0, j ∈ I; g_i(u) = ⟨a_i, u⟩ − b_i ≤ 0, i = 1, …, m; g_i(u) = ⟨a_i, u⟩ − b_i = 0, i = m+1, …, s},  (20)

where I ⊂ {1, 2, …, n} is a subset of indices; the set U₀ = {u ∈ E^n | u_j ≥ 0, j ∈ I}; J(u) is a convex function defined on the convex set U₀; a_i ∈ E^n, i = 1, …, s, are vectors; b_i, i = 1, …, s, are numbers. For problem (19), (20) the Lagrange function is

L(u, λ) = J(u) + Σ_{i=1}^s λ_i g_i(u), u ∈ U₀, λ = (λ_1, …, λ_s) ∈ Λ₀ = {λ ∈ E^s | λ_1 ≥ 0, …, λ_m ≥ 0}.  (21)

We note that if the function J(u) = ⟨c, u⟩ is linear, then problem (19), (20) is called the general problem of linear programming.

We suppose that the set U* ≠ ∅. It turns out that for the convex programming problem (19), (20) the Lagrange function (21) always has a saddle point, without any additional requirements on the convex set U. To prove this we need the following lemma.
Lemma. If a_1, …, a_p is a finite collection of vectors from E^n and α_i ≥ 0, i = 1, …, p, are numbers, then the set

Q = {a ∈ E^n | a = Σ_{i=1}^p α_i a_i, α_i ≥ 0, i = 1, …, p}  (22)

is a convex closed cone.

Proof. We show that Q is a cone. In fact, if a ∈ Q and β > 0 is an arbitrary number, then βa = β Σ_{i=1}^p α_i a_i = Σ_{i=1}^p ᾱ_i a_i, where ᾱ_i = βα_i ≥ 0, i = 1, …, p. This implies that the vector βa ∈ Q. Consequently, the set Q is a cone.

We show that Q is a convex cone. In fact, from a ∈ Q, b ∈ Q it follows that

αa + (1 − α)b = α Σ_{i=1}^p α_i a_i + (1 − α) Σ_{i=1}^p β_i a_i = Σ_{i=1}^p (αα_i + (1 − α)β_i) a_i = Σ_{i=1}^p γ_i a_i, γ_i = αα_i + (1 − α)β_i ≥ 0,

i = 1, …, p, where α ∈ [0, 1], b = Σ_{i=1}^p β_i a_i, β_i ≥ 0, i = 1, …, p. Hence αa + (1 − α)b ∈ Q for all α, 0 ≤ α ≤ 1. Consequently, the set Q is a convex cone.

We show that Q is a closed cone; we prove this by the method of mathematical induction. For p = 1 we have Q = {a ∈ E^n | a = αa_1, α ≥ 0}, a half-line; therefore Q is a closed set. Suppose that the set Q_{p−1} = {a ∈ E^n | a = Σ_{i=1}^{p−1} α_i a_i, α_i ≥ 0} is closed. We prove that the set Q = Q_{p−1} + {αa_p, α ≥ 0} is closed. Let c ∈ E^n be a limit point of the set Q. Then there exists a sequence {c_m} ⊂ Q such that c_m → c as m → ∞; it can be represented in the form c_m = b_m + α_m a_p, where {b_m} ⊂ Q_{p−1} and {α_m} is a numerical sequence. We prove that the sequence {α_m} is bounded. Suppose the contrary, i.e. α_{m_k} → ∞ as k → ∞. Since b_{m_k}/α_{m_k} = c_{m_k}/α_{m_k} − a_p and the sequence {c_{m_k}} is bounded (c_{m_k} → c as k → ∞), we have b_{m_k}/α_{m_k} → −a_p as k → ∞; since b_{m_k}/α_{m_k} ∈ Q_{p−1} and Q_{p−1} is closed, −a_p ∈ Q_{p−1}. Since the set Q_{p−1} ⊂ Q (the case α = 0), the vector −a_p ∈ Q. This is impossible. Therefore, the sequence {α_m} is bounded, and some subsequence α_{m_k} → ᾱ ≥ 0 as k → ∞. Then b_{m_k} = c_{m_k} − α_{m_k} a_p → c − ᾱa_p = b as k → ∞, and by the closedness of Q_{p−1} the point b ∈ Q_{p−1}. Consequently, the vector c = b + ᾱa_p ∈ Q. The lemma is proved.
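In E² with two linearly independent generators, membership in the cone Q of (22) reduces to solving a 2×2 linear system for the coefficients and checking that they are nonnegative. A minimal Python sketch (the generators and test points are chosen only for illustration):

```python
# Membership in the cone Q generated by a1, a2 in E^2: solve
# alpha1*a1 + alpha2*a2 = a by Cramer's rule and check alpha_i >= 0.
# The sketch assumes a1, a2 are linearly independent (det != 0).
def in_cone(a, a1, a2, tol=1e-12):
    det = a1[0] * a2[1] - a1[1] * a2[0]
    alpha1 = (a[0] * a2[1] - a[1] * a2[0]) / det
    alpha2 = (a1[0] * a[1] - a1[1] * a[0]) / det
    return alpha1 >= -tol and alpha2 >= -tol

a1, a2 = (1.0, 0.0), (1.0, 1.0)
inside = in_cone((3.0, 1.0), a1, a2)     # 2*a1 + 1*a2, both coefficients >= 0
outside = in_cone((-1.0, 0.5), a1, a2)   # would need alpha1 = -1.5 < 0
print(inside, outside)
```

Because Q is closed by the lemma, such a membership test is stable under small perturbations of points on the boundary rays, which is what the tolerance `tol` accounts for.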
Before proving the theorem on the existence of a saddle point of the Lagrange function for problem (19), (20), we formulate and prove Farkas' theorem, which has important significance in convex analysis.

Farkas' theorem. If the cone K with vertex at zero is determined by the inequalities

K = {e ∈ E^n | ⟨c_i, e⟩ < 0, i = 1, …, m; ⟨c_i, e⟩ ≤ 0, i = m+1, …, p; ⟨c_i, e⟩ = 0, i = p+1, …, s}, c_i ∈ E^n, i = 1, …, s,  (23)

then the dual cone K* has the form

K* = {c ∈ E^n | c = −Σ_{i=1}^s λ_i c_i, λ_1 ≥ 0, …, λ_p ≥ 0}.  (24)
Proof. Let the cone K be defined by formula (23). We show that the set of vectors c ∈ K*, for which ⟨c, e⟩ ≥ 0, ∀e ∈ K, is defined by formula (24), i.e. that the set

Q = {c ∈ E^n | c = −Σ_{i=1}^s λ_i c_i, λ_1 ≥ 0, …, λ_p ≥ 0}  (25)

coincides with K* = {c ∈ E^n | ⟨c, e⟩ ≥ 0, ∀e ∈ K}. Represent λ_i = λ_i′ − λ_i″, λ_i′ ≥ 0, λ_i″ ≥ 0, i = p+1, …, s. Then the set Q from (25) is written as

Q = {c ∈ E^n | c = Σ_{i=1}^p λ_i(−c_i) + Σ_{i=p+1}^s λ_i′(−c_i) + Σ_{i=p+1}^s λ_i″ c_i, λ_i ≥ 0, i = 1, …, p; λ_i′ ≥ 0, λ_i″ ≥ 0, i = p+1, …, s}.  (26)

As follows from expressions (26) and (22), the set Q is the convex closed cone generated by the vectors −c_1, …, −c_p, −c_{p+1}, …, −c_s, c_{p+1}, …, c_s (see the lemma).

We show that Q ⊂ K*. In fact, if c ∈ Q, i.e. c = −Σ_{i=1}^s λ_i c_i, λ_1 ≥ 0, …, λ_p ≥ 0, then the scalar product ⟨c, e⟩ = −Σ_{i=1}^s λ_i ⟨c_i, e⟩ ≥ 0 for e ∈ K, with consideration of relations (23). Consequently, the vector c ∈ Q belongs to the set K* = {c ∈ E^n | ⟨c, e⟩ ≥ 0, ∀e ∈ K}. This implies that Q ⊂ K*.

We show that K* ⊂ Q. Suppose the contrary, i.e. that a vector a ∈ K* exists with a ∉ Q. Since the set Q is a convex closed cone and the point a ∉ Q, by Theorem 1 (Lecture 6) the point a ∈ E^n is strongly separable from the set Q, i.e. ⟨d, c⟩ > ⟨d, a⟩, ∀c ∈ Q, where d, d ≠ 0, is the normal vector of the hyperplane ⟨d, u⟩ = γ, γ > ⟨d, a⟩. Hence we have

⟨d, c⟩ = −Σ_{i=1}^s λ_i ⟨d, c_i⟩ > ⟨d, a⟩, ∀λ_i, i = 1, …, s, λ_1 ≥ 0, …, λ_p ≥ 0.  (27)

We choose the vector λ = (λ_1, …, λ_s) as follows: λ_i = 0, i ≠ j, i = 1, …, s; λ_j = t, t > 0, for j with 1 ≤ j ≤ p. For this vector λ inequality (27) is written in the form −t⟨d, c_j⟩ > ⟨d, a⟩. We divide by t > 0 and let t → ∞; as a result we get ⟨c_j, d⟩ ≤ 0, j = 1, …, p. Next we take the vector λ_i = 0, i ≠ j, i = 1, …, s; λ_j = t⟨c_j, d⟩, t > 0, for j with p+1 ≤ j ≤ s. From inequality (27) we have −t⟨c_j, d⟩² > ⟨d, a⟩. We divide by t > 0 and, letting t → ∞, we get ⟨c_j, d⟩ = 0, j = p+1, …, s. Finally, for the vector d ∈ E^n the relations ⟨c_j, d⟩ ≤ 0, j = 1, …, p, ⟨c_j, d⟩ = 0, j = p+1, …, s, are fulfilled. Then from (23) it follows that d ∈ K̄, where K̄ is the closure of the set K.

Since the vector a ∈ K*, the inequality ⟨a, e⟩ ≥ 0, ∀e ∈ K, holds, and by continuity it holds for all e ∈ K̄ as well. Hence, in particular, for e = d ∈ K̄ it follows that ⟨a, d⟩ ≥ 0. However, from (27) with λ_i = 0, i = 1, …, s, we have 0 > ⟨a, d⟩. We have obtained a contradiction. Consequently, every vector a ∈ K* belongs to the set Q. From the inclusions Q ⊂ K*, K* ⊂ Q it follows that K* = Q. The theorem is proved.
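Formula (24) can be spot-checked in E². A small Python sketch (the vectors c_1, c_2 and the test vectors are illustrative; here m = 0, with one non-strict inequality constraint and one equality constraint, so p = 1, s = 2):

```python
dot = lambda p, q: p[0] * q[0] + p[1] * q[1]

c1, c2 = (1.0, 0.0), (0.0, 1.0)
# K = {e : <c1, e> <= 0, <c2, e> = 0} = {(e1, 0) : e1 <= 0}
K = [(-0.01 * k, 0.0) for k in range(1000)]

# By (24): K* = {-lam1*c1 - lam2*c2 : lam1 >= 0, lam2 arbitrary}
#             = {c : first coordinate <= 0, second coordinate arbitrary}
c_in = (-2.0, 5.0)       # lam1 = 2 >= 0, lam2 = -5 (sign-free)
c_out = (1.0, 3.0)       # would need lam1 = -1 < 0, so c_out is not in K*
ok_in = all(dot(c_in, e) >= 0 for e in K)
witness = min(dot(c_out, e) for e in K)   # a point of K with <c_out, e> < 0
print(ok_in, witness < 0)
```

Note that the multiplier attached to the equality constraint (lam2) carries no sign restriction, which is exactly the distinction between the indices i ≤ p and i > p in (24).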
We now formulate the theorem on the existence of a saddle point of the Lagrange function (21) for the convex programming problem (19), (20).

Theorem 2. If J(u) is a convex function on the convex set U₀, J(u) ∈ C¹(U₀), and the set U* ≠ ∅ for problem (19), (20), then for each point u* ∈ U* there necessarily exist Lagrange multipliers λ* = (λ_1*, …, λ_s*) ∈ Λ₀ = {λ ∈ E^s | λ_1 ≥ 0, …, λ_m ≥ 0} such that the pair (u*, λ*) ∈ U₀ × Λ₀ forms a saddle point of the Lagrange function (21) on the set U₀ × Λ₀.
Proof. Let the conditions of the theorem be fulfilled. We show that a pair (u*, λ*) ∈ U₀ × Λ₀ is a saddle point of the Lagrange function (21). Let u* ∈ U* be an arbitrary point. We describe the feasible directions emanating from the point u* for the convex set U. Recall that a vector e = (e_1, …, e_n) ∈ E^n is called a feasible direction at the point u* if there exists a number ε₀ > 0 such that u = u* + εe ∈ U for all ε, 0 ≤ ε ≤ ε₀. From the inclusion u = u* + εe ∈ U, with consideration of expressions (20), we have

u_j* + εe_j ≥ 0, j ∈ I; ⟨a_i, u* + εe⟩ − b_i ≤ 0, i = 1, …, m; ⟨a_i, u* + εe⟩ − b_i = 0, i = m+1, …, s, ∀ε, 0 ≤ ε ≤ ε₀.  (28)

If the index sets I_1 = {j | u_j* = 0, j ∈ I}, I_2 = {i | ⟨a_i, u*⟩ = b_i, 1 ≤ i ≤ m} are introduced, then conditions (28) are written as

e_j ≥ 0, j ∈ I_1; ⟨a_i, e⟩ ≤ 0, i ∈ I_2; ⟨a_i, e⟩ = 0, i = m+1, …, s.  (29)

Finally, according to relations (29), the set of feasible directions at the point u* is the cone

K = {e ∈ E^n | ⟨−ē^j, e⟩ ≤ 0, j ∈ I_1; ⟨a_i, e⟩ ≤ 0, i ∈ I_2; ⟨a_i, e⟩ = 0, i = m+1, …, s},  (30)

where ē^j = (0, …, 0, 1, 0, …, 0) ∈ E^n is the j-th unit vector. The converse statement is also true: if e ∈ K, then e is a feasible direction.

Since J(u) is a convex function on the convex set U ⊂ U₀ and J(u) ∈ C¹(U), by Theorem 4 (Lecture 5) for a minimum at the point u* ∈ U* it is necessary and sufficient that the inequality ⟨J′(u*), u − u*⟩ ≥ 0, ∀u ∈ U, holds. Hence, taking into account that u − u* = εe, 0 ≤ ε ≤ ε₀, e ∈ K, we have ⟨J′(u*), e⟩ ≥ 0, ∀e ∈ K. Consequently, the vector J′(u*) ∈ K*. By Farkas' theorem the dual cone K* of the cone (30) is defined by formula (24); thus there exist numbers μ_j* ≥ 0, j ∈ I_1; λ_i* ≥ 0, i ∈ I_2; λ_{m+1}*, …, λ_s* such that

J′(u*) = Σ_{j∈I_1} μ_j* ē^j − Σ_{i∈I_2} λ_i* a_i − Σ_{i=m+1}^s λ_i* a_i.  (31)

Let λ_i* = 0 for i ∈ {1, 2, …, m} \ I_2. Then expression (31) is written as

J′(u*) = −Σ_{i=1}^s λ_i* a_i + Σ_{j∈I_1} μ_j* ē^j.  (32)

We note that λ_i* g_i(u*) = 0, i = 1, …, s, since g_i(u*) = 0 for i ∈ I_2 and for i = m+1, …, s, while λ_i* = 0 for the remaining indices i, 1 ≤ i ≤ m. As follows from expression (21), for any u ∈ U₀ the difference

L(u, λ*) − L(u*, λ*) = J(u) − J(u*) + Σ_{i=1}^s λ_i* ⟨a_i, u − u*⟩.  (33)

Since the convex function J(u) ∈ C¹(U₀), by Theorem 1 (Lecture 4) the difference J(u) − J(u*) ≥ ⟨J′(u*), u − u*⟩, ∀u ∈ U₀. Now equality (33), with consideration of relations (32), can be written in the form

L(u, λ*) − L(u*, λ*) ≥ ⟨J′(u*) + Σ_{i=1}^s λ_i* a_i, u − u*⟩ = Σ_{j∈I_1} μ_j* ⟨ē^j, u − u*⟩ = Σ_{j∈I_1} μ_j* (u_j − u_j*) = Σ_{j∈I_1} μ_j* u_j ≥ 0,

since ⟨ē^j, u − u*⟩ = u_j − u_j*, u_j* = 0 for j ∈ I_1, and u_j ≥ 0, μ_j* ≥ 0 for j ∈ I_1. This implies that L(u*, λ*) ≤ L(u, λ*), ∀u ∈ U₀. From this inequality and the equalities λ_i* g_i(u*) = 0, i = 1, …, s, it follows by the main lemma that the pair (u*, λ*) ∈ U₀ × Λ₀ is a saddle point of the Lagrange function (21). The theorem is proved.
3-rd case. We consider the more general convex programming problem:

J(u) → inf,      (34)

u ∈ U = {u ∈ Eⁿ : u ∈ U₀, g_i(u) ≤ 0, i = 1,…,m; g_i(u) = ⟨a_i, u⟩ − b_i ≤ 0, i = m+1,…,p;
g_i(u) = ⟨a_i, u⟩ − b_i = 0, i = p+1,…,s},      (35)

where J(u), g_i(u), i = 1,…,m, are convex functions defined on the convex set U₀; a_i ∈ Eⁿ, i = m+1,…,s, are given vectors; b_i, i = m+1,…,s, are given numbers. The Lagrange function for problem (34), (35) is

L(u, λ) = J(u) + Σ_{i=1}^s λ_i g_i(u), u ∈ U₀,
λ ∈ Λ₀ = {λ ∈ E^s : λ₁ ≥ 0, …, λ_p ≥ 0}.      (36)

Theorem 3. If J(u), g_i(u), i = 1,…,m, are convex functions defined on the convex set U₀, the set U* ≠ Ø for problem (34), (35), and there exists a point ū ∈ ri U₀ ∩ U such that g_i(ū) < 0, i = 1,…,m, then for each point u* ∈ U* there necessarily exist Lagrange multipliers λ* = (λ₁*, …, λ_s*) ∈ Λ₀ = {λ ∈ E^s : λ₁ ≥ 0, …, λ_p ≥ 0} such that the pair (u*, λ*) ∈ U₀ × Λ₀ forms a saddle point of the Lagrange function (36) on the set U₀ × Λ₀.
The proof of this theorem requires finer separation theorems for convex sets than Theorems 1–6 (Lectures 6, 7). For more general separation theorems for convex sets and other forms of the Kuhn–Tucker theorem the reader is referred to the book: Rockafellar R. Convex Analysis. Moscow, 1973.
We note the following:
1°. Theorems 1–3 give sufficient conditions for the existence of a saddle point in the convex programming problem.
Example 1. Let the function be J(u) = 1 − u, and the set U = {u ∈ E¹ : 0 ≤ u ≤ 1; (1 − u)² ≤ 0}. For this example the functions J(u), g(u) = (1 − u)² are convex on the convex set U₀ = {u ∈ E¹ : 0 ≤ u ≤ 1}. The set U = {1}, consequently the set U* = {1}. The point u* = 1 and the value J(u*) = 0 are the solution of the problem J(u) → inf, u ∈ U. The Lagrange function is L(u, λ) = (1 − u) + λ(1 − u)², u ∈ U₀, λ ≥ 0. The pair (u* = 1, λ* = 0) is a saddle point of the Lagrange function, since L(u*, λ) = L(u*, λ*) = 0 ≤ L(u, λ*) = (1 − u) + λ*(1 − u)², 0 ≤ u ≤ 1, and λ* g(u*) = 0.
We notice that for this example neither Slater's condition from Theorem 1 nor the condition of Theorem 3 is fulfilled.
2°. In the general case the Lagrange multipliers for a point u* ∈ U* are defined ambiguously. In the example above the pair (u* = 1, λ*) is a saddle point of the Lagrange function for any λ* ≥ 0.
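The saddle-point inequalities of this example are easy to confirm numerically. The following sketch (ours, not from the book) grids U₀ = [0, 1] and the multiplier half-line in Python:

```python
# Numerical check (ours) of the saddle-point inequalities for Example 1:
#   L(u*, lam) <= L(u*, lam*) <= L(u, lam*)  on  U0 x Lambda0.

def L(u, lam):
    """Lagrange function L(u, lam) = (1 - u) + lam*(1 - u)^2."""
    return (1.0 - u) + lam * (1.0 - u) ** 2

u_star, lam_star = 1.0, 0.0

us = [i / 100.0 for i in range(101)]       # grid on U0 = [0, 1]
lams = [i / 10.0 for i in range(51)]       # grid on lam >= 0

# Left inequality: L(u*, .) is identically 0, so any lam >= 0 attains the sup.
assert all(L(u_star, lam) <= L(u_star, lam_star) for lam in lams)
# Right inequality: u* minimizes L(., lam*) = 1 - u over U0? No: 1 - u >= 0 = L(u*, lam*).
assert all(L(u_star, lam_star) <= L(u, lam_star) for u in us)
```

The check also makes note 2° visible: replacing `lam_star` by any positive value leaves both inequalities intact.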

3°. Fulfilment of the conditions of Theorems 1–3 guarantees the existence of a saddle point of the Lagrange function. When these conditions fail, the existence of a saddle point, both for convex and for nonlinear programming problems, remains a little-studied area of the theory of extremal problems.
Solution algorithm for the convex programming problem. Convex programming problems written in the form (34), (35) often occur in applied research. On the basis of the theory stated above we briefly give the solution sequence for the convex programming problem.
1°. Make sure that the set U* ≠ Ø for problem (34), (35). For this use Theorems 1–3 from Lecture 2 (Weierstrass' theorem).
2°. Check the fulfilment of the conditions of Theorems 1–3, depending on the type of the convex programming problem, which guarantee the existence of a saddle point of the Lagrange function. For instance, if the problem has the form (12), (13), show that the set U is regular; if the problem has the form (19), (20), then J(u) ∈ C¹(U₀) is necessary; and for problem (34), (35) the existence of a point ū ∈ ri U₀ ∩ U for which g_i(ū) < 0, i = 1,…,m, is necessary.
3°. Form the Lagrange function L(u, λ) = J(u) + Σ_{i=1}^s λ_i g_i(u) with domain of definition U₀ × Λ₀, where Λ₀ = {λ ∈ E^s : λ₁ ≥ 0, …, λ_p ≥ 0}.
4°. Find a saddle point (u*, λ*) ∈ U₀ × Λ₀ of the Lagrange function from the conditions (the main lemma from Lecture 8):

L(u*, λ*) ≤ L(u, λ*), ∀u ∈ U₀,
λ_i* g_i(u*) = 0, i = 1,…,s; λ₁* ≥ 0, …, λ_p* ≥ 0.      (37)

a) As follows from the first condition, the function L(u, λ*) reaches its minimum on the set U₀ at the point u* ∈ U* ⊂ U₀. Since the functions J(u), g_i(u), i = 1,…,m, are convex on the convex set U₀, the function L(u, λ*) is convex on U₀. If the functions J(u), g_i(u) ∈ C¹(U₀), i = 1,…,m, then according to the optimality criterion (Lecture 5) and the theorem on the global minimum (Lecture 5) the first condition in (37) can be replaced by ⟨L_u(u*, λ*), u − u*⟩ ≥ 0, ∀u ∈ U₀, where L_u(u*, λ*) = J′(u*) + Σ_{i=1}^s λ_i* g_i′(u*). Now conditions (37) are written as

⟨L_u(u*, λ*), u − u*⟩ ≥ 0, ∀u ∈ U₀,
λ_i* g_i(u*) = 0, i = 1,…,s; λ₁* ≥ 0, …, λ_p* ≥ 0.      (38)

b) If, besides J(u) ∈ C¹(U₀), g_i(u) ∈ C¹(U₀), i = 1,…,m, the set U₀ = Eⁿ, then according to the optimality criterion (Lecture 5) conditions (38) can be presented in the form

L_u(u*, λ*) = 0; λ_i* g_i(u*) = 0, i = 1,…,s;
λ₁* ≥ 0, …, λ_p* ≥ 0.      (39)

Conditions (39) represent a system of n + s algebraic equations in the n + s unknowns u* = (u₁*, …, u_n*), λ* = (λ₁*, …, λ_s*). We notice that if the Lagrange function has a saddle point, then the system of algebraic equations (39) has a solution, and moreover λ₁* ≥ 0, …, λ_p* ≥ 0 must hold. Conditions (39) are used also in the case when U₀ ≠ Eⁿ; however, in this case it is necessary to make sure that the point u* ∈ U₀.
 
5°. We suppose that the pair (u*, λ*) ∈ U₀ × Λ₀ has been determined. Then the point u* ∈ U and the value J* = J(u*) give the solution of problem (34), (35) (refer to the main theorem from Lecture 8).
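The scheme 1°–5° can be illustrated numerically. The sketch below is ours; the example problem J(u) = (u₁ − 2)² + (u₂ − 1)², g(u) = u₁² + u₂² − 1 ≤ 0 is assumed, not taken from the book. It minimizes the convex J over the unit ball by projected gradient descent and then recovers λ* from a stationarity condition of the type (38):

```python
import math

# Sketch (example problem assumed, not from the book): solve the convex program
#   J(u) = (u1 - 2)^2 + (u2 - 1)^2 -> inf,  g(u) = u1^2 + u2^2 - 1 <= 0.
# U* != Ø by Weierstrass' theorem; Slater's condition holds at u = 0.

def project(u):
    """Euclidean projection onto the unit ball {u : ||u|| <= 1}."""
    n = math.hypot(u[0], u[1])
    return u if n <= 1.0 else (u[0] / n, u[1] / n)

u = (0.0, 0.0)
for _ in range(500):
    grad = (2.0 * (u[0] - 2.0), 2.0 * (u[1] - 1.0))          # J'(u)
    u = project((u[0] - 0.1 * grad[0], u[1] - 0.1 * grad[1]))

s5 = math.sqrt(5.0)
assert abs(u[0] - 2.0 / s5) < 1e-6 and abs(u[1] - 1.0 / s5) < 1e-6

# Stationarity 2(u1 - 2) + 2*lam*u1 = 0 gives lam* >= 0; complementary
# slackness holds because the constraint is active, g(u*) = 0.
lam = -(u[0] - 2.0) / u[0]
assert lam > 0.0 and abs(u[0] ** 2 + u[1] ** 2 - 1.0) < 1e-6
```

The analytic solution is the projection of (2, 1) onto the circle, u* = (2, 1)/√5, with multiplier λ* = √5 − 1 > 0, so the computed pair satisfies conditions (38).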

Chapter II. NONLINEAR
PROGRAMMING

For the nonlinear programming problem there are no theorems, analogous to those for convex programming problems, that guarantee the existence of a saddle point of the Lagrange function. It is necessary to note that if in some way it is established that a pair (u*, λ*) ∈ U₀ × Λ₀ is a saddle point of the Lagrange function, then according to the main theorem the point u* ∈ U* is a point of global minimum in the nonlinear programming problem. We formulate the necessary conditions of optimality for the nonlinear programming problem by means of the generalized Lagrange function. We notice that a point u* ∈ U determined from the necessary optimality conditions is, in the general case, not a solution of the problem, but only a "suspected" point. Some additional research is required; at the least, one must answer the question: is the point u* ∈ U a point of local minimum of the function J(u) on the set U?
Lectures 11, 12.
STATEMENT OF THE PROBLEM.
NECESSARY OPTIMALITY CONDITIONS

Statement of the problem. The following problem often occurs in practice:

J(u) → inf,      (1)

u ∈ U = {u ∈ Eⁿ : u ∈ U₀, g_i(u) ≤ 0, i = 1,…,m;
g_i(u) = 0, i = m+1,…,s},      (2)

where J(u), g_i(u), i = 1,…,s, are functions defined on a convex set U₀ from Eⁿ.
Entering the notations

U_i = {u ∈ Eⁿ : g_i(u) ≤ 0}, i = 1,…,m,
U_{m+1} = {u ∈ Eⁿ : g_i(u) = 0, i = m+1,…,s},      (3)

the set U can be presented in the form

U = U₀ ∩ U₁ ∩ … ∩ U_m ∩ U_{m+1}.      (4)

Now problem (1), (2) can be written in the form: J(u) → inf, u ∈ U.
We suppose that J* = inf{J(u) : u ∈ U} > −∞ and that the set

U* = {u* ∈ Eⁿ : u* ∈ U, J(u*) = min_{u∈U} J(u)} ≠ Ø.

We notice that if the set U* ≠ Ø, then J* = J(u*) = min_{u∈U} J(u). It is required to find the point u* ∈ U* and the value J* = J(u*).


For problem (1), (2) the generalized Lagrange function has the form

L(u, λ) = λ₀ J(u) + Σ_{i=1}^s λ_i g_i(u), u ∈ U₀, λ = (λ₀, λ₁, …, λ_s),
λ ∈ Λ₀ = {λ ∈ E^{s+1} : λ₀ ≥ 0, λ₁ ≥ 0, …, λ_m ≥ 0}.      (5)

Let U₀¹ be an open set containing the set U₀, U₀ ⊂ U₀¹.

Theorem 1 (the necessary optimality conditions). If the functions J(u) ∈ C¹(U₀¹), g_i(u) ∈ C¹(U₀¹), i = 1,…,s, int U₀ ≠ Ø, U₀ is a convex set, and the set U* ≠ Ø, then for each point u* ∈ U* there necessarily exist Lagrange multipliers λ* = (λ₀*, λ₁*, …, λ_s*) ∈ Λ₀ such that the following conditions are fulfilled:

λ* ≠ 0, λ₀* ≥ 0, λ₁* ≥ 0, …, λ_m* ≥ 0,      (6)

⟨L_u(u*, λ*), u − u*⟩ =
= ⟨λ₀* J′(u*) + Σ_{i=1}^s λ_i* g_i′(u*), u − u*⟩ ≥ 0, ∀u ∈ U₀,      (7)

λ_i* g_i(u*) = 0, i = 1,…,s.      (8)

The proof of the theorem relies on Theorems 5, 6 (Lecture 7) on the condition for empty intersection of convex cones (the Dubovitsky–Milyutin theorem) and is presented below. We comment on the conditions of Theorem 1.
We note the following:
a) In contrast to the analogous theorems in convex programming problems, it is not asserted that the pair (u*, λ*) ∈ U₀ × Λ₀ is a saddle point of the Lagrange function, i.e., the conditions of the main theorem need not be fulfilled.
b) Since the pair (u*, λ*) in the general case is not a saddle point, it does not follow from conditions (6)–(8) that the point u* ∈ U is a solution of problem (1), (2).
c) If the value λ₀* > 0, then problem (1), (2) is called nondegenerate. In this case it is possible to take λ₀* = 1, since the Lagrange function is linear with respect to λ.
d) If λ₀* = 0, then problem (1), (2) is called degenerate.
The number of unknown Lagrange multipliers, independently of whether problem (1), (2) is degenerate or nondegenerate, can be reduced by one by entering a normalization condition, i.e., condition (6) can be replaced by

‖λ*‖ = ν, λ₀* ≥ 0, λ₁* ≥ 0, …, λ_m* ≥ 0,      (9)

where ν > 0 is any given number, in particular ν = 1.

Example 1. Let J(u) = u − 1,

U = {u ∈ E¹ : 0 ≤ u ≤ 1; g(u) = (u − 1)² ≤ 0}.

The generalized Lagrange function for this problem has the form L(u, λ) = λ₀(u − 1) + λ(u − 1)², u ∈ U₀ = {u ∈ E¹ : 0 ≤ u ≤ 1}, λ₀ ≥ 0, λ ≥ 0. The set U = {1}, consequently the set U* = {1}, and moreover J(u*) = 0, g(u*) = 0. The condition (7) is written as (λ₀* J′(u*) + λ* g′(u*))(u − u*) ≥ 0, ∀u ∈ U₀. Since g′(u*) = 0, we have λ₀*(u − 1) ≥ 0, ∀u ∈ U₀, or (−λ₀*)(1 − u) ≥ 0, 0 ≤ u ≤ 1. Since the expression (1 − u) ≥ 0 for 0 ≤ u ≤ 1, for the inequality to be fulfilled it is necessary that λ₀* ≤ 0, while λ₀* ≥ 0 by condition (6); hence λ₀* = 0. Finally, the source problem is degenerate. The condition (8) is fulfilled for any λ* ≥ 0. Thereby, all conditions of Theorem 1 are fulfilled at the point (u* = 1, λ₀* = 0, λ* > 0).
We notice that the ordinary Lagrange function L(u, λ) = (u − 1) + λ(u − 1)² for this problem has no saddle point.
Example 2. Let J(u) = J(u₁, u₂) = u₁ + cos u₂,

U = {(u₁, u₂) ∈ E² : g(u) = −u₁ ≤ 0}.

The set U₀ = E². The generalized Lagrange function is

L(u₁, u₂, λ₀, λ) = λ₀(u₁ + cos u₂) − λu₁, λ₀ ≥ 0, λ ≥ 0, u = (u₁, u₂) ∈ E².

Since the set U₀ = E², the condition (7) is written as L_u(u*, λ*) = 0. It follows that λ₀* − λ* = 0, λ₀* sin u₂* = 0; consequently λ₀* = λ* > 0 (if λ₀* = 0, then λ* = 0, contradicting (6)) and u₂* = kπ, k = 0, ±1, ±2, …. From condition (9) it follows that it is possible to take λ₀* = λ* = 1, where ν = √2. From condition (8) we have u₁* = 0. Finally, the necessary conditions of optimality (6)–(8) are fulfilled at the points u* = (u₁*, u₂*) = (0, kπ), λ₀* = 1, λ* = 1. To define in which of the points (0, kπ) the minimum of J(u) on U is reached, additional research is required. It is easy to make sure that the minimum of J(u) on U is reached at the points u* = (0, ±(2m + 1)π), where m = 0, 1, ….
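This conclusion admits a quick numerical check (a sketch of ours, not from the book):

```python
import math

# Quick check (ours) that J(u1, u2) = u1 + cos(u2) on U = {u1 >= 0} attains its
# minimum value -1 at the points (0, (2m+1)*pi) selected from the KKT candidates.

def J(u1, u2):
    return u1 + math.cos(u2)

J_star = J(0.0, math.pi)
assert J_star == -1.0                       # cos(pi) is exactly -1.0 in floats

# J >= J_star over a grid of the feasible set, since u1 >= 0 and cos(u2) >= -1.
grid = [(0.1 * i, 0.1 * j) for i in range(50) for j in range(-60, 60)]
assert all(J(u1, u2) >= J_star for u1, u2 in grid)
```

At the even-multiple candidates (0, 2mπ) the value is J = +1, which is why the additional research is needed to discard them.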
To prove Theorem 1 it is required to construct the following cones at the point u*: K_y — the directions of decrease of the function J(u); K_{b_i} — the internal directions of the sets U_i, i = 1,…,m; K₀ — the internal directions of the set U₀; K_{m+1} = K_k — the tangent directions to the set U_{m+1}.
Cone construction. We define the cones K_y, K_{b_i}, i = 1,…,m, K_k at the point u* ∈ U.
Definition 1. A vector e ∈ Eⁿ, e ≠ 0, is called a direction of decrease of the function J(u) at the point u* ∈ U if there exist numbers ε₀ > 0, δ > 0 such that for all ē ∈ o(δ, e) = {ē ∈ Eⁿ : ‖ē − e‖ ≤ δ} the following inequality is fulfilled:

J(u* + εē) < J(u*), ∀ε, 0 < ε ≤ ε₀.      (10)

We denote by K_y = K_y(u*) the set of all directions of decrease of the function J(u) at the point u*. Finally, the set

K_y = {e ∈ Eⁿ : ‖ē − e‖ ≤ δ ⇒ J(u* + εē) < J(u*), ∀ε, 0 < ε ≤ ε₀}.      (11)

As follows from expression (11), the set K_y contains, together with a point e, its δ-neighborhood; consequently, K_y is an open set. The point e = 0 ∉ K_y. Since the function J(u) ∈ C¹(U₀¹), the difference [refer to formula (10), with ē = e ∈ K_y]

J(u* + εe) − J(u*) = ε⟨J′(u* + θεe), e⟩ < 0, ∀ε, 0 < ε ≤ ε₀, 0 < θ < 1.

We notice that the open set U₀¹ contains a neighborhood of the point u* ∈ U. Hence, dividing by ε > 0 and letting ε → 0, we get ⟨J′(u*), e⟩ ≤ 0; since the set K_y is open, in fact ⟨J′(u*), e⟩ < 0. Consequently, the set

K_y = {e ∈ Eⁿ : ⟨J′(u*), e⟩ < 0}      (12)

is an open convex cone. By Farkas' theorem the dual cone to the cone (12) is defined by the formula

K_y* = {c_y ∈ Eⁿ : c_y = −λ₀ J′(u*), λ₀ ≥ 0}.      (13)
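The characterization (12) can be illustrated numerically: any e with ⟨J′(u*), e⟩ < 0 decreases J along u* + εe for small ε > 0. A sketch with assumed example data (ours, not from the book):

```python
# Sketch with assumed data: J(u) = u1^2 + u2^2, u* = (1, 0), so J'(u*) = (2, 0).
# The vector e = (-1, 0.3) satisfies <J'(u*), e> = -2 < 0, hence e lies in K_y
# and J must decrease along u* + eps*e for small eps.

def J(u1, u2):
    return u1 * u1 + u2 * u2

u_star = (1.0, 0.0)
grad = (2.0 * u_star[0], 2.0 * u_star[1])
e = (-1.0, 0.3)
assert grad[0] * e[0] + grad[1] * e[1] < 0.0        # e in K_y by (12)

for k in range(1, 11):                               # eps = 0.01, ..., 0.1
    eps = 0.01 * k
    assert J(u_star[0] + eps * e[0], u_star[1] + eps * e[1]) < J(*u_star)
```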

Definition 2. A vector e ∈ Eⁿ, e ≠ 0, is called an internal direction of the set U_i at the point u* ∈ U if there exist numbers ε₀ > 0, δ > 0 such that for all ē ∈ o(δ, e) the inclusion u* + εē ∈ U_i, ∀ε, 0 < ε ≤ ε₀, holds.
We denote by K_{b_i} = K_{b_i}(u*) the set of all internal directions of the set U_i at the point u* ∈ U. Finally, the set

K_{b_i} = {e ∈ Eⁿ : ‖ē − e‖ ≤ δ ⇒ u* + εē ∈ U_i, ∀ε, 0 < ε ≤ ε₀}.      (14)

We notice that K_{b_i} is an open set, and the point e = 0 ∉ K_{b_i}. For problem (1), (2) the set U_i, i = 1,…,m, is defined by formula (3), i.e.

U_i = {u ∈ Eⁿ : g_i(u) ≤ 0}, i = 1,…,m.

Then the set K_{b_i} defined by formula (14) can be written as

K_{b_i} = {e ∈ Eⁿ : g_i(u* + εē) ≤ 0, ∀ε, 0 < ε ≤ ε₀, ∀ē, ‖ē − e‖ ≤ δ}.      (15)

Since the point u* ∈ U, two events are possible: 1) g_i(u*) < 0; 2) g_i(u*) = 0. We consider the event when g_i(u*) < 0. In this case, on the strength of the continuity of the function g_i(u) on the set U₀, a number ε₀ > 0 is found such that the value g_i(u* + εe) < 0 for all ε, 0 < ε ≤ ε₀, and for any vector e ∈ Eⁿ. Then the set K_{b_i} = Eⁿ is an open cone, and the cone dual to it is K_{b_i}* = {0}.
In the event g_i(u*) = 0 we have g_i(u* + εe) ≤ g_i(u*) = 0 [refer to formula (15)]. Hence, with consideration of the fact that the function g_i(u) ∈ C¹(U₀¹), we get ε⟨g_i′(u* + θεe), e⟩ ≤ 0, ∀ε, 0 < ε ≤ ε₀. We divide by ε > 0 and let ε → 0; as a result, the given inequality is written as ⟨g_i′(u*), e⟩ ≤ 0, and since K_{b_i} is open, in fact ⟨g_i′(u*), e⟩ < 0. Then the open set is defined by the formula

K_{b_i} = {e ∈ Eⁿ : ⟨g_i′(u*), e⟩ < 0}, i = 1,…,m.      (16)

By Farkas' theorem the dual cone to the cone (16) is written as

K_{b_i}* = {c_i ∈ Eⁿ : c_i = −λ_i g_i′(u*), λ_i ≥ 0}, i = 1,…,m.      (17)

We notice that K_{b_i}, i = 1,…,m, are open convex cones.


Definition 3. A vector e ∈ Eⁿ, e ≠ 0, is called a tangent direction to the set U_{m+1} at the point u* ∈ U if there exist a number ε₀ > 0 and a function r(ε) = (r₁(ε), …, r_n(ε)), 0 < ε ≤ ε₀, such that r(0) = 0, ‖r(ε)‖/ε → 0 as ε → 0, and u* + εe + r(ε) ∈ U_{m+1}, ∀ε, 0 < ε ≤ ε₀.
We denote by K_{m+1} = K_{m+1}(u*) the set of all tangent directions to the set U_{m+1} at the point u* ∈ U. From the given definition it follows that the set

K_{m+1} = {e ∈ Eⁿ : u* + εe + r(ε) ∈ U_{m+1}, ∀ε, 0 < ε ≤ ε₀;
r(0) = 0, ‖r(ε)‖/ε → 0 as ε → 0}.

According to formula (3) the set

U_{m+1} = {u ∈ Eⁿ : g_i(u) = 0, i = m+1,…,s},

consequently,

K_{m+1} = {e ∈ Eⁿ : g_i(u* + εe + r(ε)) = 0, i = m+1,…,s,
∀ε, 0 < ε ≤ ε₀},      (18)

where the vector function r(ε) possesses the properties r(0) = 0, ‖r(ε)‖/ε → 0 as ε → 0. Since the functions g_i(u) ∈ C¹(U₀¹), i = m+1,…,s, and g_i(u*) = 0, i = m+1,…,s, u* ∈ U, the differences

g_i(u* + εe + r(ε)) − g_i(u*) = ε⟨g_i′(u*), e⟩ + ⟨g_i′(u*), r(ε)⟩ + o_i(ε, u*) = 0, i = m+1,…,s.

Hence, dividing by ε > 0 and letting ε → 0, with consideration of the fact that ‖r(ε)‖/ε → 0 and o_i(ε, u*)/ε → 0 as ε → 0, we get ⟨g_i′(u*), e⟩ = 0, i = m+1,…,s. Consequently, the set (18)

K_{m+1} = {e ∈ Eⁿ : ⟨g_i′(u*), e⟩ = 0, i = m+1,…,s}      (19)

is a closed convex cone. The cone dual to the cone (19) is, by Farkas' theorem, defined by the formula

K_{m+1}* = {c ∈ Eⁿ : c = −Σ_{i=m+1}^s λ_i g_i′(u*)}.      (20)
Finally, we define K₀ — the set of internal directions of the convex set U₀ at the point u* ∈ U ⊂ U₀. It is easy to make sure that if u* ∈ int U₀, then K₀ = Eⁿ, and consequently K₀* = {0}. If u* is a boundary point of U₀, then

K₀ = {e ∈ Eⁿ : e = γ(u − u*), u ∈ int U₀, γ > 0}      (21)

is an open convex cone, and the cone dual to it is

K₀* = {c₀ ∈ Eⁿ : ⟨c₀, u⟩ ≥ ⟨c₀, u*⟩ for all u ∈ U₀}.      (22)

The cone constructions are stated without strict proofs. Full details of these facts with proofs the reader can find in the book: Vasiliev F. P. Numerical Solution Methods of the Extreme Problems. Moscow: Nauka, 1980.
Hereinafter we denote the cones K_{b_i}, i = 1,…,m, by K_i, i = 1,…,m.
Lemma. If u* ∈ U is a point of minimum of the function J(u) on the set U, then necessarily the intersection of the convex cones

K_y ∩ K₀ ∩ K₁ ∩ … ∩ K_m ∩ K_{m+1} = Ø.      (23)

Proof. Let the point u* ∈ U* ⊂ U. We show that relation (23) is fulfilled. We suppose the opposite, i.e., the existence of a vector

e ∈ K_y ∩ K₀ ∩ K₁ ∩ … ∩ K_m ∩ K_{m+1}.

From the inclusion e ∈ K_y it follows that J(u* + εē) < J(u*) for all ε, 0 < ε ≤ ε_y, ‖ē − e‖ ≤ δ_y, and from e ∈ K_i, i = 0,…,m, it follows that u* + εē ∈ U_i for all ε, 0 < ε ≤ ε_i, ‖ē − e‖ ≤ δ_i, i = 0,…,m. Let the numbers be ε̄ = min(ε_y, ε₀, …, ε_m), δ̄ = min(δ_y, δ₀, …, δ_m). Then the inequality J(u* + εē) < J(u*) is true and the inclusion u* + εē ∈ ∩_{i=0}^m U_i holds for all ε, 0 < ε ≤ ε̄, and ‖ē − e‖ ≤ δ̄.
Since the vector e ∈ K_{m+1}, the point u(ε) = u* + εe + r(ε) ∈ U_{m+1} for all ε, 0 < ε ≤ ε_{m+1}, where r(0) = 0, ‖r(ε)‖/ε → 0 as ε → 0. We choose the vector ē = e + r(ε)/ε. If ε_{m+1} ≤ ε̄ and ε_{m+1} > 0 is a sufficiently small number, then the norm ‖ē − e‖ = ‖r(ε)‖/ε ≤ δ̄. Then the point

u* + εē = u* + εe + r(ε) ∈ U₀ ∩ U₁ ∩ … ∩ U_m ∩ U_{m+1} = U.

Finally, the point u* + εē ∈ U and J(u* + εē) < J(u*). This is impossible, since the point u* ∈ U* ⊂ U is a solution of problem (1), (2). The lemma is proved.
Before passing to the proof of the theorem, we note the following:
1) If J′(u*) = 0, then with λ₀* = 1, λ₁* = λ₂* = … = λ_s* = 0 all conditions of Theorem 1 are fulfilled. In fact, the norm ‖λ*‖ = 1 ≠ 0, and the scalar product

⟨L_u(u*, λ*), u − u*⟩ = ⟨λ₀* J′(u*), u − u*⟩ = 0,
∀u ∈ U₀, λ_i* g_i(u*) = 0, i = 1,…,s.

2) If g_i(u*) = 0 and g_i′(u*) = 0 for a certain i, 1 ≤ i ≤ m, then for the values λ_i* = 1, λ_j* = 0, j = 0,…,s, j ≠ i, all conditions of Theorem 1 are also fulfilled. In fact, ‖λ*‖ = 1, ⟨L_u(u*, λ*), u − u*⟩ = ⟨λ_i* g_i′(u*), u − u*⟩ = 0, ∀u ∈ U₀, λ_i* g_i(u*) = 0, i = 1,…,s.
3) Finally, if the vectors g_i′(u*), i = m+1,…,s, are linearly dependent, the conditions (6)–(8) of Theorem 1 also hold. In fact, in this case there exist numbers λ_{m+1}*, λ_{m+2}*, …, λ_s*, not all equal to zero, such that

λ_{m+1}* g_{m+1}′(u*) + λ_{m+2}* g_{m+2}′(u*) + … + λ_s* g_s′(u*) = 0.

We take λ₀* = λ₁* = … = λ_m* = 0. Then the norm ‖λ*‖ ≠ 0,

⟨L_u(u*, λ*), u − u*⟩ = ⟨Σ_{i=m+1}^s λ_i* g_i′(u*), u − u*⟩ = 0,
∀u ∈ U₀, λ_i* g_i(u*) = 0, i = 1,…,s,

since g_i(u*) = 0, i = m+1,…,s.
From items 1)–3) it follows that Theorem 1 needs to be proved only for the case when J′(u*) ≠ 0, g_i′(u*) ≠ 0 for all active i, 1 ≤ i ≤ m, and the vectors g_i′(u*), i = m+1,…,s, are linearly independent.
Proof of Theorem 1. By the hypothesis of the theorem the point u* ∈ U*. Then, as follows from the proved lemma, the intersection of the convex cones

K_y ∩ K₀ ∩ K₁ ∩ … ∩ K_m ∩ K_{m+1} = Ø      (24)

at the point u* ∈ U*, where all the cones, except K_{m+1}, are open. We notice that in the case J′(u*) ≠ 0; g_i′(u*) ≠ 0, i = 1,…,m; g_i′(u*), i = m+1,…,s, linearly independent, all the cones K_y, K₀, …, K_{m+1} are nonempty [refer to formulas (12), (16), (19), (21)]. Then, by the Dubovitsky–Milyutin theorem, for relation (24) to hold it is necessary and sufficient that there exist vectors c_y ∈ K_y*, c₀ ∈ K₀*, c₁ ∈ K₁*, …, c_{m+1} ∈ K_{m+1}*, not all equal to zero, such that

c_y + c₀ + c₁ + … + c_{m+1} = 0.      (25)

As follows from formulas (13), (17), (20), the vectors

c_y = −λ₀* J′(u*), λ₀* ≥ 0;  c_i = −λ_i* g_i′(u*), λ_i* ≥ 0, i = 1,…,m;
c_{m+1} = −Σ_{i=m+1}^s λ_i* g_i′(u*).

From equality (25) we have c₀ = −c_y − c₁ − … − c_{m+1} = λ₀* J′(u*) + Σ_{i=1}^s λ_i* g_i′(u*). Since the cone K₀* is defined by formula (22), it follows that

⟨c₀, u − u*⟩ = ⟨λ₀* J′(u*) + Σ_{i=1}^s λ_i* g_i′(u*), u − u*⟩ =
= ⟨L_u(u*, λ*), u − u*⟩ ≥ 0, ∀u ∈ U₀.

Hence follows the correctness of relation (7). The condition (6) follows from the fact that not all of c_y, c₀, c₁, …, c_{m+1} are equal to zero. We notice that if g_i(u*) < 0 for a certain i from 1 ≤ i ≤ m, then the cone K_i = Eⁿ, and consequently K_i* = {0}. This means that c_i = −λ_i* g_i′(u*) = 0 with g_i′(u*) ≠ 0; hence it follows that λ_i* = 0. Thereby the products λ_i* g_i(u*) = 0, i = 1,…,s, i.e., the condition (8) holds. The theorem is proved.

Lecture 13
SOLUTION ALGORITHM OF THE
NONLINEAR PROGRAMMING PROBLEM

We show the sequence of solution of the nonlinear programming problem of the following type:

J(u) → inf,      (1)

u ∈ U = {u ∈ Eⁿ : u ∈ U₀, g_i(u) ≤ 0, i = 1,…,m;
g_i(u) = 0, i = m+1,…,s},      (2)

where the functions J(u) ∈ C¹(U₀¹), g_i(u) ∈ C¹(U₀¹), i = 1,…,s, and U₀¹ is an open set containing the convex set U₀ from Eⁿ, in particular U₀¹ = Eⁿ, on the basis of relations (6)–(8) from Theorem 1 of the previous lecture. We notice that these relations are true not only for the points u* ∈ U*, but also for the points of local minimum of the function J(u) on the set U. The question arises: under which conditions is problem (1), (2) nondegenerate, and when is the point u* ∈ U a point of local minimum of J(u) on U?
1°. It is necessary to make sure that the set U* = {u* ∈ Eⁿ : u* ∈ U, J(u*) = min_{u∈U} J(u)} ≠ Ø, for which to use Theorems 1–3 (Lecture 2).
2°. To form the generalized Lagrange function for problem (1), (2):

L(u, λ) = λ₀ J(u) + Σ_{i=1}^s λ_i g_i(u), u ∈ U₀;
λ = (λ₀, λ₁, …, λ_s) ∈ Λ₀ = {λ ∈ E^{s+1} : λ₀ ≥ 0, λ₁ ≥ 0, …, λ_m ≥ 0}.

3°. To find the points u* = (u₁*, …, u_n*) ∈ U, λ* = (λ₀*, …, λ_s*) ∈ Λ₀ from the following conditions:

‖λ*‖ = ν, λ₀* ≥ 0, λ₁* ≥ 0, …, λ_m* ≥ 0,      (3)

⟨L_u(u*, λ*), u − u*⟩ =
= ⟨λ₀* J′(u*) + Σ_{i=1}^s λ_i* g_i′(u*), u − u*⟩ ≥ 0, ∀u ∈ U₀,      (4)

λ_i* g_i(u*) = 0, i = 1,…,s,      (5)

where ν > 0 is a number, in particular ν = 1.
a) If the point u* ∈ int U₀ or U₀ = Eⁿ, then condition (4) can be replaced by

L_u(u*, λ*) = λ₀* J′(u*) + Σ_{i=1}^s λ_i* g_i′(u*) = 0.      (6)

In this case we have a system of n + 1 + s algebraic equations (3), (5), (6) for determining the n + 1 + s unknowns u₁*, …, u_n*; λ₀*, …, λ_s*.
b) If after the solution of the system of algebraic equations (3), (5), (6) [or of (3)–(5)] it turns out that the value λ₀* > 0, then problem (1), (2) is nondegenerate. The condition (3) in it can be replaced by the simpler condition λ₀* = 1, λ₁* ≥ 0, λ₂* ≥ 0, …, λ_m* ≥ 0. If in the nondegenerate problem the pair (u*, λ_s*), λ_s* = (λ₁*, …, λ_s*), forms a saddle point of the Lagrange function

L(u, λ) = J(u) + Σ_{i=1}^s λ_i g_i(u), u ∈ U₀,
λ ∈ Λ₀ = {λ ∈ E^s : λ₁ ≥ 0, …, λ_m ≥ 0},

then u* ∈ U is a point of global minimum.


4°. We consider the following problem of nonlinear programming:

J(u) → inf,      (7)

u ∈ U = {u ∈ Eⁿ : g_i(u) = 0, g_i(u) ∈ C¹(Eⁿ), i = 1,…,m}.      (8)

The problem (7), (8) is a particular event of problem (1), (2), with the set U₀ = Eⁿ. A point u* ∈ U in the problem (7), (8) is named a normal point of minimum if the vectors g_i′(u*), i = 1,…,m, are linearly independent. We notice that if u* ∈ U is a normal point, then problem (7), (8) is nondegenerate. In fact, for problem (7), (8) the equality

λ₀* J′(u*) + Σ_{i=1}^m λ_i* g_i′(u*) = 0      (9)

holds. If λ₀* = 0, then on the strength of the linear independence of the vectors g_i′(u*), i = 1,…,m, we would get λ_i* = 0, i = 1,…,m. Then the vector λ* = 0, which contradicts (3).
Let u* ∈ U be a normal point for problem (7), (8). Then it is possible to take λ₀* = 1, and the Lagrange function has the form

L(u, λ) = J(u) + Σ_{i=1}^m λ_i g_i(u), u ∈ Eⁿ, λ = (λ₁, …, λ_m) ∈ E^m.

Theorem. Let the functions J(u), g_i(u), i = 1,…,m, be defined and twice continuously differentiable in a neighborhood of the point u* ∈ U. In order for the point u* ∈ U to be a point of local minimum of J(u) on the set U, i.e., J(u*) ≤ J(u), ∀u ∈ o(u*, ε) ∩ U, it is sufficient that the quadratic form ⟨(∂²L(u*, λ*)/∂u²) y, y⟩ be positive definite on the hyperplane

(∂g(u*)/∂u) y = 0,  i.e.  Σ_{j=1}^n (∂g_i(u*)/∂u_j) y_j = 0, i = 1,…,m,      (10)

where ∂g(u*)/∂u is the m × n Jacobian matrix with rows (∂g_i(u*)/∂u₁, …, ∂g_i(u*)/∂u_n), i = 1,…,m.
Proof. Let the quadratic form

⟨L_uu(u*, λ*) y, y⟩ = Σ_{i=1}^n Σ_{j=1}^n (∂²L(u*, λ*)/∂u_i ∂u_j) y_i y_j > 0, y ≠ 0,

on the hyperplane (10), where L_uu(u*, λ*) = ∂²L(u*, λ*)/∂u². We show that u* ∈ U is a point of local minimum of the function J(u) on the set U. We notice that u* is a normal point and it is determined from the conditions L_u(u*, λ*) = 0, g_i(u*) = 0, i = 1,…,m. Finally, the pair (u*, λ*) is known.
Since the functions

J(u) ∈ C²(o(u*, ε)), g_i(u) ∈ C²(o(u*, ε)),

the continuous function ⟨L_uu(u*, λ*) y, y⟩ of the variable y reaches its lower bound on the compact set V = {y ∈ Eⁿ : ‖y‖ = 1, (∂g(u*)/∂u) y = 0}. Let the number

μ = min{⟨L_uu(u*, λ*) y, y⟩ : y ∈ V} > 0.

We enter the set

A = {u ∈ Eⁿ : u = u* + εy, ‖y‖ = 1, (∂g(u*)/∂u) y = 0},      (11)

where ε > 0 is a sufficiently small number. If the point u ∈ A, then the quadratic form

⟨L_uu(u*, λ*)(u − u*), u − u*⟩ = ε² ⟨L_uu(u*, λ*) y, y⟩ ≥ ε² μ,      (12)

on the strength of relation (11). For the points u ∈ A the difference

L(u, λ*) − L(u*, λ*) = ⟨L_u(u*, λ*), u − u*⟩ +
+ (1/2)⟨L_uu(u*, λ*)(u − u*), u − u*⟩ + o(‖u − u*‖²) ≥
≥ ε² μ/2 + o(ε²) = ε² (μ/2 + o(ε²)/ε²) ≥ ε² μ/4, 0 < ε ≤ ε̄,      (13)

since L_u(u*, λ*) = 0, inequality (12) is true, and o(ε²)/ε² → 0 as ε → 0.

We enter the set

B = {u ∈ Eⁿ : ‖u − u*‖ ≤ ε, g_i(u) = 0, i = 1,…,m} ⊂ U.      (14)

Since the hyperplane (10) is tangent to the manifold g_i(u) = 0, i = 1,…,m, at the point u*, for each point ũ ∈ B a point u ∈ A can be found such that the norm ‖ũ − u‖ ≤ Kε², K = const > 0. In fact, if u ∈ A, then

g_i(u) = g_i(u*) + ⟨g_i′(u*), u − u*⟩ + (1/2)⟨g_iuu(u*)(u − u*), u − u*⟩ +
+ o_i(‖u − u*‖²) = (ε²/2)⟨g_iuu(u*) y, y⟩ + o_i(ε²), ‖y‖ = 1, i = 1,…,m;      (15)

and if ũ ∈ B, then

0 = g_i(ũ) = g_i(u*) + ⟨g_i′(u*), ũ − u*⟩ +
+ (1/2)⟨g_iuu(u*)(ũ − u*), ũ − u*⟩ + o_i(‖ũ − u*‖²), i = 1,…,m.      (16)

From (11), (14) and (15), (16) it follows that the norm ‖ũ − u‖ ≤ Kε², ũ ∈ B, u ∈ A.
Since the function L_u(u, λ*) is continuously differentiable with respect to u in a neighborhood of the point u*, and the derivative L_u(u*, λ*) = 0, the difference L_u(u, λ*) = L_u(u, λ*) − L_u(u*, λ*) = L_uu(u*, λ*)(u − u*) + o(‖u − u*‖) in a neighborhood of the point u*. This implies, in particular, that for u ∈ A the norm ‖L_u(u, λ*)‖ ≤ K₁ ε, if 0 < ε ≤ ε₁, where ε₁ > 0 is a sufficiently small number.
For the points u ∈ A, ũ ∈ B the difference

L(ũ, λ*) − L(u, λ*) = ⟨L_u(u, λ*), ũ − u⟩ +
+ (1/2)⟨L_uu(u, λ*)(ũ − u), ũ − u⟩ + o(‖ũ − u‖²),

consequently |L(ũ, λ*) − L(u, λ*)| ≤ K K₁ ε³ + o₁(ε³) ≤ K₂ ε³ for sufficiently small ε₁ > 0, 0 < ε ≤ ε₁. Then the difference (u ∈ A, ũ ∈ B)

L(ũ, λ*) − L(u*, λ*) = [L(u, λ*) − L(u*, λ*)] − [L(u, λ*) − L(ũ, λ*)] ≥
≥ ε² μ/4 − K₂ ε³ > 0, 0 < ε ≤ ε̄₁,      (17)

on the strength of relations (13). Since the values L(ũ, λ*) = J(ũ), L(u*, λ*) = J(u*) (because g_i(ũ) = g_i(u*) = 0, i = 1,…,m), from (17) we have J(u*) < J(ũ), ∀ũ ∈ B. Consequently, u* ∈ U is a point of local minimum of the function J(u) on the set U. The theorem is proved.
Example. Let the function be J(u) = −u₁² + √5 u₁u₂ + u₂², and the set U = {u = (u₁, u₂) ∈ E² : u₁² + u₂² = 1}. It is required to find the minimum of the function J(u) on the set U. For this example the set U₀ = E², the functions J(u) ∈ C²(E²), g(u) = u₁² + u₂² − 1 ∈ C²(E²), and the set U* ≠ Ø, since U ⊂ E² is a compact set. The necessary optimality conditions:

L_u(u*, λ*) = 0:  −2u₁* + √5 u₂* + 2λ* u₁* = 0,
√5 u₁* + 2u₂* + 2λ* u₂* = 0;
g(u*) = 0:  (u₁*)² + (u₂*)² = 1,

where the function L(u, λ) = J(u) + λ g(u), u ∈ E², λ ∈ E¹. Hence we find the points u₁*, u₂*, λ*:

1) λ* = 3/2, u₁* = −√(5/6), u₂* = √(1/6);
2) λ* = 3/2, u₁* = √(5/6), u₂* = −√(1/6);
3) λ* = −3/2, u₁* = √(1/6), u₂* = √(5/6);
4) λ* = −3/2, u₁* = −√(1/6), u₂* = −√(5/6).

It is necessary to find at which of the points (λ*, u₁*, u₂*) from 1)–4) the minimum of J(u) on U is reached. First of all we select the points where a local minimum of J(u) on U is reached. We note that the problem is nondegenerate, and the matrix L_uu(u*, λ*) and the vector g_u(u*) are equal to

L_uu(u*, λ*) = ( −2 + 2λ*   √5
                 √5    2 + 2λ* ),  g_u(u*) = g′(u*) = (2u₁*, 2u₂*).

For the first point λ* = 3/2, u₁* = −√(5/6), u₂* = √(1/6), the quadratic form is ⟨L_uu(u*, λ*) y, y⟩ = y₁² + 2√5 y₁y₂ + 5y₂², and the hyperplane equation is −2√(5/6) y₁ + 2√(1/6) y₂ = 0. Hence we have y₂ = √5 y₁. Substituting the value y₂ = √5 y₁ into the quadratic form, we get ⟨L_uu(u*, λ*) y, y⟩ = 36 y₁² > 0, y₁ ≠ 0. Consequently, (λ* = 3/2, u₁* = −√(5/6), u₂* = √(1/6)) is a point of local minimum of J(u) on U. In a similar way it is possible to make sure that (λ* = 3/2, u₁* = √(5/6), u₂* = −√(1/6)) is a point of local minimum, while the points 3) and 4) are not points of local minimum of J(u) on U. In order to find the minimum of J(u) on U it is necessary to calculate the values of the function J(u) at the points 1) and 2). It can be shown that J(−√(5/6), √(1/6)) = J(√(5/6), −√(1/6)) = −3/2. Consequently, at the points 1) and 2) the global minimum of J(u) on U is reached.
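The worked example can be checked numerically: grid the unit circle, confirm J* = −3/2, and verify stationarity at point 1). A sketch (ours):

```python
import math

# Numerical check (ours) of the worked example: J(u) = -u1^2 + sqrt(5)*u1*u2 + u2^2
# on the circle u1^2 + u2^2 = 1.

def J(u1, u2):
    return -u1 * u1 + math.sqrt(5.0) * u1 * u2 + u2 * u2

# Grid the circle u = (cos t, sin t): the global minimum must be J* = -3/2.
best = min(J(math.cos(2 * math.pi * k / 2000), math.sin(2 * math.pi * k / 2000))
           for k in range(2000))
assert abs(best + 1.5) < 1e-4

# Point 1): lam* = 3/2, u* = (-sqrt(5/6), sqrt(1/6)) satisfies L_u(u*, lam*) = 0.
u1, u2, lam = -math.sqrt(5.0 / 6.0), math.sqrt(1.0 / 6.0), 1.5
assert abs(-2 * u1 + math.sqrt(5.0) * u2 + 2 * lam * u1) < 1e-12
assert abs(math.sqrt(5.0) * u1 + 2 * u2 + 2 * lam * u2) < 1e-12
assert abs(J(u1, u2) + 1.5) < 1e-12
```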
5°. Now we consider problem (1), (2) in the case U₀ = Eⁿ, J(u) ∈ C²(Eⁿ), g_i(u) ∈ C²(Eⁿ), i = 1,…,s. We suppose that problem (1), (2) is nondegenerate and the points u*, λ₀* > 0, λ* have been determined by the algorithm 1°–3°. We select among the constraints g_i(u) ≤ 0, i = 1,…,m, g_i(u) = 0, i = m+1,…,s, those for which g_i(u*) = 0, i ∈ I, where the index set

I = {i : m+1 ≤ i ≤ s} ∪ {i : 1 ≤ i ≤ m, g_i(u*) = 0}.

If problem (1), (2) is nondegenerate, the vectors g_i′(u*), i ∈ I, are linearly independent. According to the theorem specified above in item 4°, the point u* ∈ U is a point of local minimum of the function J(u) on U if the quadratic form ⟨L_uu(u*, λ*) y, y⟩ > 0, y ≠ 0, on the hyperplane

⟨g_i′(u*), y⟩ = 0, i ∈ I.

Lecture 14
DUALITY THEORY

On the basis of the Lagrange function, the main and the dual problems are formulated and the relationship between their solutions is established. The dual problems for the main, general and canonical forms of the linear programming problem are derived. Since the dual problem is a convex programming problem regardless of whether the main problem is convex or not, in many cases it is reasonable to study the dual problem and, using the relationship between their solutions, return to the original problem. This technique is often used in solving linear programming problems.
We consider the nondegenerate nonlinear programming problem in the following form:

    J(u) → inf,  (1)

    u ∈ U = {u ∈ Eⁿ | u ∈ U0, g_i(u) ≤ 0, i = 1,…,m;
             g_i(u) = 0, i = m+1,…,s}.  (2)

For the problem (1), (2) the Lagrange function is

    L(u, λ) = J(u) + Σ_{i=1}^{s} λ_i g_i(u),  u ∈ U0,
    λ = (λ_1,…,λ_s) ∈ Λ0 = {λ ∈ E^s | λ_1 ≥ 0,…,λ_m ≥ 0}.  (3)
The main task. We introduce the function [see formula (3)]:

    X(u) = sup_{λ∈Λ0} L(u, λ),  u ∈ U0.  (4)

We show that the function

    X(u) = { J(u)  for all u ∈ U,
           { +∞    for all u ∈ U0 \ U.  (5)

In fact, if u ∈ U, then g_i(u) ≤ 0, i = 1,…,m, g_i(u) = 0, i = m+1,…,s; consequently,

    X(u) = sup_{λ∈Λ0} [ J(u) + Σ_{i=1}^{s} λ_i g_i(u) ]
         = sup_{λ∈Λ0} [ J(u) + Σ_{i=1}^{m} λ_i g_i(u) ] = J(u),

since λ_1 ≥ 0,…,λ_m ≥ 0 implies Σ_{i=1}^{m} λ_i g_i(u) ≤ 0 for all u ∈ U, and the supremum is attained at λ = 0 ∈ Λ0. If u ∈ U0 \ U, then either g_i(u) > 0 for a certain i, 1 ≤ i ≤ m, or g_j(u) ≠ 0 for a certain j, m+1 ≤ j ≤ s. In both cases, by choosing a sufficiently large λ_i > 0, or λ_j = k g_j(u) with k > 0 a sufficiently large number, the value X(u) can be made arbitrarily large.

Now, by virtue of relations (4), (5), the original problem (1), (2) can be written in the equivalent form:

    X(u) → inf,  u ∈ U0.  (6)

We notice that inf_{u∈U0} X(u) = inf_{u∈U} J(u) = J*; consequently, if the set U* ≠ ∅, then
    U* = {u* ∈ U | min_{u∈U} J(u) = J(u*) = J*}
       = {u* ∈ U0 | X(u*) = J(u*) = J* = min_{u∈U0} X(u)}.

The original problem (1), (2), or the equivalent problem (6), is called the main task.
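The behaviour of X(u) described by formulas (4), (5) can be checked numerically. The sketch below uses a toy problem of our own choosing (not from the text): J(u) = u1² + u2² with one inequality constraint g(u) = 1 − u1 − u2 ≤ 0, and approximates the supremum over λ ≥ 0 on a finite grid.

```python
# Illustration of X(u) = sup_{lam >= 0} L(u, lam) from formulas (4)-(5).
# Toy problem (our own choice, not taken from the text):
#   J(u) = u1^2 + u2^2, one constraint g(u) = 1 - u1 - u2 <= 0.

def J(u):
    return u[0] ** 2 + u[1] ** 2

def g(u):
    return 1.0 - u[0] - u[1]

def L(u, lam):
    return J(u) + lam * g(u)

def X(u, lam_grid):
    # approximate the supremum over lam >= 0 on a finite grid
    return max(L(u, lam) for lam in lam_grid)

lam_grid = [k / 10 for k in range(1001)]   # lam in [0, 100]

u_feas = (0.7, 0.6)      # g(u) = -0.3 <= 0, so u is feasible
u_infeas = (0.2, 0.1)    # g(u) = 0.7 > 0, so u is infeasible

# On U the supremum is attained at lam = 0, hence X(u) = J(u):
assert abs(X(u_feas, lam_grid) - J(u_feas)) < 1e-12

# Off U the value L(u, lam) = J(u) + lam * g(u) grows linearly in lam,
# so the true supremum is +infinity; on the grid it sits at the largest lam:
assert X(u_infeas, lam_grid) == L(u_infeas, 100.0)
assert X(u_infeas, lam_grid) > 50
print(X(u_feas, lam_grid), X(u_infeas, lam_grid))
```

Enlarging the grid of multipliers makes X at the infeasible point grow without bound, which is exactly how the constraints of (2) are absorbed into the unconstrained problem (6).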
Dual problem. On the basis of the Lagrange function (3) we introduce the function

    ψ(λ) = inf_{u∈U0} L(u, λ),  λ ∈ Λ0.  (7)

The optimization problem

    ψ(λ) → sup,  λ ∈ Λ0,  (8)

is called the dual problem to the problem (1), (2), or to the equivalent problem (6), and the Lagrange multipliers λ = (λ_1,…,λ_s) ∈ Λ0 are the dual variables with respect to the variables u = (u_1,…,u_n) ∈ U0. We denote

    Λ* = {λ* ∈ E^s | λ* ∈ Λ0, ψ(λ*) = max_{λ∈Λ0} ψ(λ)}.

We notice that if Λ* ≠ ∅, then ψ(λ*) = sup_{λ∈Λ0} ψ(λ) = ψ*.

Lemma. The values J* = inf_{u∈U0} X(u), ψ* = sup_{λ∈Λ0} ψ(λ) of the main (6) and dual (8) problems, respectively, satisfy the inequalities

    ψ(λ) ≤ ψ* ≤ J* ≤ X(u),  ∀u ∈ U0, ∀λ ∈ Λ0.  (9)

Proof. As follows from formula (7), the function ψ(λ) = inf_{u∈U0} L(u, λ) ≤ L(u, λ), u ∈ U0, λ ∈ Λ0. Hence we have

    ψ* = sup_{λ∈Λ0} ψ(λ) ≤ sup_{λ∈Λ0} L(u, λ) = X(u),  u ∈ U0,  (10)

by virtue of relations (4). Passing in relations (10) to the lower bound over u, we get ψ* ≤ inf_{u∈U0} X(u) = J*. From this and from the definition of the lower bound the inequalities (9) follow. The lemma is proved.
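The inequalities (9) can be seen on a small example where ψ(λ) is available in closed form. The problem below is our own illustration, not one from the text: J(u) = u² on U0 = E¹ with the single constraint g(u) = 1 − u ≤ 0, so that U = [1, +∞) and J* = 1 at u* = 1.

```python
# Numerical check of inequality (9): psi(lam) <= psi* <= J* <= X(u).
# Toy problem (our own): J(u) = u^2 on U0 = R, g(u) = 1 - u <= 0.

def psi(lam):
    # psi(lam) = inf_u [u^2 + lam * (1 - u)]; the infimum over u in R
    # is attained at u = lam / 2, giving psi(lam) = lam - lam^2 / 4.
    return lam - lam ** 2 / 4.0

J_star = 1.0   # minimum of u^2 over u >= 1, attained at u* = 1

# weak duality: psi(lam) <= J* for every lam >= 0
for k in range(201):
    lam = k / 10.0
    assert psi(lam) <= J_star + 1e-12

# in this convex problem strong duality also holds:
# sup psi is attained at lam* = 2 and equals J*
assert abs(psi(2.0) - J_star) < 1e-12
print(psi(2.0))
```

Here the supremum of ψ is actually attained and equals J*, which is the situation described by theorem 1 below; for nonconvex problems only the inequality ψ* ≤ J* of the lemma is guaranteed.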
Theorem 1. In order that the relations

    U* ≠ ∅,  Λ* ≠ ∅,  X(u*) = J* = ψ* = ψ(λ*)  (11)

hold, it is necessary and sufficient that the Lagrange function (3) have a saddle point on the set U0 × Λ0. The set of saddle points of the function L(u, λ) on U0 × Λ0 coincides with the set U* × Λ*.

Proof. Necessity. Let the relations (11) hold for the points u* ∈ U*, λ* ∈ Λ*. We show that the pair (u*, λ*) is a saddle point of the Lagrange function (3) on the set U0 × Λ0. Since

    ψ* = ψ(λ*) = inf_{u∈U0} L(u, λ*) ≤ L(u*, λ*) ≤ sup_{λ∈Λ0} L(u*, λ) = X(u*) = J*,

then by virtue of relations (11) we have

    L(u*, λ*) = inf_{u∈U0} L(u, λ*) = sup_{λ∈Λ0} L(u*, λ),
    u* ∈ U*, λ* ∈ Λ*.  (12)

From equality (12) it follows that

    L(u*, λ) ≤ L(u*, λ*) ≤ L(u, λ*),  ∀u ∈ U0, ∀λ ∈ Λ0.  (13)

This means that the pair (u*, λ*) ∈ U* × Λ* is a saddle point. Moreover, the set U* × Λ* belongs to the set of saddle points of the Lagrange function, since u* ∈ U*, λ* ∈ Λ* are arbitrarily taken points of the sets U*, Λ*, respectively. Necessity is proved.
Sufficiency. Let the pair (u*, λ*) ∈ U0 × Λ0 be a saddle point of the Lagrange function (3). We show that the relations (11) hold. As follows from the definition of the saddle point, which has the form (13), the inequality L(u*, λ) ≤ L(u*, λ*), λ ∈ Λ0, holds. Consequently,

    X(u*) = sup_{λ∈Λ0} L(u*, λ) ≤ L(u*, λ*).  (14)

Similarly, from the right inequality in (13) we have

    ψ(λ*) = inf_{u∈U0} L(u, λ*) ≥ L(u*, λ*).  (15)

From inequalities (14), (15), taking relation (9) into account, we get

    L(u*, λ*) ≤ ψ(λ*) ≤ ψ* ≤ J* ≤ X(u*) ≤ L(u*, λ*).

Hence ψ(λ*) = ψ* = J* = X(u*). Consequently, the sets U* ≠ ∅, Λ* ≠ ∅, and moreover the set of saddle points of the function (3) belongs to the set U* × Λ*. The theorem is proved.
The following conclusions can be made on the basis of the lemma and theorem 1:

1°. The following four statements are equivalent: a) (u*, λ*) ∈ U0 × Λ0 is a saddle point of the Lagrange function (3) on the set U0 × Λ0; b) the relations (11) hold; c) there exist points u* ∈ U0, λ* ∈ Λ0 such that X(u*) = ψ(λ*); d) the equality

    max_{λ∈Λ0} inf_{u∈U0} L(u, λ) = min_{u∈U0} sup_{λ∈Λ0} L(u, λ)

holds.

2°. If (u*, λ*), (a*, b*) ∈ U0 × Λ0 are saddle points of the Lagrange function (3) on U0 × Λ0, then (u*, b*), (a*, λ*) are also saddle points of the function (3) on U0 × Λ0, and moreover L(u*, b*) = L(a*, λ*) = L(u*, λ*) = L(a*, b*) = ψ(λ*) = X(u*) = J* = ψ*. However, the converse statement, i.e. that L(u*, λ*) = L(a, b) implies that (a, b) ∈ U0 × Λ0 is a saddle point, is in general false.

3°. The dual problem (8) can be written as

    −ψ(λ) → inf,  λ ∈ Λ0.  (16)

Since the function L(u, λ) is linear in λ on the convex set Λ0, the optimization problem (16) is a convex programming problem: −ψ(λ) is convex on Λ0 regardless of whether the main task (1), (2) is convex or not. We notice that in general the dual of the dual problem does not coincide with the original, i.e. with the main, problem. Such a coincidence takes place only for linear programming problems.
We consider linear programming problems as an application of duality theory.

The main task of linear programming has the form

    J(u) = ⟨c, u⟩ → inf,
    u ∈ U = {u ∈ Eⁿ | u ≥ 0, Au − b ≥ 0},  (17)

where c ∈ Eⁿ, b ∈ E^m are vectors; A is a matrix of order m × n; the set

    U0 = {u ∈ Eⁿ | u_1 ≥ 0, …, u_n ≥ 0}.

The Lagrange function for the task (17) is written as

    L(u, λ) = ⟨c, u⟩ − ⟨λ, Au − b⟩ = ⟨c − A*λ, u⟩ + ⟨b, λ⟩,
    u ∈ U0, λ = (λ_1,…,λ_m) ∈ Λ0 = {λ ∈ E^m | λ_1 ≥ 0,…,λ_m ≥ 0}.  (18)

As follows from formulas (17), (18), the function

    ψ(λ) = inf_{u∈U0} L(u, λ) = { ⟨b, λ⟩,  if c − A*λ ≥ 0,
                                { −∞,      if (c − A*λ)_i < 0 for some i,  λ ∈ Λ0.

We notice that for c − A*λ ≥ 0 the lower bound, equal to ⟨b, λ⟩, is attained at u = 0 ∈ U0. If (c − A*λ)_i < 0, then one can choose u_i → +∞ with all u_j = 0, j = 1,…,n, j ≠ i; then ψ(λ) = −∞, λ ∈ Λ0. Finally, the dual task to the task (17) has the form

    −ψ(λ) = −⟨b, λ⟩ → inf,
    λ ∈ {λ ∈ E^m | λ ≥ 0, c − A*λ ≥ 0}.  (19)

The dual task to the task (19) coincides with the original task (17).
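Weak duality for the pair (17), (19) follows from the chain ⟨c, u⟩ ≥ ⟨A*λ, u⟩ = ⟨λ, Au⟩ ≥ ⟨b, λ⟩, valid for any primal-feasible u and dual-feasible λ. A minimal sketch with data of our own choosing (not from the text):

```python
# Weak duality for the primal/dual pair (17), (19): for any u >= 0 with
# Au - b >= 0 and any lam >= 0 with c - A^T lam >= 0 one has
#   <c, u> >= <A^T lam, u> = <lam, Au> >= <b, lam>.
# The data below are our own small illustration.

def matvec(M, x):
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

A = [[1.0, 2.0],
     [3.0, 1.0]]
b = [2.0, 3.0]
c = [4.0, 3.0]

u = [1.0, 1.0]     # primal feasible: u >= 0 and Au - b = (1, 1) >= 0
lam = [1.0, 1.0]   # dual feasible: lam >= 0 and c - A^T lam = (0, 0) >= 0

At = [list(col) for col in zip(*A)]
assert all(v >= 0 for v in u)
assert all(s >= 0 for s in [m - bi for m, bi in zip(matvec(A, u), b)])
assert all(l >= 0 for l in lam)
assert all(v >= 0 for v in [ci - ti for ci, ti in zip(c, matvec(At, lam))])

# weak duality: primal objective dominates dual objective
assert dot(c, u) >= dot(b, lam)
print(dot(c, u), dot(b, lam))
```

At an optimal pair the two objective values coincide, which is the linear programming case of the relations (11).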
By introducing the additional variables u_{n+i} ≥ 0, i = 1,…,m, the optimization problem (17) can be written in the following form:

    J(u) → inf,
    (Au)_i − b_i − u_{n+i} = 0, u ≥ 0, u_{n+i} ≥ 0, i = 1,…,m.  (20)
The general task of linear programming has the form

    J(u) = ⟨c, u⟩ → inf,
    u ∈ U = {u ∈ Eⁿ | u_j ≥ 0, j ∈ I, Au − b ≥ 0, Āu − b̄ = 0},  (21)

where c ∈ Eⁿ, b ∈ E^m, b̄ ∈ E^s are vectors; A, Ā are matrices of orders m × n and s × n respectively; the index set I ⊂ {1, 2, …, n}. The set

    U0 = {u = (u_1,…,u_n) ∈ Eⁿ | u_j ≥ 0, j ∈ I}.

The Lagrange function for the task (21) is written as

    L(u, λ) = ⟨c, u⟩ − ⟨μ, Au − b⟩ − ⟨ν, Āu − b̄⟩
            = ⟨c − A*μ − Ā*ν, u⟩ + ⟨b, μ⟩ + ⟨b̄, ν⟩,
    u ∈ U0, λ = (μ, ν) ∈ Λ0 = {λ = (μ, ν) ∈ E^m × E^s | μ ≥ 0}.

The function

    ψ(λ) = inf_{u∈U0} L(u, λ)
         = { ⟨b, μ⟩ + ⟨b̄, ν⟩,  if (c − A*μ − Ā*ν)_i ≥ 0, i ∈ I,
           {                    and (c − A*μ − Ā*ν)_j = 0, j ∉ I;
           { −∞                 otherwise.

The dual task to the task (21) is written as

    −ψ(λ) = −⟨b, μ⟩ − ⟨b̄, ν⟩ → inf;
    (c − A*μ − Ā*ν)_i ≥ 0, i ∈ I;  (c − A*μ − Ā*ν)_j = 0, j ∉ I;
    λ = (μ, ν) ∈ E^m × E^s, μ ≥ 0.  (22)

It can be shown that the dual task to the task (22) coincides with (21).

By introducing the additional variables u_{n+i} ≥ 0, i = 1,…,m, and the representations u_i = q_i − v_i, q_i ≥ 0, v_i ≥ 0, i ∉ I, the problem (21) can be written as

    J(u) = ⟨c, u⟩ → inf,  (Au)_i − b_i − u_{n+i} = 0,  Āu − b̄ = 0,
    u_j ≥ 0, j ∈ I,  u_i = q_i − v_i, q_i ≥ 0, v_i ≥ 0, i ∉ I.  (23)

The canonical task of linear programming has the form

    J(u) = ⟨c, u⟩ → inf,
    u ∈ U = {u ∈ Eⁿ | u ≥ 0, Au − b = 0},  (24)

where c ∈ Eⁿ, b ∈ E^s are vectors; A is a matrix of order s × n; the set

    U0 = {u ∈ Eⁿ | u = (u_1,…,u_n) ≥ 0}.

The Lagrange function for the task (24) is written as

    L(u, λ) = ⟨c, u⟩ − ⟨λ, Au − b⟩ = ⟨c − A*λ, u⟩ + ⟨b, λ⟩,
    u ∈ U0, λ ∈ Λ0 = E^s.

The function

    ψ(λ) = inf_{u∈U0} L(u, λ) = { ⟨b, λ⟩,  if c − A*λ ≥ 0,
                                { −∞,      if (c − A*λ)_i < 0 for some i.

Then the dual problem to the problem (24) has the form

    −ψ(λ) = −⟨b, λ⟩ → inf;  c − A*λ ≥ 0, λ ∈ E^s.  (25)

It is easy to verify that the dual task to the task (25) coincides with the task (24). Finally, we note that the main and the general tasks of linear programming are reduced, by introducing additional variables, to canonical tasks of linear programming [see formulas (20), (23)].
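The reduction (20) of the main task to canonical form amounts to appending slack variables: Au − b ≥ 0 becomes Au − s = b with s ≥ 0, i.e. the constraint matrix is extended to [A | −I]. A minimal sketch with data of our own choosing:

```python
# Reduction (20): replace Au - b >= 0 by Au - s = b with slacks s >= 0,
# i.e. pass to the augmented matrix [A | -I] and extended point (u, s).
# The data are our own small illustration, not from the text.

A = [[1.0, 2.0],
     [3.0, 1.0]]
b = [2.0, 3.0]
m = 2

# canonical constraint matrix A_can = [A, -I]
A_can = [row + [-1.0 if r == k else 0.0 for k in range(m)]
         for r, row in enumerate(A)]

u = [1.0, 1.0]                        # feasible for the main task (17)
slack = [sum(a * x for a, x in zip(row, u)) - bi
         for row, bi in zip(A, b)]    # s = Au - b >= 0
assert all(s >= 0 for s in slack)

w = u + slack                         # extended point (u, s) >= 0
Aw = [sum(a * x for a, x in zip(row, w)) for row in A_can]
assert Aw == b                        # canonical equality A_can w = b holds
print(A_can, w)
```

The extended cost vector is (c, 0), so the objective value is unchanged; free variables of the general task (21) are handled analogously by the split u_i = q_i − v_i of formula (23).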

Chapter III. LINEAR
PROGRAMMING

As shown above, the main and the general problems of linear programming are reduced to canonical problems of linear programming. It is therefore reasonable to develop a general solution method for the canonical linear programming problem. Such a general method is the simplex method. The simplex method for nondegenerate linear programming problems in canonical form is stated below.

Lectures 15, 16
STATEMENT OF THE PROBLEM.
SIMPLEX-METHOD

We consider the linear programming problem in the canonical form

    J(u) = ⟨c, u⟩ → inf,  u ∈ U = {u ∈ Eⁿ | u ≥ 0, Au − b = 0},  (1)

where c ∈ Eⁿ, b ∈ E^m are vectors; A is a matrix of order m × n. The matrix A = ‖a_{ij}‖, i = 1,…,m, j = 1,…,n, can be represented in the form

    A = (a^1, a^2, …, a^n),  a^j = (a_{1j}, a_{2j}, …, a_{mj})′,  j = 1,…,n.

The vectors a^j, j = 1,…,n, are called the condition vectors, and the vector b ∈ E^m the restriction vector. Now the equation Au = b can be written in the form a^1 u_1 + a^2 u_2 + … + a^n u_n = b. Since the set U0 = {u ∈ Eⁿ | u ≥ 0} and the affine set {u ∈ Eⁿ | Au = b} are both convex, their intersection U is convex, and (1) is a convex programming problem. We notice that if the set

    U* = {u* ∈ Eⁿ | u* ∈ U, J(u*) = min_{u∈U} ⟨c, u⟩} ≠ ∅,

then the Lagrange function for the problem (1) always has a saddle point, any point of local minimum is simultaneously a point of global minimum, and the necessary and sufficient condition of optimality is written as ⟨J′(u*), u − u*⟩ = ⟨c, u − u*⟩ ≥ 0, ∀u ∈ U. We suppose that the set U* ≠ ∅. It is necessary to find a point u* ∈ U* and the value J* = inf_{u∈U} J(u) = J(u*) = min_{u∈U} J(u).
Simplex method. For the first time, the solution of the problem (1) was considered on the simplex

    U = {u ∈ Eⁿ | u ≥ 0, Σ_{i=1}^{n} u_i = 1},

so the solution method of such linear programming problems was called the simplex method. Later the method was generalized to the case of the set U specified in the problem (1), although the initial name of the method was kept.

Definition 1. A point u ∈ U is called extreme (or angular) if it cannot be represented in the form u = αu¹ + (1−α)u², 0 < α < 1, u¹, u² ∈ U, u¹ ≠ u². From this definition it follows that an extreme point is not an interior point of any segment belonging to the set U.
Lemma 1. An extreme point u ∈ U has at most m positive coordinates.

Proof. Without loss of generality, in what follows we assume that the first k components, k ≤ m, of the extreme point are positive, since this can always be achieved by renumbering the variables.

We suppose the opposite, i.e. that the extreme point u ∈ U has m + 1 positive coordinates:

    u = (u_1 > 0, u_2 > 0, …, u_{m+1} > 0, 0, …, 0).

We compose the matrix A_1 = (a^1, a^2, …, a^{m+1}) of order m × (m+1) from the condition vectors corresponding to the positive coordinates of the extreme point. We consider the homogeneous linear equation A_1 z = 0 for the vector z ∈ E^{m+1}. Since A_1 has more columns than rows, this equation has a nonzero solution z̃ ≠ 0. We define the n-vector ũ = (z̃, 0) and consider two vectors: u¹ = u + εũ, u² = u − εũ, where ε > 0 is a sufficiently small number. We notice that u¹, u² ∈ U for 0 < ε ≤ ε_1, where ε_1 > 0 is sufficiently small. In fact, Au¹ = Au + εAũ = Au + εA_1 z̃ = Au = b, and u¹ = u + εũ ≥ 0 for sufficiently small ε > 0; similarly Au² = b, u² ≥ 0. Then the extreme point u = (1/2)u¹ + (1/2)u², α = 1/2, u¹ ≠ u². This contradicts the definition of an extreme point. The lemma is proved.
Lemma 2. The condition vectors corresponding to the positive coordinates of an extreme point are linearly independent.

Proof. Let u = (u_1 > 0, u_2 > 0, …, u_k > 0, 0, …, 0) ∈ U, k ≤ m, be an extreme point. We show that the vectors a^1, a^2, …, a^k, k ≤ m, are linearly independent. We suppose the opposite, i.e. that there exist numbers γ_1,…,γ_k, not all equal to zero, such that γ_1 a^1 + γ_2 a^2 + … + γ_k a^k = 0 (the vectors a^1,…,a^k are linearly dependent). From the inclusion u ∈ U it follows that a^1 u_1 + a^2 u_2 + … + a^k u_k = b, u_i > 0, i = 1,…,k. We multiply the first equality by ε > 0 and add it to (subtract it from) the second equality; as a result we get

    a^1(u_1 ± εγ_1) + a^2(u_2 ± εγ_2) + … + a^k(u_k ± εγ_k) = b.

We denote u¹ = (u_1 + εγ_1,…, u_k + εγ_k, 0,…,0) ∈ Eⁿ, u² = (u_1 − εγ_1,…, u_k − εγ_k, 0,…,0) ∈ Eⁿ. There is a number ε_1 > 0 such that u¹ ∈ U, u² ∈ U for all ε, 0 < ε ≤ ε_1. Then the vector u = (1/2)u¹ + (1/2)u², u¹ ∈ U, u² ∈ U, u¹ ≠ u². This contradicts the fact that u ∈ U is an extreme point. The lemma is proved.
From lemmas 1, 2 it follows that:

a) The number of extreme points of the set U is finite and does not exceed the sum Σ_{k=1}^{m} C_n^k, where C_n^k is the number of combinations of n elements taken k at a time. In fact, the number of positive coordinates of an extreme point is equal to some k, k ≤ m (by lemma 1), and the number of possible sets of k linearly independent condition vectors corresponding to the positive coordinates of an extreme point does not exceed C_n^k (by lemma 2). Summing over k from 1 to m, we get the maximum possible number of extreme points.

b) The set U = {u ∈ Eⁿ | u ≥ 0, Au − b = 0} is a convex polyhedron with a finite number of extreme points for any matrix A of order m × n.
Definition 2. The linear programming problem in canonical form (1) is called nondegenerate if the number of positive coordinates of every feasible vector is not less than the rank of the matrix A, i.e. in the equation a^1 u_1 + a^2 u_2 + … + a^n u_n = b with u_i ≥ 0, i = 1,…,n, the number of nonzero summands is not less than the rank of A.

Lemma 3. Let rank A = m (m < n). If in the nondegenerate problem a feasible vector u has exactly m positive coordinates, then u is an extreme point of the set U.
Proof. Let the feasible vector u = (u_1 > 0, …, u_m > 0, 0, …, 0) ∈ U have exactly m positive coordinates. We show that u is an extreme point of the set U.

We suppose the opposite, i.e. that there exist points u¹, u² ∈ U, u¹ ≠ u², and a number α, 0 < α < 1, such that u = αu¹ + (1−α)u² (the point u ∈ U is not an extreme point). From this representation it follows that u¹ = (u_1¹,…,u_m¹, 0,…,0), u² = (u_1²,…,u_m², 0,…,0). Consider the point u(ε) = u + ε(u¹ − u²), u¹ ≠ u². Consequently, u(ε) = (u_1 + ε(u_1¹ − u_1²), u_2 + ε(u_2¹ − u_2²), …, u_m + ε(u_m¹ − u_m²), 0,…,0). We notice that Au(ε) = Au + ε(Au¹ − Au²) = b for any ε. Suppose there is a negative number among the first m coordinates of the vector u¹ − u². Then, increasing ε > 0 from 0, we find a number ε_1 > 0 such that one of the first m coordinates of the vector u(ε_1) becomes equal to zero while all the rest remain nonnegative; u(ε_1) is then a feasible vector with fewer than m positive coordinates, which is impossible in the nondegenerate problem. Similarly, if u¹ − u² ≥ 0, u¹ ≠ u², then, decreasing ε from 0 to −∞, we obtain the same contradiction. The lemma is proved.

We notice that rank A = m does not by itself imply that the linear programming problem (1) is nondegenerate.
Example 1. Let the set be

    U = {u = (u_1, u_2, u_3, u_4) ∈ E⁴ | u_j ≥ 0, j = 1,…,4;
         3u_1 + u_2 + u_3 + u_4 = 3,  u_1 − u_2 + 2u_3 + u_4 = 1}.

In this case the matrix

    A = ( 3   1  1  1 ) = (a^1, a^2, a^3, a^4),  rank A = 2.
        ( 1  −1  2  1 )

The extreme points of the set U are u¹ = (1, 0, 0, 0), u² = (0, 5/3, 4/3, 0), u³ = (0, 1, 0, 2). Here u², u³ are nondegenerate extreme points, and u¹ is a degenerate extreme point. Since for the feasible vector u¹ the number of positive coordinates is less than rank A, the linear programming problem (1) with this set U is degenerate. The number of extreme points is equal to 3, which does not exceed the bound C_4^1 + C_4^2 = 10; the condition vectors corresponding to the positive coordinates of the extreme points, u² ↔ {a^2, a^3}, u³ ↔ {a^2, a^4}, are linearly independent.
Example 2. Let the set be

    U = {u = (u_1, u_2, u_3, u_4) ∈ E⁴ | u_j ≥ 0, j = 1,…,4;
         3u_1 + u_2 + u_3 + u_4 = 3,  −u_1 − u_2 + 2u_3 + u_4 = 1}.

Here rank A = 2, and the extreme points are u¹ = (1/2, 0, 0, 3/2), u² = (5/7, 0, 6/7, 0), u³ = (0, 5/3, 4/3, 0), u⁴ = (0, 1, 0, 2). The problem (1) is nondegenerate. We notice that in a nondegenerate problem the number of extreme points is no more than C_n^m.
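The extreme points in examples of this kind can be enumerated mechanically, following lemmas 1 and 2: pick m linearly independent columns of A, solve the resulting m × m system for those coordinates, set the rest to zero, and keep the solution if it is nonnegative. A sketch for the data of Example 1:

```python
# Enumerating the extreme points of U = {u >= 0, Au = b} from Example 1
# by basic solutions: choose m = 2 linearly independent columns of A,
# solve the 2x2 subsystem, set the remaining coordinates to zero, and
# keep the point if it is nonnegative (lemmas 1 and 2).

from fractions import Fraction as F
from itertools import combinations

A = [[F(3), F(1), F(1), F(1)],
     [F(1), F(-1), F(2), F(1)]]
b = [F(3), F(1)]
n = 4

extreme = set()
for i, j in combinations(range(n), 2):
    # 2x2 subsystem A_B (u_i, u_j)' = b, solved by Cramer's rule
    a11, a12 = A[0][i], A[0][j]
    a21, a22 = A[1][i], A[1][j]
    det = a11 * a22 - a12 * a21
    if det == 0:                    # columns linearly dependent, skip
        continue
    ui = (b[0] * a22 - a12 * b[1]) / det
    uj = (a11 * b[1] - b[0] * a21) / det
    if ui >= 0 and uj >= 0:         # basic solution is feasible
        u = [F(0)] * n
        u[i], u[j] = ui, uj
        extreme.add(tuple(u))

print(sorted(extreme))
# exactly the three extreme points listed in Example 1:
assert extreme == {(F(1), F(0), F(0), F(0)),
                   (F(0), F(5, 3), F(4, 3), F(0)),
                   (F(0), F(1), F(0), F(2))}
```

The degenerate point (1, 0, 0, 0) arises from several different bases, which is exactly how degeneracy shows up in this enumeration; the set collapses the duplicates.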
Lemma 4. Any point u ∈ U can be represented as a convex linear combination of extreme points of the set U, i.e. u = Σ_{k=1}^{s} α_k u^k, α_k ≥ 0, Σ_{k=1}^{s} α_k = 1, where u^1, u^2, …, u^s are the extreme points of the set U.

Proof. We prove the lemma for the case when U is a convex bounded closed set; note that for the set U* to be nonempty it suffices that the set U be compact, i.e. a convex closed bounded set. We prove the lemma by the method of mathematical induction, considering separately the cases u ∈ ∂U (a boundary point) and u ∈ int U.

Let u ∈ ∂U, u ∈ U. If n = 1, then the set U is a segment; consequently, the statement of the lemma is true, since any point of the segment (whether u ∈ ∂U or u ∈ int U) can be represented as a convex combination of the extreme points (the endpoints of the segment). Let the lemma be true for sets in E^{n−1} (n ≥ 2). We draw a supporting hyperplane to the set U ⊂ Eⁿ through the point u ∈ ∂U, i.e. ⟨c, v⟩ ≥ ⟨c, u⟩, ∀v ∈ U. We denote U_1 = U ∩ Γ, where Γ = {v ∈ Eⁿ | ⟨c, v⟩ = ⟨c, u⟩} is the set of points of the supporting hyperplane. We notice that the set U_1 is convex, bounded and closed, and lies in an (n−1)-dimensional hyperplane. Moreover, let u^1, u^2, …, u^{s_1} be the extreme points of the set U_1; then by the induction hypothesis the point u = Σ_{k=1}^{s_1} α_k u^k, α_k ≥ 0, k = 1,…,s_1, Σ_{k=1}^{s_1} α_k = 1. It remains to show that the points u^1, u^2, …, u^{s_1} are extreme points of the set U as well. Let u^i = αw + (1−α)v, w, v ∈ U, 0 < α < 1. We show that u^i = w = v for any extreme point u^i ∈ U_1. In fact, since ⟨c, w⟩ ≥ ⟨c, u⟩, ⟨c, v⟩ ≥ ⟨c, u⟩ and ⟨c, u^i⟩ = ⟨c, u⟩, we have ⟨c, u^i⟩ = α⟨c, w⟩ + (1−α)⟨c, v⟩ ≥ ⟨c, u⟩ = ⟨c, u^i⟩. Hence it follows that ⟨c, w⟩ = ⟨c, v⟩ = ⟨c, u⟩, i.e. w, v ∈ U_1. However, the point u^i is an extreme point of U_1; consequently, u^i = w = v. Therefore the points u^1, u^2, …, u^{s_1} are extreme points of the set U. For a boundary point u ∈ ∂U the lemma is proved.

Let now the point u ∈ int U. We draw through u ∈ U a line l which crosses the boundary of the set U at points a ∈ ∂U, b ∈ ∂U. Then the point u ∈ int U is representable in the form u = αa + (1−α)b, 0 < α < 1. For the boundary points a ∈ ∂U, b ∈ ∂U, by what has already been proved,

    a = Σ_{k=1}^{s_1} β_k v^k, β_k ≥ 0, k = 1,…,s_1, Σ_{k=1}^{s_1} β_k = 1,
    b = Σ_{k=1}^{s_2} γ_k w^k, γ_k ≥ 0, k = 1,…,s_2, Σ_{k=1}^{s_2} γ_k = 1,

where v^1,…,v^{s_1}; w^1,…,w^{s_2} are extreme points of the set U. Then the point u = αa + (1−α)b = Σ_{k=1}^{s_1} αβ_k v^k + Σ_{k=1}^{s_2} (1−α)γ_k w^k, where αβ_k ≥ 0, k = 1,…,s_1, (1−α)γ_k ≥ 0, k = 1,…,s_2, and Σ_{k=1}^{s_1} αβ_k + Σ_{k=1}^{s_2} (1−α)γ_k = α + (1−α) = 1. The lemma is proved.

Lemma 5. Let U be a convex bounded closed set in Eⁿ, so that the set U* ≠ ∅. Then the minimum of the function J(u) on the set U is attained at an extreme point of the set U. If the minimum of J(u) on U is attained at several extreme points u^1, …, u^k of the set U, then J(u) has the same minimum value at any point

    u = Σ_{i=1}^{k} α_i u^i,  α_i ≥ 0,  Σ_{i=1}^{k} α_i = 1.

Proof. Let the minimum of J(u) on U be attained at a point u* ∈ U*. If u* ∈ U is an extreme point, the lemma is proved.

Let u* ∈ U be some boundary or interior point of the set U. Then by lemma 4 we have u* = Σ_{i=1}^{s} α_i u^i, α_i ≥ 0, Σ_{i=1}^{s} α_i = 1, where u^1, …, u^s are the extreme points of the set U. The value J(u*) = ⟨c, u*⟩ = Σ_{i=1}^{s} α_i ⟨c, u^i⟩ = Σ_{i=1}^{s} α_i J_i, where J_i = J(u^i) = ⟨c, u^i⟩, i = 1,…,s.

Let J_0 = min_{1≤i≤s} J_i = J(u^{i_0}). Then J(u*) ≥ J_0 Σ_{i=1}^{s} α_i = J_0 = J(u^{i_0}). Hence J(u*) ≥ J(u^{i_0}); consequently, the minimum of J(u) on U is attained at the extreme point u^{i_0}.

Let now J(u*) = J(u^1) = J(u^2) = … = J(u^k), where u^1, …, u^k are extreme points of the set U. We show that J(u) = J(Σ_{i=1}^{k} α_i u^i) = J(u*), where α_i ≥ 0, Σ_{i=1}^{k} α_i = 1. In fact, the value J(u) = Σ_{i=1}^{k} α_i J(u^i) = Σ_{i=1}^{k} α_i J(u*) = J(u*). The lemma is proved.
We notice that lemmas 4, 5 are true for any linear programming problem in canonical form of type (1) with a bounded closed convex set U. From lemmas 1-5 it follows that a solution algorithm for the linear programming problem in canonical form should be based on transitions from one extreme point of the set U to another, such that at each transition the value of the function J(u) at the next extreme point is less than at the previous one. Such an algorithm converges to the solution of the problem (1) in a finite number of steps, since the number of extreme points of the set U does not exceed the number Σ_{k=1}^{m} C_n^k (in the case of a nondegenerate problem, C_n^m). At each transition from an extreme point u^i ∈ U to an extreme point u^{i+1} ∈ U it is necessary to check whether the current extreme point u^i ∈ U is a solution of the problem (1). For this, a general optimality criterion that is easily checked at each extreme point must exist. The optimality criterion for the nondegenerate linear programming problem in canonical form is given below.
Let the problem (1) be nondegenerate and let the point u* ∈ U be a solution of the problem (1). According to lemma 5 the point u* ∈ U is extreme. Since the problem (1) is nondegenerate, the extreme point u* ∈ U has exactly m positive coordinates. Without loss of generality, we consider the first m components of the vector u* ∈ U to be positive, i.e.

    u* = (u_1*, u_2*, …, u_m*, 0, …, 0),  u_1* > 0, u_2* > 0, …, u_m* > 0.

We represent the vector c ∈ Eⁿ and the matrix A in the form c = (c_B, c_H), A = (A_B, A_H), where c_B = (c_1, c_2, …, c_m) ∈ E^m, c_H = (c_{m+1}, c_{m+2}, …, c_n) ∈ E^{n−m}, A_B = (a^1, a^2, …, a^m), A_H = (a^{m+1}, a^{m+2}, …, a^n). We notice that according to lemma 2 the condition vectors a^1, a^2, …, a^m corresponding to the positive coordinates of the extreme point u* ∈ U are linearly independent, i.e. the matrix A_B is nonsingular; consequently, the inverse matrix A_B^{-1} exists.

Lemma 6 (optimality criterion). In order that the extreme point u* ∈ U be a solution of the nondegenerate linear programming problem in canonical form (1), it is necessary and sufficient that the inequality

    c_H′ − c_B′ A_B^{-1} A_H ≥ 0  (2)

hold.
Proof. Necessity. Let the extreme point u* = (u_B*, u_H*) ∈ U, where u_B* = (u_1*, u_2*, …, u_m*), u_H* = (0, …, 0), be a solution of the nondegenerate problem (1). We show that the inequality (2) holds. Let u = (u_B, u_H) ∈ U, where u_B = (u_1, …, u_m), u_H = (u_{m+1}, …, u_n), be an arbitrary point. We define the set of feasible directions l at the point u* ∈ U. We recall that a vector l ∈ Eⁿ, l ≠ 0, is called a feasible direction at the point u* ∈ U if there is a number α_0 > 0 such that the vector u = u* + αl ∈ U for all 0 ≤ α ≤ α_0. We represent the vector l ∈ Eⁿ in the form l = (l_B, l_H), where l_B = (l_1, …, l_m) ∈ E^m, l_H = (l_{m+1}, …, l_n) ∈ E^{n−m}. From the inclusion u* + αl ∈ U it follows that u_B* + αl_B ≥ 0, u_H* + αl_H = αl_H ≥ 0, A(u* + αl) = A_B(u_B* + αl_B) + αA_H l_H = b. Since u* ∈ U, we have Au* = A_B u_B* = b; consequently, from the last equality we obtain A_B l_B + A_H l_H = 0. Hence the vector l_B = −A_B^{-1} A_H l_H. Since for sufficiently small α > 0 the inequality u_B* + αl_B ≥ 0 holds, the feasible directions at the point u* are defined by the relations

    l_H ≥ 0,  l_B = −A_B^{-1} A_H l_H.  (3)

Finally, having chosen an arbitrary vector l_H ≥ 0, l_H ∈ E^{n−m}, l_H ≠ 0, one can find l_B = −A_B^{-1} A_H l_H and construct the set of feasible directions L, each element of which has the form l = (−A_B^{-1} A_H l_H, l_H), l_H ≥ 0, i.e.

    L = {l ∈ Eⁿ | l = (l_B, l_H), l_B = −A_B^{-1} A_H l_H, l_H ≥ 0}.

Now any point u ∈ U can be represented in the form u = u* + αl, l ∈ L, α ≥ 0, 0 ≤ α ≤ α_0, α_0 = α_0(l). Since the function J(u) = ⟨c, u⟩ ∈ C¹(U), at the point u* ∈ U the necessary inequality ⟨J′(u*), u − u*⟩ ≥ 0, ∀u ∈ U (lecture 5) holds. Hence, taking into account that J′(u*) = c, u − u* = αl, l ∈ L, we get ⟨c, αl⟩ = α(⟨c_B, l_B⟩ + ⟨c_H, l_H⟩) ≥ 0. Since the number α > 0, we have ⟨c_B, l_B⟩ + ⟨c_H, l_H⟩ ≥ 0. Substituting the value l_B = −A_B^{-1} A_H l_H from formula (3), we get

    ⟨c_B, −A_B^{-1} A_H l_H⟩ + ⟨c_H, l_H⟩ = (c_H′ − c_B′ A_B^{-1} A_H) l_H ≥ 0

for all l_H ≥ 0. Hence the inequality (2) follows. Necessity is proved.
Sufficiency. Let the inequality (2) hold. We show that u* ∈ U* ⊂ U. Since the function J(u) ∈ C¹(U) is convex on the convex set U, the inequality J(u) − J(v) ≥ ⟨J′(v), u − v⟩, u, v ∈ U (lecture 4) holds. In particular, for v = u* ∈ U we have

    J(u) − J(u*) ≥ ⟨J′(u*), u − u*⟩ = ⟨c, u − u*⟩ = ⟨c, αl⟩
                 = α(⟨c_B, l_B⟩ + ⟨c_H, l_H⟩)
                 = α(c_H′ − c_B′ A_B^{-1} A_H) l_H ≥ 0,  l ∈ L, ∀u ∈ U.

Then J(u*) ≤ J(u), ∀u ∈ U. Consequently, the minimum of J(u) on U is attained at the point u* ∈ U; according to lemma 5, u* ∈ U is an extreme point. The lemma is proved.
It is easy to check the optimality criterion (2) using the simplex table formed for the extreme point u* ∈ U:

    Basis |      |        | c:  c_1  ...  c_j    ...  c_j0      ...  c_n     |
     A_B  | c_B  |   b    |     a^1  ...  a^j    ...  a^j0      ...  a^n     | theta
    ------+------+--------+--------------------------------------------------+--------
     a^1  | c_1  | u_1*   |     u_11 ...  u_1j   ...  u_1j0     ...  u_1n    |
     ...  | ...  |  ...   |     ...       ...         ...            ...     |
     a^i0 | c_i0 | u_i0*  |     u_i01 ... u_i0j  ...  u_i0j0    ...  u_i0n   | theta_0
     ...  | ...  |  ...   |     ...       ...         ...            ...     |
     a^i  | c_i  | u_i*   |     u_i1 ...  u_ij   ...  u_ij0     ...  u_in    | theta_i
     ...  | ...  |  ...   |     ...       ...         ...            ...     |
     a^m  | c_m  | u_m*   |     u_m1 ...  u_mj   ...  u_mj0     ...  u_mn    |
    ------+------+--------+--------------------------------------------------+--------
                     z:          z_1  ...  z_j    ...  z_j0      ...  z_n
                   z − c:         0   ... z_j−c_j ... z_j0−c_j0  ... z_n−c_n
The condition vectors a^1, a^2, …, a^m corresponding to the positive coordinates of the extreme point u* = (u_1*, …, u_m*, 0, …, 0) are listed in the first column of the table; the matrix A_B = (a^1, …, a^m). The coordinates of the vector c = (c_1, …, c_n) corresponding to the positive coordinates of the extreme point are given in the second column; in the third, the positive coordinates u_i* themselves. The following columns contain the decomposition coefficients of each vector a^j, j = 1,…,n, with respect to the basis vectors a^i, i = 1,…,m. Finally, the last column contains the values θ_i, which will be explained in the following lecture. We consider the values given in the last two rows in more detail.

Since the vectors a^1, a^2, …, a^m are linearly independent (lemma 2), they form a basis of the Euclidean space E^m, i.e. any vector a^j, j = 1,…,n, can be uniquely decomposed with respect to this basis. Consequently,

    a^j = Σ_{i=1}^{m} a^i u_{ij} = A_B u^j,
    u^j = (u_{1j}, u_{2j}, …, u_{mj}) ∈ E^m,  j = 1,…,n.

Hence u^j = A_B^{-1} a^j, j = 1,…,n. We denote z_j = Σ_{i=1}^{m} c_i u_{ij}, j = 1,…,n. We notice that z_j = c_j for j = 1,…,m, since u^j = (0,…,0,1,0,…,0)′ (a unit vector) for j = 1,…,m. Then the vector

    z − c = ((z − c)_B, (z − c)_H) = (0, (z − c)_H),

where (z − c)_B = (z_1 − c_1, z_2 − c_2, …, z_m − c_m) = (0,…,0), (z − c)_H = (z_{m+1} − c_{m+1}, …, z_n − c_n).

Since the values z_j = Σ_{i=1}^{m} c_i u_{ij} = c_B′ u^j = c_B′ A_B^{-1} a^j, j = 1,…,n, we have z_j − c_j = c_B′ A_B^{-1} a^j − c_j, j = 1,…,n. Consequently, the vector (z − c)′ = (0, c_B′ A_B^{-1} A_H − c_H′).

Comparing this relation with the optimality criterion (2), we see that for the extreme point u* ∈ U to be a solution of the problem it is necessary and sufficient that z_j − c_j ≤ 0, j = 1,…,n. Finally, by the signs of the values in the last row of the simplex table one can determine whether the point u* ∈ U is a solution of the nondegenerate problem (1).
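The last row of the simplex table can be computed directly from the data. The sketch below uses the constraint matrix of Example 1, the extreme point u = (0, 5/3, 4/3, 0) with basis (a^2, a^3), and a hypothetical cost vector c = (1, 1, 1, 1) of our own choosing (the text's example gives no costs).

```python
# Building the bottom row z_j - c_j of the simplex table for an extreme
# point of Example 1. The costs c = (1, 1, 1, 1) are hypothetical.
# Extreme point u = (0, 5/3, 4/3, 0); its basis is A_B = (a^2, a^3).

from fractions import Fraction as F

A = [[F(3), F(1), F(1), F(1)],
     [F(1), F(-1), F(2), F(1)]]
c = [F(1), F(1), F(1), F(1)]
basis = [1, 2]                       # column indices of a^2, a^3

# inverse of the 2x2 basis matrix A_B
a11, a12 = A[0][basis[0]], A[0][basis[1]]
a21, a22 = A[1][basis[0]], A[1][basis[1]]
det = a11 * a22 - a12 * a21
AB_inv = [[a22 / det, -a12 / det],
          [-a21 / det, a11 / det]]

cB = [c[j] for j in basis]
# row vector y = c_B' A_B^{-1}
y = [cB[0] * AB_inv[0][k] + cB[1] * AB_inv[1][k] for k in range(2)]
# z_j = c_B' A_B^{-1} a^j, hence z_j - c_j = y . a^j - c_j
z_minus_c = [y[0] * A[0][j] + y[1] * A[1][j] - c[j] for j in range(4)]

print(z_minus_c)
# basis columns always give z_j - c_j = 0; here z_1 - c_1 = 2 > 0,
# so by criterion (2) this extreme point is not a solution
assert z_minus_c[1] == 0 and z_minus_c[2] == 0
assert z_minus_c[0] > 0
```

A positive entry in the z − c row identifies a column whose entry into the basis can decrease the objective, which is how the next lecture chooses the search direction.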

Lecture 17
DIRECTION CHOICE.
NEW SIMPLEX-TABLE CONSTRUCTION.
THE INITIAL EXTREME POINT
CONSTRUCTION

In the linear programming problem in canonical form, the minimum of the linear function is attained at an extreme point of the convex polyhedron U. In the simplex method, starting from an initial extreme point, one passes from one extreme point to the next, so that the value of the linear function at the next extreme point is less than at the previous one. It is necessary to choose the search direction from the current extreme point, find the next one, check the optimality criterion at it, and so on. To apply the simplex method to the solution of the problem, it is also necessary to determine an initial extreme point.

We consider the nondegenerate linear programming problem in canonical form

    J(u) = ⟨c, u⟩ → inf,  u ∈ U = {u ∈ Eⁿ | u ≥ 0, Au − b = 0}.  (1)

Let ū ∈ U be an extreme point at which the minimum of J(u) on U is not attained, i.e. the optimality criterion z_j − c_j ≤ 0, j = 1,…,n, does not hold. Then it is necessary to pass from the given extreme point to another extreme point ũ ∈ U, where the value J(ũ) < J(ū). It is necessary to choose the direction of motion from the given extreme point ū.
Direction choice. Since ū ∈ U is an extreme point, without loss of generality we may consider that the first m components of the vector ū are positive, i.e. ū = (ū_1, ū_2, …, ū_m, 0, …, 0), ū_i > 0, i = 1,…,m. According to formula (3) (lectures 15, 16), the feasible directions at the point ū are defined by the relations l_H ≥ 0, l_B = −A_B^{-1} A_H l_H. The derivative of the function J(u) along a feasible direction l at the point ū is equal to

    ∂J(ū)/∂l = ⟨c, l⟩ = ⟨c_B, l_B⟩ + ⟨c_H, l_H⟩
             = (c_H′ − c_B′ A_B^{-1} A_H) l_H
             = −Σ_{j=m+1}^{n} (z_j − c_j) l_j,  l_j ≥ 0, j = m+1,…,n.

One would like to choose the feasible direction l⁰ at the point ū from the condition of minimum of ∂J(ū)/∂l on the set of feasible directions L.

In the simplex method the direction l⁰ ∈ L is chosen as follows:

a) An index j_0 ∈ I_1 is determined, where I_1 = {j | m+1 ≤ j ≤ n, z_j − c_j > 0}, from the condition z_{j_0} − c_{j_0} = max_{j∈I_1} (z_j − c_j). Since at the point ū ∈ U the minimum of J(u) on U is not attained, i.e. the inequalities z_j − c_j ≤ 0 do not hold for all j, 1 ≤ j ≤ n, the set I_1 ≠ ∅.

b) The vector l_H⁰ ≥ 0, l_H⁰ ∈ E^{n−m}, is chosen so that l_H⁰ = (0,…,0,1,0,…,0), i.e. the j_0-th component of the vector l_H⁰ is equal to 1, and all the rest are equal to zero.

Finally, the direction of motion l⁰ at the point ū ∈ U is defined by the relations:

    l⁰ ∈ L,  l⁰ = (l_B⁰, l_H⁰),  l_H⁰ = (0,…,0,1,0,…,0),
    l_B⁰ = −A_B^{-1} A_H l_H⁰ = −A_B^{-1} a^{j_0} = −u^{j_0}
         = (−u_{1j_0}, −u_{2j_0}, …, −u_{mj_0}).  (2)

We note that the derivative of the function J(u) at the point u along l⁰ equals ∂J(u)/∂l⁰ = ⟨c, l⁰⟩ = −(z_{j₀} − c_{j₀}) < 0.

It should be noted that in general ∂J(u)/∂l⁰ is not the least value of ∂J(u)/∂l over the set L. However, this choice of the direction l⁰ makes it possible to construct an algorithm of transition from one extreme point to another.
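The entering-column rule a), b) can be sketched in code; an illustrative fragment with names of our own choosing (0-based indices, the first m columns basic):

```python
# Sketch of step a): among the nonbasic columns choose j0 with the
# largest positive reduced cost z_j - c_j; if none is positive, the
# optimality criterion holds and the current extreme point is optimal.

def choose_entering(z_minus_c, m):
    """z_minus_c: values z_j - c_j for all n columns (0-based);
    columns 0..m-1 are basic. Returns j0 or None if optimal."""
    I1 = [j for j in range(m, len(z_minus_c)) if z_minus_c[j] > 0]
    if not I1:
        return None                      # z_j - c_j <= 0 for all j
    return max(I1, key=lambda j: z_minus_c[j])
```

For example, `choose_entering([0, 0, 3, -1, 5], 2)` selects column 4.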
We consider the set of points of U along the chosen direction l⁰. These are the points u(θ) = u + θl⁰ ∈ U, θ ≥ 0. Since u = (u_1 > 0, …, u_m > 0, 0, …, 0) ∈ U is an extreme point and θl⁰ = (θl⁰_B, θl⁰_N) = (−θū_{j₀}, θl⁰_N), these points are

$$u(\theta) = (u_1 - \theta u_{1 j_0},\, u_2 - \theta u_{2 j_0}, \dots, u_m - \theta u_{m j_0},\, 0, \dots, 0,\, \theta,\, 0, \dots, 0), \quad \theta \ge 0.$$

We introduce the index set I₂ = {i | 1 ≤ i ≤ m, u_{i j₀} > 0} and define the values θ_i = u_i / u_{i j₀}, i ∈ I₂. Let

$$\theta_0 = \min_{i \in I_2} \theta_i = \min_{i \in I_2} \bigl(u_i / u_{i j_0}\bigr) = \theta_{i_0} > 0, \quad i_0 \in I_2.$$

Then for the value θ = θ₀ the vector

$$u(\theta_0) = (u_1 - \theta_0 u_{1 j_0}, \dots, u_{i_0-1} - \theta_0 u_{i_0-1, j_0},\, 0,\, u_{i_0+1} - \theta_0 u_{i_0+1, j_0}, \dots, u_m - \theta_0 u_{m j_0},\, 0, \dots, 0,\, \theta_0,\, 0, \dots, 0).$$

We note that u(θ₀) ≥ 0 and Au(θ₀) = Au + θ₀Al⁰ = Au + θ₀(A_B l⁰_B + A_N l⁰_N) = Au = b; consequently, the point u(θ₀) ∈ U. On the other hand, the vector u(θ₀) has exactly m positive coordinates. This means that u(θ₀) = ū is an extreme point of the set U. We calculate the value J(u(θ₀)) = J(u + θ₀l⁰) = ⟨c, u⟩ + θ₀⟨c, l⁰⟩ = J(u) − θ₀(z_{j₀} − c_{j₀}). Hence J(u(θ₀)) = J(u) − θ₀(z_{j₀} − c_{j₀}) < J(u), i.e. at the extreme point u(θ₀) = ū the value of J(u) is less than at the extreme point u.
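The computation of θ₀ (the ratio test) can be sketched directly; exact rational arithmetic via the standard `fractions` module, illustrative names:

```python
# theta_0 = min over I2 = {i : u_{i j0} > 0} of u_i / u_{i j0};
# an empty I2 signals the unbounded case (note 1 below).
from fractions import Fraction

def ratio_test(u_basic, col_j0):
    I2 = [i for i, a in enumerate(col_j0) if a > 0]
    if not I2:
        return None, None                # J is unbounded below on U
    i0 = min(I2, key=lambda i: Fraction(u_basic[i]) / Fraction(col_j0[i]))
    return Fraction(u_basic[i0]) / Fraction(col_j0[i0]), i0
```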
We note the following:

1) If the index set I₂ = ∅, i.e. u_{i j₀} ≤ 0, i = 1,…,m, then for any θ ≥ 0 the point u(θ) ∈ U, and J(u(θ)) = J(u) − θ(z_{j₀} − c_{j₀}) → −∞ as θ → ∞. In this case the original problem has no solution. However, if the set U is compact, this is impossible.

2) It may happen that θ₀ = θ_{i₀} = θ_{i₀'} for two different indices i₀, i₀' ∈ I₂. In this case the vector u(θ₀) has m − 1 positive coordinates, which means that the original problem is degenerate. In such cases cycling may occur, i.e. after a certain number of steps the extreme point u(θ₀) is visited again. There are various anti-cycling techniques; on this question we recommend the following books: Gabasov R., Kirillova F.M. Methods of Optimization. Minsk: BGU, 1975; Karmanov V.G. Mathematical Programming. M.: Nauka, 1975; Moiseev N.N., Ivanilov Yu.P., Stolyarova E.M. Methods of Optimization. M.: Nauka, 1978.
To make sure that the extreme point u(θ₀) = ū ∈ U is a solution of problem (1), one needs to build the simplex table for the extreme point ū in order to check the optimality criterion (2) of Lecture 16 at the point ū.
Construction of the new simplex table. The simplex table built for the extreme point u* ∈ U in the previous lecture is valid, in particular, for the extreme point u ∈ U. In this simplex table the column j₀, the row i₀ and the values θ_i, i ∈ I₂, are indicated.

We build the simplex table for the extreme point u(θ₀) = ū ∈ U on the basis of the simplex table for the point u* = u. We notice that
for the point u ∈ U the basis vectors were the condition vectors a¹, …, a^m corresponding to the positive coordinates of the extreme point u. For the extreme point u(θ₀) = ū the basis vectors will be

$$a^1, a^2, \dots, a^{i_0-1},\, a^{j_0},\, a^{i_0+1}, \dots, a^m \tag{2*}$$

as the condition vectors corresponding to its positive coordinates. Thereby the first column of the new simplex table differs from the previous one in that the vector a^{j₀} is written instead of the vector a^{i₀}. In the second column c_{j₀} is written instead of c_{i₀}; in the third column the positive components of the vector ū are written, since a¹ū₁ + a²ū₂ + … + a^{i₀−1}ū_{i₀−1} + a^{j₀}θ₀ + a^{i₀+1}ū_{i₀+1} + … + a^mū_m = b, where ū_i = u_i − θ₀u_{i j₀}, i = 1,…,m, i ≠ i₀. The remaining columns of the new simplex table must contain the coefficients of the decompositions of the vectors a_j, j = 1,…,n, in the new basis (2*). In the previous basis a¹, …, a^m there were the decompositions

$$a_j = \sum_{i=1}^{m} a^i u_{ij}, \quad j = 1,\dots,n.$$

Hence it follows that

$$a_j = \sum_{\substack{i=1 \\ i \ne i_0}}^{m} a^i u_{ij} + a^{i_0} u_{i_0 j}, \quad j = 1,\dots,n.$$

Then for j = j₀ we have

$$a_{j_0} = \sum_{\substack{i=1 \\ i \ne i_0}}^{m} a^i u_{i j_0} + a^{i_0} u_{i_0 j_0}, \qquad a^{i_0} = -\sum_{\substack{i=1 \\ i \ne i_0}}^{m} a^i \frac{u_{i j_0}}{u_{i_0 j_0}} + \frac{1}{u_{i_0 j_0}}\, a_{j_0}. \tag{3}$$

Substituting the value of a^{i₀} from formula (3), we get

$$a_j = \sum_{\substack{i=1 \\ i \ne i_0}}^{m} a^i u_{ij} + a^{i_0} u_{i_0 j} = \sum_{\substack{i=1 \\ i \ne i_0}}^{m} a^i \left( u_{ij} - \frac{u_{i j_0}\, u_{i_0 j}}{u_{i_0 j_0}} \right) + \frac{u_{i_0 j}}{u_{i_0 j_0}}\, a_{j_0}, \quad j = 1,\dots,n. \tag{3*}$$

Formula (3*) gives the decompositions of the vectors a_j, j = 1,…,n, in the new basis (2*). From expression (3*) it follows that in the new simplex table the coefficients (u_{ij})_new are determined by the formula

$$(u_{ij})_{\text{new}} = \frac{u_{ij}\, u_{i_0 j_0} - u_{i j_0}\, u_{i_0 j}}{u_{i_0 j_0}}, \quad i \ne i_0,$$

while in the row i₀ of the new simplex table there must be (u_{i₀ j})_new = u_{i₀ j} / u_{i₀ j₀}, j = 1,…,n. Since the vector a_{j₀} is in the basis, in the column j₀ of the new simplex table all (u_{i j₀})_new = 0, i ≠ i₀, and (u_{i₀ j₀})_new = 1. Finally, the coefficients (u_{ij})_new, i = 1,…,m, j = 1,…,n, are calculated from the known coefficients u_{ij} of the previous simplex table. Then the last two rows of the new simplex table are calculated from the known (c_B)_new = (c₁, …, c_{i₀−1}, c_{j₀}, c_{i₀+1}, …, c_m) and (u_{ij})_new, i = 1,…,m, j = 1,…,n. If it turns out that (z_j − c_j)_new ≤ 0, j = 1,…,n, then ū ∈ U is a solution of the problem. Otherwise the transition to the next extreme point of the set U is performed, and so on.
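The recomputation formulas for (u_{ij})_new amount to a single Jordan pivot on the table; a minimal sketch (row-wise table, exact arithmetic, illustrative names):

```python
from fractions import Fraction

def pivot(T, i0, j0):
    """One simplex pivot: divide row i0 by u_{i0 j0}; for i != i0 apply
    (u_ij)_new = (u_ij*u_{i0 j0} - u_{i j0}*u_{i0 j}) / u_{i0 j0},
    which equals u_ij - u_{i j0} * (u_{i0 j})_new."""
    T = [[Fraction(x) for x in row] for row in T]
    p = T[i0][j0]
    new_i0 = [x / p for x in T[i0]]
    return [new_i0 if i == i0 else
            [row[j] - row[j0] * new_i0[j] for j in range(len(row))]
            for i, row in enumerate(T)]
```

After the pivot the column j₀ contains 1 in row i₀ and zeros elsewhere, as stated above.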
Construction of the initial extreme point. As follows from Lemmas 2, 3, an extreme point can be determined from the system of algebraic equations A_B u_B = b, u_B ≥ 0, where A_B = (a_{j₁}, a_{j₂}, …, a_{j_m}); a_{j_k}, k = 1,…,m, are linearly independent columns of the matrix A, and u_B = (u_{j₁}, u_{j₂}, …, u_{j_m}). However, such a way of determining an extreme point u = (0, …, 0, u_{j₁}, 0, …, 0, u_{j_m}, 0, …, 0) is quite laborious when the matrix A has large dimensions.
1. Let the general problem of linear programming be given:

$$J(u) = \langle c, u \rangle \to \inf, \quad u \in U = \{u \in E^n \mid u \ge 0,\; Au - b \le 0\}, \tag{4}$$

where c ∈ E^n, b ∈ E^m are vectors and A is a matrix of order m × n. We suppose that the vector b ≥ 0. By introducing the additional (slack) variables u_{n+i} ≥ 0, i = 1,…,m, problem (4) can be reduced to the canonical form

$$J(u) = \langle c, u \rangle \to \inf, \quad [Au]_i + u_{n+i} = b_i, \; i = 1,\dots,m, \tag{5}$$

$$u = (u_1, \dots, u_n) \ge 0, \quad u_{n+i} \ge 0, \; i = 1,\dots,m.$$

Introducing the notation c̄ = (c, 0) ∈ E^{n+m}, ū = (u, u_{n+1}, …, u_{n+m}) ∈ E^{n+m}, Ā = (A, I_m) = (A, a^{n+1}, a^{n+2}, …, a^{n+m}), a^{n+k} = (0, …, 0, 1, 0, …, 0), k = 1,…,m, where I_m is the unit matrix of order m × m, problem (5) is written in the form

$$J(\bar u) = \langle \bar c, \bar u \rangle \to \inf, \quad \bar u \in \bar U = \{\bar u \in E^{n+m} \mid \bar u \ge 0,\; \bar A \bar u = b\}. \tag{6}$$

For problem (6) the initial extreme point is ū = (0, …, 0, b₁, …, b_m) ∈ E^{n+m}, since the condition vectors a^{n+1}, …, a^{n+m} (the columns of the unit matrix I_m) corresponding to the positive coordinates of the extreme point ū, b > 0, are linearly independent.
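The reduction (4) → (5), (6) by slack variables amounts to appending the unit matrix I_m and zero costs; a sketch with plain lists (illustrative names):

```python
def to_canonical(A, b, c):
    """Return (A_bar, c_bar, u0) for J = <c_bar, u> -> inf,
    A_bar u = b, u >= 0, where u0 = (0,...,0, b) is the initial
    extreme point (valid when b > 0)."""
    m, n = len(A), len(A[0])
    A_bar = [list(row) + [1 if k == i else 0 for k in range(m)]
             for i, row in enumerate(A)]
    c_bar = list(c) + [0] * m          # slack variables cost nothing
    u0 = [0] * n + list(b)
    return A_bar, c_bar, u0
```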
2. Dantzig's method (two-phase method). We consider the canonical problem of linear programming

$$J(u) = \langle c, u \rangle \to \inf, \quad u \in U = \{u \in E^n \mid u \ge 0,\; Au - b = 0\}, \tag{7}$$

where c ∈ E^n, b ∈ E^m and A is a matrix of order m × n. We suppose that b ≥ 0 (if b_i < 0 for some i, then, multiplying the i-th row by −1, it is always possible to achieve b_i ≥ 0, i = 1,…,m).

At the first stage (phase) the following canonical problem is solved:

$$\sum_{i=1}^{m} u_{n+i} \to \inf, \qquad [Au]_i + u_{n+i} = b_i, \; i = 1,\dots,m, \quad u \ge 0, \; u_{n+i} \ge 0, \; i = 1,\dots,m. \tag{8}$$

The initial extreme point for problem (8) is the vector ū = (0, …, 0, b₁, …, b_m) ≥ 0. Problem (8) is then solved by the simplex method and its solution ū* = (u₁*, u₂*, …, u_n*, 0, …, 0) is found. We note that if the original problem (7) has a solution and is nondegenerate, then the vector u* = (u₁*, u₂*, …, u_n*) ∈ E^n has exactly m positive coordinates and is an initial extreme point for problem (7), since the lower bound in (8) is attained for u_{n+i} = 0, i = 1,…,m.

At the second stage problem (7) is solved by the simplex method starting from the initial extreme point u* ∈ U.
3. Charnes' method (M-method). Charnes' method is a generalization of Dantzig's method obtained by merging the two stages of the solution.

Instead of the original problem (7) the so-called M-problem of the following form is considered:

$$\langle c, u \rangle + M \sum_{i=1}^{m} u_{n+i} \to \inf, \qquad [Au]_i + u_{n+i} = b_i, \; u \ge 0, \; u_{n+i} \ge 0, \; i = 1,\dots,m, \tag{9}$$

where M > 0 is a sufficiently large number. For the M-problem (9) the initial extreme point is ū = (0, …, 0, b₁, b₂, …, b_m) ∈ E^{n+m}, b ≥ 0, and it is solved by the simplex method. If the original problem (7) has a solution, then the M-problem has a solution of the form ū* = (ũ₁*, …, ũ_n*, 0, …, 0), where the components ũ*_{n+i} = 0, i = 1,…,m. The vector u* = (ũ₁*, ũ₂*, …, ũ_n*) ∈ E^n is the solution of problem (7).

We note that for the M-problem the values z_j − c_j = α_j M + β_j, j = 1,…,n. Therefore in the simplex table, instead of the row of z_j, z_j − c_j, two rows are kept: one for the coefficients α_j, the other for β_j. The index j₀, where z_{j₀} − c_{j₀} = max_j (z_j − c_j), z_j − c_j > 0, is determined first of all by the values of the coefficients α_j, α_j > 0.
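Since M is arbitrarily large, the reduced costs z_j − c_j = α_jM + β_j are compared lexicographically: first by α_j, then by β_j. A one-function sketch (illustrative names):

```python
def m_method_entering(alphas, betas):
    """Entering index for the M-method: the pair (alpha_j, beta_j)
    orders the reduced costs alpha_j*M + beta_j for all large M.
    Returns None when every z_j - c_j <= 0, i.e. (alpha, beta) <= (0, 0)."""
    pairs = list(zip(alphas, betas))
    j0 = max(range(len(pairs)), key=lambda j: pairs[j])
    return j0 if pairs[j0] > (0, 0) else None
```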

REFERENCES

1. Alekseev V.M., Tikhomirov V.M., Fomin S.V. Optimal Control. M.: Nauka, 1979.
2. Boltyansky V.G. Mathematical Methods of Optimal Control. M.: Nauka, 1969.
3. Bryson A., Ho Yu-Chi. Applied Optimal Control. M.: Mir, 1972.
4. Vasiliev F.P. Lectures on Methods of Solving Extremal Problems. M.: MSU, 1974.
5. Vasiliev F.P. Numerical Methods for Solving Extremal Problems. M.: Nauka, 1980.
6. Gabasov R., Kirillova F.M. Methods of Optimization. Minsk: BSU, 1980.
7. Gelfand I.M., Fomin S.V. Calculus of Variations. M.: Fizmatgiz, 1961.
8. Zubov V.I. Lectures on Control Theory. M.: Nauka, 1975.
9. Karmanov V.G. Mathematical Programming. M.: Nauka, 1975.
10. Krasovsky N.N. Theory of Motion Control. M.: Nauka, 1968.
11. Krotov V.F., Gurman V.I. Methods and Problems of Optimal Control. M.: Nauka, 1973.
12. Lee E.B., Markus L. Foundations of Optimal Control Theory. M.: Nauka, 1972.
13. Pontryagin L.S., Boltyansky V.G., Gamkrelidze R.V., Mishchenko E.F. Mathematical Theory of Optimal Processes. M.: Nauka, 1976.
14. Pshenichnyi B.N., Danilin Yu.M. Numerical Methods for Extremal Problems. M.: Nauka, 1975.
Appendix 1

TASKS FOR INDEPENDENT WORK

For conducting practical and laboratory lessons it is reasonable to have a short theoretical basis required for the solution of problems and examples. To this end, in 1981 the methodical instructions for the course "Methods of Optimization" prepared by S.A. Aisagaliev and T.N. Biyarov were issued at al-Farabi Kazakh National University. The problems and theoretical foundations for the sections of the course "Methods of Optimization" given in this Appendix are based on the mentioned workbook.

P.1.1. Minimization of a function of several variables
in the absence of constraints

Statement of the problem. Let the scalar function J(u) = J(u₁,…,u_n) be defined on the whole space E^n. Solve the following optimization problem:

$$J(u) \to \inf, \quad u \in E^n.$$

The point u* ∈ E^n is called a point of minimum of J(u) on E^n if J(u*) ≤ J(u) for all u ∈ E^n. The value J(u*) is called the least, or minimum, value of the function J(u) on E^n. We note that the absolute (global) minimum of J(u) on E^n is attained at the point u* ∈ E^n.

The point u₀ ∈ E^n is called a point of local minimum of J(u) on E^n if J(u₀) ≤ J(u) for all |u − u₀| ≤ δ, where δ > 0 is a sufficiently small number. Usually one first finds the points of local minimum and then among them finds the points of global minimum.

The following theorems are known from the course of mathematical analysis.

Theorem 1. If the function J(u) ∈ C¹(E^n), then at a point of local minimum u₀ ∈ E^n the equality J′(u₀) = 0 holds (necessary first-order condition).

Theorem 2. If the function J(u) ∈ C²(E^n), then at a point of local minimum u₀ ∈ E^n the conditions J′(u₀) = 0, J″(u₀) ≥ 0 hold (necessary conditions of the second order).

Theorem 3. For the point u₀ ∈ E^n to be a point of local minimum of a function J(u) ∈ C²(E^n), it is sufficient that J′(u₀) = 0, J″(u₀) > 0.

We note that the problem J(u) → sup, u ∈ E^n, is equivalent to the problem −J(u) → inf, u ∈ E^n.
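As an illustration of Theorems 1-3, consider problem 1 below with the signs read as J(u) = u₁² + u₂² + u₃² − u₁u₂ + u₁ − 2u₃ (the printed signs are ambiguous, so treat this reading as an assumption). The stationary point is found from J′(u) = 0 and J″ is checked by its leading principal minors:

```python
from fractions import Fraction

def stationary_point():
    # J'(u) = (2u1 - u2 + 1, -u1 + 2u2, 2u3 - 2) = 0:
    # the second equation gives u1 = 2u2, the first then 3u2 + 1 = 0.
    u2 = Fraction(-1, 3)
    return 2 * u2, u2, Fraction(1)

def hessian_positive_definite():
    # J''(u) = [[2,-1,0],[-1,2,0],[0,0,2]]: leading minors 2, 3, 6 > 0,
    # so the stationary point is a (global) minimum by Theorem 3.
    minors = [2, 2 * 2 - (-1) * (-1), (2 * 2 - (-1) * (-1)) * 2]
    return all(m > 0 for m in minors)
```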
Find the points of local and global minimum of the following functions:

1. J(u) = J(u₁,u₂,u₃) = u₁² + u₂² + u₃² − u₁u₂ + u₁ − 2u₃, u = (u₁,u₂,u₃) ∈ E³.
2. J(u₁,u₂) = u₁³u₂²(6 − u₁ − u₂), u = (u₁,u₂) ∈ E².
3. J(u₁,u₂) = (u₁ − 1)² + 2u₂², u = (u₁,u₂) ∈ E².
4. J(u₁,u₂) = u₁⁴ + u₂⁴ − 2u₁² + 4u₁u₂ − 2u₂², u = (u₁,u₂) ∈ E².
5. J(u₁,u₂) = (u₁² + u₂²)e^{−(u₁²+u₂²)}, u = (u₁,u₂) ∈ E².
6. J(u₁,u₂) = (1 + u₁ + u₂)/√(1 + u₁² + u₂²), u = (u₁,u₂) ∈ E².
7. J(u₁,u₂,u₃) = u₁ + u₂²/(4u₁) + u₃²/u₂ + 2/u₃, u₁ > 0, u₂ > 0, u₃ > 0.
8. J(u₁,u₂) = u₁² + u₁u₂ + u₂² − 2u₁ − u₂, u = (u₁,u₂) ∈ E².
9. J(u₁,u₂) = sin u₁ sin u₂ sin(u₁ + u₂), 0 ≤ u₁, u₂ ≤ π.
10. J(u₁,…,u_n) = u₁u₂²…u_n^n(1 − u₁ − 2u₂ − … − nu_n), u_i > 0, i = 1,…,n.

P.1.2. Convex sets and convex functions

Let U be a certain set in E^n, and let the function J(u) = J(u₁,…,u_n) be defined on the set U. The set U is called convex if the point αu + (1−α)v ∈ U for all u, v ∈ U and all α ∈ [0,1]. The function J(u) is convex on the convex set U if

$$J(\alpha u + (1-\alpha)v) \le \alpha J(u) + (1-\alpha) J(v), \quad \forall u, v \in U, \; \forall \alpha \in [0,1]. \tag{1}$$

The function J(u) is strictly convex on U if equality in (1) is possible only for α = 0, α = 1 or u = v. The function J(u) is concave (strictly concave) on U if the function −J(u) is convex (strictly convex) on U. The function J(u) is strongly convex on U if

$$J(\alpha u + (1-\alpha)v) \le \alpha J(u) + (1-\alpha) J(v) - \alpha(1-\alpha)\,\kappa\, |u - v|^2, \quad \kappa > 0, \; \forall u, v \in U, \; \forall \alpha \in [0,1]. \tag{2}$$

Theorem 1. Let U be a convex set in E^n. Then for the function J(u) ∈ C¹(U) to be convex on U it is necessary and sufficient that one of the following inequalities holds:

$$J(u) - J(v) \ge \langle J'(v),\, u - v \rangle, \quad \forall u, v \in U, \tag{3}$$

or

$$\langle J'(u) - J'(v),\, u - v \rangle \ge 0, \quad \forall u, v \in U. \tag{4}$$

If int U ≠ ∅ and J(u) ∈ C²(U), then for the convexity of J(u) on U it is necessary and sufficient that

$$\langle J''(u)\xi, \xi \rangle \ge 0, \quad \forall \xi \in E^n, \; \forall u \in U.$$

Theorem 2. For the function J(u) ∈ C¹(U) to be strongly convex on the convex set U it is necessary and sufficient that one of the following two conditions holds:

1) $J(u) - J(v) \ge \langle J'(v),\, u - v \rangle + \kappa |u - v|^2, \quad \kappa > 0, \; \forall u, v \in U;$ (5)

2) $\langle J'(u) - J'(v),\, u - v \rangle \ge \mu |u - v|^2, \quad \mu = 2\kappa > 0, \; \forall u, v \in U.$ (6)

If int U ≠ ∅ and J(u) ∈ C²(U), then for the strong convexity of J(u) on U it is necessary and sufficient that

$$\langle J''(u)\xi, \xi \rangle \ge \mu |\xi|^2, \quad \mu = 2\kappa > 0, \; \forall \xi \in E^n, \; \forall u \in U.$$
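For a quadratic function the C² criteria above reduce to inspecting the constant Hessian; a small sketch using leading principal minors (Sylvester's criterion gives positive definiteness, hence strong convexity):

```python
def det(M):
    # Laplace expansion along the first row (adequate for tiny matrices)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * a * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j, a in enumerate(M[0]))

def leading_minors(H):
    return [det([row[:k] for row in H[:k]]) for k in range(1, len(H) + 1)]

def strongly_convex_quadratic(H):
    """True when all leading principal minors of the symmetric Hessian
    are positive, i.e. <J''(u)xi, xi> >= mu|xi|^2 with some mu > 0."""
    return all(m > 0 for m in leading_minors(H))
```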

Solve the following problems on the basis of definitions (1), (2) and relations (3)-(6):

1. Prove that the intersection of any number of convex sets is convex. Is this statement true for unions of sets? Prove that the closure of a convex set is convex.
2. a) Is a set U ⊂ E^n convex if for any points u, v ∈ U the point (u + v)/2 ∈ U? b) Is a closed set U convex (under the previous condition)?
3. Let u₀ ∈ E^n and let r ∈ E¹, r > 0. Is the set V = {u ∈ E^n : |u − u₀| ≤ r} \ {u₀} (the ball without its centre) convex?
4. Show the convexity of the following functions on E¹:
a) J(u) = e^u; b) J(u) = |u|;
c) J(u) = a(u − c)², a > 0, for u ≥ c, and J(u) = b(u − c)², b > 0, for u < c;
d) J(u) = 0 for u ≤ c, and J(u) = a(u − c), a > 0, for u > c.
5. Prove that the function J(u) = 1/u is strictly convex for u > 0 and strictly concave for u < 0 [using only the definitions of strict convexity (concavity)].
6. Let the function φ(ξ) be continuous and φ(ξ) ≥ 0 for all ξ. Prove that the function

$$J(u) = \int_{u}^{\infty} (\xi - u)\,\varphi(\xi)\, d\xi$$

is convex on E¹.
7. Let

$$J(u_1, u_2) = \frac{1}{u_2} \int_{u_1}^{u_1 + u_2} (\xi - u_1)\,\varphi(\xi)\, d\xi, \quad u_2 > 0, \; \varphi(\xi) \ge 0, \; -\infty < \xi < \infty.$$

Is J(u₁,u₂) a convex function on the set U = {u = (u₁,u₂) : u₁ ∈ E¹, u₂ > 0}?
8. Prove that J(u) = Σ_{i=1}^n α_i J_i(u) is a convex function on a convex set U of E^n if the negative coefficients α_i correspond to concave functions J_i(u) and the positive α_i to convex functions J_i(u).
9. Show that if J(u) is convex and, for every b ∈ E¹, the set of values of u satisfying the condition J(u) ≥ b is convex, then J(u) is necessarily a linear function.
10. The function J(u) is convex on U if and only if the function g(α) = J(u + α(v − u)) of the single variable α, 0 ≤ α ≤ 1, is convex for any u, v ∈ U. If J(u) is strictly convex on U, then g(α), 0 ≤ α ≤ 1, is strictly convex. Prove this.
11. A function J(u) defined on a convex set U of E^n is called quasiconvex if J(αu + (1−α)v) ≤ max{J(u), J(v)} for all u, v ∈ U and all α, 0 ≤ α ≤ 1. Is every convex function quasiconvex, and conversely? Prove that J(u) is quasiconvex on U if and only if the set M(v) = {u ∈ U : J(u) ≤ J(v)} is convex for every v ∈ U.
12. Check in each of the following exercises whether the function J(u) is convex (concave) on the given set U, or indicate points of U in whose neighborhood J(u) is neither convex nor concave:
a) J(u) = u₁⁶ + u₂² + u₃² + u₄² − 10u₁ − 5u₂ − 3u₄ + 20, U = E⁴;
b) J(u) = e^{2u₁u₂}, u ∈ E²;

c) J(u) = u₁⁵ + 0.5u₃² − 7u₁ + u₃ − 6, U = {u = (u₁,u₂,u₃) ∈ E³ : u_i ≥ 0, i = 1,2,3};
d) J(u) = 6u₁² + u₂³ + 6u₃² − 12u₁ − 8u₂ + 7, U = {u ∈ E³ : u ≥ 0}.
13. Let J(u) = |Au − b|² = ⟨Au − b, Au − b⟩, where A is a matrix of order m × n and b ∈ E^m is a vector. Prove that J(u) is convex on E^n; if A*A is nondegenerate, then J(u) is strictly convex on E^n. Find J′(u), J″(u).
14. Let J(u) = 0.5⟨Au, u⟩ + ⟨b, u⟩, where A = A* is a matrix of order n × n and b ∈ E^n is a vector. Prove that: 1) if A = A* ≥ 0, then J(u) is convex on any convex set U of E^n; 2) if A = A* > 0, then J(u) is strongly convex on U, with κ = μ₁/2, where μ₁ > 0 is the least eigenvalue of the matrix A.
15. Prove that a set A is convex if and only if for any numbers α ≥ 0, β ≥ 0 the equality (α + β)A = αA + βA holds.
16. Prove that if A₁, …, A_m are convex sets, then

$$\mathrm{Co}\Bigl(\bigcup_{i=1}^{m} A_i\Bigr) = \Bigl\{u \in E^n : u = \sum_{i=1}^{m} \alpha_i u_i,\; u_i \in A_i,\; \alpha_i \ge 0,\; \sum_{i=1}^{m} \alpha_i = 1\Bigr\}.$$

17. Let J(u) be a continuous function on a convex set U such that for any u, v ∈ U the inequality J((u + v)/2) ≤ [J(u) + J(v)]/2 holds. Prove that J(u) is convex on U.
18. Determine for what values of the parameter β the following functions are convex: a) J(u₁,u₂) = βu₁²u₂² + (u₁ + u₂)⁴; b) J(u) = βu₁²u₂² − (u₁² + u₂²)².
19. Find on the plane of parameters (α, β) the regions where the function J(u₁,u₂) = u₁^α u₂^β, u₁ ≥ 0, u₂ ≥ 0, is convex (strictly convex) and concave (strictly concave).
20. Find on the plane E² the regions where the function J(u₁,u₂) = e^{u₁u₂} is convex, and the regions in which it is concave.
21. Let J_i(u), i = 1,…,m, be convex, nonnegative, monotonically increasing functions on E¹. Prove that the function J(u) = Σ_{i=1}^m J_i(u) possesses the same properties.
22. Let J(u) be a function defined and convex on a convex set U. Prove that: a) the set Γ = {λ ∈ E^n : sup_{u∈U}[⟨λ, u⟩ − J(u)] < ∞} is nonempty and convex; b) the function J*(λ) = sup_{u∈U}[⟨λ, u⟩ − J(u)] is called the conjugate of J(u). Is J*(λ) convex?
23. Let the function J(u) be defined and convex on a convex set U of E^n. Prove that for any (including boundary) points v ∈ U the inequality lim inf_{u→v} J(u) ≥ J(v) holds (the property of lower semicontinuity).
24. Let the function J(u) be defined and convex on a convex, closed, bounded set U. Is it possible to assert that: a) J(u) is bounded above and below on U; b) J(u) attains its upper and lower bounds on the set U?
25. Prove that if J(u) is a convex (strictly convex) function on E^n, A is a nondegenerate matrix of order n × n and b ∈ E^n, then J(Au + b) is also convex (strictly convex).

P.1.3. Convex programming. The KUHN-TUCKER theorem

We consider the following problem of convex programming:

$$J(u) \to \inf \tag{1}$$

subject to the condition

$$u \in U = \{u \in E^n \mid u \in U_0;\; g_i(u) \le 0,\; i = 1,\dots,m;\; g_i(u) = \langle a_i, u \rangle - b_i \le 0,\; i = m+1,\dots,p;\; g_i(u) = \langle a_i, u \rangle - b_i = 0,\; i = p+1,\dots,s\}, \tag{2}$$

where J(u), g_i(u), i = 1,…,m, are convex functions defined on a convex set U₀ of E^n; a_i ∈ E^n, i = m+1,…,s, are vectors; b_i, i = m+1,…,s, are numbers.
Theorem 1 (sufficient conditions for the existence of a saddle point). Let J(u), g_i(u), i = 1,…,m, be convex functions defined on the convex set U₀, let the set U* ≠ ∅, and let there exist a point ū ∈ ri U₀ ∩ U such that g_i(ū) < 0, i = 1,…,m. Then for each point u* ∈ U* there exist Lagrange multipliers λ* = (λ₁*, …, λ_s*) ∈ Λ₀ = {λ ∈ E^s : λ₁ ≥ 0, …, λ_p ≥ 0} such that the pair (u*, λ*) ∈ U₀ × Λ₀ forms a saddle point of the Lagrange function

$$L(u, \lambda) = J(u) + \sum_{i=1}^{s} \lambda_i g_i(u), \quad u \in U_0, \; \lambda \in \Lambda_0, \tag{3}$$

on the set U₀ × Λ₀, i.e. the inequalities

$$L(u^*, \lambda) \le L(u^*, \lambda^*) \le L(u, \lambda^*), \quad \forall u \in U_0, \; \forall \lambda \in \Lambda_0, \tag{4}$$

hold.

Lemma. For the pair (u*, λ*) ∈ U₀ × Λ₀ to be a saddle point of the Lagrange function (3) it is necessary and sufficient that the following conditions hold:

$$L(u^*, \lambda^*) \le L(u, \lambda^*), \; \forall u \in U_0; \qquad \lambda_i^* g_i(u^*) = 0, \; i = 1,\dots,s; \quad u^* \in U^* \subset U, \; \lambda^* \in \Lambda_0, \tag{5}$$

i.e. inequalities (4) are equivalent to relations (5).


Theorem 2 (sufficient optimality conditions). If the pair (u*, λ*) ∈ U₀ × Λ₀ is a saddle point of the Lagrange function (3), then the vector u* ∈ U* is a solution of problem (1), (2).

The solution algorithm for the convex programming problem, based on Theorems 1, 2 and the Lemma, is given in Lecture 10. We illustrate the rule for solving a convex programming problem with the following example.

Example. Maximize the function

$$-8u_1^2 - 10u_2^2 + 12u_1u_2 - 50u_1 + 80u_2 \to \sup \tag{6}$$

subject to the conditions

$$u_1 + u_2 \le 1, \quad 8u_1^2 + u_2^2 \le 2, \quad u_1 \ge 0, \; u_2 \ge 0. \tag{7}$$

Solution. Problem (6), (7) is equivalent to the problem

$$J(u) = 8u_1^2 + 10u_2^2 - 12u_1u_2 + 50u_1 - 80u_2 \to \inf \tag{8}$$

subject to the conditions

$$u_1 + u_2 \le 1, \quad 8u_1^2 + u_2^2 \le 2, \quad u_1 \ge 0, \; u_2 \ge 0. \tag{9}$$

Problem (8), (9) can be written as

$$J(u) = 8u_1^2 + 10u_2^2 - 12u_1u_2 + 50u_1 - 80u_2 \to \inf, \tag{10}$$

$$u \in U = \{u = (u_1, u_2) \in E^2 \mid u \in U_0,\; g_1(u) = 8u_1^2 + u_2^2 - 2 \le 0,\; g_2(u) = u_1 + u_2 - 1 \le 0\},$$
$$U_0 = \{u = (u_1, u_2) \in E^2 \mid u_1 \ge 0, \; u_2 \ge 0\}. \tag{11}$$

We note that the set U₀ ⊂ E² is convex and the function J(u) is convex on U₀, since the symmetric matrix

$$J''(u) = \begin{pmatrix} 16 & -12 \\ -12 & 20 \end{pmatrix} \ge 0, \qquad \langle J''(u)\xi, \xi \rangle \ge 0, \; \forall \xi \in E^2, \; \forall u \in U_0.$$

It is easy to check that the functions g₁(u), g₂(u) are convex on U₀. Thus problem (10), (11) is a convex programming problem. Introducing the sets

$$U_1 = \{u \in E^2 \mid g_1(u) \le 0\}, \quad U_2 = \{u \in E^2 \mid g_2(u) \le 0\},$$

we can write problem (10), (11) as

$$J(u) \to \inf, \quad u \in U = U_0 \cap U_1 \cap U_2.$$

We note that the sets U₀, U₁, U₂ are convex; consequently, the set U is convex. Further, we solve problem (10), (11) by the algorithm given in Lecture 10.

1⁰. We verify that the set

$$U^* = \{u^* \in U \mid J(u^*) = \min_{u \in U} J(u)\} \ne \emptyset.$$

In fact, the sets U₀, U₁, U₂ are convex and closed, and the set U₁ is bounded; consequently, the set U is bounded and closed, i.e. U is a compact set in E². As follows from relation (10), the function J(u) is continuous (lower semicontinuous) on the compact set U. Then, according to Theorem 1 (Lecture 2), the set U* ≠ ∅. Now problem (10), (11) can be written as J(u) → min, u ∈ U ⊂ E².

2⁰. We show that Slater's condition holds. In fact, the point ū = (0, 1) ∈ ri U₀ ∩ U, and g₁(ū) = −1 < 0. Consequently, by Theorem 1 the Lagrange function for problem (10), (11)
$$L(u, \lambda) = 8u_1^2 + 10u_2^2 - 12u_1u_2 + 50u_1 - 80u_2 + \lambda_1(8u_1^2 + u_2^2 - 2) + \lambda_2(u_1 + u_2 - 1), \tag{12}$$

$$u = (u_1, u_2) \in U_0, \quad \lambda = (\lambda_1, \lambda_2) \in \Lambda_0 = \{\lambda \in E^2 \mid \lambda_1 \ge 0\},$$

has a saddle point.

3⁰. The Lagrange function has the form (12); its domain of definition is U₀ × Λ₀, where λ₁ ≥ 0, while λ₂ can be either positive or negative.

4⁰. We determine the saddle point (u*, λ*) ∈ U₀ × Λ₀ of the Lagrange function (12) on the basis of (5). From the inequality L(u*, λ*) ≤ L(u, λ*), ∀u ∈ U₀, it follows that the convex function L(u, λ*), u ∈ U₀ (λ* a fixed vector), attains its least value on the convex set U₀ at the point u*. Then, according to the optimality criterion (Lecture 5, Theorem 4), it is necessary and sufficient that the inequality

$$\langle L_u(u^*, \lambda^*),\, u - u^* \rangle \ge 0, \quad \forall u \in U_0, \tag{13}$$

holds, where

$$L_u(u^*, \lambda^*) = \begin{pmatrix} 16u_1^* - 12u_2^* + 50 + 16\lambda_1^* u_1^* + \lambda_2^* \\ 20u_2^* - 12u_1^* - 80 + 2\lambda_1^* u_2^* + \lambda_2^* \end{pmatrix}, \quad u^* = (u_1^*, u_2^*), \; \lambda^* = (\lambda_1^*, \lambda_2^*).$$

The conditions λ_i* g_i(u*) = 0, i = 1, 2 (the complementary slackness conditions), are written as

$$\lambda_1^*\bigl(8u_1^{*2} + u_2^{*2} - 2\bigr) = 0, \quad \lambda_2^*\bigl(u_1^* + u_2^* - 1\bigr) = 0, \quad \lambda_1^* \ge 0. \tag{14}$$

a) We suppose that u* ∈ int U₀. In this case from inequality (13) we have L_u(u*, λ*) = 0. The saddle point is determined from the conditions

$$16u_1^* - 12u_2^* + 50 + 16\lambda_1^* u_1^* + \lambda_2^* = 0, \qquad 20u_2^* - 12u_1^* - 80 + 2\lambda_1^* u_2^* + \lambda_2^* = 0,$$

$$\lambda_1^*\bigl(8u_1^{*2} + u_2^{*2} - 2\bigr) = 0, \quad u_1^* + u_2^* - 1 = 0, \quad \lambda_2^* \ne 0, \; \lambda_1^* \ge 0. \tag{15}$$

We note that from the second equation of (14) and u* ∈ U it follows that if λ₂* ≠ 0 then u₁* + u₂* − 1 = 0. We solve the system of algebraic equations (15). Several cases are possible:
1) 1*  0, *2  0. In this case the point u* is defined from solution of
2 2
the system 8u1*  u 2*  2, u1*  u 2*  1 . Thence we have u1*  0,47,
u 2*  0,53. Then value 1* is solution of the equation 126,2  6,461*  0 .
126,2
Thence we have 1    0.
*

6,46
It is impossible, since 1  0.
*

2) 1  0, 2  0. The point u* is defined from equations


* *

u1*  u 2*  1 , 16u1*  12u 2*  50  *2  0 , 20u 2*  12u1*  80  *2  0 .


Thence we have u 2  158 / 60, u1  98 / 60  0. The point u*  U 0 .
* *

So the case is excluded. Thereby, conditions (13), (14) are not executed in
the internal points of the ensemble U 0 . It remains to consider the border
points of the ensemble U0 .
b) Since the point u* ∈ U, it remains to check conditions (13), (14) at the boundary points (1,0), (0,1) of the set U₀. We note that the boundary point (1,0) ∉ U, since at this point the restriction 8u₁² + u₂² − 2 ≤ 0 is not satisfied. Then conditions (13), (14) should be checked at the single point u* = (0,1) ∈ U₀. At this point g₁(u*) = −1 < 0; consequently, λ₁* = 0 and equalities (14) hold. Since the derivative L_u(u*, λ*) = (38 + λ₂*, −60 + λ₂*), inequality (13) is written as (38 + λ₂*)u₁ + (−60 + λ₂*)(u₂ − 1) ≥ 0 for all u₁ ≥ 0, u₂ ≥ 0. We choose λ₂* = 60; then the inequality takes the form 98u₁ ≥ 0, u₁ ≥ 0, u₂ ≥ 0. Finally, the point u* = (u₁* = 0, u₂* = 1) is the saddle point of the Lagrange function (12).

5⁰. Problem (10), (11) has the solution u₁* = 0, u₂* = 1, J(u*) = −70; accordingly, the maximum value in the original problem (6), (7) equals 70.
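A quick numerical cross-check of the answer u* = (0, 1): sample the feasible set on a grid (with the signs of (8) read as 8u₁² + 10u₂² − 12u₁u₂ + 50u₁ − 80u₂, an assumption about the garbled print) and confirm that no feasible point improves on J = −70.

```python
def J(u1, u2):
    return 8*u1**2 + 10*u2**2 - 12*u1*u2 + 50*u1 - 80*u2

def feasible(u1, u2):
    # constraints (7): u1 + u2 <= 1, 8u1^2 + u2^2 <= 2, u1, u2 >= 0
    return u1 >= 0 and u2 >= 0 and u1 + u2 <= 1 and 8*u1**2 + u2**2 <= 2

def grid_min(steps=100):
    best = None
    for i in range(steps + 1):
        for k in range(steps + 1):
            u1, u2 = i / steps, k / steps
            if feasible(u1, u2):
                v = J(u1, u2)
                if best is None or v < best[0]:
                    best = (v, u1, u2)
    return best
```

On this grid `grid_min()` returns the triple (−70.0, 0.0, 1.0).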
Solve the following problems:

1. J(u) = u₁² + u₂² + 6 → sup; u₁ + u₂ ≤ 5, u₁ ≥ 0.
2. J(u) = 2u₁² + 2u₁ − 4u₂ + 3u₃ + 8 → sup; 8u₁ + 3u₂ − 3u₃ = 40, −2u₁ + u₂ − u₃ = 3, u₂ ≥ 0.
3. J(u) = −5u₁² − u₂² + 4u₁u₂ + 5u₁ + 4u₂ − 3 → sup; u₁² + u₂² − 2u₁ − 2u₂ ≤ 4, u₁ ≥ 0, u₂ ≥ 0.
4. J(u) = −3u₂² + 11u₁ + 3u₂ + u₃ − 27 → sup; u₁ + 7u₂ + 3u₃ = 7, 5u₁ − 2u₂ + u₃ = 2.
5. J(u) = u₁ + u₂ + u₃ + 30u₄ + 8u₅ + 56u₆ → sup; 2u₁ + 3u₂ + u₃ + u₄ + u₅ + 10u₆ ≤ 20, u₁ + 2u₂ + 3u₃ + u₄ + u₅ + 7u₆ ≤ 11, u₁ ≥ 0, u₂ ≥ 0, u₃ ≥ 0.
6. The problem J(u) = ⟨c, u⟩ − 0.5⟨Du, u⟩ → sup subject to the conditions Au ≤ b, u ≥ 0, where D = D* ≥ 0, is called the quadratic programming problem. Determine the solutions of this problem.
7. Determine the least distance from the origin of coordinates to the set u₁ + u₂ ≥ 4, 2u₁ + u₂ ≥ 5.
8. Formulate the Kuhn-Tucker conditions and the dual problem for the following problem: J(u) = 4u₁² + 4u₁u₂ + 2u₂² − 3u₁ + e^{u₁+2u₂} → inf subject to the conditions u₁ + 2u₂ ≥ 0, u₁² + u₂² ≤ 10, u₂ ≤ 1/2, u₂ ≥ 0.
9. Find J(u) = pu₁² + qu₁u₂ → sup subject to the conditions u₁ + ru₂² ≤ 1, u₁ ≥ 0, u₂ ≥ 0, where p, q, r are parameters. For what values of the parameters p, q, r does a solution exist?
10. Solve the geometric problem J(u) = −(u₁ − 3)² − u₂² → sup subject to the conditions −u₁ + (u₂ − 1)² ≤ 0, u₁ ≥ 0, u₂ ≥ 0.
Are the Kuhn-Tucker conditions satisfied at the optimum point?

P.1.4. NONLINEAR PROGRAMMING

We consider the following problem of nonlinear programming:

$$J(u) \to \inf, \tag{1}$$

$$u \in U, \tag{2}$$

$$U = \{u \in E^n \mid u \in U_0,\; g_i(u) \le 0,\; i = 1,\dots,m;\; g_i(u) = 0,\; i = m+1,\dots,s\},$$

where the functions J(u) ∈ C¹(U₀¹), g_i(u) ∈ C¹(U₀¹), i = 1,…,s, and U₀¹ is an open set containing the convex set U₀ of E^n; in particular, U₀¹ = E^n.

Theorem 1 (necessary optimality condition). If the functions J(u) ∈ C¹(E^n), g_i(u) ∈ C¹(E^n), i = 1,…,s, int U₀ ≠ ∅, U₀ is a convex set, and the set U* ≠ ∅, then for each point u* ∈ U* there necessarily exist Lagrange multipliers λ̄* = (λ₀*, λ₁*, …, λ_s*) ∈ Λ̄₀ = {λ̄ ∈ E^{s+1} : λ₀ ≥ 0, λ₁ ≥ 0, …, λ_m ≥ 0} such that the following conditions hold:

$$|\bar\lambda^*| \ne 0, \quad \lambda_0^* \ge 0, \; \lambda_1^* \ge 0, \dots, \lambda_m^* \ge 0, \tag{3}$$

$$\langle L_u(u^*, \bar\lambda^*),\, u - u^* \rangle = \Bigl\langle \lambda_0^* J'(u^*) + \sum_{i=1}^{s} \lambda_i^* g_i'(u^*),\, u - u^* \Bigr\rangle \ge 0, \quad \forall u \in U_0, \tag{4}$$

$$\lambda_i^* g_i(u^*) = 0, \; i = 1,\dots,s, \quad u^* \in U. \tag{5}$$

The Lagrange function for problem (1), (2) has the form

$$L(u, \bar\lambda) = \lambda_0 J(u) + \sum_{i=1}^{s} \lambda_i g_i(u), \quad u \in U_0, \quad \bar\lambda = (\lambda_0, \lambda_1, \dots, \lambda_s) \in \bar\Lambda_0 = \{\bar\lambda \in E^{s+1} \mid \lambda_0 \ge 0,\, \lambda_1 \ge 0, \dots, \lambda_m \ge 0\}.$$
When solving problem (1), (2) it is necessary to consider separately two cases: 1) λ₀* = 0 (degenerate problem); 2) λ₀* > 0 (nondegenerate problem). In the latter case it is possible to take λ₀* = 1.

Suppose the points u*, λ₀* ≥ 0, λ* = (λ₁* ≥ 0, …, λ_m* ≥ 0, λ*_{m+1}, …, λ_s*) are found from conditions (3), (4), (5). The point u* ∈ U is called normal if the vectors {g_i′(u*), i ∈ I, g′_{m+1}(u*), …, g_s′(u*)} are linearly independent, where I = {i = 1,…,m | g_i(u*) = 0}. If u* ∈ U is a normal point, then problem (1), (2) is nondegenerate, i.e. λ₀* = 1. In the case U₀ = E^n the following theorem holds.

Theorem 2. Let the functions J(u), g_i(u), i = 1,…,s, be defined, continuous and twice continuously differentiable in a neighborhood of the normal point u* ∈ U. For the normal point u* ∈ U to be a point of local minimum of J(u) on the set U, i.e. J(u*) ≤ J(u) for all u ∈ D(u*, ε) ∩ U, it is sufficient that the quadratic form ⟨∂²L(u*, λ*)/∂u² · y, y⟩ be positive definite on the hyperplane

$$\Bigl\langle \frac{\partial g_i(u^*)}{\partial u},\, y \Bigr\rangle = 0, \; i \in I; \qquad \Bigl\langle \frac{\partial g_i(u^*)}{\partial u},\, y \Bigr\rangle = 0, \; i = m+1,\dots,s.$$
We consider the solution of the following example on the basis of Theorems 1, 2 and the solution algorithm of nonlinear programming (Lecture 13).

Example 1. Find the solution of the problem

3 + 6u₁ + 2u₂ − 2u₁u₂ − 2u₂² → sup  (6)

under the conditions

3u₁ + 4u₂ ≤ 8, −u₁ + 4u₂² ≤ 2, u₁ ≥ 0, u₂ ≥ 0.  (7)
Solution. Problem (6), (7) is equivalent to the problem

J(u) = 2u₂² + 2u₁u₂ − 6u₁ − 2u₂ − 3 → inf,  (8)

u = (u₁, u₂) ∈ U = {u ∈ E² : u ∈ U₀, g₁(u) ≤ 0, g₂(u) ≤ 0},  (9)

where U₀ = {u ∈ E² : u₁ ≥ 0, u₂ ≥ 0}; g₁(u) = 3u₁ + 4u₂ − 8; g₂(u) = −u₁ + 4u₂² − 2.

The function J(u) is not convex on the set U₀; consequently, we have a nonlinear programming problem.

1°. We show that the set U* ≠ ∅. Let U₁ = {u ∈ E² : g₁(u) ≤ 0}, U₂ = {u ∈ E² : g₂(u) ≤ 0}. Then U = U₀ ∩ U₁ ∩ U₂ is a closed bounded set; consequently, it is compact. The function J(u) is continuous on the set U. Hence, by the Weierstrass theorem, U* ≠ ∅.
2°. The generalized Lagrange function for problem (8), (9) has the form

L(u₁, u₂, λ₀, λ₁, λ₂) = λ₀(2u₂² + 2u₁u₂ − 6u₁ − 2u₂ − 3) + λ₁(3u₁ + 4u₂ − 8) + λ₂(−u₁ + 4u₂² − 2), u = (u₁, u₂) ∈ U₀,

λ = (λ₀, λ₁, λ₂) ∈ Λ₀ = {λ ∈ E³ : λ₀ ≥ 0, λ₁ ≥ 0, λ₂ ≥ 0}.

3°. According to conditions (3) – (5) the quadruple (u*, λ*) = (u*, λ₀*, λ₁*, λ₂*) ∈ U₀ × Λ₀ is defined from the relations

|λ*| ≠ 0, λ₀* ≥ 0, λ₁* ≥ 0, λ₂* ≥ 0,  (10)

⟨L_u(u*, λ*), u − u*⟩ ≥ 0, ∀u ∈ U₀,  (11)

λ₁* g₁(u*) = 0, λ₂* g₂(u*) = 0, u* ∈ U,  (12)

where the derivative

L_u(u*, λ*) = ( λ₀*(2u₂* − 6) + 3λ₁* − λ₂*,  λ₀*(4u₂* + 2u₁* − 2) + 4λ₁* + 8λ₂*u₂* ).

a) Suppose that u* ∈ int U₀. Then from (11) it follows that L_u(u*, λ*) = 0; consequently, the pair (u*, λ*) is defined from the solution of the following algebraic equations:

λ₀*(2u₂* − 6) + 3λ₁* − λ₂* = 0, λ₀*(4u₂* + 2u₁* − 2) + 4λ₁* + 8λ₂*u₂* = 0,

λ₁*(3u₁* + 4u₂* − 8) = 0, λ₂*(−u₁* + 4u₂*² − 2) = 0,  (13)

where u₁* > 0, u₂* > 0, λ₀* ≥ 0, λ₁* ≥ 0, λ₂* ≥ 0. We consider the case λ₀* = 0. Then λ₂* = 3λ₁*, 4λ₁*(1 + 6u₂*) = 0. If λ₁* = 0, then λ₂* = 0; consequently, condition (10) is violated, since λ₀* = 0, λ₁* = 0, λ₂* = 0. So only λ₁* > 0 is possible. Then u₂* = −1/6. This is impossible, since u₂* > 0. Hence it follows that λ₀* > 0, i.e. problem (8), (9) is nondegenerate, so it is possible to take λ₀* = 1, and conditions (13) are written as:

2u₂* − 6 + 3λ₁* − λ₂* = 0, 4u₂* + 2u₁* − 2 + 4λ₁* + 8λ₂*u₂* = 0,

λ₁*(3u₁* + 4u₂* − 8) = 0, λ₂*(−u₁* + 4u₂*² − 2) = 0.  (14)

Now we consider the different possibilities:

1) λ₁* > 0, λ₂* > 0. In this case the point u* = (u₁*, u₂*) is defined from the system 3u₁* + 4u₂* − 8 = 0, −u₁* + 4u₂*² − 2 = 0, u₁* > 0, u₂* > 0 [see formula (14)]. Hence we have u₁* = 1.43, u₂* = 0.926, J(u₁*, u₂*) = −9.07. However λ₁* = −1.216, λ₂* = −0.5 < 0, so the point u* = (1.43; 0.926) cannot be a solution of problem (8), (9).

2) λ₁* = 0, λ₂* > 0. In this case from (14) we have 2u₂* − 6 − λ₂* = 0, 4u₂* + 2u₁* − 2 + 8λ₂*u₂* = 0, −u₁* + 4u₂*² − 2 = 0. Hence we have u₁* = 117, u₂* = 5.454, λ₂* = 4.908 > 0. However the point u* = (117; 5.454) ∉ U, since the inequality 3u₁* + 4u₂* ≤ 8 is not satisfied.

3) λ₁* > 0, λ₂* = 0. Equations (14) are written as: 2u₂* − 6 + 3λ₁* = 0, 4u₂* + 2u₁* − 2 + 4λ₁* = 0, 3u₁* + 4u₂* − 8 = 0. Hence we have u₁* = 28/9, u₂* = −1/3 < 0, λ₁* = 16/9 > 0. The point u* = (28/9, −1/3) ∉ U.

4) λ₁* = 0, λ₂* = 0. In this case from equations (14) we have 2u₂* − 6 = 0, 4u₂* + 2u₁* − 2 = 0. Hence u₂* = 3, u₁* = −5. However the point u* = (−5; 3) ∉ U, since the inequality −u₁ + 4u₂² ≤ 2 is not satisfied. Thereby, the point u* ∉ int U₀.
b) Suppose now that the point u* belongs to the boundary of U₀. Here the following cases are possible:

1) u = (0, u₂), u₂ ≥ 0; 2) u = (u₁, 0), u₁ ≥ 0.

For boundary points of the first type the restrictions become g₁ = 4u₂ − 8 ≤ 0, g₂ = 4u₂² − 2 ≤ 0; consequently, 0 ≤ u₂ ≤ 1/√2. Then u* = (0, u₂* = 1/√2), J(u*) = −3.4. For boundary points of the second type the restrictions become g₁ = 3u₁ − 8 ≤ 0, g₂ = −u₁ − 2 ≤ 0. Then u* = (8/3, 0), and the value J(u*) = −19.

Finally, the solution of problem (8), (9) is: u* = (u₁* = 8/3, u₂* = 0), J(u*) = −19.
Example 2. It is required, from a wire of given length l, to make an equilateral triangle and a square whose total area is maximum.

Solution. Let u₁ and u₂ be the total lengths of the sides of the triangle and of the square, respectively. Then u₁ + u₂ = l, the side of the triangle has length u₁/3, the side of the square u₂/4, and the total area is

S(u₁, u₂) = (√3/36)u₁² + (1/16)u₂².

Now the optimization problem can be formulated as follows: minimize the function

J(u₁, u₂) = −(√3/36)u₁² − (1/16)u₂² → inf  (15)

under the conditions

u₁ + u₂ = l, u₁ ≥ 0, u₂ ≥ 0.  (16)

Introducing the notation g₁(u₁, u₂) = u₁ + u₂ − l, g₂(u₁, u₂) = −u₁, g₃(u₁, u₂) = −u₂, we write problem (15), (16) as

J(u₁, u₂) = −(√3/36)u₁² − (1/16)u₂² → inf  (17)

under the conditions

u ∈ U = {u ∈ E² : g₁(u) = u₁ + u₂ − l = 0, g₂(u₁, u₂) = −u₁ ≤ 0, g₃(u₁, u₂) = −u₂ ≤ 0},  (18)

where one may take U₀ = E². Unlike the previous example, the conditions u₁ ≥ 0, u₂ ≥ 0 are included in the restrictions g₂(u₁, u₂), g₃(u₁, u₂). Such an approach allows using Theorem 2 to study the properties of the function J(u₁, u₂) in a neighborhood of the point u* = (u₁*, u₂*) ∈ U.

Problem (17), (18) is a nonlinear programming problem, since the function J(u₁, u₂) is not convex on the set U₀ = E².

1°. The set U is bounded and closed; consequently, it is compact. The function J(u₁, u₂) ∈ C²(E²), so the set U* ≠ ∅.

2°. The generalized Lagrange function for problem (17), (18) is written as

L(u, λ) = L(u₁, u₂, λ₀, λ₁, λ₂, λ₃) = λ₀(−(√3/36)u₁² − (1/16)u₂²) + λ₁(u₁ + u₂ − l) + λ₂(−u₁) + λ₃(−u₂), u = (u₁, u₂) ∈ E²,

λ = (λ₀, λ₁, λ₂, λ₃) ∈ Λ₀ = {λ ∈ E⁴ : λ₀ ≥ 0, λ₂ ≥ 0, λ₃ ≥ 0}.

3°. Since the set U₀ = E², conditions (3) – (5) are written as:

L_u(u*, λ*) = ( −(√3/18)u₁*λ₀* + λ₁* − λ₂*,  −(1/8)u₂*λ₀* + λ₁* − λ₃* ) = 0,  (19)

λ₁*(u₁* + u₂* − l) = 0, λ₂*(−u₁*) = 0, λ₃*(−u₂*) = 0,  (20)

|λ*| ≠ 0, λ₀* ≥ 0, λ₂* ≥ 0, λ₃* ≥ 0.  (21)

a) We consider the case λ₀* = 0. In this case, as follows from expression (19), λ₁* = λ₂* = λ₃*. If λ₂* > 0, then λ₃* > 0; consequently, u₁* = 0, u₂* = 0 [see formula (20)]. This is impossible, since u₁* + u₂* = l. It means λ₁* = λ₂* = λ₃* = 0. These equalities contradict condition (21). Hence the source problem (17), (18) is nondegenerate. Consequently, λ₀* = 1.

b) Since problem (17), (18) is nondegenerate, conditions (19) – (21) are written as:

−(√3/18)u₁* + λ₁* − λ₂* = 0, −(1/8)u₂* + λ₁* − λ₃* = 0, u₁* + u₂* − l = 0,

λ₁* ≥ 0, λ₂*u₁* = 0, λ₃*u₂* = 0, λ₂* ≥ 0, λ₃* ≥ 0.  (22)

We consider the different cases:

1. λ₂* = 0, λ₃* = 0. Then u₁* = 9l/(9 + 4√3), u₂* = 4l√3/(9 + 4√3), λ₁* = l√3/(18 + 8√3).

2. λ₂* > 0, λ₃* = 0. In this case we have u₁* = 0, u₂* = l, λ₁* = l/8, λ₂* = l/8.

3. λ₂* = 0, λ₃* > 0. Then from the system of equations (22) we get u₁* = l, u₂* = 0, λ₁* = √3·l/18, λ₃* = √3·l/18. The case λ₂* > 0, λ₃* > 0 is excluded, since then u₁* = 0, u₂* = 0 and the condition u₁* + u₂* = l is not satisfied.

4°. The quadratic form ⟨y, L_uu(u*, λ*) y⟩ = −(√3/18)y₁² − (1/8)y₂².

The hyperplane equations for cases 1 – 3 are written accordingly:

1) (∂g₁(u*)/∂u₁)y₁ + (∂g₁(u*)/∂u₂)y₂ = y₁ + y₂ = 0. Hence y₁ = −y₂. Then ⟨y, L_uu(u*, λ*) y⟩ = −((4√3 + 9)/72)y₁² < 0 for y₁ ≠ 0, i.e. the point (u₁* = 9l/(9 + 4√3), u₂* = 4l√3/(9 + 4√3)) is not a point of local minimum.

2) y₁ + y₂ = 0 (from g₁) and −y₁ = 0 (from g₂). Hence y₁ = 0, y₂ = 0; ⟨y, L_uu(u*, λ*) y⟩ = 0 only for y = 0. The sufficient conditions of local minimum degenerate.

3) y₁ + y₂ = 0 (from g₁) and −y₂ = 0 (from g₃). These equations have only the solution y₁ = 0, y₂ = 0. Again the sufficient optimality conditions do not give a definite answer. The solution of the problem is therefore found by comparing the values of the function J(u₁, u₂) in the last two cases. In the second case J(u₁*, u₂*) = −l²/16, while in the third case J(u₁*, u₂*) = −√3·l²/36. Since −l²/16 < −√3·l²/36, the solution of the problem is the point (u₁* = 0, u₂* = l), i.e. the whole wire goes into the square.
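Since the equality constraint reduces the problem to one variable, the comparison of the candidate points from conditions (22) can be reproduced by a direct computation (a sketch; l = 1 is taken for concreteness, since all areas scale as l²):

```python
import math

l = 1.0  # wire length
# Total area as a function of u1 alone, with u2 = l - u1
area = lambda u1: (math.sqrt(3)/36)*u1**2 + (l - u1)**2/16

# The three candidate points found from conditions (22)
candidates = [0.0, l, 9*l/(9 + 4*math.sqrt(3))]
best = max(candidates, key=area)
print(best, area(best))  # u1* = 0: the whole wire goes into the square
```

The interior stationary point gives the smallest total area, which is why the second-order test of Theorem 2 rejects it; the maximum area l²/16 is attained at u₁ = 0.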

To solve the following problems:

1. Find J(u₁, u₂, u₃) = u₁u₂u₃ → inf under the conditions:
a) u₁ + u₂ + u₃ − 3 = 0; b) u₁ + u₂ + u₃ − 8 = 0;
c) u₁u₂ + u₁u₃ + u₂u₃ = a, uᵢ > 0, i = 1, 2, 3.

2. Prove the inequality
(u₁ⁿ + u₂ⁿ)/2 ≥ ((u₁ + u₂)/2)ⁿ, n > 1, u₁ ≥ 0, u₂ ≥ 0.

3. Find the sides of the rectangle of maximum area inscribed in the circle u₁² + u₂² = R².

4. Find the shortest distance from the point (1, 0) to the ellipse 4u₁² + 9u₂² = 36.

5. Find the distance between the parabola u₂ = u₁² and the line u₁ − u₂ = 5.

6. Find the shortest distance from the ellipse 2u₁² + 3u₂² = 12 to the line u₁ + u₂ = 6.

7. Find J(u) = ⟨u, Au⟩ → sup, A = A*, under the condition ⟨u, u⟩ = 1. Show that if u* ∈ Eⁿ is a solution of the problem, then J(u*) is equal to the largest characteristic root of the matrix A.

8. Find the parameters of the cylindrical tank which, for a given surface area S, has the maximum volume.

9. a) J(u) = u₁ + u₂ + u₃ → inf, 1/u₁ + 1/u₂ + 1/u₃ = 1;
b) J(u) = u₁u₂u₃ → inf, u₁ + u₂ + u₃ = 6, u₁u₂ + u₁u₃ + u₂u₃ = 12;
c) J(u) = 1/u₁ + 1/u₂ → inf, 1/u₁² + 1/u₂² = 1;
d) J(u) = 2u₁ + 3u₂² + u₃² → inf, u₁ + u₂ + u₃ = 8, uᵢ ≥ 0, i = 1, 2, 3;
e) J(u) = u₁² + u₂² + u₃² → inf, u₁ + u₂ + u₃ = 12, uᵢ ≥ 0, i = 1, 2, 3;
f) J(u) = u₁u₂ + u₁u₃ + u₂u₃ → inf, u₁ + u₂ + u₃ = 4;
g) J(u) = u₁²u₂ + u₂²u₁ + u₁u₂u₃ → inf, u₁ + u₂ + u₃ = 15, uᵢ ≥ 0, i = 1, 2, 3;
l) J(u) = u₁² + 2u₁u₂ + u₃² → inf, u₁ + 2u₂ + u₃ = 1, 2u₁ + u₂ + u₃ = 5, uᵢ ≥ 0, i = 1, 2, 3.

10. Find the conditional extremum of the function J(u) = (u₁ − 2)² + (u₂ − 3)² under the condition u₁² + u₂² = 52.

P.1.5. LINEAR PROGRAMMING. SIMPLEX-METHOD

The general problem of linear programming (in particular, the basic problem) is reduced to the linear programming problem in canonical form of the following type:

J(u) = ⟨c, u⟩ → inf,  (1)

u ∈ U = {u ∈ Eⁿ : u ≥ 0, Au = b},  (2)

where A = (a¹, a², …, aᵐ, aᵐ⁺¹, …, aⁿ) is a matrix of order m × n; the vectors aⁱ ∈ Eᵐ, i = 1,…,n, are called the condition vectors, and b ∈ Eᵐ is called the restriction vector.
The simplex method is a general method for solving the linear programming problem in canonical form (1), (2). Since the general and the basic problems of linear programming are reduced to the form (1), (2), one may consider the simplex method to be a general method for solving linear programming problems.

The foundations of linear programming theory are stated in Lectures 15 – 17. We briefly recall the rule for solving a nondegenerate linear programming problem in canonical form.
1°. Build an initial extreme point u⁰ ∈ U of the set U. We note that if U* ≠ ∅, then the lower bound of the linear function (1) on U is attained at an extreme point of the set U; moreover, in the nondegenerate problem an extreme point has exactly m positive coordinates, i.e. u⁰ = (u₁⁰ > 0, …, u_m⁰ > 0, 0, …, 0), and the vectors a¹, …, aᵐ corresponding to the positive coordinates of the extreme point are linearly independent. In the general case the extreme point u⁰ ∈ U is found by the M-method (Charnes' method); in a number of cases it can be read off directly from the source data of the set U.
2°. A simplex table is built for the extreme point u⁰ ∈ U. The vectors aⁱ, i = 1,…,m are listed in the first column; the elements of the vector c with the corresponding indices in the second; the positive coordinates of the extreme point u⁰ ∈ U in the third; and in the remaining columns the decomposition coefficients of the vectors aʲ, j = 1,…,n with respect to the basis (a¹, …, aᵐ), i.e.

aʲ = Σ_{i=1}^{m} aⁱ u_{ij} = A_B u_j,

where A_B = (a¹, …, aᵐ) is a nonsingular matrix and u_j = (u_{1j}, …, u_{mj}), j = 1,…,n. The values z_j = Σ_{i=1}^{m} c_i u_{ij}, j = 1,…,n are entered in the penultimate row, and the values z_j − c_j, j = 1,…,n in the last row. The main purpose of the simplex table is to check the optimality criterion for the point u⁰ ∈ U. If it turns out that z_j − c_j ≤ 0, j = 1,…,n, the extreme point u⁰ ∈ U is a solution of problem (1), (2); otherwise a transition to the next extreme point u¹ ∈ U is realized, with J(u¹) ≤ J(u⁰).
3°. The extreme point u¹ ∈ U and the simplex table corresponding to it are built from the simplex table of the point u⁰ ∈ U. The index j₀ is defined from the condition z_{j₀} − c_{j₀} = max (z_j − c_j) among the z_j − c_j > 0. Column j₀ of the simplex table of the point u⁰ ∈ U is called the pivotal column, and the vector a^{j₀} enters the basis in the simplex table of the point u¹ ∈ U instead of the vector a^{i₀}. The index i₀ is defined from the condition min u_i⁰/u_{ij₀} = θ_{i₀} among the u_{ij₀} > 0. The extreme point

u¹ = (u₁⁰ − θ₀u_{1j₀}, …, u_{i₀−1}⁰ − θ₀u_{i₀−1,j₀}, 0, u_{i₀+1}⁰ − θ₀u_{i₀+1,j₀}, …, u_m⁰ − θ₀u_{mj₀}, 0, …, 0, θ₀, 0, …, 0), θ₀ = θ_{i₀},

where θ₀ stands in position j₀. The basis consists of the vectors a¹, …, a^{i₀−1}, a^{j₀}, a^{i₀+1}, …, aᵐ. The decomposition coefficients of the vectors aʲ, j = 1,…,n with respect to this basis are defined by the formula

(u_{ij})_new = u_{ij} − u_{ij₀} u_{i₀j}/u_{i₀j₀}, i ≠ i₀, j ≠ j₀;
(u_{i₀j})_new = u_{i₀j}/u_{i₀j₀}, j = 1,…,n.  (3)

Afterwards the values (z_j)_new, (z_j − c_j)_new are calculated from the known coefficients (u_{ij})_new, i = 1,…,m, j = 1,…,n. Thereby a new simplex table is built for the point u¹ ∈ U. Then the optimality criterion is checked again. If it turns out that (z_j − c_j)_new ≤ 0, j = 1,…,n, the extreme point u¹ ∈ U is a solution of problem (1), (2); otherwise a transition to a new extreme point u² ∈ U is realized, and so on.
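The recomputation of the decomposition coefficients by formula (3) is easy to mechanize. A minimal sketch of a single pivot step (an illustration, not the textbook's code):

```python
def pivot(T, i0, j0):
    """One simplex pivot by formula (3): divide the pivotal row i0 by the
    pivot element, then eliminate column j0 from every other row of the
    tableau T (a list of rows of numbers)."""
    p = T[i0][j0]
    T[i0] = [x / p for x in T[i0]]
    for i in range(len(T)):
        if i != i0:
            T[i] = [x - T[i][j0] * y for x, y in zip(T[i], T[i0])]
    return T
```

Applying pivot to the rows of the simplex table (with the column b appended to each row) reproduces the transition from one table to the next.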

Example 1. Solve the linear programming problem

J(u₁, u₂, u₃, u₄, u₅, u₆) = 6u₁ − 3u₂ + 5u₃ − 2u₄ − 4u₅ + 2u₆ → inf,

u₁ + 2u₂ + u₄ + 3u₅ = 17, 4u₂ + u₃ + u₅ = 12, u₂ + 8u₄ − u₅ + u₆ = 6,

u₁ ≥ 0, u₂ ≥ 0, u₃ ≥ 0, u₄ ≥ 0, u₅ ≥ 0, u₆ ≥ 0.

The matrix A and the vectors b and c are equal to

A = (a¹, a², a³, a⁴, a⁵, a⁶) =
( 1  2  0  1   3  0 )
( 0  4  1  0   1  0 )
( 0  1  0  8  −1  1 ),

b = (17, 12, 6)ᵀ, c′ = (6, −3, 5, −2, −4, 2).
Solution. The initial extreme point u⁰ = (17, 0, 12, 0, 0, 6) is easily found for this example. The condition vectors corresponding to the positive coordinates of the extreme point are a¹, a³, a⁶.

Table I, u⁰ = (17, 0, 12, 0, 0, 6), J(u⁰) = 174:

 Basis   c     b   |   a1     a2     a3     a4     a5     a6   |   θ
  a1     6    17   |    1      2      0      1      3      0   |  17/2
  a3     5    12   |    0     (4)     1      0      1      0   |  12/4
  a6     2     6   |    0      1      0      8     −1      1   |   6/1
 zj−cj             |    0     37      0     24     25      0

z_{j₀} − c_{j₀} = 37, j₀ = 2, i₀ = 3 (the pivot element is in parentheses).

Table II, u¹ = (11, 3, 0, 0, 0, 3), J(u¹) = 63:

 Basis   c     b   |   a1     a2     a3     a4     a5     a6   |   θ
  a1     6    11   |    1      0    −1/2     1     5/2     0   |   11
  a2    −3     3   |    0      1     1/4     0     1/4     0   |   –
  a6     2     3   |    0      0    −1/4    (8)   −5/4     1   |  3/8
 zj−cj             |    0      0   −37/4    24    63/4     0

z_{j₀} − c_{j₀} = 24, j₀ = 4, i₀ = 6.

Table III, u² = (85/8, 3, 0, 3/8, 0, 0), J(u²) = 54:

 Basis   c      b   |   a1     a2      a3     a4      a5      a6   |  θ
  a1     6    85/8  |    1      0   −15/32     0   (85/32)  −1/8   |  4
  a2    −3      3   |    0      1     1/4      0     1/4      0    |  –
  a4    −2     3/8  |    0      0    −1/32     1    −5/32    1/8   |  –
 zj−cj              |    0      0   −17/2      0    39/2     −3

z_{j₀} − c_{j₀} = 39/2, j₀ = 5, i₀ = 1.

Table IV, u³ = (0, 2, 0, 1, 4, 0), J(u³) = −24:

 Basis   c    b   |    a1      a2      a3     a4    a5      a6
  a5    −4    4   |   32/85     0   −15/85     0     1    −4/85
  a2    −3    2   |   −8/85     1    25/85     0     0     1/85
  a4    −2    1   |    1/17     0    −5/85     1     0    10/85
 zj−cj            | −624/85     0  −430/85     0     0  −177/85

Now z_j − c_j ≤ 0 for all j = 1,…,6, so the extreme point u³ = (0, 2, 0, 1, 4, 0) is the solution of the problem: J(u³) = −24.

Example 2.

J(u) = −2u₁ − 3u₂ + 5u₃ − 6u₄ + 4u₅ → inf;

2u₁ + u₂ − u₃ + u₄ = 5; u₁ + 3u₂ + u₃ − u₄ + 2u₅ = 8; −u₁ + 4u₂ + u₄ = 1; u_j ≥ 0, j = 1,…,5.

The corresponding M-problem has the form

J(u) = −2u₁ − 3u₂ + 5u₃ − 6u₄ + 4u₅ + Mu₆ + Mu₇ → inf;

2u₁ + u₂ − u₃ + u₄ + u₆ = 5; u₁ + 3u₂ + u₃ − u₄ + 2u₅ = 8; −u₁ + 4u₂ + u₄ + u₇ = 1; u_j ≥ 0, j = 1,…,7.

For the M-problem the matrix A and the vectors b and c are equal to

A = (a¹, a², a³, a⁴, a⁵, a⁶, a⁷) =
(  2  1  −1   1  0  1  0 )
(  1  3   1  −1  2  0  0 )
( −1  4   0   1  0  0  1 ),

b = (5, 8, 1)ᵀ, c′ = (−2, −3, 5, −6, 4, M, M).
 
Solution. The initial extreme point is u⁰ = (0, 0, 0, 0, 4, 5, 1). We present the values z_j − c_j in the form z_j − c_j = α_j M + β_j, j = 1,…,n; i.e. instead of one row for z_j − c_j two rows are entered: the values α_j are written in the first, the values β_j in the second. Since M is a sufficiently large positive number, z_j − c_j > z_k − c_k if α_j > α_k; and if α_j = α_k, then z_j − c_j > z_k − c_k if β_j > β_k.

Table I, u⁰ = (0, 0, 0, 0, 4, 5, 1), J(u⁰) = 16 + 6M:

 Basis   c    b  |   a1     a2     a3     a4    a5    a6    a7
  a6     M    5  |    2      1     −1      1     0     1     0
  a5     4    4  |   1/2    3/2    1/2   −1/2    1     0     0
  a7     M    1  |   −1     (4)     0      1     0     0     1
  αj             |    1      5     −1      2     0     0     0
  βj             |    4      9     −3      4     0     0     0

j₀ = 2, i₀ = 7: the vector a² enters the basis instead of the artificial vector a⁷, whose column is then dropped.

Table II, u¹ = (0, 1/4, 0, 0, 29/8, 19/4, 0), J(u¹) = 55/4 + (19/4)M:

 Basis   c     b    |   a1     a2     a3     a4    a5    a6
  a6     M   19/4   |  (9/4)    0     −1     3/4    0     1
  a5     4   29/8   |   7/8     0     1/2   −7/8    1     0
  a2    −3    1/4   |  −1/4     1      0     1/4    0     0
  αj                |   9/4     0     −1     3/4    0     0
  βj                |  25/4     0     −3     7/4    0     0

j₀ = 1: the vector a¹ enters the basis instead of the artificial vector a⁶.

Table III, u² = (19/9, 7/9, 0, 0, 16/9, 0, 0), J(u²) = 5/9:

 Basis   c     b    |   a1    a2     a3      a4     a5
  a1    −2   19/9   |    1     0    −4/9     1/3     0
  a5     4   16/9   |    0     0     8/9   −21/18    1
  a2    −3    7/9   |    0     1    −1/9     1/3     0
 zj−cj              |    0     0    −2/9    −1/3     0

As follows from the last simplex table (all z_j − c_j ≤ 0), the solution of the M-problem is the vector u² = (19/9, 7/9, 0, 0, 16/9, 0, 0). Then the solution of the source problem is the vector u² = (19/9, 7/9, 0, 0, 16/9), J(u²) = 5/9.
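Instead of the hand-run M-method, the source problem can be fed directly to SciPy's LP solver as a cross-check (an outside tool, assumed available; its interior-point/simplex machinery replaces the artificial variables u₆, u₇):

```python
from scipy.optimize import linprog

c = [-2, -3, 5, -6, 4]
A_eq = [[2, 1, -1, 1, 0],
        [1, 3, 1, -1, 2],
        [-1, 4, 0, 1, 0]]
b_eq = [5, 8, 1]
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 5)
print(res.x, res.fun)  # expect u = (19/9, 7/9, 0, 0, 16/9), J = 5/9
```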

To solve the following linear programming problems:


1. J (u )  2u1  u 2  2u3  3u 4  sup;
3u1  u3  u 4  6;
u 2  u3  u 4  2;

 u1  u 2  u3  5; u j  0, j  1,4 .
2. J (u )  u1  u 2  u3  3u 4  u5  u6  3u7  inf;
3u3  u5  u6  6;
u 2  2u3  u 4  10;
u1  u6  0;
u3  u6  u7  6; u j  0, j  1,7 .

3. J (u )  u1  2u 2  u3  2u4  u5  inf;
u1  2u2  u3  2;
2u1  u2  u4  0;
u1  3u2  u5  6;
u1  0; u 2  0, u3  0, u 4  0.
4. J (u )  2u1  3u 2  2u3  u 4  sup;
2u1  2u 2  3u3  u 4  6;
 u 2  u3  u4  2;
u1  u2  2u3  5;
u1  0; u 2  0, u3  0.
5. J (u )  u1  2u 2  u3  sup;
3u1  u2  u3  4;
2u1  3u2  6;
 u1  2 0 u 2  1, u j  0, j  1,3.
6. J (u )  u1  2u2  u3  inf;
u1  u2  1;
u1  2u2  u3  8;
 u1  3u 2  3, u j  0, j  1,3.
7. J (u )  2u1  u 2  u3  sup;

u1  2u2  u3  4;
u1  u2  2;
u1  u3  1, u j  0, j  1,3.
8. Steel rods of length 111 cm have arrived at the blanking shop. They must be cut into blanks of 19, 23 and 30 cm in quantities of 311, 215 and 190 pieces, respectively. Build a model on the basis of which one can formulate the extreme problem of choosing the variant of performing the work for which the number of rods cut is minimal.
9. There are two products, A and B, which must pass processing on four machines (I, II, III, IV) in the production process. The processing time of each product on each of these machines is specified in the following table:

Machine   A   B
I 2 ј
II 4 2
III 3 1
IV 1 4

The machines I, II, III and IV can be used for 45, 100, 300 and 50 hours, respectively. The price of product A is 6 tenge per unit, of product B 4 tenge. In what proportion should products A and B be produced in order to get the maximum profit? Solve the problem also under the assumption that product A is required in an amount of not less than 22 pieces.
10. A refinery disposes of two oil grades, A and B, from whose processing petrol and fuel oil are obtained. The three possible production processes are characterized by the following scheme:
a) 1 unit of grade A + 2 units of grade B → 2 units of fuel oil + 3 units of petrol;
b) 2 units of grade A + 1 unit of grade B → 5 units of fuel oil + 1 unit of petrol;
c) 2 units of grade A + 2 units of grade B → 2 units of fuel oil + 1 unit of petrol.
We assume the price of fuel oil to be 1 tenge per unit and the price of petrol 10 tenge per unit. Find the most profitable production plan if there are 10 units of oil of grade A and 15 units of oil of grade B.
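As a hint of the modelling step, problem 10 can be written as a linear program in the process intensities. A sketch (the variable names x_a, x_b, x_c for the intensities of processes a), b), c) are introduced here for illustration only):

```python
from scipy.optimize import linprog

# Revenue per unit intensity of each process: fuel oil at 1 tenge, petrol at 10
revenue = [2*1 + 3*10, 5*1 + 1*10, 2*1 + 1*10]   # processes a), b), c)
c = [-r for r in revenue]                        # linprog minimizes

A_ub = [[1, 2, 2],   # units of grade A consumed per process  <= 10
        [2, 1, 2]]   # units of grade B consumed per process  <= 15
b_ub = [10, 15]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.x, -res.fun)  # optimal intensities (x_a, x_b, x_c) and total revenue
```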

Appendix 2

TASKS ON MATHEMATICAL PROGRAMMING

Three term tasks for students of the 2nd and 3rd courses (5th and 6th terms, accordingly) educated in the specialties "Applied Mathematics", "Mathematics", "Mechanics", "Informatics", "Economical Cybernetics" are worked out for the following parts:
convex sets and convex functions (1st task),
convex and nonlinear programming (2nd task),
linear programming (3rd task).
Below we present variants of the tasks for the course "Mathematical Programming".

Task 1

Check whether the function J(u) is convex (concave) on the set U, or indicate those points of U in a neighborhood of which J(u) is neither convex nor concave (variants 1 – 89).
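A practical way to carry out this check for the quadratic variants: a twice differentiable J(u) is convex (concave) on an open set exactly when its Hessian is positive (negative) semidefinite there, and for a quadratic J the Hessian is a constant matrix, so one eigenvalue computation settles the question. A sketch on a hypothetical matrix of the same shape as in the variants (not one of the variants above):

```python
import numpy as np

# Hessian of a hypothetical quadratic J(u) on E^3
H = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 4.0]])
eig = np.linalg.eigvalsh(H)  # eigenvalues in ascending order
print("convex" if eig.min() >= 0 else
      "concave" if eig.max() <= 0 else "neither")
```

For variants with cubic or higher-degree terms the Hessian depends on u, and the same test must be applied pointwise over the set U.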
1. J u   u16  u22  u32  u42  10u1  5u2  3u4  20; U  E 4 .
2. J u   e
2 u1  u 2
; U  E 2.
3. J u   u13  u23  u32  10u1  u2  15u3  10; U  E 3.
1 2
4. J u   u1  u2  u3  u1u2  u3  10; U  E .
2 2 3

2
5. J u   u1  u 2  2u3  u1u 2  u1u3  u 2u3 
2 2 2

 5u2  25; U  E 3 .

6. J u   u2 
1 2
2
5

u3  7u1  u3  6; U  u  E 3 : u  0 . 
7. J u   3u1  u2  2u3  u1u2  3u1u3  u2u3  3
2 2 2

 3u2  6; U  E 3.

1 2
8. J u   5u1  u2  4u32  u1u2  2u1u3  2u2u3 
2

2
 u3  1; U  E 3.
1 2 1
9. J u   2u1  u 2  5u 3  u1u 2  2u1u 3  u 2 u 3 
2 2

2 2
 3u1  2u 2  6; U  E 3 .
10. J u   u13  2u32  10u1  u 2  5u3  6;
U  {u  E 3 : u  0}.
11. J u   5u14  u26  u32  13u1  7u3  8; U  E 3.
12. J u   3u12  2u22  u32  3u1u2  u1u3 
 2u2u3  17; U  E 3.
1 4
13. J u   4u13  u24  u3  3u1  8u2  11;
2
U  {u  E 3 : u  0}.
14. J u   8u13  12u32  3u1u3  6u2  17;
U  {u  E 3 : u  0}.
15. J u   2u12  2u 22  4u32  2u1u 2  2u1u3 
 2u 2u3  16; U  E 3 .
1 2
16. J u   2u1  u2  u3  2u1u2  8u3  12; U  E 3 .
2 2

2
1 7 1 4
17. J u    u2  u3  2u2u3  11u1  6;
2 2
U  {u  E 3 : u  0}.
5 3 1
18. J u   u12  u 22  4u 32  u1u 2  2u1u 3  u 2 u 3 
2 2 2
+ 8u3  13; U  E 3 .

1 3
19. J u  3u1  u2  2u1u2  5u1u3  7u1  16;
2

2
U  {u  E 3 : u  0}.
3 2 1
20. J u   2u12  u22  u3  u1u2  u1u3 
2 2
 2u2u3  10; U  E 3 .
3 5
21. J u   2u12  u32  u1u3  12u2  18; U  E 3 .
2 2
22. J u   6u12  u23  6u32  12u1  8u2  7;
U  {u  E 3 : u  0}.
3 2
23. J u   u1  u2  2u3  u1u2  u1u3  2
2 2

2
 2u2u3  8u2 ; U  E 3.
5 2
24. J u   4u1  u2  u3  4u1u2  11u3  14; U  E .
2 2 3

2
7 2 4 3 1 3
25. J u   u1  u 2  u 3  13u1  7u 3  9;
2 2 2
U  {u  E : u  0}.
3

5 3 1 5 3 3
26. J u    u1  u2  u3  22u2  17;
6 4 2
U  {u  E : u  0}.
3

5 3 3 2
27. J u    u1  2u 2  u 3  2u1u 2  3u1u 3  u 2 u 3 ;
2

6 2
U  {u  E : u  0}.
3

28. J u   2u12  4u22  u32  u1u2  9u1u3  u2u3  9; U  E 3.

3 2 5 2 9 2
29. J u   u1  u2  u3  3u1u3  7u2u3 ; U  E 3 .
2 2 2

7 3 5 2 5 4 1 3
30. J u    u1  u 2  u 3  u 4  3u 2 ;
6 2 12 2
U  {u  E : u  0}.
3

1 3
31. J u   3u1  u 2  2u1u 2  5u1u 3  7u1  16;
2

2
U  {u  E 3 : u  0}.

5 2 3
32. J u   u1  u 22  4u 32  u1u 2  2u1u 3 
2 2
1
 u 2 u 3  8u 3  13; U  E 3.
2
1 7 1 4
33. J u    u2  u3  2u2u3  11u1  6;
2 2
U  {u  E 3 : u  0}.
1 2
34. J u   2u1  u2  u3  2u1u2  8u3  12; U  E 3 .
2 2

2
35. J u   2u1  2u2  4u3  2u1u2  2u1u3 
2 2 2

 2u2u3  16; U  E 3.
7 3 5 2 5 4 1 3
36. J u    u1  u 2  u 3  u 4  3u 2 ;
6 2 12 2
U  {u  E : u  0}.
4

3 2 5 2 9 2
37. J u   u1  u2  u3  3u1u3  7u2u3 ; U  E .
3

2 2 2
38. J u   2u1  2u2  u3  u1u2  9u1u3  u2u3  9; U  E .
2 2 2 3

5 3 3 2
39. J u    u1  2u 2  u 3  2u1u 2  3u1u 3  u 2 u 3 ;
2

6 2
U  {u  E : u  0}.
3

5 3 1 5 3 3
40. J u    u1  u2  u3  22u2  17;
6 4 2
U  {u  E : u  0}.
3

7 2 4 3 1 3
41. J u   u1  u 2  u 3  13u1  7u 3  9;
2 3 2
U  {u  E : u  0}.
3

5 2
42. J u   4u1  u2  u3  4u1u2  11u3  14; U  E .
2 2 3

2
3
43. J u   u1  u2  2u3  u1u2  u1u3 
2 2 2

2
 2u2u3  8u2 ; U  E 3.
44. J u   6u12  u32  6u32  12u1  8u2  7;
U  {u  E 3 : u  0}.
3 2 5
45. J u   2u1  u3  u1u3  12u2  18; U  E .
2 3

2 2
3 1
46. J u   2u1  u2  u3  u1u2  u1u3 
2 2 2

2 2
 2u2u3  10; U  E 3.
47. J u   u16  u22  u32  u42  10u1  3u4  20; U  E 4 .
48. J u   e
2 u1  u 2
; U  E 2.
49. J u   u13  u23  u33  10u1  u2  15u3  10; U  E 3.
1 2
50. J u   u1  u2  u3  u1u2  u3  10; U  E .
2 2 3

2
51. J u   u1  u2  2u3  u1u2  u1u3  u2u3 
2 2 2

 5u2  25; U  E 3.
1 2
52. J u   u2  u3  7u1  u3  6; U  {u  E 3 : u  0}.
5

53. J u   3u12  u22  2u32  u1u2  3u1u3 
 u2u3  3u2  6; U  E 3.
1 2
54. J u   5u1  u 2  4u 32  u1u 2  2u1u 3  2u 2 u 3  u 3  1;
2

2
U  E 3.
55. J u   u1  2u3  10u1  u2  5u3  6;
3 3

U  {u  E 3 : u  0}.
56. J u   5u14  u26  u32  13u1  7u3  8; U  E 3.
57. J u   3u12  2u22  u32  3u1u2  u1u3 
 2u3u2  17; U  E 3.
1 4
58. J u   4u1  u2  u3  3u1  8u2  11;
3 4

2
U  {u  E 3 : u  0}.
59. J u   3u13  12u32  3u1u3  6u2  17;
U  {u  E 3 : u  0}.
60. J u   2u12  2u22  4u32  2u1u2  2u1u3 
 2u2u3  16; U  E 3.
61. J u   u16  u22  u32  u42  10u1  3u4  20; U  E 4 .
62. J u   e
2 u1  u 2
; U  E 2.
63. J u   u13  u23  u33  10u1  u2  15u3  10; U  E 3.
1 2
64. J u   u1  u2  u3  u1u2  u3  10; U  E .
2 2 3

2
65. J u   u1  u2  2u3  u1u2  u1u3  u2u3 
2 2 2

 5u2  25; U  E 3.
1 2
66. J u   u2  u3  7u1  u3  6; U  {u  E 3 : u  0}.
5

67. J u   3u12  u22  2u32  u1u2  3u1u3  u2u3 
 3u2  6; U  E 3.
1 2
68. J u   5u1  u2  u32  u1u2  2u1u3  2u2u3 
2

2
 u3  1; U  E 3.
1 2 1
69. J u   2u1  u 2  5u 3  u1u 2  2u1u 3  u 2 u 3 
2 2

2 2
 3u1  2u 2  6; U  E 3 .
70. J u   u13  u33  10u1  u2  5u3  6;
U  {u  E 3 : u  0}.
71. J u   5u14  u26  u32  13u1  7u3  8; U  E 3.
72. J u   3u12  2u22  2u32  3u1u2  u1u3 
 2u2u3  17; U  E 3.
1 4
73. J u   4u3  u24  u3  3u1  8u2  11;
2
U  {u  E 3 : u  0}.
74. J u   8u13  12u32  3u1u3  6u2  17;
U  {u  E 3 : u  0}.
75. J u   2u12  2u22  4u32  u1u2  2u2u3  16; U  E 3.
1 2
76. J u   2u1  u2  u3  2u1u2  8u3  12; U  E .
2 2 3

2
1 7 1 4
77. J u    u2  u3  2u2u3  11u1  6;
2 2
U  {u  E : u  0}.
3

5 2 3
78. J u    u1  u 22  4u 32  u1u 2  2u1u 3 
2 2
1
 u 2 u 3  8u 3  13; U  E 3.
2
1 3
79. J u   u1  u 2  2u1u 2  5u1u 3  7u1  16;
2

2
U  {u  E 3 : u  0}.
3 2 1
80. J u   2u1  u2  u3  u1u2  u1u3 
2 2

2 2
 2u2u3  10; U  E 3.
3 2 5
81. J u   u1  u3  u1u3  12u2  18; U  E .
2 3

2 2
82. J u   6u1  u2  6u3  12u1  8u2  17;
2 3 2

; U  {u  E 3 : u  0}.
3 2
83. J u   u1  u2  2u2  u1u2  u1u3  2u2u3  8u2 ; U  E .
2 2 3

2
7 2 4 3 1 3
84. J u   u1  u 2  u 3  13u1  7u 3  9;
2 3 2
U  {u  E : u  0}.
3

5 3 1 5 3 3
85. J u   u1  u2  u3  22u2  10;
6 4 2
U  {u  E : u  0}.
3

5 3 3 2
86. J u    u1  2u 2  u 3  2u1u 2  3u1u 3  u 2 u 3 ;
2

6 2
U  {u  E : u  0}.
3

87. J u   2u1  4u2  u3  u1u2  9u1u3  u2u3  8; U  E .


2 2 2 3

3 2 5 2 9 2
88. J u   u1  u2  u3  3u1u3  7u2u3 ; U  E .
3

2 2 2

7 3 5 2 5 4 1 3
89. J u   u1  u2  u3  u4  3u2 ;
6 2 12 2
U  {u  E : u  0}.
3

Task 2

Solve the following problems of convex or nonlinear programming (variants 1 – 87):
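Any of the variants can be checked numerically once its data are written down. A sketch with SciPy's SLSQP solver on a hypothetical objective and constraints of the same form as variant 1 (the data here are illustrative; replace them with those of your own variant):

```python
from scipy.optimize import minimize

# Hypothetical concave quadratic to maximize (so we minimize its negative)
J = lambda u: -(3*u[0] + 2*u[1] - 0.5*u[0]**2 - u[1]**2 - u[0]*u[1])

# Linear restrictions in the form fun(u) >= 0, plus u >= 0 via bounds
cons = [{"type": "ineq", "fun": lambda u: 2 - 2*u[0] - u[1]},
        {"type": "ineq", "fun": lambda u: 2 - u[0] - 2*u[1]}]
res = minimize(J, [0.0, 0.0], bounds=[(0, None), (0, None)],
               constraints=cons, method="SLSQP")
print(res.x, -res.fun)  # optimal point and maximal value
```

Since the hypothetical objective is concave and the feasible set convex, a local method suffices here; for genuinely nonconvex variants several starting points should be used, as in Example 1 of P.1.4.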
1 2
1. J u   3u1  2u 2 u1  u22  u1u2  max,
2
2u1  u2  2, u1  0, u1  2u2  2, u2  0.
1 2
2. J u   3u1  2u2  u1  u2  u1u2  max,
2

2
u1  3, u2  6, u1  0, u2  0.
3 2
3. J u   4u1  8u 2  u1  u 2  2u1u 2  max,
2

2
u1  u2  3, u1  0, u1  u2  1, u2  0.
3 2
4. J u   4u1  8u 2  u1  u 2  2u1u 2  max,
2

2
 u1  u2  1, u1  0, u1  4, u2  0.
3 2
5. J u   4u1  8u 2  u1  u 2  2u1u 2  max,
2

2
3u1  5u2  15, u1  u2  1, u1  0, u2  0.
1 2
6. J u   3u1  2u 2  u1  u 2  u1u 2  max,
2

2
 u1  2u2  2, u1  0, 2u1  u2  2, u2  0.
7. J u   u1  6u 2  u1  3u 2  3u1u 2  max,
2 2

4u1  3u2  12, u1  0,  u1  u2  1, u2  0.

8. J u   u1  6u 2  u1  3u 2  3u1u 2  max,
2 2

u1  u2  3, u1  0,  2u1  u2  2, u2  0.
9. J u   u1  6u 2  u1  3u 2  3u1u 2  max,
2 2

u1  u2  0, u1  0, u2  5, u2  0.
3 2
10. J u   6u 2  u1  u 2  2u1u 2  max,
2

2
3u1  4u2  12, u1  0,  u1  u2  2, u2  0.
3 2
11. J u   6u 2  u1  u 2  2u1u 2  max,
2

2
 u1  2u2  2, u1  0, u1  2, u2  0.
3 2
12. J u   6u 2  u1  u 2  2u1u 2  max,
2

2
3u1  4u2  12, u1  0,  u1  2u2  2, u2  0.
3 2
13. J u   8u1  12u 2  u1  u 2  max,
2

2
 2u1  u2  4, u1  0, 2u1  5u2  10, u2  0.
3 2
14. J u   8u1  12u 2  u1  u 2  max,
2

2
 u1  2u2  2, u1  0, u1  6, u2  0.
3 2
15. J u   8u1  12u 2  u1  u 2  max,
2

2
 3u1  2u2  0, u1  0, 4u1  3u2  12, u2  0.
1 2
16. J u   3u1  2u 2  u1  u 2  u1u 2  max,
2

2
 2u1  u2  2, u1  0, 2u1  3u2  6, u2  0.
1 2
17. J u   6u1  4u 2  u1  u 2  u1u 2  max,
2

2
u1  2u2  2, u1  0,  2u1  u2  0, u2  0.

1 2
18. J u   6u1  4u 2  u1  u 2  u1u 2  max,
2

2
2u1  u2  2, u1  0, u2  1, u2  0.
1 2
19. J u   6u1  4u 2  u1  u 2  u1u 2  max,
2

2
3u1  2u2  6, u1  0,  3u1  u2  3, u2  0.
20. J u   8u1  6u2  2u1  u2  max, 
2 2

 u1  u2  1, u1  0, 3u1  2u2  6, u2  0.
21. J u   8u1  6u2  2u1  u2  max,
2 2

 u1  u2  1, u1  0, u1  3, u2  0.
22. J u   8u1  6u2  2u1  u2  max,
2 2

 u1  u2  2, u1  0, 3u1  4u2  12, u2  0.


23. J u   2u1  2u 2  u1  2u 2  2u1u 2  max,
2 2

4u1  3u2  12, u1  0, u2  3, u2  0.


24. J u   2u1  2u 2  u1  2u 2  2u1u 2  max,
2 2

2u1  u2  4, u1  0,  u1  u2  2, u2  0.
25. J u   2u1  2u 2  u1  2u 2  2u1u 2  max,
2 2

2u1  u2  2, u1  0, u 2  4, u 2  0.
26. J u   4u1  4u 2  3u1  u 2  2u1u 2  max,
2 2

4u1  5u2  20, u1  0, u1  4, u2  0.


27. J u   4u1  4u 2  3u1  u 2  2u1u 2  max,
2 2

3u1  6u2  18, u1  0, u1  4u2  4, u2  0.


28. J u   4u1  4u 2  3u1  u 2  2u1u 2  max,
2 2

3u1  4u2  12, u1  0, u1  2u2  2, u2  0.


29. J u   12u1  4u2  3u1  u2  max,
2 2

1 1
u1  u2  6, u1  0,  u1  u2  1, u2  0.
2 2

11 1 2 1
30. J u   u1  u 2  u12  u 22  u1u 2  max,
2 6 3 2
2u1  u2  2, u1  0,  u1  2u2  2, u2  0.
31. J u   18u1  12u 2  2u1  u 2  2u1u 2  max,
2 2

1
u1  u2  4, u1  0, u1  u2  1, u2  0.
2
1 2 5 2
32. J u   6u1  16u 2  u1  u 2  2u1u 2  max,
2 2
5u1  2u2  10, u1  0, 3u1  2u 2  6, u 2  0.
33. J u   11u1  8u 2  2u1  u 2  u1u 2  max,
2 2

u1  u2  0, u1  0, 3u1  4u2  12, u2  0.


34. J u   8u 2  4u12  2u 22  4u1u 2  max,
u1  4, u2  3, u1  0, u2  0.
35. J u   18u1  20u2  u12  2u22  2u1u2  max,
u1  u2  5, u1  2, u2  0.
3 1
36. J u   12u1  2u 2  u12  u 22  u1u 2  max,
2 2
u1  4, u2  3, u1  3u2  6, u2  0.
37. J u   26u1  20u 2  3u12  2u 22  4u1u 2  max,
2u1  u2  4, u1  0, u2  2, u2  0.
38. J u   10u1  8u 2  u1  2u 22  2u1u 2  max,
2

 u1  u2  2, u1  0, u1  5, u2  0.
13 1
39. J u   u1  5u 2  2u12  u 22  u1u 2  max,
2 2
u1  u2  3, u1  0, u1  u2  1, u2  0.
9 27 1 1
40. J u   u1  u 2  u12  2u 22  u1u 2  max,
2 2 2 2
u1  u2  2, u1  0, u1  u2  4, u2  0.

41. J u   2u1  8u 2  u1  5u 2  4u1u 2  max,
2 2

u1  u2  3, u1  0,  2u1  3u2  6, u2  0.
42. J u   8u1  18u 2  u1  2u 22  2u1u 2  max,
2

 u1  u2  3, u1  0, u1  2, u2  0.
23 3
43. J u   u1  u12  2u 22  u1u 2  max,
2 2
5u1  4u2  20, u1  0, u1  u2  2, u2  0.
44. J u   48u1  28u 2  4u1  2u 22  4u1u 2  max,
2

2u1  u2  6, u1  0,  2u1  u2  4, u2  0.
45. J u   u1  10u 2  u1  2u 22  u1u 2  max,
2

3u1  5u2  15, u1  0, u1  2u2  4, u2  0.


46. J u   6u1  18u 2  2u1  2u 22  2u1u 2  max,
2

2u1  u2  2, u1  0, 5u1  3u2  15, u2  0.


47. J u   u1  5u 2  u12  u 22  u1u 2  max,
u1  u2  3, u1  0, u1  2, u 2  0.
48. J u   14u1  10u 2  u1  u 22  u1u 2  max,
2

 3u1  2u2  6, u1  0, u1  u2  4, u2  0.
49. J u   u1  u2  6  max, u1  u2  5, u2  0.
2 2

50. J u   5u12  u 22  4u1u 2  max, u1  0, u2  0.


u12  u22  2u1  4u2  4,
51. J u   u12  u1u2  u2u3  6  max,
u1  2u2  u3  3, u j  0, j  1,3.
52. J u   u2u3  u1  u2  10  min,
2u1  5u2  u3  10, u 2  0, u 3  0.
53. J u   u1u2  u1  u2  5  min,
2u1  u2  3, u1  0, 5u1  u2  4, u2  0.

54. J u   10u1  5u 2  u1  2u 2  10  min,
2 2

2u12  u 2  4, u 2  0, u1  u2  8.
55. J u   u3  u1u 2  6  min,
2

2u2  u3  3, u j  0, u1  u2  u3  2, j  1,2,3 .
56. J u   2u1  3u2  u1  6  max,
2 2

u12  u2  3, u2  0, 2u1  u2  5.
57. J u   u1  3u1  5  min,
2

u12  u22  2u1  8u2  16,  u12  6u1  u2  7.


5 2
58. J u   u1  u2  u1u2  7  min,
2

2
u12  4u1  u2  5,  u12  6u1  u2  7.

59. J u   u1  7  max,


5

u12  u22  4u2  0, u1u2  1  0, u1  0.


7 2
60. J u   4u1  u 2  u1u 2  u 2  5  min,
2

2
2u1  9u 2  18, u 2  0,  u1  u2  1.
2 2

61. J u   2u1  3u2  11  min,


5 3

u12  u22  6u1  16u2  72, u2  8  0.


62. J u   3u1  u 2  2u1u 2  5  min,
2 2

25u12  4u 22  100, u1  0, u12  1  0.


63. J u   5u1  2u 2  3u1  4u 2  18  max,
2 5

3u12  6  u2  0, u12  u 12  9, u1  0, u 2  0.
5 2
64. J u   3u1  u 2  3u1u 2  7  min,
2

2
3u1  u 2  1, u22  2.

65. J u   u1  2u1u2  u1  6  max,
6

3u12  15, u1  0  u1  5u 2  10, u 2  0.


66. J u   4u1  3u 2  4u1u 2  u1  6u 2  5  max,
2 2

 u12  u 22  3,  3u12  u2  4.


67. J u   u1  2u 2  u1u 2  u1  26  max,
2 2

u12  25, u1  2u2  5, u2  0.


68. J u   u1  u2  max,
2 2

2u1  3u2  6,  u1  5u2  10, u1  0.


69. J u   2u1  u 2  4u1u 2  u1  6  max,
2 2

 u1  u2  1, u1  0 , 2u1  u2  5, u2  0.
70. J u   2u1  3u 2  u1u 2  6  max,
2 2

u1  u2  3.  u12  u2  5, u1  0
71. J u   u1  u 2  u1  5u 2  5  max, , u1  u2  5.
2 2

72. J u   u1  u 2  u1u 3  min,


2 2

3u1  u 2  u 3  4, u j  0, u1  2u2  2u3  3, j  1,2,3.


73. J u   5u1  6u 2  u 3  8u1u 2  u1  max,
2 2 2

u12  u 2  u3  5, u1  5u 2  8, u1  0, u 2  0.
74. J u   u12  u22  u32  min,
u1  u2  u3  3, 2u1  u2  u3  5.
75. J u   3u12  u22  u32  u1u2  6  min, 2
2u1  u2  u3  5, u2  3u3  8, u2  0.
76. J u   u1u 2  u1u 3  2u 2 u 3  u1  5  max,
u1  u2  u3  3, 2u1  u 2  5, u1  0.
1 2 3 2
77. J u   u1  u 2  u 3  12u1  13u 2  5u 3  min,
2

2 2
u1  5u2  4u3  16 , 2u1  7u 2  3u 3  2, u1  0.

78. J u   u1  2u 2  30u1  16u 3  10  min,
2 2

5u1  3u2  4u3  20, u1  6u2  3u3  0, u3  0.


1 2
79. J u   u1  u2  5u1  u3  16  min, u
2

2
u1  u2  2u3  3, 2u1  u 2  3u 3  11, u j  0, j  1,2,3.

80. J u   2u1  2u1  4u 2  3u 3  8  max,


2

8u1  3u2  3u3  40,  2u1  u 2  u 3  3, u 2  0.


81. J u   u1u3  u1  10  min, u12  u3  3, u j  0
u22  u3  3, j  1,2,3.
82. J u   3u1  u 2  u 2  7u 3  max,
2 2

4u1  u 2  2u3  5  1, u1  0, 2u2  u3  4.


83. J u   u1  u 2  u1u 2  6u1u 3  u 2 u 3  u1  25  min,
2 2

u12  u 22  u1u 2  u3  10,


u1u 2  u1  2u 2  u 3  4, u j  0, j  1,2,3.
84. J u   e u1 u2 u3   max, 
 u12  u 22  u32  10, u 2  0 u13  5, u3  0.
85. J u   3u 2  11u1  3u 2  u 3  27  max,
2

u1  7u2  3u3  7, 5u1  2u 2  u 3  2, u 3  0.


86. J u   4u12  8u1  u2  4u3  12  min,
3u1  u2  u3  5 , u1  2u 2  u 3  0, u j  0, j  1,2,3.
1 2 3 2
87. J u   u 2  u 3  2u 2  9u 3  min,
2 2
3u1  5u2  u3  19, 2u 2  u 3  0, u j  0, j  1,2,3.

Task 3

Solve the linear programming problem (variants 1 – 89):

J(u) = ⟨c, u⟩ → min;  U = {u ∈ E^n / Au = b, u ≥ 0}.
 3 1 1 1 1
 
1. c  (5,1,1,0,0), b  5,4,11, A   2  1 3 0 0 .
 0 5 6 1 0
 
1 2 1 6 1
 
2. c  (6,1,1,2,0), b  4,1,9, A   3  1  1 1 0 .
1 3 5 0 0
 
3 1 1 6 1 
 
3. c  (0,6,1,1,0), b  6,6,6 , A   1 0 5 1  7 .
1 2 3 1 1 
 
5 1 1 3 1
 
4. c  (7,1,1,1,0), b  5,3,2 , A   0  2 4 1 1 .
 1  3 5 0 0
 
 1 1 1 2 1
 
5. c   (8,1,3,0,0), b   4,3,6 , A   2 0 1  3 5 .
 3 0  1 6 1
 
  2 1 2 0 0
 
6. c  (0,1,3,1,1), b  2,8,5, A   1 1 4 1 3 .
 3 1  1 0 6 

 2 0 1 1 1
 
7. c  (1,2,1,1,0), b  2,7,2 , A   4 1 3 1 2 .
1 0 1 2 1
 

 6 1 1 2 1
 
8. c  (0,1,6,1,3), b  9,14,3, A    1 0  1 7 8 .
 1 0 2 1 1
 
 2 0 3 1 1
 
9. c  (8,1,1,1,0), b  5,9,3, A   3 1 1 6 2 .
 1 0 2 1 2
 
 2 0 3 1 0
 
10. c  (1,3,1,1,0), b  4,4,15, A   1 0  1 2 3 .
 3 3 6 3 6
 
 4 1 1 0 1
 
11. c  (0,2,0,1,3), b  6,1,24 , A    1 3  1 0 3 .
 8 4 12 4 12 
 
 8 16 8 8 24 
 
12. c   (10,5,25,5,0), b  32,1,15, A   0 2  1 1 1 .
 0 3 2 1 1 
 
 4 1 1 2 1
 
13. c  (6,0,1,1,2), b  8,2,2 , A   2  1 0 1 0 .
1 1 0 0 1
 
1 2 3 4 1
 
14. c  (5,1,3,1,0), b  7,7,12 , A   0 3  1 4 0 .
0 4 0 8 1 

3 4 1 0 0
 
15. c  (5,3,2,1,1), b  12,16,3, A   3 2 1 1 1 .
1  3 0 0 1 

1 1 1 0 0
 
16. c  (7,0,1,1,1), b  1,12,4, A   2 2 1 1 2 .
 2 1 0 0 1
 
 1 1 1 0 0
 
17. c  (6,1,2,1,1), b  2,11,6 , A   5 2 1 1 1 .
 3 2 0 0 1
 
 2 1 1 1 3
 
18. c  (0,0,3,2,1), b  5,7,2 , A   3 0 2  1 6 .
1 0 1 2 1
 
6 3 1 1 1
 
19. c  (1,7,2,1,1), b  20,12,6 , A   4 3 0 1 0 .
 3  2 0 0 1
 

 1 2 1 0 0
 
20. c  (2,0,1,1,1), b  2,14,1, A   3 5 1 1 2 .
 1 1 0 0 1
 
 1 2 1 0 0
 
21. c  (6,1,0,1,2), b  2,18,2, A   2 6 2 1 1 .
 1  2 0 0 1
 
 1 2 1 0 0
 
22. c  (0,3,1,1,1), b  2,2,6 , A   1 1 0 1 0 .
 2 1 1 1 2
 
 2 2 1 1 1
 
23. c  (3,0,1,2,1), b  6,2,2 , A   2  1 0 1 0 .
1 1 0 0 1
 

 1 1 1 0 0
 
24. c  (0,5,1,1,1), b  2,2,10 , A   1  2 0 1 0 .
2 1 1 1 2 

 3 4 1 0 0
 
25. c  (1,5,2,1,1), b  12,1,3, A    1 1 0 1 0 .
 3 2 1 1 1
 
 1 1 1 0 0
 
26. c  (5,0,1,1,1), b  1,3,12 , A    3 1 0 1 0 .
 2 2 1 1 2 

 1 1 1 0 0
 
27. c  (7,0,2,1,1), b  2,3,11, A   3  1 0 1 0 .
5 2 1 1 1 

 5 5 1 2 1
 
28. c  (1,4,1,1,1), b  28,2,12 , A    1 2 0 1 0 .
 3 4 0 0 1
 
1 2 1 0 0
 
29. c   (0,8,2,1,1), b   2,20,6 , A   6 3 1 1 1 .
 3 2 0 0 1 

3 5 1 1 2
 
30. c  (0,2,1,1,1), b  14,10,1, A   2 5 0 1 0 .
1 1 0 0 1 

 1 2 1 0 0
 
31. c  (7,2,0,1,2), b  2,12,18, A   3 4 0 1 0 .
 2 6 2 1 1 

 1 2 1 0 0
 
32. c  (1,3,1,1,1), b  2,6,1, A   2 1 1 1 2 .
 1 1 0 0 1
 
 1 2 1 0 0
 
33. c  (5,1,1,1,2), b  2,8,2 , A   4 1 1 2 1 .
 1 1 0 0 1
 
1 1 2 2 1 
 
34. c   (1,2,1,1,1), b   11,2,3, A  1  2 0 1 0 .
1 1 0 0 1 
 
2 3 1 2 1
 
35. c  (10,5,2,1,1), b  17,1,3, A    1 1 0 1 0 .
 1  3 0 0 1
 
1 0 2 1 3
 
36. c  (2,1,3,1,1), b  6,16,7 , A   2 2 4 8 4 .
1 0 1 7 1
 
 1 1 1 0 0
 
37. c  (4,1,1,2,1), b  2,13,16, A   4 3 2 1 0 .
 3 2 0 0 1
 
 4  3 1 0 0
 
38. c  ( 2,2,1,2,1), b  12,2,26 , A    1 2 0 1 0 .
 6 3 1 1 1 

 9 1 1 1 2
 
39. c  (5,2,1,1,1), b  26,12,6 , A   4 3 0 1 0 .
3  2 0 0 1
 

 2 6 1 1 1
 
40. c  (1,11,1,2,1), b  13,10,1, A   2 5 0 1 0 .
1 1 0 0 1
 
 3 1  3 1 0
 
41. c  (5,1,1,2,0), b  1,6,2 , A   2 3 1 2 1 .
 3 1  2 1 0
 
 2 1 1 1 2
 
42. c  (0,3,1,1,1), b  6,2,1, A   1 1 0 1 0 .
1 1 0 0 1
 
1 1 1 2 1
 
43. c  (8,1,3,0,0), b  4,3,6, A   2 0 1  3 5 .
 3 0  1 6 1
 
 1 1 1 0 0
 
44. c  (2,1,1,1,1), b  2,11,3, A   1 1 2 2 1 .
 1 1 0 0 1
 
4 3 2 1 1
 
45. c  (5,0,1,2,1), b  13,3,6 , A   3 1 0 1 0 .
3 2 0 0 1 

1 1 1 0 0
 
46. c  (1,3,1,1,1), b  1,17,4 , A   5 2 2 1 3 .
2 1 0 0 1 

3 4 1 0 0
 
47. c  (9,5,2,1,1), b  12,17,3, A   2 3 1 2 1 .
1  3 0 0 1 

 4  3 1 0 0
 
48. c  (1,1,1,2,1), b  12,26,12 , A   6 3 1 1 1 .
 3 4 0 0 1
 
 1 2 1 0 0
 
49. c  (0,7,1,1,1), b  2,26,6, A   9 1 1 1 2 .
 3  2 0 0 1
 
 1 2 1 0 0
 
50. c  (4,8,1,2,1), b  2,13,1, A   2 6 1 1 1 .
 1 1 0 0 1
 
1 1 0 1 2
 
51. c  (3,1,1,1,0), b  3,6,5, A   2 1 1 2 3 .
3 2 0  3 8
 
 1 2 1 0 0
 
52. c  (1,3,1,2,1), b  2,2,5, A   1 1 0 1 0 .
 1 2 1 1 1
 
 1 2 1 0 0
 
53. c   (0,1,1,2,1), b  2,2,6 , A   2  1 0 1 0 .
 2 2 1 1 1
 
 1 1 1 0 0
 
54. c  (0,5,1,1,1), b  2,2,11, A   1  2 0 1 0 .
1 1 2 2 1 

 3 4 1 0 0
 
55. c  (9,2,1,0,1), b  12,1,17 , A    1 1 0 1 0 .
 2 3 1 2 1
 

1 1 1 0 0
 
56. c  (1,0,1,1,1), b  1,3,17 , A   3 1 0 1 0 .
5 2 2 1 3
 
1 1 1 0 0
 
57. c  (3,2,1,2,1), b  2,3,13, A   3  1 0 1 0 .
4 3 2 1 1 

 1 2 1 0 0
 
58. c  (9,0,1,1,1), b  2,12,26 , A   4 3 0 1 0 .
 9 1 1 1 2
 
 6 3 1 1 1
 
59. c  (5,5,1,2,1), b  26,2,12, A    1 2 0 1 0 .
 3 4 0 0 1
 
 1 2 1 0 0
 
60. c  (0,10,1,2,1), b  2,10,1, A   2 5 0 1 0 .
 2 6 1 1 1
 
3 1 3 1 2 
 
61. c  (3,2,1,1,0), b  5,5,5, A   3 2 1 1 1 .
 7  2 2 0  1
 
 5 10 5 15 10 
 
62. c  (1,2,1,1,0), b  25,3,5, A   0 1  1 6 2 .
 0 6 1  1  1
 
 2 1 0 1 1
 
63. c  (1,1,2,1,0), b  4,7,9 , A   3 2 0 1 1 .
 1 1 1 2 6
 

2 1 1 1 1 
 
64. c  (1,3,1,0,0), b  4,3,6 , A   1 0 2  1  3 .
3 0 3 1 2 

3 1 1 2 3 
 
65. c  (2,1,1,5,0), b  7,1,9 , A   2 0 3 2  1.
3 0 1 1 6 
 
 4 1 1 2 1
 
66. c  (6,0,1,1,2), b  8,2,2 , A   2  1 0 1 0 .
1 1 0 0 1
 
1 2 3 4 1
 
67. c  (5,1,3,1,0), b  7,7,12, A   0 3  1 4 0 .
0 4 0 8 1
 
3 4 1 0 0
 
68. c  (5,3,2,1,1), b  12,16,3, A   3 2 1 1 1 .
1  3 0 0 1
 
1 1 1 0 0
 
69. c  (7,0,1,1,1), b  1,12,4 , A   2 2 1 1 2 .
 2 1 0 0 1
 
 1 1 1 0 0
 
70. c  (6,1,2,1,1), b  2,11,6 , A   5 2 1 1 1 .
 3 2 0 0 1
 
 2 1 1 1 3
 
71. c  (0,0,3,2,1), b  5,7,2 , A   3 0 2  1 6 .
1 0 1 2 1
 

6 3 1 1 1
 
72. c  (1,7,2,1,1), b  20,12,6, A   4 3 0 1 0 .
 3  2 0 0 1
 
 1 2 1 0 0
 
73. c  (2,0,1,1,1), b  2,14,1, A   3 5 1 1 2 .
 1 1 0 0 1
 
 1 2 1 0 0
 
74. c  (6,1,0,1,2), b  2,18,2, A   2 6 2 1 1 .
 1  2 0 0 1
 
 1 2 1 0 0
 
75. c  (0,3,1,1,1), b  2,2,6, A   1 1 0 1 0 .
 2 1 1 1 2
 
 2 2 1 1 1
 
76. c  (3,0,1,2,1), b  6,2,2, A   2  1 0 1 0 .
1 1 0 0 1
 
 1 1 1 0 0
 
77. c  (0,5,1,1,1), b  2,2,10, A   1  2 0 1 0 .
2 2 1 1 2 

 3 4 1 0 0
 
78. c  (1,5,2,1,1), b  12,1,3, A    1 1 0 1 0 .
 3 2 1 1 1
 
 1 1 1 0 0
 
79. c  (5,0,1,1,1), b  1,3,12, A    3 1 0 1 0 .
 2 2 1 1 2 

 1 1 1 0 0
 
80. c  (7,0,2,1,1), b  2,3,11, A   3  1 0 1 0 .
5 2 1 1 1 

 5 5 1 2 1
 
81. c  (1,4,1,1,1), b  28,2,12, A    1 2 0 1 0 .
 3 4 0 0 1
 
 1 2 1 0 0
 
82. c  (0,8,2,1,1), b  2,20,6 , A   6 3 1 1 1 .
 3  2 0 0 1
 
 1 2 1 0 0
 
83. c  (7,2,0,1,2), b  2,12,18, A   3 4 0 1 0 .
 2 6 2 1 1
 
1 2 1 0 0
 
84. c  (1,3,1,1,1), b  2,6,1, A   2 1 1 1 2 .
 1 1 0 0 1
 
1 1 2 2 1 
 
85. c  (1,2,1,1,1), b  11,2,3, A  1  2 0 1 0 .
1 1 0 0 1 
 

2 3 1 2 1
 
86. c  (10,5,2,1,1), b  17,1,3, A    1 1 0 1 0 .
 1  3 0 0 1
 
 1 0 2 1 3
 
87. c  (2,1,3,1,1), b  6,16,7 , A   2 2 4 8 4 .
1 0 1 7 1
 

 2 6 1 1 1
 
88. c  (1,11,1,2,1), b  13,10,1, A   2 5 0 1 0 .
7  4 0 0 1
 
 1 1 1 0 0
 
89. c  ( 2,1,1,1,1), b  2,11,3, A   1 1 2 2 1 .
 1 1 0 0 1
 

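For the small dimensions used in these variants (n = 5, m = 3), a canonical problem J(u) = ⟨c, u⟩ → min, Au = b, u ≥ 0 can be checked by enumerating basic feasible solutions, since the minimum, when it is attained, is reached at an extreme point of U. A minimal sketch in Python with illustrative data (the vectors and matrix in the example are made up, not taken from a particular variant):

```python
from fractions import Fraction
from itertools import combinations

def gauss(M):
    """Solve the square augmented system M = [A_B | b] exactly; None if singular."""
    m = len(M)
    for k in range(m):
        piv = next((i for i in range(k, m) if M[i][k] != 0), None)
        if piv is None:
            return None
        M[k], M[piv] = M[piv], M[k]
        M[k] = [v / M[k][k] for v in M[k]]
        for i in range(m):
            if i != k and M[i][k] != 0:
                M[i] = [a - M[i][k] * bb for a, bb in zip(M[i], M[k])]
    return [M[i][m] for i in range(m)]

def solve_canonical_lp(c, A, b):
    """Minimize <c,u> over U = {u : Au = b, u >= 0} by enumerating
    basic feasible solutions (extreme points of U)."""
    m, n = len(A), len(A[0])
    best = None  # (value, point)
    for cols in combinations(range(n), m):
        M = [[Fraction(A[i][j]) for j in cols] + [Fraction(b[i])] for i in range(m)]
        x = gauss(M)
        if x is None:                 # singular basis, skip
            continue
        if any(xi < 0 for xi in x):   # basic solution is not feasible
            continue
        u = [Fraction(0)] * n
        for j, xi in zip(cols, x):
            u[j] = xi
        val = sum(ci * ui for ci, ui in zip(c, u))
        if best is None or val < best[0]:
            best = (val, u)
    return best

# Illustrative data (not one of the variants above):
# minimize u1 + u2  subject to  u1 + u3 = 2,  u2 + u3 = 3,  u >= 0
val, point = solve_canonical_lp([1, 1, 0], [[1, 0, 1], [0, 1, 1]], [2, 3])
print(val, point)   # minimum 1 at u = (0, 1, 2)
```

Enumerating all C(n, m) bases is exponential in general and is only viable here because n and m are tiny; the simplex method of Lectures 15–17 visits the same extreme points selectively.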
Appendix 3

KEYS

To the 1-st task

One can construct the matrix J″(u) and check its definiteness on U according to Theorem 3 (Lecture 4). Variants 2 – 89 are handled similarly.
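The definiteness check of J″(u) can be carried out with Sylvester's criterion: a symmetric matrix is positive definite iff all of its leading principal minors are positive. A minimal sketch for a constant Hessian (the matrices below are illustrative, not taken from the tasks):

```python
def leading_minors(H):
    """Leading principal minors of a square matrix H (Laplace expansion)."""
    def det(M):
        if len(M) == 1:
            return M[0][0]
        return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
                   for j in range(len(M)))
    return [det([row[:k] for row in H[:k]]) for k in range(1, len(H) + 1)]

def is_positive_definite(H):
    # Sylvester's criterion: positive definite iff all leading minors > 0.
    return all(d > 0 for d in leading_minors(H))

# Hessian of the illustrative quadratic J(u) = 2u1^2 + u2^2 + u1*u2:
H = [[4, 1], [1, 2]]
print(leading_minors(H))   # [4, 7] -> positive definite, so J is strictly convex
```

Note that for positive *semi*definiteness (plain convexity) the leading minors alone are not enough; all principal minors must be nonnegative.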

To the 2-nd task


1. J(u*) = 2.50;    17. J(u*) = 8.00;    33. J(u*) = 20.82;
2. J(u*) = 4.75;    18. J(u*) = 5.75;    34. J(u*) = 15.00;
3. J(u*) = 11.00;   19. J(u*) = 8.40;    35. J(u*) = 66.0;
4. J(u*) = 11.00;   20. J(u*) = 12.76;   36. J(u*) = 25.12;
5. J(u*) = 11.12;   21. J(u*) = 17.00;   37. J(u*) = 47.0;
6. J(u*) = 2.50;    22. J(u*) = 15.84;   38. J(u*) = 23.50;
7. J(u*) = 5.67;    23. J(u*) = 4.45;    39. J(u*) = 12.07;
8. J(u*) = 5.29;    24. J(u*) = 3.77;    40. J(u*) = 25.13;
9. J(u*) = 6.25;    25. J(u*) = 4.20;    41. J(u*) = 4.60;
10. J(u*) = 9.59;   26. J(u*) = 11.02;   42. J(u*) = 40.0;
11. J(u*) = 12.50;  27. J(u*) = 10.12;   43. J(u*) = 23.45;
12. J(u*) = 9.59;   28. J(u*) = 9.53;    44. J(u*) = 112.0;
13. J(u*) = 24.34;  29. J(u*) = 13.00;   45. J(u*) = 103.5;
14. J(u*) = 38.91;  30. J(u*) = 6.10;    46. J(u*) = 41.23;
15. J(u*) = 27.99;  31. J(u*) = 41.00;   47. J(u*) = 6.75;
16. J(u*) = 4.57;   32. J(u*) = 25.81;   48. J(u*) = 40.0.

To the 3-rd task

1. J(u*) = 5;        17. J(u*) = 15;       33. J(u*) = 20/3;
2. J(u*) = 6;        18. J(u*) = 4/3;      34. J(u*) = 22/3;
3. J(u*) = 6;        19. J(u*) = 14;       35. J(u*) = 437/13;
4. J(u*) = 14/3;     20. J(u*) = 43/7;     36. J(u*) = 31/3;
5. J(u*) = 134/7;    21. J(u*) = 99/5;     37. U* = ∅;
6. J(u*) = −137/33;  22. J(u*) = 17/3;     38. J(u*) = 89/5;
7. J(u*) = 0;        23. J(u*) = 7;        39. J(u*) = 19;
8. J(u*) = 11/3;     24. J(u*) = 6;        40. J(u*) = 21;
9. J(u*) = 0;        25. U* = ∅;           41. J(u*) = 3;
10. J(u*) = 7;       26. J(u*) = 27/5;     42. J(u*) = 17/3;
11. J(u*) = 54/13;   27. J(u*) = 16;       43. J(u*) = 110/7;
12. J(u*) = 10;      28. J(u*) = 26;       44. J(u*) = 19/2;
13. J(u*) = 6;       29. J(u*) = 12;       45. J(u*) = 5;
14. J(u*) = −7/6;    30. J(u*) = −3/7;     46. J(u*) = 77/5;
15. J(u*) = 26;      31. J(u*) = 118/5;    47. J(u*) = 389/13;
16. J(u*) = 29/5;    32. J(u*) = 19/3;     48. J(u*) = 72/5;
49. J(u*) = 16;      55. J(u*) = 438/13;   61. J(u*) = 5;
50. J(u*) = 23;      56. J(u*) = 8;        62. J(u*) = 133/37;
51. J(u*) = 23/3;    57. J(u*) = 24/5;     63. J(u*) = 21/10;
52. J(u*) = 6;       58. J(u*) = 22;       64. J(u*) = 1;
53. J(u*) = 4;       59. J(u*) = 28;       65. J(u*) = −5/17;
54. J(u*) = 26/5;    60. U* = ∅.
Appendix 4

TESTS

1) Weierstrass' theorems define
A) Sufficient conditions that the set U* = {u* ∈ U / I(u*) = min_{u∈U} I(u)} ≠ ∅
B) Necessary and sufficient conditions that the set U* = {u* ∈ U / I(u*) = min_{u∈U} I(u)} ≠ ∅
C) Necessary conditions that the set U* = {u* ∈ U / I(u*) = min_{u∈U} I(u)} ≠ ∅
D) Necessary and sufficient conditions for the convexity of the set U* = {u* ∈ U / I(u*) = min_{u∈U} I(u)}
E) Necessary conditions that the point u* ∈ U is a minimum point of the function I(u) on the set U

2) Sufficient conditions that the set U* = {u* ∈ U / I(u*) = min_{u∈U} I(u)} ≠ ∅ are given by
A) Weierstrass' theorems
B) Farkas' theorems
C) Kuhn-Tucker's theorems
D) Lagrange's theorems
E) None of A)–D) is correct.

3) A set U ⊂ E^n is called convex if
A) ∀u ∈ U, ∀v ∈ U, ∀α ∈ [0,1] the point u_α = αu + (1 − α)v ∈ U
B) ∃u ∈ U, ∃v ∈ U such that ∀α ∈ [0,1]: u_α = αu + (1 − α)v ∈ U
C) ∀u ∈ U, ∀v ∈ U, ∀α ∈ [0,1]: u_α = αu + (1 − α)v
D) ∀u ∈ U, ∀v ∈ U there exists a number α ∈ [0,1] for which u_α = αu + (1 − α)v ∈ U
E) ∀u ∈ U, ∀v ∈ U, ∀α ∈ E^1 the point u_α = αu + (1 − α)v ∈ U

4) A set U ⊂ E^n is convex iff
A) it contains all convex linear combinations of any finite number of its own points
B) for every u ∈ U, v ∈ U one can construct a point u_α = αu + (1 − α)v ∈ U which is a convex linear combination of the points u, v
C) it is bounded and closed
D) ∀v ∈ U there exists an ε > 0 vicinity S_ε(v) ⊂ U
E) from any sequence {u_k} ⊂ U one can select a convergent subsequence

5) A function I(u) defined on a convex set U ⊂ E^n is called convex if:
A) ∀u ∈ U, ∀v ∈ U, ∀α ∈ [0,1]: I(αu + (1 − α)v) ≤ αI(u) + (1 − α)I(v)
B) ∀{u_k} ⊂ U with lim_{k→∞} u_k = u: I(u) ≤ lim_{k→∞} I(u_k)
C) ∀u ∈ U, ∀v ∈ U, ∀α ∈ [0,1]: I(αu + (1 − α)v) ≥ αI(u) + (1 − α)I(v)
D) ∀{u_k} ⊂ U with lim_{k→∞} u_k = u: I(u) ≥ lim_{k→∞} I(u_k)
E) None of A)–D) is correct.

6) A necessary and sufficient condition for a continuously differentiable function I(u) to be convex on a convex set U ⊂ E^n is:
A) I(u) − I(v) ≥ ⟨I′(v), u − v⟩, ∀u ∈ U, ∀v ∈ U
B) I(u) − I(v) ≥ I(αu + (1 − α)v), ∀u ∈ U, ∀v ∈ U, ∀α ∈ [0,1]
C) ⟨I′(u*), u − u*⟩ ≥ 0, ∀u ∈ U
D) ⟨I′(u*), u − u*⟩ ≤ 0, ∀u ∈ U
E) I(u) − I(v) ≤ I(αu + (1 − α)v), ∀u ∈ U, ∀v ∈ U, ∀α ∈ [0,1]

7) A necessary and sufficient condition for a continuously differentiable function I(u) to be convex on a convex set U ⊂ E^n is:
A) ⟨I′(u) − I′(v), u − v⟩ ≥ 0, ∀u ∈ U, ∀v ∈ U
B) ⟨I′(u) − I′(v), u − v⟩ ≤ 0, ∀u ∈ U, ∀v ∈ U
C) I(u) − I(v) ≥ I′(u) − I′(v), ∀u ∈ U, ∀v ∈ U
D) I(αu + (1 − α)v) ≥ I(u) + I(v), ∀u ∈ U, ∀v ∈ U
E) I(αu + (1 − α)v) ≤ I(u) + I(v), ∀u ∈ U, ∀v ∈ U


8) Function I u determined on convex ensemble U  E n is
identified strongly convex on U , if
A)
  0, I  u  1 v    I u   1  I v  1  u  v 2,
      

u U ,v U ,  0,1 




I  u  1   v    I u   1   I v ,
B)
u  U , v  U ,   0,1
I  u  1   v    I u   1   I v ,
C)
u  U , v  U ,   0,1
D)
   0, I  u  1   v    I u  1    I v   1    u  v 2 ,
 

 u  U ,  v U ,    0,1
E) There isn’t any correct answer amongst A)-D).

9) In order for a twice continuously differentiable function I(u) on a convex set U to be convex on U it is necessary and sufficient that
A) ⟨I″(u)ξ, ξ⟩ ≥ 0, ∀u ∈ U, ∀ξ ∈ E^n
B) det I″(u) ≥ 0, ∀u ∈ U
C) ⟨I″(u)ξ, ξ⟩ ≤ 0, ∀u ∈ U, ∀ξ ∈ E^n
D) ⟨I″(u)u, v⟩ ≥ 0, ∀u ∈ U, ∀v ∈ U
E) ⟨I″(u)u, v⟩ ≤ 0, ∀u ∈ U, ∀v ∈ U

10) Choose the correct statement
A) If I(u), G(u) are convex functions defined on a convex set U and α ≥ 0, β ≥ 0, then the function αI(u) + βG(u) is also convex on the set U
B) If I(u), G(u) are convex functions defined on a convex set U ⊂ E^n, then the function I(u) − G(u) is also convex on U
C) If I(u), G(u) are convex functions defined on a convex set U ⊂ E^n, then the function I(u)·G(u) is also convex on U
D) If I(u), G(u) are convex functions defined on a convex set U ⊂ E^n and G(u) ≠ 0, ∀u ∈ U, then I(u)/G(u) is a convex function on U
E) None of A)–D) is correct.

11) Choose the correct statement
A) The intersection of two convex sets is a convex set
B) The union of two convex sets is a convex set
C) If U ⊂ E^n is a convex set, then the set E^n \ U is also convex
D) If U1 ⊂ E^n, U2 ⊂ E^n are convex sets, then the set U1 \ U2 is also convex
E) None of A)–D) is correct.

12) Define the type of the following problem: I(u) = ⟨c, u⟩ → inf,
u ∈ U = {u ∈ E^n / u_j ≥ 0, j ∈ I, g_i(u) = ⟨a^i, u⟩ − b_i ≤ 0, i = 1,…,m, g_i(u) = ⟨a^i, u⟩ − b_i = 0, i = m+1,…,s}
A) General problem of linear programming
B) Canonical problem of linear programming
C) Nondegenerate problem of linear programming
D) Problem of nonlinear programming
E) The simplest variational problem

I u  c, u  inf
13) Define type of the following problem
n
u U  {u  E / u  0, i  1, n, Au  b}
j
A) General problem of linear programming
B) Canonical problem of linear programming
C) Nondegenerate problem of linear programming
D) Problem of nonlinear programming
E) The simplest variational problem

14) The function I(u) = u1² − 2u1u2 + u2² on the set U = E^n is
A) convex
B) concave
C) neither convex nor concave
D) convex for u1 ≥ 0, u2 ≥ 0 and concave for u1 ≤ 0, u2 ≤ 0
E) convex for u1 ≤ 0, u2 ≤ 0 and concave for u1 ≥ 0, u2 ≥ 0
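Assuming the intended function in question 14 is I(u) = u1² − 2u1u2 + u2² = (u1 − u2)², its Hessian [[2, −2], [−2, 2]] is constant and positive semidefinite (eigenvalues 0 and 4), so by the criterion of question 9 the function is convex, though not strictly. A quick numeric sanity check of the convexity inequality at random points:

```python
import random

def I(u1, u2):
    return u1 * u1 - 2 * u1 * u2 + u2 * u2   # = (u1 - u2)^2

# The Hessian of I is constant: [[2, -2], [-2, 2]], eigenvalues 0 and 4 >= 0.
# Check I(a*u + (1-a)*v) <= a*I(u) + (1-a)*I(v) on random sample points.
random.seed(0)
for _ in range(1000):
    u = (random.uniform(-5, 5), random.uniform(-5, 5))
    v = (random.uniform(-5, 5), random.uniform(-5, 5))
    a = random.random()
    w = (a * u[0] + (1 - a) * v[0], a * u[1] + (1 - a) * v[1])
    assert I(*w) <= a * I(*u) + (1 - a) * I(*v) + 1e-9   # small float tolerance
print("convexity inequality holds at all sampled points")
```

Such a sampling check can of course only refute convexity, never prove it; the Hessian criterion is the actual proof.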

15) Define the type of the problem: I(u) = 2u1² − u2² → inf,
u ∈ U = {u ∈ E^2 / 2u1 + u2 ≤ 3, u1 + 4u2 ≤ 5}
A) Problem of nonlinear programming
B) Canonical problem of linear programming
C) Convex programming problem
D) General problem of linear programming
E) The simplest variational problem

16) Define the type of the problem: I(u) = 2u1 − u2 → inf,
u ∈ U = {u ∈ E^2 / u2 ≥ 0, u1 + u2² ≤ 4, 2u1 + u2 ≤ 2}
A) Convex programming problem
B) Canonical problem of linear programming
C) General problem of linear programming
D) Problem of nonlinear programming
E) The simplest variational problem

17) Define the type of the problem: I(u) = 2u1 + u2 → inf,
u ∈ U = {u ∈ E^2 / u1 ≥ 0, −u1 + 4u2 ≤ 2, u1 + 3u2 ≤ 4}
A) General problem of linear programming
B) Canonical problem of linear programming
C) Convex programming problem
D) Problem of nonlinear programming
E) The simplest variational problem

18) A sequence {u_k} ⊂ U is called minimizing for a function I(u) defined on a set U if
A) lim_{k→∞} I(u_k) = I*, where I* = inf_{u∈U} I(u)
B) I(u_{k+1}) ≤ I(u_k), k = 1, 2, …
C) lim_{k→∞} u_k = u, moreover u ∈ U
D) I(u_{k+1}) < I(u_k), k = 1, 2, …
E) None of A)–D) is correct.

19) Choose the correct statement for the problem I(u) → inf, u ∈ U ⊂ E^n
A) For any function I(u) and set U ⊂ E^n there always exists a minimizing sequence {u_k} ⊂ U for the function I(u)
B) If the function I(u) is continuously differentiable on the set U, then it reaches its minimum value on U, i.e. there exists u* ∈ U such that I(u*) = min_{u∈U} I(u)
C) If a minimizing sequence {u_k} ⊂ U exists for the function I(u), then there exists u* ∈ U such that I(u*) = min_{u∈U} I(u)
D) If a point u* ∈ U exists such that I(u*) = min_{u∈U} I(u), then a minimizing sequence {u_k} ⊂ U exists for the function I(u)
E) None of A)–D) is correct.

 
20) Choose the correct statement for the problem I(u) → inf, u ∈ U ⊂ E^n, U* = {u* ∈ U / I(u*) = min_{u∈U} I(u)}
A) If the function I(u) is lower semicontinuous on a compact set U, then the set U* ≠ ∅
B) If the set U is convex and the function I(u) is continuous on U, then the set U* ≠ ∅
C) If the set U is bounded and the function I(u) is convex on the set U, then the set U* ≠ ∅
D) If the function I(u) is lower semicontinuous on the set U and the set U is bounded, then U* ≠ ∅
E) If the function I(u) is continuously differentiable on a closed set U, then U* ≠ ∅
21) The simplex method is used for solving
A) the linear programming problem in canonical form
B) the linear programming problem in general form
C) the convex programming problem
D) the nonlinear programming problem
E) the optimal programming problem

22) The Kuhn-Tucker theorems
A) define necessary and sufficient conditions that in the convex programming problem for each point u* ∈ U* = {u* ∈ U / I(u*) = min_{u∈U} I(u)} there exist Lagrange multipliers λ* ≥ 0 such that the pair (u*, λ*) forms a saddle point of the Lagrange function
B) define necessary and sufficient conditions for the existence of minimizing sequences {u_k} ⊂ U for the function I(u)
C) define necessary and sufficient conditions for the convexity of the function I(u) on a convex set U ⊂ E^n
D) define necessary conditions that in the convex programming problem the set U* = {u* ∈ U / I(u*) = min_{u∈U} I(u)} consists of a single point
E) define necessary and sufficient conditions that the set U* = {u* ∈ U / I(u*) = min_{u∈U} I(u)} ≠ ∅
23) For the convex programming problem of the form
I(u) → inf, u ∈ U = {u ∈ E^n / u ∈ U0, g_i(u) ≤ 0, i = 1,…,m}
the Lagrange function has the form
A) L(u, λ) = I(u) + Σ_{i=1}^{m} λ_i g_i(u), u ∈ U0, λ ∈ Λ0 = {λ ∈ E^m / λ_1 ≥ 0, …, λ_m ≥ 0}
B) L(u, λ) = Σ_{i=1}^{m} λ_i g_i(u), λ_i ≥ 0, i = 1,…,m, Σ_{i=1}^{m} λ_i = 1, u ∈ U0
C) L(u, λ) = λ_0 I(u), u ∈ U0, λ_0 ∈ E^1
D) L(u, λ) = I(u + Σ_{i=1}^{m} λ_i g_i(u)), u ∈ U0, λ ∈ E^m
E) L(u, λ) = I(u) − Σ_{i=1}^{m} λ_i g_i(u), u ∈ U0, λ ∈ Λ0 = {λ ∈ E^m / λ_1 ≥ 0, …, λ_m ≥ 0}

24) For the convex programming problem of the form I(u) → inf,
u ∈ U = {u ∈ E^n / u ∈ U0, g_i(u) ≤ 0, i = 1,…,m, g_i(u) = ⟨a^i, u⟩ − b_i = 0, i = m+1,…,s}
the Lagrange function has the form
A) L(u, λ) = I(u) + Σ_{i=1}^{s} λ_i g_i(u), u ∈ U0, λ ∈ Λ0 = {λ ∈ E^s / λ_1 ≥ 0, …, λ_m ≥ 0}
B) L(u, λ) = Σ_{i=1}^{m} λ_i g_i(u), λ_i ≥ 0, i = 1,…,m, Σ_{i=1}^{m} λ_i = 1, u ∈ U0
C) L(u, λ) = I(u) + Σ_{i=1}^{m} λ_i g_i(u), u ∈ U0, λ ∈ E^m
D) L(u, λ) = I(u) + Σ_{i=1}^{s} λ_i g_i(u), u ∈ U0, λ ∈ Λ0 = {λ ∈ E^s / λ_1 ≥ 0, …, λ_s ≥ 0}
E) L(u, λ) = I(u), u ∈ U0, λ ∈ E^1
25) For the convex programming problem, necessary and sufficient conditions that for any point u* ∈ U* = {u* ∈ U / I(u*) = min_{u∈U} I(u)} there exist Lagrange multipliers λ* ∈ Λ0 such that the pair (u*, λ*) forms a saddle point of the Lagrange function are defined by
A) Kuhn-Tucker's theorems
B) Lagrange's theorems
C) Weierstrass' theorem
D) Farkas' theorems
E) Bellman's theorems
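As a numeric illustration of the saddle-point property behind these theorems (a made-up example, not one from the lectures): for I(u) = u² → inf with the single constraint g(u) = 1 − u ≤ 0 and U0 = E¹, the Kuhn-Tucker pair is u* = 1, λ* = 2, and the Lagrange function L(u, λ) = u² + λ(1 − u) satisfies L(u*, λ) ≤ L(u*, λ*) ≤ L(u, λ*) for all u ∈ E¹, λ ≥ 0:

```python
def L(u, lam):
    """Lagrange function for I(u) = u^2 with constraint g(u) = 1 - u <= 0."""
    return u * u + lam * (1 - u)

u_star, lam_star = 1.0, 2.0

# Left inequality: L(u*, lam) <= L(u*, lam*) for every lam >= 0.
# Here it holds with equality, since g(u*) = 0 (the constraint is active).
for lam in [0.0, 0.5, 2.0, 7.0]:
    assert L(u_star, lam) <= L(u_star, lam_star) + 1e-12

# Right inequality: L(u*, lam*) <= L(u, lam*) for every u,
# since L(u, 2) = u^2 + 2 - 2u = (u - 1)^2 + 1 >= 1 = L(1, 2).
for u in [-3.0, 0.0, 0.9, 1.0, 1.1, 4.0]:
    assert L(u_star, lam_star) <= L(u, lam_star) + 1e-12

print("(u*, lam*) = (1, 2) is a saddle point of L")
```

The multiplier λ* = 2 comes from the stationarity condition L′_u(u*, λ*) = 2u* − λ* = 0 at the active constraint point u* = 1.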

26) Let I(u) be a convex function defined and continuously differentiable on a convex set U, and let the set U* = {u* ∈ U / I(u*) = min_{u∈U} I(u)} ≠ ∅. In order for a point u* ∈ U to be a minimum point of the function I(u) on the set U it is necessary and sufficient that
A) ⟨I′(u*), u − u*⟩ ≥ 0, ∀u ∈ U
B) ⟨I′(u*), u − u*⟩ ≤ 0, ∀u ∈ U
C) I(u) − I(u*) ≥ ⟨I′(u*), u − u*⟩, ∀u ∈ U
D) ⟨I″(u*)ξ, ξ⟩ ≥ 0, ∀ξ ∈ E^n
E) I′(u*) = 0
27) A pair (u*, λ*) ∈ U0 × Λ0 is called a saddle point of the Lagrange function L(u, λ) = I(u) + Σ_{i=1}^{s} λ_i g_i(u) if
A) L(u*, λ) ≤ L(u*, λ*) ≤ L(u, λ*), ∀u ∈ U0, ∀λ ∈ Λ0
B) L(u*, λ) ≤ L(u, λ*), ∀u ∈ U0, ∀λ ∈ Λ0
C) L(u*, λ*) = 0
D) L(u, λ*) ≤ L(u*, λ*) ≤ L(u*, λ), ∀u ∈ U0, ∀λ ∈ Λ0
E) u* ∈ U* = {u* ∈ U / I(u*) = min_{u∈U} I(u)} and λ*_i ≥ 0, i = 1,…,s

28) If the pair (u*, λ*) ∈ U0 × Λ0 is a saddle point of the Lagrange function L(u, λ) in the convex programming problem, then
A) the point u* ∈ U* = {u* ∈ U / I(u*) = min_{u∈U} I(u)}
B) the Lebesgue set M(u*) = {u ∈ U / I(u) ≤ I(u*)} is compact
C) there is a minimizing sequence {u_k} ⊂ U for the function I(u) such that lim_{k→∞} u_k = u*
D) the convex programming problem is nondegenerate
E) the set U* = {u* ∈ U / I(u*) = min_{u∈U} I(u)} contains a single point
29) For the nonlinear programming problem
I(u) → inf, u ∈ U = {u ∈ E^n / u ∈ U0, g_i(u) ≤ 0, i = 1,…,m, g_i(u) = 0, i = m+1,…,s}
the generalized Lagrange function has the form
A) L(u, λ) = λ_0 I(u) + Σ_{i=1}^{s} λ_i g_i(u), u ∈ U0, λ ∈ Λ0 = {λ ∈ E^{s+1} / λ_0 ≥ 0, λ_1 ≥ 0, …, λ_m ≥ 0}
B) L(u, λ) = I(u) + Σ_{i=1}^{m} λ_i g_i(u), u ∈ U0, λ ∈ E^m
C) L(u, λ) = λ_0 I(u) + Σ_{i=1}^{m} λ_i g_i(u), u ∈ U0, λ ∈ E^m
D) L(u, λ) = λ_0 I(u) + Σ_{i=1}^{s} λ_i g_i(u), u ∈ U0, λ ∈ E^{s+1}
E) L(u, λ) = λ_0 I(u) + Σ_{i=1}^{s} g_i(u), u ∈ U0, λ_0 ≥ 0

30) Let U ⊂ E^n be a convex set and let I(u) ∈ C¹(U). The condition ⟨I′(u*), u − u*⟩ ≥ 0, ∀u ∈ U, is
A) a necessary condition that the point u* ∈ U* = {u* ∈ U / I(u*) = min_{u∈U} I(u)}
B) a necessary and sufficient condition that the point u* ∈ U* = {u* ∈ U / I(u*) = min_{u∈U} I(u)}
C) a sufficient condition that the point u* ∈ U* = {u* ∈ U / I(u*) = min_{u∈U} I(u)}
D) a necessary and sufficient condition that the function I(u) is convex at the point u* ∈ U
E) a necessary and sufficient condition that the set U* = {u* ∈ U / I(u*) = min_{u∈U} I(u)} ≠ ∅
31) Indicate the correct statement
A) the convex programming problem is a particular case of the nonlinear programming problem
B) a nondegenerate nonlinear programming problem can be reduced to a convex programming problem
C) the convex programming problem is a particular case of the linear programming problem
D) any nonlinear programming problem can be reduced to a convex programming problem
E) the nonlinear programming problem is a particular case of the convex programming problem

32) For solving the nonlinear programming problem one uses
A) the Lagrange multipliers method
B) the simplex method
C) the method of least squares
D) the Pontryagin maximum principle
E) Bellman's dynamic programming method

33) Which of the listed methods can be used for solving the convex programming problem?
A) the Lagrange multipliers method
B) the simplex method
C) the method of least squares
D) the Pontryagin maximum principle
E) Bellman's dynamic programming method

34) Which of the listed methods can be used for solving the linear programming problem?
A) the simplex method
B) the method of least squares
C) the Pontryagin maximum principle
D) Bellman's dynamic programming method
E) any method from A)–D)

35) In the convex programming problem the minimum of the function I(u) on the set U can be reached
A) at interior or boundary points of the set U
B) only at boundary points of the set U
C) only at isolated points of the set U
D) only at interior points of the set U
E) at interior, boundary, or isolated points of the set U

36) In the nonlinear programming problem the minimum of the function I(u) on the set U can be reached
A) at interior, boundary, or isolated points of the set U
B) only at boundary points of the set U
C) only at isolated points of the set U
D) at interior or boundary points of the set U
E) only at interior points of the set U

37) If in the linear programming problem in canonical form
I(u) → inf, u ∈ U = {u ∈ E^n / u_j ≥ 0, j = 1,…,n, Au = b}
the set U* = {u* ∈ U / I(u*) = min_{u∈U} I(u)} contains a single point u*, then this point is
A) an extreme point of the set U
B) an isolated point of the set U
C) an interior point of the set U
D) an interior or extreme point of the set U
E) an extreme or isolated point of the set U

38) The linear programming problem in canonical form is
A) I(u) = ⟨c, u⟩ → inf, u ∈ U = {u ∈ E^n / u_j ≥ 0, j = 1,…,n, Au = b}
B) I(u) = ⟨c, u⟩ → inf, u ∈ U = {u ∈ E^n / u_j ≥ 0, j ∈ I, g_i(u) = ⟨a^i, u⟩ − b_i ≤ 0, i = 1,…,m, g_i(u) = ⟨a^i, u⟩ − b_i = 0, i = m+1,…,s}
C) I(u) = ⟨c, u⟩ → inf, u ∈ U = {u ∈ E^n / Au = b}
D) I(u) = ⟨c, u⟩ → inf, u ∈ U = {u ∈ E^n / u_j ≥ 0, j ∈ I, g_i(u) = ⟨a^i, u⟩ − b_i ≤ 0, i = 1,…,m}
E) I(u) = ⟨c, u⟩ → inf, u ∈ U = {u ∈ E^n / 0 ≤ g_i(u) ≤ b_i, i = 1,…,m}
39) The linear programming problem in canonical form I(u) = ⟨c, u⟩ → inf, u ∈ U = {u ∈ E^n / u_j ≥ 0, j = 1,…,n, Au = b} is called nondegenerate if:
A) any point u ∈ U has no less than rank A positive coordinates
B) rank A = m, where A is a constant matrix of dimension m × n, m < n
C) the set U* = {u* ∈ U / I(u*) = min_{u∈U} I(u)} ≠ ∅
D) rank A = m, where A is a constant matrix of dimension m × n, m ≥ n
E) any point u ∈ U has no more than rank A = m positive coordinates

40) An extreme point of the set U = {u ∈ E^n / u_j ≥ 0, j = 1,…,n, Au = b} is
A) a point u ∈ U which cannot be represented in the form u = αv + (1 − α)w, where α ∈ (0,1), v ∈ U, w ∈ U
B) an isolated point of the set U
C) a boundary point of the set U
D) a point u ∈ U represented in the form u = αv + (1 − α)w, where α ∈ (0,1), v ∈ U, w ∈ U
E) an interior point of the set U
CONTENTS

FOREWORD ..................................................................................................... 3
Introduction ........................................................................................................ 4
Lecture 1. THE MAIN DETERMINATIONS. STATEMENT
OF THE PROBLEM .......................................................................................... 4
Lecture 2. WEIERSTRASS’ THEOREM ......................................................... 12

Chapter I CONVEX PROGRAMMING. ELEMENTS OF THE


CONVEX ANALYSIS ....................................................................... 18
Lecture 3. CONVEX SETS ................................................................................ 18
Lecture 4. CONVEX FUNCTIONS .................................................................. 27
Lecture 5. THE ASSIGNMENT WAYS OF THE CONVEX SETS.
THEOREM ABOUT GLOBAL MINIMUM. OPTIMALITY CRITERIA.
THE POINT PROJECTION ON SET ................................................................ 36
Lectures 6, 7. SEPARABILITY OF THE CONVEX SETS .............................. 44
Lecture 8. LAGRANGE’S FUNCTION. SADDLE POINT ............................. 52
Lectures 9, 10. KUHN-TUCKER’S THEOREM ............................................. 60

Chapter II NONLINEAR PROGRAMMING .................................................. 75


Lectures 11, 12. STATEMENT OF THE PROBLEM.
NECESSARY CONDITIONS OF THE OPTIMALITY ................................... 76
Lecture 13. SOLUTION ALGORITHM OF THE NONLINEAR
PROGRAMMING PROBLEM ......................................................................... 88
Lecture 14. DUALITY THEORY ..................................................................... 97

Chapter III LINEAR PROGRAMMING ......................................................... 106


Lectures 15, 16. STATEMENT OF THE PROBLEM. SIMPLEX-METHOD .. 106
Lecture 17. DIRECTION CHOICE. NEW SIMPLEX-TABLE
CONSTRUCTION. THE INITIAL EXTREME POINT CONSTRUCTION ...... 120

References .......................................................................................................... 129


Appendix I. TASKS FOR INDEPENDENT WORK ......................................... 130
Appendix II. TASKS ON MATHEMATICAL PROGRAMMING ................... 161
Appendix III. KEYS........................................................................................... 189
Appendix IV. TESTS ......................................................................................... 192

206
Educational edition

Aisagaliev Serikbay Abdigalievich

Zhunussova Zhanat Khafizovna

MATHEMATICAL
PROGRAMMING
Textbook

Computer layout: T.E. Saparova

Cover design: G.K. Kurmanova

IB No. 5074
Signed for printing 16.03.11. Format 60×84 1/16. Offset paper.
RISO printing. Volume 13.00 printer's sheets. Print run 500 copies. Order No. 230.
Publishing house «Қазақ университеті» of Al-Farabi Kazakh National
University. 050040, Almaty, 71 al-Farabi Ave. KazNU.
Printed at the printing office of the publishing house «Қазақ университеті».

207
