Convex Optimization - Ben-Tal

The document discusses linear programming and its extensions to conic programming. It covers key linear programming concepts, such as duality, together with engineering applications, and then extends these ideas to conic quadratic and semidefinite programming, examining which classes of problems can be represented by conic quadratic and semidefinite constraints.
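As a minimal numeric illustration of the LP duality covered in Lecture 1, the sketch below checks the LP Duality Theorem on a toy instance: for the primal-dual pair it verifies feasibility of both solutions and a zero duality gap. The data, the solutions x and u, and the helper `dot` are illustrative assumptions, not taken from the book.

```python
# Toy instance of an LP duality pair (illustrative data, not from the book):
#   (P)  min  x1 + x2     s.t.  x1 + 2*x2 >= 3,  3*x1 + x2 >= 4,  x >= 0
#   (D)  max  3*u1 + 4*u2 s.t.  u1 + 3*u2 <= 1,  2*u1 + u2  <= 1,  u >= 0
# The LP Duality Theorem says the optimal values of (P) and (D) coincide.

A = [[1.0, 2.0], [3.0, 1.0]]   # constraint matrix of (P)
b = [3.0, 4.0]                 # right-hand side of (P)
c = [1.0, 1.0]                 # objective of (P)

x = [1.0, 1.0]                 # optimal primal solution (vertex of the feasible set)
u = [0.4, 0.2]                 # optimal dual solution

def dot(p, q):
    return sum(pi * qi for pi, qi in zip(p, q))

# Primal feasibility: A x >= b and x >= 0
assert all(dot(row, x) >= bi for row, bi in zip(A, b)) and min(x) >= 0
# Dual feasibility: A^T u <= c and u >= 0
At = [list(col) for col in zip(*A)]
assert all(dot(row, u) <= ci + 1e-12 for row, ci in zip(At, c)) and min(u) >= 0
# Zero duality gap: c^T x equals b^T u
print(dot(c, x), dot(b, u))    # both are 2 up to rounding
```

The same zero-gap check applies verbatim to the conic programs of Lectures 2 and 3, with the inequality order replaced by the order induced by the relevant cone.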


Contents

Main Notational Conventions 17

1 From Linear to Conic Programming 19


1.1 Linear programming: basic notions . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.2 Duality in Linear Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.2.1 Certificates for solvability and insolvability . . . . . . . . . . . . . . . . . 20
1.2.2 Dual to an LP program: the origin . . . . . . . . . . . . . . . . . . . . . . 25
1.2.3 The LP Duality Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
1.3 Selected Engineering Applications of LP . . . . . . . . . . . . . . . . . . . . . . . 29
1.3.1 Sparsity-oriented Signal Processing and ℓ1 minimization . . . . . . . . . . 29
1.3.1.1 Sparse recovery from deficient observations . . . . . . . . . . . . 30
1.3.1.2 s-goodness and nullspace property . . . . . . . . . . . . . . . . . 32
1.3.1.3 From nullspace property to error bounds for imperfect ℓ1 recovery 33
1.3.1.4 Compressed Sensing: Limits of performance . . . . . . . . . . . 35
1.3.1.5 Verifiable sufficient conditions for s-goodness . . . . . . . . . . . 36
1.3.2 Supervised Binary Machine Learning via LP Support Vector Machines . . 39
1.3.3 Synthesis of linear controllers . . . . . . . . . . . . . . . . . . . . . . . . . 42
1.3.3.1 Discrete time linear dynamical systems . . . . . . . . . . . . . . 42
1.3.3.2 Purified outputs and purified-output-based control laws . . . . . 44
1.4 From Linear to Conic Programming . . . . . . . . . . . . . . . . . . . . . . . . . 49
1.4.1 Orderings of Rm and cones . . . . . . . . . . . . . . . . . . . . . . . . . . 49
1.4.2 “Conic programming” – what is it? . . . . . . . . . . . . . . . . . . . . . . 52
1.4.3 Conic Duality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
1.4.4 Geometry of the primal and the dual problems . . . . . . . . . . . . . . . 56
1.4.5 Conic Duality Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
1.4.5.1 Refinement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
1.4.6 Is something wrong with conic duality? . . . . . . . . . . . . . . . . . . . 65
1.4.7 Consequences of the Conic Duality Theorem . . . . . . . . . . . . . . . . 66
1.4.7.1 Sufficient condition for infeasibility . . . . . . . . . . . . . . . . . 66
1.4.7.2 When is a scalar linear inequality a consequence of a given linear
vector inequality? . . . . . . . . . . . . . . . . . . . . . . . . . . 69
1.4.7.3 “Robust solvability status” . . . . . . . . . . . . . . . . . . . . . 70
1.5 Exercises for Lecture 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
1.5.1 Around General Theorem on Alternative . . . . . . . . . . . . . . . . . . 73
1.5.2 Around cones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
1.5.2.1 Calculus of cones . . . . . . . . . . . . . . . . . . . . . . . . . . . 75


1.5.2.2 Primal-dual pairs of cones and orthogonal pairs of subspaces . . 76


1.5.2.3 Several interesting cones . . . . . . . . . . . . . . . . . . . . . . 76
1.5.3 Around conic problems: Several primal-dual pairs . . . . . . . . . . . . . 77
1.5.4 Feasible and level sets of conic problems . . . . . . . . . . . . . . . . . . . 78
1.5.5 Operational exercises on engineering applications of LP . . . . . . . . . . 79

2 Conic Quadratic Programming 87


2.1 Conic Quadratic problems: preliminaries . . . . . . . . . . . . . . . . . . . . . . . 87
2.2 Examples of conic quadratic problems . . . . . . . . . . . . . . . . . . . . . . . . 89
2.2.1 Contact problems with static friction [35] . . . . . . . . . . . . . . . . . . 89
2.3 What can be expressed via conic quadratic constraints? . . . . . . . . . . . . . . 91
2.3.1 Elementary CQ-representable functions/sets . . . . . . . . . . . . . . . . . 93
2.3.2 Operations preserving CQ-representability of sets . . . . . . . . . . . . . . 94
2.3.3 Operations preserving CQ-representability of functions . . . . . . . . . . . 96
2.3.4 More operations preserving CQ-representability . . . . . . . . . . . . . . . 97
2.3.5 More examples of CQ-representable functions/sets . . . . . . . . . . . . . 107
2.3.6 Fast CQr approximations of exponent and logarithm . . . . . . . . . . . . 111
2.3.7 From CQR’s to K-representations of functions and sets . . . . . . . . . . 113
2.3.7.1 Conic representability of convex-concave function—definition . . 114
2.3.7.2 Main observation . . . . . . . . . . . . . . . . . . . . . . . . . . 115
2.3.7.3 Symmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
2.3.7.4 Calculus of conic representations of convex-concave functions . . 116
2.3.8 Illustrations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
2.4 More applications: Robust Linear Programming . . . . . . . . . . . . . . . . . . 125
2.4.1 Robust Linear Programming: the paradigm . . . . . . . . . . . . . . . . . 125
2.4.2 Robust Linear Programming: examples . . . . . . . . . . . . . . . . . . . 127
2.4.3 Robust counterpart of uncertain LP with a CQr uncertainty set . . . . . . 136
2.4.4 CQ-representability of the optimal value in a CQ program as a function
of the data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
2.4.5 Affinely Adjustable Robust Counterpart . . . . . . . . . . . . . . . . . . . 141
2.4.5.1 Affinely Adjustable Robust Counterpart of LP . . . . . . . . . . 142
2.4.5.2 Example: Uncertain Inventory Management Problem . . . . . . 143
2.5 Does Conic Quadratic Programming exist? . . . . . . . . . . . . . . . . . . . . . 149
2.5.1 Proof of Theorem 2.5.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
2.6 Exercises for Lecture 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
2.6.1 Optimal control in discrete time linear dynamic system . . . . . . . . . . 154
2.6.2 Around stable grasp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
2.6.3 Around randomly perturbed linear constraints . . . . . . . . . . . . . . . 155
2.6.4 Around Robust Antenna Design . . . . . . . . . . . . . . . . . . . . . . . 157

3 Semidefinite Programming 161


3.1 Semidefinite cone and Semidefinite programs . . . . . . . . . . . . . . . . . . . . 161
3.1.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
3.1.1.1 Dual to a semidefinite program (SDP) . . . . . . . . . . . . . . . 163
3.1.1.2 Conic Duality in the case of Semidefinite Programming . . . . . 164
3.1.2 Comments. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
3.2 What can be expressed via LMI’s? . . . . . . . . . . . . . . . . . . . . . . . . . . 165

3.2.1 SD-representability of functions of eigenvalues of symmetric matrices . . . 167


3.3 Applications of Semidefinite Programming in Engineering . . . . . . . . . . . . . 180
3.3.1 Dynamic Stability in Mechanics . . . . . . . . . . . . . . . . . . . . . . . . 181
3.3.2 Truss Topology Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
3.3.3 Design of chips and Boyd’s time constant . . . . . . . . . . . . . . . . . . 185
3.3.4 Lyapunov stability analysis/synthesis . . . . . . . . . . . . . . . . . . . . 187
3.3.4.1 Uncertain dynamical systems . . . . . . . . . . . . . . . . . . . . 187
3.3.4.2 Stability and stability certificates . . . . . . . . . . . . . . . . . 188
3.3.4.3 Lyapunov Stability Synthesis . . . . . . . . . . . . . . . . . . . . 193
3.4 Semidefinite relaxations of intractable problems . . . . . . . . . . . . . . . . . . . 196
3.4.1 Semidefinite relaxations of combinatorial problems . . . . . . . . . . . . . 196
3.4.1.1 Combinatorial problems and their relaxations . . . . . . . . . . . 196
3.4.1.2 Shor’s Semidefinite Relaxation scheme . . . . . . . . . . . . . . . 197
3.4.1.3 When is the semidefinite relaxation exact? . . . . . . . . . . . . 200
3.4.1.4 Stability number, Shannon and Lovász capacities of a graph . . 200
3.4.1.5 The MAXCUT problem and maximizing quadratic form over a
box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
3.4.1.6 Nesterov's π/2 Theorem . . . . . . . . . . . . . . . . . . . . . . 207
3.4.1.7 Shor’s semidefinite relaxation revisited . . . . . . . . . . . . . . 209
3.4.2 Semidefinite relaxation on ellitopes and its applications . . . . . . . . . . 210
3.4.2.1 Ellitopes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
3.4.2.2 Construction and main result . . . . . . . . . . . . . . . . . . . . 211
3.4.2.3 Application: Near-optimal linear estimation . . . . . . . . . . . 215
3.4.2.4 Application: Tight bounding of operator norms . . . . . . . . . 217
3.4.3 Matrix Cube Theorem and interval stability analysis/synthesis . . . . . . 220
3.4.3.1 The Matrix Cube Theorem . . . . . . . . . . . . . . . . . . . . . 221
3.4.3.2 Application: Lyapunov Stability Analysis for an interval matrix 225
3.4.3.3 Application: Nesterov's π/2 Theorem revisited . . . . . . . . . . 227
3.4.3.4 Application: Bounding robust ellitopic norms of uncertain matrix with box uncertainty . . . . . . . . . . . . . . . . . . . . . . 228
3.5 S-Lemma and Approximate S-Lemma . . . . . . . . . . . . . . . . . . . . . . . . 235
3.5.1 S-Lemma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
3.5.2 Inhomogeneous S-Lemma . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
3.5.3 Approximate S-Lemma . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
3.5.3.1 Application: Approximating Affinely Adjustable Robust Counterpart of Uncertain Linear Programming problem with ellitopic uncertainty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
3.5.3.2 Application: Robust Conic Quadratic Programming with ellitopic uncertainty . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
3.6 Semidefinite Relaxation and Chance Constraints . . . . . . . . . . . . . . . . . . 247
3.6.1 Chance constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
3.6.2 Safe tractable approximations of chance constraints . . . . . . . . . . . . 249
3.6.3 Situation and goal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
3.6.4 Approximating chance constraints via Lagrangian relaxation . . . . . . . 250
3.6.4.1 Illustration I . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
3.6.5 Modification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
3.6.5.1 Illustration II . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254

3.7 Extremal ellipsoids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255


3.7.1 Preliminaries on ellipsoids . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
3.7.2 Outer and inner ellipsoidal approximations . . . . . . . . . . . . . . . . . 257
3.7.2.1 Inner ellipsoidal approximation of a polytope . . . . . . . . . . . 257
3.7.2.2 Outer ellipsoidal approximation of a finite set . . . . . . . . . . . 258
3.7.3 Ellipsoidal approximations of unions/intersections of ellipsoids . . . . . . 260
3.7.3.1 Inner ellipsoidal approximation of the intersection of full-dimensional
ellipsoids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
3.7.3.2 Outer ellipsoidal approximation of the union of ellipsoids . . . . 262
3.7.4 Approximating sums of ellipsoids . . . . . . . . . . . . . . . . . . . . . . . 262
3.7.4.1 Problem (O) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
3.7.4.2 Problem (I) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
3.8 Exercises for Lecture 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
3.8.1 Around positive semidefiniteness, eigenvalues and ⪰-ordering . . . . . . 273
3.8.1.1 Criteria for positive semidefiniteness . . . . . . . . . . . . . . . . 273
3.8.1.2 Variational characterization of eigenvalues . . . . . . . . . . . . 273
3.8.1.3 Birkhoff’s Theorem . . . . . . . . . . . . . . . . . . . . . . . . . 276
3.8.1.4 Semidefinite representations of functions of eigenvalues . . . . . 280
3.8.1.5 Cauchy’s inequality for matrices . . . . . . . . . . . . . . . . . . 281
3.8.1.6 ⪰-convexity of some matrix-valued functions . . . . . . . . . . 283
3.8.2 SD representations of epigraphs of convex polynomials . . . . . . . . . . . 284
3.8.3 Around the Lovász capacity number and semidefinite relaxations of combinatorial problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
3.8.4 Around operator norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
3.8.5 Around Lyapunov Stability Analysis . . . . . . . . . . . . . . . . . . . . . 293
3.8.6 Around ellipsoidal approximations . . . . . . . . . . . . . . . . . . . . . . 294
3.8.6.1 More on ellipsoidal approximations of sums of ellipsoids . . . . . 294
3.8.6.2 “Simple” ellipsoidal approximations of sums of ellipsoids . . . . 296
3.8.6.3 Invariant ellipsoids . . . . . . . . . . . . . . . . . . . . . . . . . . 297
3.8.6.4 Greedy infinitesimal ellipsoidal approximations . . . . . . . . . . 298
3.8.7 Around S-Lemma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
3.8.7.1 A straightforward proof of the standard S-Lemma . . . . . . . . 301
3.8.7.2 S-Lemma with a multi-inequality premise . . . . . . . . . . . . . 302
3.8.7.3 Relaxed versions of S-Lemma . . . . . . . . . . . . . . . . . . . . 308
3.8.8 Around Chance constraints . . . . . . . . . . . . . . . . . . . . . . . . . . 310

4 Polynomial Time Interior Point algorithms for LP, CQP and SDP 315
4.1 Complexity of Convex Programming . . . . . . . . . . . . . . . . . . . . . . . . . 315
4.1.1 Combinatorial Complexity Theory . . . . . . . . . . . . . . . . . . . . . . 315
4.1.2 Complexity in Continuous Optimization . . . . . . . . . . . . . . . . . . . 318
4.1.3 Computational tractability of convex optimization problems . . . . . . . . 319
4.1.3.1 What is inside Theorem 4.1.1: Black-box represented convex
programs and the Ellipsoid method . . . . . . . . . . . . . . . . 321
4.1.3.2 Proof of Theorem 4.1.2: the Ellipsoid method . . . . . . . . . . 322
4.1.4 Difficult continuous optimization problems . . . . . . . . . . . . . . . . . 331
4.2 Interior Point Polynomial Time Methods for LP, CQP and SDP . . . . . . . . . . 332
4.2.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332

4.2.2 Interior Point methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333


4.2.2.1 The Newton method and the Interior penalty scheme . . . . . . 333
4.2.3 But... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
4.3 Interior point methods for LP, CQP, and SDP: building blocks . . . . . . . . . . 338
4.3.1 Canonical cones and canonical barriers . . . . . . . . . . . . . . . . . . . . 338
4.3.2 Elementary properties of canonical barriers . . . . . . . . . . . . . . . . . 340
4.4 Primal-dual pair of problems and primal-dual central path . . . . . . . . . . . . . 342
4.4.1 The problem(s) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
4.4.2 The central path(s) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
4.4.2.1 On the central path . . . . . . . . . . . . . . . . . . . . . . . . . 345
4.4.2.2 Near the central path . . . . . . . . . . . . . . . . . . . . . . . . 347
4.5 Tracing the central path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
4.5.1 The path-following scheme . . . . . . . . . . . . . . . . . . . . . . . . . . 349
4.5.2 Speed of path-tracing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
4.5.3 The primal and the dual path-following methods . . . . . . . . . . . . . . 352
4.5.4 The SDP case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
4.5.4.1 The path-following scheme in SDP . . . . . . . . . . . . . . . . . 355
4.5.4.2 Complexity analysis . . . . . . . . . . . . . . . . . . . . . . . . . 357
4.6 Complexity bounds for LP, CQP, SDP . . . . . . . . . . . . . . . . . . . . . . . . 368
4.6.1 Complexity of LPb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
4.6.2 Complexity of CQPb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
4.6.3 Complexity of SDPb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
4.7 Concluding remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
4.8 Exercises for Lecture 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
4.8.1 Around canonical barriers . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
4.8.2 Scalings of canonical cones . . . . . . . . . . . . . . . . . . . . . . . . . . 373
4.8.3 The Dikin ellipsoid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
4.8.4 More on canonical barriers . . . . . . . . . . . . . . . . . . . . . . . . . . 377
4.8.5 Around the primal path-following method . . . . . . . . . . . . . . . . . . 378
4.8.6 Infeasible start path-following method . . . . . . . . . . . . . . . . . . . . 380

5 Simple methods for large-scale problems 389


5.1 Motivation: Why simple methods? . . . . . . . . . . . . . . . . . . . . . . . . . . 389
5.1.1 Black-box-oriented methods and Information-based complexity . . . . . . 391
5.1.2 Main results on Information-based complexity of Convex Programming . 392
5.2 The Simplest: Subgradient Descent and Euclidean Bundle Level . . . . . . . . . 395
5.2.1 Subgradient Descent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
5.2.2 Incorporating memory: Euclidean Bundle Level Algorithm . . . . . . . . 398
5.3 Mirror Descent algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
5.3.1 Problem and assumptions . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
5.3.2 Proximal setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
5.3.3 Standard proximal setups . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
5.3.3.1 Ball setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
5.3.3.2 Entropy setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
5.3.3.3 ℓ1/ℓ2 and Simplex setups . . . . . . . . . . . . . . . . . . . . . 406
5.3.3.4 Nuclear norm and Spectahedron setups . . . . . . . . . . . . . . 407
5.3.4 Mirror Descent algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 409

5.3.4.1 Basic Fact . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409


5.3.4.2 Standing Assumption . . . . . . . . . . . . . . . . . . . . . . . . 410
5.3.4.3 MD: Description . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
5.3.4.4 MD: Complexity analysis . . . . . . . . . . . . . . . . . . . . . . 411
5.3.4.5 Refinement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
5.3.4.6 MD: Optimality . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
5.3.5 Mirror Descent and Online Regret Minimization . . . . . . . . . . . . . . 416
5.3.5.1 Online regret minimization: what is it? . . . . . . . . . . . . . . 416
5.3.5.2 Online regret minimization via Mirror Descent, deterministic case 418
5.3.6 Mirror Descent for Saddle Point problems . . . . . . . . . . . . . . . . . . 420
5.3.6.1 Convex-Concave Saddle Point problem . . . . . . . . . . . . . . 420
5.3.6.2 Saddle point MD algorithm . . . . . . . . . . . . . . . . . . . . . 421
5.3.6.3 Refinement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
5.3.7 Mirror Descent for Stochastic Minimization/Saddle Point problems . . . . 424
5.3.7.1 Stochastic Minimization/Saddle Point problems . . . . . . . . . 424
5.3.7.2 Stochastic Saddle Point Mirror Descent algorithm . . . . . . . . 424
5.3.7.3 Refinement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
5.3.7.4 Solving (5.3.75) via Stochastic Saddle Point Mirror Descent. . . 432
5.3.8 Mirror Descent and Stochastic Online Regret Minimization . . . . . . . . 438
5.3.8.1 Stochastic online regret minimization: problem’s formulation . . 438
5.3.8.2 Minimizing stochastic regret by MD . . . . . . . . . . . . . . . . 439
5.3.8.3 Illustration: predicting sequences . . . . . . . . . . . . . . . . . . 440
5.4 Bundle Mirror and Truncated Bundle Mirror algorithms . . . . . . . . . . . . . . 444
5.4.1 Bundle Mirror algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
5.4.1.1 BM: Description . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
5.4.1.2 Convergence analysis . . . . . . . . . . . . . . . . . . . . . . . . 445
5.4.2 Truncated Bundle Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . 446
5.4.2.1 TBM: motivation . . . . . . . . . . . . . . . . . . . . . . . . . . 446
5.4.2.2 TBM: construction . . . . . . . . . . . . . . . . . . . . . . . . . 446
5.4.2.3 Convergence Analysis . . . . . . . . . . . . . . . . . . . . . . . . 449
5.4.2.4 Implementation issues . . . . . . . . . . . . . . . . . . . . . . . . 451
5.4.2.5 Illustration: PET Image Reconstruction by MD and TBM . . . 453
5.4.2.6 Alternative: PET via Krylov subspace minimization . . . . . . . 460
5.5 Saddle Point representations and Mirror Prox algorithm . . . . . . . . . . . . . . 463
5.5.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
5.5.1.1 Examples of saddle point representations . . . . . . . . . . . . . 466
5.5.2 The Mirror Prox algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 468
5.5.2.1 Refinement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
5.5.2.2 Typical implementation . . . . . . . . . . . . . . . . . . . . . . . 471
5.6 Summary on Mirror Descent and Mirror Prox Algorithms . . . . . . . . . . . . . 474
5.6.1 Situation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
5.6.2 Mirror Descent and Mirror Prox algorithms . . . . . . . . . . . . . . . . . 475
5.6.2.1 First Order oracles and oracle-based algorithms . . . . . . . . . 475
5.6.2.2 Mirror Descent Algorithm . . . . . . . . . . . . . . . . . . . . . . 475
5.6.3 Mirror Prox algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479
5.6.4 Processing problems with convex structure by Mirror Descent and Mirror
Prox algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481

5.6.4.1 Problems with convex structure . . . . . . . . . . . . . . . . . . 481


5.6.4.2 Problems with convex structure: basic descriptive results . . . . 487
5.6.4.3 Problems with convex structure: basic operational results . . . . 490
5.7 Well-structured monotone vector fields . . . . . . . . . . . . . . . . . . . . . . . . 495
5.7.1 Conic representability of monotone vector fields and monotone VI’s in
conic form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
5.7.1.1 Conic representation of a monotone vector field . . . . . . . . . 495
5.7.1.2 Conic form of conic-representable monotone VI . . . . . . . . . . 496
5.7.2 Calculus of conic representations of monotone vector fields . . . . . . . . 497
5.7.2.1 Raw materials . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
5.7.2.2 Calculus rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
5.7.3 Illustrations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
5.7.3.1 “Academic” illustration . . . . . . . . . . . . . . . . . . . . . . . 500
5.7.3.2 Nash Equilibrium . . . . . . . . . . . . . . . . . . . . . . . . . . 501
5.7.4 Derivations for Section 5.7.2 . . . . . . . . . . . . . . . . . . . . . . . . . . 504
5.7.4.1 Verification of “raw materials” . . . . . . . . . . . . . . . . . . . 504
5.7.4.2 Verification of calculus rules . . . . . . . . . . . . . . . . . . . . 508
5.7.4.3 Verifying (5.7.30) and (5.7.31) . . . . . . . . . . . . . . . . . . . 509
5.8 Fast First Order algorithms for Smooth Convex Minimization . . . . . . . . . . . 510
5.8.1 Fast Gradient Methods for Smooth Composite minimization . . . . . . . . 510
5.8.1.1 Problem formulation . . . . . . . . . . . . . . . . . . . . . . . . . 510
5.8.1.2 Composite prox-mapping . . . . . . . . . . . . . . . . . . . . . . 511
5.8.1.3 Fast Composite Gradient minimization: Algorithm and Main
Result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
5.8.1.4 Proof of Theorem 5.8.1 . . . . . . . . . . . . . . . . . . . . . . . 514
5.8.2 “Universal” Fast Gradient Methods . . . . . . . . . . . . . . . . . . . . . 516
5.8.2.1 Problem formulation . . . . . . . . . . . . . . . . . . . . . . . . . 517
5.8.2.2 Algorithm and Main Result . . . . . . . . . . . . . . . . . . . . . 517
5.8.2.3 Proof of Theorem 5.8.2 . . . . . . . . . . . . . . . . . . . . . . . 521
5.8.3 From Fast Gradient Minimization to Conditional Gradient . . . . . . . . 524
5.8.3.1 Proximal and Linear Minimization Oracle based First Order algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
5.8.3.2 Conditional Gradient algorithm . . . . . . . . . . . . . . . . . . 525
5.8.3.3 Bridging Fast and Conditional Gradient algorithms . . . . . . . 527
5.8.3.4 LMO-based implementation of Fast Universal Gradient Method 529
5.9 Appendix: Some proofs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
5.9.1 A useful technical lemma . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
5.9.2 Justifying Ball setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
5.9.3 Justifying Entropy setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
5.9.4 Justifying ℓ1/ℓ2 setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
5.9.4.1 Proof of Theorem 5.9.1 . . . . . . . . . . . . . . . . . . . . . . . 532
5.9.4.2 Proof of Corollary 5.9.1 . . . . . . . . . . . . . . . . . . . . . . . 533
5.9.5 Justifying Nuclear norm setup . . . . . . . . . . . . . . . . . . . . . . . . 534
5.9.5.1 Proof of Theorem 5.9.2 . . . . . . . . . . . . . . . . . . . . . . . 535
5.9.5.2 Proof of Corollary 5.9.2 . . . . . . . . . . . . . . . . . . . . . . . 538

Bibliography 539

6 Solutions to selected exercises 543


6.1 Exercises for Lecture 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
6.1.1 Around Theorem on Alternative . . . . . . . . . . . . . . . . . . . . . . . 543
6.1.2 Around cones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
6.1.3 Feasible and level sets of conic problems . . . . . . . . . . . . . . . . . . . 545
6.2 Exercises for Lecture 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547
6.2.1 Optimal control in discrete time linear dynamic system . . . . . . . . . . 547
6.2.2 Around stable grasp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548
6.3 Exercises for Lecture 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
6.3.1 Around positive semidefiniteness, eigenvalues and ⪰-ordering . . . . . . 550
6.3.1.1 Criteria for positive semidefiniteness . . . . . . . . . . . . . . . . 550
6.3.1.2 Variational description of eigenvalues . . . . . . . . . . . . . . . 550
6.3.1.3 Cauchy’s inequality for matrices . . . . . . . . . . . . . . . . . . 553
6.3.2 ⪰-convexity of some matrix-valued functions . . . . . . . . . . . . . . . 556
6.3.3 Around Lovász capacity number . . . . . . . . . . . . . . . . . . . . . . 557
6.3.4 Around operator norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
6.3.5 Around S-Lemma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561
6.3.5.1 A straightforward proof of the standard S-Lemma . . . . . . . . 561
6.3.5.2 S-Lemma with a multi-inequality premise . . . . . . . . . . . . . 562
6.4 Exercises for Lecture 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566
6.4.1 Around canonical barriers . . . . . . . . . . . . . . . . . . . . . . . . . . . 566
6.4.2 Scalings of canonical cones . . . . . . . . . . . . . . . . . . . . . . . . . . 566
6.4.3 Dikin ellipsoid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 567
6.4.4 More on canonical barriers . . . . . . . . . . . . . . . . . . . . . . . . . . 569
6.4.5 Around the primal path-following method . . . . . . . . . . . . . . . . . . 570
6.4.6 An infeasible start path-following method . . . . . . . . . . . . . . . . . . 570

A Prerequisites from Linear Algebra and Analysis 573


A.1 Space Rn : algebraic structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 573
A.1.1 A point in Rn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 573
A.1.2 Linear operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 573
A.1.3 Linear subspaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 574
A.1.4 Linear independence, bases, dimensions . . . . . . . . . . . . . . . . . . . 575
A.1.5 Linear mappings and matrices . . . . . . . . . . . . . . . . . . . . . . . . 576
A.2 Space Rn : Euclidean structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
A.2.1 Euclidean structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
A.2.2 Inner product representation of linear forms on Rn . . . . . . . . . . . . . 579
A.2.3 Orthogonal complement . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
A.2.4 Orthonormal bases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580
A.3 Affine subspaces in Rn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 582
A.3.1 Affine subspaces and affine hulls . . . . . . . . . . . . . . . . . . . . . . . 583
A.3.2 Intersections of affine subspaces, affine combinations and affine hulls . . . 583
A.3.3 Affinely spanning sets, affinely independent sets, affine dimension . . . . . 584
A.3.4 Dual description of linear subspaces and affine subspaces . . . . . . . . . 587
A.3.4.1 Affine subspaces and systems of linear equations . . . . . . . . . 587
A.3.5 Structure of the simplest affine subspaces . . . . . . . . . . . . . . . . . . 588
A.4 Space Rn : metric structure and topology . . . . . . . . . . . . . . . . . . . . . . 589

A.4.1 Euclidean norm and distances . . . . . . . . . . . . . . . . . . . . . . . . . 589


A.4.2 Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590
A.4.3 Closed and open sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 591
A.4.4 Local compactness of Rn . . . . . . . . . . . . . . . . . . . . . . . . . . . 592
A.5 Continuous functions on Rn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592
A.5.1 Continuity of a function . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592
A.5.2 Elementary continuity-preserving operations . . . . . . . . . . . . . . . . . 593
A.5.3 Basic properties of continuous functions on Rn . . . . . . . . . . . . . . . 594
A.6 Differentiable functions on Rn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 595
A.6.1 The derivative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 595
A.6.2 Derivative and directional derivatives . . . . . . . . . . . . . . . . . . . . 596
A.6.3 Representations of the derivative . . . . . . . . . . . . . . . . . . . . . . . 597
A.6.4 Existence of the derivative . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
A.6.5 Calculus of derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
A.6.6 Computing the derivative . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
A.6.7 Higher order derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601
A.6.8 Calculus of Ck mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . 604
A.6.9 Examples of higher-order derivatives . . . . . . . . . . . . . . . . . . . . . 604
A.6.10 Taylor expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 606
A.7 Symmetric matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607
A.7.1 Spaces of matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607
A.7.2 Main facts on symmetric matrices . . . . . . . . . . . . . . . . . . . . . . 607
A.7.3 Variational characterization of eigenvalues . . . . . . . . . . . . . . . . . . 609
A.7.3.1 Corollaries of the VCE . . . . . . . . . . . . . . . . . . . . . . . 610
A.7.4 Positive semidefinite matrices and the semidefinite cone . . . . . . . . . . 611

B Convex sets in Rn 615


B.1 Definition and basic properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . 615
B.1.1 A convex set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 615
B.1.2 Examples of convex sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . 615
B.1.3 Inner description of convex sets: Convex combinations and convex hull . . 618
B.1.4 Cones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 619
B.1.5 Calculus of convex sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 620
B.1.6 Topological properties of convex sets . . . . . . . . . . . . . . . . . . . . . 621
B.2 Main theorems on convex sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625
B.2.1 Caratheodory Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625
B.2.2 Radon Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 626
B.2.3 Helley Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 626
B.2.4 Polyhedral representations and Fourier-Motzkin Elimination . . . . . . . . 627
B.2.5 General Theorem on Alternative and Linear Programming Duality . . . . 632
B.2.6 Separation Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 641
B.2.7 Polar of a convex set and Milutin-Dubovitski Lemma . . . . . . . . . . . 648
B.2.8 Extreme points and Krein-Milman Theorem . . . . . . . . . . . . . . . . . 651
B.2.9 Structure of polyhedral sets . . . . . . . . . . . . . . . . . . . . . . . . . . 656

C Convex functions 663


C.1 Convex functions: first acquaintance . . . . . . . . . . . . . . . . . . . . . . . . . 663
C.1.1 Definition and Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . 663
C.1.2 Elementary properties of convex functions . . . . . . . . . . . . . . . . . . 664
C.1.2.1 Jensen’s inequality . . . . . . . . . . . . . . . . . . . . . . . . . 664
C.1.2.2 Convexity of level sets of a convex function . . . . . . . . . . . 665
C.1.3 What is the value of a convex function outside its domain? . . . . . . . . 665
C.2 How to detect convexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 666
C.2.1 Operations preserving convexity of functions . . . . . . . . . . . . . . . . 666
C.2.2 Differential criteria of convexity . . . . . . . . . . . . . . . . . . . . . . . . 668
C.3 Gradient inequality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 671
C.4 Boundedness and Lipschitz continuity of a convex function . . . . . . . . . . . . 673
C.5 Maxima and minima of convex functions . . . . . . . . . . . . . . . . . . . . . . . 675
C.6 Subgradients and Legendre transformation . . . . . . . . . . . . . . . . . . . . . . 680
C.6.1 Proper functions and their representation . . . . . . . . . . . . . . . . . . 680
C.6.2 Subgradients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 686
C.6.3 Legendre transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 687

D Convex Programming, Lagrange Duality, Saddle Points 691


D.1 Mathematical Programming Program . . . . . . . . . . . . . . . . . . . . . . . . 691
D.2 Convex Programming program and Lagrange Duality Theorem . . . . . . . . . . 692
D.2.1 Convex Theorem on Alternative . . . . . . . . . . . . . . . . . . . . . . . 692
D.2.1.1 Conic case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 694
D.2.2 Lagrange Function and Lagrange Duality . . . . . . . . . . . . . . . . . . 700
D.2.3 Optimality Conditions in Convex Programming . . . . . . . . . . . . . . . 703
D.3 Duality in Linear and Convex Quadratic Programming . . . . . . . . . . . . . . . 707
D.3.1 Linear Programming Duality . . . . . . . . . . . . . . . . . . . . . . . . . 708
D.3.2 Quadratic Programming Duality . . . . . . . . . . . . . . . . . . . . . . . 708
D.4 Saddle Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 710
D.4.1 Definition and Game Theory interpretation . . . . . . . . . . . . . . . . . 710
D.4.2 Existence of Saddle Points . . . . . . . . . . . . . . . . . . . . . . . . . . . 712

Main Notational Conventions

Vectors and matrices. By default, all vectors are column vectors.

• The space of all n-dimensional vectors is denoted Rn , the set of all m × n matrices is
denoted Rm×n or Mm×n , and the set of symmetric n × n matrices is denoted Sn . By
default, all vectors and matrices are real.

• Sometimes, “MATLAB notation” is used: a vector with coordinates x1 , ..., xn is written
down as
    x = [x1 ; ...; xn ]
(pay attention to the semicolon “;”). For example, the 3-dimensional column vector with
coordinates 1, 2, 3 is written as [1; 2; 3].
More generally,
— if A1 , ..., Am are matrices with the same number of columns, we write [A1 ; ...; Am ] to
denote the matrix which is obtained when writing A2 beneath A1 , A3 beneath A2 , and so
on.
— if A1 , ..., Am are matrices with the same number of rows, then [A1 , ..., Am ] stands for
the matrix which is obtained when writing A2 to the right of A1 , A3 to the right of A2 ,
and so on.
Examples:
• A1 = [1, 2, 3; 4, 5, 6], A2 = [7, 8, 9] ⇒ [A1 ; A2 ] = [1, 2, 3; 4, 5, 6; 7, 8, 9]
• A1 = [1, 2; 3, 4], A2 = [7; 8] ⇒ [A1 , A2 ] = [1, 2, 7; 3, 4, 8]
• [1, 2, 3, 4] = [1; 2; 3; 4]T
• [[1, 2; 3, 4], [5, 6; 7, 8]] = [1, 2, 5, 6; 3, 4, 7, 8]
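In NumPy, the same bracket operations correspond to vertical and horizontal stacking; the following sketch (not part of the text) reproduces the examples above:

```python
import numpy as np

A1 = np.array([[1, 2, 3],
               [4, 5, 6]])
A2 = np.array([[7, 8, 9]])

# [A1; A2]: blocks with equal numbers of columns, stacked vertically
v = np.vstack([A1, A2])

B1 = np.array([[1, 2], [3, 4]])
B2 = np.array([[7], [8]])

# [B1, B2]: blocks with equal numbers of rows, placed side by side
h = np.hstack([B1, B2])

print(v)   # the 3x3 matrix [1, 2, 3; 4, 5, 6; 7, 8, 9]
print(h)   # the 2x3 matrix [1, 2, 7; 3, 4, 8]
```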

O(1)’s. Below O(1)’s denote properly selected positive absolute constants. We write f ≤
O(1)g, where f and g are nonnegative functions of some parameters, to express the fact that
for properly selected positive absolute constant C the inequality f ≤ Cg holds true in the entire
range of the parameters, and we write f = O(1)g when both f ≤ O(1)g and g ≤ O(1)f .

Color encoding in the text:

• Plain text • Theorems, Propositions, Lemmas,...


• Proofs • Pen and Pencil Exercises
• Exercises with solutions • Operational Exercises
Lecture 1

From Linear to Conic Programming

1.1 Linear programming: basic notions


A Linear Programming (LP) program is an optimization program of the form

    min_x { cT x : Ax ≥ b },                                        (LP)
where
• x ∈ Rn is the design vector
• c ∈ Rn is a given vector of coefficients of the objective function cT x
• A is a given m × n constraint matrix, and b ∈ Rm is a given right hand side of the
constraints.
(LP) is called
– feasible, if its feasible set
F = {x : Ax − b ≥ 0}
is nonempty; a point x ∈ F is called a feasible solution to (LP);
– bounded below, if it is either infeasible, or its objective cT x is bounded below on F.
For a feasible bounded below problem (LP), the quantity
c∗ ≡ inf cT x
x:Ax−b≥0

is called the optimal value of the problem. For an infeasible problem, we set c∗ = +∞,
while for feasible unbounded below problem we set c∗ = −∞.
(LP) is called solvable, if it is feasible, bounded below and the optimal value is attained, i.e.,
there exists x ∈ F with cT x = c∗ . An x of this type is called an optimal solution to (LP).
A priori it is unclear whether a feasible and bounded below LP program is solvable: why should
the infimum be achieved? It turns out, however, that a feasible and bounded below program
(LP) always is solvable. This nice fact (we shall establish it later) is specific for LP. Indeed, a
very simple nonlinear optimization program

    min { 1/x : x ≥ 1 }

is feasible and bounded below, but it is not solvable.
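Numerically, the effect is easy to see (a small sketch, not part of the text): along feasible points marching to infinity, the objective 1/x approaches the infimum 0 but stays strictly positive at every feasible point, so the infimum is not attained.

```python
import numpy as np

# feasible points of min{ 1/x : x >= 1 }, marching to infinity
x = np.array([1e1, 1e3, 1e6, 1e9])
vals = 1.0 / x

# the objective values approach the infimum 0 ...
print(vals.min())
# ... yet remain strictly positive at every feasible x
print((vals > 0).all())
```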


1.2 Duality in Linear Programming


The most important and interesting feature of Linear Programming as a mathematical entity
(i.e., aside of computations and applications) is the wonderful LP duality theory we are about
to consider. We motivate this topic by first addressing the following question:
Given an LP program

    c∗ = min_x { cT x : Ax − b ≥ 0 },                               (LP)

how to find a systematic way to bound from below its optimal value c∗ ?
Why this is an important question, and how the answer helps to deal with LP, this will be seen
in the sequel. For the time being, let us just believe that the question is worthy of the effort.
A trivial answer to the posed question is: solve (LP) and look what is the optimal value.
There is, however, a smarter and a much more instructive way to answer our question. Just to
get an idea of this way, let us look at the following example:
 


    min { x1 + x2 + ... + x2002 :  x1 + 2x2 + ... + 2001x2001 + 2002x2002 − 1 ≥ 0,
                                   2002x1 + 2001x2 + ... + 2x2001 + x2002 − 100 ≥ 0,
                                   ..... }

We claim that the optimal value in the problem is ≥ 101/2003. How could one certify this bound?
This is immediate: add the first two constraints to get the inequality

2003(x1 + x2 + ... + x2001 + x2002 ) − 101 ≥ 0,

and divide the resulting inequality by 2003. LP duality is nothing but a straightforward gener-
alization of this simple trick.
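The certificate can be checked mechanically; here is a NumPy sketch (not part of the text) that forms the two constraint rows from the example and verifies the aggregation:

```python
import numpy as np

n = 2002
j = np.arange(1, n + 1, dtype=float)

# first constraint:  x1 + 2x2 + ... + 2002*x2002 >= 1
a1, b1 = j, 1.0
# second constraint: 2002*x1 + 2001*x2 + ... + x2002 >= 100
a2, b2 = 2003.0 - j, 100.0

# adding the two rows gives the coefficient 2003 in front of every variable,
# i.e., the inequality 2003*(x1 + ... + x2002) >= 101
agg = a1 + a2
assert np.allclose(agg, 2003.0)

# hence every feasible x satisfies sum(x) >= 101/2003
bound = (b1 + b2) / 2003.0
print(bound)
```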

1.2.1 Certificates for solvability and insolvability


Consider a (finite) system of scalar inequalities with n unknowns. To be as general as possible,
we do not assume for the time being the inequalities to be linear, and we allow for both non-
strict and strict inequalities in the system, as well as for equalities. Since an equality can be
represented by a pair of non-strict inequalities, our system can always be written as

fi (x) Ωi 0, i = 1, ..., m, (S)

where every Ωi is either the relation ” > ” or the relation ” ≥ ”.


The basic question about (S) is
(?) Whether (S) has a solution or not.
Knowing how to answer the question (?), we are able to answer many other questions. E.g., to
verify whether a given real a is a lower bound on the optimal value c∗ of (LP) is the same as to
verify whether the system

    −cT x + a > 0,    Ax − b ≥ 0
has no solutions.
The general question above is too difficult, and it makes sense to pass from it to a seemingly
simpler one:

(??) How to certify that (S) has, or does not have, a solution.

Imagine that you are very smart and know the correct answer to (?); how could you convince
somebody that your answer is correct? What could be an “evident for everybody” certificate of
the validity of your answer?
If your claim is that (S) is solvable, a certificate could be just to point out a solution x∗ to
(S). Given this certificate, one can substitute x∗ into the system and check whether x∗ indeed
is a solution.
Assume now that your claim is that (S) has no solutions. What could be a “simple certificate”
of this claim? How one could certify a negative statement? This is a highly nontrivial problem
not just for mathematics; for example, in criminal law: how should someone accused in a murder
prove his innocence? The “real life” answer to the question “how to certify a negative statement”
is discouraging: such a statement normally cannot be certified (this is where the rule “a person
is presumed innocent until proven guilty” comes from). In mathematics, however, the situation
is different: in some cases there exist “simple certificates” of negative statements. E.g., in order
to certify that (S) has no solutions, it suffices to demonstrate that a consequence of (S) is a
contradictory inequality such as
−1 ≥ 0.
For example, assume that λi , i = 1, ..., m, are nonnegative weights. Combining inequalities from
(S) with these weights, we come to the inequality

    Σ_{i=1}^m λi fi (x) Ω 0                                         (Cons(λ))

where Ω is either ” > ” (this is the case when the weight of at least one strict inequality from
(S) is positive), or ” ≥ ” (otherwise). Since the resulting inequality, due to its origin, is a
consequence of the system (S), i.e., it is satisfied by every solution to (S), it follows that if
(Cons(λ)) has no solutions at all, we can be sure that (S) has no solution. Whenever this is the
case, we may treat the corresponding vector λ as a “simple certificate” of the fact that (S) is
infeasible.
Let us see what the outlined approach means when (S) is comprised of linear inequalities:

    (S) :   {aTi x Ωi bi , i = 1, ..., m}        [every Ωi is either ” > ” or ” ≥ ”]
Here the “combined inequality” is linear as well:

    (Cons(λ)) :   ( Σ_{i=1}^m λi ai )T x  Ω  Σ_{i=1}^m λi bi

(Ω is ” > ” whenever λi > 0 for at least one i with Ωi = ” > ”, and Ω is ” ≥ ” otherwise). Now,
when can a linear inequality
dT x Ω e
be contradictory? Of course, it can happen only when d = 0. Whether the inequality is
contradictory in this case depends on the relation Ω: if Ω = ” > ”, then the inequality is
contradictory if and only if e ≥ 0, and if Ω = ” ≥ ”, it is contradictory if and only if e > 0. We
have established the following simple result:

Proposition 1.2.1 Consider a system of linear inequalities

    (S) :   aTi x > bi ,  i = 1, ..., ms ,
            aTi x ≥ bi ,  i = ms + 1, ..., m

with n-dimensional vector of unknowns x. Let us associate with (S) two systems of linear
inequalities and equations with m-dimensional vector of unknowns λ:

    TI :    (a)    λ ≥ 0;
            (b)    Σ_{i=1}^m λi ai = 0;
            (cI )  Σ_{i=1}^m λi bi ≥ 0;
            (dI )  Σ_{i=1}^{ms} λi > 0.

    TII :   (a)    λ ≥ 0;
            (b)    Σ_{i=1}^m λi ai = 0;
            (cII ) Σ_{i=1}^m λi bi > 0.

Assume that at least one of the systems TI , TII is solvable. Then the system (S) is infeasible.
Proposition 1.2.1 says that in some cases it is easy to certify infeasibility of a linear system
of inequalities: a “simple certificate” is a solution to another system of linear inequalities. Note,
however, that the existence of a certificate of this latter type is to the moment only a sufficient,
but not a necessary, condition for the infeasibility of (S). A fundamental result in the theory of
linear inequalities is that the sufficient condition in question is in fact also necessary:
Theorem 1.2.1 [General Theorem on Alternative] In the notation from Proposition 1.2.1, sys-
tem (S) has no solutions if and only if either TI , or TII , or both these systems, are solvable.
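As a small numerical illustration (the instance below is made up, not from the text): the system x1 + x2 ≥ 2, −x1 ≥ 0, −x2 ≥ 0 is clearly infeasible, and λ = (1, 1, 1) is a TII-certificate for it, as the following NumPy sketch verifies:

```python
import numpy as np

# system (S): a_i^T x >= b_i, i = 1..3 (all inequalities nonstrict)
A = np.array([[1.0, 1.0],     # x1 + x2 >= 2
              [-1.0, 0.0],    # -x1 >= 0, i.e. x1 <= 0
              [0.0, -1.0]])   # -x2 >= 0, i.e. x2 <= 0
b = np.array([2.0, 0.0, 0.0])

lam = np.array([1.0, 1.0, 1.0])   # candidate certificate, lam >= 0

# T_II requires: sum_i lam_i a_i = 0  and  sum_i lam_i b_i > 0
print(lam @ A)   # the zero vector
print(lam @ b)   # positive, so (S) is infeasible
```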
There are numerous proofs of the Theorem on Alternative; in my taste, the most instructive one is
to reduce the Theorem to its particular case – the Homogeneous Farkas Lemma:
[Homogeneous Farkas Lemma] A homogeneous nonstrict linear inequality

    aT x ≤ 0

is a consequence of a system of homogeneous nonstrict linear inequalities

    aTi x ≤ 0,  i = 1, ..., m

if and only if it can be obtained from the system by taking weighted sum with nonnegative
weights:

    (a)  aTi x ≤ 0, i = 1, ..., m  ⇒  aT x ≤ 0
              ⇕                                                     (1.2.1)
    (b)  ∃λi ≥ 0 :  a = Σ_{i=1}^m λi ai .

The reduction of GTA to HFL is easy. As for HFL itself, there are, essentially, two ways to prove the
statement:
• The “quick and dirty” one based on separation arguments (see Section B.2.6 and/or Exercise B.14),
which is as follows:

1. First, we demonstrate that if A is a nonempty closed convex set in Rn and a is a point from
Rn \A, then a can be strongly separated from A by a linear form: there exists x ∈ Rn such
that
xT a < inf xT b. (1.2.2)
b∈A

To this end, it suffices to verify that



(a) In A, there exists a point closest to a w.r.t. the standard Euclidean norm ‖b‖2 = √(bT b),
i.e., that the optimization program  min_{b∈A} ‖a − b‖2

has a solution b∗ ;
(b) Setting x = b∗ − a, one ensures (1.2.2).
Both (a) and (b) are immediate.
2. Second, we demonstrate that the set
    A = {b : ∃λ ≥ 0 : b = Σ_{i=1}^m λi ai }

– the cone spanned by the vectors a1 , ..., am – is convex (which is immediate) and closed (the
proof of this crucial fact also is not difficult).
3. Combining the above facts, we immediately see that
— either a ∈ A, i.e., (1.2.1.b) holds,
— or there exists x such that xT a < inf_{λ≥0} xT Σ_i λi ai .
The latter inf is finite if and only if xT ai ≥ 0 for all i, and in this case the inf is 0, so that
the “or” statement says exactly that there exists x with aTi x ≥ 0, aT x < 0, or, which is the
same, that (1.2.1.a) does not hold.
Thus, among the statements (1.2.1.a) and the negation of (1.2.1.b) at least one (and, as it
is immediately seen, at most one as well) always is valid, which is exactly the equivalence
(1.2.1).
• “Advanced” proofs based purely on Linear Algebra facts (see Section B.2.5.A). The advantage of
these purely Linear Algebra proofs is that they, in contrast to the outlined separation-based proof,
do not use the completeness of Rn as a metric space and thus work when we pass from systems
with real coefficients and unknowns to systems with rational (or algebraic) coefficients. As a result,
an advanced proof allows to establish the Theorem on Alternative for the case when the coefficients
and unknowns in (S), TI , TII are restricted to belong to a given “real field” (e.g., are rational).
We formulate here explicitly two very useful principles following from the Theorem on Al-
ternative:

A. A system of linear inequalities

aTi x Ωi bi , i = 1, ..., m

has no solutions if and only if one can combine the inequalities of the system in
a linear fashion (i.e., multiplying the inequalities by nonnegative weights, adding
the results and passing, if necessary, from an inequality aT x > b to the inequality
aT x ≥ b) to get a contradictory inequality, namely, either the inequality 0T x ≥ 1, or
the inequality 0T x > 0.
B. A linear inequality
aT0 x Ω0 b0

is a consequence of a solvable system of linear inequalities

aTi x Ωi bi , i = 1, ..., m

if and only if it can be obtained by combining, in a linear fashion, the inequalities of


the system and the trivial inequality 0 > −1.

It should be stressed that the above principles are highly nontrivial and very deep. Consider,
e.g., the following system of 4 linear inequalities with two variables u, v:

−1 ≤ u ≤ 1
−1 ≤ v ≤ 1.

From these inequalities it follows that

u2 + v 2 ≤ 2, (!)

which in turn implies, by the Cauchy inequality, the linear inequality u + v ≤ 2:


    u + v = 1 × u + 1 × v ≤ √(1^2 + 1^2 ) · √(u^2 + v^2 ) ≤ (√2)^2 = 2.        (!!)

The concluding inequality is linear and is a consequence of the original system, but in the
demonstration of this fact both steps (!) and (!!) are “highly nonlinear”. It is absolutely
unclear a priori why the same consequence can, as it is stated by Principle A, be derived
from the system in a linear manner as well [of course it can – it suffices just to add two
inequalities u ≤ 1 and v ≤ 1].
Note that the Theorem on Alternative and its corollaries A and B heavily exploit the fact
that we are speaking about linear inequalities. E.g., consider the following 2 quadratic and
2 linear inequalities with two variables:

(a) u2 ≥ 1;
(b) v 2 ≥ 1;
(c) u ≥ 0;
(d) v ≥ 0;

along with the quadratic inequality

(e) uv ≥ 1.

The inequality (e) is clearly a consequence of (a) – (d). However, if we extend the system of
inequalities (a) – (d) by all “trivial” (i.e., identically true) linear and quadratic inequalities
with 2 variables, like 0 > −1, u2 + v 2 ≥ 0, u2 + 2uv + v 2 ≥ 0, u2 − uv + v 2 ≥ 0, etc.,
and ask whether (e) can be derived in a linear fashion from the inequalities of the extended
system, the answer will be negative. Thus, Principle A fails to be true already for quadratic
inequalities (which is a great sorrow – otherwise there would be no difficult problems at all!)

We are about to use the Theorem on Alternative to obtain the basic results of the LP duality
theory.

1.2.2 Dual to an LP program: the origin


As already mentioned, the motivation for constructing the problem dual to an LP program

    c∗ = min_x { cT x : Ax − b ≥ 0 }       [A = [aT1 ; aT2 ; ...; aTm ] ∈ Rm×n ]       (LP)

is the desire to generate, in a systematic way, lower bounds on the optimal value c∗ of (LP).
An evident way to bound from below a given function f (x) in the domain given by system of
inequalities
gi (x) ≥ bi , i = 1, ..., m, (1.2.3)
is offered by what is called the Lagrange duality and is as follows:
Lagrange Duality:

• Let us look at all inequalities which can be obtained from (1.2.3) by linear aggre-
gation, i.e., at the inequalities of the form
    Σi yi gi (x) ≥ Σi yi bi                                         (1.2.4)

with the “aggregation weights” yi ≥ 0. Note that the inequality (1.2.4), due to its
origin, is valid on the entire set X of solutions of (1.2.3).
• Depending on the choice of aggregation weights, it may happen that the left hand
side in (1.2.4) is ≤ f (x) for all x ∈ Rn . Whenever it is the case, the right hand side
Σi yi bi of (1.2.4) is a lower bound on f in X.

Indeed, on X the quantity Σi yi bi is a lower bound on Σi yi gi (x), and for y in question
the latter function of x is everywhere ≤ f (x).

It follows that
• The optimal value in the problem
 
    max_y { Σi yi bi :  (a) y ≥ 0;  (b) Σi yi gi (x) ≤ f (x) ∀x ∈ Rn }       (1.2.5)

is a lower bound on the values of f on the set of solutions to the system (1.2.3).

Let us look what happens with the Lagrange duality when f and gi are homogeneous linear
functions: f = cT x, gi (x) = aTi x. In this case, the requirement (1.2.5.b) merely says that
c = Σi yi ai (or, which is the same, AT y = c, due to the origin of A). Thus, problem (1.2.5)
becomes the Linear Programming problem

    max_y { bT y : AT y = c, y ≥ 0 },                               (LP∗ )

which is nothing but the LP dual of (LP).


By the construction of the dual problem,

[Weak Duality] The optimal value in (LP∗ ) is less than or equal to the optimal value
in (LP).
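Weak Duality is easy to trace on a toy instance (the data below are arbitrary, chosen for illustration): for any y ≥ 0, setting c := AT y makes y dual feasible, and then bT y ≤ yT Ax = cT x for every primal feasible x. A NumPy sketch:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
b = np.array([1.0, 0.0, 0.0])

y = np.array([1.0, 0.5, 0.25])   # any y >= 0 ...
c = A.T @ y                      # ... is dual feasible once we set c := A^T y

x = np.array([2.0, 3.0])         # primal feasible: A x >= b holds
assert (A @ x >= b).all()

# weak duality: b^T y <= y^T (A x) = (A^T y)^T x = c^T x
print(b @ y, c @ x)
```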

In fact, the “less than or equal to” in the latter statement is “equal”, provided that the optimal
value c∗ in (LP) is a number (i.e., (LP) is feasible and bounded below). To see that this indeed
is the case, note that a real a is a lower bound on c∗ if and only if cT x ≥ a whenever Ax ≥ b,
or, which is the same, if and only if the system of linear inequalities

(Sa ) : −cT x > −a, Ax ≥ b

has no solution. We know by the Theorem on Alternative that the latter fact means that some
other system of linear inequalities and equations (more exactly, at least one of a certain pair of systems) does
have a solution. More precisely,

(*) (Sa ) has no solutions if and only if at least one of the following two systems with
m + 1 unknowns:

    TI :    (a)    λ = (λ0 , λ1 , ..., λm ) ≥ 0;
            (b)    −λ0 c + Σ_{i=1}^m λi ai = 0;
            (cI )  −λ0 a + Σ_{i=1}^m λi bi ≥ 0;
            (dI )  λ0 > 0,

or

    TII :   (a)    λ = (λ0 , λ1 , ..., λm ) ≥ 0;
            (b)    −λ0 c + Σ_{i=1}^m λi ai = 0;
            (cII ) −λ0 a + Σ_{i=1}^m λi bi > 0

– has a solution.

Now assume that (LP) is feasible. We claim that under this assumption (Sa ) has no solutions
if and only if TI has a solution.
The implication ”TI has a solution ⇒ (Sa ) has no solution” is readily given by the above
remarks. To verify the inverse implication, assume that (Sa ) has no solutions and the system
Ax ≥ b has a solution, and let us prove that then TI has a solution. If TI has no solution, then
by (*) TII has a solution and, moreover, λ0 = 0 for (every) solution to TII (since a solution
to the latter system with λ0 > 0 solves TI as well). But the fact that TII has a solution λ
with λ0 = 0 is independent of the values of a and c; if this fact would take place, it would
mean, by the same Theorem on Alternative, that, e.g., the following instance of (Sa ):

0T x > −1, Ax ≥ b

has no solutions. The latter means that the system Ax ≥ b has no solutions – a contradiction
with the assumption that (LP) is feasible. 2

Now, if TI has a solution, this system has a solution with λ0 = 1 as well (to see this, pass from
a solution λ to the one λ/λ0 ; this construction is well-defined, since λ0 > 0 for every solution

to TI ). Now, an (m + 1)-dimensional vector λ = (1, y) is a solution to TI if and only if the


m-dimensional vector y solves the system of linear inequalities and equations

    y ≥ 0;
    AT y ≡ Σ_{i=1}^m yi ai = c;                                     (D)
    bT y ≥ a.

Summarizing our observations, we come to the following result.

Proposition 1.2.2 Assume that system (D) associated with the LP program (LP) has a solution
(y, a). Then a is a lower bound on the optimal value in (LP). Vice versa, if (LP) is feasible and
a is a lower bound on the optimal value of (LP), then a can be extended by a properly chosen
m-dimensional vector y to a solution to (D).

We see that the entity responsible for lower bounds on the optimal value of (LP) is the
system (D): every solution to the latter system induces a bound of this type, and in the case
when (LP) is feasible, all lower bounds can be obtained from solutions to (D). Now note that
if (y, a) is a solution to (D), then the pair (y, bT y) also is a solution to the same system, and
the lower bound bT y on c∗ is not worse than the lower bound a. Thus, as far as lower bounds
on c∗ are concerned, we lose nothing by restricting ourselves to the solutions (y, a) of (D) with
a = bTy; the ∗
best lower bound
 on c given by (D) is therefore the optimal value of the problem
maxy bT y AT y = c, y ≥ 0 , which is nothing but the dual to (LP) problem (LP∗ ). Note that

(LP∗ ) is also a Linear Programming program.


All we know about the dual problem to the moment is the following:

Proposition 1.2.3 Whenever y is a feasible solution to (LP∗ ), the corresponding value of the
dual objective bT y is a lower bound on the optimal value c∗ in (LP). If (LP) is feasible, then for
every a ≤ c∗ there exists a feasible solution y of (LP∗ ) with bT y ≥ a.

1.2.3 The LP Duality Theorem


Proposition 1.2.3 is in fact equivalent to the following

Theorem 1.2.2 [Duality Theorem in Linear Programming] Consider a Linear Programming


program  
T

min c x Ax ≥ b (LP)
x

along with its dual  


max bT y AT y = c, y ≥ 0 (LP∗ )

y

Then
1) The duality is symmetric: the problem dual to dual is equivalent to the primal;
2) The value of the dual objective at every dual feasible solution is ≤ the value of the primal
objective at every primal feasible solution;
3) The following 5 properties are equivalent to each other:

(i) The primal is feasible and bounded below.


(ii) The dual is feasible and bounded above.
(iii) The primal is solvable.
(iv) The dual is solvable.
(v) Both primal and dual are feasible.

Whenever (i) ≡ (ii) ≡ (iii) ≡ (iv) ≡ (v) is the case, the optimal values of the primal and the dual
problems are equal to each other.
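The equality of optimal values is easy to observe on a toy instance; the sketch below (not part of the text) solves a small primal and its dual with SciPy's linprog, flipping signs since linprog minimizes subject to A_ub x ≤ b_ub:

```python
import numpy as np
from scipy.optimize import linprog

# primal: min c^T x  s.t.  A x >= b, x free
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
b = np.array([1.0, 0.0, 0.0])
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(None, None)] * 2)

# dual: max b^T y  s.t.  A^T y = c, y >= 0   (pass -b, since linprog minimizes)
dual = linprog(-b, A_eq=A.T, b_eq=c, bounds=[(0, None)] * 3)

print(primal.fun, -dual.fun)   # the two optimal values coincide
```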

Proof. 1) is quite straightforward: writing the dual problem (LP∗ ) in our standard form, we
get

    min_y { −bT y :  [Im ; AT ; −AT ] y − [0; c; −c] ≥ 0 },

where Im is the m-dimensional unit matrix. Applying the duality transformation to the latter
problem, we come to the problem

    max_{ξ,η,ζ} { 0T ξ + cT η + (−c)T ζ :  ξ ≥ 0,  η ≥ 0,  ζ ≥ 0,  ξ + Aη − Aζ = −b },

which is clearly equivalent to (LP) (set x = ζ − η).


2) is readily given by Proposition 1.2.3.
3):

(i)⇒(iv): If the primal is feasible and bounded below, its optimal value c∗ (which
of course is a lower bound on itself) can, by Proposition 1.2.3, be (non-strictly)
majorized by a quantity bT y ∗ , where y ∗ is a feasible solution to (LP∗ ). In the
situation in question, of course, bT y ∗ = c∗ (by already proved item 2)); on the other
hand, in view of the same Proposition 1.2.3, the optimal value in the dual is ≤ c∗ . We
conclude that the optimal value in the dual is attained and is equal to the optimal
value in the primal.
(iv)⇒(ii): evident;
(ii)⇒(iii): This implication, in view of the primal-dual symmetry, follows from the
implication (i)⇒(iv).
(iii)⇒(i): evident.
We have seen that (i)≡(ii)≡(iii)≡(iv) and that the first (and consequently each) of
these 4 equivalent properties implies that the optimal value in the primal problem
is equal to the optimal value in the dual one. All which remains is to prove the
equivalence between (i)–(iv), on one hand, and (v), on the other hand. This is
immediate: (i)–(iv), of course, imply (v); vice versa, in the case of (v) the primal is
not only feasible, but also bounded below (this is an immediate consequence of the
feasibility of the dual problem, see 2)), and (i) follows. 2

An immediate corollary of the LP Duality Theorem is the following necessary and sufficient
optimality condition in LP:

Theorem 1.2.3 [Necessary and sufficient optimality conditions in Linear Programming] Con-
sider an LP program (LP) along with its dual (LP∗ ). A pair (x, y) of primal and dual feasible
solutions is comprised of optimal solutions to the respective problems if and only if

yi [Ax − b]i = 0, i = 1, ..., m, [complementary slackness]

and, likewise, if and only if


cT x − bT y = 0 [zero duality gap]

Indeed, the “zero duality gap” optimality condition is an immediate consequence of the fact
that the value of primal objective at every primal feasible solution is ≥ the value of the
dual objective at every dual feasible solution, while the optimal values in the primal and the
dual are equal to each other, see Theorem 1.2.2. The equivalence between the “zero duality
gap” and the “complementary slackness” optimality conditions is given by the following
computation: whenever x is primal feasible and y is dual feasible, the products yi [Ax − b]i ,
i = 1, ..., m, are nonnegative, while the sum of these products is precisely the duality gap:

y T [Ax − b] = (AT y)T x − bT y = cT x − bT y.

Thus, the duality gap can vanish at a primal-dual feasible pair (x, y) if and only if all products
yi [Ax − b]i for this pair are zeros.
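Both optimality conditions are easy to verify numerically on the pair x = (1, 0), y = (1, 0, 1), which (as one checks by hand) is optimal for the toy program min{x1 + 2x2 : x1 + x2 ≥ 1, x1 ≥ 0, x2 ≥ 0} and its dual. A NumPy sketch with this made-up instance:

```python
import numpy as np

c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
b = np.array([1.0, 0.0, 0.0])

x = np.array([1.0, 0.0])       # primal optimal solution
y = np.array([1.0, 0.0, 1.0])  # dual optimal: A^T y = c, y >= 0

slack = A @ x - b              # primal slacks [0, 1, 0]
print(y * slack)               # complementary slackness: all products vanish
print(c @ x - b @ y)           # zero duality gap
```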

1.3 Selected Engineering Applications of LP


Linear Programming possesses an enormously wide spectrum of applications. Most of them, or at
least the vast majority of applications presented in textbooks, have to do with Decision Making.
Here we present an instructive sample of applications of LP in Engineering. The “common
denominator” of what follows (except for the topic on Support Vector Machines, where we just
tell stories) can be summarized as “LP Duality at work.”

1.3.1 Sparsity-oriented Signal Processing and `1 minimization


Let us start with Compressed Sensing1 , which addresses the problem as follows: “in the nature”
there exists a signal represented by an n-dimensional vector x. We observe (perhaps, in the
presence of observation noise) the image of x under linear transformation x 7→ Ax, where A is
a given m × n sensing matrix; thus, our observation is

y = Ax + η ∈ Rm (1.3.1)

where η is observation noise. Our goal is to recover x from the observed y. The outlined problem
is responsible for an extremely wide variety of applications and, depending on a particular
application, is studied in different “regimes.” For example, in the traditional Statistics x is
interpreted not as a signal, but as the vector of parameters of a “black box” which, given on
input a vector a ∈ Rn , produces output aT x. Given a collection a1 , ..., am of n-dimensional inputs
¹ For related reading, see, e.g., [29] and references therein.

to the black box and the corresponding outputs (perhaps corrupted by noise) yi = [ai ]T x + ηi ,
we want to recover the vector of parameters x; this is called the linear regression problem. In order
to represent this problem in the form of (1.3.1) one should make the row vectors [ai ]T the rows
of an m × n matrix, thus getting matrix A, and to set y = [y1 ; ...; ym ], η = [η1 ; ...; ηm ]. The
typical regime here is m ≫ n — the number of observations is much larger than the number
of parameters to be recovered, and the challenge is to use this “observation redundancy” in
order to get rid, to the best extent possible, of the observation noise. In Compressed Sensing
the situation is opposite: the regime of interest is m ≪ n. At first glance, this regime
seems to be completely hopeless: even with no noise (η = 0), we need to recover a solution x
to an underdetermined system of linear equations y = Ax. When the number of variables is
greater than the number of observations, the solution to the system either does not exist, or is
not unique, and in both cases our goal seems to be unreachable. This indeed is so, unless we
have at our disposal some additional information on x. In Compressed Sensing, this additional
information is that x is s-sparse — has at most a given number s of nonzero entries. Note that
in many applications we indeed can be sure that the true signal x is sparse. Consider, e.g., the
following story about signal detection:
There are n locations where signal transmitters could be placed, and m locations with
the receivers. The contribution of a signal of unit magnitude originating in location
j to the signal measured by receiver i is a known quantity aij , and signals originating
in different locations merely sum up in the receivers; thus, if x is the n-dimensional
vector with entries xj representing the magnitudes of signals transmitted in locations
j = 1, 2, ..., n, then the m-dimensional vector y of (noiseless) measurements of the m
receivers is y = Ax, A ∈ R^{m×n}. Given this vector, we intend to recover x.
Now, if the receivers are hydrophones registering noises emitted by submarines in a certain part of
the Atlantic, tentative positions of submarines being discretized with resolution 500 m, the dimension
of the vector x (the number of points in the discretization grid) will be in the range of tens of
thousands, if not tens of millions. At the same time, the total number of submarines (i.e.,
nonzero entries in x) can be safely upper-bounded by 50, if not by 20.

1.3.1.1 Sparse recovery from deficient observations


Sparsity changes dramatically our possibilities to recover high-dimensional signals from their
low-dimensional linear images: given in advance that x has at most s ≪ m nonzero entries, the
possibility of exact recovery of x at least from noiseless observations y becomes quite natural.
Indeed, let us try to recover x by the following “brute force” search: we inspect, one by one,
all subsets I of the index set {1, ..., n} — first the empty set, then the n singletons {1},...,{n}, then
the n(n−1)/2 two-element subsets, etc., and each time try to solve the system of linear equations

    y = Ax,  x_j = 0 when j ∉ I;

when arriving for the first time at a solvable system, we terminate and claim that its solution
is the true vector x. It is clear that we will terminate before all sets I of cardinality ≤ s are
inspected. It is also easy to show (do it!) that if every collection of 2s distinct columns of A is
linearly independent (when m ≥ 2s, this indeed is the case for a matrix A in “general position”²),
then the procedure is correct — it indeed recovers the true vector x.
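For toy sizes the brute force search is easy to implement; the sketch below (an illustration with names and data of our own choosing) enumerates supports in order of increasing cardinality and returns the solution of the first solvable system:

```python
import itertools
import numpy as np

def brute_force_recovery(A, y, s_max, tol=1e-9):
    """Enumerate supports I of growing cardinality; return the first x with
    A x = y and x zero outside I (noiseless observations assumed).
    Caution: the number of candidate supports explodes combinatorially."""
    m, n = A.shape
    for card in range(s_max + 1):
        for I in itertools.combinations(range(n), card):
            x = np.zeros(n)
            if card > 0:
                # least squares restricted to the columns indexed by I
                xI, *_ = np.linalg.lstsq(A[:, list(I)], y, rcond=None)
                x[list(I)] = xI
            if np.linalg.norm(A @ x - y) <= tol:
                return x
    return None

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 20))   # a random A is in "general position"
x_true = np.zeros(20)
x_true[[3, 11]] = [1.5, -2.0]       # a 2-sparse signal
x_rec = brute_force_recovery(A, A @ x_true, s_max=2)
```

With m = 10 ≥ 2s = 4 and random A, every 4 columns are linearly independent, so the first solvable system encountered is the true one.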
² Here and in the sequel, the words “in general position” mean the following. We consider a family of objects,
with a particular object — an instance of the family – identified by a vector of real parameters (you may think

The bad news is that the outlined procedure becomes completely impractical already for
“small” values of s and n because of the astronomically large number of linear systems we need
to process³. A partial remedy is as follows. The outlined approach is, essentially, a particular
way to solve the optimization problem

min{nnz(x) : Ax = y}, (∗)

where nnz(x) is the number of nonzero entries in a vector x. At the present level of our knowl-
edge, this problem looks completely intractable (in fact, we do not know algorithms solving
the problem essentially faster than the brute force search), and there are strong reasons, to be
addressed later in our course, to believe that it indeed is intractable. Well, if we do not know
how to minimize under linear constraints the “bad” objective nnz(x), let us “approximate”
this objective with one which we do know how to minimize. The true objective is separable:
nnz(x) = Σ_{j=1}^n ξ(x_j), where ξ(s) is the function on the axis equal to 0 at the origin and equal
to 1 otherwise. As a matter of fact, the separable functions which we do know how to minimize
under linear constraints are sums of convex functions of x_1, ..., x_n.⁴ The most natural candidate
for the role of a convex approximation of ξ(s) is |s|; with this approximation, (∗) converts into the
ℓ1-minimization problem

    min_x { ‖x‖₁ := Σ_{j=1}^n |x_j| : Ax = y },    (1.3.2)
which is equivalent to the LP program

    min_{x,w} { Σ_{j=1}^n w_j : Ax = y, −w_j ≤ x_j ≤ w_j, 1 ≤ j ≤ n }.
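The resulting LP is small enough to be solved directly; the following sketch (illustrative sizes and data, assuming SciPy's `linprog`) recovers a 3-sparse signal from 30 noiseless observations of an 80-dimensional x by solving exactly the LP above in the variables (x, w):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n, s = 30, 80, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true                              # noiseless observations

# Variables: [x; w] in R^{2n}; minimize sum(w)
c = np.concatenate([np.zeros(n), np.ones(n)])
A_eq = np.hstack([A, np.zeros((m, n))])     # A x = y
I = np.eye(n)
# -w_j <= x_j <= w_j, i.e.  x - w <= 0  and  -x - w <= 0
A_ub = np.vstack([np.hstack([I, -I]), np.hstack([-I, -I])])
b_ub = np.zeros(2 * n)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * (2 * n))
x_rec = res.x[:n]
```

For these sizes a random Gaussian A is s-good with overwhelming probability, so the ℓ1 recovery is exact up to solver accuracy.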

For the time being, we were focusing on the (unrealistic!) case of noiseless observations
η = 0. A realistic model is that η ≠ 0. How to proceed in this case depends on what we know
about η. In the simplest case of “unknown but small noise” one assumes that, say, the Euclidean
norm ‖·‖₂ of η is upper-bounded by a given “noise level” δ: ‖η‖₂ ≤ δ. In this case, the ℓ1
recovery usually takes the form

    x̂ = Argmin_w { ‖w‖₁ : ‖Aw − y‖₂ ≤ δ }.    (1.3.3)

Now we cannot hope that our recovery x̂ will be exactly equal to the true s-sparse signal x, but
perhaps we may hope that x̂ is close to x when δ is small.
about the family of n × n square matrices; the vector of parameters in this case is the matrix itself). We say
that an instance of the family possesses certain property in general position, if the set of values of the parameter
vector for which the associated instance does not possess the property is of measure 0. Equivalently: randomly
perturbing the parameter vector of an instance, the perturbation being uniformly distributed in a (whatever
small) box, we with probability 1 get an instance possessing the property in question. E.g., a square matrix “in
general position” is nonsingular.
³ When s = 5 and n = 100, this number is ≈ 7.53e7 — much, but perhaps doable. When n = 200 and s = 20,
the number of systems to be processed jumps to ≈ 1.61e27, which is by many orders of magnitude beyond our
“computational grasp”; we would be unable to carry out that many computations even if the fate of the mankind
were dependent on them. And from the perspective of Compressed Sensing, n = 200 still is a completely toy size,
by 3-4 orders of magnitude less than we would like to handle.
⁴ A real-valued function f(s) on the real axis is called convex, if its graph, between every pair of its points, is
below the chord linking these points, or, equivalently, if f (x + λ(y − x)) ≤ f (x) + λ(f (y) − f (x)) for every x, y ∈ R
and every λ ∈ [0, 1]. For example, maxima of (finitely many) affine functions ai s + bi on the axis are convex. For
more detailed treatment of convexity of functions, see Appendix C.

Note that (1.3.3) is not an LP program anymore⁵, but still is a nice convex optimization
program which can be solved to high accuracy even for reasonably large m, n.
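One simple (if not the most efficient) way to solve (1.3.3) is to feed a general nonlinear solver the equivalent smooth program min{Σ_j(u_j + v_j) : u, v ≥ 0, ‖A(u − v) − y‖₂² ≤ δ²}, obtained by splitting w = u − v. A sketch, with illustrative data and assuming SciPy's SLSQP method:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
m, n, delta = 25, 50, 0.05
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[4, 17, 30]] = [1.0, -1.2, 0.8]
eta = rng.standard_normal(m)
eta *= 0.03 / np.linalg.norm(eta)       # noise with ||eta||_2 = 0.03 < delta
y = A @ x_true + eta

split = lambda uv: uv[:n] - uv[n:]      # w = u - v, so sum(u)+sum(v) >= ||w||_1

# Feasible starting point: with m < n the least squares solution fits y exactly
w0, *_ = np.linalg.lstsq(A, y, rcond=None)
uv0 = np.concatenate([np.maximum(w0, 0), np.maximum(-w0, 0)])

res = minimize(
    lambda uv: uv.sum(), uv0, method="SLSQP",
    bounds=[(0, None)] * (2 * n),
    constraints=[{"type": "ineq",
                  "fun": lambda uv: delta**2 - np.sum((A @ split(uv) - y) ** 2)}],
    options={"maxiter": 1000},
)
x_hat = split(res.x)
```

A dedicated conic solver would of course be preferable for large m, n; the point here is only that (1.3.3) is a perfectly tractable convex program.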

1.3.1.2 s-goodness and nullspace property


Let us say that a sensing matrix A is s-good, if in the noiseless case `1 minimization (1.3.2)
recovers correctly all s-sparse signals x. It is easy to say when this is the case: the necessary
and sufficient condition for A to be s-good is the following nullspace property:
    ∀(z ∈ R^n: Az = 0, z ≠ 0, I ⊂ {1, ..., n}, Card(I) ≤ s):  Σ_{i∈I} |z_i| < ½‖z‖₁.    (1.3.4)

In other words, for every nonzero vector z ∈ Ker A, the sum ‖z‖_{s,1} of the s largest magnitudes
of entries in z should be strictly less than half of the sum of magnitudes of all entries.
The necessity and sufficiency of the nullspace property for s-goodness of A can be
derived “from scratch” — from the fact that s-goodness means that every s-sparse
signal x should be the unique optimal solution to the associated LP minw {kwk1 :
Aw = Ax} combined with the LP optimality conditions. Another option, which
we prefer to use here, is to guess the condition and then to prove that it indeed is
necessary and sufficient for s-goodness of A. The necessity is evident: if the nullspace
property does not take place, then there exists 0 6= z ∈ KerA and s-element subset
I of the index set {1, ..., n} such that if J is the complement of I in {1, ..., n}, then
the vector zI obtained from z by zeroing out all entries with indexes not in I along
with the vector zJ obtained from z by zeroing out all entries with indexes not in J
satisfy the relation ‖z_I‖₁ ≥ ½‖z‖₁ = ½[‖z_I‖₁ + ‖z_J‖₁], that is,

    ‖z_I‖₁ ≥ ‖z_J‖₁.

Since Az = 0, we have Az_I = A[−z_J], and we conclude that the s-sparse vector z_I
is not the unique optimal solution to the LP min_w {‖w‖₁ : Aw = Az_I}: on one hand, −z_J is
a feasible solution to this program with the value of the objective at least as good as
the one at z_I; on the other hand, the solution −z_J is different from z_I (since otherwise
we should have z_I = z_J = 0, whence z = 0, which is not the case).
To prove that the nullspace property is sufficient for A to be s-good is equally easy:
indeed, assume that this property does take place, and let x be an s-sparse signal, so
that the indexes of nonzero entries in x are contained in an s-element subset I of
{1, ..., n}, and let us prove that if x̂ is an optimal solution to the LP (1.3.2), then
x̂ = x. Indeed, denoting by J the complement of I, setting z = x̂ − x and assuming
that z ≠ 0, we have Az = 0. Further, in the same notation as above we have

    ‖x_I‖₁ − ‖x̂_I‖₁ ≤ ‖z_I‖₁ < ‖z_J‖₁ = ‖x̂_J‖₁

(the first inequality is due to the Triangle inequality, the second due to the nullspace
property, and the equality due to x_J = 0, that is, z_J = x̂_J), whence ‖x‖₁ = ‖x_I‖₁ <
‖x̂_I‖₁ + ‖x̂_J‖₁ = ‖x̂‖₁, which contradicts the origin of x̂.

⁵ To get an LP, we should replace the Euclidean norm ‖Aw − y‖₂ of the residual with, say, the uniform norm
‖Aw − y‖_∞, which makes perfect sense when we start with coordinate-wise bounds on observation errors, which
indeed is the case in some applications.

1.3.1.3 From nullspace property to error bounds for imperfect ℓ1 recovery


The nullspace property establishes a necessary and sufficient condition for the validity of ℓ1
recovery in the noiseless case, whatever be the s-sparse true signal. We are about to show that,
after appropriate quantification, this property implies meaningful error bounds in the case of
imperfect recovery (presence of observation noise; near-, but not exact, s-sparsity of the true
signal; approximate minimization in (1.3.3)).
The aforementioned “proper quantification” of the nullspace property is suggested by the LP
duality theory and is as follows. Let V_s be the set of all vectors v ∈ R^n with at most s nonzero
entries, equal to ±1 each. Observing that the sum ‖z‖_{s,1} of the s largest magnitudes of entries
in a vector z is nothing but max_{v∈V_s} v^T z, the nullspace property says that the optimal value in the
LP program

    γ(v) = max_z { v^T z : Az = 0, ‖z‖₁ ≤ 1 }    (P_v)

is < 1/2 whenever v ∈ V_s (why?). Applying the LP Duality Theorem, we get, after straightforward
simplifications of the dual, that

    γ(v) = min_h ‖A^T h − v‖_∞.

Denoting by h_v an optimal solution to the right hand side LP, let us set

    γ_s(A) := max_{v∈V_s} γ(v),    β_s(A) := max_{v∈V_s} ‖h_v‖₂.

Observe that the maxima in question are well defined reals, since V_s is a finite set, and that the
nullspace property is nothing but the relation

    γ_s(A) < 1/2.    (1.3.5)

Observe also that we have the following relation:

    ∀z ∈ R^n:  ‖z‖_{s,1} ≤ β_s(A)‖Az‖₂ + γ_s(A)‖z‖₁.    (1.3.6)

Indeed, for v ∈ V_s and z ∈ R^n we have

    v^T z = [v − A^T h_v]^T z + [A^T h_v]^T z ≤ ‖v − A^T h_v‖_∞ ‖z‖₁ + h_v^T Az
          ≤ γ(v)‖z‖₁ + ‖h_v‖₂ ‖Az‖₂ ≤ γ_s(A)‖z‖₁ + β_s(A)‖Az‖₂.

Since ‖z‖_{s,1} = max_{v∈V_s} v^T z, the resulting inequality implies (1.3.6).
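For toy sizes one can compute γ_s(A) and β_s(A) exactly by solving the dual LPs min_h ‖A^T h − v‖_∞ for all v ∈ V_s. A sketch (assuming SciPy's `linprog`; the enumeration of V_s is exponential in s, so this is for illustration only), ending with a spot-check of (1.3.6):

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def gamma_of_v(A, v):
    """gamma(v) = min_h ||A^T h - v||_inf as an LP in (h, t):
    min t  s.t.  -t <= (A^T h - v)_i <= t for all i."""
    m, n = A.shape
    c = np.concatenate([np.zeros(m), [1.0]])
    ones = np.ones((n, 1))
    A_ub = np.vstack([np.hstack([A.T, -ones]),    #  (A^T h)_i - t <=  v_i
                      np.hstack([-A.T, -ones])])  # -(A^T h)_i - t <= -v_i
    b_ub = np.concatenate([v, -v])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (m + 1))
    return res.fun, res.x[:m]

def gamma_beta_s(A, s):
    """Enumerate V_s (all s-subsets, all sign patterns)."""
    m, n = A.shape
    gam, beta = 0.0, 0.0
    for I in itertools.combinations(range(n), s):
        for signs in itertools.product([-1.0, 1.0], repeat=s):
            v = np.zeros(n)
            v[list(I)] = signs
            g, h = gamma_of_v(A, v)
            gam, beta = max(gam, g), max(beta, np.linalg.norm(h))
    return gam, beta

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 10)) / np.sqrt(6)
gam, beta = gamma_beta_s(A, s=1)
# spot-check of (1.3.6) for s = 1: ||z||_{1,1} = max_i |z_i|
z = rng.standard_normal(10)
ok = np.max(np.abs(z)) <= beta * np.linalg.norm(A @ z) + gam * np.abs(z).sum() + 1e-6
```

The inequality (1.3.6) is guaranteed by the derivation above, so the check must pass for every z.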

Now consider imperfect ℓ1 recovery x ↦ y ↦ x̂, where

1. x ∈ R^n can be approximated within some accuracy ρ, measured in the ℓ1 norm, by an
s-sparse signal, or, which is the same,

    ‖x − x^s‖₁ ≤ ρ,

where x^s is the best s-sparse approximation of x (to get this approximation, one zeroes out
all but the s largest in magnitude entries in x, the ties, if any, being resolved arbitrarily);

2. y is a noisy observation of x:

    y = Ax + η,  ‖η‖₂ ≤ δ;

3. x̂ is a µ-suboptimal and ε-feasible solution to (1.3.3), specifically,

    ‖x̂‖₁ ≤ µ + min_w {‖w‖₁ : ‖Aw − y‖₂ ≤ δ}   and   ‖Ax̂ − y‖₂ ≤ ε.
w

Theorem 1.3.1 Let A, s be given, and let the relation

    ∀z:  ‖z‖_{s,1} ≤ β‖Az‖₂ + γ‖z‖₁    (1.3.7)

hold true with some parameters γ < 1/2 and β < ∞ (as definitely is the case when A is s-good,
γ = γ_s(A) and β = β_s(A)). Then for the outlined imperfect ℓ1 recovery the following error bound
holds true:

    ‖x̂ − x‖₁ ≤ [2β(δ + ε) + µ + 2ρ] / (1 − 2γ),    (1.3.8)

i.e., the recovery error is of order of the maximum of the “imperfections” mentioned in 1) — 3).

Proof. Let I be the set of indexes of the s largest in magnitude entries in x, J be
the complement of I, and z = x̂ − x. Observing that x is feasible for (1.3.3), we have
min_w {‖w‖₁ : ‖Aw − y‖₂ ≤ δ} ≤ ‖x‖₁, whence

    ‖x̂‖₁ ≤ µ + ‖x‖₁,

or, in the same notation as above,

    ‖x_I‖₁ − ‖x̂_I‖₁ ≥ ‖x̂_J‖₁ − ‖x_J‖₁ − µ,

where the left hand side is ≤ ‖z_I‖₁ (Triangle inequality) and the right hand side is
≥ ‖z_J‖₁ − 2‖x_J‖₁ − µ, whence

    ‖z_J‖₁ ≤ µ + ‖z_I‖₁ + 2‖x_J‖₁,

so that

    ‖z‖₁ ≤ µ + 2‖z_I‖₁ + 2‖x_J‖₁.    (a)

We further have

    ‖z_I‖₁ ≤ β‖Az‖₂ + γ‖z‖₁,

which combines with (a) to imply that

    ‖z_I‖₁ ≤ β‖Az‖₂ + γ[µ + 2‖z_I‖₁ + 2‖x_J‖₁],

whence, in view of γ < 1/2 and due to ‖x_J‖₁ ≤ ρ,

    ‖z_I‖₁ ≤ [β‖Az‖₂ + γ[µ + 2ρ]] / (1 − 2γ).

Combining this bound with (a), we get

    ‖z‖₁ ≤ µ + 2ρ + (2/(1 − 2γ)) [β‖Az‖₂ + γ[µ + 2ρ]].

Recalling that z = x̂ − x and that therefore ‖Az‖₂ ≤ ‖Ax − y‖₂ + ‖Ax̂ − y‖₂ ≤ δ + ε, we finally
get

    ‖x̂ − x‖₁ ≤ µ + 2ρ + (2/(1 − 2γ)) [β[δ + ε] + γ[µ + 2ρ]].  □

1.3.1.4 Compressed Sensing: Limits of performance


The Compressed Sensing theory demonstrates that

1. For given m, n with m ≪ n (say, m/n ≤ 1/2), there exist m × n sensing matrices which
are s-good for values of s “nearly as large as m,” specifically, for s ≤ O(1)·m/ln(n/m).⁶
Moreover, there are natural families of matrices where this level of goodness “is a rule.”
E.g., when drawing an m × n matrix at random from the Gaussian or the ±1 distributions
(i.e., filling the matrix with independent realizations of a random variable which is either
Gaussian (zero mean, variance 1/m), or takes values ±1/√m with probabilities 0.5),⁷ the
result will be s-good, for the outlined value of s, with probability approaching 1 as m and
n grow. Moreover, for the indicated values of s and randomly selected matrices A, one has
β_s(A) ≤ O(1)√s with probability approaching one when m, n grow.

2. The above results can be considered as good news. The bad news is that we do not
know how to check efficiently, given s and a sensing matrix A, that the matrix is s-
good. Indeed, we know that a necessary and sufficient condition for s-goodness of A is
the nullspace property (1.3.5); this, however, does not help, since the quantity γ_s(A) is
difficult to compute: computing it by definition requires solving 2^s C_n^s LP programs (P_v),
v ∈ V_s, which is an astronomic number already for moderate n unless s is really small, like
1 or 2. And no alternative efficient way to compute γ_s(A) is known.
As a matter of fact, not only do we not know how to check s-goodness efficiently; there still
is no efficient recipe allowing to build, given m, an m × 2m matrix A which is provably
s-good for s larger than O(1)√m — a much smaller “level of goodness” than the one
(s = O(1)m) promised by theory for typical randomly generated matrices.⁸ The “common
life” analogy of this pitiful situation would be as follows: you know that with probability
at least 0.9, a brick in your wall is made of gold, and at the same time, you do not know
how to tell a golden brick from a usual one.⁹
⁶ From now on, O(1)’s denote positive absolute constants – appropriately chosen numbers like 0.5, or 1, or
perhaps 100,000. We could, in principle, replace all O(1)’s by specific numbers; following the standard mathe-
matical practice, we do not do it, partly from laziness, partly because the particular values of these numbers in
our context are irrelevant.
⁷ Entries “of order of 1/√m” make the Euclidean norms of columns in an m × n matrix A nearly one, which is the
most convenient for Compressed Sensing normalization of A.
⁸ Note that the naive algorithm “generate m × 2m matrices at random until an s-good, with s promised by the
theory, matrix is generated” is not an efficient recipe, since we do not know how to check s-goodness efficiently.
⁹ This phenomenon is met in many other situations. E.g., in 1938 Claude Shannon (1916-2001), “the father
of Information Theory,” made (in his M.Sc. Thesis!) a fundamental discovery as follows. Consider a Boolean
function of n Boolean variables (i.e., both the function and the variables take values 0 and 1 only); as it is easily
seen there are 2^{2^n} functions of this type, and every one of them can be computed by a dedicated circuit comprised of
“switches” implementing just 3 basic operations AND, OR and NOT (like computing a polynomial can be carried
out on a circuit with nodes implementing just two basic operation: addition of reals and their multiplication). The
discovery of Shannon was that every Boolean function of n variables can be computed on a circuit with no more
than C·2^n/n switches, where C is an appropriate absolute constant. Moreover, Shannon proved that “nearly all”
Boolean functions of n variables require circuits with at least c·2^n/n switches, c being another absolute constant;
“nearly all” in this context means that the fraction of “easy to compute” functions (i.e., those computable by
circuits with fewer than c·2^n/n switches) among all Boolean functions of n variables goes to 0 as n goes to ∞. Now,
computing Boolean functions by circuits comprised of switches was an important technical task already in 1938;
its role in our today life can hardly be overestimated — the outlined computation is nothing but what is going
on in a computer. Given this observation, it is not surprising that the Shannon discovery of 1938 was the subject
of countless refinements, extensions, modifications, etc., etc. What is still missing, is a single individual example
of a “difficult to compute” Boolean function: as a matter of fact, all multivariate Boolean functions f (x1 , ..., xn )

1.3.1.5 Verifiable sufficient conditions for s-goodness


As it was already mentioned, we do not know efficient ways to check s-goodness of a given sensing
matrix in the case when s is not really small. The difficulty here is the standard one: to certify
s-goodness, we should verify (1.3.5), and the most natural way to do it, based on computing γ_s(A),
is blocked: by definition,

    γ_s(A) = max_z { ‖z‖_{s,1} : Az = 0, ‖z‖₁ ≤ 1 },    (1.3.9)

that is, γ_s(A) is the maximum of a convex function ‖z‖_{s,1} over the convex set {z : Az = 0, ‖z‖₁ ≤ 1}.
Although both the function and the set are simple, maximizing a convex function over a convex
set typically is difficult. The only notable exception here is the case of maximizing a convex
function f over a convex set X given as the convex hull of a finite set: X = Conv{v^1, ..., v^N}. In
this case, a maximizer of f on the finite set {v^1, ..., v^N} (this maximizer can be found by brute
force computation of the values of f at the points v^i) is also a maximizer of f over the entire X
(check it yourself or see Section C.5).
Given that the nullspace property “as it is” is difficult to check, we can look for “the second
best thing” — efficiently computable upper and lower bounds on the “goodness” s∗ (A) of A
(i.e., on the largest s for which A is s-good).
Let us start with efficient lower bounding of s∗ (A), that is, with efficiently verifiable suffi-
cient conditions for s-goodness. One way to derive such a condition is to specify an efficiently
computable upper bound γ̂_s(A) on γ_s(A). With such a bound at our disposal, the efficiently
verifiable condition γ̂_s(A) < 1/2 clearly will be a sufficient condition for the validity of (1.3.5).
The question is how to find an efficiently computable upper bound on γ_s(A), and here is
one of the options:

    γ_s(A) = max_z { max_{v∈V_s} v^T z : Az = 0, ‖z‖₁ ≤ 1 }
    ⇒ ∀H ∈ R^{m×n}:  γ_s(A) = max_z { max_{v∈V_s} v^T [I − H^T A]z : Az = 0, ‖z‖₁ ≤ 1 }
                             ≤ max_z { max_{v∈V_s} v^T [I − H^T A]z : ‖z‖₁ ≤ 1 }
                             = max_{z∈Z} ‖[I − H^T A]z‖_{s,1},   Z = {z : ‖z‖₁ ≤ 1}.

We see that whatever be the “design parameter” H ∈ R^{m×n}, the quantity γ_s(A) does not exceed
the maximum of a convex function ‖[I − H^T A]z‖_{s,1} of z over the unit ℓ1-ball Z. But the latter
set is perfectly well suited for maximizing convex functions: it is the convex hull of a small (just
2n points, ± basic orths) set. We end up with

    ∀H ∈ R^{m×n}:  γ_s(A) ≤ max_{z∈Z} ‖[I − H^T A]z‖_{s,1} = max_{1≤j≤n} ‖Col_j[I − H^T A]‖_{s,1},

where Col_j(B) denotes the j-th column of a matrix B. We conclude that

    γ_s(A) ≤ γ̂_s(A) := min_H Ψ(H),   Ψ(H) := max_j ‖Col_j[I − H^T A]‖_{s,1}.    (1.3.10)

The function Ψ(H) is efficiently computable and convex, which is why its minimization can be
carried out efficiently. Thus, γ̂_s(A) is an efficiently computable upper bound on γ_s(A).
Some instructive remarks are in order.
people managed to describe explicitly are computable by circuits with just linear in n number of switches!

1. The trick which led us to γ̂_s(A) is applicable to bounding from above the maximum of a
convex function f over a set X of the form {x ∈ Conv{v^1, ..., v^N} : Ax = 0} (i.e., over
the intersection of an “easy for convex maximization” domain and a linear subspace). The
trick is merely to note that if A is m × n, then for every H ∈ R^{m×n} one has

    max_x { f(x) : x ∈ Conv{v^1, ..., v^N}, Ax = 0 } ≤ max_{1≤i≤N} f([I − H^T A]v^i).    (!)

Indeed, a feasible solution x to the left hand side optimization problem can be represented
as a convex combination Σ_i λ_i v^i, and since Ax = 0, we have also x = Σ_i λ_i [I − H^T A]v^i;
since f is convex, we have therefore f(x) ≤ max_i f([I − H^T A]v^i), and (!) follows. Since (!)
takes place for every H, we arrive at

    max_x { f(x) : x ∈ Conv{v^1, ..., v^N}, Ax = 0 } ≤ γ̂ := min_H max_{1≤i≤N} f([I − H^T A]v^i),

and, same as above, γ̂ is efficiently computable, provided that f is an efficiently computable
convex function.

2. The efficiently computable upper bound γ̂_s(A) is polyhedrally representable — it is the
optimal value in an explicit LP program. To derive this program, we start with a polyhedral
representation, important in its own right, of the function ‖z‖_{s,1}:

Lemma 1.3.1 For every z ∈ R^n and every integer s ≤ n, we have

    ‖z‖_{s,1} = min_{w,t} { st + Σ_{i=1}^n w_i :  |z_i| ≤ t + w_i, 1 ≤ i ≤ n,  w ≥ 0 }.    (1.3.11)

Proof. One way to get (1.3.11) is to note that ‖z‖_{s,1} = max_{v∈V_s} v^T z = max_{v∈Conv(V_s)} v^T z and
to verify that the convex hull of the set V_s is exactly the polytope {v ∈ R^n : |v_i| ≤ 1 ∀i, Σ_i |v_i| ≤ s}
(or, which is the same, to verify that the vertices of this polytope are exactly the vectors
from V_s). With this verification at our disposal, we get

    ‖z‖_{s,1} = max_v { v^T z : |v_i| ≤ 1 ∀i, Σ_i |v_i| ≤ s };

applying LP Duality, we get the representation (1.3.11). A shortcoming of the outlined
approach is that one indeed should prove that the extreme points of the polytope are exactly the
points from V_s; this is a relatively easy exercise which we strongly recommend to do. We,
however, prefer to demonstrate (1.3.11) directly. Indeed, if (w, t) is feasible for (1.3.11),
then |z_i| ≤ w_i + t, whence the sum of the s largest magnitudes of entries in z does not exceed
st plus the sum of the corresponding s entries in w, and thus – since w is nonnegative –
does not exceed st + Σ_i w_i. Thus, the right hand side in (1.3.11) is ≥ the left hand side.
On the other hand, let |z_{i_1}| ≥ |z_{i_2}| ≥ ... ≥ |z_{i_s}| be the s largest magnitudes of entries in
z (so that i_1, ..., i_s are distinct from each other), and let t = |z_{i_s}|, w_i = max[|z_i| − t, 0]. It
is immediately seen that (t, w) is feasible for the right hand side problem in (1.3.11) and
that st + Σ_i w_i = Σ_{j=1}^s |z_{i_j}| = ‖z‖_{s,1}. Thus, the right hand side in (1.3.11) is ≤ the left
hand side. □
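Lemma 1.3.1 is easy to test numerically: for a random z and several values of s, the optimal value of the LP in (1.3.11) must coincide with the sum of the s largest magnitudes of entries in z. A sketch, assuming SciPy's `linprog`:

```python
import numpy as np
from scipy.optimize import linprog

def norm_s1_lp(z, s):
    """Right hand side of (1.3.11): min_{w,t} s*t + sum(w)
    s.t. |z_i| <= t + w_i, w >= 0 (t is free)."""
    n = z.size
    c = np.concatenate([np.ones(n), [float(s)]])      # variables [w; t]
    # |z_i| <= t + w_i  is  -w_i - t <= -|z_i|
    A_ub = np.hstack([-np.eye(n), -np.ones((n, 1))])
    b_ub = -np.abs(z)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * n + [(None, None)])
    return res.fun

def norm_s1_direct(z, s):
    """Left hand side: sum of the s largest magnitudes of entries of z."""
    return np.sort(np.abs(z))[::-1][:s].sum()

rng = np.random.default_rng(4)
z = rng.standard_normal(12)
vals = [(norm_s1_lp(z, s), norm_s1_direct(z, s)) for s in (1, 3, 7)]
```

At the optimum, t settles at the s-th largest magnitude and w picks up the excesses, exactly as in the direct proof above.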

Lemma 1.3.1 straightforwardly leads to the following polyhedral representation of γ̂_s(A):

    γ̂_s(A) := min_H max_j ‖Col_j[I − H^T A]‖_{s,1}
             = min_{H, w^j, t_j, τ} { τ :  −w_i^j − t_j ≤ [I − H^T A]_{ij} ≤ w_i^j + t_j ∀i, j;
                                           w^j ≥ 0 ∀j;  s t_j + Σ_i w_i^j ≤ τ ∀j }.

3. The quantity γ̂_1(A) is exactly equal to γ_1(A), rather than being just an upper bound on the
latter quantity.
Indeed, we have

    γ_1(A) = max_i max_z {|z_i| : Az = 0, ‖z‖₁ ≤ 1} = max_i [γ^i := max_z {z_i : Az = 0, ‖z‖₁ ≤ 1}]

(the absolute value can be dropped since z is feasible if and only if −z is). Applying LP Duality,
we get

    γ^i = min_h ‖e^i − A^T h‖_∞,    (P_i)

where the e^i are the standard basic orths in R^n. Denoting by h^i optimal solutions to the latter
problem and setting H = [h^1, ..., h^n], we get

    γ_1(A) = max_i γ^i = max_i ‖e^i − A^T h^i‖_∞ = max_{i,j} |[e^i − A^T h^i]_j|
           = max_{i,j} |[I − A^T H]_{ij}| = max_{i,j} |[I − H^T A]_{ij}|
           = max_j ‖Col_j[I − H^T A]‖_{1,1}
           ≥ γ̂_1(A);

since the opposite inequality γ_1(A) ≤ γ̂_1(A) definitely holds true, we conclude that

    γ̂_1(A) = γ_1(A) = min_H max_{i,j} |[I − H^T A]_{ij}|.

Observe that an optimal solution H to the latter problem can be found column by column,
with the j-th column h^j of H being an optimal solution to the LP (P_j); this is in a nice contrast
with computing γ̂_s(A) for s > 1, where we should solve a single LP with O(n²) variables
and constraints, which is typically much more time consuming than solving O(n) LP’s with
O(n) variables and constraints each, as it is the case when computing γ̂_1(A).
Observe also that if p, q are positive integers, then for every vector z one has ‖z‖_{pq,1} ≤
q‖z‖_{p,1}, and in particular ‖z‖_{s,1} ≤ s‖z‖_{1,1} = s‖z‖_∞. It follows that if H is such that
γ̂_p(A) = max_j ‖Col_j[I − H^T A]‖_{p,1}, then γ̂_{pq}(A) ≤ q max_j ‖Col_j[I − H^T A]‖_{p,1} ≤ q γ̂_p(A). In
particular,

    γ̂_s(A) ≤ s γ̂_1(A),

meaning that the easy-to-verify condition

    γ̂_1(A) < 1/(2s)

is sufficient for the validity of the condition

    γ̂_s(A) < 1/2

and thus is sufficient for s-goodness of A.



4. Assume that A and s are such that the s-goodness of A can be certified via our verifiable
sufficient condition, that is, we can point out an m × n matrix H such that

    γ := max_j ‖Col_j[I − H^T A]‖_{s,1} < 1/2.

Now, for every n × n matrix B, any norm ‖·‖ on R^n and every vector z ∈ R^n we clearly
have

    ‖Bz‖ ≤ [max_j ‖Col_j[B]‖] ‖z‖₁

(why? write Bz = Σ_j z_j Col_j[B] and use the Triangle inequality). Therefore, from the
definition of γ, for every vector z we have ‖[I − H^T A]z‖_{s,1} ≤ γ‖z‖₁, so that

    ‖z‖_{s,1} ≤ ‖H^T Az‖_{s,1} + ‖[I − H^T A]z‖_{s,1} ≤ [s max_j ‖Col_j[H]‖₂] ‖Az‖₂ + γ‖z‖₁,

meaning that H certifies not only the s-goodness of A, but also an inequality of the form
(1.3.7) and thus – the associated error bound (1.3.8) for imperfect ℓ1 recovery.
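The column-by-column computation of γ̂₁(A) from Remark 3 takes just n small LPs; the sketch below (illustrative sizes, assuming SciPy's `linprog`) computes γ̂₁(A) = max_j min_h ‖e^j − A^T h‖_∞ and reads off the largest s certified by the easy condition γ̂₁(A) < 1/(2s):

```python
import numpy as np
from scipy.optimize import linprog

def gamma1_hat(A):
    """gamma^_1(A) = max_j min_h ||e^j - A^T h||_inf, one small LP per column."""
    m, n = A.shape
    ones = np.ones((n, 1))
    worst = 0.0
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0
        c = np.concatenate([np.zeros(m), [1.0]])    # variables [h; t]
        A_ub = np.vstack([np.hstack([A.T, -ones]),  #  (A^T h)_i - t <=  e_i
                          np.hstack([-A.T, -ones])])# -(A^T h)_i - t <= -e_i
        b_ub = np.concatenate([e, -e])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] * (m + 1))
        worst = max(worst, res.fun)
    return worst

rng = np.random.default_rng(5)
A = rng.standard_normal((40, 64)) / np.sqrt(40)
g1 = gamma1_hat(A)
# Largest s with gamma^_1(A) < 1/(2s); if 1/(2*g1) is exactly an integer,
# the strict inequality would require one less.
s_certified = int(np.floor(1.0 / (2.0 * g1))) if g1 > 0 else A.shape[1]
```

Note that h = 0 is always feasible in (P_j), so γ̂₁(A) ≤ 1; how much smaller it comes out depends on the matrix.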

1.3.2 Supervised Binary Machine Learning via LP Support Vector Machines


Imagine that we have a source of feature vectors — collections x of n measurements representing,
e.g., the results of n medical tests taken from patients, and a patient can be affected, or not
affected, by a particular illness. “In reality,” these feature vectors x go along with labels y taking
values ±1; in our example, the label −1 says that the patient whose test results are recorded
in the feature vector x does not have the illness in question, while the label +1 means that the
patient is ill.
We assume that there is certain dependence between the feature vectors and the labels, and
our goal is to predict, given a feature vector alone, the value of the label. What we have at our
disposal is a training sample (xi , y i ), 1 ≤ i ≤ N of examples (xi , y i ) where we know both the
feature vector and the label; given this sample, we want to build a classifier – a function f (x)
on the space of feature vectors x taking values ±1 – which we intend to use to predict, given
the value of a new feature vector, the value of the corresponding label. In our example this
setup reads: we are given medical records containing both the results of medical tests and the
diagnoses of N patients; given this data, we want to learn how to predict the diagnosis given
the results of the tests taken from a new patient.
The simplest predictors we can think about are just the “linear” ones looking as follows. We
fix an affine form z T x + b of a feature vector, choose a positive threshold γ and say that if the
value of the form at a feature vector x is “well positive” – is ≥ γ – then the proposed label for
x is +1; similarly, if the value of the form at x is “well negative” – is ≤ −γ, then the proposed
label will be −1. In the “gray area” −γ < z T x + b < γ we decline to classify. Noting that the
actual value of the threshold is of no importance (to compensate a change in the threshold by
certain factor, it suffices to multiply by this factor both z and b, without affecting the resulting
classification), we from now on normalize the situation by setting the threshold to the value 1.
Now, we have explained how a linear classifier works, but where from to take it? An intu-
itively appealing idea is to use the training sample in order to “train” our potential classifier –
to choose z and b in a way which ensures correct classification of the examples in the sample.
This amounts to solving the system of linear inequalities

    z^T x^i + b ≥ 1 ∀(i ≤ N : y^i = +1)   &   z^T x^i + b ≤ −1 ∀(i : y^i = −1),



which can be written equivalently as

    y^i (z^T x^i + b) ≥ 1  ∀i = 1, ..., N.

Geometrically speaking, we want to find a “stripe”

    −1 < z^T x + b < 1    (∗)

between two parallel hyperplanes {x : z^T x + b = −1} and {x : z^T x + b = 1} such that all
“positive examples” (those with the label +1) from the training sample are on one side of this
stripe, while all negative (the label −1) examples from the sample are on the other side of the
stripe. With this approach, it is natural to look for the “thickest” stripe separating the positive
and the negative examples. Since the geometric width of the stripe is 2/√(z^T z) (why?), this
amounts to solving the optimization program

    min_{z,b} { ‖z‖₂ := √(z^T z) :  y^i (z^T x^i + b) ≥ 1, 1 ≤ i ≤ N };    (1.3.12)

The latter problem, of course, is not necessarily feasible: it well can happen that it is impossible
to separate the positive and the negative examples in the training sample by a stripe between two
parallel hyperplanes. To handle this possibility, we allow for classification errors and minimize
a weighted sum of ‖z‖₂ and the total penalty for these errors. Since the absence of a classification
penalty at an example (x^i, y^i) in our context is equivalent to the validity of the inequality
y^i(z^T x^i + b) ≥ 1, the most natural penalty for misclassification of the example is max[1 −
y^i(z^T x^i + b), 0]. With this in mind, the problem of building “the best on the training sample”
classifier becomes the optimization problem
    min_{z,b} { ‖z‖₂ + λ Σ_{i=1}^N max[1 − y^i (z^T x^i + b), 0] },    (1.3.13)

where λ > 0 is responsible for the “compromise” between the width of the stripe (∗) and the
“separation quality” of this stripe; how to choose the value of this parameter is an additional
story we do not touch here. Note that the outlined approach to building classifiers is the most
basic and the most simplistic version of what in Machine Learning is called “Support Vector
Machines.”
Now, (1.3.13) is not an LO program: we know how to get rid of the nonlinearities max[1 −
y i (z T xi + b), 0] by adding slack variables and linear constraints, but we cannot get rid of the
nonlinearity brought by the term kzk2 . Well, there are situations in Machine Learning where
it makes sense to get rid of this term by “brute force,” specifically, by replacing the k · k2 with
k · k1 . The rationale behind this “brute force” action is as follows. The dimension n of the
feature vectors can be large. In our medical example, it could be in the range of tens, which
perhaps is “not large;” but think about digitalized images of handwritten letters, where we want
to distinguish between handwritten letters ”A” and ”B;” here the dimension of x can well be
in the range of thousands, if not millions. Now, it would be highly desirable to design a good
classifier with sparse vector of weights z, and there are several reasons for this desire. First,
intuition says that a good on the training sample classifier which takes into account just 3 of the
features should be more “robust” than a classifier which ensures equally good classification of
the training examples, but uses for this purpose 10,000 features; we have all reasons to believe
that the first classifier indeed “goes to the point,” while the second one adjusts itself to random,
1.3. SELECTED ENGINEERING APPLICATIONS OF LP 41

irrelevant for the “true classification,” properties of the training sample. Second, to have a
good classifier which uses small number of features is definitely better than to have an equally
good classifier which uses a large number of them (in our medical example: the “predictive
power” being equal, we definitely would prefer predicting diagnosis via the results of 3 tests to
predicting via the results of 20 tests). Finally, if it is possible to classify well via a small number
of features, we hopefully have good chances to understand the mechanism of the dependencies
between these measured features and the feature which presence/absence we intend to predict
— it usually is much easier to understand interaction between 2-3 features than between 2,000-
3,000 of them. Now, the SVMs (1.3.12), (1.3.13) are not well suited for carrying out the outlined
feature selection task, since minimizing the norm kzk2 under constraints on z (this is what explicitly
goes on in (1.3.12) and implicitly goes on in (1.3.13)10 ) typically results in a “spread” optimal
solution, with many small nonzero components. In view of our “Compressed Sensing” discussion,
we could expect that minimizing the `1 -norm of z will result in “better concentrated” optimal
solution, which leads us to what is called “LO Support Vector Machine.” Here the classifier is
given by the solution of the k · k1 -analogy of (1.3.13), specifically, the optimization problem

min_{z,b} { kzk1 + λ Σ_{i=1}^{N} max[1 − y i (z T xi + b), 0] }.  (1.3.14)

This problem clearly reduces to the LO program


 
min_{z,b,w,ξ} { Σ_{j=1}^{n} wj + λ Σ_{i=1}^{N} ξi : −wj ≤ zj ≤ wj , 1 ≤ j ≤ n, ξi ≥ 0, ξi ≥ 1 − y i (z T xi + b), 1 ≤ i ≤ N }.
(1.3.15)
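The reduction to the LP (1.3.15) can be made concrete in a few lines. Below is a minimal sketch, assuming scipy's linprog as the LP solver; the two-dimensional synthetic sample, the stacking order of the variables [z; b; w; ξ] and the value λ = 1 are illustrative choices, not part of the text.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, n = 20, 2
X = np.vstack([rng.normal(+2.0, 1.0, (N // 2, n)),   # "positive" examples
               rng.normal(-2.0, 1.0, (N // 2, n))])  # "negative" examples
y = np.array([+1.0] * (N // 2) + [-1.0] * (N // 2))
lam = 1.0                                            # weight of the error penalty

# Variables stacked as [z (n) | b (1) | w (n) | xi (N)];
# objective: sum_j w_j + lam * sum_i xi_i.
nv = n + 1 + n + N
c = np.concatenate([np.zeros(n + 1), np.ones(n), lam * np.ones(N)])

rows, rhs = [], []
for j in range(n):              # +-z_j - w_j <= 0, i.e. -w_j <= z_j <= w_j
    for s in (+1.0, -1.0):
        row = np.zeros(nv)
        row[j], row[n + 1 + j] = s, -1.0
        rows.append(row)
        rhs.append(0.0)
for i in range(N):              # -y_i (z^T x_i + b) - xi_i <= -1
    row = np.zeros(nv)
    row[:n] = -y[i] * X[i]
    row[n] = -y[i]
    row[n + 1 + n + i] = -1.0
    rows.append(row)
    rhs.append(-1.0)

bounds = [(None, None)] * (n + 1) + [(0, None)] * (n + N)
res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs),
              bounds=bounds, method="highs")
z, b = res.x[:n], res.x[n]
```

For well-separated classes the optimal slacks ξ come out (near) zero and z defines a separating stripe; increasing λ trades the width of the stripe for separation quality, as discussed around (1.3.13).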

Concluding remarks. A reader could ask, what is the purpose of training the classifier on the
training set of examples, where we from the very beginning know the labels of all the examples?
Why should a classifier which classifies well on the training set be good at new examples? Well,
intuition says that if a simple rule with a relatively small number of “tuning parameters” (as it is
the case with a sparse linear classifier) recovers well the labels in examples from a large enough
sample, this classifier should have learned something essential about the dependency between
feature vectors and labels, and thus should be able to classify well new examples. Machine
Learning theory offers a solid probabilistic framework in which “our intuition is right”, so that
under assumptions (not too restrictive) imposed by this framework it is possible to establish
quantitative links between the size of the training sample, the behavior of the classifier on this
sample (quantified by the k · k2 or k · k1 norm of the resulting z and the value of the penalty
for misclassification), and the predictive power of the classifier, quantified by the probability of
misclassification of a new example; roughly speaking, good behavior of a linear classifier achieved
at a large training sample ensures low probability of misclassifying a new example.
10
To understand the latter claim, take an optimal solution (z∗ , b∗ ) to (1.3.13), set Λ = Σ_{i=1}^{N} max[1 − y i (z∗T xi +
b∗ ), 0] and note that (z∗ , b∗ ) solves the optimization problem

min_{z,b} { kzk2 : Σ_{i=1}^{N} max[1 − y i (z T xi + b), 0] ≤ Λ }

(why?).

1.3.3 Synthesis of linear controllers


1.3.3.1 Discrete time linear dynamical systems

The most basic and well studied entity in control is a linear dynamical system (LDS). In the
sequel, we focus on discrete time LDS modeled as

x0 = z [initial condition]
xt+1 = At xt + Bt ut + Rt dt , [state equations] (1.3.16)
yt = Ct xt + Dt dt [outputs]

In this description, t = 0, 1, 2, ... are time instants,

• xt ∈ Rnx is state of the system at instant t,

• ut ∈ Rnu is control generated by system’s controller at instant t,

• dt ∈ Rnd is external disturbance coming from system’s environment at instant t,

• yt ∈ Rny is observed output at instant t,

• At , Bt , ..., Dt are (perhaps depending on t) matrices of appropriate sizes specifying system’s


dynamics and relations between states, controls, external disturbances and outputs.
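To fix ideas, the recursion (1.3.16) can be run forward as follows; the matrices below are randomly generated stand-ins (taken time-invariant for brevity), and the zero control is just a placeholder for a controller to be designed.

```python
import numpy as np

rng = np.random.default_rng(1)
nx, nu, nd, ny, N = 3, 2, 2, 2, 5
A = 0.5 * rng.standard_normal((nx, nx))   # A_t (constant in t here)
B = rng.standard_normal((nx, nu))         # B_t
R = rng.standard_normal((nx, nd))         # R_t
C = rng.standard_normal((ny, nx))         # C_t
D = rng.standard_normal((ny, nd))         # D_t

def run_open_loop(z, d, u):
    """x_{t+1} = A x_t + B u_t + R d_t, y_t = C x_t + D d_t, x_0 = z."""
    x, xs, ys = z, [z], []
    for t in range(len(u)):
        ys.append(C @ x + D @ d[t])
        x = A @ x + B @ u[t] + R @ d[t]
        xs.append(x)
    return xs, ys

z = rng.standard_normal(nx)               # initial state
d = rng.standard_normal((N, nd))          # external disturbances
u = np.zeros((N, nu))                     # placeholder "no control" law
xs, ys = run_open_loop(z, d, u)
```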

What we have described so far is called an open loop system (or open loop plant). This plant
should be augmented by a controller which generates subsequent controls. The standard assumption
is that the control ut is generated in a deterministic fashion and depends on the outputs
y0 , ..., yt observed prior to instant t and at this very instant (non-anticipative, or causal control):

ut = Ut (y0 , ..., yt ); (1.3.17)

here Ut are arbitrary everywhere defined functions of their arguments taking values in Rnu .
Plant augmented by controller is called a closed loop system; its behavior clearly depends on
the initial state and external disturbances only.

Affine control. The simplest (and extremely widely used) form of control law is affine control,
where ut are affine functions of the outputs:

ut = ξt + Ξt0 y0 + Ξt1 y1 + ... + Ξtt yt , (1.3.18)

where ξt are vectors, and Ξtτ , 0 ≤ τ ≤ t, are matrices of appropriate sizes.


Augmenting linear open loop system (1.3.16) with affine controller (1.3.18), we get a well
defined closed loop system in which states, controls and outputs are affine functions of the
initial state and external disturbances; moreover, xt depends solely on the initial state z and
the collection dt−1 = [d0 ; ...; dt−1 ] of disturbances prior to instant t, while ut and yt may depend
on z and the collection dt = [d0 ; ...; dt ] of disturbances including the one at instant t.

Design specifications and the Analysis problem. The entities of primary interest in
control are states and controls; we can arrange states and controls into a long vector – the
state-control trajectory
wN = [xN ; xN −1 ; ...; x1 ; uN −1 ; ...; u0 ];
here N is the time horizon on which we are interested in system’s behavior. With affine control
law (1.3.18), this trajectory is an affine function of z and dN :

wN = wN (z, dN ) = ωN + ΩN [z; dN ]

where the vector ωN and the matrix ΩN are readily given by the matrices At , ..., Dt , 0 ≤ t < N , from the
description of the open loop system and by the collection ξ~N = {ξt , Ξtτ : 0 ≤ τ ≤ t < N } of the
parameters of the affine control law (1.3.18).
Imagine that the “desired behaviour” of the closed loop system on the time horizon in
question is given by a system of linear inequalities

BwN ≤ b (1.3.19)

which should be satisfied by the state-control trajectory provided that the initial state z and the
disturbances dN vary in their “normal ranges” Z and DN , respectively; this is a pretty general
form of design specifications. The fact that wN depends affinely on [z; dN ] makes it easy to
solve the Analysis problem: to check whether a given control law (1.3.18) ensures the validity
of design specifications (1.3.19). Indeed, to this end we should check whether the functions
[BwN − b]i , 1 ≤ i ≤ I (I is the number of linear inequalities in (1.3.19)) remain nonpositive
whenever dN ∈ DN , z ∈ Z; for a given control law, the functions in question are explicitly given
affine functions φi (z, dN ) of [z; dN ], so that what we need to verify is that
max_{[z;dN ]} { φi (z, dN ) : [z; dN ] ∈ ZDN := Z × DN } ≤ 0, i = 1, ..., I.

Whenever DN and Z are explicitly given convex sets, the latter problems are convex and thus
easy to solve. Moreover, if ZDN is given by polyhedral representation
ZDN := Z × DN = { [z; dN ] : ∃v : P [z; dN ] + Qv ≤ r } (1.3.20)

(this is a pretty flexible and general enough way to describe typical ranges of disturbances and
initial states), the analysis problem reduces to a bunch of explicit LPs
max_{z,dN ,v} { φi ([z; dN ]) : P [z; dN ] + Qv ≤ r }, 1 ≤ i ≤ I;

the answer in the analysis problem is positive if and only if the optimal values in all these LPs
are nonpositive.
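A minimal sketch of this reduction, assuming scipy's linprog, taking for illustration the box range {ζ : −1 ≤ ζk ≤ 1} (so the polyhedral representation P ζ ≤ r needs no slack v), and two made-up affine functions φi:

```python
import numpy as np
from scipy.optimize import linprog

m = 4                                     # dimension of ζ = [z; d^N]
P = np.vstack([np.eye(m), -np.eye(m)])    # the box {ζ : -1 <= ζ_k <= 1} as P ζ <= r
r = np.ones(2 * m)

# Made-up affine functions φ_i(ζ) = a_i^T ζ + c_i; in the real setting they
# would come from B w^N(ζ) - b for the given control law.
phis = [(np.array([0.1, -0.2, 0.0, 0.3]), -0.7),
        (np.array([0.05, 0.05, 0.05, 0.05]), -0.5)]

def spec_holds(a, c):
    # linprog minimizes, so we maximize a^T ζ by minimizing -a^T ζ
    res = linprog(-a, A_ub=P, b_ub=r, bounds=[(None, None)] * m,
                  method="highs")
    return bool(-res.fun + c <= 1e-9)     # is max φ_i over the range <= 0 ?

ok = all(spec_holds(a, c) for a, c in phis)
```

The answer in the Analysis problem is positive exactly when every such LP reports a nonpositive optimal value.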

Synthesis problem. As we have seen, with affine control and affine design specifications, it
is easy to check whether a given control law meets the specifications. The basic problem in
linear control is, however, somehow different: usually we need to build an affine control which
meets the design specifications (or to detect that no such control exists). And here we run into
a difficult problem: while the state-control trajectory for a given affine control is an easy-to-describe
affine function of [z; dN ], its dependence on the collection ξ~N of the parameters of the control

law is highly nonlinear, even for a time-invariant system (the matrices At , ..., Dt are independent
of t) and a control law as simple as time-invariant linear feedback: ut = Kyt . Indeed, due to the
dynamic nature of the system, in the expressions for states and controls powers of the matrix
K will be present. Highly nonlinear dependence of states and controls on K makes it impossible
to optimize efficiently w.r.t. the parameters of the control law and thus makes the synthesis
problem extremely difficult.
The situation, however, is far from being hopeless. We are about to demonstrate that one
can re-parameterize affine control law in such a way that with the new parameterization both
Analysis and Synthesis problems become tractable.

1.3.3.2 Purified outputs and purified-output-based control laws


Imagine that we “close” the open loop system with whatever (affine or non-affine) control law
(1.3.17) and in parallel with running the closed loop system run its model:

System:      x0 = z
             xt+1 = At xt + Bt ut + Rt dt ,
             yt = Ct xt + Dt dt
Model:       x̂0 = 0                                  (1.3.21)
             x̂t+1 = At x̂t + Bt ut ,
             ŷt = Ct x̂t
Controller:  ut = Ut (y0 , ..., yt )   (!)
Assuming that we know the matrices At , ..., Dt , we can run the model in an on-line fashion,
so that at instant t, when the control ut should be specified, we have at our disposal both
the actual outputs y0 , ..., yt and the model outputs ŷ0 , ..., ŷt , and thus have at our disposal the
purified outputs
vτ = yτ − ŷτ , 0 ≤ τ ≤ t.
Now let us ask ourselves what will happen if we, instead of building the controls ut on the basis
of actual outputs yτ , 0 ≤ τ ≤ t, pass to controls ut built on the basis of purified outputs vτ ,
0 ≤ τ ≤ t, i.e., replace the control law (!) with a control law of the form
ut = Vt (v0 , ..., vt ) (!!)
It is easily seen that nothing will happen:
For every control law {Ut (y0 , ..., yt )}∞_{t=0} of the form (!) there exists a control law
{Vt (v0 , ..., vt )}∞_{t=0} of the form (!!) (and vice versa, for every control law of the form
(!!) there exists a control law of the form (!)) such that the dependencies of actual
states, outputs and controls on the disturbances and the initial state for both control
laws in question are exactly the same. Moreover, the above “equivalence claim”
remains valid when we restrict controls (!), (!!) to be affine in their arguments.
The bottom line is that every behavior of the closed loop system which can be obtained with
affine non-anticipative control (1.3.18) based on actual outputs, can be also obtained with affine
non-anticipative control law
ut = ηt + H0t v0 + H1t v1 + ... + Htt vt , (1.3.22)

based on purified outputs (and vice versa).


Exercise 1.1 Justify the latter conclusion.11
We have said that as far as achievable behaviors of the closed loop system are concerned, we
lose (and gain) nothing when passing from affine output-based control laws (1.3.18) to affine
purified output-based control laws (1.3.22). At the same time, when passing to purified outputs,
we get a huge bonus:
(#) With control (1.3.22), the trajectory wN of the closed loop system turns out to
be bi-affine: it is affine in [z; dN ], the parameters ~η N = {ηt , Hτt , 0 ≤ τ ≤ t ≤ N } of
the control law being fixed, and is affine in the parameters of the control law ~η N ,
[z; dN ] being fixed, and this bi-affinity, as we shall see in a while, is the key to efficient
solvability of the synthesis problem.
The reason for bi-affinity is as follows (after this reason is explained, verification of bi-affinity
itself becomes immediate): the purified output vt is completely independent of the controls and is
a known in advance affine function of dt , z. Indeed, from (1.3.21) it follows that

vt = Ct (xt − x̂t ) + Dt dt = Ct δt + Dt dt , δt := xt − x̂t ,

and that the evolution of δt is given by

δ0 = z, δt+1 = At δt + Rt dt

and thus is completely independent of the controls, meaning that δt and vt indeed are known in
advance (provided the matrices At , ..., Dt are known in advance) affine functions of dt , z.
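This control-independence is easy to confirm numerically: below, with made-up matrices, the system/model pair (1.3.21) is run under two different control sequences, and the purified outputs vt = yt − ŷt come out identical.

```python
import numpy as np

rng = np.random.default_rng(3)
nx, nu, nd, ny, N = 3, 2, 2, 2, 6
A = 0.4 * rng.standard_normal((nx, nx))
B = rng.standard_normal((nx, nu))
R = rng.standard_normal((nx, nd))
C = rng.standard_normal((ny, nx))
D = rng.standard_normal((ny, nd))
z = rng.standard_normal(nx)
d = rng.standard_normal((N, nd))

def purified_outputs(controls):
    x, xhat, vs = z, np.zeros(nx), []
    for t in range(N):
        y, yhat = C @ x + D @ d[t], C @ xhat
        vs.append(y - yhat)                      # v_t = y_t - ŷ_t
        x = A @ x + B @ controls[t] + R @ d[t]   # system
        xhat = A @ xhat + B @ controls[t]        # model
    return np.array(vs)

v1 = purified_outputs(np.zeros((N, nu)))             # one control law...
v2 = purified_outputs(rng.standard_normal((N, nu)))  # ...and a very different one
```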
Exercise 1.2 Derive (#) from the observation we just have made.
Note that in contrast to the just outlined “control-independent” nature of purified outputs, the
actual outputs are heavily control-dependent (indeed, yt is affected by xt , and this state, in turn,
is affected by past controls u0 , ..., ut−1 ). This is why with the usual output-based affine control,
the states and the controls are highly nonlinear in the parameters of the control law – to build
ut , we multiply matrices Ξtτ (which are parameters of the control law) by outputs yτ which by
themselves already depend on the “past” parameters of the control law.

Tractability of the Synthesis problem. Assume that the normal range ZDN of [z; dN ] is
a nonempty and bounded set given by polyhedral representation (1.3.20), and let us prove that
in this case the design specifications (1.3.19) reduce to a system of explicit linear inequalities in
variables ~η N – the parameters of the purified-output-based affine control law used to close the
open-loop system – and appropriate slack variables. Thus, (parameters of) purified-output-based
affine control laws meeting the design specifications (1.3.19) form a polyhedrally representable,
and thus easy to work with, set.
The reasoning goes as follows. As stated by (#), the state-control trajectory wN associated
with affine purified-output-based control with parameters ~η N is bi-affine in dN , z and in ~η N , so
that it can be represented in the form

wN = w[~η N ] + W [~η N ] · [z; dN ],


11
For proof, see [15, Theorem 14.4.1].

where the vector-valued and the matrix-valued functions w[·], W [·] are affine and are readily
given by the matrices At , ..., Dt , 0 ≤ t ≤ N − 1. Plugging in the representation of wN into the
design specifications (1.3.19), we get a system of scalar constraints of the form

αiT (~η N )[z; dN ] ≤ βi (~η N ), 1 ≤ i ≤ I, (∗)

where the vector-valued functions αi (·) and the scalar functions βi (·) are affine and readily given
by the description of the open-loop system and by the data B, b in the design specifications.
What we want from ~η N is to ensure the validity of every one of the constraints (∗) for all [z; dN ]
from ZDN , or, which is the same in view of (1.3.20), we want the optimal values in the LPs
max_{[z;dN ],v} { αiT (~η N )[z; dN ] : P [z; dN ] + Qv ≤ r }

to be ≤ βi (~η N ) for 1 ≤ i ≤ I. Now, the LPs in question are feasible; passing to their duals,
what we want becomes exactly the relations

min_{si} { rT si : P T si = αi (~η N ), QT si = 0, si ≥ 0 } ≤ βi (~η N ), 1 ≤ i ≤ I.
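The duality step can be checked numerically on made-up data: by the LP Duality Theorem the optimal values of the primal maximization and of its dual coincide, so any dual feasible si with rT si ≤ βi certifies the desired bound. scipy's linprog is assumed as the solver.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
m, k = 3, 8
P = np.vstack([rng.standard_normal((k - 2 * m, m)),
               np.eye(m), -np.eye(m)])   # box rows keep the feasible set bounded
r = np.ones(k)
alpha = rng.standard_normal(m)

# primal: max { alpha^T ζ : P ζ <= r }  (linprog minimizes the negation)
primal = linprog(-alpha, A_ub=P, b_ub=r, bounds=[(None, None)] * m,
                 method="highs")
# dual: min { r^T s : P^T s = alpha, s >= 0 }
dual = linprog(r, A_eq=P.T, b_eq=alpha, bounds=[(0, None)] * k,
               method="highs")
gap = abs(-primal.fun - dual.fun)        # vanishes by the LP Duality Theorem
```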

The bottom line is that

A purified-output-based control law meets the design specifications (1.3.19) if and


only if the corresponding collection ~η N of parameters can be augmented by properly
chosen slack vector variables si to give a solution to the system of linear inequalities
in variables s1 , ..., sI , ~η N , specifically, the system

P T si = αi (~η N ), QT si = 0, rT si ≤ βi (~η N ), si ≥ 0, 1 ≤ i ≤ I. (1.3.23)

Remark 1.3.1 We can say that when passing from affine output-based control laws to
affine purified-output-based ones we are all the time dealing with the same entities (affine non-anticipative
control laws), but switch from one parameterization of these laws (the one by ξ~N -parameters)
to another parameterization (the one by ~η N -parameters). This re-parameterization
is nonlinear, so that in principle there is no surprise that what is difficult in one of them (the Syn-
thesis problem) is easy in another one. This being said, note that with our re-parameterization
we neither lose nor gain only as far as the entire family of linear controllers is concerned. Spe-
cific sub-families of controllers can be “simply-looking” in one of the parameterizations and be
extremely difficult to describe in the other one. For example, time-invariant linear feedback
ut = Kyt looks pretty simple (just a linear subspace) in the ξ~N -parameterization and forms a
highly nonlinear manifold in the ~η N -one. Similarly, the a.p.o.b. control ut = Kvt looks simple in
the ~η N -parameterization and is difficult to describe in the ξ~N -one. We can say that there is no
such thing as “the best” parameterization of affine control laws — everything depends on what
are our goals. For example, the ~η N parameterization is well suited for synthesis of general-type
affine controllers and becomes nearly useless when a linear feedback is sought.

Modifications. We have seen that if the “normal range” ZDN of [z; dN ] is given by polyhedral
representation (which we assume from now on), the parameters of “good” (i.e., meeting the de-
sign specifications (1.3.19) for all [z; dN ] ∈ ZDN ) affine purified-output-based (a.p.o.b.) control
laws admit an explicit polyhedral representation – these are exactly the collections ~η N which can
be extended by properly selected slacks s1 , ..., sI to a solution of the explicit system of linear
inequalities (1.3.23). As a result, various synthesis problems for a.p.o.b. controllers become just
explicit LPs. This is the case, e.g., for the problem of checking whether the design specifications
can be met (this is just an LP feasibility problem of checking solvability of (1.3.23)). If this is
the case, we can further minimize efficiently a “good” (i.e., convex and efficiently computable)
objective over the a.p.o.b. controllers meeting the design specifications; when the objective is
just linear, this will be just an LP program.
In fact, our approach allows for solving efficiently other meaningful problems of a.p.o.b.
controller design, e.g., as follows. Above, we have assumed that the range of disturbances
and initial states is known, and we want to achieve “good behavior” of the closed loop sys-
tem in this range; what happens when the disturbances and the initial states go beyond their
normal range, is not our responsibility. In some cases, it makes sense to take care of the
disturbances/initial states beyond their normal range, specifically, to require the validity of de-
sign specifications (1.3.19) in the normal range of disturbances/initial states, and to allow for
controlled deterioration of these specifications when the disturbances/initial states run beyond
their normal range. This can be modeled as follows: Let us fix a norm k · k on the linear space
LN = {ζ = [z; dN ] ∈ Rnx × Rnd × ... × Rnd } where the initial states and the disturbances live,
and let
dist([z; dN ], ZDN ) = min k[z; dN ] − ζk
ζ∈ZDN

be the deviation of [z; dN ] from the normal range of this vector. A natural extension of our
previous design specifications is to require the validity of the relations

BwN (dN , z) ≤ b + dist([z; dN ], ZDN )α, (1.3.24)

for all [z; dN ] ∈ LN ; here, as above, wN (dN , z) is the state-control trajectory of the closed loop
system on time horizon N treated as a function of [z; dN ], and α = [α1 ; ...; αI ] is a nonnegative
vector of sensitivities, which for the time being we consider as given to us in advance. In other
words, we still want the validity of (1.3.19) in the normal range of disturbances and initial
states, and on top of it, impose additional restrictions on the global behaviour of the closed
loop system, specifically, want the violations of the design specifications for [z; dN ] outside of its
normal range to be bounded by given multiples of the distance from [z; dN ] to this range.
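To make the quantity dist([z; dN ], ZDN ) concrete: when, for illustration, the normal range is the box {ζ : |ζk | ≤ 1} and k · k is the k · k∞ -norm, the closest point of the range is obtained by clipping each coordinate, so the deviation is available in closed form (for a general polyhedral ZDN one would solve a small convex program instead). The box and the norm here are made-up choices.

```python
import numpy as np

def dist_to_box(zeta, lo=-1.0, hi=1.0):
    closest = np.clip(zeta, lo, hi)          # coordinate-wise projection onto the box
    return np.max(np.abs(zeta - closest))    # ||zeta - closest||_inf

inside = np.array([0.5, -0.3, 0.9])          # already in the normal range
outside = np.array([2.0, 0.0, -1.5])         # violates the range in two coordinates
d_in, d_out = dist_to_box(inside), dist_to_box(outside)
```

Here d_in is zero and d_out equals 1.0, the largest coordinate-wise violation; in (1.3.24) this quantity scales the allowed deterioration of each design specification.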
As before, we assume that the normal range ZDN of [z; dN ] is a nonempty and bounded set
given by polyhedral representation (1.3.20). We are about to demonstrate that in this case the
specifications (1.3.24) still are tractable (and even reduce to LP when k · k is a simple enough
norm). Indeed, let us look at i-th constraint in (1.3.24). Due to bi-affinity of wN in [z; dN ] and
in the parameters ~η N of the a.p.o.b. control law in question, this constraint can be written as

αiT (~η N )[z; dN ] ≤ βi (~η N ) + αi dist([z; dN ], ZDN ) ∀[z; dN ] ∈ LN . (1.3.25)

where αi (·), βi (·) are affine functions. We claim that

(&) Relation (1.3.25) is equivalent to the following two relations:

(a) αiT (~η N )[z; dN ] ≤ βi (~η N ) ∀[z; dN ] ∈ ZDN ;

(b) fi (~η N ) := max_{[z;dN ]} { αiT (~η N )[z; dN ] : k[z; dN ]k ≤ 1 } ≤ αi .

Postponing for the moment the verification of this claim, let us look at the consequences. We
already know that (a) “admits polyhedral representation:” we can point out a system Si of
linear inequalities on ~η N and slack vector si such that (a) takes place if and only if ~η N can
be extended, by a properly chosen si , to a solution of Si . Further, the function fi (·) clearly is
convex (since αi (·) is affine) and efficiently computable, provided that the norm k · k is efficiently
computable (indeed, in this case computing fi at a point reduces to maximizing a linear form
over a convex set). Thus, not only (a), but (b) as well is a tractable constraint on ~η N .

In particular, when k · k is a polyhedral norm, meaning that the set {[z; dN ] :


k[z; dN ]k ≤ 1} admits polyhedral representation:

{[z; dN ] : k[z; dN ]k ≤ 1} = {[z; dN ] : ∃y : U · [z; dN ] + Y y ≤ q},

we have

fi (~η N ) = max_{[z;dN ],y} { αiT (~η N )[z; dN ] : U · [z; dN ] + Y y ≤ q }
          = min_{ti} { q T ti : U T ti = αi (~η N ), Y T ti = 0, ti ≥ 0 }   [by LP Duality Theorem],

We arrive at an explicit “polyhedral reformulation” of (b): ~η N satisfies (b) if and


only if ~η N can be extended, by a properly chosen slack vector ti , to a solution of an
explicit system of linear inequalities in variables ~η N , ti , specifically, the system

q T ti ≤ αi , U T ti = αi (~η N ), Y T ti = 0, ti ≥ 0 (Ti )

Thus, for a polyhedral norm k · k, (a), (b) can be expressed by a system of linear
inequalities in the parameters ~η N of the a.p.o.b. control law we are interested in and
in slack variables si , ti .

Remark 1.3.2 For the time being, we treated sensitivities αi as given parameters. Our analy-
sis, however, shows that in “tractable reformulation” of (1.3.25) αi are just the right hand sides
of some inequality constraints, meaning that the nice structure of these constraints (convex-
ity/linearity) is preserved when treating αi as variables (restricted to be ≥ 0) rather than given
parameters. As a result, we can solve efficiently the problem of synthesizing an a.p.o.b. controller
which ensures (1.3.25) and minimizes one of αi ’s (or some weighted sum of αi ’s) under addi-
tional, say, linear, constraints on αi ’s, and there are situations when such a goal makes perfect
sense.

Clearing debts. It remains to prove (&), which is easy. Leaving the rest of the justification of
(&) to the reader, let us focus on the only not completely evident part of the claim, namely, that
(1.3.25) implies (b). The reasoning is as follows: take a vector [z̄; d¯N ] ∈ LN with k[z̄; d¯N ]k ≤ 1

and a vector [ẑ; d̂N ] ∈ ZDN , and let t > 0. (1.3.25) should hold for [z; dN ] = [ẑ; d̂N ] + t[z̄; d¯N ],
meaning that

αiT (~η N ) ([ẑ; d̂N ] + t[z̄; d¯N ]) ≤ βi (~η N ) + αi dist([z; dN ], ZDN ) ≤ βi (~η N ) + αi t,

where the concluding inequality uses dist([z; dN ], ZDN ) ≤ k[z; dN ] − [ẑ; d̂N ]k = tk[z̄; d¯N ]k ≤ t.

Dividing both sides by t and passing to the limit as t → ∞, we get

αiT (~η N )[z̄; d¯N ] ≤ αi ,

and this is so for every [z̄; d¯N ] of k · k-norm not exceeding 1; this is exactly (b).

1.4 From Linear to Conic Programming


Linear Programming models cover numerous applications. Whenever applicable, LP allows to
obtain useful quantitative and qualitative information on the problem at hand. The specific
analytic structure of LP programs gives rise to a number of general results (e.g., those of the LP
Duality Theory) which provide us in many cases with valuable insight and understanding. At
the same time, this analytic structure underlies some specific computational techniques for LP;
these techniques, which by now are perfectly well developed, allow to solve routinely quite large
(tens/hundreds of thousands of variables and constraints) LP programs. Nevertheless, there
are situations in reality which cannot be covered by LP models. To handle these “essentially
nonlinear” cases, one needs to extend the basic theoretical results and computational techniques
known for LP beyond the bounds of Linear Programming.
For the time being, the widest class of optimization problems to which the basic results of
LP were extended, is the class of convex optimization programs. There are several equivalent
ways to define a general convex optimization problem; the one we are about to use is not the
traditional one, but it is well suited to encompass the range of applications we intend to cover
in our course.
When passing from a generic LP problem
 
min_x { cT x : Ax ≥ b }   [A : m × n]   (LP)

to its nonlinear extensions, we should expect to encounter some nonlinear components in the
problem. The traditional way here is to say: “Well, in (LP) there are a linear objective function
f (x) = cT x and inequality constraints fi (x) ≥ bi with linear functions fi (x) = aTi x, i = 1, ..., m.
Let us allow some/all of these functions f, f1 , ..., fm to be nonlinear.” In contrast to this tra-
ditional way, we intend to keep the objective and the constraints linear, but introduce “nonlin-
earity” in the inequality sign ≥.

1.4.1 Orderings of Rm and cones


The constraint inequality Ax ≥ b in (LP) is an inequality between vectors; as such, it requires a
definition, and the definition is well-known: given two vectors a, b ∈ Rm , we write a ≥ b, if the
coordinates of a majorate the corresponding coordinates of b:

a ≥ b ⇔ {ai ≥ bi , i = 1, ..., m}. (” ≥ ”)



In the latter relation, we again meet with the inequality sign ≥, but now it stands for the
“arithmetic ≥” – a well-known relation between real numbers. The above “coordinate-wise”
partial ordering of vectors in Rm satisfies a number of basic properties of the standard ordering
of reals; namely, for all vectors a, b, c, d, ... ∈ Rm one has

1. Reflexivity: a ≥ a;

2. Anti-symmetry: if both a ≥ b and b ≥ a, then a = b;

3. Transitivity: if both a ≥ b and b ≥ c, then a ≥ c;

4. Compatibility with linear operations:

(a) Homogeneity: if a ≥ b and λ is a nonnegative real, then λa ≥ λb


(”One can multiply both sides of an inequality by a nonnegative real”)
(b) Additivity: if both a ≥ b and c ≥ d, then a + c ≥ b + d
(”One can add two inequalities of the same sign”).

It turns out that

• A significant part of the nice features of LP programs comes from the fact that the vector
inequality ≥ in the constraint of (LP) satisfies the properties 1. – 4.;

• The standard inequality ” ≥ ” is neither the only possible, nor the only interesting way to
define the notion of a vector inequality fitting the axioms 1. – 4.

As a result,

A generic optimization problem which looks exactly the same as (LP), up to the
fact that the inequality ≥ in (LP) is now replaced with an ordering which differs
from the component-wise one, inherits a significant part of the properties of LP
problems. Specifying properly the ordering of vectors, one can obtain from (LP)
generic optimization problems covering many important applications which cannot
be treated by the standard LP.

To the moment, what was said is just a declaration. Let us look at how this declaration comes to
life.
We start with clarifying the “geometry” of a “vector inequality” satisfying the axioms 1. –
4. Thus, we consider vectors from a finite-dimensional Euclidean space E with an inner product
h·, ·i and assume that E is equipped with a partial ordering (called also vector inequality), let it
be denoted by ≽: in other words, we say what are the pairs of vectors a, b from E linked by the
inequality a ≽ b. We call the ordering “good” if it obeys the axioms 1. – 4., and are interested
to understand what these good orderings are.
Our first observation is:

A. A good vector inequality ≽ is completely identified by the set K of ≽-nonnegative
vectors:

K = {a ∈ E : a ≽ 0}.

Namely,

a ≽ b ⇔ a − b ≽ 0 [⇔ a − b ∈ K].

Indeed, let a ≽ b. By 1. we have −b ≽ −b, and by 4.(b) we may add the latter
inequality to the former one to get a − b ≽ 0. Vice versa, if a − b ≽ 0, then, adding
to this inequality the one b ≽ b, we get a ≽ b.

The set K in Observation A cannot be arbitrary. It is easy to verify that it must be a pointed
cone, i.e., it must satisfy the following conditions:

1. K is nonempty and closed under addition:

a, a′ ∈ K ⇒ a + a′ ∈ K;

2. K is a conic set:
a ∈ K, λ ≥ 0 ⇒ λa ∈ K.

3. K is pointed:
a ∈ K and − a ∈ K ⇒ a = 0.

Geometrically: K does not contain straight lines passing through the origin.

Definition 1.4.1 [a cone] From now on, we refer to a subset of Rn which is nonempty, conic
and closed under addition as to a cone in Rn ; equivalently, a cone in Rn is a nonempty convex
and conic subset of Rn .

Note that with our terminology, convexity is built into the definition of a cone; in this book
the words “convex cone” which we use from time to time mean exactly the same as the word
“cone.”

Exercise 1.3 Prove that the outlined properties of K are necessary and sufficient for the vector
inequality a ≽ b ⇔ a − b ∈ K to be good.

Thus, every pointed cone K in E induces a partial ordering on E which satisfies the axioms
1. – 4. We denote this ordering by ≥K :

a ≥K b ⇔ a − b ≥K 0 ⇔ a − b ∈ K.

What is the cone responsible for the standard coordinate-wise ordering ≥ on E = Rm we have
started with? The answer is clear: this is the cone comprised of vectors with nonnegative entries
– the nonnegative orthant

Rm+ = {x = (x1 , ..., xm )T ∈ Rm : xi ≥ 0, i = 1, ..., m}.

(Thus, in order to express the fact that a vector a is greater than or equal to, in the component-
wise sense, to a vector b, we were supposed to write a ≥Rm +
b. However, we are not going to be
that formal and shall use the standard shorthand notation a ≥ b.)
The nonnegative orthant R^m_+ is not just a pointed cone; it possesses two useful additional
properties:
I. The cone is closed: if a sequence of vectors ai from the cone has a limit, the latter also
belongs to the cone.
II. The cone possesses a nonempty interior: there exists a vector such that a ball of positive
radius centered at the vector is contained in the cone.
52 LECTURE 1. FROM LINEAR TO CONIC PROGRAMMING

These additional properties are very important. For example, I is responsible for the possi-
bility to pass to the term-wise limit in an inequality:

ai ≥ bi ∀i, ai → a, bi → b as i → ∞ ⇒ a ≥ b.

It makes sense to restrict ourselves to good partial orderings coming from cones K sharing
the properties I, II. Thus,

From now on, speaking about vector inequalities ≥K , we always assume that the
underlying set K is a pointed and closed cone with a nonempty interior.

Note that the closedness of K makes it possible to pass to limits in ≥K -inequalities:

ai ≥K bi , ai → a, bi → b as i → ∞ ⇒ a ≥K b.

The nonemptiness of the interior of K allows us to define, along with the "non-strict" inequality
a ≥K b, also the strict inequality according to the rule

a >K b ⇔ a − b ∈ int K,

where int K is the interior of the cone K. E.g., the strict coordinate-wise inequality
a >_{R^m_+} b (shorthand: a > b) simply says that the coordinates of a are strictly greater,
in the usual arithmetic sense, than the corresponding coordinates of b.

Examples. The partial orderings we are especially interested in are given by the following
cones:

• The nonnegative orthant R^m_+ in R^m;

• The Lorentz (or second-order, or, less scientifically, ice-cream) cone

      L^m = {x = (x1, ..., x_{m-1}, xm)^T ∈ R^m : xm ≥ sqrt(x1^2 + ... + x_{m-1}^2)};

• The semidefinite cone S^m_+. This cone "lives" in the space E = S^m of m × m symmetric
  matrices (equipped with the Frobenius inner product ⟨A, B⟩ = Tr(AB) = Σ_{i,j} Aij Bij) and
  consists of all m × m matrices A which are positive semidefinite, i.e.,

      A = A^T;  x^T A x ≥ 0  ∀x ∈ R^m.
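These definitions translate directly into membership tests. Below is a minimal sketch in plain
Python (the helper names are ours, not the book's) for the nonnegative orthant and the Lorentz
cone; a test for S^m_+ would additionally need an eigenvalue or Cholesky routine, so it is
omitted here.

```python
import math

def in_orthant(x, tol=1e-9):
    """x lies in R^m_+ iff every coordinate is nonnegative (up to tolerance)."""
    return all(xi >= -tol for xi in x)

def in_lorentz(x, tol=1e-9):
    """x lies in L^m iff the last coordinate dominates the Euclidean norm
    of the remaining ones: x_m >= sqrt(x_1^2 + ... + x_{m-1}^2)."""
    return x[-1] >= math.sqrt(sum(xi * xi for xi in x[:-1])) - tol

# (1, 1, 2) lies in L^3 since 2 >= sqrt(2); (1, 1, 1) does not.
print(in_lorentz([1.0, 1.0, 2.0]), in_lorentz([1.0, 1.0, 1.0]))  # True False
```

Note that both tests use a small tolerance: the cones are closed, and a numerical membership
check on boundary points (e.g., the origin) should not fail due to rounding.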

1.4.2 "Conic programming" – what is it?

Let K be a regular cone in E, regularity meaning that the cone is convex, pointed, closed and
with a nonempty interior. Given an objective c ∈ R^n, a linear mapping x ↦ Ax : R^n → E and
a right hand side b ∈ E, consider the optimization problem

      min_x { c^T x : Ax ≥K b }.                                                      (CP)

We shall refer to (CP) as the conic problem associated with the cone K, and to the constraint

      Ax ≥K b

as a linear vector inequality, or conic inequality, or conic constraint, in variables x.

Note that the only difference between this program and an LP problem is that the latter
deals with the particular choice E = R^m, K = R^m_+. With the formulation (CP), we get the
possibility to cover a much wider spectrum of applications which cannot be captured by LP; we
shall look at numerous examples in the sequel.

1.4.3 Conic Duality


Aside from algorithmic issues, the most important theoretical result in Linear Programming is
the LP Duality Theorem; can this theorem be extended to conic problems? What is the extension?
The source of the LP Duality Theorem was the desire to get in a systematic way a lower
bound on the optimal value c* in an LP program

      c* = min_x { c^T x : Ax ≥ b }.                                                  (LP)

The bound was obtained by looking at the inequalities of the type

      ⟨λ, Ax⟩ ≡ λ^T Ax ≥ λ^T b                                                        (Cons(λ))

with weight vectors λ ≥ 0. By its origin, an inequality of this type is a consequence of the
system of constraints Ax ≥ b of (LP), i.e., it is satisfied at every solution to the system.
Consequently, whenever we are lucky enough to get, as the left hand side of (Cons(λ)), the
expression c^T x, i.e., whenever a nonnegative weight vector λ satisfies the relation

      A^T λ = c,

the inequality (Cons(λ)) yields a lower bound b^T λ on the optimal value in (LP). And the dual
problem

      max { b^T λ : λ ≥ 0, A^T λ = c }

was nothing but the problem of finding the best lower bound one can get in this fashion.
The same scheme can be used to develop the dual to a conic problem

      min { c^T x : Ax ≥K b },  K ⊂ E.                                                (CP)

Here the only step which needs clarification is the following one:

(?) What are the "admissible" weight vectors λ, i.e., the vectors such that the scalar
inequality
      ⟨λ, Ax⟩ ≥ ⟨λ, b⟩
is a consequence of the vector inequality Ax ≥K b?

In the particular case of coordinate-wise partial ordering, i.e., in the case of E = R^m,
K = R^m_+, the admissible vectors were those with nonnegative coordinates. These vectors,
however, are not necessarily admissible for an ordering ≥K when K is different from the
nonnegative orthant:

Example 1.4.1 Consider the ordering ≥_{L^3} on E = R^3 given by the 3-dimensional ice-cream
cone:
      (a1, a2, a3)^T ≥_{L^3} (0, 0, 0)^T  ⇔  a3 ≥ sqrt(a1^2 + a2^2).

The inequality
      (−1, −1, 2)^T ≥_{L^3} (0, 0, 0)^T

is valid; however, aggregating this inequality with the aid of the positive weight vector
λ = (1, 1, 0.1)^T, we get the false inequality

      −1.8 ≥ 0.

Thus, not every nonnegative weight vector is admissible for the partial ordering ≥_{L^3}.
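The arithmetic of this example is easy to reproduce; the sketch below (helper names are ours)
also hints at the reason for the failure: the weight vector λ, although nonnegative, does not
itself belong to the ice-cream cone, and for L^3 that is exactly what admissibility requires
(the ice-cream cone is self-dual, a fact used later in Example 1.4.3).

```python
import math

def in_L3(a):
    # membership in the 3D ice-cream cone: a3 >= sqrt(a1^2 + a2^2)
    return a[2] >= math.hypot(a[0], a[1])

a = [-1.0, -1.0, 2.0]     # a >=_{L3} 0 indeed holds: 2 >= sqrt(2)
lam = [1.0, 1.0, 0.1]     # a nonnegative, but inadmissible, weight vector

agg = sum(li * ai for li, ai in zip(lam, a))  # left hand side of the aggregated inequality
print(agg)                # -1.8: the aggregated scalar inequality "agg >= 0" is false

# the reason: lam is not in (L3)* = L3, since 0.1 < sqrt(1^2 + 1^2)
print(in_L3(a), in_L3(lam))   # True False
```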
To answer the question (?) is the same as to say what are the weight vectors λ such that

      ∀a ≥K 0 : ⟨λ, a⟩ ≥ 0.                                                           (1.4.1)

Whenever λ possesses the property (1.4.1), the scalar inequality

      ⟨λ, a⟩ ≥ ⟨λ, b⟩

is a consequence of the vector inequality a ≥K b:

      a ≥K b
      ⇔ a − b ≥K 0        [additivity of ≥K]
      ⇒ ⟨λ, a − b⟩ ≥ 0    [by (1.4.1)]
      ⇔ ⟨λ, a⟩ ≥ ⟨λ, b⟩.  □

Vice versa, if λ is an admissible weight vector for the partial ordering ≥K:

      ∀(a, b : a ≥K b) : ⟨λ, a⟩ ≥ ⟨λ, b⟩,

then, of course, λ satisfies (1.4.1).


Thus, the weight vectors λ which are admissible for a partial ordering ≥K are exactly the
vectors satisfying (1.4.1), or, which is the same, the vectors from the set

      K* = {λ ∈ E : ⟨λ, a⟩ ≥ 0 ∀a ∈ K}.

The set K* is comprised of vectors whose inner products with all vectors from K are nonnegative.
K* is called the cone dual to K. The name is legitimate due to the following fact (see Section
B.2.7.B):
Theorem 1.4.1 [Properties of the dual cone] Let E be a finite-dimensional Euclidean space
with inner product ⟨·, ·⟩ and let K ⊂ E be a nonempty set. Then
(i) The set
      K* = {λ ∈ E : ⟨λ, a⟩ ≥ 0 ∀a ∈ K}
is a closed cone.

(ii) If int K ≠ ∅, then K* is pointed.
(iii) If K is a closed pointed cone, then int K* ≠ ∅.
(iv) If K is a closed cone, then so is K*, and the cone dual to K* is K itself:

      (K*)* = K.

An immediate corollary of the Theorem is as follows:

Corollary 1.4.1 A closed cone K ⊂ E is regular (i.e., in addition to being a closed cone, is
pointed and with a nonempty interior) if and only if the set K∗ is so.

From the dual cone to the problem dual to (CP). Now we are ready to derive the dual
problem of a conic problem (CP). In fact, it makes sense to operate with the problem in a
slightly more flexible form than (CP), namely, the problem

      Opt(P) = min_{x∈R^n} { c^T x : Ax − b ≥K 0, Rx = r }                            (P)

where K is a regular cone in Euclidean space E.


As in the case of Linear Programming, we start with the observation that whenever x is
a feasible solution to (P), λ is an admissible weight vector, i.e., λ ∈ K*, and µ is an
arbitrary vector of the same dimension as r (call this dimension d), then x satisfies the
scalar inequality

      [A*λ + R^T µ]^T x ≥ ⟨b, λ⟩ + r^T µ  12)

– this observation is an immediate consequence of the definition of K*. It follows that
whenever λ ∈ K* and µ ∈ R^d satisfy the relation

      A*λ + R^T µ = c,

one has

      c^T x = [A*λ + R^T µ]^T x = ⟨λ, Ax⟩ + µ^T Rx ≥ ⟨b, λ⟩ + r^T µ

for all x feasible for (P), so that the quantity ⟨b, λ⟩ + r^T µ is a lower bound on the optimal
value Opt(P) of (P). The best bound one can get in this fashion is the optimal value of the
problem

      Opt(D) = max_{λ,µ} { ⟨b, λ⟩ + r^T µ : A*λ + R^T µ = c, λ ≥_{K*} 0 }             (D)

and this program is called the program dual to (P).


12) For a linear operator x ↦ Ax : R^n → E, A* is the conjugate operator given by the identity

      ⟨y, Ax⟩ = x^T (A*y)  ∀(y ∈ E, x ∈ R^n).

When representing the operators by their matrices in orthonormal bases in the argument and the
range spaces, the matrix representing the conjugate operator is exactly the transpose of the
matrix representing the operator itself.

Slight modification. "In real life" the cone K in (P) usually is a direct product of m
regular cones:
      K = K1 × ... × Km,

or, in other words, instead of one conic constraint Ax − b ≥K 0 we operate with a system of m
conic constraints Ai x − bi ≥_{Ki} 0, i ≤ m, so that (P) reads

      Opt(P) = min_x { c^T x : Ai x − bi ≥_{Ki} 0, i ≤ m, Rx = r }.                   (P)

Taking into account that the cone dual to a direct product of several regular cones clearly is
the direct product of the cones dual to the factors, the above recipe for building the dual as
applied to the latter problem reads as follows:

• We equip the conic constraints Ai x − bi ≥_{Ki} 0 with weights (a.k.a. Lagrange multipliers)
  λi restricted to reside in the cones Ki* dual to the Ki, and the linear equality constraint
  Rx = r – with a Lagrange multiplier µ residing in R^d, where d is the dimension of r;

• We multiply the constraints by the Lagrange multipliers and sum the results up, thus
  arriving at the aggregated scalar linear inequality

      [Σ_i Ai* λi + R^T µ]^T x ≥ Σ_i ⟨bi, λi⟩ + r^T µ,

  which by its origin is a consequence of the system of constraints of (P) – it is satisfied
  at every feasible solution x to the problem. In particular, when the left hand side of the
  aggregated inequality is, identically in x, equal to c^T x, its right hand side is a lower
  bound on Opt(P).

The problem dual to (P) is the problem

      Opt(D) = max_{{λi},µ} { Σ_i ⟨bi, λi⟩ + r^T µ : Σ_i Ai* λi + R^T µ = c, λi ∈ Ki*, i ≤ m }

of finding the best lower bound on Opt(P) allowed by this bounding mechanism.
So far, what we know about the duality we have just introduced is the following
Proposition 1.4.1 [Weak Conic Duality Theorem] The optimal value Opt(D) of (D) is a lower
bound on the optimal value Opt(P ) of (P ).
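The Weak Conic Duality Theorem can be illustrated on a toy instance (the data below are our
own, not the book's): take E = R^3, K = L^3, A = I, b = (0, 0, 1)^T, c = (0, 0, 1)^T, and no
equality constraints. The weight vector λ = (0, 0, 1)^T is admissible – it satisfies (1.4.1)
for K = L^3, since ⟨λ, a⟩ = a3 ≥ 0 for every a ∈ L^3 – and A*λ = λ = c, so ⟨b, λ⟩ = 1 must
lower-bound c^T x over the primal feasible set. Here this is evident, since the constraint
x − b ∈ L^3 forces x3 ≥ 1 + sqrt(x1^2 + x2^2):

```python
import math, random

b = [0.0, 0.0, 1.0]
c = [0.0, 0.0, 1.0]
lam = [0.0, 0.0, 1.0]   # admissible weight: <lam, a> = a3 >= 0 on L3, and A*lam = lam = c

dual_bound = sum(bi * li for bi, li in zip(b, lam))   # <b, lam> = 1

def primal_feasible(x):
    # x - b in L3  <=>  x3 - 1 >= sqrt(x1^2 + x2^2)
    return x[2] - 1.0 >= math.hypot(x[0], x[1]) - 1e-9

random.seed(0)
for _ in range(1000):
    x1, x2 = random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0)
    x = [x1, x2, 1.0 + math.hypot(x1, x2) + random.uniform(0.0, 5.0)]  # feasible by construction
    assert primal_feasible(x)
    assert sum(ci * xi for ci, xi in zip(c, x)) >= dual_bound - 1e-9   # c^T x >= <b, lam>

print("c^T x >= <b, lam> =", dual_bound, "on all sampled feasible points")
```

In this particular instance the bound is even tight: x = (0, 0, 1)^T is feasible with
c^T x = 1, so Opt(P) = Opt(D) = 1.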

1.4.4 Geometry of the primal and the dual problems


We are about to understand the extremely nice geometry of the pair comprised of the conic
problem (P) and its dual (D):

      Opt(P) = min_x { c^T x : Ax − b ∈ K, Rx = r }                                   (P)
      Opt(D) = max_{λ,µ} { ⟨b, λ⟩ + r^T µ : λ ∈ K*, A*λ + R^T µ = c }                 (D)

Let us make the following

Assumption: The systems of linear equality constraints in (P) and (D) are solvable:

      ∃ x̄, (λ̄, µ̄) : R x̄ = r, A*λ̄ + R^T µ̄ = c,

which acts everywhere in this Section.



A. Let us pass in (P) from the variable x to the primal slack η = Ax − b. Whenever x satisfies
Rx = r, we have

      c^T x = [A*λ̄ + R^T µ̄]^T x = ⟨λ̄, Ax⟩ + µ̄^T Rx = ⟨λ̄, Ax − b⟩ + [⟨b, λ̄⟩ + r^T µ̄].

We see that (P) is equivalent to the conic problem

      Opt(𝒫) = min_η { ⟨λ̄, η⟩ : η ∈ [L − η̄] ∩ K },  L = {Ax : Rx = 0},  η̄ = b − A x̄   (𝒫)
      [ Opt(𝒫) = Opt(P) − [⟨b, λ̄⟩ + r^T µ̄] ]

Indeed, (P) wants of η := Ax − b (a) to belong to K, and (b) to be representable as Ax − b for
some x satisfying Rx = r. (b) says that η should belong to the primal affine plane
{Ax − b : Rx = r}, which is the shift of the parallel linear subspace L = {Ax : Rx = 0} by a
(whatever) vector from the primal affine plane, e.g., the vector −η̄ = A x̄ − b.

B. Let us pass in (D) from the variables (λ, µ) to the variable λ. Whenever (λ, µ) satisfies
A*λ + R^T µ = c, we have

      ⟨b, λ⟩ + r^T µ = ⟨b, λ⟩ + x̄^T R^T µ = ⟨b, λ⟩ + x̄^T [c − A*λ]
                     = ⟨b − A x̄, λ⟩ + c^T x̄ = ⟨η̄, λ⟩ + c^T x̄,

and we see that (D) is equivalent to the conic problem

      Opt(𝒟) = max_λ { ⟨η̄, λ⟩ : λ ∈ [L⊥ + λ̄] ∩ K* }                                  (𝒟)
      [ Opt(𝒟) = Opt(D) − c^T x̄ ]

where L⊥ is the orthogonal complement to L in E:

      L⊥ = {ζ ∈ E : ⟨ζ, η⟩ = 0 ∀η ∈ L}.

Indeed, (D) wants of λ (a) to belong to K*, and (b) to satisfy A*λ = c − R^T µ for some µ. (b)
says that λ should belong to the dual affine plane {λ : ∃µ : A*λ + R^T µ = c}, which is the
shift of the parallel linear subspace L̃ = {λ : ∃µ : A*λ + R^T µ = 0} by a (whatever) vector
from the dual affine plane, e.g., the vector λ̄.
It remains to note that Elementary Linear Algebra says that L̃ = L⊥, or, which is the same,
[L̃]⊥ = L. Indeed,

      [L̃]⊥ = {ζ : ⟨ζ, λ⟩ = 0 ∀λ : ∃µ : A*λ + R^T µ = 0}
           = {ζ : ⟨ζ, λ⟩ + 0^T µ = 0 whenever A*λ + R^T µ = 0}
           = {ζ : ∃x : ⟨ζ, λ⟩ + 0^T µ − [A*λ + R^T µ]^T x ≡ 0 ∀(λ, µ)}
             [note: ⟨ζ, λ⟩ − [A*λ]^T x − [R^T µ]^T x ≡ ⟨ζ − Ax, λ⟩ − [Rx]^T µ]
           = {ζ : ∃x : ζ = Ax & Rx = 0} = L.

The bottom line is that the problems (P), (D) are equivalent, respectively, to the problems

      Opt(𝒫) = min_η { ⟨λ̄, η⟩ : η ∈ [L − η̄] ∩ K }                                    (𝒫)
      Opt(𝒟) = max_λ { ⟨η̄, λ⟩ : λ ∈ [L⊥ + λ̄] ∩ K* }                                  (𝒟)
      [ L = {Ax : Rx = 0},  R x̄ = r,  η̄ = b − A x̄,  A*λ̄ + R^T µ̄ = c ]

Note that when x is feasible for (P) and (λ, µ) is feasible for (D), the vectors η = Ax − b and
λ are feasible for (𝒫), resp. (𝒟), and every pair η, λ of feasible solutions to (𝒫), (𝒟) can
be obtained in the fashion just described from feasible solutions x to (P) and (λ, µ) to (D).
Besides this, we have a nice expression for the duality gap – the value of the objective of (P)
at x minus the value of the objective of (D) as evaluated at (λ, µ):

      DualityGap(x; (λ, µ)) := c^T x − [⟨b, λ⟩ + r^T µ]
                             = [A*λ + R^T µ]^T x − ⟨b, λ⟩ − r^T µ = ⟨Ax − b, λ⟩ = ⟨η, λ⟩.
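This chain of equalities is a pure linear-algebra identity and can be verified mechanically on
random data (a sketch with our own toy dimensions): choose A, R, x, λ, µ at random, then define
r := Rx and c := A*λ + R^T µ so that the equality constraints of (P) and (D) hold by
construction, and compare the duality gap with ⟨η, λ⟩:

```python
import random

random.seed(1)
m, k, n = 4, 2, 3       # dim E = R^m, k equality constraints, n variables

A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(m)]
R = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(k)]
b = [random.uniform(-1, 1) for _ in range(m)]
x = [random.uniform(-1, 1) for _ in range(n)]
lam = [random.uniform(-1, 1) for _ in range(m)]
mu = [random.uniform(-1, 1) for _ in range(k)]

dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
matvec = lambda M, v: [dot(row, v) for row in M]

r = matvec(R, x)                                   # forces Rx = r
c = [sum(A[i][j] * lam[i] for i in range(m)) +     # forces A*lam + R^T mu = c
     sum(R[i][j] * mu[i] for i in range(k)) for j in range(n)]

eta = [ai - bi for ai, bi in zip(matvec(A, x), b)] # primal slack eta = Ax - b
gap = dot(c, x) - (dot(b, lam) + dot(r, mu))       # c^T x - [<b,lam> + r^T mu]
assert abs(gap - dot(eta, lam)) < 1e-10            # ... equals <eta, lam>
```

Note that no cone membership is involved here: the identity holds for arbitrary x, (λ, µ)
satisfying only the linear equality constraints, which is exactly why the duality gap of a
primal-dual feasible pair reduces to ⟨η, λ⟩.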
Geometrically, (P), (D) are as follows: the "geometric data" of the problems are the pair of
linear subspaces L, L⊥ in the space E where K, K* live, the subspaces being orthogonal
complements of each other, and the pair of vectors η̄, λ̄ in this space.

• (P) is equivalent to minimizing f(η) = ⟨λ̄, η⟩ over the intersection of K and the primal
  feasible plane MP, which is the shift of L by −η̄;

• (D) is equivalent to maximizing g(λ) = ⟨η̄, λ⟩ over the intersection of K* and the dual
  feasible plane MD, which is the shift of L⊥ by λ̄;

• taken together, (P) and (D) form the problem of minimizing the duality gap over feasible
  solutions to the problems, which is exactly the problem of finding a pair of vectors in
  MP ∩ K and MD ∩ K* as close to orthogonality as possible.

Pay attention to the ideal geometrical primal-dual symmetry we observe.

Figure 1.1. Primal-dual pair of conic problems on the 3D Lorentz cone.
Red: feasible set of (P); blue: feasible set of (D).

1.4.5 Conic Duality Theorem


Definition 1.4.2 A conic problem of optimizing a linear objective under the constraints

      Ax − b ∈ K,  Rx = r

is called strictly feasible if there exists a feasible solution x̄ which strictly satisfies the
conic constraint:
      ∃ x̄ : R x̄ = r & A x̄ − b ∈ int K.

Assuming that the conic constraint is split into "general" and "polyhedral" parts, so that the
feasible set is given by
      Ax − b ∈ K,  Px − p ≥ 0,  Rx = r,

the problem is called essentially strictly feasible if there exists a feasible solution x̄ which
strictly satisfies the "general" conic constraint:

      ∃ x̄ : R x̄ = r,  P x̄ − p ≥ 0,  A x̄ − b ∈ int K.

From now on we make the following


Convention: In the above definition, the constraints of the problem have a conic part
Ax − b ≥K 0 and a linear equality part Rx = r (or a conic part Ax − b ≥K 0, a polyhedral
part Px − p ≥ 0, and a linear equality part Rx = r). In the sequel, when speaking
about strict/essentially strict feasibility of a particular conic problem, some of these
parts can be absent. Whenever this is the case, to make the definition applicable, we
act as if the constraints were augmented by trivial version(s) of the missing part(s),
say, [0; ...; 0]^T x + 1 ≥ 0 in the role of the actually missing conic part Ax − b ≥K 0
and [0; ...; 0]^T x = 0 in the role of the actually missing linear equality part.
For example, the univariate problem

      min_{x∈R} { x : x ≥ 0, −x ≥ −1 }

(a single conic constraint with K = R^2_+, no linear equalities) is strictly feasible, same as
the problem
      min_{x∈R} { x : x = 0 }

(no conic constraint, only a linear equality one), while the problem

      min_{x∈R} { x : x ≥ 0, −x ≥ 0 }

(a single polyhedral conic constraint with K = R^2_+, no linear equalities) is essentially
strictly feasible, but is not strictly feasible.
Note: When the conic constraint in the primal problem allows for splitting into "general" and
"polyhedral" parts:

      Opt(P) = min_x { c^T x : Ax − b ∈ K, Px − p ≥ 0, Rx = r },                      (P)

then the dual problem reads

      Opt(D) = max_{λ,θ,µ} { ⟨b, λ⟩ + p^T θ + r^T µ : λ ∈ K*, θ ≥ 0, A*λ + P^T θ + R^T µ = c },   (D)

so that its conic constraint also is split into "general" and "polyhedral" parts.

Definition 1.4.3 Let K be a regular cone. A conic constraint

      Ax − b ≥K 0                                                                     (∗)

is called strictly feasible if there exists x̄ which satisfies the constraint strictly:
A x̄ − b >K 0 (i.e., A x̄ − b ∈ int K).
The constraint is called essentially strictly feasible if K can be represented as the direct
product of several factors, some of them nonnegative orthants ("polyhedral factors"), and there
exists an essentially strictly feasible solution – a feasible solution x̄ such that A x̄ − b
belongs to the direct product of these polyhedral factors and the interiors of the remaining
factors.

Note: Essential strict feasibility of (∗) means that in fact (∗) is a system of several conic
constraints, some of them just ≥-type ones, and there exists x̄ which satisfies the ≥ constraints
and strictly satisfies all other constraints of the system. For example,

• the system of constraints in variables x ∈ R^3

      x1 − x2 ≥ 0,  x2 − x1 ≥ −1,  x3 ≥ sqrt(x1^2 + x2^2)

  (it can be thought of as a single conic constraint with K = R_+ × R_+ × L^3) is strictly
  feasible, a strictly feasible solution being, e.g., x1 = 0.5, x2 = 0, x3 = 1;

• the system of constraints in variables x ∈ R^3

      x1 − x2 ≥ 0,  x2 − x1 ≥ 0,  x3 ≥ sqrt(x1^2 + x2^2)

  – a single conic constraint with the same cone K = R_+ × R_+ × L^3 as above – clearly is
  not strictly feasible, but is essentially strictly feasible, an essentially strictly
  feasible solution being, e.g., x1 = 0, x2 = 0, x3 = 1.
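The two sample solutions are easy to check mechanically (a sketch; the residual helper is
ours). A constraint is satisfied strictly exactly when its residual is positive:

```python
import math

def residuals(x, c2):
    # residuals of the system  x1 - x2 >= 0,  x2 - x1 >= c2,  x3 >= sqrt(x1^2 + x2^2)
    x1, x2, x3 = x
    return [x1 - x2, x2 - x1 - c2, x3 - math.hypot(x1, x2)]

# first system (c2 = -1): x = (0.5, 0, 1) satisfies all three constraints strictly
assert all(t > 0 for t in residuals((0.5, 0.0, 1.0), -1.0))

# second system (c2 = 0): x1 - x2 > 0 and x2 - x1 > 0 cannot hold together, so no
# strictly feasible point exists; x = (0, 0, 1) satisfies the two polyhedral
# constraints (with equality) and the "general" L3-constraint strictly
r = residuals((0.0, 0.0, 1.0), 0.0)
assert r[0] == 0.0 and r[1] == 0.0 and r[2] > 0
```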

Theorem 1.4.2 [Conic Duality Theorem] Consider a conic program along with its dual:

      Opt(P) = min_x { c^T x : Ax − b ∈ K, Rx = r }                                   (P)
      Opt(D) = max_{λ,µ} { ⟨b, λ⟩ + r^T µ : λ ∈ K*, A*λ + R^T µ = c }                 (D)

Then

• [Primal-Dual Symmetry] The duality is symmetric: (D) is conic along with (P), and the
  problem dual to (D) is (equivalent to) (P).

• [Weak Duality] One has Opt(D) ≤ Opt(P).

• [Strong Duality] Assume that one of the problems (P), (D) is strictly feasible and bounded,
  boundedness meaning that on the feasible set the objective is bounded from below in the
  minimization case and from above in the maximization case. Then the other problem in the
  pair is solvable, and
      Opt(P) = Opt(D).
  In particular, if both problems are strictly feasible (and thus both are bounded by Weak
  Duality), then both problems are solvable with equal optimal values.
  In addition, if one of the problems is strictly feasible, then Opt(P) = Opt(D).

Proof.

A. Primal-Dual Symmetry: (D) is a conic problem. To write down its dual, we rewrite (D) as
a minimization problem

      −Opt(D) = min_{λ,µ} { −⟨b, λ⟩ − r^T µ : λ ∈ K*, A*λ + R^T µ = c }.

Denoting the Lagrange multipliers for the constraints λ ∈ K* and A*λ + R^T µ = c by z and −x,
respectively, the dual to this dual problem reads

      max_{z,x} { −c^T x : −Ax + z = −b, z ∈ (K*)* [= K], −Rx = −r };

the constraint −Ax + z = −b with z ∈ K says exactly that Ax − b ∈ K. Eliminating z, we arrive
at (P). □

B. Weak Duality: By construction of the dual. □

C. Strong Duality: We should prove that if one of the problems (P), (D) is strictly feasible
and bounded, then the other problem is solvable with Opt(P) = Opt(D), or, which is the same
by Weak Duality, with Opt(D) ≥ Opt(P). By Primal-Dual Symmetry, we lose nothing when
assuming that (P) is strictly feasible and bounded.
C.0. Observe that the fact we should prove is "stable w.r.t. a shift in x": passing in (P)
from the variable x to the variable h = x − x̄, the problem becomes

      Opt(P) − c^T x̄ = Opt(P′) = min_h { c^T h : Ah − [b − A x̄] ≥K 0, Rh = [r − R x̄] },   (P′)

with the dual being

      Opt(D′) = max_{λ,µ} { ⟨b − A x̄, λ⟩ + [r − R x̄]^T µ : λ ∈ K*, A*λ + R^T µ = c }.     (D′)

We see that the feasible set of (D′) is exactly the same as the one of (D), and at a point
(λ, µ) from this set the objective of (D′) is the objective of (D) minus the quantity

      ⟨A x̄, λ⟩ + [R x̄]^T µ = x̄^T [A*λ + R^T µ] = c^T x̄.

Thus, (D′) is obtained from (D) by keeping the feasible set of (D) intact and shifting the
objective by the constant −c^T x̄. Consequently, (D) and (D′) are solvable/unsolvable
simultaneously, and Opt(P) = Opt(D) is exactly the same as Opt(P′) = Opt(D′).
The observation we have just made says that we lose nothing when assuming that the strictly
feasible solution x̄ is just the origin, implying that r = 0, b <K 0, and the dual problem reads

      max_{λ,µ} { ⟨b, λ⟩ : λ ∈ K*, A*λ + R^T µ = c }.                                      (D)

C.1. Let F = {x : Rx = 0} be the set of solutions to the system of primal equality constraints.
Consider the sets

      S = {(s, z) ∈ R × E : s < Opt(P), z ≤K 0},
      T = {(s, z) ∈ R × E : ∃x ∈ F : c^T x ≤ s, b − Ax ≤K z}.

Clearly, S and T are nonempty convex sets with empty intersection. Now let us use the following
fundamental fact (see Section B.2.6):

Theorem 1.4.3 [Separation Theorem for Convex Sets] Let S, T be nonempty non-
intersecting convex subsets of a finite-dimensional Euclidean space H with inner
product ⟨·, ·⟩. Then S and T can be separated by a linear functional: there exists a
nonzero vector λ ∈ H such that

      sup_{u∈S} ⟨λ, u⟩ ≤ inf_{u∈T} ⟨λ, u⟩.

By the Separation Theorem, we can find a pair 0 ≠ (α, λ) ∈ R × E such that

      sup_{s<Opt(P), z≤K0} [αs + ⟨λ, z⟩] ≤ inf_{(s,z)∈T} [αs + ⟨λ, z⟩],               (1.4.2)

the left hand side being exactly sup_{(s,z)∈S} [αs + ⟨λ, z⟩].
The left hand side in this inequality should be finite, implying that α ≥ 0 and λ ∈ K*. In
view of this observation and the definitions of S and T, the left hand side in the inequality
is αOpt(P), and the right hand side is

      inf_x { αc^T x + ⟨b − Ax, λ⟩ : x ∈ F }.

Thus, by (1.4.2) we have

      αOpt(P) ≤ inf_{x∈F} [ αc^T x + ⟨b − Ax, λ⟩ ].                                   (1.4.3)

Recall that α ≥ 0 and λ ∈ K*. We claim that in fact α > 0. Indeed, assuming α = 0, we have
λ ≠ 0 (since (α, λ) ≠ 0) and λ ∈ K*, whence ⟨b, λ⟩ < 0 (recall that b <K 0). As a result,
(1.4.3) cannot hold true (look what happens with the right hand side when x = 0 ∈ F), which is
the desired contradiction.
We conclude that α > 0, so that (1.4.3) implies that λ* = α⁻¹λ is well defined, belongs to
K* and satisfies the relation

      ∀(x ∈ F) : ℓ(x) := c^T x − ⟨Ax, λ*⟩ ≥ δ := Opt(P) − ⟨b, λ*⟩.

Recall that F is the linear subspace {x : Rx = 0}; since the linear form ℓ(x) is bounded below
by δ on this subspace, it is identically zero on F, and δ ≤ 0 (indeed, 0 = ℓ(0) ≥ δ). In
particular, c − A*λ* is orthogonal to F, implying by Linear Algebra that c = A*λ* + R^T µ* for
some µ*. Recalling that λ* ∈ K* and looking at (D), we see that (λ*, µ*) is a feasible solution
to (D); the value ⟨b, λ*⟩ of the dual objective at this feasible solution is ≥ Opt(P) due to
δ ≤ 0, which, by Weak Duality, implies that (λ*, µ*) is an optimal solution to (D) and
Opt(P) = Opt(D).
C.2. We have proved that when one of the problems (P), (D) is strictly feasible and bounded,
then the other one is solvable, and Opt(P) = Opt(D). As a result, when both (P) and (D) are
strictly feasible, then both are solvable with equal optimal values (since by Weak Duality,
when one of the problems is feasible, the other one is bounded). Finally, we should verify
that if one of the problems (P), (D) is strictly feasible, then Opt(P) = Opt(D). By
Primal-Dual Symmetry, we lose nothing when assuming that (P) is strictly feasible. If (P) is
bounded, then we already know that Opt(D) = Opt(P). And when (P) is unbounded, we have
Opt(P) = −∞, whence Opt(D) = −∞ by Weak Duality. Verification of Strong Duality is completed.

D. Optimality conditions. Let x be feasible for (P), (λ, µ) be feasible for (D), and let one
of the problems be strictly feasible and bounded, so that Opt(P) = Opt(D) are two equal reals.
We have

      DualityGap(x, (λ, µ)) = c^T x − [⟨b, λ⟩ + r^T µ]
                            = [c^T x − Opt(P)] + [Opt(D) − (⟨b, λ⟩ + r^T µ)],

where both brackets are nonnegative. We see that in the case under consideration the duality
gap is the sum of the non-optimalities of the primal feasible solution and the dual feasible
solution in terms of the respective problems, implying that the duality gap is zero if and only
if the solutions in question are optimal for the respective problems.
The "Complementary Slackness" optimality condition, necessary and sufficient under the
circumstances, is readily given by the fact that for primal-dual feasible x, (λ, µ) (in fact,
for x, (λ, µ) satisfying only the linear equality constraints of the respective problems), we
have

      c^T x − [⟨b, λ⟩ + r^T µ] = [A*λ + R^T µ]^T x − [⟨b, λ⟩ + r^T µ]
                               = ⟨λ, Ax − b⟩ + µ^T [Rx − r] = ⟨λ, Ax − b⟩

(the term µ^T [Rx − r] vanishes since Rx = r); that is, the duality gap is zero if and only if
complementary slackness ⟨λ, Ax − b⟩ = 0 takes place. □

1.4.5.1 Refinement

We can slightly refine the Conic Duality Theorem, applying "special treatment" to scalar linear
inequality constraints. Specifically, assume that our primal problem is

      Opt(P) = min_x { c^T x : Px − p ≥ 0 (a),  Ax − b ∈ K (b) }                      (P)

where K is a regular cone in Euclidean space E; to save notation, we assume that linear
equality constraints, if any, are represented by pairs of opposite scalar linear inequalities
included into (a). As we know, with (P) in this form, the dual problem is

      Opt(D) = max_{θ,λ} { θ^T p + ⟨b, λ⟩ : θ ≥ 0, λ ∈ K*, P^T θ + A*λ = c }          (D)

The refinement in question is as follows:

Theorem 1.4.4 [Refined Conic Duality Theorem] Consider a primal-dual pair of conic pro-
grams (P ), (D). All claims of Conic Duality Theorem 1.4.2 remain valid when replacing in its
formulation “strict feasibility” with “essentially strict feasibility” as defined in Definition 1.4.2.

Note that the Refined Conic Duality Theorem covers the usual Linear Programming Duality
Theorem: the latter is the particular case of the former corresponding to the case when the
only “actual” constraints are the polyhedral ones (which formally can be modeled by setting
K = R+ and Ax − b ≡ 1).
Proof. The only claim we should take care of is that Strong Duality remains valid when
strict feasibility is relaxed to essentially strict feasibility, provided that (P), (D) are of
the form postulated in this Section. On closer inspection, it is immediately seen that all we
need is to prove that if one of the problems (P), (D) is bounded and essentially strictly
feasible, then the

other problem is solvable, and the optimal values are equal to each other. As in the proof of
Theorem 1.4.2, we lose nothing when assuming that the essentially strictly feasible and bounded
problem is (P).
Next, let us select among the essentially strictly feasible solutions to (P) the one which
maximizes the number of constraints in the polyhedral part Px − p ≥ 0 that are strictly
satisfied at this solution. By reasons completely similar to those used in item C.0 of the
proof of Theorem 1.4.2, we lose nothing when assuming that this essentially strictly feasible
solution is the origin (implying that b <K 0 and p ≤ 0). We also lose nothing when assuming
that the constraints Px − p ≥ 0 indeed are present, since otherwise all we need is given by
Theorem 1.4.2; let m > 0 be the dimension of p.
It may happen that all constraints Px − p ≥ 0 are strictly satisfied at the origin (i.e.,
p < 0). In this case the origin is a strictly feasible solution to (P), and we get everything
we need from Theorem 1.4.2 as applied with R = 0, r = 0, the cone R^m_+ × K in the role of K,
and the mapping x ↦ [Px − p; Ax − b] in the role of the mapping x ↦ Ax − b. Now assume that ν,
0 < ν ≤ m, of the entries in p are zeros, and the remaining ones, if any, are strictly
negative. We lose nothing when assuming that the last ν entries in p are zeros, and the first
m − ν are negative. Now let us split the constraints Px − p ≥ 0 into two groups: the first
m − ν forming the system Qx − q ≥ 0 with q < 0, and the last ν forming the system Rx ≥ 0. We
claim that the system of constraints

      Rx ≥ 0                                                                          (!)

has the same set of solutions as the system of linear equations

      Rx = 0.                                                                         (!!)

The only thing we should check is that if x̄ solves (!), it solves (!!) as well, that is, all
entries in R x̄ (which are nonnegative) are in fact zeros. Indeed, assuming that some of these
entries are positive, we conclude that for small positive t the vector t x̄ strictly satisfies,
along with x = 0, the constraint Ax − b ≥K 0 and the constraints Qx − q ≥ 0, and satisfies all
the constraints Rx ≥ 0, some of them strictly. In other words, t x̄ is an essentially strictly
feasible solution to (P) where the number of strictly satisfied constraints of the system
Px − p ≥ 0 is larger than at the origin; this is impossible, since by construction x = 0 is
the essentially strictly feasible solution to (P) with the largest possible number of
constraints Px − p ≥ 0 satisfied strictly.
Now let K+ = R^{m−ν}_+ × K ⊂ E+ = R^{m−ν} × E. Consider the conic problem

      Opt(P′) = min_x { c^T x : [Q; A]x − [q; b] ≥_{K+} 0, Rx = r ≡ 0 }               (P′)

where [Q; A]x − [q; b] ≡ (Qx − q, Ax − b). By construction, and in view of the fact that the
solution sets of (!) and (!!) are the same, the feasible set of the conic problem (P′) and its
objective are exactly the same as in (P), implying that (P′) is bounded and Opt(P′) = Opt(P).
Besides this, (P′) is strictly feasible, a strictly feasible solution being the origin.
Applying Theorem 1.4.2, the problem

      Opt(D′) = max_{γ,µ,λ} { ⟨b, λ⟩ + q^T γ : λ ∈ K*, γ ≥ 0, Q^T γ + R^T µ + A*λ = c }   (D′)

dual to (P′) is solvable with Opt(D′) = Opt(P′), whence Opt(D′) = Opt(P). (Note that since the
last ν entries of p vanish, q^T γ = p^T [γ; µ] and Q^T γ + R^T µ = P^T [γ; µ].) Now let
γ*, µ*, λ* be an optimal solution to (D′). Since (!!) is a consequence of (!), for every row ρ
of R the vector −ρ is a conic combination of the rows ρ1, ..., ρν of R (since the homogeneous
linear inequality ρ^T x ≤ 0 is a consequence of (!); recall the Homogeneous Farkas Lemma). It
follows that we can find a vector µ⁺ ≥ 0 such that R^T µ⁺ = R^T µ*, so that γ*, µ⁺, λ* is an
optimal solution to (D′) with the value of the objective Opt(D′) = Opt(P). Looking at (D) and
(D′) and recalling that γ* ≥ 0, µ⁺ ≥ 0 and that the last ν entries in p are zeros, we conclude
immediately that θ* := [γ*; µ⁺], λ* is a feasible solution to (D) with the value of the
objective Opt(P). By Weak Duality as applied to (P), (D), this solution is optimal. Thus, (D)
is solvable, and Opt(P) = Opt(D). □

1.4.6 Is something wrong with conic duality?


The statement of the Conic Duality Theorem is weaker than that of the LP Duality Theorem:
in the LP case, feasibility (even non-strict) and boundedness of either the primal or the dual
problem implies solvability of both the primal and the dual and equality between their optimal
values. In the general conic case something "nontrivial" is stated only in the case of strict
(or essentially strict) feasibility (and boundedness) of one of the problems. It can be
demonstrated by examples that this phenomenon reflects the nature of things, and is not due to
our inability to analyze it. The case of a non-polyhedral cone K is truly more complicated
than that of the nonnegative orthant; as a result, a "word-by-word" extension of the LP
Duality Theorem to the conic case is false.

Example 1.4.2 Consider the following conic problem with 2 variables x = (x1 , x2 )T and the
3-dimensional ice-cream cone K:
   

 x1 − x2 

min x1 : Ax − b ≡  1  ≥L3 0 .
 
 
 x1 + x2 

Recalling the definition of L3 , we can write the problem equivalently as


 q 
min x1 : (x1 − x2 )2 + 1 ≤ x1 + x2 ,

i.e., as the problem


min {x1 : 4x1 x2 ≥ 1, x1 + x2 > 0} .
Geometrically the problem is to minimize x1 over the intersection of the 3D ice-cream cone with
a 2D plane; the inverse image of this intersection in the “design plane” of variables x1, x2 is the
part of the 2D nonnegative orthant lying above the hyperbola x1x2 = 1/4. The problem is clearly
strictly feasible (a strictly feasible solution is, e.g., x = (1, 1)T ) and bounded below, with the
optimal value 0. This optimal value, however, is not achieved – the problem is unsolvable!

Example 1.4.3 Consider the following conic problem with two variables x = (x1 , x2 )T and the
3-dimensional ice-cream cone K:

min { x2 : Ax − b = [x1; x2; x1] ≥L3 0 } .

The problem is equivalent to the problem


min { x2 : sqrt(x1^2 + x2^2) ≤ x1 },

i.e., to the problem


min {x2 : x2 = 0, x1 ≥ 0} .
The problem is clearly solvable, and its optimal set is the ray {x1 ≥ 0, x2 = 0}.
Now let us build the conic dual to our (solvable!) primal. It is immediately seen that the
cone dual to an ice-cream cone is this ice-cream cone itself. Thus, the dual problem is
max_λ { 0 : [λ1 + λ3; λ2] = [0; 1], λ ≥L3 0 } .

In spite of the fact that the primal is solvable, the dual is infeasible: indeed, assuming that λ is dual
feasible, we have λ ≥L3 0, which means that λ3 ≥ sqrt(λ1^2 + λ2^2); since also λ1 + λ3 = 0, we come to
λ2 = 0, which contradicts the equality λ2 = 1.

We see that the weakness of the Conic Duality Theorem as compared to the LP Duality one
reflects pathologies which indeed may happen in the general conic case.
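The contradiction in Example 1.4.3 can also be confirmed numerically: on the whole line of λ's satisfying the two linear equations of the dual, the ice-cream cone membership residual stays strictly negative. A small sketch (NumPy; the scan range is arbitrary):

```python
import numpy as np

# Dual constraints of Example 1.4.3: lambda1 + lambda3 = 0 and lambda2 = 1
# leave a single degree of freedom lambda1; scan it over a wide range.
lam1 = np.linspace(-1e6, 1e6, 200_001)
lam3 = -lam1
# membership in L^3 requires lambda3 >= sqrt(lambda1^2 + lambda2^2);
# the residual below is nonnegative iff lambda is in the cone
residual = lam3 - np.sqrt(lam1**2 + 1.0**2)
assert residual.max() < 0.0   # the dual feasible set is empty on this line
print("largest cone residual found:", residual.max())
```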

1.4.7 Consequences of the Conic Duality Theorem


1.4.7.1 Sufficient condition for infeasibility
Recall that a necessary and sufficient condition for infeasibility of a (finite) system of scalar linear
inequalities (i.e., for a vector inequality with respect to the partial ordering ≥) is the possibility
to combine these inequalities in a linear fashion in such a way that the resulting scalar linear
inequality is contradictory. In the case of cone-generated vector inequalities a slightly weaker
result can be obtained:
Proposition 1.4.2 [Conic Theorem on Alternative] Consider a linear vector inequality

Ax − b ≥K 0. (I)

(i) If there exists λ satisfying

λ ≥K∗ 0, A∗λ = 0, ⟨λ, b⟩ > 0, (II)

then (I) has no solutions.


(ii) If (II) has no solutions, then (I) is “almost solvable” – for every positive ε there exists b′
such that ‖b′ − b‖2 < ε and the perturbed system

Ax − b′ ≥K 0

is solvable.
Moreover,
(iii) (II) is solvable if and only if (I) is not “almost solvable”.
Note the difference between the simple case when ≥K is the usual partial ordering ≥ and
the general case. In the former, one can replace in (ii) “almost solvable” by “solvable”; however,
in the general conic case “almost” is unavoidable.

Example 1.4.4 Let system (I) be given by


 
x+1
Ax − b ≡  x√− 1  ≥L3 0.
 
2x

Recalling the definition of the ice-cream cone L3 , we can write the inequality equivalently as
√ q p
2x ≥ (x + 1)2 + (x − 1)2 ≡ 2x2 + 2, (i)

which of course is unsolvable. The corresponding system (II) is


λ3 ≥ sqrt(λ1^2 + λ2^2)      [⇔ λ ≥(L3)∗ 0]
λ1 + λ2 + √2 λ3 = 0          [⇔ AT λ = 0]          (ii)
λ2 − λ1 > 0                  [⇔ bT λ > 0]

From the second of these relations, λ3 = −(1/√2)(λ1 + λ2), so that squaring the first inequality we get
(λ1 + λ2)^2/2 ≥ λ1^2 + λ2^2, i.e., 0 ≥ (λ1 − λ2)^2, whence λ1 = λ2. But then the third inequality in (ii) is impossible! We see that
here both (i) and (ii) have no solutions.
The geometry of the example is as follows. (i) asks to find a point in the intersection of
the 3D ice-cream cone and a line. This line is an asymptote of the cone (it belongs to a 2D
plane which crosses the cone in such a way that the boundary of the cross-section is a branch of
a hyperbola, and the line is one of two asymptotes of the hyperbola). Although the intersection
is empty ((i) is unsolvable), small shifts of the line make the intersection nonempty (i.e., (i) is
unsolvable and “almost solvable” at the same time). And it turns out that one cannot certify
the fact that (i) itself is unsolvable by providing a solution to (ii).
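The asymptote picture can be checked numerically: the residual of (i) is positive for every x yet tends to 0, so an arbitrarily small shift of b makes the system solvable. A brief sketch (plain Python; the sample values of x are arbitrary):

```python
import math

# Residual of (i): r(x) = sqrt(2*x**2 + 2) - sqrt(2)*x.
# r(x) > 0 for every x (so (i) is unsolvable), but
# r(x) = 2/(sqrt(2*x**2 + 2) + sqrt(2)*x) -> 0 as x -> +infinity,
# so for every eps > 0 shifting the third entry of b by -eps makes
# the perturbed system solvable: (i) is "almost solvable".
for x in [1.0, 1e2, 1e4, 1e6]:
    r = math.sqrt(2.0 * x * x + 2.0) - math.sqrt(2.0) * x
    assert r > 0.0
    print(f"x = {x:9.1e}   residual r(x) = {r:.3e}")
```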

Proof of the Proposition. (i) is evident (why?).


Let us prove (ii). To this end it suffices to verify that if (I) is not “almost solvable”, then (II) is
solvable. Let us fix a vector σ >K 0 and look at the conic problem

min {t : Ax + tσ − b ≥K 0} (CP)
x,t

in variables (x, t). Clearly, the problem is strictly feasible (why?). Now, if (I) is not almost solvable, then
the optimal value in (CP) is strictly positive (otherwise the problem would admit feasible solutions with
t close to 0, and this would mean that (I) is almost solvable). From the Conic Duality Theorem it follows
that the dual problem of (CP)

max_λ { ⟨b, λ⟩ : A∗λ = 0, ⟨σ, λ⟩ = 1, λ ≥K∗ 0 }

has a feasible solution with positive ⟨b, λ⟩, i.e., (II) is solvable.
It remains to prove (iii). Assume first that (I) is not almost solvable; then (II) must be solvable by
(ii). Vice versa, assume that (II) is solvable, and let λ be a solution to (II). Then λ solves also all systems
of the type (II) associated with small enough perturbations of b instead of b itself; by (i), it implies that
all inequalities obtained from (I) by small enough perturbation of b are unsolvable. 2

Explanation and refinement. The set B of those b’s for which (I) is solvable is nothing but

B = {b = Ax − y : y ∈ K};

this set clearly is convex and nonempty, but not necessarily closed. Clearly, the set of those
b’s for which (I) is almost solvable is exactly the closure cl B of B. Now, by the Conic Theorem on
Alternative, solvability of (II) is exactly the same as the fact that b ∉ cl B. When B is closed, to
be outside of B and outside of cl B is the same, and in this special case (I) is unsolvable if and
only if (II) is solvable. For example, this is the case when K is a nonnegative orthant, since
here B is polyhedrally representable and therefore polyhedral and therefore closed. However, in
general B is not necessarily closed, and this is why there is a gap between insolvability of (I) (that
is, the fact that b ∉ B) and solvability of (II) (that is, the fact that b ∉ cl B).
Note that we can slightly refine Conic Theorem on Alternative by considering the case when
the conic constraint in question has “general” and “polyhedral” parts:

Proposition 1.4.3 [Refined Conic Theorem on Alternative] Consider a linear vector inequality
split into “general” and “polyhedral” parts

P x − p ≥L 0, Qx − q ≥ 0 (I)

(i) If there exists λ and µ satisfying

λ ≥L∗ 0, µ ≥ 0, P ∗λ + QT µ = 0, ⟨λ, p⟩ + µT q > 0, (II)

(that is, aggregating the components of (I) with weights λ and µ leads to a contradictory inequal-
ity), then (I) has no solutions.
(ii) If (II) has no solutions, then (I) is “almost solvable” – for every positive ε there exists
p′ such that ‖p′ − p‖2 < ε and the perturbed system

P x − p′ ≥L 0, Qx − q ≥ 0

is solvable.
Moreover,
(iii) (II) is solvable if and only if (I) is not “almost solvable”.

Note the difference with Proposition 1.4.2: now “almost solvability” means the possibility to
make (I) feasible by arbitrarily small perturbation of block p of the conic constraint Ax − b :=
[P x; Qx] − [p; q] ≥K 0, K = L × Rn+ , while in Proposition 1.4.2 we were allowed to perturb the
entire b.
Proof. It suffices to prove (iii). In one direction: let (II) be solvable, let λ, µ be a solution to
(II), so that in particular P ∗λ + QT µ = 0. Then for small positive ε and all p′ such that ‖p′ − p‖2 ≤ ε
we have ⟨λ, p′⟩ + µT q > 0, implying that for every x it holds

⟨λ, P x − p′⟩ + µT (Qx − q) = −⟨λ, p′⟩ − µT q < 0,

implying that there is no x such that P x − p′ ≥L 0 and Qx ≥ q (recall that λ ≥L∗ 0 and µ ≥ 0).
Thus, all small enough perturbations of p keep (I) infeasible, so that (I) is not almost solvable,
as claimed.
In the opposite direction: assume that (II) is unsolvable, and let us prove that (I) is almost
solvable. Assume that the latter is not the case, that is (II) is unsolvable and (I) is not almost

solvable, and let us lead this assumption to a contradiction. First, observe that the system of
constraints Qx − q ≥ 0 is solvable, since otherwise by the usual Theorem on Alternative there
exists µ ≥ 0 such that QT µ = 0 and q T µ > 0; augmenting this µ by λ = 0, we would get a solution to (II),
which is impossible since (II) is unsolvable. Next, let us set

B = {r : ∃x, y : r = P x − y, y ∈ L, Qx ≥ q}

Since the system Qx ≥ q of constraints on x is solvable, B is a nonempty (and clearly convex)


set; it is comprised of all r’s such that the system

P x − r ≥L 0, Qx − q ≥ 0

of constraints on x is solvable. Since (I) is not almost solvable, p does not belong to the closure
cl B of B. By the Separation Theorem, p can be strictly separated from cl B: there exist α > 0 and λ
such that

⟨λ, p⟩ ≥ α + sup_{r∈B} ⟨λ, r⟩ = α + sup_{x,y} {⟨λ, P x − y⟩ : Qx ≥ q, y ∈ L}.   (1.4.4)

We see that the concluding sup_{x,y} is finite, implying that λ ∈ L∗ and therefore

∞ > sup_{x,y} {⟨λ, P x − y⟩ : Qx ≥ q, y ∈ L} = sup_x {⟨λ, P x⟩ : Qx ≥ q}.

Since the system Qx ≥ q is solvable and sup_x {⟨λ, P x⟩ : Qx ≥ q} is finite, applying LP Duality,
we conclude that there exists µ ≥ 0 such that QT µ + P ∗λ = 0 and

sup_x {⟨λ, P x⟩ : Qx ≥ q} = −µT q.

Invoking (1.4.4), we get

⟨λ, p⟩ ≥ α − µT q,

and, as we have seen, λ ∈ L∗, µ ≥ 0, P ∗λ + QT µ = 0, implying that (λ, µ) is a solution to (II),
which is the desired contradiction. 2

1.4.7.2 When is a scalar linear inequality a consequence of a given linear vector inequality?
The question we are interested in is as follows: given a linear vector inequality

Ax ≥K b (V)

and a scalar inequality


cT x ≥ d (S)
we want to check whether (S) is a consequence of (V). If K is the nonnegative orthant, the
answer is given by the Inhomogeneous Farkas Lemma:

Inequality (S) is a consequence of a feasible system of linear inequalities Ax ≥ b if


and only if (S) can be obtained from (V) and the trivial inequality 1 ≥ 0 in a linear
fashion (by taking weighted sum with nonnegative weights).
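In the polyhedral case the search for such an aggregation certificate is itself an LP: (S) is a consequence of a feasible system Ax ≥ b exactly when max{bT λ : AT λ = c, λ ≥ 0} ≥ d. A hedged sketch with SciPy on made-up illustration data (the system x1 ≥ 0, x2 ≥ 0, x1 + x2 ≥ 1 and the target x1 + x2 ≥ 1 are invented for the demo):

```python
import numpy as np
from scipy.optimize import linprog

# made-up data: (V) is  x1 >= 0, x2 >= 0, x1 + x2 >= 1;  (S) is  x1 + x2 >= 1
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([0.0, 0.0, 1.0])
c = np.array([1.0, 1.0])
d = 1.0

# certificate search: maximize b^T lam subject to A^T lam = c, lam >= 0
# (linprog minimizes, so we pass -b as the objective)
res = linprog(-b, A_eq=A.T, b_eq=c, bounds=[(0, None)] * len(b))
assert res.status == 0
print("best aggregated right-hand side:", -res.fun)
# (S) is a consequence of (V) iff this value is >= d
assert -res.fun >= d - 1e-9
```

Here the maximizing λ itself is the nonnegative weight vector that aggregates the rows of (V) into (S).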

In the general conic case we can get a slightly weaker result:


Proposition 1.4.4 (i) If (S) can be obtained from (V) and from the trivial inequality 1 ≥ 0 by
admissible aggregation, i.e., there exists a weight vector λ ≥K∗ 0 such that

A∗λ = c, ⟨λ, b⟩ ≥ d,
then (S) is a consequence of (V).
(ii) If (S) is a consequence of an essentially strictly feasible linear vector inequality (V), then
(S) can be obtained from (V) by an admissible aggregation.
The difference between the case of the partial ordering ≥ and a general partial ordering ≥K
is in the words “essentially strictly” in (ii).
Proof of the proposition. (i) is evident (why?). To prove (ii), assume that (V) is essentially strictly
feasible and (S) is a consequence of (V), and consider the conic problem

min_{x,t} { t : Ā[x; t] − b̄ ≡ [Ax − b; d − cT x + t] ≥K̄ 0 },

K̄ = {(x, t) : x ∈ K, t ≥ 0}
The problem is clearly essentially strictly feasible (choose x to be an essentially strictly feasible solution
to (V) and then choose t to be large enough). The fact that (S) is a consequence of (V) says exactly
that the optimal value in the problem is nonnegative. By the (refined) Conic Duality Theorem, the dual
problem

max_{λ,µ} { ⟨b, λ⟩ − dµ : A∗λ − c = 0, µ = 1, [λ; µ] ≥K̄∗ 0 }
has a feasible solution with the value of the objective ≥ 0. Since, as it is easily seen, K̄∗ = {(λ, µ) : λ ∈
K∗ , µ ≥ 0}, the indicated solution satisfies the requirements
λ ≥K∗ 0, A∗λ = c, ⟨b, λ⟩ ≥ d,
i.e., (S) can be obtained from (V) by an admissible aggregation. 2

1.4.7.3 “Robust solvability status”


Examples 1.4.3 – 1.4.4 make it clear that in the general conic case we may meet “pathologies”
which do not occur in LP. E.g., a feasible and bounded problem may be unsolvable, the dual
to a solvable conic problem may be infeasible, etc. Where do the pathologies come from? Looking
at our “pathological examples”, we arrive at the following guess: the source of the pathologies
is that in these examples, the “solvability status” of the primal problem is non-robust – it can
be changed by small perturbations of the data. This issue of robustness is very important in
modelling, and it deserves a careful investigation.

Data of a conic problem. When asked “What are the data of an LP program min{cT x :
Ax − b ≥ 0}”, everybody will give the same answer: “the objective c, the constraint matrix A
and the right hand side vector b”. Similarly, for a conic problem
n o
min cT x : Ax − b ≥K 0 , (CP)

its data, by definition, is the triple (c, A, b), while the sizes of the problem – the dimension n
of x and the dimension m of K, same as the underlying cone K itself, are considered as the
structure of (CP).

Robustness. A question of primary importance is whether the properties of the program (CP)
(feasibility, solvability, etc.) are stable with respect to perturbations of the data. The reasons
which make this question important are as follows:

• In actual applications, especially those arising in Engineering, the data are normally inex-
act: their true values, even when they “exist in the nature”, are not known exactly when
the problem is processed. Consequently, the results of the processing say something defi-
nite about the “true” problem only if these results are robust with respect to small data
perturbations i.e., the properties of (CP) we have discovered are shared not only by the
particular (“nominal”) problem we were processing, but also by all problems with nearby
data.

• Even when the exact data are available, we should take into account that processing them
computationally we unavoidably add “noise” like rounding errors (you simply cannot load
something like 1/7 to the standard computer). As a result, a real-life computational routine
can recognize only those properties of the input problem which are stable with respect to
small perturbations of the data.

Due to the above reasons, we should study not only whether a given problem (CP) is feasi-
ble/bounded/solvable, etc., but also whether these properties are robust – remain unchanged
under small data perturbations. As it turns out, the Conic Duality Theorem allows us to recognize
“robust feasibility/boundedness/solvability...”.
Let us start with introducing the relevant concepts. We say that (CP) is

• robust feasible, if all “sufficiently close” problems (i.e., those of the same structure
(n, m, K) and with data close enough to those of (CP)) are feasible;

• robust infeasible, if all sufficiently close problems are infeasible;

• robust bounded below, if all sufficiently close problems are bounded below (i.e., their
objectives are bounded below on their feasible sets);

• robust unbounded, if all sufficiently close problems are not bounded;

• robust solvable, if all sufficiently close problems are solvable.

Note that a problem which is not robust feasible is not necessarily robust infeasible, since among
close problems there may be both feasible and infeasible ones (look at Example 1.4.3 – slightly shifting
and rotating the plane Im A − b, we may get whatever we want – a feasible bounded problem,
a feasible unbounded problem, an infeasible problem...). This is why we need two kinds of
definitions: one of “robust presence of a property” and one more of “robust absence of the same
property”.
Now let us look at necessary and sufficient conditions for the most important robust
forms of the “solvability status”.

Proposition 1.4.5 [Robust feasibility] (CP) is robust feasible if and only if it is strictly feasible,
in which case the dual problem (D) is robust bounded above.
Assuming KerA = {0}, (D) is robust feasible if and only if (D) is strictly feasible.

Proof. The statements are nearly tautological.


Problem (CP): let us fix δ >K 0. If (CP) is robust feasible, then for small enough ε > 0 the perturbed
problem min{cT x : Ax − b − εδ ≥K 0} should be feasible; a feasible solution to the perturbed problem
clearly is a strictly feasible solution to (CP). The inverse implication is evident (a strictly feasible solution
to (CP) remains feasible for all problems with close enough data). It remains to note that if all problems
sufficiently close to (CP) are feasible, then their duals, by the Weak Conic Duality Theorem, are bounded
above, so that (D) is robust bounded above.
Problem (D): let us fix δ >K∗ 0. If (D) is robust feasible, the system A∗λ = c − εA∗δ for a small
enough ε > 0 should have a solution λ ≥K∗ 0, implying that A∗[λ + εδ] = c and λ + εδ >K∗ 0, that
is, (D) is strictly feasible. Vice versa, let (D) be strictly feasible: A∗λ̄ = c with λ̄ >K∗ 0. For A′ close
enough to A and c′ close enough to c, the vector ∆ = A′([A′]∗A′)−1[c′ − [A′]∗λ̄] is well defined (recall that
Ker A = {0}), and setting λ = λ̄ + ∆, we clearly have [A′]∗λ = c′; besides this, ∆ → 0 as A′ → A and
c′ → c, implying, due to λ̄ >K∗ 0, that the above λ is ≥K∗ 0 whenever A′ is close enough to A and c′ is
close enough to c, that is, (D) is robust feasible. 2
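The correction ∆ used for problem (D) is a plain least-squares step, and its properties are easy to verify numerically. A small sketch with random made-up data (NumPy; only the relation AT λ̄ = c is enforced here, strict cone positivity is not modelled):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 6, 3
A = rng.standard_normal((m, n))        # full column rank with probability 1
lam_bar = rng.standard_normal(m)
c = A.T @ lam_bar                      # so A^T lam_bar = c holds exactly

# small perturbations A', c' of the data
Ap = A + 1e-6 * rng.standard_normal((m, n))
cp = c + 1e-6 * rng.standard_normal(n)

# correction from the proof: Delta = A'([A']^T A')^{-1} (c' - [A']^T lam_bar)
Delta = Ap @ np.linalg.solve(Ap.T @ Ap, cp - Ap.T @ lam_bar)
lam = lam_bar + Delta

assert np.allclose(Ap.T @ lam, cp)     # [A']^T lam = c'
assert np.linalg.norm(Delta) < 1e-4    # Delta -> 0 with the perturbation
print("perturbed equality restored, ||Delta|| =", np.linalg.norm(Delta))
```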

Proposition 1.4.6 [Robust infeasibility] Let KerA = {0}. Then (CP) is robust infeasible if
and only if the system
⟨b, λ⟩ = 1, A∗λ = 0, λ >K∗ 0 (1.4.5)
has a solution.

Proof. First assume that (1.4.5) is solvable, and let us prove that all problems sufficiently close to (CP)
are infeasible. Let us fix a solution λ̄ to (1.4.5). Since A is of full column rank, simple Linear Algebra
says that the systems [A′]∗λ = 0 are solvable for all matrices A′ from a small enough neighbourhood U
of A; moreover, the corresponding solution λ(A′) can be chosen to satisfy λ(A) = λ̄ and to be continuous
in A′ ∈ U. Since λ(A′) is continuous and λ(A) >K∗ 0, we have λ(A′) >K∗ 0 in a neighbourhood of A;
shrinking U appropriately, we may assume that λ(A′) >K∗ 0 for all A′ ∈ U. Now, bT λ̄ = 1; by continuity
reasons, there exist a neighbourhood V of b and a neighbourhood U′ of A such that for all b′ ∈ V and all
A′ ∈ U′ one has ⟨b′, λ(A′)⟩ > 0.
Thus, we have seen that there exist a neighbourhood U′ of A and a neighbourhood V of b, along with
a function λ(A′), A′ ∈ U′, such that

⟨b′, λ(A′)⟩ > 0, [A′]∗λ(A′) = 0, λ(A′) ≥K∗ 0

for all b′ ∈ V and A′ ∈ U′. By Proposition 1.4.2.(i) it means that all the problems

min { [c′]T x : A′x − b′ ≥K 0 }

with b′ ∈ V and A′ ∈ U′ are infeasible, so that (CP) is robust infeasible.


Now let us assume that (CP) is robust infeasible, and let us prove that then (1.4.5) is solvable. Indeed,
by the definition of robust infeasibility, there exist neighbourhoods U of A and V of b such that all vector
inequalities

A′x − b′ ≥K 0

with A′ ∈ U and b′ ∈ V are unsolvable. It follows that whenever A′ ∈ U and b′ ∈ V, the vector inequality

A′x − b′ ≥K 0

is not almost solvable (see Proposition 1.4.2). We conclude from Proposition 1.4.2.(ii) that for every
A′ ∈ U and b′ ∈ V there exists λ = λ(A′, b′) such that

⟨b′, λ(A′, b′)⟩ > 0, [A′]∗λ(A′, b′) = 0, λ(A′, b′) ≥K∗ 0.



Now let us choose λ0 >K∗ 0. For all small enough positive ε we have Aε = A + εb[A∗λ0]T ∈ U. Let us
choose an ε with the latter property so small that ε⟨b, λ0⟩ > −1, and set A′ = Aε, b′ = b. According
to the previous observation, there exists λ = λ(A′, b) such that

⟨b, λ⟩ > 0, [A′]∗λ ≡ A∗[λ + ε⟨b, λ⟩λ0] = 0, λ ≥K∗ 0.

Setting λ̄ = λ + ε⟨b, λ⟩λ0, we get λ̄ >K∗ 0 (since λ ≥K∗ 0, λ0 >K∗ 0 and ⟨b, λ⟩ > 0), while A∗λ̄ = 0 and
⟨b, λ̄⟩ = ⟨b, λ⟩(1 + ε⟨b, λ0⟩) > 0. Multiplying λ̄ by an appropriate positive factor, we get a solution to (1.4.5).
2

Now we are able to formulate our main result on “robust solvability”.


Proposition 1.4.7 For a conic problem (CP) with KerA = {0} the following conditions are
equivalent to each other
(i) (CP) is robust feasible and robust bounded (below);
(ii) (CP) is robust solvable;
(iii) (D) is robust solvable;
(iv) (D) is robust feasible and robust bounded (above);
(v) Both (CP) and (D) are strictly feasible.
In particular, under every one of these equivalent assumptions, both (CP) and (D) are solv-
able with equal optimal values.
Proof. (i) ⇒ (v): If (CP) is robust feasible, it also is strictly feasible (Proposition 1.4.5) and therefore
remains strictly feasible for all small enough perturbations of data. If, in addition, (CP) is robust bounded
below, then (D) is robust solvable (by the Conic Duality Theorem); in particular, (D) is robust feasible.
Let λ0 ∈ int K∗. Since (D) is robust feasible, the system A∗λ = c − εA∗λ0 for small ε > 0 should have
a solution λ ≥K∗ 0, implying that A∗[λ + ελ0] = c; since λ + ελ0 >K∗ 0, we see that (D) is strictly
feasible, completing the justification of (v).
(v) ⇒ (ii) and (v) ⇒ (iii): when (v) holds true, problems (CP) and (D) remain strictly feasible for
all small enough perturbations of data (for (CP) this is evident, for (D) it is readily given by the fact that A
has trivial kernel), so that (ii) and (iii) hold true due to the Conic Duality Theorem.
(ii) ⇒ (i): trivial.
We have seen that (i)≡(ii)≡(v).
(iv) ⇒ (v): By Proposition 1.4.5, in the case of (iv) (D) is strictly feasible, and since KerA = {0}, (D)
remains strictly feasible for all small enough perturbations of data. Thus, (iv) ⇒ (ii) (by Conic Duality
Theorem), and we already know that (ii)≡(v).
(iii) ⇒ (iv): trivial.
We have seen that (v) ⇒ (iii) ⇒ (iv) ⇒ (v), whence (v)≡(iii)≡(iv). 2

1.5 Exercises for Lecture 1


Solutions to exercises/parts of exercises colored in cyan can be found in section 6.1.

1.5.1 Around General Theorem on Alternative


Exercise 1.3 Derive the General Theorem on Alternative from the Homogeneous Farkas Lemma.
Hint: Verify that the system

(S):  aiT x > bi, i = 1, ..., ms,
      aiT x ≥ bi, i = ms + 1, ..., m

in variables x has no solution if and only if the homogeneous inequality

ε ≤ 0

in variables x, ε, t is a consequence of the system of homogeneous inequalities


aiT x − bi t − ε ≥ 0, i = 1, ..., ms,
aiT x − bi t ≥ 0, i = ms + 1, ..., m,
t ≥ ε,

in these variables.

There exist several particular cases of GTA (which in fact are equivalent to GTA); the goal of
the next exercise is to prove the corresponding statements.
Exercise 1.4 Derive the following statements from the General Theorem on Alternative:

1. [Gordan’s Theorem on Alternative] One of the inequality systems

(I) Ax < 0, x ∈ Rn ,

(II) AT y = 0, 0 ≠ y ≥ 0, y ∈ Rm ,
(A being an m × n matrix, x are variables in (I), y are variables in (II)) has a solution if
and only if the other one has no solutions.
2. [Inhomogeneous Farkas Lemma] A linear inequality in variables x

aT x ≤ p (1.5.1)

is a consequence of a solvable system of linear inequalities

Nx ≤ q (1.5.2)

if and only if it is a “linear consequence” of the system and the trivial inequality

0T x ≤ 1,

i.e., if it can be obtained by taking weighted sum, with nonnegative coefficients, of the
inequalities from the system and this trivial inequality.
Algebraically: (1.5.1) is a consequence of solvable system (1.5.2) if and only if

a = NT ν

for some nonnegative vector ν such that

ν T q ≤ p.

3. [Motzkin’s Theorem on Alternative] The system

Sx < 0, N x ≤ 0

in variables x has no solutions if and only if the system

S T σ + N T ν = 0, σ ≥ 0, ν ≥ 0, σ ≠ 0

in variables σ, ν has a solution.



Exercise 1.5 Consider the linear inequality

x+y ≤2

and the system of linear inequalities

x≤1


−x ≤ −100

Our inequality clearly is a consequence of the system – it is satisfied at every solution to it (simply
because there are no solutions to the system at all). According to the Inhomogeneous Farkas
Lemma, the inequality should be a linear consequence of the system and the trivial inequality
0 ≤ 1, i.e., there should exist nonnegative ν1 , ν2 such that

[1; 1] = ν1 [1; 0] + ν2 [−1; 0], ν1 − 100ν2 ≤ 2,

which clearly is not the case. What is the reason for the observed “contradiction”?

1.5.2 Around cones


Attention! In what follows, unless otherwise explicitly stated, “cone” is a shorthand for a
regular cone (i.e., a closed pointed cone with a nonempty interior), K denotes a cone, and K∗ is
the cone dual to K.

Exercise 1.6 Let K be a cone, and let x̄ >K 0. Prove that x >K 0 if and only if there exists
positive real t such that x ≥K tx̄.

Exercise 1.7 1) Prove that if 0 ≠ x ≥K 0 and λ >K∗ 0, then λT x > 0.
2) Assume that λ ≥K∗ 0. Prove that λ >K∗ 0 if and only if λT x > 0 whenever 0 ≠ x ≥K 0.
3) Prove that λ >K∗ 0 if and only if the set

{x ≥K 0 : λT x ≤ 1}

is compact.

1.5.2.1 Calculus of cones


Exercise 1.8 Prove the following statements:
1) [stability with respect to direct multiplication] Let Ki ⊂ Rni be cones, i = 1, ..., k. Prove
that the direct product of the cones:

K = K1 × ... × Kk = {(x1 , ..., xk ) : xi ∈ Ki , i = 1, ..., k}

is a cone in Rn1 +...+nk = Rn1 × ... × Rnk .


Prove that the cone dual to K is the direct product of the cones dual to Ki , i = 1, .., k.
2) [stability with respect to taking inverse image] Let K be a cone in Rn and u ↦ Au be
a linear mapping from a certain Rk to Rn with trivial null space (Null(A) = {0}) and such that
Im A ∩ int K ≠ ∅. Prove that the inverse image of K under the mapping:

A−1 (K) = {u : Au ∈ K}

is a cone in Rk .
Prove that the cone dual to A−1 (K) is AT K∗ , i.e.

(A−1 (K))∗ = {AT λ : λ ∈ K∗ }.

3) [stability with respect to taking linear image] Let K be a cone in Rn and y = Ax be a linear
mapping from Rn onto RN (i.e., the image of A is the entire RN ). Assume Null(A) ∩ K = {0}.
Prove that then the set
AK = {Ax : x ∈ K}
is a cone in RN .
Prove that the cone dual to AK is

(AK)∗ = {λ ∈ RN : AT λ ∈ K∗ }.

Demonstrate by example that if in the above statement the assumption Null(A) ∩ K = {0} is
weakened to Null(A) ∩ int K = ∅, then the set A(K) may happen to be non-closed.
Hint. Look what happens when the 3D ice-cream cone is projected onto its tangent plane.

1.5.2.2 Primal-dual pairs of cones and orthogonal pairs of subspaces


Exercise 1.9 Let A be a m × n matrix of full column rank and K be a cone in Rm .
1) Prove that at least one of the following facts always takes place:
(i) There exists a nonzero x ∈ Im A which is ≥K 0;
(ii) There exists a nonzero λ ∈ Null(AT ) which is ≥K∗ 0.
Geometrically: given a primal-dual pair of cones K, K∗ and a pair L, L⊥ of linear subspaces
which are orthogonal complements of each other, we either can find a nontrivial ray in the
intersection L ∩ K, or in the intersection L⊥ ∩ K∗ , or both.
2) Prove that there exists λ ∈ Null(AT ) which is >K∗ 0 (this is the strict version of (ii)) if
and only if (i) is false. Prove that, similarly, there exists x ∈ ImA which is >K 0 (this is the
strict version of (i)) if and only if (ii) is false.
Geometrically: if K, K∗ is a primal-dual pair of cones and L, L⊥ are linear subspaces which
are orthogonal complements of each other, then the intersection L ∩ K is trivial (i.e., is the
singleton {0}) if and only if the intersection L⊥ ∩ int K∗ is nonempty.

1.5.2.3 Several interesting cones


Given a cone K along with its dual K∗ , let us call a complementary pair every pair x ∈ K,
λ ∈ K∗ such that
λT x = 0.
Recall that in “good cases” (e.g., under the premise of item 4 of the Conic Duality Theorem) a
pair of feasible solutions (x, λ) of a primal-dual pair of conic problems
min { cT x : Ax − b ≥K 0 }

max { bT λ : AT λ = c, λ ≥K∗ 0 }
is primal-dual optimal if and only if the “primal slack” y = Ax − b and λ are complementary.

Exercise 1.10 [Nonnegative orthant] Prove that the n-dimensional nonnegative orthant Rn+ is
a cone and that it is self-dual:
(Rn+ )∗ = Rn+ .
What are complementary pairs?
Exercise 1.11 [Ice-cream cone] Let Ln be the n-dimensional ice-cream cone:

Ln = {x ∈ Rn : xn ≥ sqrt(x1^2 + ... + xn−1^2)}.

1) Prove that Ln is a cone.


2) Prove that the ice-cream cone is self-dual:
(Ln )∗ = Ln .
3) Characterize the complementary pairs.
Exercise 1.12 [Positive semidefinite cone] Let Sn+ be the cone of n × n positive semidefinite
matrices in the space Sn of symmetric n × n matrices. Assume that Sn is equipped with the
Frobenius inner product
⟨X, Y ⟩ = Tr(XY ) = ∑_{i,j=1}^{n} Xij Yij .

1) Prove that Sn+ indeed is a cone.


2) Prove that the semidefinite cone is self-dual:
(Sn+ )∗ = Sn+ ,
i.e., that the Frobenius inner products of a symmetric matrix Λ with all positive semidefinite ma-
trices X of the same size are nonnegative if and only if the matrix Λ itself is positive semidefinite.
3) Prove the following characterization of the complementary pairs:
Two matrices X ∈ Sn+ , Λ ∈ (Sn+ )∗ ≡ Sn+ are complementary (i.e., ⟨Λ, X⟩ = 0) if and
only if their matrix product is zero: ΛX = XΛ = 0. In particular, matrices from a
complementary pair commute and therefore share a common orthonormal eigenbasis.
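A quick numerical illustration of this characterization (NumPy, made-up rank-one data): with X = uuT and Λ = vvT for orthogonal u and v, the Frobenius inner product vanishes and the matrix products are indeed zero.

```python
import numpy as np

u = np.array([1.0, 2.0, 0.0])
v = np.array([-2.0, 1.0, 3.0])      # u^T v = 0, so u and v are orthogonal
X = np.outer(u, u)                  # rank-one positive semidefinite matrix
Lam = np.outer(v, v)                # rank-one positive semidefinite matrix

frob = np.trace(Lam @ X)            # Frobenius inner product <Lam, X>
assert abs(frob) < 1e-12
# complementarity forces the matrix products themselves to vanish
assert np.allclose(Lam @ X, 0.0) and np.allclose(X @ Lam, 0.0)
print("<Lam, X> =", frob)
```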

1.5.3 Around conic problems: Several primal-dual pairs


Exercise 1.13 [The min-max Steiner problem] Consider the problem as follows:
Given N points b1 , ..., bN in Rn , find a point x ∈ Rn which minimizes the maximum
(Euclidean) distance from itself to the points b1 , ..., bN , i.e., solve the problem
min_x max_{i=1,...,N} ‖x − bi ‖2 .

Imagine, e.g., that n = 2, b1, ..., bN are locations of villages and you want to locate
a fire station for which the worst-case distance to a possible fire is as small as possible.
1) Pose the problem as a conic quadratic one – a conic problem associated with a direct product
of ice-cream cones.
2) Build the dual problem.
3) What is the geometric interpretation of the dual? Are the primal and the dual strictly
feasible? Solvable? With equal optimal values? What is the meaning of the complementary
slackness?
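As a numerical warm-up (deliberately not the conic quadratic formulation the exercise asks for), the raw min-max objective can be fed to a generic nonsmooth minimizer; the four "villages" below are made-up data placed at the corners of a square, so by symmetry the optimal location is the center:

```python
import numpy as np
from scipy.optimize import minimize

# made-up villages: corners of the square [0,2] x [0,2];
# the min-max point is the center (1,1), worst-case distance sqrt(2)
b = np.array([[0.0, 0.0], [0.0, 2.0], [2.0, 0.0], [2.0, 2.0]])

def worst_distance(x):
    return np.max(np.linalg.norm(b - x, axis=1))

res = minimize(worst_distance, x0=np.array([0.3, 0.5]), method="Nelder-Mead",
               options={"xatol": 1e-9, "fatol": 1e-9})
print("location:", res.x, "  worst-case distance:", res.fun)
assert np.allclose(res.x, [1.0, 1.0], atol=1e-2)
assert abs(res.fun - np.sqrt(2.0)) < 1e-2
```

Nelder-Mead is used only because the objective is nonsmooth; the conic formulation the exercise develops handles this structure exactly.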

Exercise 1.14 [The weighted Steiner problem] Consider the problem as follows:

Given N points b1 , ..., bN in Rn along with positive weights ωi , i = 1, ..., N , find a


point x ∈ Rn which minimizes the weighted sum of its (Euclidean) distances to the
points b1 , ..., bN , i.e., solve the problem

min_x ∑_{i=1}^{N} ωi ‖x − bi ‖2 .

Imagine, e.g., that n = 2, b1, ..., bN are locations of N villages and you want to place
a telephone station for which the total cost of cables linking the station and the villages is
as small as possible. The weights can be interpreted as the per mile cost of the cables (they
may vary from village to village due to differences in populations and, consequently, in the
required capacities of the cables).

1) Pose the problem as a conic quadratic one.


2) Build the dual problem.
3) What is the geometric interpretation of the dual? Are the primal and the dual strictly
feasible? Solvable? With equal optimal values? What is the meaning of the complementary
slackness?

1.5.4 Feasible and level sets of conic problems


Default assumption: Everywhere in this Section matrix A is of full column rank (i.e., with
linearly independent columns).
Consider a feasible conic problem
min { cT x : Ax − b ≥K 0 } . (CP)

In many cases it is important to know whether the problem has


1) bounded feasible set {x : Ax − b ≥K 0}
2) bounded level sets
{x : Ax − b ≥K 0, cT x ≤ a}
for all real a.

Exercise 1.15 Let (CP) be feasible. Then the following four properties are equivalent:
(i) the feasible set of the problem is bounded;
(ii) the set of primal slacks Y = {y : y ≥K 0, y = Ax − b} is bounded.
(iii) Im A ∩ K = {0};
(iv) the system of vector (in)equalities

AT λ = 0, λ >K∗ 0

is solvable.
Corollary. The property of (CP) to have a bounded feasible set is independent of the particular
value of b, provided that with this b (CP) is feasible!

Exercise 1.16 Let problem (CP) be feasible. Prove that the following two conditions are equiv-
alent:
(i) (CP) has bounded level sets;
(ii) The dual problem

max { bT λ : AT λ = c, λ ≥K∗ 0 }

is strictly feasible.
Corollary. The property of (CP) to have bounded level sets is independent of the particular
value of b, provided that with this b (CP) is feasible!

1.5.5 Operational exercises on engineering applications of LP


Operational Exercise 1.5.1

1. Mutual incoherence. Let A be an m × n matrix with columns A_j normalized to have Euclidean lengths equal to 1. The quantity

    μ(A) = max_{i≠j} |A_i^T A_j|

is called the mutual incoherence of A; the smaller this quantity, the closer the columns of A are to mutual orthogonality.
(a) Prove that γ_1(A) = γ̂_1(A) ≤ μ(A)/(μ(A)+1), and that whenever s < (μ(A)+1)/(2μ(A)), the relation (1.3.7) is satisfied with

    γ = sμ(A)/(μ(A)+1) < 1/2,   β = sμ(A)/(μ(A)+1).

Hint: Look at what happens with the verifiable sufficient condition for s-goodness when H = [μ(A)/(μ(A)+1)] A.
(b) Let A be a randomly selected m × n matrix, 1 ≪ m ≤ n, with independent entries taking values ±1/√m with probabilities 0.5. Verify that for a properly chosen absolute constant C we have μ(A) ≤ C√(ln(n)/m). Derive from this observation that the above verifiable sufficient condition for s-goodness for properly selected m × n matrices can certify their s-goodness with s as large as O(√(m/ln(n))).
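A quick numerical illustration of (b); the matrix sizes, the seed, and the certified sparsity level computed via (a) are our own choices, not part of the exercise:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 256, 512
# random matrix with independent entries +-1/sqrt(m): every column has unit length
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

G = A.T @ A                       # Gram matrix; its diagonal entries equal 1
np.fill_diagonal(G, 0.0)
mu = np.abs(G).max()              # mutual incoherence mu(A)

# sparsity level certified via item (a): any s < (mu+1)/(2*mu) will do
s_cert = (mu + 1.0) / (2.0 * mu)
print(mu, np.sqrt(np.log(n) / m), s_cert)
```

On this instance the observed μ(A) is a small constant multiple of √(ln(n)/m), consistent with the claim.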

2. Limits of performance of the verifiable sufficient condition for s-goodness. Demonstrate that when m ≤ n/2, our verifiable sufficient condition for s-goodness does not allow one to certify s-goodness of A with s ≥ C√m, C being an appropriate absolute constant.

3. Upper-bounding the goodness level. In order to demonstrate that A is not s-good for a given s, it suffices to point out a vector z ∈ Ker A \ {0} such that ‖z‖_1 ≤ 1 and ‖z‖_{s,1} ≥ 1/2. The simplest way to attempt to achieve this goal is as follows. Start with an arbitrary v^0 ∈ V_s and solve the LP max_z {[v^0]^T z : Az = 0, ‖z‖_1 ≤ 1}, thus getting z^1 such that ‖z^1‖_{s,1} ≥ [v^0]^T z^1 and Az^1 = 0, ‖z^1‖_1 ≤ 1. Now choose v^1 ∈ V_s such that [v^1]^T z^1 = ‖z^1‖_{s,1}, and solve the LP program max_z {[v^1]^T z : Az = 0, ‖z‖_1 ≤ 1}, thus getting a new z = z^2; then define v^2 ∈ V_s such that [v^2]^T z^2 = ‖z^2‖_{s,1}, solve the new LP, and so on. In short,
80 LECTURE 1. FROM LINEAR TO CONIC PROGRAMMING

the outlined process is an attempt to lower-bound

    γ_s(A) = max_{v ∈ V_s}  max_{z : Az = 0, ‖z‖_1 ≤ 1}  v^T z

by switching from maximization in z to maximization in v and vice versa. What we can ensure is that ‖z^t‖_{s,1} grows with t and that Az^t = 0, ‖z^t‖_1 ≤ 1 for all t. With luck, at a certain step we get ‖z^t‖_{s,1} ≥ 1/2, meaning that A is not s-good, and z^t can be converted into an s-sparse signal for which the ℓ_1 recovery does not work properly. Alternatively, the routine eventually “gets stuck:” the norms ‖z^t‖_{s,1} nearly do not grow with t, which, practically speaking, means that we have nearly reached a local maximum of the function ‖z‖_{s,1} on the set {z : Az = 0, ‖z‖_1 ≤ 1}. This local maximum, however, is not necessarily global, so that getting stuck with a value of ‖z‖_{s,1} like 0.4 (or even 0.499999) does not allow us to make any conclusion on whether or not γ_s(A) < 1/2. In this case it makes sense to restart the procedure from a new, randomly selected v^0, and to run this process with restarts for as long as we are ready to spend time on upper-bounding the goodness level.
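The alternating process is easy to prototype. The sketch below uses scipy's linprog (rather than the cvx prescribed later) and a small random matrix of our own choosing; the LP is written in the standard z = z⁺ − z⁻ form, and the v-update just picks the signs of the s largest entries of |z|:

```python
import numpy as np
from scipy.optimize import linprog

def kzs1(z, s):
    """||z||_{s,1}: sum of the s largest magnitudes of z."""
    return np.sort(np.abs(z))[::-1][:s].sum()

def lower_bound_gamma_s(A, s, v0, rounds=5):
    """Alternate between the LP in z and the trivial update of v in V_s."""
    m, n = A.shape
    v, best = v0.copy(), 0.0
    for _ in range(rounds):
        # LP: max v^T z s.t. Az = 0, ||z||_1 <= 1, with z = zp - zm, zp, zm >= 0
        c = np.concatenate([-v, v])              # linprog minimizes
        res = linprog(c, A_ub=np.ones((1, 2 * n)), b_ub=[1.0],
                      A_eq=np.hstack([A, -A]), b_eq=np.zeros(m))
        z = res.x[:n] - res.x[n:]
        best = max(best, kzs1(z, s))
        # v in V_s maximizing v^T z: signs of the s largest-magnitude entries of z
        v = np.zeros(n)
        idx = np.argsort(np.abs(z))[::-1][:s]
        v[idx] = np.sign(z[idx])
    return best   # if best >= 1/2, A is certified NOT to be s-good

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 20))
lb = lower_bound_gamma_s(A, s=10, v0=rng.standard_normal(20))
print(lb)
```

With so few rows the kernel is large, so the routine quickly finds z with ‖z‖_{s,1} ≥ 1/2, certifying that this A is not 10-good.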

4. Running the experiment. Use cvx [9] to run the following experiment:

• Build a 64 × 64 Hadamard matrix^13, extract from it at random a 56 × 64 submatrix, and scale the columns of the resulting matrix to have Euclidean norms equal to 1; what you get is your 56 × 64 sensing matrix A.
• Compute the best lower bounds on the level s∗(A) of goodness of A as given by the mutual incoherence, by our simplified sufficient condition “A is s-good for every s for which s γ̂_1(A) < 1/2,” and by our “full strength” verifiable sufficient condition “A is s-good for every s such that γ̂_s(A) < 1/2.”
• Compute an upper bound on s∗(A). How far off is this bound from the lower one?
• Finally, take your upper bound on s∗(A), increase it by about 50%, thus getting some S such that A definitely is not S-good, and run 100 experiments where randomly generated signals x with S nonzero entries each are recovered by ℓ_1 minimization from their noiseless observations Ax. How many failures of the ℓ_1 recovery have you observed?

Rerun your experiments with the 64 × 64 Hadamard matrix replaced with matrices drawn from the 64 × 64 Rademacher^14 and Gaussian^15 matrix ensembles.
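The Hadamard recurrence of footnote 13 and the sensing-matrix preprocessing take only a few lines of numpy (the random row selection below is our illustration; any 56 rows would do):

```python
import numpy as np

def hadamard(k):
    """H_k: 2^k x 2^k Hadamard matrix from the recurrence H_{k+1} = [[H, H], [H, -H]]."""
    H = np.array([[1.0]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

rng = np.random.default_rng(0)
H6 = hadamard(6)                               # 64 x 64
rows = rng.choice(64, size=56, replace=False)  # extract a random 56 x 64 submatrix
A = H6[rows, :]
A = A / np.linalg.norm(A, axis=0)              # rescale columns to unit Euclidean norm
print(A.shape)
```

The rows of H6 are mutually orthogonal (H6 H6^T = 64 I), which is what makes Hadamard matrices attractive sensing candidates here.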

Operational Exercise 1.5.2 The goal of this exercise is to play with the outlined synthesis of
linear controllers via LP.

1. Situation and notation. From now on, we consider discrete time linear dynamic system
(1.3.16) which is time-invariant, meaning that the matrices At , ..., Dt are independent of t
(so that we write A, ..., D instead of At , ..., Dt ). As always, N is the time horizon.
Let us start with convenient notation. An affine p.o.b. control law ~η^N on time horizon N can be represented by a pair comprised of the block-vector ~h = [h_0; h_1; ...; h_{N−1}] and the
13 Hadamard matrices are the square 2^k × 2^k matrices H_k, k = 0, 1, 2, ..., given by the recurrence H_0 = 1, H_{k+1} = [H_k, H_k; H_k, −H_k]. These are symmetric matrices with entries ±1 and with columns orthogonal to each other.
14 A random matrix with independent entries taking values ±1 with probabilities 1/2.
15 A random matrix with independent standard (zero mean, unit variance) Gaussian entries.

block-lower-triangular matrix

        [ H_0^0                                        ]
        [ H_0^1      H_1^1                             ]
    H = [ H_0^2      H_1^2      H_2^2                  ]
        [  ...        ...        ...      ...          ]
        [ H_0^{N−1}    ...        ...   H_{N−1}^{N−1}  ]

with n_u × n_y blocks H_τ^t (from now on, when drawing a matrix, blanks represent zero entries/blocks). Further, we can stack in long vectors other entities of interest, specifically

    w^N := [~x; ~u] = [x_1; ...; x_N; u_0; ...; u_{N−1}]   [state-control trajectory],
    ~v := [v_0; ...; v_{N−1}]                              [purified outputs],

and introduce the block matrices

          [ C           D                                 ]
          [ CA          CR          D                     ]
    ζ2v = [ CA^2        CAR         CR         D          ]
          [  ...         ...         ...       ...   ...  ]
          [ CA^{N−1}    CA^{N−2}R   CA^{N−3}R  ...  CR  D ]

(note that ~v = ζ2v · [z; d^N]);

          [ A          R                              ]
          [ A^2        AR          R                  ]
    ζ2x = [ A^3        A^2R        AR        R        ]
          [  ...        ...         ...      ...      ]
          [ A^N        A^{N−1}R    A^{N−2}R  ... AR R ]

(note that the contribution of [z; d^N] to ~x for a given ~u is ζ2x · [z; d^N]);

          [ B                               ]
          [ AB          B                   ]
    u2x = [ A^2B        AB         B        ]
          [  ...         ...       ...  ... ]
          [ A^{N−1}B    A^{N−2}B   ... AB B ]

(note that the contribution of ~u to ~x is u2x · ~u).

In this notation, the dependencies between [z; d^N], ~η^N = (~h, H), w^N and ~v are given by

    ~v = ζ2v · [z; d^N],
    w^N := [~x; ~u], where
        ~x = [u2x · H · ζ2v + ζ2x] · [z; d^N] + u2x · ~h,              (1.5.3)
        ~u = H · ζ2v · [z; d^N] + ~h.
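A numerical consistency check of (1.5.3) (not a proof, which is what Task 1 asks for): assemble ζ2v, ζ2x, u2x for a small random time-invariant system and compare against a direct simulation. All sizes and data below are arbitrary choices of ours:

```python
import numpy as np

def mpow(M, k):
    return np.linalg.matrix_power(M, k)

def build_blocks(A, B, R, C, D, N):
    """u2x, zeta2x, zeta2v for x_{t+1} = A x_t + B u_t + R d_t, v_t = C xbar_t + D d_t."""
    nx, nu = B.shape
    nd, ny = R.shape[1], C.shape[0]
    u2x = np.zeros((N * nx, N * nu))        # contribution of ~u to ~x
    z2x = np.zeros((N * nx, nx + N * nd))   # contribution of [z; d^N] to ~x
    z2v = np.zeros((N * ny, nx + N * nd))   # purified outputs: ~v = z2v . [z; d^N]
    for t in range(N):
        z2x[t*nx:(t+1)*nx, :nx] = mpow(A, t + 1)           # block row t <-> x_{t+1}
        for j in range(t + 1):
            u2x[t*nx:(t+1)*nx, j*nu:(j+1)*nu] = mpow(A, t - j) @ B
            z2x[t*nx:(t+1)*nx, nx+j*nd:nx+(j+1)*nd] = mpow(A, t - j) @ R
        z2v[t*ny:(t+1)*ny, :nx] = C @ mpow(A, t)           # block row t <-> v_t
        for j in range(t):
            z2v[t*ny:(t+1)*ny, nx+j*nd:nx+(j+1)*nd] = C @ mpow(A, t - 1 - j) @ R
        z2v[t*ny:(t+1)*ny, nx+t*nd:nx+(t+1)*nd] = D
    return u2x, z2x, z2v

rng = np.random.default_rng(0)
nx, nu, nd, ny, N = 3, 2, 2, 2, 4
A = 0.3 * rng.standard_normal((nx, nx)); B = rng.standard_normal((nx, nu))
R = rng.standard_normal((nx, nd)); C = rng.standard_normal((ny, nx))
D = rng.standard_normal((ny, nd))
u2x, z2x, z2v = build_blocks(A, B, R, C, D, N)

z = rng.standard_normal(nx)
d = rng.standard_normal((N, nd)); u = rng.standard_normal((N, nu))
x, xbar, xs, vs = z.copy(), z.copy(), [], []
for t in range(N):
    vs.append(C @ xbar + D @ d[t])         # purified output at time t
    x = A @ x + B @ u[t] + R @ d[t]        # closed-loop state
    xbar = A @ xbar + R @ d[t]             # control-free ("purified") state
    xs.append(x)
zd = np.concatenate([z, d.ravel()])
assert np.allclose(np.concatenate(xs), u2x @ u.ravel() + z2x @ zd)
assert np.allclose(np.concatenate(vs), z2v @ zd)
print("(1.5.3) verified on a random instance")
```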

2. Task 1:

(a) Verify (1.5.3).



(b) Verify that the system closed by an a.p.o.b. control law ~η^N = (~h, H) satisfies the specifications

    B w^N ≤ b

whenever [z; d^N] belongs to a nonempty set given by the polyhedral representation

    {[z; d^N] : ∃u : P[z; d^N] + Qu ≤ r}

if and only if there exists an entrywise nonnegative matrix Λ such that

    ΛP = B · [u2x · H · ζ2v + ζ2x;  H · ζ2v],
    ΛQ = 0,
    Λr + B · [u2x · ~h;  ~h] ≤ b.

(c) Assume that at time t = 0 the system is at a given state, and what we want is to have it at time N in another given state. Due to linearity, we lose nothing when assuming that the initial state is 0 and the target state is a given vector x∗. The design specifications are as follows:
• If there are no disturbances (i.e., the initial state is exactly 0 and all d's are zeros), we want the trajectory (let us call it the nominal one) w^N = w^N_nom to have x_N exactly equal to x∗. In other words, we say that the nominal range of [z; d^N] is just the origin, and when [z; d^N] is in this range, the state at time N should be x∗.
• When [z; d^N] deviates from 0, the trajectory w^N will deviate from the nominal trajectory w^N_nom, and we want the scaled uniform norm ‖Diag{β}(w^N − w^N_nom)‖_∞ not to exceed a multiple, with the smallest possible factor, of the scaled uniform norm ‖Diag{γ}[z; d^N]‖_∞ of the deviation of [z; d^N] from its normal range (the origin). Here β and γ are positive weight vectors of appropriate sizes.
Prove that the outlined design specifications reduce to the following LP in the variables ~η^N = (~h, H), an additional matrix variable Λ of appropriate size, and a scalar α:

    min_{~η^N = (~h, H), Λ, α}  α
    s.t.
      (a) H is block lower triangular;
      (b) the last n_x entries of u2x · ~h form the vector x∗;
      (c) Λ_{ij} ≥ 0 for all i, j;                                             (1.5.4)
      (d) Λ · [Diag{γ}; −Diag{γ}] = [Diag{β}; −Diag{β}] · [u2x · H · ζ2v + ζ2x; H · ζ2v];
      (e) Λ · 1 ≤ α · 1,

where 1 is the all-one vector of appropriate dimension.

3. Task 2: Implement the approach summarized in (1.5.4) in the following situation:

Boeing 747 is flying horizontally along the X-axis in the XZ plane with the
velocity 774 ft/sec at the altitude 40000 ft and has 100 sec to change the altitude
to 41000 ft. You should plan this maneuver.

The details are as follows16 . First, you should keep in mind that in the description to follow
the state variables are deviations of the “physical” state variables (like velocity, altitude,
etc.) from their values at the “physical” state of the aircraft at the initial time instant,
and not the physical states themselves, and similarly for the controls – they are deviations
of actual controls from some “background” controls. The linear dynamics approximation is accurate enough, provided that these deviations are not too large. This linear dynamics is as follows:

• State variables: ∆h: deviation in altitude, ft; ∆u: deviation in velocity along the
aircraft’s X-axis; ∆v: deviation in velocity orthogonal to the aircraft’s axis, ft/sec
(positive is down); ∆θ: deviation in angle θ between the aircraft’s axis and the X-axis
(positive is up), crad (hundredths of radian); q: angular velocity of the aircraft (pitch
rate), crad/sec.
• Disturbances: uw : wind velocity along the aircraft’s axis, ft/sec; vw : wind velocity
orthogonal to the aircraft’s axis, ft/sec. We assume that there is no side wind.
Assuming that θ stays small, we will not distinguish between ∆u (∆v) and the deviations of the aircraft's velocities along the X- (respectively, Z-) axis, and similarly for u_w and v_w.
• Controls: δe : elevator angle deviation (positive is down); δt : thrust deviation.
• Outputs: velocity deviation ∆u and climb rate ḣ = −∆v + 7.74θ.
• Linearized dynamics:

    d/ds χ(s) = A χ(s) + B υ(s) + R δ(s),

  where χ(s) = [∆h; ∆u; ∆v; q; ∆θ] is the state, υ(s) = [δ_e; δ_t] the control, δ(s) = [u_w; v_w] the disturbance, and

        [ 0    0      −1      0      0    ]       [  0      0    ]       [ 0      −1    ]
        [ 0   −.003    .039   0     −.322 ]       [  .01    1    ]       [ .003   −.039 ]
    A = [ 0   −.065   −.319   7.74   0    ],  B = [ −.18   −.04  ],  R = [ .065    .319 ],
        [ 0    .020   −.101  −.429   0    ]       [ −1.16   .598 ]       [ −.020   .101 ]
        [ 0    0       0      1      0    ]       [  0      0    ]       [ 0       0    ]

  while the output is

        [ 0   1    0   0   0    ]
    y = [ 0   0   −1   0   7.74 ] χ(s)  =:  C χ(s).

• From continuous to discrete time: Note that our system is a continuous time one; we denote this time by s and measure it in seconds (this is why in the Boeing state dynamics the derivative with respect to time is denoted by d/ds, Boeing's state at a (continuous time) instant s is denoted by χ(s), etc.). Our current goal is to approximate this system by a discrete time one, in order to use our machinery for controller
design. To this end, let us choose a “time resolution” ∆s > 0, say, ∆s = 10 sec, and
look at the physical system at continuous time instants t∆s, t = 0, 1, ..., so that the
state xt of the system at discrete time instant t is χ(t∆s), and similarly for controls,
16 Up to notation, the data below are taken from Lecture 14 of Prof. Stephen Boyd's EE263 Lecture Notes, see http://www.stanford.edu/∼boyd/ee263/lectures/aircraft.pdf

outputs, disturbances, etc. With this approach, 100 sec-long maneuver corresponds
to the discrete time horizon N = 10.
We now should decide how to approximate the actual – continuous time – system with
a discrete time one. The simplest way to do so would be just to replace differential
equations with their finite-difference approximations:

xt+1 = χ(t∆s + ∆s) ≈ χ(t∆s) + (∆sA)χ(t∆s) + (∆sB)υ(t∆s) + (∆sR)δ(t∆s)

We can, however, do better than this. Let us associate with discrete time instant t
the continuous time interval [t∆s, (t + 1)∆s), and assume (this is a quite reasonable
assumption) that the continuous time controls are kept constant on these continuous
time intervals:
t∆s ≤ s < (t + 1)∆s ⇒ υ(s) ≡ ut ; (∗)
ut will be our discrete time controls. Let us now define the matrices of the discrete
time dynamics in such a way that with our piecewise constant continuous time con-
trols, the discrete time states xt of the discrete time system will be exactly the same
as the states χ(t∆s) of the continuous time system. To this end let us note that with
the piecewise constant continuous time controls (∗), we have the exact equations

    χ((t+1)∆s) = [exp{∆s A}] χ(t∆s) + [∫_0^{∆s} exp{rA} B dr] u_t + [∫_0^{∆s} exp{rA} R δ((t+1)∆s − r) dr],

where the exponent of a (square) matrix can be defined by either one of the straightforward matrix analogies of the (equivalent to each other) definitions of the univariate exponent:

    exp{B} = lim_{n→∞} (I + n^{−1}B)^n,   or   exp{B} = Σ_{n=0}^∞ (1/n!) B^n.

We see that when setting

    Ā = exp{∆s A},   B̄ = ∫_0^{∆s} exp{rA} B dr,   e_t = ∫_0^{∆s} exp{rA} R δ((t+1)∆s − r) dr,

we get the exact equalities

    x_{t+1} := χ((t+1)∆s) = Ā x_t + B̄ u_t + e_t,   x_0 = χ(0).   ($)

If we now assume (this again makes complete sense) that continuous time outputs
are measured at continuous time instants t∆s, t = 0, 1, ..., we can augment the above
discrete time dynamics with description of discrete time outputs:

yt = Cxt .

Now imagine that the decisions on ut are made as follows: at “continuous time”
instant t∆s, after yt is measured, we immediately specify ut as an affine function of
y1 , ..., yt and keep the controls in the “physical time” interval [t∆s, (t + 1)∆s) at the
level ut . Now both the dynamics and the candidate affine controllers in continuous
time fit their discrete time counterparts, and we can pass from affine output based
controllers to affine purified output based ones, and utilize the outlined approach

for synthesis of a.p.o.b. controllers to build efficiently a controller for the “true”
continuous time system. Note that with this approach we should not bother much
on how small is ∆s, since our discrete time approximation reproduces exactly the
behaviour of the continuous time system along the discrete grid t∆s, t = 0, 1, .... The
only restriction on ∆s comes from the desire to make design specifications expressed
in terms of discrete time trajectory meaningful in terms of the actual continuous time
behavior, which usually is not too difficult.
The only difficulty which still remains unresolved is how to translate our a priori information on the continuous time disturbances δ(s) into information on their discrete time counterparts e_t. For example, when assuming that the normal range of the continuous time disturbances δ(·) is given by an upper bound ‖δ(s)‖ ≤ ρ for all s on a given norm ‖·‖ of the disturbance, the “translated” information on e_t is e_t ∈ ρE, where

    E = {e = ∫_0^{∆s} exp{rA} R δ(r) dr : ‖δ(r)‖ ≤ 1, 0 ≤ r ≤ ∆s}.^17

We then can set d_t = e_t, R = I_{n_x}, and say that the normal range of the discrete time disturbances d_t is d_t ∈ E for all t. A disadvantage of this approach is that for our purposes we need E to admit a simple representation, which is not necessarily the case with E “as it is.” Here is a (slightly conservative) way to resolve this difficulty. Assume that the normal range of the continuous time disturbance is given as ‖δ(s)‖_∞ ≤ ρ for all s. Let abs[M] be the matrix comprised of the moduli of the entries of a matrix M. When ‖δ(s)‖_∞ ≤ 1 for all s, we clearly have

    abs[∫_0^{∆s} exp{rA} R δ(r) dr] ≤ r := ∫_0^{∆s} abs[exp{rA} R] · 1 dr,

meaning that E ⊂ {Diag{r}d : ‖d‖_∞ ≤ 1}. In other words, we can only increase the set of allowed disturbances e_t by assuming that they are of the form e_t = R d_t with R = Diag{r} and ‖d_t‖_∞ ≤ ρ. As a result, every controller for the discrete time system

    x_{t+1} = Ā x_t + B̄ u_t + R d_t,   y_t = C x_t

(with Ā, B̄ as just defined, R = Diag{r}, and C the output matrix of the continuous time system) which meets the desired design specifications, the normal range of d^N being {‖d_t‖_∞ ≤ ρ for all t}, definitely meets the design specifications for the system ($), the normal range of the disturbances e_t being e_t ∈ ρE for all t.
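For the computational part, the discrete time matrices exp{∆sA} and ∫_0^∆s exp{rA}B dr need not be approximated: the matrix exponential gives them exactly. A sketch using scipy (the Boeing data are transcribed from the linearized dynamics above; the augmented-matrix trick is standard, but its use here is our own choice):

```python
import numpy as np
from scipy.linalg import expm

# Boeing state [dh, du, dv, q, dtheta]; controls [delta_e, delta_t]
A = np.array([[0, 0, -1, 0, 0],
              [0, -.003, .039, 0, -.322],
              [0, -.065, -.319, 7.74, 0],
              [0, .020, -.101, -.429, 0],
              [0, 0, 0, 1, 0]])
B = np.array([[0, 0], [.01, 1], [-.18, -.04], [-1.16, .598], [0, 0]])

def discretize(A, B, ds):
    """Zero-order-hold discretization: returns (exp(ds*A), int_0^ds exp(r*A) B dr),
    computed exactly via the augmented-matrix exponential trick."""
    nx, nu = B.shape
    M = np.zeros((nx + nu, nx + nu))
    M[:nx, :nx], M[:nx, nx:] = A, B
    E = expm(ds * M)
    return E[:nx, :nx], E[:nx, nx:]

Abar, Bbar = discretize(A, B, ds=10.0)
print(Abar.shape, Bbar.shape)

# sanity check on a scalar system dx/ds = a x + b u, where the integrals are known:
a, b, ds = -0.5, 2.0, 10.0
Ab1, Bb1 = discretize(np.array([[a]]), np.array([[b]]), ds)
assert np.isclose(Ab1[0, 0], np.exp(a * ds))
assert np.isclose(Bb1[0, 0], b * (np.exp(a * ds) - 1) / a)
```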
Now you are prepared to carry out your task:
(a) Implement the outlined strategy in the Boeing maneuver example. The still missing components of the setup are as follows:
    time resolution: ∆s = 10 sec ⇒ N = 10;
    state at s = 0: x_0 = [∆h = 0; ∆u = 0; ∆v = 0; q = 0; ∆θ = 0]
        [cruise horizontal flight at the speed 774+73.3 ft/sec, altitude 40000 ft];
    state at s = 100: x_10 = [∆h = 1000; ∆u = 0; ∆v = 0; q = 0; ∆θ = 0]
        [cruise horizontal flight at the speed 774+73.3 ft/sec, altitude 41000 ft];
    weights in (1.5.4): γ = 1, β = 1
17
Note that this “translation” usually makes et vectors of the same dimension as the state vectors xt , even
when the dimension of the “physical disturbances” δ(t) is less than the dimension of the state vectors. This is
natural: continuous time dynamics usually “spreads” the potential contribution, accumulated over a continuous
time interval [t∆s, (t + 1)∆s), of low-dimensional time-varying disturbances δ(s) to the state χ((t + 1)∆s) over a
full-dimensional domain in the state space.

(774 ft/sec ≈ 528 mph is the aircraft speed w.r.t. the air, 73.3 ft/sec = 50 mph is the steady-state tail wind).

• Specify the a.p.o.b. control law as given by an optimal solution to (1.5.4). What is the optimal value α∗ in the problem?
• With the resulting control law, specify the nominal state-control trajectories of the
discrete and the continuous time models of the maneuver (i.e., zero initial state, no
wind); plot the corresponding XZ trajectory of the aircraft.
• Run 20 simulations of the discrete time and the continuous time models of the ma-
neuver when the initial state and the disturbances are randomly selected vectors
of uniform norm not exceeding ρ = 1 (which roughly corresponds to up to 1 mph
randomly oriented in the XZ plane deviations of the actual wind velocity from the
steady-state tail 50 mph one) and plot the resulting bunch of the aircraft's XZ trajectories. Check that the uniform deviation of the state-control trajectories from the nominal one at all times does not exceed α∗ρ (as it should be, due to the origin of α∗). What is the mean, over the observed trajectories, ratio of the deviation to α∗ρ?
• Repeat the latter experiment with ρ increased to 10 (up to 10 mph deviations in the
wind velocity).

My results are as follows:

[Figure: bunches of 20 XZ trajectories of the aircraft, for ρ = 1 (left) and ρ = 10 (right); X-axis: mi; Z-axis: ft, deviations from the 40000 ft initial altitude]

Ratios ‖w^N − w^N_nom‖_∞ / (ρα∗) over 20 simulations, α∗ = 57.18:

            ρ = 1     ρ = 10
    mean    14.2%     12.8%
    max     22.8%     36.3%
Lecture 2

Conic Quadratic Programming

Several “generic” families of conic problems are of special interest, both from the viewpoint
of theory and applications. The cones underlying these problems are simple enough, so that
one can describe explicitly the dual cone; as a result, the general duality machinery we have
developed becomes “algorithmic”, as in the Linear Programming case. Moreover, in many cases
this “algorithmic duality machinery” allows one to understand the original model more deeply,
to convert it into equivalent forms better suited for numerical processing, etc. The relative
simplicity of the underlying cones also enables one to develop efficient computational methods
for the corresponding conic problems. The most famous example of a “nice” generic conic
problem is, doubtless, Linear Programming; however, it is not the only problem of this sort. Two
other nice generic conic problems of extreme importance are Conic Quadratic and Semidefinite
Programming (CQP and SDP for short). We are about to consider the first of these two
problems.

2.1 Conic Quadratic problems: preliminaries


Recall the definition of the m-dimensional ice-cream (≡ second-order ≡ Lorentz) cone L^m:

    L^m = {x = (x_1, ..., x_m) ∈ R^m : x_m ≥ √(x_1^2 + ... + x_{m−1}^2)},   m ≥ 2.

Note: In full accordance with the standard convention that the value of an empty sum is 0, we also allow m = 1 in this definition, resulting in L^1 = R_+ and thus identifying linear vector inequalities involving L^1 with scalar linear inequalities.
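Membership in L^m, and the self-duality used shortly below, are easy to sanity-check numerically; a small sketch (the sampling scheme is our own):

```python
import numpy as np

def in_lorentz(x, tol=1e-9):
    """x in L^m iff the last coordinate dominates the Euclidean norm of the rest."""
    return x[-1] >= np.linalg.norm(x[:-1]) - tol

rng = np.random.default_rng(0)
m = 5
for _ in range(1000):
    # build random points of L^m by lifting the last coordinate high enough
    u, v = rng.standard_normal(m - 1), rng.standard_normal(m - 1)
    x = np.append(u, np.linalg.norm(u) + abs(rng.standard_normal()))
    y = np.append(v, np.linalg.norm(v) + abs(rng.standard_normal()))
    assert in_lorentz(x) and in_lorentz(y)
    # consistent with self-duality (L^m)* = L^m: inner products are nonnegative
    assert x @ y >= -1e-9
print("no violations found")
```

That x^T y ≥ 0 here is just the Cauchy-Schwarz inequality applied to the first m−1 coordinates.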
A conic quadratic problem is a conic problem

    min_x { c^T x : Ax − b ≥_K 0 }   (CP)

for which the cone K is a direct product of several ice-cream cones:

    K = L^{m_1} × L^{m_2} × ... × L^{m_k}
      = {y = [y[1]; y[2]; ...; y[k]] : y[i] ∈ L^{m_i}, i = 1, ..., k}.   (2.1.1)
88 LECTURE 2. CONIC QUADRATIC PROGRAMMING

In other words, a conic quadratic problem is an optimization problem with linear objective and
finitely many “ice-cream constraints”

    A_i x − b_i ≥_{L^{m_i}} 0,   i = 1, ..., k,

where  
[A1 ; b1 ]
 [A2 ; b2 ] 
[A; b] = 
 
...

 
[Ak ; bk ]
is the partition of the data matrix [A; b] corresponding to the partition of y in (2.1.1). Thus, a
conic quadratic program can be written as
    min_x { c^T x : A_i x − b_i ≥_{L^{m_i}} 0, i = 1, ..., k }.   (2.1.2)

Recalling the definition of the relation ≥_{L^m} and partitioning the data matrix [A_i; b_i] as

    [A_i; b_i] = [ D_i,  d_i;  p_i^T,  q_i ],
where D_i is of the size (m_i − 1) × dim x, we can write down the problem as

    min_x { c^T x : ‖D_i x − d_i‖_2 ≤ p_i^T x − q_i, i = 1, ..., k };   (QP)

this “most explicit” form is the one we prefer to use. In this form, D_i are matrices with as many columns as the dimension of x, d_i are vectors of dimension equal to the row dimension of the corresponding D_i, p_i are vectors of the same dimension as x, and q_i are reals.
It is immediately seen that (2.1.1) is indeed a cone, in fact a self-dual one: K∗ = K. Consequently, the problem dual to (CP) is

    max_λ { b^T λ : A^T λ = c, λ ≥_K 0 }.

Partitioning λ = [λ_1; λ_2; ...; λ_k] into m_i-dimensional blocks λ_i (cf. (2.1.1)), we can write the dual problem as

    max_{λ_1,...,λ_k} { Σ_{i=1}^k b_i^T λ_i : Σ_{i=1}^k A_i^T λ_i = c,  λ_i ≥_{L^{m_i}} 0, i = 1, ..., k }.

Recalling the meaning of ≥_{L^{m_i}} 0 and representing λ_i = [μ_i; ν_i] with scalar last component ν_i, we finally come to the following form of the problem dual to (QP):

    max_{μ_i,ν_i} { Σ_{i=1}^k [μ_i^T d_i + ν_i q_i] : Σ_{i=1}^k [D_i^T μ_i + ν_i p_i] = c,  ‖μ_i‖_2 ≤ ν_i, i = 1, ..., k }.   (QD)

The design variables in (QD) are the vectors μ_i of the same dimensions as the vectors d_i and the reals ν_i, i = 1, ..., k.
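Weak duality between (QP) and (QD), i.e., that the dual objective never exceeds the primal objective on feasible pairs, can be checked on randomly generated feasible solutions; the construction below (the sizes and the way feasibility is enforced) is our own:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, mrows = 6, 3, 4
x = rng.standard_normal(n)
D, d, p, q, mu, nu = [], [], [], [], [], []
for i in range(k):
    Di = rng.standard_normal((mrows, n)); pi = rng.standard_normal(n)
    wi = rng.standard_normal(mrows)
    di = Di @ x - wi                           # so that D_i x - d_i = w_i
    qi = pi @ x - np.linalg.norm(wi) - abs(rng.standard_normal())  # primal feasible
    mui = rng.standard_normal(mrows)
    nui = np.linalg.norm(mui) + abs(rng.standard_normal())  # dual cone: ||mu_i|| <= nu_i
    D.append(Di); d.append(di); p.append(pi); q.append(qi); mu.append(mui); nu.append(nui)
# choose c so that (mu, nu) is dual feasible: c = sum_i (D_i^T mu_i + nu_i p_i)
c = sum(D[i].T @ mu[i] + nu[i] * p[i] for i in range(k))
primal = c @ x
dual = sum(mu[i] @ d[i] + nu[i] * q[i] for i in range(k))
print(primal - dual)   # always nonnegative: weak duality
```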
2.2. EXAMPLES OF CONIC QUADRATIC PROBLEMS 89

2.2 Examples of conic quadratic problems


2.2.1 Contact problems with static friction [35]
Consider a rigid body in R^3 and a robot with N fingers. When can the robot hold the body? To pose the question mathematically, let us look at what happens at the point p^i of the body which is in contact with the i-th finger of the robot:

[Figure: geometry of the i-th contact. p^i is the contact point; f^i is the contact force; F^i is the friction force; v^i is the inward normal to the surface]
Let v i be the unit inward normal to the surface of the body at the point pi where i-th finger
touches the body, f i be the contact force exerted by i-th finger, and F i be the friction force
caused by the contact. Physics (Coulomb’s law) says that the latter force is tangential to the
surface of the body:
(F i )T v i = 0 (2.2.1)
and its magnitude cannot exceed µ times the magnitude of the normal component of the contact
force, where µ is the friction coefficient:
kF i k2 ≤ µ(f i )T v i . (2.2.2)
Assume that the body is subject to additional external forces (e.g., gravity); as far as their
mechanical consequences are concerned, all these forces can be represented by a single force –
their sum – F ext along with the torque T ext – the sum of vector products of the external forces
and the points where they are applied.
In order for the body to be in static equilibrium, the total force acting on the body and the total torque should be zero:

    Σ_{i=1}^N (f^i + F^i) + F^ext = 0,
    Σ_{i=1}^N p^i × (f^i + F^i) + T^ext = 0,                  (2.2.3)

where p × q stands for the vector product of two 3D vectors p and q^{1)}.

1) Here is the definition: if p = [p_1; p_2; p_3] and q = [q_1; q_2; q_3] are two 3D vectors, then

    [p, q] = [ Det(p_2, p_3; q_2, q_3);  Det(p_3, p_1; q_3, q_1);  Det(p_1, p_2; q_1, q_2) ]
           = [ p_2 q_3 − p_3 q_2;  p_3 q_1 − p_1 q_3;  p_1 q_2 − p_2 q_1 ].

The question of whether the robot is capable of holding the body can be interpreted as follows. Assume that f^i, F^ext, T^ext are given. If the friction forces F^i can adjust themselves to satisfy the friction constraints (2.2.1) – (2.2.2) and the equilibrium equations (2.2.3), i.e., if the system of constraints (2.2.1), (2.2.2), (2.2.3) with respect to the unknowns F^i is solvable, then, and only then, the robot holds the body (“the body is in a stable grasp”).
Thus, the question of stable grasp is the question of solvability of the system (S) of constraints (2.2.1), (2.2.2) and (2.2.3), which is a system of conic quadratic and linear constraints in the variables f^i, F^ext, T^ext, {F^i}. It follows that typical grasp-related optimization problems can be posed as CQPs. Here is an example:

The robot should hold a cylinder by four fingers, all acting in the vertical direction.
The external forces and torques acting at the cylinder are the gravity Fg and an
externally applied torque T along the cylinder axis, as shown in the picture:
[Figure: a cylinder held by four fingers with forces f_1, ..., f_4 acting in the vertical direction, subject to the gravity force F_g and an external torque T along the cylinder axis. Perspective, front and side views]

The magnitudes νi of the forces fi may vary in a given segment [0, Fmax ].
What can be the largest magnitude τ of the external torque T such that a stable
grasp is still possible?

Denoting by u^i the directions of the fingers, by v^i the directions of the inward normals to the cylinder's surface at the contact points, and by u the direction of the axis of the cylinder, we can pose the problem as the optimization program

    max τ
    s.t.
      Σ_{i=1}^4 (ν_i u^i + F^i) + F_g = 0              [total force equals 0]
      Σ_{i=1}^4 p^i × (ν_i u^i + F^i) + τu = 0         [total torque equals 0]
      (v^i)^T F^i = 0, i = 1, ..., 4                   [F^i are tangential to the surface]
      ‖F^i‖_2 ≤ [μ(u^i)^T v^i] ν_i, i = 1, ..., 4      [Coulomb's constraints]
      0 ≤ ν_i ≤ F_max, i = 1, ..., 4                   [bounds on ν_i]

in the design variables τ, ν_i, F^i, i = 1, ..., 4. This is a conic quadratic program, although not in the standard form (QP). To convert the problem to this standard form, it suffices, e.g., to replace all linear equalities by pairs of linear inequalities, and further to represent linear inequalities α^T x ≤ β as conic quadratic constraints

    Ax − b ≡ [0; β − α^T x] ≥_{L^2} 0.
The vector [p, q] is orthogonal to both p and q, and ‖[p, q]‖_2 = ‖p‖_2 ‖q‖_2 sin(φ), where φ is the angle between p and q.
2.3. WHAT CAN BE EXPRESSED VIA CONIC QUADRATIC CONSTRAINTS? 91

2.3 What can be expressed via conic quadratic constraints?


Optimization problems arising in applications are not normally in their “catalogue” forms, and
thus an important skill required from those interested in applications of Optimization is the
ability to recognize the fundamental structure underneath the original formulation. The latter
is frequently in the form
min{f (x) : x ∈ X}, (2.3.1)
x
where f is a “loss function”, and the set X of admissible design vectors is typically given as
m
\
X= Xi ; (2.3.2)
i=1

every Xi is the set of vectors admissible for a particular design restriction which in many cases
is given by
Xi = {x ∈ Rn : gi (x) ≤ 0}, (2.3.3)
where gi (x) is i-th constraint function2) .
It is well known that the objective f can always be assumed linear: otherwise we could move the original objective to the list of constraints, passing to the equivalent problem

    min_{t,x} { t : (t, x) ∈ X̂ ≡ {(x, t) : x ∈ X, t ≥ f(x)} }.

Thus, we may assume that the original problem is of the form

    min_x { c^T x : x ∈ X = ∩_{i=1}^m X_i }.   (P)

In order to recognize that X is in one of our “catalogue” forms, one needs a kind of dictionary,
where different forms of the same structure are listed. We shall build such a dictionary for
the conic quadratic programs. Thus, our goal is to understand when a given set X can be
represented by conic quadratic inequalities (c.q.i.’s), i.e., one or several constraints of the type
kDx − dk2 ≤ pT x − q. The word “represented” needs clarification, and here it is:
We say that a set X ⊂ R^n can be represented via conic quadratic inequalities (for short: is CQr – Conic Quadratic representable), if there exists a system S of finitely many vector inequalities of the form A_j [x; u] − b_j ≥_{L^{m_j}} 0 in variables x ∈ R^n and additional variables u such that X is the projection of the solution set of S onto the x-space, i.e., x ∈ X if and only if one can extend x to a solution (x, u) of the system S:

    x ∈ X  ⇔  ∃u : A_j [x; u] − b_j ≥_{L^{m_j}} 0, j = 1, ..., N.

Every such system S is called a conic quadratic representation (for short: a CQR) of the set X. We call such a CQR strictly/essentially strictly feasible if S, considered as a single conic inequality (by passing from the cones K_j to their direct product), is strictly, resp. essentially strictly, feasible; see Definition 1.4.3.
2)
Speaking about a “real-valued function on Rn ”, we assume that the function is allowed to take real values
and the value +∞ and is defined on the entire space. The set of those x where the function is finite is called the
domain of the function, denoted by Dom f .

Equivalently:

X is CQr when X can be represented as

    X = {x : ∃u : Ax + Bu + b ≥_K 0}

for properly selected A, B, b, and K being a direct product of finitely many Lorentz cones.
Such a representation is a CQR of X, and this CQR is strictly/essentially strictly feasible if the conic inequality

    Ax + Bu + b ≥_K 0

is so.

The idea behind this definition is clarified by the following observation:

Consider an optimization problem

    min_x { c^T x : x ∈ X }

and assume that X is CQr. Then the problem is equivalent to a conic quadratic program. The latter program can be written down explicitly, provided that we are given a CQR of X.

Indeed, let S be a CQR of X, and let u be the corresponding vector of additional variables. The problem

    min_{x,u} { c^T x : (x, u) satisfies S }

with design variables x, u is equivalent to the original problem (P), on one hand, and is a conic quadratic program, on the other hand.

Let us call a problem of the form (P) with CQ-representable X a good problem.
How do we recognize good problems, i.e., how do we recognize CQ-representable sets? Well, how do we recognize continuity of a given function, like f(x, y) = exp{sin(x + exp{y})}? Normally it is not done by a straightforward verification of the definition of continuity, but by using two kinds of tools:
A. We know a number of simple functions – a constant, f(x) = x, f(x) = sin(x), f(x) = exp{x}, etc. – which indeed are continuous: “once for the entire life” we have verified it directly, by demonstrating that the functions fit the definition of continuity;

B. We know a number of basic continuity-preserving operations, like taking products, sums, superpositions, etc.

When we see that a function is obtained from “simple” functions – those of type A – by operations
of type B (as it is the case in the above example), we immediately infer that the function is
continuous.
This approach which is common in Mathematics is the one we are about to follow. In fact,
we need to answer two kinds of questions:

(?) What are CQ-representable sets?

(??) What are CQ-representable functions g(x), i.e., functions which possess CQ-representable epigraphs

    Epi{g} = {(x, t) ∈ R^n × R : g(x) ≤ t}?

Our interest in the second question is motivated by the following

Observation: If a function g is CQ-representable, then so are all its level sets {x : g(x) ≤ a}, and every CQ-representation of (the epigraph of) g explicitly induces CQ-representations of the level sets.

Indeed, assume that we have a CQ-representation of the epigraph of g:

g(x) ≤ t ⇔ ∃u : kαj (x, t, u)k2 ≤ βj (x, t, u), j = 1, ..., N,

where αj and βj are, respectively, vector-valued and scalar affine functions of their arguments.
In order to get from this representation a CQ-representation of a level set {x : g(x) ≤ a}, it suffices to fix in the conic quadratic inequalities ‖αj(x, t, u)‖₂ ≤ βj(x, t, u) the variable t at the value a.
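At the data level this substitution is purely mechanical: the coefficient column of t is folded into the constant terms. A minimal sketch (our illustration, using as toy data a hypothetical CQR of g(x) = x², namely ‖[x; (t−1)/2]‖₂ ≤ (t+1)/2, which is derived in Example 4 below):

```python
import numpy as np

# One conic quadratic inequality in variables z = [x, t] is stored as
#   || M @ z + m ||_2 <= p @ z + q.
# Toy CQR of Epi{x^2}:  || [x; (t-1)/2] ||_2 <= (t+1)/2.
M = np.array([[1.0, 0.0],
              [0.0, 0.5]]); m = np.array([0.0, -0.5])
p = np.array([0.0, 0.5]);   q = 0.5

def fix_t(M, m, p, q, a, t_col=1):
    """Substitute t = a: fold t's column into the constants and drop it."""
    m2 = m + a * M[:, t_col]
    q2 = q + a * p[t_col]
    keep = [j for j in range(M.shape[1]) if j != t_col]
    return M[:, keep], m2, p[keep], q2

# The level set {x : x^2 <= 4} becomes {x : ||[x; 1.5]||_2 <= 2.5} = [-2, 2].
M2, m2, p2, q2 = fix_t(M, m, p, q, a=4.0)
for x in (-2.0, 0.0, 2.0, 2.1, -3.0):
    z = np.array([x])
    in_set = np.linalg.norm(M2 @ z + m2) <= p2 @ z + q2 + 1e-12
    assert in_set == (x * x <= 4.0)
```
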

We list below our “raw materials” – simple functions and sets admitting CQR’s.

2.3.1 Elementary CQ-representable functions/sets


1. A constant function g(x) ≡ a.
Indeed, the epigraph of this function, {(x, t) | a ≤ t}, is given by a linear inequality, and a linear inequality 0 ≤ pᵀz − q is at the same time a conic quadratic inequality, since L¹ = R₊.

2. An affine function g(x) = aᵀx + b.
Indeed, the epigraph of an affine function is given by a linear inequality.

3. The Euclidean norm g(x) = ‖x‖₂.
Indeed, the epigraph of g is given by the conic quadratic inequality ‖x‖₂ ≤ t in variables x, t.

4. The squared Euclidean norm g(x) = xᵀx.
Indeed, t = (t+1)²/4 − (t−1)²/4, so that

xᵀx ≤ t ⇔ xᵀx + (t−1)²/4 ≤ (t+1)²/4 ⇔ ‖[x; (t−1)/2]‖₂ ≤ (t+1)/2

(check the second ⇔!), and the last relation is a conic quadratic inequality.
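A quick numerical sanity check (our illustration, not part of the text) of the equivalence xᵀx ≤ t ⇔ ‖[x; (t−1)/2]‖₂ ≤ (t+1)/2 on random data:

```python
import numpy as np

# Check the CQR of the squared Euclidean norm on random (x, t).
rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.normal(size=3)
    t = rng.normal() * 4
    lhs = x @ x <= t
    rhs = np.linalg.norm(np.append(x, (t - 1) / 2)) <= (t + 1) / 2
    assert lhs == rhs
```
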
5. The fractional-quadratic function (x vector, s scalar)

g(x, s) = xᵀx/s, s > 0;  g(x, s) = 0, s = 0, x = 0;  g(x, s) = +∞ otherwise.

Indeed, with the convention that (xᵀx)/0 is 0 or +∞, depending on whether x = 0 or not, and taking into account that ts = (t+s)²/4 − (t−s)²/4, we have:

{xᵀx/s ≤ t, s ≥ 0} ⇔ {xᵀx ≤ ts, t ≥ 0, s ≥ 0} ⇔ {xᵀx + (t−s)²/4 ≤ (t+s)²/4, t ≥ 0, s ≥ 0}
⇔ ‖[x; (t−s)/2]‖₂ ≤ (t+s)/2
94 LECTURE 2. CONIC QUADRATIC PROGRAMMING

(check the third ⇔!), and the last relation is a conic quadratic inequality.
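A numerical check (our illustration) of the key equivalence {xᵀx ≤ ts, t ≥ 0, s ≥ 0} ⇔ ‖[x; (t−s)/2]‖₂ ≤ (t+s)/2 on random data:

```python
import numpy as np

# Check the CQR underlying the fractional-quadratic function.
rng = np.random.default_rng(1)
for _ in range(1000):
    x = rng.normal(size=2)
    t, s = rng.normal(size=2) * 3
    lhs = (x @ x <= t * s) and t >= 0 and s >= 0
    rhs = np.linalg.norm(np.append(x, (t - s) / 2)) <= (t + s) / 2
    assert lhs == rhs
```
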

The level sets of the CQr functions 1 – 5 provide us with a spectrum of “elementary” CQr
sets. We add to this spectrum one more set:

6. (A branch of) Hyperbola {(t, s) ∈ R² : ts ≥ 1, t > 0}.
Indeed,

{ts ≥ 1, t > 0} ⇔ {(t+s)²/4 ≥ 1 + (t−s)²/4 & t > 0} ⇔ {‖[(t−s)/2; 1]‖₂² ≤ (t+s)²/4}
⇔ {‖[(t−s)/2; 1]‖₂ ≤ (t+s)/2}

(check the last ⇔!), and the latter relation is a conic quadratic inequality.
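A numerical check (our illustration) that {ts ≥ 1, t > 0} ⇔ ‖[(t−s)/2; 1]‖₂ ≤ (t+s)/2:

```python
import numpy as np

# Check the CQR of the hyperbola branch on random (t, s).
rng = np.random.default_rng(2)
for _ in range(1000):
    t, s = rng.normal(size=2) * 3
    lhs = t * s >= 1 and t > 0
    rhs = np.hypot((t - s) / 2, 1.0) <= (t + s) / 2
    assert lhs == rhs
```
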
Next we study simple operations preserving CQ-representability of functions/sets.

2.3.2 Operations preserving CQ-representability of sets


To save words, in the sequel SO stands for the family of cones which are finite direct products
of Lorentz cones.

A. Intersection: If sets Xi ⊂ Rn , i = 1, ..., k, are CQr:

Xi = {x : ∃ui : Ai x + Bi ui + bi ∈ Ki } [Ki ∈ SO]

so is their intersection X = X1 ∩ ... ∩ Xk.
Indeed,

X1 ∩ ... ∩ Xk = {x : ∃u = [u1; ...; uk] : [A1; ...; Ak]x + Diag{B1, ..., Bk}u + [b1; ...; bk] ∈ K := K1 × ... × Kk}

and K ∈ SO.
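The rule is constructive: at the data level, intersecting CQr sets amounts to stacking the A-blocks, placing the B-blocks on a block diagonal, and concatenating the b-blocks. A minimal sketch of this bookkeeping (our illustration; the toy matrices below are hypothetical):

```python
import numpy as np

# cqrs: list of (A_i, B_i, b_i) with X_i = {x : exists u_i, A_i x + B_i u_i + b_i in K_i}.
# The cone of the result is the direct product K_1 x ... x K_k, tracked separately.
def intersect_cqrs(cqrs):
    """Return stacked (A, B, b) representing the intersection of the X_i."""
    A = np.vstack([A_i for A_i, _, _ in cqrs])
    Bs = [B_i for _, B_i, _ in cqrs]
    B = np.zeros((sum(Bi.shape[0] for Bi in Bs), sum(Bi.shape[1] for Bi in Bs)))
    r = c = 0
    for B_i in Bs:                      # place each B_i on the block diagonal
        B[r:r + B_i.shape[0], c:c + B_i.shape[1]] = B_i
        r += B_i.shape[0]; c += B_i.shape[1]
    b = np.concatenate([b_i for _, _, b_i in cqrs])
    return A, B, b

# Toy usage with two hypothetical 1-variable CQRs, each with a 2-dimensional cone block:
A1, B1, b1 = np.ones((2, 1)), np.zeros((2, 1)), np.zeros(2)
A2, B2, b2 = np.ones((2, 1)), np.ones((2, 2)), np.ones(2)
A, B, b = intersect_cqrs([(A1, B1, b1), (A2, B2, b2)])
```
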

Corollary 2.3.1 A polyhedral set – a set in Rn given by finitely many scalar linear inequalities aiᵀx ≤ bi, i = 1, ..., m – is CQr.

Indeed, a polyhedral set is the intersection of finitely many level sets of affine functions, and
all these functions (and thus – their level sets) are CQr.
Since a scalar linear equality is equivalent to a pair of opposite scalar linear inequalities, the
words “inequalities” in Corollary 2.3.1 can be replaced with “inequalities and equalities.”

Corollary 2.3.2 If every one of the sets Xi in problem (P) is CQr, then the problem is good
– it can be rewritten in the form of a conic quadratic problem, and such a transformation is
readily given by CQR’s of the sets Xi , i = 1, ..., m.

B. Direct product: If sets Xi ⊂ Rni , i = 1, ..., k, are CQr:

Xi = {xi : ∃ui : Ai xi + Bi ui + bi ∈ Ki } [Ki ∈ SO]

so is their direct product X1 × ... × Xk .


Indeed,

X1 × ... × Xk = {x = [x1; ...; xk] : ∃u = [u1; ...; uk] : Diag{A1, ..., Ak}x + Diag{B1, ..., Bk}u + [b1; ...; bk] ∈ K := K1 × ... × Kk}

and K ∈ SO.

C. Affine image (“Projection”): If a set X ⊂ Rn is CQr:

X = {x : ∃u : P x + Qu + r ≥K 0} [K ∈ SO]

and x ↦ y = ℓ(x) := Ax + b is an affine mapping of Rn to Rk, then the image ℓ(X) of the set X under the mapping is CQr.
Indeed,

ℓ(X) = {y : ∃(x, u) : Px + Qu + r ≥K 0, y − Ax − b = 0}

and the right hand side representation is a CQR.
Corollary 2.3.3 A nonempty set X is CQr if and only if its characteristic function

χ(x) = 0, x ∈ X;  +∞, otherwise

is CQr.
Indeed, Epi{χ} is the direct product of X and the nonnegative ray; therefore if X is CQr, so is χ(·) (see B. and Corollary 2.3.1). Vice versa, if χ is CQr, then X is CQr by C., since X is the projection of Epi{χ} on the space of x-variables.

D. Inverse affine image: Let X ⊂ Rn be a CQr set:

X = {x : ∃u : P x + Qu + r ≥K 0} [K ∈ SO]

and let ℓ(y) = Ay + b be an affine mapping from Rk to Rn. Then the inverse image ℓ⁻¹(X) = {y ∈ Rk : Ay + b ∈ X} of X under the mapping is CQr.
Indeed,

ℓ⁻¹(X) = {y : ∃u : PAy + Qu + [r + Pb] ≥K 0}.

Corollary 2.3.4 Consider a good problem (P) and assume that we restrict its design variables
to be given affine functions of a new design vector y. Then the induced problem with the design
vector y is also good.
It should be stressed that the above statements are not just existence theorems – they are “algorithmic”: given CQR’s of the “operands” (say, m sets X1, ..., Xm), we may build completely mechanically a CQR for the “result of the operation” (e.g., for the intersection X1 ∩ ... ∩ Xm).

2.3.3 Operations preserving CQ-representability of functions


Recall that a function g(x) is called CQ-representable if its epigraph Epi{g} = {(x, t) : g(x) ≤ t} is a CQ-representable set; a CQR of the epigraph of g is called a conic quadratic representation of g. Recall also that a level set of a CQr function is CQ-representable. Here are transformations preserving CQ-representability of functions:

E. Taking maximum: If functions gi(x), i = 1, ..., m, are CQr, then so is their maximum g(x) = max_{i=1,...,m} gi(x).
Indeed, Epi{g} = ⋂_i Epi{gi}, and the intersection of finitely many CQr sets is again CQr.

F. Summation with nonnegative weights: If functions gi(x), x ∈ Rn, i = 1, ..., m, are CQr and αi are nonnegative weights, then the function g(x) = Σ_{i=1}^m αi gi(x) is also CQr.
This is a particular case of

Theorem 2.3.1 [Theorem on superposition] Let fi(x) : Rn → R ∪ {+∞}, i ≤ m, be CQr:

{t ≥ fi(x)} ⇔ {∃ui : Ai x + t bi + Bi ui + ci ∈ Ki ∈ SO},

and let F : Rm → R ∪ {+∞} be CQr:

{t ≥ F(y)} ⇔ {∃u : Ay + tb + Bu + c ∈ K ∈ SO}

and nondecreasing in every one of its arguments. Then the composition

g(x) = F(f1(x), ..., fm(x)) when fi(x) < ∞, i ≤ m;  g(x) = +∞ otherwise

is CQr.

Indeed, as is immediately seen,

{t ≥ g(x)} ⇔ ∃u, u1, ..., um, t1, ..., tm :
    Ai x + ti bi + Bi ui + ci ∈ Ki, i ≤ m   [these constraints “say” that fi(x) ≤ ti]
    A[t1; ...; tm] + tb + Bu + c ∈ K        [this constraint “says” that F([t1; ...; tm]) ≤ t]

and the system of constraints on the variables x, ti, t, ui, u in the right hand side is a vector inequality, linear in these variables, with cone from SO.
In order to get the “summation with nonnegative weights” rule from the Theorem on superposition, it suffices to set F(y) = Σ_i αi yi.
Note that when some of the fi are affine, the Theorem on superposition remains true when skipping the requirement for F to be monotone in the arguments yi for which fi are affine. Assuming that f1, ..., fs are affine, a CQR of g is given by the equivalence

{t ≥ g(x)} ⇔ ∃u, us+1, us+2, ..., um, t1, ..., tm :
    fi(x) − ti = 0, i ≤ s,
    Ai x + ti bi + Bi ui + ci ∈ Ki, s < i ≤ m,
    A[t1; ...; tm] + tb + Bu + c ∈ K

G. Direct summation: If functions gi (xi ), xi ∈ Rni , i = 1, ..., m, are CQr, so is their direct
sum
g(x1 , ..., xm ) = g1 (x1 ) + ... + gm (xm ).

Indeed, the functions ĝi(x1, ..., xm) = gi(xi) are clearly CQr – their epigraphs are the inverse images of the epigraphs of gi under the affine mappings (x1, ..., xm, t) ↦ (xi, t). It remains to note that g = Σ_i ĝi.

H. Affine substitution of argument: If a function g(x), x ∈ Rn, is CQr and y ↦ Ay + b is an affine mapping from Rk to Rn, then the superposition g→(y) = g(Ay + b) is CQr.
Indeed, the epigraph of g→ is the inverse image of the epigraph of g under the affine mapping (y, t) ↦ (Ay + b, t).

I. Partial minimization: Let g(x) be CQr. Assume that x is partitioned into two sub-vectors: x = (v, w), and let ĝ be obtained from g by partial minimization in w:

ĝ(v) = inf_w g(v, w),

and assume that for every v such that ĝ(v) < ∞ the minimum in w is achieved. Then ĝ is CQr.
Indeed, under the assumption that the minimum in w is achieved at every v for which g(v, ·) is not identically +∞, Epi{ĝ} is the image of Epi{g} under the projection (v, w, t) ↦ (v, t).

2.3.4 More operations preserving CQ-representability


Let us list a number of more “advanced” operations with sets/functions preserving CQ-
representability.

J. Arithmetic summation of sets. Let Xi , i = 1, ..., k, be nonempty convex sets in Rn , and let
X1 + X2 + ... + Xk be the arithmetic sum of these sets:

X1 + ... + Xk = {x = x1 + ... + xk : xi ∈ Xi , i = 1, ..., k}.

We claim that

If all Xi are CQr, so is their sum.

Indeed, the direct product

X = X1 × X2 × ... × Xk ⊂ Rnk

is CQr by B.; it remains to note that X1 + ... + Xk is the image of X under the linear mapping

(x1, ..., xk) ↦ x1 + ... + xk : Rnk → Rn,

and by C. the image of a CQr set under an affine mapping is also CQr.

Figure 2.1: Closed conic hulls of two closed convex sets shown in blue: segment (left) and ray
(right). Magenta sets are liftings of the blue ones; red angles are the closed conic hulls of blue
sets. To get from closed conic hulls the conic hulls per se, you should eliminate from the red
angles their parts on the x-axis. When the original set is bounded (left picture), all you need to
eliminate is the origin; when it is unbounded (right picture), you need to eliminate much more.

J.1. inf-convolution. The operation with functions related to the arithmetic summation of sets is the inf-convolution defined as follows. Let fi : Rn → R ∪ {∞}, i = 1, ..., k, be functions. Their inf-convolution is the function

f(x) = inf{f1(x1) + ... + fk(xk) : x1 + ... + xk = x}. (∗)

We claim that

If all fi are CQr, their inf-convolution is > −∞ everywhere, and for every x for which the inf in the right hand side of (∗) is finite this infimum is achieved, then f is CQr.

Indeed, under the assumptions in question, Epi{f} = Epi{f1} + ... + Epi{fk}.

K. Taking conic hull of a convex set. Let X ⊂ Rn be a nonempty convex set. Its conic hull is the set

X⁺ = {(x, t) ∈ Rn × R : t > 0, t⁻¹x ∈ X}.

Geometrically (see Fig. 2.1): we add to the coordinates of vectors from Rn a new coordinate equal to 1:

(x1, ..., xn)ᵀ ↦ (x1, ..., xn, 1)ᵀ,

thus getting an affine embedding of Rn in Rn+1. We take the image of X under this mapping – “lift” X by one along the (n + 1)st axis – and then form the set X⁺ by taking all (open) rays emanating from the origin and crossing the “lifted” X.
The conic hull is not closed (e.g., it does not contain the origin, which clearly is in its closure).
The closed conic hull of X is the closure of its conic hull:
 
X̂⁺ = cl X⁺ = {(x, t) ∈ Rn × R : ∃{(xi, ti)}∞i=1 : ti > 0, ti⁻¹xi ∈ X, t = lim_i ti, x = lim_i xi}.

Note that if X is a closed convex set, then the conic hull X⁺ of X is nothing but the intersection of the closed conic hull X̂⁺ and the open half-space {t > 0} (check!); thus, the closed conic hull of a closed convex set X is larger than the conic hull by some part of the hyperplane {t = 0}. When X is closed and bounded, the difference between the hulls is pretty small: X̂⁺ = X⁺ ∪ {0} (check!). Note also that if X is a closed convex set, you can obtain it from its (closed) conic hull by taking the intersection with the hyperplane {t = 1}:

x ∈ X ⇔ (x, 1) ∈ X̂⁺ ⇔ (x, 1) ∈ X⁺.

Proposition 2.3.1 (i) If a set X is CQr:

X = {x : ∃u : Ax + Bu + b ≥K 0}, (2.3.4)

with K ∈ SO, then the conic hull X⁺ is CQr as well:

X⁺ = {(x, t) : ∃(u, s) : Ax + Bu + tb ≥K 0, [2; s − t; s + t] ≥L³ 0}. (2.3.5)

(ii) If the set X given by (2.3.4) is closed, then the CQr set

X̃⁺ = {(x, t) : ∃u : Ax + Bu + tb ≥K 0} ∩ {(x, t) : t ≥ 0} (2.3.6)

is “between” the conic hull X⁺ and the closed conic hull X̂⁺ of X:

X⁺ ⊂ X̃⁺ ⊂ X̂⁺.

(iii) If the CQR (2.3.4) is such that Bu ∈ K implies that Bu = 0, then X̃⁺ = X̂⁺, so that X̂⁺ is CQr.
Proof. (i): We have

X⁺ ≡ {(x, t) : t > 0, x/t ∈ X}
   = {(x, t) : ∃u : A(x/t) + Bu + b ≥K 0, t > 0}
   = {(x, t) : ∃v : Ax + Bv + tb ≥K 0, t > 0}
   = {(x, t) : ∃v, s : Ax + Bv + tb ≥K 0, t, s ≥ 0, ts ≥ 1},

and we arrive at (2.3.5).


(ii): We should prove that the set X̃⁺ (which by construction is CQr) is between X⁺ and X̂⁺. The inclusion X⁺ ⊂ X̃⁺ is readily given by (2.3.5). Next, let us prove that X̃⁺ ⊂ X̂⁺. Let us choose a point x̄ ∈ X, so that for a properly chosen ū it holds

Ax̄ + Bū + b ≥K 0,

i.e., (x̄, 1) ∈ X̃⁺. Since X̃⁺ is convex (this is true for every CQr set), we conclude that whenever (x, t) belongs to X̃⁺, so does every pair (x_ε = x + εx̄, t_ε = t + ε) with ε > 0:

∃u = u_ε : Ax_ε + Bu_ε + t_ε b ≥K 0.

It follows that t_ε⁻¹x_ε ∈ X, whence (x_ε, t_ε) ∈ X⁺ ⊂ X̂⁺. As ε → +0, we have (x_ε, t_ε) → (x, t), and since X̂⁺ is closed, we get (x, t) ∈ X̂⁺. Thus, X̃⁺ ⊂ X̂⁺.
(iii): Assume that Bu ∈ K only if Bu = 0, and let us show that X̃⁺ = X̂⁺. We just have to prove that X̃⁺ is closed, which indeed is the case due to the following

Lemma 2.3.1 Let Y be a CQr set with CQR

Y = {y : ∃v : Py + Qv + r ≥K 0}

such that Qv ∈ K only when Qv = 0. Then
(i) There exists a constant C < ∞ such that

Py + Qv + r ∈ K ⇒ ‖Qv‖₂ ≤ C(1 + ‖Py + r‖₂); (2.3.7)

(ii) Y is closed.

Proof of Lemma. (i): Assume, contrary to what should be proved, that there exists a sequence {yi, vi} such that

Pyi + Qvi + r ∈ K, ‖Qvi‖₂ ≥ αi(1 + ‖Pyi + r‖₂), αi → ∞ as i → ∞. (2.3.8)

By Linear Algebra, for every b such that the linear system Qv = b is solvable, it admits a solution v such that ‖v‖₂ ≤ C₁‖b‖₂ with C₁ < ∞ depending on Q only; therefore we can assume, in addition to (2.3.8), that

‖vi‖₂ ≤ C₁‖Qvi‖₂ (2.3.9)

for all i. Now, from (2.3.8) it clearly follows that

‖Qvi‖₂ → ∞ as i → ∞; (2.3.10)

setting

v̂i = vi/‖Qvi‖₂,

we have
    (a) ‖Qv̂i‖₂ = 1 ∀i,
    (b) ‖v̂i‖₂ ≤ C₁ ∀i  [by (2.3.9)],
    (c) Qv̂i + ‖Qvi‖₂⁻¹(Pyi + r) ∈ K ∀i,
    (d) ‖Qvi‖₂⁻¹‖Pyi + r‖₂ ≤ αi⁻¹ → 0 as i → ∞  [by (2.3.8)].
Taking into account (b) and passing to a subsequence, we can assume that v̂i → v̂ as i → ∞; by (c, d) Qv̂ ∈ K, while by (a) ‖Qv̂‖₂ = 1, i.e., Qv̂ ≠ 0, which is the desired contradiction.
(ii) To prove that Y is closed, assume that yi ∈ Y and yi → y as i → ∞, and let us verify that
y ∈ Y . Indeed, since yi ∈ Y , there exist vi such that P yi + Qvi + r ∈ K. Same as above, we can assume
that (2.3.9) holds. Since yi → y, the sequence {Qvi } is bounded by (2.3.7), so that the sequence {vi }
is bounded by (2.3.9). Passing to a subsequence, we can assume that vi → v as i → ∞; passing to the
limit, as i → ∞, in the inclusion Pyi + Qvi + r ∈ K, we get Py + Qv + r ∈ K, i.e., y ∈ Y. □

K.1. “Projective transformation” of a CQr function. The operation with functions related to taking conic hull of a convex set is the “projective transformation” which converts a function f(x) : Rn → R ∪ {∞} 3) into the function

f⁺(x, s) = sf(x/s) : {s > 0} × Rn → R ∪ {∞}.

The epigraph of f⁺ is the conic hull of the epigraph of f with the origin excluded:

{(x, s, t) : s > 0, t ≥ f⁺(x, s)} = {(x, s, t) : s > 0, s⁻¹t ≥ f(s⁻¹x)}
                                 = {(x, s, t) : s > 0, s⁻¹(x, t) ∈ Epi{f}}.


3) Recall that “a function” for us means a proper function – one which takes a finite value at least at one point.

The set cl Epi{f⁺} is the epigraph of a certain function, denoted f̂⁺(x, s); this function is called the projective transformation of f. E.g., the fractional-quadratic function from Example 5 is the projective transformation of the function f(x) = xᵀx. Note that the function f̂⁺(x, s) does not necessarily coincide with f⁺(x, s) even in the open half-space s > 0; this is the case if and only if the epigraph of f is closed (or, which is the same, f is lower semicontinuous: whenever xi → x and f(xi) → a, we have f(x) ≤ a). We are about to demonstrate that the projective transformation “nearly” preserves CQ-representability:

Proposition 2.3.2 Let f : Rn → R ∪ {∞} be a lower semicontinuous function which is CQr:

Epi{f} ≡ {(x, t) : t ≥ f(x)} = {(x, t) : ∃u : Ax + tp + Bu + b ≥K 0}, (2.3.11)

where K ∈ SO. Assume that the CQR is such that Bu ≥K 0 implies that Bu = 0. Then the projective transformation f̂⁺ of f is CQr, namely,

Epi{f̂⁺} = {(x, t, s) : s ≥ 0, ∃u : Ax + tp + Bu + sb ≥K 0}.

Indeed, let us set

G = {(x, t, s) : ∃u : s ≥ 0, Ax + tp + Bu + sb ≥K 0}.

As we remember from the previous combination rule, G is exactly the closed conic hull of the epigraph of f, i.e., G = Epi{f̂⁺}.

L. The polar of a convex set. Let X ⊂ Rn be a convex set containing the origin. The polar of X is the set

X∗ = {y ∈ Rn : yᵀx ≤ 1 ∀x ∈ X}.
In particular,

• the polar of the singleton {0} is the entire space;

• the polar of the entire space is the singleton {0};

• the polar of a linear subspace is its orthogonal complement (why?);

• the polar of a closed convex pointed cone K with a nonempty interior is −K∗ , minus the
dual cone (why?).

Polarity is “symmetric”: if X is a closed convex set containing the origin, then so is X∗ , and
twice taken polar is the original set: (X∗ )∗ = X.
We are about to prove that the polarity X 7→ X∗ “nearly” preserves CQ-representability:

Proposition 2.3.3 Let X ⊂ Rn, 0 ∈ X, be a CQr set:

X = {x : ∃u : Ax + Bu + b ≥K 0}, (2.3.12)

where K ∈ SO. Assume that the conic inequality in (2.3.12) is essentially strictly feasible (see Definition 1.4.3). Then the polar of X is the CQr set

X∗ = {y : ∃ξ : Aᵀξ + y = 0, Bᵀξ = 0, bᵀξ ≤ 1, ξ ≥K 0}. (2.3.13)

Indeed, consider the following conic quadratic problem:

min_{x,u} {−yᵀx : Ax + Bu + b ≥K 0}. (Py)

A vector y belongs to X∗ if and only if (Py) is bounded below and its optimal value is at least −1. Since (Py) is essentially strictly feasible, from the (refined) Conic Duality Theorem it follows that these properties of (Py) hold if and only if the dual problem

max_ξ {−bᵀξ : Aᵀξ = −y, Bᵀξ = 0, ξ ≥K 0}

(recall that K is self-dual) has a feasible solution with the value of the dual objective at least −1. Thus,

X∗ = {y : ∃ξ : Aᵀξ + y = 0, Bᵀξ = 0, bᵀξ ≤ 1, ξ ≥K 0},

as claimed in (2.3.13). The resulting representation of X∗ clearly is a CQR. □
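A small illustration of (2.3.13) (ours, not from the text). Take X the unit Euclidean ball in R², with the CQR x ∈ X ⇔ [x; 1] ∈ L³ (no u-variables; A = [I; 0], b = [0; 0; 1]). Any feasible ξ ∈ L³ with ξ₁:₂ = −y has ξ₃ ≥ ‖y‖₂ and bᵀξ = ξ₃, so (2.3.13) says y ∈ X∗ iff ‖y‖₂ ≤ 1: the unit ball is its own polar.

```python
import numpy as np

# Membership test for X* per (2.3.13), using the minimizing multiplier
# xi = (-y, ||y||): it has the smallest possible b^T xi = xi_3 among
# feasible xi, so b^T xi <= 1 is attainable iff ||y|| <= 1.
def in_polar(y):
    xi = np.append(-y, np.linalg.norm(y))
    in_lorentz = np.linalg.norm(xi[:2]) <= xi[2] + 1e-12   # xi in L^3
    return bool(in_lorentz and xi[2] <= 1)

assert in_polar(np.array([0.6, 0.8]))        # ||y|| = 1, boundary point
assert not in_polar(np.array([1.0, 0.5]))    # ||y|| > 1
```
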

L.1. The Legendre transformation of a CQr function. The operation with functions related to taking the polar of a convex set is the Legendre (or Fenchel conjugate) transformation. The Legendre transformation (≡ the Fenchel conjugate) of a function f(x) : Rn → R ∪ {∞} is the function

f∗(y) = sup_x [yᵀx − f(x)].

In particular,

• the conjugate of a constant f(x) ≡ c is the function

f∗(y) = −c, y = 0;  +∞, y ≠ 0;

• the conjugate of an affine function f(x) ≡ aᵀx + b is the function

f∗(y) = −b, y = a;  +∞, y ≠ a;

• the conjugate of a convex quadratic form f(x) ≡ ½xᵀDᵀDx + bᵀx + c with rectangular D such that Null(Dᵀ) = {0} is the function

f∗(y) = ½(y − b)ᵀDᵀ(DDᵀ)⁻²D(y − b) − c, y − b ∈ Im Dᵀ;  +∞, otherwise;
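A numerical sanity check of the quadratic-form conjugate (our illustration, on a random instance with D of full row rank, so that DDᵀ is invertible):

```python
import numpy as np

# Compare the closed form  f*(y) = (1/2)(y-b)^T D^T (D D^T)^{-2} D (y-b) - c
# with sup_x [y^T x - f(x)], computed by solving the stationarity condition
# D^T D x = y - b (solvable, since y - b is chosen in Im D^T).
rng = np.random.default_rng(3)
D = rng.normal(size=(2, 4))                  # full row rank holds generically
b, c = rng.normal(size=4), 0.7
y = D.T @ rng.normal(size=2) + b             # ensures y - b in Im D^T
f = lambda x: 0.5 * x @ D.T @ D @ x + b @ x + c
x_star = np.linalg.lstsq(D.T @ D, y - b, rcond=None)[0]
sup_val = y @ x_star - f(x_star)
M = np.linalg.inv(D @ D.T)
closed_form = 0.5 * (y - b) @ D.T @ M @ M @ D @ (y - b) - c
assert np.isclose(sup_val, closed_form)
```
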

It is worth mentioning that the Legendre transformation is symmetric: if f is a proper convex


lower semicontinuous function (i.e., ∅ 6= Epi{f } is convex and closed), then so is f∗ , and taken
twice, the Legendre transformation recovers the original function: (f∗ )∗ = f .
We are about to prove that the Legendre transformation “nearly” preserves CQ-representability:

Proposition 2.3.4 Let f : Rn → R ∪ {∞} be CQr:

{(x, t) : t ≥ f(x)} = {(x, t) : ∃u : Ax + tp + Bu + b ≥K 0},

where K ∈ SO. Assume that the conic inequality in the right hand side is essentially strictly feasible. Then the Legendre transformation of f is CQr:

Epi{f∗} = {(y, s) : ∃ξ : Aᵀξ = −y, Bᵀξ = 0, pᵀξ = 1, s ≥ bᵀξ, ξ ≥K 0}. (2.3.14)

Indeed, we have

Epi{f∗} = {(y, s) : yᵀx − f(x) ≤ s ∀x} = {(y, s) : yᵀx − t ≤ s ∀(x, t) ∈ Epi{f}}. (2.3.15)

Consider the conic quadratic program

min_{x,t,u} {−yᵀx + t : Ax + tp + Bu + b ≥K 0}. (Py)

By (2.3.15), a pair (y, s) belongs to Epi{f∗} if and only if (Py) is bounded below with optimal value ≥ −s. Since (Py) is essentially strictly feasible, this is the case if and only if the dual problem

max_ξ {−bᵀξ : Aᵀξ = −y, Bᵀξ = 0, pᵀξ = 1, ξ ≥K 0}

has a feasible solution with the value of the dual objective ≥ −s. Thus,

Epi{f∗} = {(y, s) : ∃ξ : Aᵀξ = −y, Bᵀξ = 0, pᵀξ = 1, s ≥ bᵀξ, ξ ≥K 0},

as claimed in (2.3.14), and (2.3.14) is a CQR. □

Corollary 2.3.5 Let X be a CQr set:

X = {x : ∃u : Ax + Bu + r ∈ K} [K ∈ SO]

and let the conic inequality in this CQR be essentially strictly feasible. Then the support function

Supp_X(x) = sup_{y∈X} xᵀy

of the nonempty CQr set X is CQr.

Indeed, Supp_X(·) clearly is the Fenchel conjugate of the characteristic function χ(x) of X, and the latter function admits the CQR given by the equivalence

{t ≥ χ(x)} ⇔ {∃u : [Ax + Bu + r; t] ≥_{K×R₊} 0}.

The conic inequality in the right hand side is essentially strictly feasible along with Ax + Bu + r ≥K 0, and it remains to refer to Proposition 2.3.4. □

M. Taking convex hull of a finite union. The convex hull of a set Y ⊂ Rn is the smallest convex set which contains Y:

Conv(Y) = {x = Σ_{i=1}^k αi xi : k ∈ N, xi ∈ Y, αi ≥ 0, Σ_i αi = 1}.

The closed convex hull cl Conv(Y) of Y is the smallest closed convex set containing Y.
Following Yu. Nesterov, let us prove that taking convex hull “nearly” preserves CQ-representability:
Proposition 2.3.5 Let X1, ..., Xk ⊂ Rn be closed convex CQr sets:

Xi = {x : ∃ui : Ai x + Bi ui + bi ≥Ki 0}, i = 1, ..., k, (2.3.16)

with Ki ∈ SO. Then the CQr set

Y = {x : ∃ξ^1, ..., ξ^k, t1, ..., tk, η^1, ..., η^k :
    [A1ξ^1 + B1η^1 + t1b1; ...; Akξ^k + Bkη^k + tkbk] ≥K 0, K = K1 × ... × Kk,
    t1, ..., tk ≥ 0, (2.3.17)
    ξ^1 + ... + ξ^k = x,
    t1 + ... + tk = 1}

is between the convex hull and the closed convex hull of the set X1 ∪ ... ∪ Xk:

Conv(X1 ∪ ... ∪ Xk) ⊂ Y ⊂ cl Conv(X1 ∪ ... ∪ Xk).

If, in addition to CQ-representability,
(i) all Xi are bounded,
or
(ii) Xi = Zi + W, where Zi are closed and bounded sets and W is a convex closed set,
then

Conv(X1 ∪ ... ∪ Xk) = Y = cl Conv(X1 ∪ ... ∪ Xk)

is CQr.
Proof. First, the set Y clearly contains Conv(X1 ∪ ... ∪ Xk). Indeed, since the sets Xi are convex, the convex hull of their union is

{x = t1x^1 + ... + tkx^k : x^i ∈ Xi, ti ≥ 0, t1 + ... + tk = 1}

(why?); for a point

x = t1x^1 + ... + tkx^k  [x^i ∈ Xi, ti ≥ 0, t1 + ... + tk = 1],

there exist u^i, i = 1, ..., k, such that

Ai x^i + Bi u^i + bi ≥Ki 0.

We get

x = (t1x^1) + ... + (tkx^k) = ξ^1 + ... + ξ^k  [ξ^i = ti x^i];
t1, ..., tk ≥ 0; (2.3.18)
t1 + ... + tk = 1;
Ai ξ^i + Bi η^i + ti bi ≥Ki 0, i = 1, ..., k  [η^i = ti u^i],

so that x ∈ Y (see the definition of Y).
To complete the proof that Y is between the convex hull and the closed convex hull of X1 ∪ ... ∪ Xk, it remains to verify that if x ∈ Y then x is contained in the closed convex hull of X1 ∪ ... ∪ Xk. Let us somehow choose x̄^i ∈ Xi; for properly chosen ū^i we have

Ai x̄^i + Bi ū^i + bi ≥Ki 0, i = 1, ..., k. (2.3.19)

Since x ∈ Y, there exist ti, ξ^i, η^i satisfying the relations

x = ξ^1 + ... + ξ^k,
t1, ..., tk ≥ 0, (2.3.20)
t1 + ... + tk = 1,
Ai ξ^i + Bi η^i + ti bi ≥Ki 0, i = 1, ..., k.

In view of the latter relations and (2.3.19), we have for 0 < ε < 1:

Ai[(1 − ε)ξ^i + εk⁻¹x̄^i] + Bi[(1 − ε)η^i + εk⁻¹ū^i] + [(1 − ε)ti + εk⁻¹]bi ≥Ki 0;

setting

t_{i,ε} = (1 − ε)ti + εk⁻¹;
x^i_ε = t_{i,ε}⁻¹[(1 − ε)ξ^i + εk⁻¹x̄^i];
u^i_ε = t_{i,ε}⁻¹[(1 − ε)η^i + εk⁻¹ū^i],

we get

Ai x^i_ε + Bi u^i_ε + bi ≥Ki 0 ⇒ x^i_ε ∈ Xi,
t_{1,ε}, ..., t_{k,ε} ≥ 0,
t_{1,ε} + ... + t_{k,ε} = 1,

whence

x_ε := Σ_{i=1}^k t_{i,ε}x^i_ε ∈ Conv(X1 ∪ ... ∪ Xk).
On the other hand, we have by construction

x_ε = Σ_{i=1}^k [(1 − ε)ξ^i + εk⁻¹x̄^i] → x = Σ_{i=1}^k ξ^i as ε → +0,

so that x belongs to the closed convex hull of X1 ∪ ... ∪ Xk, as claimed.

It remains to verify that in the cases of (i), (ii) the convex hull of X1 ∪ ... ∪ Xk is the same as the closed convex hull of this union. (i) is a particular case of (ii) corresponding to W = {0}, so that it suffices to prove (ii). Assume that

x_t = Σ_{i=1}^k μ_{ti}[z_{ti} + p_{ti}] → x as t → ∞
[z_{ti} ∈ Zi, p_{ti} ∈ W, μ_{ti} ≥ 0, Σ_i μ_{ti} = 1]

and let us prove that x belongs to the convex hull of the union of the Xi. Indeed, since the Zi are closed and bounded, passing to a subsequence, we may assume that

z_{ti} → z_i ∈ Zi and μ_{ti} → μ_i as t → ∞.

It follows that μ_i ≥ 0, Σ_i μ_i = 1, and the vectors

p_t = Σ_{i=1}^k μ_{ti}p_{ti} = x_t − Σ_{i=1}^k μ_{ti}z_{ti}

converge as t → ∞ to some vector p, and since W is closed and convex, p ∈ W. We now have

x = lim_{t→∞} [Σ_{i=1}^k μ_{ti}z_{ti} + p_t] = Σ_{i=1}^k μ_i z_i + p = Σ_{i=1}^k μ_i[z_i + p],

so that x belongs to the convex hull of the union of the Xi (as a convex combination of points z_i + p ∈ Xi). □

N. The recessive cone of a CQr set. Let X be a closed convex set. The recessive cone Rec(X)
of X is the set
Rec(X) = {h : x + th ∈ X ∀(x ∈ X, t ≥ 0)}.
It can be easily verified that Rec(X) is a closed cone, and that

Rec(X) = {h : x̄ + th ∈ X ∀t ≥ 0} ∀x̄ ∈ X,

i.e., that Rec(X) is the set of all directions h such that the ray emanating from a point of X
and directed by h is contained in X.

Proposition 2.3.6 Let X be a nonempty CQr set with CQR

X = {x ∈ Rn : ∃u : Ax + Bu + b ≥K 0},

where K ∈ SO, and let the CQR be such that Bu ∈ K only if Bu = 0. Then X is closed, and
the recessive cone of X is CQr:

Rec(X) = {h : ∃v : Ah + Bv ≥K 0}. (2.3.21)



Proof. The fact that X is closed is given by Lemma 2.3.1. In order to prove (2.3.21), let us temporarily denote by R the set in the right hand side of this relation; we should prove that R = Rec(X). The inclusion R ⊂ Rec(X) is evident. To prove the inverse inclusion, let x̄ ∈ X and h ∈ Rec(X), so that for every i = 1, 2, ... there exists ui such that

A(x̄ + ih) + Bui + b ∈ K. (2.3.22)

By Lemma 2.3.1,

‖Bui‖₂ ≤ C(1 + ‖A(x̄ + ih) + b‖₂) (2.3.23)

for certain C < ∞ and all i. Besides this, we can assume w.l.o.g. that

‖ui‖₂ ≤ C₁‖Bui‖₂ (2.3.24)

(cf. the proof of Lemma 2.3.1). By (2.3.23) – (2.3.24), the sequence {vi = i⁻¹ui} is bounded; passing to a subsequence, we can assume that vi → v as i → ∞. By (2.3.22), we have for all i

i⁻¹A(x̄ + ih) + Bvi + i⁻¹b ∈ K,

whence, passing to the limit as i → ∞, Ah + Bv ∈ K. Thus, h ∈ R. □

2.3.5 More examples of CQ-representable functions/sets


We are sufficiently equipped to build the dictionary of CQ-representable functions/sets. Having already built the “elementary” part of the dictionary, we can now add a more “advanced” part. The numeration below continues that of Section 2.3.1.

7. Convex quadratic form g(x) = xᵀQx + qᵀx + r (Q is a positive semidefinite symmetric matrix) is CQr.
Indeed, Q is positive semidefinite symmetric and therefore can be decomposed as Q = DᵀD, so that g(x) = ‖Dx‖₂² + qᵀx + r. We see that g is obtained from our “raw materials” – the squared Euclidean norm and an affine function – by affine substitution of argument and addition.
Here is an explicit CQR of g:

{(x, t) : xᵀDᵀDx + qᵀx + r ≤ t} = {(x, t) : ‖[Dx; (t − qᵀx − r − 1)/2]‖₂ ≤ (t − qᵀx − r + 1)/2} (2.3.25)
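A numerical sanity check (ours, not from the text) of the CQR of the convex quadratic form, written with s = t − qᵀx − r as ‖[Dx; (s−1)/2]‖₂ ≤ (s+1)/2:

```python
import numpy as np

# With Q = D^T D, check on random data:
#   x^T Q x + q^T x + r <= t   <=>   || [Dx; (s-1)/2] ||_2 <= (s+1)/2,  s = t - q^T x - r.
rng = np.random.default_rng(4)
D = rng.normal(size=(2, 3))
q, r = rng.normal(size=3), 0.3
for _ in range(1000):
    x = rng.normal(size=3)
    t = rng.normal() * 5
    s = t - q @ x - r
    lhs = x @ D.T @ D @ x + q @ x + r <= t
    rhs = np.linalg.norm(np.append(D @ x, (s - 1) / 2)) <= (s + 1) / 2
    assert lhs == rhs
```
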

8. The cone K = {(x, σ1, σ2) ∈ Rn × R × R : σ1, σ2 ≥ 0, σ1σ2 ≥ xᵀx} is CQr.
Indeed, the set is just the epigraph of the fractional-quadratic function xᵀx/s, see Example 5; we simply write σ1 instead of s and σ2 instead of t.
Here is an explicit CQR for the set:

K = {(x, σ1, σ2) : ‖[x; (σ1 − σ2)/2]‖₂ ≤ (σ1 + σ2)/2} (2.3.26)

Surprisingly, our set is just the ice-cream cone, more precisely, its inverse image under the one-to-one linear mapping

(x; σ1; σ2) ↦ (x; (σ1 − σ2)/2; (σ1 + σ2)/2).


9. The “half-cone” K²₊ = {(x1, x2, t) ∈ R³ : x1, x2 ≥ 0, 0 ≤ t ≤ √(x1x2)} is CQr.
Indeed, our set is the intersection of the cone {t² ≤ x1x2, x1, x2 ≥ 0} from the previous example and the half-space t ≥ 0.
Here is the explicit CQR of K²₊:

K²₊ = {(x1, x2, t) : t ≥ 0, ‖[t; (x1 − x2)/2]‖₂ ≤ (x1 + x2)/2}. (2.3.27)

10. The hypograph of the geometric mean – the set K² = {(x1, x2, t) ∈ R³ : x1, x2 ≥ 0, t ≤ √(x1x2)} – is CQr.
Note the difference with the previous example – here t is not required to be nonnegative!
Here is the explicit CQR for K² (cf. Example 9):

K² = {(x1, x2, t) : ∃τ : t ≤ τ, τ ≥ 0, ‖[τ; (x1 − x2)/2]‖₂ ≤ (x1 + x2)/2}.

11. The hypograph of the geometric mean of 2^l variables – the set K^{2^l} = {(x1, ..., x_{2^l}, t) ∈ R^{2^l+1} : xi ≥ 0, i = 1, ..., 2^l, t ≤ (x1x2...x_{2^l})^{1/2^l}} – is CQr. To see it and to get its CQR, it suffices to iterate the construction of Example 10. Indeed, let us add to our initial variables a number of additional x-variables:
– let us call our 2^l original x-variables the variables of level 0 and write x_{0,i} instead of xi. Let us add one new variable of level 1 per every two variables of level 0. Thus, we add 2^{l−1} variables x_{1,i} of level 1.
– similarly, let us add one new variable of level 2 per every two variables of level 1, thus adding 2^{l−2} variables x_{2,i}; then we add one new variable of level 3 per every two variables of level 2, and so on, until level l with a single variable x_{l,1} is built.
Now let us look at the following system S of constraints:

layer 1: x_{1,i} ≤ √(x_{0,2i−1}x_{0,2i}), x_{1,i}, x_{0,2i−1}, x_{0,2i} ≥ 0, i = 1, ..., 2^{l−1}
layer 2: x_{2,i} ≤ √(x_{1,2i−1}x_{1,2i}), x_{2,i}, x_{1,2i−1}, x_{1,2i} ≥ 0, i = 1, ..., 2^{l−2}
.................
layer l: x_{l,1} ≤ √(x_{l−1,1}x_{l−1,2}), x_{l,1}, x_{l−1,1}, x_{l−1,2} ≥ 0
(∗) t ≤ x_{l,1}

The inequalities of the first layer say that the variables of the zero and the first level should be nonnegative and every one of the variables of the first level should be ≤ the geometric mean of the corresponding pair of our original x-variables. The inequalities of the second layer add the requirement that the variables of the second level should be nonnegative, and every one of them should be ≤ the geometric mean of the corresponding pair of the first level variables, etc. It is clear that if all these inequalities and (∗) are satisfied, then t is ≤ the geometric mean of x1, ..., x_{2^l}. Vice versa, given nonnegative x1, ..., x_{2^l} and a real t which is ≤ the geometric mean of x1, ..., x_{2^l}, we can always extend these data to a solution of S. In other words, K^{2^l} is the projection of the solution set of S onto the plane of our original variables x1, ..., x_{2^l}, t. It remains to note that the set of solutions of S is CQr (as the intersection of CQr sets {(v; p; q; r) ∈ R^N × R³₊ : r ≤ √(pq)}, see Example 9), so that its projection is also CQr. To get a CQR of K^{2^l}, it suffices to replace the inequalities in S with their conic quadratic equivalents, explicitly given in Example 9.
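The layered system S can be checked numerically (our illustration): propagating pairwise geometric means up the levels, the largest representable t is exactly the 2^l-th root of the product.

```python
import numpy as np

def tower_max_t(x):
    """Largest t satisfying the system S for nonnegative x of length 2^l:
    set every intermediate variable to its largest allowed value, i.e.,
    take pairwise geometric means layer by layer."""
    level = np.asarray(x, dtype=float)
    while level.size > 1:
        level = np.sqrt(level[0::2] * level[1::2])  # one layer of constraints
    return level[0]

x = np.array([1.0, 2.0, 3.0, 4.0])          # 2^2 variables
assert np.isclose(tower_max_t(x), x.prod() ** (1 / 4))
```
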

p/q
12. The convex increasing power function x+ of rational degree p/q ≥ 1 is CQr.
Indeed, given positive integers p, q, p > q, let us choose the smallest integer l such that p ≤ 2l , and
consider the CQr set
l l l
K 2 = {(y1 , ..., y2l , s) ∈ R2+ +1 : s ≤ (y1 y2 ...y2l )1/2 }. (2.3.28)
l
Setting r = 2l − p, consider the following affine parameterization of the variables from R2 +1 by two
variables ξ, t:
2.3. WHAT CAN BE EXPRESSED VIA CONIC QUADRATIC CONSTRAINTS? 109

– s and r first variables yi are all equal to ξ (note that we still have 2l − r = p ≥ q “unused” variables
yi );
– q next variables yi are all equal to t;
– the remaining yi ’s, if any, are all equal to 1.
The inverse image of K^{2^l} under this mapping is CQr, and it is the set

K = {(ξ, t) ∈ R^2_+ : ξ^{1−r/2^l} ≤ t^{q/2^l}} = {(ξ, t) ∈ R^2_+ : t ≥ ξ^{p/q}}.
p/q
It remains to note that the epigraph of x+ can be obtained from the CQr set K by operations preserving
the CQr property. Specifically, the set L = {(x, ξ, t) ∈ R3 : ξ ≥ 0, ξ ≥ x, t ≥ ξ p/q } is the intersection
p/q
of K × R and the half-space {(x, ξ, t) : ξ ≥ x} and thus is CQr along with K, and Epi{x+ } is the
projection of the CQr set L on the plane of x, t-variables.
13. The decreasing power function

g(x) = x^{−p/q} for x > 0,   g(x) = +∞ for x ≤ 0

(p, q positive integers) is CQr.
As in Example 12, we choose the smallest integer l such that 2^l ≥ p + q, consider the CQr set (2.3.28), and parameterize affinely the variables y_i, s by two variables (x, t) as follows:
– s and the first 2^l − p − q of the y_i's are all equal to one;
– p of the remaining y_i's are all equal to x, and the q last y_i's are all equal to t.
It is immediately seen that the inverse image of K^{2^l} under the indicated affine mapping is the epigraph of g.
14. The even power function g(x) = x^{2p} on the axis (p a positive integer) is CQr.
Indeed, we already know that the sets P = {(x, ξ, t) ∈ R^3 : x^2 ≤ ξ} and Q = {(x, ξ, t) ∈ R^3 : 0 ≤ ξ, ξ^p ≤ t} are CQr (both sets are direct products of R and sets with already known CQRs). It remains to note that the epigraph of g is the projection of P ∩ Q onto the (x, t)-plane.
Example 14, along with our combination rules, allows us to build a CQR for a polynomial

p(x) = Σ_{l=1}^{L} p_l x^{2l},   x ∈ R,

with nonnegative coefficients.
15. The concave monomial x_1^{π_1} ... x_n^{π_n}. Let π_1 = p_1/p, ..., π_n = p_n/p be positive rational numbers with π_1 + ... + π_n ≤ 1. The function

f(x) = −x_1^{π_1} ... x_n^{π_n} : R^n_+ → R

is CQr.
The construction is similar to the one of Example 12. Let l be such that 2^l ≥ p. We recall that the set

Y = {(y_1, ..., y_{2^l}, s) : y_1, ..., y_{2^l} ≥ 0, 0 ≤ s ≤ (y_1 ... y_{2^l})^{1/2^l}}
is CQr, and therefore so is its inverse image under the affine mapping

(x_1, ..., x_n, s) ↦ (x_1, ..., x_1, x_2, ..., x_2, ..., x_n, ..., x_n, s, ..., s, 1, ..., 1, s),

in which x_1 is repeated p_1 times, ..., x_n is repeated p_n times, s is repeated 2^l − p times, and 1 is repeated p − p_1 − ... − p_n times,
i.e., the set

Z = {(x_1, ..., x_n, s) : x_1, ..., x_n ≥ 0, 0 ≤ s ≤ (x_1^{p_1} ... x_n^{p_n} s^{2^l−p})^{1/2^l}}
  = {(x_1, ..., x_n, s) : x_1, ..., x_n ≥ 0, 0 ≤ s ≤ x_1^{p_1/p} ... x_n^{p_n/p}}.
Since the set Z is CQr, so is the set

Z′ = {(x_1, ..., x_n, t, s) : x_1, ..., x_n ≥ 0, s ≥ 0, 0 ≤ s − t ≤ x_1^{π_1} ... x_n^{π_n}},

which is the intersection of the half-space {s ≥ 0} and the inverse image of Z under the affine mapping (x_1, ..., x_n, t, s) ↦ (x_1, ..., x_n, s − t). It remains to note that the epigraph of f is the projection of Z′ onto the plane of the variables x_1, ..., x_n, t.
16. The convex monomial x_1^{−π_1} ... x_n^{−π_n}. Let π_1, ..., π_n be positive rational numbers. The function

f(x) = x_1^{−π_1} ... x_n^{−π_n} : {x ∈ R^n : x > 0} → R

is CQr.
The verification is completely similar to the one in Example 15.
17a. The p-norm ‖x‖_p = ( Σ_{i=1}^n |x_i|^p )^{1/p} : R^n → R (p ≥ 1 a rational number). We claim that the function ‖x‖_p is CQr.
It is immediately seen that

‖x‖_p ≤ t ⇔ t ≥ 0 & ∃v_1, ..., v_n ≥ 0 : |x_i| ≤ t^{(p−1)/p} v_i^{1/p}, i = 1, ..., n,  Σ_{i=1}^n v_i ≤ t.   (2.3.29)
Indeed, if the indicated v_i exist, then Σ_{i=1}^n |x_i|^p ≤ t^{p−1} Σ_{i=1}^n v_i ≤ t^p, i.e., ‖x‖_p ≤ t. Vice versa, assume that ‖x‖_p ≤ t. If t = 0, then x = 0, and the right hand side relations in (2.3.29) are satisfied for v_i = 0, i = 1, ..., n. If t > 0, we can satisfy these relations by setting v_i = |x_i|^p t^{1−p}.
(2.3.29) says that the epigraph of ‖x‖_p is the projection onto the (x, t)-plane of the set of solutions to the system of inequalities

t ≥ 0
v_i ≥ 0, i = 1, ..., n
x_i ≤ t^{(p−1)/p} v_i^{1/p}, i = 1, ..., n
−x_i ≤ t^{(p−1)/p} v_i^{1/p}, i = 1, ..., n
v_1 + ... + v_n ≤ t

Each of these inequalities defines a CQr set (in particular, for the nonlinear inequalities this is due to Example 15). Thus, the solution set of the system is CQr (as an intersection of finitely many CQr sets), whence its projection on the (x, t)-plane – i.e., the epigraph of ‖x‖_p – is CQr.
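The certificate v_i = |x_i|^p t^{1−p} used in the proof of (2.3.29) can be tested directly; here is a small numeric sketch (ours), with p = 3:

```python
import random

def pnorm_cert(x, t, p):
    """Given ||x||_p <= t, build the certificate v_i = |x_i|^p t^(1-p)
    from (2.3.29) and verify all its inequalities; t = 0 forces x = 0."""
    if t == 0:
        return all(xi == 0 for xi in x)
    v = [abs(xi) ** p * t ** (1 - p) for xi in x]
    ok = all(abs(xi) <= t ** ((p - 1) / p) * vi ** (1 / p) + 1e-9
             for xi, vi in zip(x, v))
    return ok and sum(v) <= t + 1e-9

random.seed(1)
p = 3.0
for _ in range(200):
    x = [random.uniform(-2, 2) for _ in range(5)]
    norm = sum(abs(xi) ** p for xi in x) ** (1 / p)
    t = norm * random.uniform(1.0, 2.0)   # any t >= ||x||_p
    assert pnorm_cert(x, t, p)
print("certificates of (2.3.29) verified")
```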
17b. The function ‖x_+‖_p = ( Σ_{i=1}^n max^p[x_i, 0] )^{1/p} : R^n → R (p ≥ 1 a rational number) is CQr.
Indeed,

t ≥ ‖x_+‖_p ⇔ ∃y_1, ..., y_n : 0 ≤ y_i, x_i ≤ y_i, i = 1, ..., n, ‖y‖_p ≤ t.

Thus, the epigraph of ‖x_+‖_p is a projection of the CQr set (see Example 17a) given by the system of inequalities in the right hand side.
From the above examples it is seen that the "expressive abilities" of c.q.i.'s are indeed strong: they allow one to handle a wide variety of very different functions and sets.
2.3.6 Fast CQr approximations of exponent and logarithm
The epigraph of the exponent (which, after "changing the point of view," becomes the hypograph of the logarithm) is a convex set important for applications which is not representable via Magic Cones. However, "for all practical purposes" this set is CQr (rigorously speaking, it admits a "short" high-accuracy CQr approximation).
Exponent exp{x} which lives in our mind is defined on the entire real axis and rapidly goes
to 0 as x → −∞ and to +∞ as x → ∞. Exponent which lives in a computer is a different beast:
if you ask a computer with the usual floating point arithmetics what is exp{−750} or exp{750},
it will return 0 in the first, and +∞ in the second case. Thus, “for all practical purposes” we
can restrict the domain of the exponent — pass from exp{x} to
Exp_R(x) = exp{x} for |x| ≤ R,   Exp_R(x) = +∞ otherwise,
with a once and forever fixed moderate (a few hundred) R.
Proposition 2.3.7 "For all practical purposes," Exp_R(·) is CQr. Rigorously speaking: for every ε ∈ (0, 0.1) we can point out a CQr function E_{R,ε} with domain [−R, R] and with explicit CQR (involving O(1) ln(R/ε) variables and conic quadratic constraints) such that

∀x ∈ [−R, R] : (1 − ε) exp{x} ≤ E_{R,ε}(x) ≤ exp{x}.
Proof is given by an explicit construction as follows. Let k be a positive integer such that 2^k > 2R. For x ∈ [−R, R], setting y = 2^{−k}x, we have |y| ≤ 1/2, whence, as is immediately seen,

exp{y − 4y^2} ≤ 1 + y ≤ exp{y}  &  exp{x} = exp{2^k y}.

Consequently,

exp{x} exp{−2^{k+2} y^2} ≤ [1 + y]^{(2^k)} ≤ exp{x}.

We have 2^{k+2} y^2 ≤ 2^{2−k} R^2. Consequently, with properly chosen O(1) and k = ⌈O(1) ln(R/ε)⌉, we have

(1 − ε) exp{x} ≤ [1 + 2^{−k}x]^{(2^k)} ≤ exp{x}  ∀x ∈ [−R, R].
With the just defined k, the CQr function E_{R,ε}(x) given by the CQR

t ≥ E_{R,ε}(x) ⇔ |x| ≤ R, ∃u_0, ..., u_{k−1} : 1 + 2^{−k}x ≤ u_0, u_0^2 ≤ u_1, u_1^2 ≤ u_2, ..., u_{k−2}^2 ≤ u_{k−1}, u_{k−1}^2 ≤ t

is the required CQr approximation of Exp_R(·). □
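The construction is easy to reproduce numerically. Below is a small sketch (ours) on the illustrative range [−5, 5] with k = 25; the tolerance is kept loose, since floating point rounding of 1 + 2^{−k}x limits the achievable accuracy (a point the text returns to at the end of this section).

```python
import math

def E(x, k):
    """(1 + 2^(-k) x)^(2^k) computed by k successive squarings, as in the CQR."""
    u = 1.0 + x / 2**k
    for _ in range(k):
        u = u * u
    return u

for x in [-5.0, -1.0, 0.0, 1.0, 5.0]:
    rel = E(x, 25) / math.exp(x)
    assert abs(rel - 1.0) < 1e-5   # loose: fp rounding of 1 + 2^-k x accumulates
print("repeated squaring approximates exp on [-5, 5]")
```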
Fast CQr approximation of the logarithm which "lives in a computer." Tight CQr approximation of the "computer exponent" Exp_R(·) yields a tight CQr approximation of the (minus) "computer logarithm." The construction is as follows:
Given ε ∈ (0, 0.1) and R, we have built a CQr set Q = Q_{R,ε} ⊂ R^2 with "short" (O(1) ln(R/ε) variables and conic quadratic constraints) explicit CQR and have ensured that

A) If (x, t) ∈ Q and t′ ≥ t, then (x, t′) ∈ Q;

B) If (x, t) ∈ Q, then |x| ≤ R and t ≥ (1 − ε) exp{x};

C) If |x| ≤ R, then there exists t such that (x, t) ∈ Q and t ≤ (1 + ε) exp{x}.

Now let

∆ = ∆_{R,ε} = (1 + ε)[exp{−R}, exp{R}]

(with R like 700, ∆ is, "for all practical purposes," the entire positive ray), and let

Q̄ = Q̄_{R,ε} = {(x, t) ∈ Q_{R,ε} : t ∈ ∆}.
Note that Q̄ is CQr with an explicit and short CQR readily given by the CQR of Q. Let the function Ln(t) := Ln_{R,ε}(t) : R → R ∪ {−∞} be defined by the relation

z ≤ Ln(t) ⇔ ∃x : z ≤ x & (x, t) ∈ Q̄.   (2.3.30)
From A) – C) immediately follows

Proposition 2.3.8 Ln(t) is a concave function with hypograph given by an explicit CQR, and this function approximates ln(t) on ∆ within accuracy O(ε):

t ∉ ∆ ⇒ Ln(t) = −∞  &  t ∈ ∆ ⇒ −ln(1 + ε) ≤ Ln(t) − ln(t) ≤ ln( 1/(1 − ε) ).
Verification is immediate. When t ∉ ∆, the right hand side condition in (2.3.30) never takes place (since (x, t) ∈ Q̄ implies t ∈ ∆), implying that Ln(t) = −∞ outside of ∆. Now let t ∈ ∆. If z ≤ Ln(t), then there exists x ≥ z such that (x, t) ∈ Q̄ ⊂ Q, whence exp{x}(1 − ε) ≤ t by B), that is, z ≤ x ≤ ln(t) − ln(1 − ε). Since this relation holds true for every z ≤ Ln(t), we get

Ln(t) ≤ ln(t) − ln(1 − ε).

On the other hand, let x_t = ln(t) − ln(1 + ε), that is, exp{x_t}(1 + ε) = t. Since t ∈ ∆, we have |x_t| ≤ R, which, by C), implies that there exists t′ such that (x_t, t′) ∈ Q and t′ ≤ (1 + ε) exp{x_t} = t. By A) it follows that (x_t, t) ∈ Q, and since t ∈ ∆, we have also (x_t, t) ∈ Q̄. Setting z = x_t, we get z ≤ x_t and (x_t, t) ∈ Q̄, that is, z = x_t ≤ Ln(t) by (2.3.30). Thus, Ln(t) ≥ x_t = ln(t) − ln(1 + ε), completing the proof of the Proposition. □
Refinement. Our construction of fast CQr approximation of ExpR (·) (which, as a byproduct,
gives fast CQr approximation of LnR (·)) has two components:

• Computing exp{x} for large x reduces to computing exp{2^{−k}x} and squaring the result k times;

• For small y, exp{y} ≈ 1 + y, and this simplest approximation is accurate enough for our
purposes.

Note that the second component can be improved: we can approximate exp{y} by a larger part
of the Taylor expansion, provided that the epigraph of this part is CQr. For example,

g_6(y) = 1 + y + y^2/2 + y^3/6 + y^4/24 + y^5/120 + y^6/720

for small y approximates exp{y} much better than g_1(y) = 1 + y, and happens to be a convex function of y representable as

g_6(y) = c_0 + c_2(α_2 + y)^2 + c_4(α_4 + y)^4 + c_6(α_6 + y)^6   [c_i > 0 ∀i].

As a result, g_6 is CQr with a simple CQR (see footnote 4). Consequently, the CQr function E(x) with the CQR

t ≥ E(x) ⇔ |x| ≤ R, ∃u_0, ..., u_{k−1} : g_6(2^{−k}x) ≤ u_0, u_0^2 ≤ u_1, u_1^2 ≤ u_2, ..., u_{k−2}^2 ≤ u_{k−1}, u_{k−1}^2 ≤ t

(the constraint g_6(2^{−k}x) ≤ u_0 being CQr)

ensures the target relation

|x| ≤ R ⇒ (1 − ε) exp{x} ≤ E(x) ≤ (1 + ε) exp{x}

with smaller k than in our initial construction. For example, with g_6 in the role of g_1, R = 700 and k = 15, we ensure ε = 3.0e-11. This is an "honest" result – it indeed is what happens on an actual computer. In this respect it should be mentioned that our previous considerations contain an element of cheating (hopefully recognized by a careful reader): for reasons similar to those which make the "computer exponent" of 750 equal to +∞, applying the standard floating point arithmetic to numbers like 1 + y for "very small" y leads to a significant loss of accuracy, and in our reasoning we tacitly assumed (as everywhere in this course) that when operating with CQR's we use precise Real Arithmetic. With a floating point implementation, the best ε achievable with our initial construction, as applied with R = 700, is as "large" as 1.e-5, the corresponding k being 35.
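The g_6-based construction can be checked numerically as well. The sketch below (ours) uses an illustrative range [−5, 5] with k = 10 rather than the text's R = 700, k = 15, and verifies that squaring the degree-6 truncation reproduces exp to high relative accuracy.

```python
import math

def g6(y):
    # degree-6 Taylor truncation of exp
    return 1 + y + y**2/2 + y**3/6 + y**4/24 + y**5/120 + y**6/720

def E6(x, k):
    # g6(2^-k x), then k successive squarings, as in the refined CQR
    u = g6(x / 2**k)
    for _ in range(k):
        u = u * u
    return u

for x in [-5.0, -1.0, 0.0, 2.0, 5.0]:
    assert abs(E6(x, 10) / math.exp(x) - 1.0) < 1e-9
print("degree-6 truncation + squaring matches exp closely")
```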

2.3.7 From CQR’s to K-representations of functions and sets
On closer inspection, the calculus of conic quadratic representations of convex sets and functions we have developed so far has nothing in common with Lorentz cones per se — we could speak about "K-representable" functions and sets, where K is a family of regular cones in finite-dimensional Euclidean spaces such that K

1. contains nonnegative ray,

2. contains the 3D Lorentz cone,

3. is closed w.r.t. taking finite direct products of its members, and

4. is closed w.r.t. passing from a cone to its dual.

Given such a family, we can define K-representation of a set X ⊂ Rn as a representation

X = {x ∈ Rn : ∃u : Ax + Bu + c ∈ K}

with K ∈ K, and a K-representation of a function f(x) : R^m → R ∪ {+∞} as a K-representation of the epigraph of the function.
What we have developed so far dealt with the case when K is comprised of finite direct
products of Lorentz cones; however, straightforward inspection shows that the calculus rules
(Footnote 4: There is "numerical evidence" that the same holds true for all even degree 2k "truncations" of the Taylor series of exp{x}: these truncations are combinations, with nonnegative coefficients, of even degrees of linear functions of x and therefore are CQr. At least, this definitely is the case when 1 ≤ k ≤ 6.)

remain intact when replacing conic quadratic representability with K-representability (see footnote 5). What
changes when passing from one family K to another is the “raw materials,” and therefore the
scope of K-representable sets and functions. The importance of calculus of K-representability
stems from the fact that, given a solver for conic problems on cones from K, this calculus allows one to recognize that in the problem of interest min_{x∈X} Ψ(x) the objective Ψ and the domain X are K-representable (see footnote 6), so that the problem can be converted to a conic problem on a cone from K and solved by the solver at hand.
As a matter of fact, as far as its paradigm and set of rules (not the set of raw materials!) are
concerned,“calculus of K-representability” we have developed so far covers basically all needs of
“well-structured” convex optimization. There is, however, an exception – this is the case when
the objective in the convex problem of interest

min Ψ(x) (P )
x∈X

is given implicitly:
Ψ(x) = sup ψ(x, y) (2.3.31)
y∈Y

where Y is a convex set and ψ : X × Y → R is convex-concave (i.e., convex in x ∈ X and concave in y ∈ Y) and continuous. Problem (P) with objective given by (2.3.31) is called the "primal problem associated with the convex-concave saddle point problem min_{x∈X} max_{y∈Y} ψ(x, y)" (see Section D.4), and problems of this type do arise in some applications of well-structured convex optimization. We are about to present a saddle point version of K-representability along with the corresponding calculus, which allows one to convert "K-representable convex-concave saddle point problems" into usual conic problems on cones from K.
What follows is taken from [31], where one can also find results on conic representations
of the “most general problems with convex structure” – variational inequalities with monotone
operators, the topic to be considered in Section 5.7.1.
From now on we fix a family K of regular cones in Euclidean spaces which contains the nonnegative rays and L^3, and is closed w.r.t. taking finite direct products and passing from a cone K to its dual K_*. Unless otherwise explicitly stated, all cones below belong to K.

2.3.7.1 Conic representability of convex-concave function—definition
Let X, Y be nonempty convex sets given by K-representations:

X = {x : ∃ξ : Ax + Bξ ≤_{K_X} c},   Y = {y : ∃η : Cy + Dη ≤_{K_Y} e}.

Let us say that a convex-concave continuous function ψ(x, y) : X × Y → R is K-representable


on X × Y, if it admits representation of the form
n o
∀(x ∈ X , y ∈ Y) : ψ(x, y) = inf f T y + t : P f + tp + Qu + Rx ≤K s (2.3.32)
f,t,u

5
in this respect it should be noted that as far as calculus rules are concerned, the only place where the inclusion
3
L ∈ K was used was when claiming that the positive ray {x > 0} on the real axis is K-representable, see Section
2.3.4.K.
6
the “one allowed to recognize” not necessary should be a human being – calculus rules are fully algorithmic
and thus can be implemented by a compiler, the most famous example being the cvx software of M. Grant and S.
Boyd capable to handle “semidefinite representability” (K is comprised of direct products of semidefinite cones),
and as a byproduct – conic quadratic representability.
2.3. WHAT CAN BE EXPRESSED VIA CONIC QUADRATIC CONSTRAINTS? 115

where K ∈ K. We call representation (2.3.32) essentially strictly feasible, if the conic constraint

P f + tp + Qu ≤K s − Rx

in variables f, t, u is essentially strictly feasible for every x ∈ X .

2.3.7.2 Main observation
Assume that Y is compact and is given by an essentially strictly feasible K-representation

Y = {y : ∃η : Cy + Dη ≤_{K_Y} e}.   (2.3.33)

Then problem (P) can be processed as follows: for x ∈ X we have
Ψ(x) = max_{y∈Y} inf_{f,t,u} { f^T y + t : P f + tp + Qu + Rx ≤_K s }
     = inf_{f,t,u} { max_{y∈Y} [f^T y + t] : P f + tp + Qu + Rx ≤_K s }
       [by the Sion-Kakutani Theorem (Theorem D.4.3); recall that Y is convex and compact]
     = inf_{f,t,u} { max_{y,η} { f^T y : Cy + Dη ≤_{K_Y} e } + t : P f + tp + Qu + Rx ≤_K s }
     = inf_{f,t,u} { min_λ { λ^T e : C^T λ = f, D^T λ = 0, λ ≥_{K_Y^*} 0 } + t : P f + tp + Qu + Rx ≤_K s }
       [by strong conic duality, Theorem 1.4.4; recall that (2.3.33) is essentially strictly feasible],

so that the problem of interest

min_{x∈X} Ψ(x)   (a)

reduces to the explicit K-conic problem

min_{x,ξ,f,t,u,λ} { e^T λ + t : P f + tp + Qu + Rx ≤_K s,  C^T λ = f,  D^T λ = 0,  λ ≥_{K_Y^*} 0,  Ax + Bξ ≤_{K_X} c }.   (b)
Here, "reduction" means that the x-component of a feasible solution ζ = (x, ξ, f, t, u, λ) to (b) is a feasible solution to (a), with the value of the objective of the latter problem at x being ≤ the value of the objective of (b) at ζ, and the optimal values in (a) and (b) are the same. Thus, as far as building feasible approximate solutions of a prescribed accuracy ε > 0 in terms of the objective is concerned, problem (a) reduces to the explicit conic problem (b). Note, however, that (a) and (b) are not "exactly the same" – it may happen that (a) is solvable while (b) is not. "For all practical purposes" this subtle difference is of no importance, since in actual computations exactly optimal solutions usually are not reachable anyway.
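The pattern "dualize the inner maximization to get one explicit conic problem" is easy to see on a toy bilinear example. The sketch below (ours; it assumes numpy and scipy are available) takes min_{x∈Δ} max_{y∈Δ} x^T A y — a matrix game, where Y is a simplex so conic duality reduces to LP duality — reformulates both orders as explicit LPs, and checks that their optimal values agree:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
A = rng.uniform(-1, 1, size=(4, 6))
m, n = A.shape

# analogue of (b): min_{x,t} t  s.t.  A^T x <= t*1, sum(x) = 1, x >= 0
c = np.zeros(m + 1); c[-1] = 1.0
res1 = linprog(c,
               A_ub=np.hstack([A.T, -np.ones((n, 1))]), b_ub=np.zeros(n),
               A_eq=np.hstack([np.ones((1, m)), np.zeros((1, 1))]), b_eq=[1.0],
               bounds=[(0, None)] * m + [(None, None)])

# the symmetric reformulation: max_{y,s} s  s.t.  A y >= s*1, sum(y) = 1, y >= 0
c2 = np.zeros(n + 1); c2[-1] = -1.0   # minimize -s
res2 = linprog(c2,
               A_ub=np.hstack([-A, np.ones((m, 1))]), b_ub=np.zeros(m),
               A_eq=np.hstack([np.ones((1, n)), np.zeros((1, 1))]), b_eq=[1.0],
               bounds=[(0, None)] * n + [(None, None)])

# equal optimal values: min max = max min, as the derivation above asserts
assert abs(res1.fun - (-res2.fun)) < 1e-6
print("saddle point value via both explicit LPs:", round(res1.fun, 6))
```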

Discussion. Note that for a continuous convex-concave function ψ : X × Y → R the set

Z = {[f; t; x] : x ∈ X, f^T y + t ≥ ψ(x, y) ∀y ∈ Y}

clearly is convex, and by the standard Fenchel duality we have

∀(x ∈ X, y ∈ Y) : ψ(x, y) = inf_{f,t} { f^T y + t : [f; t; x] ∈ Z }.   (2.3.34)

K-representability of ψ on X × Y means that (2.3.34) is preserved when replacing the set Z with
its properly selected K-representable subset. Given that Z is convex, this assumption seems
to be not too restrictive; taken together with K-representability of X and Y, it can be treated
as the definition of K-representability of the convex-concave function ψ. The above derivation
shows that convex-concave saddle point problem with K-representable domain and cost function
(more precisely, the primal minimization problem (P ) induced by this saddle point problem)
can be represented in explicit K-conic form, at least when the K-representations of the cost and
of (compact) Y are essentially strictly feasible.
Note also that if X and Y are convex sets and a function ψ(x, y) : X × Y → R admits
representation (2.3.32), then ψ automatically is convex in x ∈ X and concave in y ∈ Y.

2.3.7.3 Symmetry
Assume that representation (2.3.32) is essentially strictly feasible. Then for all x ∈ X , y ∈ Y we
have by conic duality
ψ(x, y) = inf_{f,t,u} { f^T y + t : P f + tp + Qu + Rx ≤_K s }
        = sup_{u∈K_*} { u^T[Rx − s] : P^T u + y = 0, p^T u + 1 = 0, Q^T u = 0 },

whence, setting

X̄ = Y,  Ȳ = X,  x̄ = y,  ȳ = x,  ψ̄(x̄, ȳ) = −ψ(ȳ, x̄) = −ψ(x, y),

we have

(∀x̄ ∈ X̄, ȳ ∈ Ȳ):
ψ̄(x̄, ȳ) = −ψ(x, y) = inf_{u∈K_*} { −u^T[Rx − s] : P^T u + y = 0, p^T u + 1 = 0, Q^T u = 0 }
          = inf_{f̄,t̄,u} { f̄^T ȳ + t̄ : f̄ = −R^T u, t̄ = s^T u, Q^T u = 0, p^T u + 1 = 0, P^T u + x̄ = 0, u ∈ K_* }

(the constraints on f̄, t̄, u are of the form P̄ f̄ + t̄p̄ + Q̄u + R̄x̄ ≤_{K̄} s̄)
with K̄ ∈ K. We see that an (essentially strictly feasible) K-representation of a convex-concave function ψ on X × Y straightforwardly induces a K-representation of the "symmetric entity" – the convex-concave function ψ̄(x̄, ȳ) = −ψ(ȳ, x̄) on Y × X – with immediate consequences for converting the optimization problem

sup_{y∈Y} [ Ψ̄(y) := inf_{x∈X} ψ(x, y) ]   (D)

into the standard conic form.

2.3.7.4 Calculus of conic representations of convex-concave functions
Representations of the form (2.3.32) admit a calculus.

2.3.7.4.A. Raw materials for the calculus are given by:

1. Functions ψ(x, y) = a(x), where a(x), Dom a ⊃ X, is K-representable:

t ≥ a(x) ⇔ ∃u : Rx + tp + Qu ≤_K s   [K ∈ K].
In this case

ψ(x, y) = inf_{f,t,u} { f^T y + t : f = 0, Rx + tp + Qu ≤_K s }

(the constraints are of the form P f + tp + Qu + Rx ≤_K s with K ∈ K).

2. Functions ψ(x, y) = −b(y), where b(y), Dom b ⊃ Y, is K-representable:

t ≥ b(y) ⇔ ∃u : Ry + tp + Qu ≤_K s   [K ∈ K]

with an essentially strictly feasible K-representation. In this case

ψ(x, y) = −b(y) = − inf_{t,u} { t : Ry + tp + Qu ≤_K s }
        = − sup_{u∈K_*} { −[R^T u]^T y − s^T u : −u^T p = 1, Q^T u = 0 }   [by conic duality]
        = inf_{f,t,u} { f^T y + t : f = R^T u, t = s^T u, p^T u + 1 = 0, Q^T u = 0, u ≥_{K_*} 0 }

(the constraints are of the form P̄f + tp̄ + Q̄u ≤_{K̄} s̄ with K̄ ∈ K).

3. Bilinear functions:

ψ(x, y) ≡ a^T x + b^T y + x^T Ay + c ⇒ ψ(x, y) = min_{f,t} { f^T y + t : f = A^T x + b, t = a^T x + c }

(the constraints are of the form P f + tp + Rx ≤ s).
4. "Generalized bilinear functions." Let U ∈ K and let E be the Euclidean space in which U is embedded.

(a) Let X be a nonempty K-representable set, and let a continuous mapping F(x) : X → E possess a K-representable U-epigraph (see footnote 7):

Epi_U F := {(x, z) ∈ X × E : z ≥_U F(x)} = {(x, z) : ∃u : Rx + Sz + Tu ≤_K s}   [K ∈ K].

Then the function

ψ(x, y) = y^T F(x) : X × U_* → R

is K-representable on X × U_*:

∀(x ∈ X, y ∈ U_*):
ψ(x, y) = y^T F(x) = inf_f { f^T y : f ≥_U F(x) }
        = inf_{f,u} { f^T y : Rx + Sf + Tu ≤_K s }.

(b) Let Y be a nonempty K-representable set, and let a continuous mapping G(y) : Y → E possess a K-representable U_*-hypograph,

Hypo_{U_*} G := {(y, w) ∈ Y × E : w ≤_{U_*} G(y)} = {(y, w) : ∃u : Ry + Sw + Qu ≥_K s}   [K ∈ K],

the representation being essentially strictly feasible. Then the function

ψ(x, y) = x^T G(y) : U × Y → R

is K-representable on U × Y:

∀(x ∈ U, y ∈ Y):
ψ(x, y) = x^T G(y) = sup_w { x^T w : w ≤_{U_*} G(y) }   [due to x ∈ U]
        = sup_{w,u} { x^T w : Ry + Sw + Qu ≥_K s }
        = inf_λ { [s − Ry]^T λ : S^T λ = x, Q^T λ = 0, λ ∈ K_* }   [by conic duality]
        = inf_{f,t,u=[λ;w]} { f^T y + t : f + R^T λ = 0, s^T λ = t, S^T λ = x, Q^T λ = 0, λ ∈ K_* }   [K ∈ K]   (2.3.35)

(the constraints are of the form P f + tp + Qu + Rx ≤_K s).

(Footnote 7: this implies, in particular, that the U-epigraph of F is convex, or, which is the same, that F is U-convex:

∀(x′, x″ ∈ X, λ ∈ [0, 1]) : F(λx′ + (1 − λ)x″) ≤_U λF(x′) + (1 − λ)F(x″).)
(c) Let Y and G(·) be as in item (b) with G(Y) ⊂ U_*, let X be a nonempty K-representable set, and let

F(x) : X → U

be a continuous U-convex mapping with K-representable U-epigraph:

Epi_U F := {(x, z) : x ∈ X, z ≥_U F(x)} = {(x, z) : ∃v : R̂x + Ŝz + Q̂v ≤_{K̂} ŝ}   [K̂ ∈ K].   (2.3.36)

Then the function

ψ(x, y) = F^T(x) G(y) : X × Y → R

is continuous convex-concave and admits a K-representation as follows:

∀(x ∈ X, y ∈ Y):
ψ(x, y) = F^T(x) G(y)
        = inf_z { z^T G(y) : z ≥_U F(x) }   [since G(y) ∈ U_*]
        = inf_{z,v} { z^T G(y) : R̂x + Ŝz + Q̂v ≤_{K̂} ŝ }   [by (2.3.36)]
        = inf_{z,v} { inf_{f,t,u} { f^T y + t : P f + tp + Qu + Rz ≤_K s } : R̂x + Ŝz + Q̂v ≤_{K̂} ŝ }

due to (2.3.35) – note that on the domain over which inf_{z,v} is taken we have z ≥_U F(x) ∈ U, making (2.3.35) applicable. We conclude that

ψ(x, y) = inf_{f,t,u=[z;v]} { f^T y + t : P f + tp + Qu + Rz ≤_K s,  R̂x + Ŝz + Q̂v ≤_{K̂} ŝ }   [K̃ ∈ K]

(the constraints are of the form P̃f + tp̃ + Q̃u + R̃x ≤_{K̃} s̃).
2.3.7.4.B. Basic calculus rules are as follows.


1. [Direct summation] Let θ_i > 0, i ≤ I, and let

∀(x^i ∈ X_i, y^i ∈ Y_i, i ≤ I):
ψ_i(x^i, y^i) = inf_{f_i,t_i,u_i} { f_i^T y^i + t_i : P_i f_i + t_i p_i + Q_i u_i + R_i x^i ≤_{K_i} s_i }   [K_i ∈ K].

Then

∀(x = [x^1; ...; x^I] ∈ X = X_1 × ... × X_I,  y = [y^1; ...; y^I] ∈ Y = Y_1 × ... × Y_I):
ψ(x, y) := Σ_i θ_i ψ_i(x^i, y^i)
         = inf_{f,t,u={f_i,t_i,u_i, i≤I}} { f^T y + t : f = [θ_1 f_1; ...; θ_I f_I], t = Σ_i θ_i t_i, P_i f_i + t_i p_i + Q_i u_i + R_i x^i ≤_{K_i} s_i, 1 ≤ i ≤ I }

(the constraints are of the form P f + tp + Qu + Rx ≤_K s with K ∈ K).

2. [Affine substitution of variables] Let

∀(ξ ∈ X^+, η ∈ Y^+):
ψ_+(ξ, η) = inf_{f_+,t_+,u_+} { f_+^T η + t_+ : P_+ f_+ + t_+ p_+ + Q_+ u_+ + R_+ ξ ≤_{K_+} s_+ },

and

x ↦ Ax + b : X → X^+,   y ↦ By + c : Y → Y^+.

Then

∀(x ∈ X, y ∈ Y):
ψ(x, y) := ψ_+(Ax + b, By + c)
         = inf_{f_+,t_+,u_+} { f_+^T(By + c) + t_+ : P_+ f_+ + t_+ p_+ + Q_+ u_+ + R_+[Ax + b] ≤_{K_+} s_+ }
         = inf_{f,t,u=[f_+;t_+;u_+]} { f^T y + t : f = B^T f_+, t = t_+ + f_+^T c, P_+ f_+ + t_+ p_+ + Q_+ u_+ + R_+ Ax ≤_{K_+} s_+ − R_+ b }

(the constraints are of the form P f + tp + Qu + Rx ≤_K s with K ∈ K).

3. [Taking conic combinations] This rule, evident by itself, is a combination of the two preceding rules:

Let θ_i > 0 and ψ_i(x, y) : X × Y → R, i ≤ I, be such that

∀(x ∈ X, y ∈ Y):
ψ_i(x, y) = inf_{f_i,t_i,u_i} { f_i^T y + t_i : P_i f_i + t_i p_i + Q_i u_i + R_i x ≤_{K_i} s_i }.

Then

∀(x ∈ X, y ∈ Y):
ψ(x, y) := Σ_i θ_i ψ_i(x, y)
         = inf_{f,t,u={f_i,t_i,u_i, i≤I}} { f^T y + t : f = Σ_i θ_i f_i, t = Σ_i θ_i t_i, P_i f_i + t_i p_i + Q_i u_i + R_i x ≤_{K_i} s_i, 1 ≤ i ≤ I }

(the constraints are of the form P f + tp + Qu + Rx ≤_K s with K ∈ K).

4. [Projective transformation in x-variable] Let

∀(x ∈ X, y ∈ Y) : ψ(x, y) = inf_{f,t,u} { f^T y + t : P f + tp + Qu + Rx ≤_K s }   [K ∈ K].

Then

(∀(α, x) : α > 0, α^{−1}x ∈ X, ∀y ∈ Y):
ψ((α, x), y) := αψ(α^{−1}x, y) = inf_{f,t,u} { f^T y + t : P f + tp + Qu + Rx − αs ≤_K 0 }.

5. [Superposition in x-variable] Let X̄, Y be K-representable, let X be a K-representable subset of some R^n, and let U ∈ K be a cone in R^n. Furthermore, assume that

ψ(x, y) : X × Y → R

is a continuous convex-concave function which is U-nondecreasing in x ∈ X, i.e.

∀(y ∈ Y, x′, x″ ∈ X : x′ ≤_U x″) : ψ(x″, y) ≥ ψ(x′, y),

and admits a K-representation on X × Y:

∀(x ∈ X, y ∈ Y) : ψ(x, y) = inf_{f,t,u} { f^T y + t : P f + tp + Qu + Rx ≤_K s }.

Let also

x̄ ↦ X(x̄) : X̄ → X

be a U-convex mapping such that the intersection of the U-epigraph of the mapping with X̄ × X admits a K-representation:

{(x̄, x) : x̄ ∈ X̄, x ∈ X, x ≥_U X(x̄)} = {(x̄, x) : ∃v : Ax̄ + Bx + Cv ≤_{K̂} d}   [K̂ ∈ K].

Then the function ψ̄(x̄, y) = ψ(X(x̄), y) admits a K-representation on X̄ × Y:

∀(x̄ ∈ X̄, y ∈ Y):
ψ̄(x̄, y) = ψ(X(x̄), y) = inf_x { ψ(x, y) : x ∈ X, x ≥_U X(x̄) }
         = inf_{f,t,u,x} { f^T y + t : x ∈ X, x ≥_U X(x̄), P f + tp + Qu + Rx ≤_K s }
         = inf_{f,t,u,x,v} { f^T y + t : Ax̄ + Bx + Cv ≤_{K̂} d, P f + tp + Qu + Rx ≤_K s }

(the constraints are of the form P̄f + tp̄ + Q̄ū + R̄x̄ ≤_{K̄} s̄, with ū = [u; x; v] and K̄ ∈ K).

6. [Partial maximization] Let

∀(x ∈ X, y = [w; z] ∈ Y):
ψ(x, [w; z]) = inf_{[g;h],τ,u} { g^T w + h^T z + τ : Gg + Hh + τp + Qu + Rx ≤_K s }   [K ∈ K],

and let Y be compact and given by a K-representation

Y = {[w; z] : ∃v : Aw + Bz + Cv ≤_L r}

such that the conic constraint Bz + Cv ≤_L r − Aw in variables z, v is essentially strictly feasible for every w ∈ W = {w : ∃z : [w; z] ∈ Y}. Then the function

ψ̄(x; w) := max_z { ψ(x, [w; z]) : [w; z] ∈ Y } : X × W → R

is K-representable provided it is continuous (see footnote 8):

∀(x ∈ X, w ∈ W):
ψ̄(x; w) = max_z { inf_{[g;h],τ,u} { g^T w + h^T z + τ : Gg + Hh + τp + Qu + Rx ≤_K s } : [w; z] ∈ Y }
        = inf_{[g;h],τ,u} { max_z { g^T w + h^T z + τ : [w; z] ∈ Y } : Gg + Hh + τp + Qu + Rx ≤_K s }
          [by the Sion-Kakutani Theorem (Theorem D.4.3); note that for w ∈ W the set {z : [w; z] ∈ Y} is nonempty, convex and compact]
        = inf_{[g;h],τ,u} { max_{z,v} { g^T w + h^T z + τ : Bz + Cv ≤_L r − Aw } : Gg + Hh + τp + Qu + Rx ≤_K s }
        = inf_{[g;h],τ,u} { min_ξ { g^T w + (r − Aw)^T ξ + τ : B^T ξ = h, C^T ξ = 0, ξ ≥_{L_*} 0 } : Gg + Hh + τp + Qu + Rx ≤_K s }   [by conic duality]
        = inf_{f,t,ū=[g,h,τ,u,ξ]} { f^T w + t : f = g − A^T ξ, t = r^T ξ + τ, B^T ξ = h, C^T ξ = 0, ξ ≥_{L_*} 0, Gg + Hh + τp + Qu + Rx ≤_K s }

(the constraints are of the form P f + tp + Q̄ū + Rx ≤_{K̄} s̄ with K̄ ∈ K).
Note that the last three rules combine with symmetry to induce “symmetric” rules on
perspective transformation and superposition in y-variable and partial minimization in
x-variable.
7. [Taking Fenchel conjugate] Let X ⊂ R^n, Y ⊂ R^m be nonempty convex compact sets given by essentially strictly feasible K-representations

X = {x : ∃ξ : Ax + Bξ ≤_{K_X} c},   Y = {y : ∃η : Cy + Dη ≤_{K_Y} e},

and assume that the conic constraint

D^T λ = 0  &  λ ≥_{K_Y^*} 0

is essentially strictly feasible (this definitely is the case when K_Y is polyhedral). Let, next, ψ(x, y) : X × Y → R be a continuous convex-concave function given by an essentially strictly feasible K-representation:

ψ(x, y) = inf_{f,t,u} { f^T y + t : P f + tΠ + Qu + Rx ≤_K s }.

Consider the Fenchel conjugate of ψ: the function

ψ_*(p, q) = max_{x∈X} min_{y∈Y} [ p^T x + q^T y − ψ(x, y) ] : R^n × R^m → R

(cf. item L.1 in Section 2.3.3). We claim that ψ_* is a continuous convex-concave K-representable function with a K-representation readily given by the K-representations of X, Y, ψ.
The fact that ψ_* is well defined and continuous is readily given by the compactness of X, Y and the continuity of ψ. These properties of the data imply, by the Sion-Kakutani Theorem (Theorem D.4.3), that

ψ_*(p, q) = min_{y∈Y} max_{x∈X} [ p^T x + q^T y − ψ(x, y) ].
(Footnote 8: the representation to follow holds true without the latter assumption; we make it to stay consistent with the definition of representability, where the function in question is continuous.)

From the initial max_x min_y definition of ψ_* it follows that

ψ_*(p, q) = max_{x∈X} [ p^T x + min_{y∈Y} [q^T y − ψ(x, y)] ]

is the pointwise maximum of a family of affine functions of p and thus is convex in p. From the min_y max_x representation of ψ_* it follows that

ψ_*(p, q) = min_{y∈Y} [ q^T y + max_{x∈X} [p^T x − ψ(x, y)] ]

is the pointwise minimum of a family of affine functions of q and thus is concave in q.

It remains to build a K-representation of ψ_*. We have

ψ_*(p, q) = max_{x∈X} min_{y∈Y} [ p^T x + q^T y − ψ(x, y) ]
          = max_{x∈X} [ p^T x + min_{y∈Y} sup_{f,t,u} { [q − f]^T y − t : P f + tΠ + Qu + Rx ≤_K s } ]
          = max_{x∈X} [ p^T x + sup_{f,t,u} { min_{y∈Y} [q − f]^T y − t : P f + tΠ + Qu + Rx ≤_K s } ]
            [by the Sion-Kakutani Theorem; note that Y is convex and compact, and [q − f]^T y − t is concave in f, t, u and convex in y]
          = max_{x∈X} [ p^T x + sup_{f,t,u} { sup_λ { −e^T λ : f − C^T λ = q, D^T λ = 0, λ ≥_{K_Y^*} 0 } − t : P f + tΠ + Qu + Rx ≤_K s } ]
            [by Conic Duality; note that Y is given by an essentially strictly feasible K-representation]
          = sup_{x∈X, f,t,u,λ} { p^T x − t − e^T λ : D^T λ = 0 (a),  f − C^T λ = q (b),  λ ≥_{K_Y^*} 0 (c),  P f + tΠ + Qu + Rx ≤_K s (d) }
          = sup_{x,ξ,f,t,u,λ} { p^T x − t − e^T λ : D^T λ = 0 (a),  f − C^T λ = q (b),  λ ≥_{K_Y^*} 0 (c),  P f + tΠ + Qu + Rx ≤_K s (d),  Ax + Bξ ≤_{K_X} c (e) }   (∗)
Now, the K-representation of X is essentially strictly feasible, so that (e) admits an essentially strictly feasible solution x̄, ξ̄; note that x̄ ∈ X. The K-representation of ψ is essentially strictly feasible, implying that x̄ can be augmented by f̄, t̄, ū in such a way that (x̄, f̄, t̄, ū) is an essentially strictly feasible solution to (d). By the origin of the constraints (a) – (c), their system, as a system of constraints on λ, is feasible for all f, q. Besides this, by assumption there exists a representation K_Y^* = M × N with a regular cone M and a polyhedral cone N, and a λ′ ∈ [int M] × N such that D^T λ′ = 0. Taking into account that the system (a) – (c), considered as a system in the variable λ, is solvable for all f, q, there exists λ″ ∈ K_Y^* such that D^T λ″ = 0 and f̄ − C^T λ″ = q + C^T λ′, implying that λ̄ = λ′ + λ″, taken together with f̄, is an essentially strictly feasible solution to the constraints (a) – (c) in variables λ, f. The bottom line is that the constraints (a) – (e) in variables x, ξ, f, t, u, λ form an essentially strictly feasible conic constraint. Besides this, problem (∗) by its origin is bounded. Applying Conic Duality to (∗), we get
ψ_*(p, q) = min_{α,β,γ,δ,ε} { β^T q + s^T δ + c^T ε : γ ≥_{K_Y} 0, δ ≥_{K_*} 0, ε ≥_{K_X^*} 0, R^T δ + A^T ε = p, B^T ε = 0, P^T δ + β = 0, Π^T δ = −1, Q^T δ = 0, Dα − Cβ + e = γ }
          = min_{β,τ,w=(α,δ,ε)} { β^T q + τ : δ ≥_{K_*} 0, ε ≥_{K_X^*} 0, τ = s^T δ + c^T ε, R^T δ + A^T ε = p, B^T ε = 0, P^T δ + β = 0, Π^T δ = −1, Q^T δ = 0, Dα − Cβ + e ≥_{K_Y} 0 },

which is the desired K-representation of ψ_*. □

2.3.8 Illustrations
A. Our first illustration is motivated by a statistical application of saddle point optimization – near-optimal recovery of linear forms in the Discrete observation scheme, see [30, Section 3.1]. Let

ψ(x, y) = ln( Σ_i e^{x_i} y_i ) : X × Y → R,

let X and Y be K-representable, and let Y, 0 ∉ Y, be a compact subset of the nonnegative orthant. Because for z > 0

ln z = inf_u [ z e^u − u − 1 ],

for y ≥ 0 we clearly have
ln( Σ_i e^{x_i} y_i ) = inf_u [ (Σ_i e^{x_i} y_i) e^u − u − 1 ] = inf_{f,u} [ Σ_i y_i f_i − u − 1 : f_i ≥ e^{x_i+u} ]
                      = inf_{f,t,u} { f^T y + t : f_i ≥ e^{x_i+u} ∀i  &  t ≥ −u − 1 }.

The resulting representation is a K-representation, provided that the family K of regular cones (closed w.r.t. taking finite direct products and passing to the dual cone) contains R_+, the exponential cone
\[
\mathbf{E} = \mathrm{cl}\,\{[t;s;r] : t \ge s e^{r/s},\ s > 0\},
\]
and, therefore, its dual cone
\[
\mathbf{E}_* = \mathrm{cl}\,\{[\tau;\sigma;-\rho] : \tau > 0,\ \rho > 0,\ \sigma \ge \rho\ln(\rho/\tau) - \rho\}.
\]
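As a quick numeric sanity check of the identity ln z = inf_u [z e^u − u − 1] underlying this representation (the value z = 2.5 below is an arbitrary choice), a dense grid search recovers both the infimum and the minimizer u* = −ln z:

```python
import numpy as np

# Numerically confirm ln z = inf_u [z*e^u - u - 1]; the infimum is attained
# at u* = -ln z, where z*e^{u*} - u* - 1 = 1 + ln z - 1 = ln z.
z = 2.5
u = np.linspace(-5.0, 5.0, 200001)               # dense grid around the minimizer
vals = z * np.exp(u) - u - 1.0
assert abs(vals.min() - np.log(z)) < 1e-6        # grid infimum matches ln z
assert abs(u[vals.argmin()] + np.log(z)) < 1e-3  # minimizer is u* = -ln z
```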

B. Let now
\[
\psi(x,y) = \Big(\sum_{i=1}^n \theta_i^p(x)\, y_i\Big)^{1/p},
\]

where p > 1, θi (x) are nonnegative K-representable real-valued functions on K-representable set
X , and Y is a K-representable subset of the nonnegative orthant. In this case, as is easily seen,
for all (x ∈ X , y ∈ Y) it holds

\[
\psi(x,y) = \inf_{[f;t]}\Big\{ f^Ty + t \,:\, t \ge 0,\ f \ge 0,\ t^{\frac{p-1}{p}} f_i^{\frac{1}{p}} \ge \kappa\,\theta_i(x),\ i \le n \Big\}
\qquad \Big[\kappa = p^{-1}(p-1)^{\frac{p-1}{p}}\Big]
\]

which can immediately be converted into a K-representation, provided K contains the 3D Lorentz cone L^3 = \{x \in \mathbb{R}^3 : x_3 \ge \sqrt{x_1^2 + x_2^2}\} and p is rational, see Section 2.3.5.
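The infimum representation above can be checked numerically in the simplest case n = 1 (the values p = 3, θ = 2, y = 5 below are arbitrary test data; the infimum over f is taken in closed form at the constraint boundary, and a grid search over t stands in for the remaining infimum):

```python
import numpy as np

# For n = 1, inf over t >= 0, f >= 0 with t^((p-1)/p) * f^(1/p) >= kappa*theta
# of [f*y + t] should equal psi = theta * y^(1/p).  At the constraint boundary
# f = (kappa*theta)^p / t^(p-1), so we minimize over t alone.
p, theta, y = 3.0, 2.0, 5.0
kappa = (p - 1.0) ** ((p - 1.0) / p) / p
t = np.linspace(1e-3, 50.0, 2000001)
f = (kappa * theta) ** p / t ** (p - 1.0)   # smallest feasible f for each t
obj = f * y + t
psi = theta * y ** (1.0 / p)
assert abs(obj.min() - psi) < 1e-4          # grid infimum matches the claimed value
```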
C. In our next example, X ⊂ R^{m×n} and Y ⊂ S^m_+ are nonempty convex sets, and
\[
\psi(x,y) = 2\sqrt{\mathrm{Tr}(x^Tyx)} : X \times Y \to \mathbb{R}.
\]
124 LECTURE 2. CONIC QUADRATIC PROGRAMMING


Taking into account that for a ≥ 0 one has 2\sqrt{a} = \inf_{s>0}[a/s + s], we have, for all (x ∈ X, y ∈ Y):
\[
\psi(x,y) = 2\sqrt{\mathrm{Tr}(y[xx^T])}
= \inf_{g}\big\{ 2\sqrt{\mathrm{Tr}(yg)} \,:\, g \succeq xx^T \big\}
= \inf_{f,s}\big\{ \mathrm{Tr}(yf) + s \,:\, s > 0,\ fs \succeq xx^T \big\}
= \inf_{f,s}\left\{ \mathrm{Tr}(yf) + s \,:\, \begin{bmatrix} f & x \\ x^T & sI_n \end{bmatrix} \succeq 0 \right\}.
\]
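A quick numeric check of the scalar identity 2√a = inf_{s>0}[a/s + s] used in the first step of this chain:

```python
import numpy as np

# For a >= 0, 2*sqrt(a) = inf_{s>0} [a/s + s], attained at s = sqrt(a) (AM-GM).
for a in [0.25, 1.0, 7.0, 100.0]:
    s = np.linspace(1e-3, 25.0, 1000001)    # dense grid covering each minimizer
    vals = a / s + s
    assert abs(vals.min() - 2.0 * np.sqrt(a)) < 1e-6
```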
The resulting representation is a K-representation, provided that K contains semidefinite cones. This is how C works in Robust Markowitz Portfolio Selection (cf., e.g., [22, 25]):
\[
\min_{x\in X} \max_{y\in Y} \Big[ -r^Tx + 2\rho\sqrt{x^Tyx} \Big] \qquad [\rho > 0]
\]

(here x ∈ R^n is the composition of the portfolio, r is the vector of expected returns, and y is the uncertain covariance matrix of the returns). Assuming for the sake of definiteness that Y is cut off S^n_+ by the constraints
\[
\sum_\tau \big[ a_{i\tau}^T y\, b_{i\tau} + b_{i\tau}^T y\, a_{i\tau} \big] \preceq p_i,\ i \le I, \qquad y_- \le y \le y_+
\]
(where ≤ for matrices acts entrywise), and applying our machinery on top of the above semidefinite representation of 2\sqrt{x^Tyx}, the saddle point problem reduces to

\[
\min_{x,s,\alpha_i,\mu_\pm}\left\{ -r^Tx + \rho\Big[s + \sum_i \mathrm{Tr}(\alpha_i p_i) + \mathrm{Tr}(\mu_+ y_+ - \mu_- y_-)\Big] \,:\,
\begin{array}{l}
\begin{bmatrix} \sum_i\sum_\tau \big[a_{i\tau}\alpha_i b_{i\tau}^T + b_{i\tau}\alpha_i a_{i\tau}^T\big] + \mu_+ - \mu_- & x\\ x^T & s \end{bmatrix} \succeq 0,\\
\alpha_i \succeq 0,\ i \le I,\ \mu_\pm \ge 0,\ x \in X
\end{array}\right\}.
\]

A good exercise for the reader is to apply Symmetry in order to build a semidefinite representation of the dual problem, the one of identifying the y-component of the saddle point of [-r^Tx + 2\rho\sqrt{x^Tyx}]. Here is the answer: assuming that X is a compact set given by the essentially strictly feasible K-representation X = \{x : \exists w : Ax + Bw \le_{K_X} c\}, the y-component of a saddle point is the y-component of an optimal solution to the problem
\[
\max_{\lambda,y,z}\left\{ -c^T\lambda \,:\, \begin{bmatrix} y & z\\ z^T & 1 \end{bmatrix} \succeq 0,\ y \in Y,\ A^T\lambda = r + 2\rho z,\ B^T\lambda = 0,\ \lambda \ge_{K_X^*} 0 \right\}.
\]

D. In our concluding example, K contains the products of semidefinite cones, X = R^{m×n}, Y = S^n_+, and
\[
\psi(x,y) = \mathrm{Tr}(x^Tx\,y^{1/2}) : X \times Y \to \mathbb{R}.
\]
This is a "generalized bilinear function"; in terms of item 4.c of Section 2.3.7.4.A we have F(x) = x^Tx, G(y) = y^{1/2}, U = U_* = S^n_+, and
\[
\mathrm{Epi}_U F := \{(x,z) : z \succeq x^Tx\} = \left\{(x,z) \,:\, \begin{bmatrix} z & x^T\\ x & I_m \end{bmatrix} \succeq 0\right\},
\]
\[
\mathrm{Hypo}_{U_*} G := \{(y,w) : y \in S^n_+,\ w \preceq y^{1/2}\} = \left\{(y,w) \,:\, \exists v : \begin{bmatrix} y & v\\ v & I_n \end{bmatrix} \succeq 0,\ v \succeq 0,\ w \preceq v\right\}.
\]

With these data, the construction from item 4.c of Section 2.3.7.4.A leads straightforwardly to
the following semidefinite representation of ψ:
\[
\psi(x,y) := \mathrm{Tr}(x^Tx\,y^{1/2}) = \inf_{f,t,u=(z,\beta,\gamma)}\left\{ \mathrm{Tr}(fy) + t \,:\,
\begin{array}{l}
f \in S^n,\ \beta \in \mathbb{R}^{n\times n},\ \gamma \in S^n,\ z \in S^n,\\
t = \mathrm{Tr}(\gamma),\ z \preceq \beta + \beta^T,\\
\begin{bmatrix} f & \beta\\ \beta^T & \gamma \end{bmatrix} \succeq 0,\quad \begin{bmatrix} z & x^T\\ x & I_m \end{bmatrix} \succeq 0
\end{array}\right\}.
\]

2.4 More applications: Robust Linear Programming


Equipped with the ability to treat a wide variety of CQr functions and sets, we can now consider an important generic application of Conic Quadratic Programming, specifically, Robust Linear Programming.

2.4.1 Robust Linear Programming: the paradigm


Consider an LP program
\[
\min_x \big\{ c^Tx : Ax - b \ge 0 \big\}. \tag{LP}
\]

In real world applications, the data c, A, b of (LP) is not always known exactly; what is typically
known is a domain U in the space of data – an “uncertainty set” – which for sure contains the
“actual” (unknown) data. There are cases in reality where, in spite of this data uncertainty,
our decision x must satisfy the “actual” constraints, whether we know them or not. Assume,
e.g., that (LP) is a model of a technological process in Chemical Industry, so that entries of
x represent the amounts of different kinds of materials participating in the process. Typically
the process includes a number of decomposition-recombination stages. A model of this problem
must take care of natural balance restrictions: the amount of every material to be used at a
particular stage cannot exceed the amount of the same material yielded by the preceding stages.
In a meaningful production plan, these balance inequalities must be satisfied even though they
involve coefficients affected by unavoidable uncertainty of the exact contents of the raw materials,
of time-varying parameters of the technological devices, etc.
If indeed all we know about the data is that they belong to a given set U, but we still have
to satisfy the actual constraints, the only way to meet the requirements is to restrict ourselves
to robust feasible candidate solutions – those satisfying all possible realizations of the uncertain
constraints, i.e., vectors x such that

Ax − b ≥ 0 ∀[A; b] such that ∃c : (c, A, b) ∈ U. (2.4.1)

In order to choose among these robust feasible solutions the best possible, we should decide how
to “aggregate” the various realizations of the objective into a single “quality characteristic”. To
be methodologically consistent, we use the same worst-case-oriented approach and take as an
objective function f (x) the maximum, over all possible realizations of the objective cT x:

f (x) = sup{cT x | c : ∃[A; b] : (c, A, b) ∈ U}.

With this methodology, we can associate with our uncertain LP program (i.e., with the family
\[
\mathrm{LP}(\mathcal{U}) = \Big\{ \min_{x:Ax\ge b} c^Tx \;\Big|\; (c,A,b) \in \mathcal{U} \Big\}
\]
126 LECTURE 2. CONIC QUADRATIC PROGRAMMING

of all usual (“certain”) LP programs with the data belonging to U) its robust counterpart. In
the latter problem we are seeking for a robust feasible solution with the smallest possible value
of the “guaranteed objective” f (x). In other words, the robust counterpart of LP(U) is the
optimization problem
\[
\min_{t,x} \big\{ t : c^Tx \le t,\ Ax - b \ge 0\ \ \forall (c,A,b) \in \mathcal{U} \big\}. \tag{R}
\]

Note that (R) is a usual – “certain” – optimization problem, but typically it is not an LP
program: the structure of (R) depends on the geometry of the uncertainty set U and can be
very complicated.
As we shall see in a while, in many cases it is reasonable to specify the uncertainty set U as an ellipsoid (the image of the unit Euclidean ball under an affine mapping) or, more generally, as a CQr set. In this case the robust counterpart of an uncertain LP problem is (equivalent to) an explicit conic quadratic program. Thus, Robust Linear Programming with CQr uncertainty sets can be viewed as a "generic source" of conic quadratic problems.
Let us look at the robust counterpart of an uncertain LP program
\[
\Big\{ \min_x \big\{ c^Tx : a_i^Tx - b_i \ge 0,\ i = 1,\dots,m \big\} \;\Big|\; (c,A,b) \in \mathcal{U} \Big\}
\]

in the case of a “simple” ellipsoidal uncertainty – one where the data (ai , bi ) of i-th inequality
constraint
aTi x − bi ≥ 0,
and the objective c are allowed to run independently of each other through respective ellipsoids
Ei , E. Thus, we assume that the uncertainty set is
\[
\mathcal{U} = \left\{ (a_1,b_1;\dots;a_m,b_m;c) \,:\, \exists\big(\{u_i : u_i^Tu_i \le 1\}_{i=0}^m\big):\ c = c_* + P_0u_0,\ \begin{bmatrix} a_i\\ b_i \end{bmatrix} = \begin{bmatrix} a_i^*\\ b_i^* \end{bmatrix} + P_iu_i,\ i = 1,\dots,m \right\},
\]

where c∗ , a∗i , b∗i are the “nominal data” and Pi ui , i = 0, 1, ..., m, represent the data perturbations;
the restrictions uTi ui ≤ 1 enforce these perturbations to vary in ellipsoids.
In order to realize that the robust counterpart of our uncertain LP problem is a conic
quadratic program, note that x is robust feasible if and only if for every i = 1, ..., m we have
\[
0 \le \min_{u_i:u_i^Tu_i\le 1}\left\{ a_i[u]^Tx - b_i[u] \,:\, \begin{bmatrix} a_i[u]\\ b_i[u] \end{bmatrix} = \begin{bmatrix} a_i^*\\ b_i^* \end{bmatrix} + P_iu_i \right\}
= (a_i^*)^Tx - b_i^* + \min_{u_i:u_i^Tu_i\le 1} u_i^TP_i^T\begin{bmatrix} x\\ -1 \end{bmatrix}
= (a_i^*)^Tx - b_i^* - \left\| P_i^T\begin{bmatrix} x\\ -1 \end{bmatrix} \right\|_2.
\]

Thus, x is robust feasible if and only if it satisfies the system of c.q.i.'s
\[
\left\| P_i^T\begin{bmatrix} x\\ -1 \end{bmatrix} \right\|_2 \le [a_i^*]^Tx - b_i^*, \qquad i = 1,\dots,m.
\]

Similarly, a pair (x, t) satisfies all realizations of the inequality c^Tx ≤ t "allowed" by our ellipsoidal uncertainty set U if and only if
\[
c_*^Tx + \|P_0^Tx\|_2 \le t.
\]


2.4. MORE APPLICATIONS: ROBUST LINEAR PROGRAMMING 127

Thus, the robust counterpart (R) becomes the conic quadratic program
\[
\min_{x,t}\left\{ t \,:\, c_*^Tx + \|P_0^Tx\|_2 \le t;\ \left\| P_i^T\begin{bmatrix} x\\ -1 \end{bmatrix} \right\|_2 \le [a_i^*]^Tx - b_i^*,\ i = 1,\dots,m \right\} \tag{RLP}
\]

2.4.2 Robust Linear Programming: examples


Example 1: Robust synthesis of antenna array. Consider a monochromatic transmitting
antenna placed at the origin. Physics says that

1. The directional distribution of energy sent by the antenna can be described in terms of
antenna’s diagram which is a complex-valued function D(δ) of a 3D direction δ. The
directional distribution of energy sent by the antenna is proportional to |D(δ)|2 .

2. When the antenna is comprised of several antenna elements with diagrams D1 (δ),..., Dk (δ),
the diagram of the antenna is just the sum of the diagrams of the elements.

In a typical Antenna Design problem, we are given several antenna elements with diagrams
D1 (δ),...,Dk (δ) and are allowed to multiply these diagrams by complex weights xi (which in
reality corresponds to modifying the output powers and shifting the phases of the elements). As
a result, we can obtain, as a diagram of the array, any function of the form

k
X
D(δ) = xi Di (δ),
i=1

and our goal is to find the weights xi which result in a diagram as close as possible, in a prescribed
sense, to a given “target diagram” D∗ (δ).
Consider an example of a planar antenna comprised of a central circle and 9 concentric
rings of the same area as the circle (Fig. 2.2.(a)) in the XY -plane (“Earth’s surface”). Let the
wavelength be λ = 50cm, and the outer radius of the outer ring be 1 m (twice the wavelength).
One can easily see that the diagram of a ring {a ≤ r ≤ b} in the plane XY (r is the distance
from a point to the origin) as a function of a 3-dimensional direction δ depends on the altitude
(the angle θ between the direction and the plane) only. The resulting function of θ turns out to
be real-valued, and its analytic expression is
\[
D_{a,b}(\theta) = \frac{1}{2}\int_a^b r\left[\int_0^{2\pi} \cos\big(2\pi r\lambda^{-1}\cos(\theta)\cos(\phi)\big)\,d\phi\right]dr.
\]

Fig. 2.2.(b) represents the diagrams of our 10 rings for λ = 50cm.


Assume that our goal is to design an array with a real-valued diagram which should be axially symmetric with respect to the Z-axis and should be "concentrated" in the cone π/2 ≥ θ ≥ π/2 − π/12. In other words, our target diagram is a real-valued function D∗(θ) of the altitude
θ with D∗ (θ) = 0 for 0 ≤ θ ≤ π/2 − π/12 and D∗ (θ) somehow approaching 1 as θ approaches
π/2. The target diagram D∗ (θ) used in this example is given in Fig. 2.2.(c) (the dashed curve).
Finally, let us measure the discrepancy between a synthesized diagram and the target one
by the Tschebyshev distance, taken along the equidistant 120-point grid of altitudes, i.e., by the


(a): 10 array elements of equal areas in the XY -plane
the outer radius of the largest ring is 1m, the wavelength is 50cm
(b): “building blocks” – the diagrams of the rings as functions of the altitude angle θ
(c): the target diagram (dashed) and the synthesized diagram (solid)

Figure 2.2: Synthesis of antennae array

quantity
\[
\tau = \max_{\ell=1,\dots,120}\Big| D_*(\theta_\ell) - \sum_{j=1}^{10} x_j \underbrace{D_{r_{j-1},r_j}(\theta_\ell)}_{D_j(\theta_\ell)} \Big|, \qquad \theta_\ell = \frac{\ell\pi}{240}.
\]

Our design problem is simplified considerably by the fact that the diagrams of our “building
blocks” and the target diagram are real-valued; thus, we need no complex numbers, and the
problem we should finally solve is
\[
\min_{\tau\in\mathbb{R},\,x\in\mathbb{R}^{10}}\left\{ \tau \,:\, -\tau \le D_*(\theta_\ell) - \sum_{j=1}^{10} x_j D_j(\theta_\ell) \le \tau,\ \ell = 1,\dots,120 \right\}. \tag{Nom}
\]

This is a simple LP program; its optimal solution x∗ results in the diagram depicted in Fig. 2.2.(c). The uniform distance between the actual and the target diagrams is ≈ 0.0621 (recall that the target diagram varies from 0 to 1).
Now recall that our design variables are characteristics of certain physical devices. In reality, of course, we cannot tune the devices to have precisely the optimal characteristics x_j^*; the best we may hope for is that the actual characteristics x_j^{fct} will coincide with the desired values x_j^* within a small margin, say, 0.1% (this is a fairly high accuracy for a physical device):
\[
x_j^{fct} = p_jx_j, \qquad 0.999 \le p_j \le 1.001.
\]

It is natural to assume that the factors pj are random with the mean value equal to 1; it is
perhaps not a great sin to assume that these factors are independent of each other.
Since the actual weights differ from their desired values x_j^*, the actual (random) diagram of our array of antennae will differ from the "nominal" one we see on Fig. 2.2.(c). How large could be the difference? Look at the picture:


“Dream and reality”: the nominal (left, solid) and an actual (right, solid) diagrams
[dashed: the target diagram]
The diagram shown to the right is not even the worst case: we just have taken as pj a sample
of 10 independent numbers distributed uniformly in [0.999, 1.001] and have plotted the diagram
corresponding to xj = pj x∗j . Pay attention not only to the shape (completely opposite to what
we need), but also to the scale: the target diagram varies from 0 to 1, and the nominal diagram
(the one corresponding to the exact optimal xj ) differs from the target by no more than by
0.0621 (this is the optimal value in the “nominal” problem (Nom)). The actual diagram varies
from ≈ −8 to ≈ 8, and its uniform distance from the target is 7.79 (125 times the nominal optimal value!). We see that our nominal optimal design is completely meaningless: it looks as if we were trying to get the worst possible result, not the best possible one...
How could we get something better? Let us try to apply the Robust Counterpart approach.
To this end we take into account from the very beginning that if we want the amplification coefficients to be certain x_j, then the actual amplification coefficients will be x_j^{fct} = p_jx_j, 0.999 ≤ p_j ≤ 1.001, and the actual discrepancies will be
\[
\delta_\ell(x) = D_*(\theta_\ell) - \sum_{j=1}^{10} p_jx_jD_j(\theta_\ell).
\]

Thus, we in fact are solving an uncertain LP problem where the uncertainty affects the coeffi-
cients of the constraint matrix (those corresponding to the variables xj ): these coefficients may
vary within 0.1% margin of their nominal values.
In order to apply to our uncertain LP program the Robust Counterpart approach, we should
specify the uncertainty set U. The most straightforward way is to say that our uncertainty is “an
interval” one – every uncertain coefficient in a given inequality constraint may (independently
of all other coefficients) vary through its own uncertainty segment “nominal value ±0.1%”. This
approach, however, is too conservative: we have completely ignored the fact that our pj ’s are of
stochastic nature and are independent of each other, so that it is highly improbable that all of
them will simultaneously fluctuate in “dangerous” directions. In order to utilize the statistical
independence of perturbations, let us look at what happens with a particular inequality
\[
-\tau \le \delta_\ell(x) \equiv D_*(\theta_\ell) - \sum_{j=1}^{10} p_jx_jD_j(\theta_\ell) \le \tau \tag{2.4.2}
\]

when p_j's are random. For a fixed x, the quantity δ_ℓ(x) is a random variable with the mean
\[
\delta_\ell^*(x) = D_*(\theta_\ell) - \sum_{j=1}^{10} x_jD_j(\theta_\ell)
\]

and the standard deviation
\[
\sigma_\ell(x) = \sqrt{E\{(\delta_\ell(x) - \delta_\ell^*(x))^2\}} = \sqrt{\sum_{j=1}^{10} x_j^2D_j^2(\theta_\ell)\,E\{(p_j-1)^2\}} \le \kappa\nu_\ell(x),
\qquad \nu_\ell(x) = \sqrt{\sum_{j=1}^{10} x_j^2D_j^2(\theta_\ell)}, \quad \kappa = 0.001.
\]

Thus, "a typical value" of δ_ℓ(x) differs from δ_ℓ^*(x) by a quantity of order of σ_ℓ(x). Now let us act as an engineer who believes that a random variable differs from its mean by at most three times its standard deviation; since we are not obliged to be that concrete, let us choose a "safety parameter" ω and ignore all events which result in |δ_ℓ(x) − δ_ℓ^*(x)| > ων_ℓ(x) 9). As for the remaining events (those with |δ_ℓ(x) − δ_ℓ^*(x)| ≤ ων_ℓ(x)) we take upon ourselves full responsibility. With this approach, a "reliable deterministic version" of the uncertain constraint (2.4.2) becomes the pair of inequalities

\[
-\tau \le \delta_\ell^*(x) - \omega\nu_\ell(x), \qquad \delta_\ell^*(x) + \omega\nu_\ell(x) \le \tau.
\]

Replacing all uncertain inequalities in (Nom) with their "reliable deterministic versions" and recalling the definition of δ_ℓ^*(x) and ν_ℓ(x), we end up with the optimization problem
\[
\begin{array}{lll}
\text{minimize} & \tau & \\
\text{s.t.} & \|Q_\ell x\|_2 \le \Big[D_*(\theta_\ell) - \sum_{j=1}^{10} x_jD_j(\theta_\ell)\Big] + \tau, & \ell = 1,\dots,120\\
& \|Q_\ell x\|_2 \le -\Big[D_*(\theta_\ell) - \sum_{j=1}^{10} x_jD_j(\theta_\ell)\Big] + \tau, & \ell = 1,\dots,120
\end{array} \tag{Rob}
\]
\[
\big[Q_\ell = \omega\kappa\,\mathrm{Diag}(D_1(\theta_\ell), D_2(\theta_\ell), \dots, D_{10}(\theta_\ell))\big]
\]
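The structure of the norm terms in (Rob) can be sanity-checked numerically: with Q_ℓ defined as above, ‖Q_ℓx‖₂ is exactly ωκν_ℓ(x) (the diagram values below are random stand-ins, not the actual D_j(θ_ℓ)):

```python
import numpy as np

# With Q_l = omega*kappa*Diag(D_1(theta_l), ..., D_10(theta_l)), the conic
# quadratic term ||Q_l x||_2 equals omega*kappa*nu_l(x),
# where nu_l(x) = sqrt(sum_j x_j^2 D_j^2(theta_l)).
rng = np.random.default_rng(2)
omega, kappa = 1.0, 0.001
D_l = rng.standard_normal(10)            # hypothetical stand-in for D_j(theta_l)
x = rng.standard_normal(10)
Q_l = omega * kappa * np.diag(D_l)
nu = np.sqrt(np.sum(x**2 * D_l**2))
assert abs(np.linalg.norm(Q_l @ x) - omega * kappa * nu) < 1e-12
```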

It is immediately seen that (Rob) is nothing but the robust counterpart of (Nom) corresponding
to a simple ellipsoidal uncertainty, namely, the one as follows:

The only data of a constraint


\[
\sum_{j=1}^{10} A_{\ell j}x_j \le p_\ell\tau + q_\ell
\]

(all constraints in (Nom) are of this form) affected by the uncertainty are the coeffi-
cients A`j of the left hand side, and the difference dA[`] between the vector of these
coefficients and the nominal value (D1 (θ` ), ..., D10 (θ` ))T of the vector of coefficients
belongs to the ellipsoid

{dA[`] = ωκQ` u : u ∈ R10 , uT u ≤ 1}.

Thus, the above “engineering reasoning” leading to (Rob) was nothing but a reasonable way to
specify the uncertainty ellipsoids!
9) It would be better to use here σ_ℓ instead of ν_ℓ; however, we did not assume that we know the distribution of p_j, which is why we replace the unknown σ_ℓ with its known upper bound ν_ℓ.

The bottom line of our "engineering reasoning" deserves to be formulated as a separate statement and to be equipped with a "reliability bound":

Proposition 2.4.1 Consider a randomly perturbed linear constraint
\[
a_0(x) + \epsilon_1a_1(x) + \dots + \epsilon_na_n(x) \ge 0, \tag{2.4.3}
\]
where a_j(x) are deterministic affine functions of the design vector x, and ε_j are independent random perturbations with zero means and such that |ε_j| ≤ σ_j. Assume that x satisfies the "reliable" version of (2.4.3), specifically, the deterministic constraint
\[
a_0(x) - \kappa\sqrt{\sigma_1^2a_1^2(x) + \dots + \sigma_n^2a_n^2(x)} \ge 0 \tag{2.4.4}
\]
(κ > 0). Then x satisfies a realization of (2.4.3) with probability at least 1 − exp{−κ²/2}.

Proof. All we need is to verify the following Hoeffding bound on probabilities of large deviations:

If a_i are deterministic reals and ε_i are independent random variables with zero means and such that |ε_i| ≤ σ_i for given deterministic σ_i, then for every κ ≥ 0 one has
\[
p(\kappa) \equiv \mathrm{Prob}\Big\{ \sum_i \epsilon_ia_i > \kappa\underbrace{\sqrt{\sum_i a_i^2\sigma_i^2}}_{\sigma} \Big\} \le \exp\{-\kappa^2/2\}.
\]

Verification is easy: denoting by E the expectation, for γ > 0 we have
\[
\begin{array}{rcll}
\exp\{\gamma\kappa\sigma\}\,p(\kappa) &\le& E\big\{\exp\{\gamma\textstyle\sum_i a_i\epsilon_i\}\big\} &\\
&=& \prod_i E\{\exp\{\gamma a_i\epsilon_i\}\} & [\text{since the } \epsilon_i \text{ are independent of each other}]\\
&=& \prod_i E\big\{\exp\{\gamma a_i\epsilon_i\} - \sinh(\gamma a_i\sigma_i)\sigma_i^{-1}\epsilon_i\big\} & [\text{since } E\{\epsilon_i\} = 0]\\
&\le& \prod_i \max_{-\sigma_i\le s_i\le\sigma_i}\big[\exp\{\gamma a_is_i\} - \sinh(\gamma a_i\sigma_i)\sigma_i^{-1}s_i\big] &\\
&=& \prod_i \cosh(\gamma a_i\sigma_i) \;=\; \prod_i \sum_{k=0}^\infty \frac{[\gamma^2a_i^2\sigma_i^2]^k}{(2k)!} &\\
&\le& \prod_i \sum_{k=0}^\infty \frac{[\gamma^2a_i^2\sigma_i^2]^k}{2^kk!} \;=\; \prod_i \exp\Big\{\frac{\gamma^2a_i^2\sigma_i^2}{2}\Big\} \;=\; \exp\{\gamma^2\sigma^2/2\}.&
\end{array}
\]
Thus,
\[
p(\kappa) \le \min_{\gamma>0}\exp\Big\{\frac{\gamma^2\sigma^2}{2} - \gamma\kappa\sigma\Big\} = \exp\Big\{-\frac{\kappa^2}{2}\Big\}. \qquad \square
\]
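The bound of Proposition 2.4.1 is easy to probe by simulation (the coefficients a_i and bounds σ_i below are arbitrary test data; for bounded uniform perturbations the empirical tail frequency should sit well below exp{−κ²/2}):

```python
import numpy as np

# Monte Carlo check of the Hoeffding bound just proved: for independent
# zero-mean eps_i with |eps_i| <= sigma_i (here: uniform on [-sigma_i, sigma_i]),
# Prob{ sum_i eps_i a_i > kappa * sqrt(sum_i a_i^2 sigma_i^2) } <= exp(-kappa^2/2).
rng = np.random.default_rng(1)
a = np.array([1.0, -2.0, 0.5, 3.0, -1.5])
sigma = np.array([0.3, 0.1, 0.5, 0.2, 0.4])
kappa = 2.0
N = 200000
eps = rng.uniform(-sigma, sigma, size=(N, len(a)))  # bounded zero-mean perturbations
s = np.sqrt(np.sum(a ** 2 * sigma ** 2))
emp = np.mean(eps @ a > kappa * s)
assert emp <= np.exp(-kappa ** 2 / 2)               # empirical frequency obeys the bound
```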

Now let us look at the diagrams yielded by the Robust Counterpart approach, i.e., those given by the robust optimal solution. These diagrams are also random (neither the nominal nor the robust solution can be implemented exactly!). However, it turns out that they are incomparably closer to the target (and to each other) than the diagrams associated with the optimal solution to the "nominal" problem. Look at a typical "robust" diagram:
A “Robust” diagram. Uniform distance from the target is 0.0822.


[the safety parameter for the uncertainty ellipsoids is ω = 1]

With the safety parameter ω = 1, the robust optimal value is 0.0817; although it is by 30%
larger than the nominal optimal value 0.0635, the robust optimal value has a definite advantage
that it indeed says something reliable about the quality of actual diagrams we can obtain when
implementing the robust optimal solution: in a sample of 40 realizations of the diagrams cor-
responding to the robust optimal solution, the uniform distances from the target were varying
from 0.0814 to 0.0830.
We have built the robust optimal solution under the assumption that the “implementation
errors” do not exceed 0.1%. What happens if in reality the errors are larger – say, 1%? It turns
out that nothing dramatic happens: now in a sample of 40 diagrams given by the “old” robust
optimal solution (affected by 10 times larger “implementation errors”) the uniform distances
from the target were varying from 0.0834 to 0.116. Imagine what will happen with the nominal
solution under the same circumstances...
The last issue to be addressed here is: why is the nominal solution so unstable? And why were we able, with the robust counterpart approach, to get a solution which is incomparably better as far as "actual implementation" is concerned? The answer becomes clear when looking at the nominal and the robust optimal weights:
j        1        2       3       4        5      6      7        8       9       10
x_j^nom  1624.4   -14701  55383   -107247  95468  19221  -138622  144870  -69303  13311
x_j^rob  -0.3010  4.9638  -3.4252 -5.1488  6.8653 5.5140 5.3119   -7.4584 -8.9140 13.237

It turns out that the nominal problem is “ill-posed” – although its optimal solution is far away
from the origin, there is a “massive” set of “nearly optimal” solutions, and among the latter
ones we can choose solutions of quite moderate magnitude. Indeed, here are the optimal values
obtained when we add to the constraints of (Nom) the box constraints |xj | ≤ L, j = 1, ..., 10:

L        1        10       10^2     10^3     10^4     10^5     10^6     10^7
Opt Val  0.09449  0.07994  0.07358  0.06955  0.06588  0.06272  0.06215  0.06215

Since the "implementation inaccuracies" for a solution grow with the magnitude of the solution, it is no surprise that our "huge" nominal solution results in a very unstable actual design. In contrast to this, the Robust Counterpart penalizes the (properly measured) magnitude of x (look at the terms ‖Q_ℓx‖_2 in the constraints of (Rob)) and therefore yields a much more stable
design. Note that this situation is typical for many applications: the nominal solution is on
the boundary of the nominal feasible domain, and there are “nearly optimal” solutions to the
nominal problem which are in the “deep interior” of this domain. When solving the nominal
problem, we do not take any care of a reasonable tradeoff between the “depth of feasibility”
and the optimality: any improvement in the objective is sufficient to make the solution just
marginally feasible for the nominal problem. And a solution which is only marginally feasible
in the nominal problem can easily become “very infeasible” when the data are perturbed. This
would not be the case for a “deeply interior” solution. With the Robust Counterpart approach,
we do use certain tradeoff between the “depth of feasibility” and the optimality – we are trying
to find something like the "deepest feasible nearly optimal solution"; as a result, we normally gain a lot in stability; and if, as in our example, there are "deeply interior nearly optimal" solutions, we do not lose that much in optimality.

Example 2: NETLIB Case Study. NETLIB is a collection of about 100 not very large LPs,
mostly of real-world origin, used as the standard benchmark for LP solvers. In the study to
be described, we used this collection in order to understand how “stable” are the feasibility
properties of the standard – “nominal” – optimal solutions with respect to small uncertainty in
the data. To motivate the methodology of this “Case Study”, here is the constraint # 372 of
the problem PILOT4 from NETLIB:

aT x ≡ −15.79081x826 − 8.598819x827 − 1.88789x828 − 1.362417x829 − 1.526049x830


−0.031883x849 − 28.725555x850 − 10.792065x851 − 0.19004x852 − 2.757176x853
−12.290832x854 + 717.562256x855 − 0.057865x856 − 3.785417x857 − 78.30661x858
−122.163055x859 − 6.46609x860 − 0.48371x861 − 0.615264x862 − 1.353783x863 (C)
−84.644257x864 − 122.459045x865 − 43.15593x866 − 1.712592x870 − 0.401597x871
+x880 − 0.946049x898 − 0.946049x916
≥ b ≡ 23.387405

The related nonzero coordinates in the optimal solution x∗ of the problem, as reported by CPLEX
(one of the best commercial LP solvers), are as follows:

x∗826 = 255.6112787181108 x∗827 = 6240.488912232100 x∗828 = 3624.613324098961


x∗829 = 18.20205065283259 x∗849 = 174397.0389573037 x∗870 = 14250.00176680900
x∗871 = 25910.00731692178 x∗880 = 104958.3199274139

The indicated optimal solution makes (C) an equality within machine precision.
Observe that most of the coefficients in (C) are “ugly reals” like -15.79081 or -84.644257.
We have all reasons to believe that coefficients of this type characterize certain technological
devices/processes, and as such they could hardly be known to high accuracy. It is quite natural
to assume that the “ugly coefficients” are in fact uncertain – they coincide with the “true” values
of the corresponding data within accuracy of 3-4 digits, not more. The only exception is the
coefficient 1 of x880 – it perhaps reflects the structure of the problem and is therefore exact –
“certain”.
Assuming that the uncertain entries of a are, say, 0.1%-accurate approximations of the unknown entries of the "true" vector of coefficients ã, we looked at what the effect of this uncertainty would be on the validity of the "true" constraint ã^Tx ≥ b at x∗. Here is what we have found:
• The minimum (over all vectors of coefficients ã compatible with our "0.1%-uncertainty hypothesis") value of ã^Tx∗ − b is < −104.9; in other words, the violation of the constraint can be as large as 450% of the right hand side!

• Treating the above worst-case violation as “too pessimistic” (why should the true values of
all uncertain coefficients differ from the values indicated in (C) in the “most dangerous” way?),
consider a more realistic measure of violation. Specifically, assume that the true values of the
uncertain coefficients in (C) are obtained from the “nominal values” (those shown in (C)) by
random perturbations a_j ↦ ã_j = (1 + ξ_j)a_j with independent and, say, uniformly distributed on [−0.001, 0.001] "relative perturbations" ξ_j. What will be a "typical" relative violation
\[
V = \frac{\max[b - \tilde a^Tx^*,\,0]}{b} \times 100\%
\]
of the “true” (now random) constraint ãT x ≥ b at x∗ ? The answer is nearly as bad as for the
worst scenario:
Prob{V > 0} Prob{V > 150%} Mean(V )
0.50 0.18 125%
Table 2.1. Relative violation of constraint # 372 in PILOT4
(1,000-element sample of 0.1% perturbations of the uncertain data)
We see that quite small (just 0.1%) perturbations of “obviously uncertain” data coefficients can
make the “nominal” optimal solution x∗ heavily infeasible and thus – practically meaningless.
Inspired by this preliminary experiment, we have carried out the “diagnosis” and the “treat-
ment” phases as follows.

"Diagnosis". Given a "perturbation level" ε (for which we have used the values 1%, 0.1%, 0.01%), for every one of the NETLIB problems, we have measured its "stability index" at this perturbation level, specifically, as follows.
1. We computed the optimal solution x∗ of the program by CPLEX.
2. For every one of the inequality constraints
aT x ≤ b
of the program,
• We looked at the left hand side coefficients a_j and split them into "certain" (those which can be represented, within machine accuracy, as rational fractions p/q with |q| ≤ 100) and "uncertain" (all the rest). Let J be the set of all uncertain coefficients of the constraint under consideration.
• We defined the reliability index of the constraint as the quantity
\[
\frac{a^Tx^* + \epsilon\sqrt{\sum_{j\in J} a_j^2(x_j^*)^2} - b}{\max[1,|b|]} \times 100\% \tag{I}
\]
Note that the quantity ε√(Σ_{j∈J} a_j²(x_j^*)²), as we remember from the Antenna story, is of order of the typical difference between a^Tx^* and ã^Tx^*, where ã is obtained from a by random perturbations a_j ↦ ã_j = p_ja_j of the uncertain coefficients, with independent random p_j uniformly distributed in the segment [1 − ε, 1 + ε]. In other words, the reliability index is of order of the typical violation (measured in percents of the right hand side) of the constraint, as evaluated at x^*, under independent random perturbations of the uncertain coefficients, ε being the relative magnitude of the perturbations.
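A sketch of how the reliability index (I) might be computed for a single toy constraint (all data below are hypothetical; the "certain" test uses Python's Fraction.limit_denominator to detect rationals p/q with |q| ≤ 100):

```python
from fractions import Fraction
import numpy as np

def is_certain(c, tol=1e-9):
    """Coefficient representable (within tolerance) as p/q with |q| <= 100."""
    fr = Fraction(c).limit_denominator(100)
    return abs(float(fr) - c) <= tol

# Hypothetical constraint a^T x <= b; the coefficients 1 and 0.5 are "certain".
a = np.array([1.0, -15.79081, 0.5, -84.644257])
x_star = np.array([2.0, 0.3, 1.0, 0.1])
b = a @ x_star                      # constraint active at x*, as in the PILOT4 example
eps = 0.001                         # 0.1% perturbation level
J = [j for j in range(len(a)) if not is_certain(a[j])]
index = (a @ x_star + eps * np.sqrt(sum(a[j]**2 * x_star[j]**2 for j in J)) - b) \
        / max(1.0, abs(b)) * 100.0
assert index > 0.0   # an active constraint with uncertain coefficients is at risk
```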

Problem      Size^a)          ε = 0.01%           ε = 0.1%         ε = 1%
                              Nbad^b)  Index^c)   Nbad   Index     Nbad   Index
80BAU3B 2263 × 9799 37 84 177 842 364 8,420
25FV47 822 × 1571 14 16 28 162 35 1,620
ADLITTLE 57 × 97 2 6 7 58
AFIRO 28 × 32 1 5 2 50
BNL2 2325 × 3489 24 34
BRANDY 221 × 249 1 5
CAPRI 272 × 353 10 39 14 390
CYCLE 1904 × 2857 2 110 5 1,100 6 11,000
D2Q06C 2172 × 5167 107 1,150 134 11,500 168 115,000
E226 224 × 282 2 15
FFFFF800 525 × 854 6 8
FINNIS 498 × 614 12 10 63 104 97 1,040
GREENBEA 2393 × 5405 13 116 30 1,160 37 11,600
KB2 44 × 41 5 27 6 268 10 2,680
MAROS 847 × 1443 3 6 38 57 73 566
NESM 751 × 2923 37 20
PEROLD 626 × 1376 6 34 26 339 58 3,390
PILOT 1442 × 3652 16 50 185 498 379 4,980
PILOT4 411 × 1000 42 210,000 63 2,100,000 75 21,000,000
PILOT87 2031 × 4883 86 130 433 1,300 990 13,000
PILOTJA 941 × 1988 4 46 20 463 59 4,630
PILOTNOV 976 × 2172 4 69 13 694 47 6,940
PILOTWE 723 × 2789 61 12,200 69 122,000 69 1,220,000
SCFXM1 331 × 457 1 95 3 946 11 9,460
SCFXM2 661 × 914 2 95 6 946 21 9,460
SCFXM3 991 × 1371 3 95 9 946 32 9,460
SHARE1B 118 × 225 1 257 1 2,570 1 25,700

Table 2.2. Bad NETLIB problems.


a)
# of linear constraints (excluding the box ones) plus 1 and # of variables
b)
# of constraints with index > 5%
c)
The worst, over the constraints, reliability index, in %

3. We treat the nominal solution as unreliable, and the problem as bad, at perturbation level ε, if the worst, over the inequality constraints, reliability index is worse than 5%.

The results of the Diagnosis phase of our Case Study were as follows. From the total of 90 NETLIB problems we have processed,
• in 27 problems the nominal solution turned out to be unreliable at the largest (ε = 1%) level of uncertainty;
• 19 of these 27 problems are already bad at the 0.01%-level of uncertainty, and in 13 of these 19 problems, 0.01% perturbations of the uncertain data can make the nominal solution more than 50%-infeasible for some of the constraints.
The details are given in Table 2.2. Our diagnosis leads to the following conclusion:

♦ In real-world applications of Linear Programming one cannot ignore the possibility


that a small uncertainty in the data (intrinsic for most real-world LP programs)
can make the usual optimal solution of the problem completely meaningless from a

practical viewpoint.
Consequently,
♦ In applications of LP, there exists a real need of a technique capable of detecting
cases when data uncertainty can heavily affect the quality of the nominal solution,
and in these cases to generate a “reliable” solution, one which is immune against
uncertainty.

“Treatment”. At the treatment phase of our Case Study, we used the Robust Counterpart
methodology, as outlined in Example 1, to pass from “unreliable” nominal solutions of bad
NETLIB problems to “uncertainty-immunized” robust solutions. The primary goals here were to
understand whether “treatment” is at all possible (the Robust Counterpart may happen to be
infeasible) and how “costly” it is – by which margin the robust solution is worse, in terms of
the objective, than the nominal solution. The answers to both these questions turned out to be
quite encouraging:
• Reliable solutions do exist, except for the four cases corresponding to the highest (ε = 1%)
uncertainty level (see the right column in Table 2.3).
• The price of immunization in terms of the objective value is surprisingly low: when ε ≤
0.1%, it never exceeds 1%, and it is less than 0.1% in 13 of 23 cases. Thus, passing to the
robust solutions, we gain a lot in the ability of the solution to withstand data uncertainty,
while losing nearly nothing in optimality.
The details are given in Table 2.3.
2.4.3 Robust counterpart of uncertain LP with a CQr uncertainty set


We have seen that the robust counterpart of uncertain LP with simple “constraint-wise” el-
lipsoidal uncertainty is a conic quadratic problem. This fact is a special case of the following

Proposition 2.4.2 Consider an uncertain LP

    LP(U) = { min_{x: Ax≥b} c^T x : (c, A, b) ∈ U }

and assume that the uncertainty set U is CQr:

    U = { ζ = (c, A, b) ∈ R^n × R^{m×n} × R^m : ∃u : A(ζ, u) ≡ P ζ + Qu + r ≥_K 0 },

where A(ζ, u) is an affine mapping and K is a direct product of ice-cream cones. Assume,
further, that the above CQR of U is essentially strictly feasible, see Definition 1.4.3. Then the
robust counterpart of LP(U) is equivalent to an explicit conic quadratic problem.

Proof. Introducing an additional variable t and denoting by z = (t, x) the extended vector of
design variables, we can write down the instances of our uncertain LP in the form

    min_z { d^T z : α_i^T(ζ) z − β_i(ζ) ≥ 0, i = 1, ..., m+1 }    (LP[ζ])

with an appropriate vector d; here the functions

    α_i(ζ) = A_i ζ + a_i ,   β_i(ζ) = b_i^T ζ + c_i

2.4. MORE APPLICATIONS: ROBUST LINEAR PROGRAMMING 137

                  Nominal              Objective at robust solution
Problem           optimal value        ε = 0.01%            ε = 0.1%             ε = 1%
80BAU3B 987224.2 987311.8 (+ 0.01%) 989084.7 (+ 0.19%) 1009229 (+ 2.23%)
25FV47 5501.846 5501.862 (+ 0.00%) 5502.191 (+ 0.01%) 5505.653 (+ 0.07%)
ADLITTLE 225495.0 225594.2 (+ 0.04%) 228061.3 (+ 1.14%)
AFIRO -464.7531 -464.7500 (+ 0.00%) -464.2613 (+ 0.11%)
BNL2 1811.237 1811.237 (+ 0.00%) 1811.338 (+ 0.01%)
BRANDY 1518.511 1518.581 (+ 0.00%)
CAPRI 1912.621 1912.738 (+ 0.01%) 1913.958 (+ 0.07%)
CYCLE 1913.958 1913.958 (+ 0.00%) 1913.958 (+ 0.00%) 1913.958 (+ 0.00%)
D2Q06C 122784.2 122793.1 (+ 0.01%) 122893.8 (+ 0.09%) Infeasible
E226 -18.75193 -18.75173 (+ 0.00%)
FFFFF800 555679.6 555715.2 (+ 0.01%)
FINNIS 172791.1 172808.8 (+ 0.01%) 173269.4 (+ 0.28%) 178448.7 (+ 3.27%)
GREENBEA -72555250 -72526140 (+ 0.04%) -72192920 (+ 0.50%) -68869430 (+ 5.08%)
KB2 -1749.900 -1749.877 (+ 0.00%) -1749.638 (+ 0.01%) -1746.613 (+ 0.19%)
MAROS -58063.74 -58063.45 (+ 0.00%) -58011.14 (+ 0.09%) -57312.23 (+ 1.29%)
NESM 14076040 14172030 (+ 0.68%)
PEROLD -9380.755 -9380.755 (+ 0.00%) -9362.653 (+ 0.19%) Infeasible
PILOT -557.4875 -557.4538 (+ 0.01%) -555.3021 (+ 0.39%) Infeasible
PILOT4 -64195.51 -64149.13 (+ 0.07%) -63584.16 (+ 0.95%) -58113.67 (+ 9.47%)
PILOT87 301.7109 301.7188 (+ 0.00%) 302.2191 (+ 0.17%) Infeasible
PILOTJA -6113.136 -6113.059 (+ 0.00%) -6104.153 (+ 0.15%) -5943.937 (+ 2.77%)
PILOTNOV -4497.276 -4496.421 (+ 0.02%) -4488.072 (+ 0.20%) -4405.665 (+ 2.04%)
PILOTWE -2720108 -2719502 (+ 0.02%) -2713356 (+ 0.25%) -2651786 (+ 2.51%)
SCFXM1 18416.76 18417.09 (+ 0.00%) 18420.66 (+ 0.02%) 18470.51 (+ 0.29%)
SCFXM2 36660.26 36660.82 (+ 0.00%) 36666.86 (+ 0.02%) 36764.43 (+ 0.28%)
SCFXM3 54901.25 54902.03 (+ 0.00%) 54910.49 (+ 0.02%) 55055.51 (+ 0.28%)
SHARE1B -76589.32 -76589.32 (+ 0.00%) -76589.32 (+ 0.00%) -76589.29 (+ 0.00%)

Table 2.3. Objective values for nominal and robust solutions to bad NETLIB problems

are affine in the data vector ζ. The robust counterpart of our uncertain LP is the optimization
program

    min_z { d^T z : α_i^T(ζ) z − β_i(ζ) ≥ 0  ∀ζ ∈ U  ∀i = 1, ..., m+1 }.    (RCini)

Let us fix i and ask ourselves what it means that a vector z satisfies the infinite system of
linear inequalities

    α_i^T(ζ) z − β_i(ζ) ≥ 0  ∀ζ ∈ U.    (Ci)

Clearly, a given vector z possesses this property if and only if the optimal value in the optimiza-
tion program

    min_{τ,ζ} { τ : τ ≥ α_i^T(ζ) z − β_i(ζ), ζ ∈ U }

is nonnegative. Recalling the definition of U, we see that the latter problem is equivalent to the
conic quadratic program

    min_{τ,ζ,u} { τ : τ ≥ α_i^T(ζ) z − β_i(ζ) ≡ [A_i ζ + a_i]^T z − [b_i^T ζ + c_i],
                  A(ζ, u) ≡ P ζ + Qu + r ≥_K 0 }    (CQi[z])
in variables τ, ζ, u. Thus, z satisfies (Ci ) if and only if the optimal value in (CQi [z]) is nonneg-
ative.
Since by assumption the system of conic quadratic inequalities A(ζ, u) ≥K 0 is essentially
strictly feasible, the conic quadratic program (CQi [z]) is essentially strictly feasible. By the
refined Conic Duality Theorem, if (a) the optimal value in (CQi [z]) is nonnegative, then (b)
the dual to (CQi [z]) problem admits a feasible solution with a nonnegative value of the dual
objective. By Weak Duality, (b) implies (a). Thus, the fact that the optimal value in (CQi [z])
is nonnegative is equivalent to the fact that the dual problem admits a feasible solution with a
nonnegative value of the dual objective:

    z satisfies (Ci)
      ⇕
    Opt(CQi[z]) ≥ 0
      ⇕
    ∃λ ∈ R, ξ ∈ R^N (N is the dimension of K):
      λ[a_i^T z − c_i] − ξ^T r ≥ 0,
      λ = 1,
      −λ A_i^T z + b_i + P^T ξ = 0,
      Q^T ξ = 0,
      λ ≥ 0,
      ξ ≥_K 0
      ⇕
    ∃ξ ∈ R^N:
      a_i^T z − c_i − ξ^T r ≥ 0,
      −A_i^T z + b_i + P^T ξ = 0,
      Q^T ξ = 0,
      ξ ≥_K 0.
We see that the set of vectors z satisfying (Ci ) is CQr:



    z satisfies (Ci)
      ⇕
    ∃ξ ∈ R^N:
      a_i^T z − c_i − ξ^T r ≥ 0,
      −A_i^T z + b_i + P^T ξ = 0,
      Q^T ξ = 0,
      ξ ≥_K 0.

Consequently, the set of robust feasible z – those satisfying (Ci ) for all i = 1, ..., m + 1 – is CQr
(as the intersection of finitely many CQr sets), whence the robust counterpart of our uncertain
LP, being the problem of minimizing a linear objective over a CQr set, is equivalent to a conic
quadratic problem. Here is this problem:

    minimize    d^T z
    subject to  a_i^T z − c_i − ξ_i^T r ≥ 0,
                −A_i^T z + b_i + P^T ξ_i = 0,
                Q^T ξ_i = 0,                       i = 1, ..., m+1
                ξ_i ≥_K 0

with design variables z, ξ_1, ..., ξ_{m+1}. Here A_i, a_i, c_i, b_i come from the affine functions
α_i(ζ) = A_i ζ + a_i and β_i(ζ) = b_i^T ζ + c_i, while P, Q, r come from the description of U:

    U = {ζ : ∃u : P ζ + Qu + r ≥_K 0}.    □
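As a sanity check on this dual-based reformulation in its simplest special case, take a single uncertain constraint a^T x ≥ b with ellipsoidal uncertainty a = a* + ρPu, ||u||_2 ≤ 1; the robust counterpart then collapses to the explicit inequality a*^T x − ρ||P^T x||_2 ≥ b. The pure-Python sketch below (hypothetical data; the function names and the sampling check are ours, not part of the text) compares this closed form with brute-force sampling over the uncertainty set:

```python
import math, random

def robust_feasible(x, a_star, P, rho, b):
    """Closed-form robust test for a^T x >= b under the ellipsoidal uncertainty
    a = a_star + rho * P u, ||u||_2 <= 1:  a_star^T x - rho * ||P^T x||_2 >= b."""
    n, k = len(x), len(P[0])
    Ptx = [sum(P[i][j] * x[i] for i in range(n)) for j in range(k)]
    lhs = sum(a * xi for a, xi in zip(a_star, x)) - rho * math.sqrt(sum(v * v for v in Ptx))
    return lhs >= b

def sampled_worst_lhs(x, a_star, P, rho, trials=20000, seed=0):
    """Brute-force estimate of min over ||u||_2 <= 1 of (a_star + rho*P u)^T x;
    the minimum sits on the unit sphere, so unit vectors suffice."""
    n, k = len(x), len(P[0])
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(trials):
        u = [rng.gauss(0.0, 1.0) for _ in range(k)]
        nrm = math.sqrt(sum(v * v for v in u)) or 1.0
        a = [a_star[i] + rho * sum(P[i][j] * u[j] / nrm for j in range(k)) for i in range(n)]
        best = min(best, sum(ai * xi for ai, xi in zip(a, x)))
    return best
```

The sampled worst-case left hand side should hover just above the closed-form value a*^T x − ρ||P^T x||_2, matching the dual-based certificate.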

Remark 2.4.1 Looking at the proof of Proposition 2.4.2, we see that the assumption that
the uncertainty set U is CQr plays no crucial role. What indeed is important is that U is the
projection on the ζ-space of the solution set of an essentially strictly feasible conic inequality
associated with certain cone K. Whenever this is the case, the above construction demonstrates
that the robust counterpart of LP(U) is a conic problem associated with the cone which is a
direct product of several cones dual to K. E.g., when the uncertainty set is polyhedral (i.e., it
is given by finitely many scalar linear inequalities: K = R^m_+), the robust counterpart of LP(U)
is an explicit LP program (and in this case we can eliminate the assumption that the conic
inequality defining U is essentially strictly feasible (why?)). Consider, e.g., an uncertain LP
with interval uncertainty in the data:

    min_x { c^T x : Ax ≥ b } :  |c_j − c*_j| ≤ ε_j, j = 1, ..., n;
                                A_ij ∈ [A*_ij − ε_ij, A*_ij + ε_ij], i = 1, ..., m, j = 1, ..., n;
                                |b_i − b*_i| ≤ δ_i, i = 1, ..., m.
The (LP equivalent of the) Robust Counterpart of the program is

    min_{x,y} { Σ_j [c*_j x_j + ε_j y_j] :  Σ_j A*_ij x_j − Σ_j ε_ij y_j ≥ b*_i + δ_i, i = 1, ..., m;
                                            −y_j ≤ x_j ≤ y_j, j = 1, ..., n }

(why?)
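The "why" can be checked by hand on tiny data: with y_j = |x_j|, the robust form of a single constraint holds exactly when the original constraint holds at every extreme realization of the interval data. A minimal pure-Python check (hypothetical numbers, helper names are ours):

```python
from itertools import product

def rc_holds(x, A_star, eps, b_star, delta, i):
    """Robust form of constraint i:
    sum_j A*_ij x_j - sum_j eps_ij |x_j| >= b*_i + delta_i."""
    lhs = sum(A_star[i][j] * x[j] - eps[i][j] * abs(x[j]) for j in range(len(x)))
    return lhs >= b_star[i] + delta[i]

def holds_at_all_vertices(x, A_star, eps, b_star, delta, i):
    """Enumerate extreme realizations A_ij in {A*_ij - eps_ij, A*_ij + eps_ij}
    together with the worst right hand side b_i = b*_i + delta_i."""
    n = len(x)
    for signs in product([-1, 1], repeat=n):
        lhs = sum((A_star[i][j] + signs[j] * eps[i][j]) * x[j] for j in range(n))
        if lhs < b_star[i] + delta[i] - 1e-12:
            return False
    return True
```

The worst A_ij for the term A_ij x_j is A*_ij − ε_ij sign(x_j), which is exactly what the substitution y_j = |x_j| encodes.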

2.4.4 CQ-representability of the optimal value in a CQ program as a function of the data

Let us ask ourselves the following question: consider a conic quadratic program

    min_x { c^T x : Ax − b ≥_K 0 },    (P)

where K is a direct product of ice-cream cones; the dual of our problem is

    max_λ { b^T λ : A^T λ = c, λ ≥_K 0 }.    (D)

The optimal value of (P ) clearly is a function of the data (c, A, b) of the problem. What can
be said about CQ-representability of this function? In general, not much: the function is not
even convex. There are, however, two modifications of our question which admit good answers.
Namely, under mild regularity assumptions
(a) With c, A fixed, the optimal value in (P ) is a CQ-representable function of the right hand
side vector b;
(b) with A, b fixed, the minus optimal value in (P ) is a CQ-representable function of c.
Here are the exact forms of our claims:
Proposition 2.4.3 Let c, A be fixed and let (D) be essentially strictly feasible (this property is
independent of what b is). Then the optimal value of (P) is a CQr function of b.
The statement is quite evident: since (D) is essentially strictly feasible, for every b the optimal
value Opt(b) is either +∞, or is achieved (by the refined Conic Duality Theorem); therefore in both
cases

    {t ≥ Opt(b)} ⇔ {∃x : c^T x ≤ t & Ax − b ≥_K 0},

and this equivalence is nothing but a CQR of Opt(b).    □
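The epigraph equivalence in this argument can be illustrated on a toy instance. Take the LP min_x { x : x − b_1 ≥ 0, x − b_2 ≥ 0 } (so K = R^2_+ and c = 1), for which Opt(b) = max(b_1, b_2). The sketch below (our own illustration, brute-forcing the "∃x" over a grid) checks that membership in the epigraph {t ≥ Opt(b)} coincides with feasibility of the system in the proposition:

```python
def opt_value(b):
    """Opt(b) for the toy instance  min_x { x : x - b_1 >= 0, x - b_2 >= 0 }:
    clearly Opt(b) = max(b_1, b_2)."""
    return max(b)

def in_epigraph(t, b, xs):
    """Membership test for {t >= Opt(b)} via the representation of the
    proposition:  exists x with  c^T x <= t  and  Ax - b >= 0,
    brute-forced over a finite grid xs of candidate x values."""
    return any(x <= t and all(x >= bi for bi in b) for x in xs)
```

For every t on a test grid the two tests agree, which is exactly the content of the displayed equivalence.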

Similarly, the exact form of (b) reads
Proposition 2.4.4 Let A, b be such that (P ) is essentially strictly feasible (this property of (P )
is independent of what c is). Then the minus optimal value −Opt(c) of (P ) is a CQr function
of c.
Proof is obtained from the previous one by swapping (P) and (D): since (P) is essentially
strictly feasible, by the refined Conic Duality Theorem, for every c the value Opt(c) either is −∞
(which happens if and only if (D) is infeasible), or is equal to the optimal value of the (solvable
in the case in question) problem (D). Therefore in both cases

    {t ≤ Opt(c)} ⇔ {∃λ : A^T λ = c, b^T λ ≥ t, λ ≥_K 0},

and this equivalence is nothing but a CQR of −Opt(c).    □

A careful reader could have realized that Proposition 2.4.2 is nothing but a straightforward
application of Proposition 2.4.4.
Remark 2.4.2 Same as Proposition 2.4.2, Propositions 2.4.3, 2.4.4 can be extended from conic
quadratic problems to general conic problems on regular cones, with the only modification that
the conic constraint λ ≥_K 0 in (D) in the general case becomes λ ≥_{K*} 0.

2.4.5 Affinely Adjustable Robust Counterpart
The rationale behind our Robust Optimization paradigm is based on the following tacit assump-
tions:

1. All constraints are “a must”, so that a meaningful solution should satisfy all realizations
of the constraints from the uncertainty set.

2. All decisions are made in advance and thus cannot tune themselves to the “true” values of
the data. Thus, candidate solutions must be fixed vectors, and not functions of the true
data.

Here we preserve the first of these two assumptions and try to relax the second of them. The
motivation is twofold:

• There are situations in dynamical decision-making when the decisions should be made at
subsequent time instants, and decision made at instant t in principle can depend on the
part of uncertain data which becomes known at this instant.

• There are situations in LP when some of the decision variables do not correspond to actual
decisions; they are artificial “analysis variables” added to the problem in order to convert
it to a desired form, say, a Linear Programming one. The analysis variables clearly may
adjust themselves to the true values of the data.
To give an example, consider the problem where we look for the best, in the discrete L1-
norm, approximation of a given sequence b by a linear combination of given sequences a_j,
j = 1, ..., n, so that the problem with no data uncertainty is

    min_{x,t} { t : Σ_{t=1}^T |b_t − Σ_j a_tj x_j| ≤ t }    (P)

    min_{t,x,y} { t : Σ_{t=1}^T y_t ≤ t, −y_t ≤ b_t − Σ_j a_tj x_j ≤ y_t, 1 ≤ t ≤ T }    (LP)

Note that (LP) is an equivalent LP reformulation of (P), and y are typical analysis vari-
ables; whether x’s do or do not represent “actual decisions”, y’s definitely do not represent
them. Now assume that the data become uncertain. Perhaps we have reasons to require
the pair (t, x) to be independent of the actual data and to satisfy the constraint in (P) for all
realizations of the data. This requirement means that the variables t, x in (LP) must be
data-independent, but we have absolutely no reason to insist on data-independence of y’s:
(t, x) is robust feasible for (P) if and only if (t, x), for all realizations of the data from the
uncertainty set, can be extended, by a properly chosen and perhaps depending on the data
vector y, to a feasible solution of (the corresponding realization of) (LP). In other words,
equivalence between (P) and (LP) is restricted to the case of certain data only; when the
data become uncertain, the robust counterpart of (LP) is more conservative than the one
of (P).
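For certain data, the claimed equivalence of (P) and (LP) is easy to verify numerically: for fixed data, brute-forcing both problems over grids gives the same optimal value, since the best feasible y_t is |b_t − Σ_j a_tj x_j|. A small pure-Python check (one design variable, T = 2, hypothetical data; helper names are ours):

```python
def p_opt(a, b, xs):
    """Optimal value of (P) for one design variable x over a grid xs:
    min over x of  sum_t |b_t - a_t * x|."""
    return min(sum(abs(bt - at * x) for at, bt in zip(a, b)) for x in xs)

def lp_opt(a, b, xs, ys):
    """Optimal value of (LP) for T = 2, brute-forced over grids: minimize
    y_1 + y_2 subject to  -y_t <= b_t - a_t * x <= y_t  (analysis variables
    y replacing the absolute values)."""
    best = float("inf")
    for x in xs:
        for y1 in ys:
            for y2 in ys:
                if all(-yt <= bt - at * x <= yt
                       for yt, at, bt in zip((y1, y2), a, b)):
                    best = min(best, y1 + y2)
    return best
```

With uncertain data the picture changes, as the text explains: y may adjust to the data while (t, x) may not, so the naive robust counterpart of (LP) is the more conservative of the two.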

In order to take into account a possibility for (part of) the variables to adjust themselves to the
true values of (part of) the data, we could act as follows.

Adjustable and non-adjustable decision variables. Consider an uncertain LP program.
Without loss of generality, we may assume that the data are affinely parameterized by properly
chosen “perturbation vector” ζ running through a given perturbation set Z; thus, our uncertain
LP can be represented as the family of LP instances
    LP = { min_x { c^T[ζ] x : A[ζ] x − b[ζ] ≥ 0 } : ζ ∈ Z }.

Now assume that decision variable xj is allowed to depend on part of the true data. Since the
true data are affine functions of ζ, this is the same as to assume that xj can depend on “a part”
Pj ζ of the perturbation vector, where Pj is a given matrix. The case of Pj = 0 correspond to
“here and now” decisions xj – those which should be done in advance; we shall call these decision
variables non-adjustable. The case of nonzero Pj (“adjustable decision variable”) corresponds
to allowing certain dependence of xj on the data, and the case when Pj has trivial kernel means
that xj is allowed to depend on the entire true data.

Adjustable Robust Counterpart of LP. With our assumptions, a natural modification of
the Robust Optimization methodology results in the following Adjustable Robust Counterpart
of LP:

    min_{t, {φ_j(·)}_{j=1}^n} { t :  Σ_{j=1}^n c_j[ζ] φ_j(P_j ζ) ≤ t              ∀ζ ∈ Z,
                                     Σ_{j=1}^n φ_j(P_j ζ) A_j[ζ] − b[ζ] ≥ 0       ∀ζ ∈ Z }    (ARC)
Here c_j[ζ] is the j-th entry of the objective vector, and A_j[ζ] is the j-th column of the constraint matrix.
It should be stressed that the variables in (ARC) corresponding to adjustable decision vari-
ables in the original problem are not reals; they are “decision rules” – real-valued functions of the
corresponding portion Pj ζ of the data. This fact makes (ARC) an infinite-dimensional optimization
problem, and thus a problem which is extremely difficult for numerical processing. Indeed, in
general it is unclear how to represent in a tractable way a general-type function of three (not
speaking of three hundred) variables; and how could we hope to find, in an efficient manner,
optimal decision rules when we do not even know how to write them down? Thus, in general
(ARC) has no actual meaning – basically all we can do with the problem is to write it down on
paper and then look at it...

2.4.5.1 Affinely Adjustable Robust Counterpart of LP
A natural way to overcome the outlined difficulty is to restrict the decision rules to be “easily
representable”, specifically, to be affine functions of the allowed portions of data:

    φ_j(P_j ζ) = μ_j + ν_j^T P_j ζ.

With this approach, our new decision variables become reals μ_j and vectors ν_j, and (ARC)
becomes the following problem (called the Affinely Adjustable Robust Counterpart of LP):

    min_{t, {μ_j, ν_j}_{j=1}^n} { t :  Σ_j c_j[ζ] [μ_j + ν_j^T P_j ζ] ≤ t              ∀ζ ∈ Z,
                                       Σ_j [μ_j + ν_j^T P_j ζ] A_j[ζ] − b[ζ] ≥ 0       ∀ζ ∈ Z }    (AARC)

Note that the AARC is “in-between” the usual non-adjustable RC (no dependence of variables
on the true data at all) and the ARC (arbitrary dependencies of the decision variables on the

allowed portions of the true data). Note also that the only reason to restrict ourselves with
affine decision rules is the desire to end up with a “tractable” robust counterpart, and even
this natural goal for the time being is not achieved. Indeed, the constraints in (AARC) are
affine in our new decision variables t, µj , νj , which is a good news. At the same time, they are
semi-infinite, same as in the case of the non-adjustable Robust Counterpart, but, in contrast
to this latter case, in general they are quadratic in the perturbations rather than linear in them.
This indeed makes a difference: as we know from Proposition 2.4.2, the usual – non-adjustable
– RC of an uncertain LP with CQr uncertainty set is equivalent to an explicit Conic Quadratic
problem and as such is computationally tractable (in fact, the latter remain true for the case
of non-adjustable RC of uncertain LP with arbitrary “computationally tractable” uncertainty
set). In contrast to this, AARC can become intractable for uncertainty sets as simple as boxes.
There are, however, good news on AARCs:

• First, there exists a generic “good case” where the AARC is tractable. This is the “fixed
recourse” case, where the coefficients of adjustable variables xj – those with Pj ≠ 0 – are
certain (not affected by uncertainty). In this case, the left hand sides of the constraints
in (AARC) are affine in ζ, and thus AARC, same as the usual non-adjustable RC, is
computationally tractable whenever the perturbation set Z is so; in particular, Proposition
2.4.2 remains valid for both RC and AARC.

• Second, we shall see in Lecture 3 that even when AARC is intractable, it still admits tight,
in certain precise sense, tractable approximations.

2.4.5.2 Example: Uncertain Inventory Management Problem
The model. Consider a single product inventory system comprised of a warehouse and I
factories. The planning horizon is T periods. At a period t:

• dt is the demand for the product. All the demand must be satisfied;

• v(t) is the amount of the product in the warehouse at the beginning of the period (v(1) is
given);

• pi (t) is the i-th order of the period – the amount of the product to be produced during
the period by factory i and used to satisfy the demand of the period (and, perhaps, to
replenish the warehouse);

• Pi (t) is the maximal production capacity of factory i;

• ci (t) is the cost of producing a unit of the product at a factory i.

Other parameters of the problem are:

• Vmin - the minimal allowed level of inventory at the warehouse;

• Vmax - the maximal storage capacity of the warehouse;

• Qi - the maximal cumulative production capacity of i’th factory throughout the planning
horizon.

The goal is to minimize the total production cost over all factories and the entire planning
period. When all the data are certain, the problem can be modelled by the following linear
program:

    min_{p_i(t), v(t), F}  F
    s.t.  Σ_{t=1}^T Σ_{i=1}^I c_i(t) p_i(t) ≤ F
          0 ≤ p_i(t) ≤ P_i(t),                        i = 1, ..., I, t = 1, ..., T
          Σ_{t=1}^T p_i(t) ≤ Q(i),                    i = 1, ..., I                    (2.4.5)
          v(t+1) = v(t) + Σ_{i=1}^I p_i(t) − d_t,     t = 1, ..., T
          V_min ≤ v(t) ≤ V_max,                       t = 2, ..., T+1.

Eliminating the v-variables, we get an inequality constrained problem:

    min_{p_i(t), F}  F
    s.t.  Σ_{t=1}^T Σ_{i=1}^I c_i(t) p_i(t) ≤ F
          0 ≤ p_i(t) ≤ P_i(t),                                       i = 1, ..., I, t = 1, ..., T
          Σ_{t=1}^T p_i(t) ≤ Q(i),                                   i = 1, ..., I        (2.4.6)
          V_min ≤ v(1) + Σ_{s=1}^t Σ_{i=1}^I p_i(s) − Σ_{s=1}^t d_s ≤ V_max,   t = 1, ..., T.
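The passage from (2.4.5) to (2.4.6) is just the unrolling of the balance equation: iterating v(t+1) = v(t) + Σ_i p_i(t) − d_t gives v(t+1) = v(1) + Σ_{s≤t} Σ_i p_i(s) − Σ_{s≤t} d_s. A quick pure-Python check of this identity on hypothetical data (helper names are ours):

```python
def inventory_recursive(v1, p, d):
    """v(t+1) = v(t) + sum_i p_i(t) - d_t, starting from v(1) = v1;
    p[t] is the list of factory orders of period t, d[t] the demand."""
    v = [v1]
    for t in range(len(d)):
        v.append(v[-1] + sum(p[t]) - d[t])
    return v[1:]

def inventory_closed_form(v1, p, d):
    """v(t+1) = v(1) + sum_{s<=t} sum_i p_i(s) - sum_{s<=t} d_s,
    the unrolled form used in (2.4.6) after eliminating the v-variables."""
    out = []
    for t in range(1, len(d) + 1):
        out.append(v1 + sum(sum(p[s]) for s in range(t)) - sum(d[:t]))
    return out
```

Both routines trace the same inventory trajectory, which is why the storage bounds in (2.4.6) are equivalent to those in (2.4.5).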

Assume that the decision on supplies pi (t) is made at the beginning of period t, and that we are
allowed to make these decisions on the basis of demands dr observed at periods r ∈ It , where It
is a given subset of {1, ..., t}. Further, assume that we should specify our supply policies before
the planning period starts (“at period 0”), and that when specifying these policies, we do not
know exactly the future demands; all we know is that

dt ∈ [d∗t − θd∗t , d∗t + θd∗t ], t = 1, . . . , T, (2.4.7)

with given positive θ and positive nominal demand d∗t . We have now an uncertain LP, where the
uncertain data are the actual demands dt , the decision variables are the supplies pi (t), and these
decision variables are allowed to depend on the data {dτ : τ ∈ It } which become known when
pi (t) should be specified. Note that our uncertain LP is a “fixed recourse” one – the uncertainty
affects solely the right hand side. Thus, the AARC of the problem is computationally tractable,
which is good. Let us build the AARC. Restricting our decision-making policy to affine
decision rules

    p_i(t) = π^0_{i,t} + Σ_{r∈I_t} π^r_{i,t} d_r,    (2.4.8)

where the coefficients π^r_{i,t} are our new non-adjustable design variables, we get from (2.4.6)
the following uncertain Linear Programming problem in variables π^r_{i,t}, F:

    min_{π,F}  F
    s.t.  Σ_{t=1}^T Σ_{i=1}^I c_i(t) ( π^0_{i,t} + Σ_{r∈I_t} π^r_{i,t} d_r ) ≤ F
          0 ≤ π^0_{i,t} + Σ_{r∈I_t} π^r_{i,t} d_r ≤ P_i(t),        i = 1, ..., I, t = 1, ..., T
          Σ_{t=1}^T ( π^0_{i,t} + Σ_{r∈I_t} π^r_{i,t} d_r ) ≤ Q(i),  i = 1, ..., I          (2.4.9)
          V_min ≤ v(1) + Σ_{s=1}^t Σ_{i=1}^I ( π^0_{i,s} + Σ_{r∈I_s} π^r_{i,s} d_r ) − Σ_{s=1}^t d_s ≤ V_max,
                                                                   t = 1, ..., T
          ∀{d_t ∈ [d*_t − θd*_t, d*_t + θd*_t], t = 1, ..., T},
or, which is the same,

    min_{π,F}  F
    s.t.  Σ_{t=1}^T Σ_{i=1}^I c_i(t) π^0_{i,t} + Σ_{r=1}^T ( Σ_{i=1}^I Σ_{t: r∈I_t} c_i(t) π^r_{i,t} ) d_r − F ≤ 0
          π^0_{i,t} + Σ_{r∈I_t} π^r_{i,t} d_r ≤ P_i(t),            i = 1, ..., I, t = 1, ..., T
          π^0_{i,t} + Σ_{r∈I_t} π^r_{i,t} d_r ≥ 0,                 i = 1, ..., I, t = 1, ..., T
          Σ_{t=1}^T π^0_{i,t} + Σ_{r=1}^T ( Σ_{t: r∈I_t} π^r_{i,t} ) d_r ≤ Q_i,   i = 1, ..., I          (2.4.10)
          Σ_{s=1}^t Σ_{i=1}^I π^0_{i,s} + Σ_{r=1}^t ( Σ_{i=1}^I Σ_{s≤t, r∈I_s} π^r_{i,s} − 1 ) d_r ≤ V_max − v(1),
                                                                   t = 1, ..., T
          −Σ_{s=1}^t Σ_{i=1}^I π^0_{i,s} − Σ_{r=1}^t ( Σ_{i=1}^I Σ_{s≤t, r∈I_s} π^r_{i,s} − 1 ) d_r ≤ v(1) − V_min,
                                                                   t = 1, ..., T
          ∀{d_t ∈ [d*_t − θd*_t, d*_t + θd*_t], t = 1, ..., T}.
Now, using the equivalences

    Σ_{t=1}^T d_t x_t ≤ y  ∀d_t ∈ [d*_t(1 − θ), d*_t(1 + θ)]
      ⇕
    Σ_{t: x_t<0} d*_t(1 − θ) x_t + Σ_{t: x_t>0} d*_t(1 + θ) x_t ≤ y
      ⇕
    Σ_{t=1}^T d*_t x_t + θ Σ_{t=1}^T d*_t |x_t| ≤ y,
and defining additional variables

    α_r ≡ Σ_{i=1}^I Σ_{t: r∈I_t} c_i(t) π^r_{i,t};    δ^r_i ≡ Σ_{t: r∈I_t} π^r_{i,t};
    ξ^r_t ≡ Σ_{i=1}^I Σ_{s≤t, r∈I_s} π^r_{i,s} − 1,

we can straightforwardly convert the AARC (2.4.10) into an equivalent LP (cf. Remark 2.4.1):

    min_{π, F, α, β, γ, δ, ζ, ξ, η}  F
    s.t.
      Σ_{i=1}^I Σ_{t: r∈I_t} c_i(t) π^r_{i,t} = α_r,  −β_r ≤ α_r ≤ β_r,  1 ≤ r ≤ T,
      Σ_{t=1}^T Σ_{i=1}^I c_i(t) π^0_{i,t} + Σ_{r=1}^T α_r d*_r + θ Σ_{r=1}^T β_r d*_r ≤ F;
      −γ^r_{i,t} ≤ π^r_{i,t} ≤ γ^r_{i,t},  r ∈ I_t,
      π^0_{i,t} + Σ_{r∈I_t} π^r_{i,t} d*_r + θ Σ_{r∈I_t} γ^r_{i,t} d*_r ≤ P_i(t),  1 ≤ i ≤ I, 1 ≤ t ≤ T;
      π^0_{i,t} + Σ_{r∈I_t} π^r_{i,t} d*_r − θ Σ_{r∈I_t} γ^r_{i,t} d*_r ≥ 0,  1 ≤ i ≤ I, 1 ≤ t ≤ T;
      Σ_{t: r∈I_t} π^r_{i,t} = δ^r_i,  −ζ^r_i ≤ δ^r_i ≤ ζ^r_i,  1 ≤ i ≤ I, 1 ≤ r ≤ T,
      Σ_{t=1}^T π^0_{i,t} + Σ_{r=1}^T δ^r_i d*_r + θ Σ_{r=1}^T ζ^r_i d*_r ≤ Q_i,  1 ≤ i ≤ I;
      Σ_{i=1}^I Σ_{s≤t, r∈I_s} π^r_{i,s} − ξ^r_t = 1,  −η^r_t ≤ ξ^r_t ≤ η^r_t,  1 ≤ r ≤ t ≤ T,
      Σ_{s=1}^t Σ_{i=1}^I π^0_{i,s} + Σ_{r=1}^t ξ^r_t d*_r + θ Σ_{r=1}^t η^r_t d*_r ≤ V_max − v(1),  1 ≤ t ≤ T,
      Σ_{s=1}^t Σ_{i=1}^I π^0_{i,s} + Σ_{r=1}^t ξ^r_t d*_r − θ Σ_{r=1}^t η^r_t d*_r ≥ V_min − v(1),  1 ≤ t ≤ T.
                                                                                            (2.4.11)
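The key step behind (2.4.11) is the worst-case equivalence stated above: Σ_t d_t x_t ≤ y holds for all d in the box iff Σ_t d*_t x_t + θ Σ_t d*_t |x_t| ≤ y. A pure-Python sanity check of this equivalence on hypothetical data (the closed form is exact; the sampler can only refute, so it is driven to include the extreme points of the box):

```python
import random

def worst_case_holds(x, y, d_star, theta):
    """Closed form: sum_t d*_t x_t + theta * sum_t d*_t |x_t| <= y."""
    return sum(ds * xt for ds, xt in zip(d_star, x)) \
           + theta * sum(ds * abs(xt) for ds, xt in zip(d_star, x)) <= y + 1e-12

def sampled_holds(x, y, d_star, theta, trials=1000, seed=1):
    """Check sum_t d_t x_t <= y over many points of the box
    d_t in [(1-theta) d*_t, (1+theta) d*_t], including its corners."""
    rng = random.Random(seed)
    for _ in range(trials):
        d = [ds * (1 + theta * rng.choice([-1, 1, rng.uniform(-1, 1)]))
             for ds in d_star]
        if sum(dt * xt for dt, xt in zip(d, x)) > y + 1e-9:
            return False
    return True
```

The worst demand pushes d_t up where x_t > 0 and down where x_t < 0, which is exactly what the θ Σ d*_t |x_t| term accounts for.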

An illustrative example. There are I = 3 factories producing a seasonal product, and one warehouse.
The decisions concerning production are made every month, and we are planning production for 24 months,
thus the time horizon is T = 24 periods. The nominal demand d* is seasonal, reaching its maximum in winter,
specifically,

    d*_t = 1000 ( 1 + (1/2) sin( π(t − 1)/12 ) ),  t = 1, ..., 24.
We assume that the uncertainty level θ is 20%, i.e., dt ∈ [0.8d∗t , 1.2d∗t ], as shown on the picture.
[Figure: Demand over the planning horizon – nominal demand (solid); “demand tube” – nominal demand ±20% (dashed); a sample realization of the actual demand (dotted).]
The production costs per unit of the product depend on the factory and on time and follow the same seasonal
pattern as the demand, i.e., rise in winter and fall in summer. The production cost for factory i at period t
is given by:

    c_i(t) = α_i ( 1 + (1/2) sin( π(t − 1)/12 ) ),  t = 1, ..., 24,
    α_1 = 1,  α_2 = 1.5,  α_3 = 2.

[Figure: Production costs for the 3 factories.]

The maximal per-month production capacity of each factory is Pi(t) = 567 units, and the cumulative
production capacity of each factory over the entire planning period is Qi = 13600. The inventory
at the warehouse should not be less than 500 units, and cannot exceed 2000 units.
With these data, the AARC (2.4.11) of the uncertain inventory problem is an LP, the dimensions of which vary,
depending on the “information basis” (see below), from 919 variables and 1413 constraints (empty information
basis) to 2719 variables and 3213 constraints (on-line information basis).
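The simulation protocol used in the experiments below is easy to sketch in pure Python: draw each d_t uniformly from [(1−θ)d*_t, (1+θ)d*_t], feed the (causal) policy only the past demands, and accumulate the production cost. The policy shown is a hypothetical "cover the nominal demand" baseline for illustration only, not the AARC policy obtained from (2.4.11):

```python
import math, random

T, I, theta = 24, 3, 0.2
alpha = [1.0, 1.5, 2.0]
# index t = 0..23 stands for periods 1..24, so sin(pi*t/12) here is sin(pi*(t-1)/12) in the text
d_star = [1000.0 * (1 + 0.5 * math.sin(math.pi * t / 12)) for t in range(T)]
cost = [[a * (1 + 0.5 * math.sin(math.pi * t / 12)) for t in range(T)] for a in alpha]

def simulate(policy, v1=500.0, seed=0):
    """One simulation run: random demand trajectory in the 20% tube,
    causal policy calls, total production cost and inventory range."""
    rng = random.Random(seed)
    d = [ds * rng.uniform(1 - theta, 1 + theta) for ds in d_star]
    v, total, v_seen = v1, 0.0, [v1]
    for t in range(T):
        p = policy(t, v, d[:t])            # only past demands are revealed
        total += sum(cost[i][t] * p[i] for i in range(I))
        v += sum(p) - d[t]
        v_seen.append(v)
    return total, min(v_seen), max(v_seen)

def split_nominal(t, v, past_demands):
    # hypothetical baseline: cover the nominal demand, split equally between
    # factories; respects the 567-unit capacity since max d*_t / 3 = 500
    return [d_star[t] / I] * I
```

In the actual experiments this harness would be run 100 times per policy and the costs averaged, exactly as described next.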

The experiments. In every one of the experiments, the corresponding management policy was tested
against a given number (100) of simulations; in every one of the simulations, the actual demand dt of period t
was drawn at random, according to the uniform distribution on the segment [(1 − θ)d∗t , (1 + θ)d∗t ] where θ was
the “uncertainty level” characteristic for the experiment. The demands of distinct periods were independent of
each other.
We have conducted two series of experiments:
1. The aim of the first series of experiments was to check the influence of the demand uncertainty θ on the
total production costs corresponding to the robustly adjustable management policy – the policy (2.4.8)
yielded by the optimal solution to the AARC (2.4.11). We compared this cost to the “ideal” one, i.e., the
cost we would have paid in the case when all the demands were known to us in advance and we were using
the corresponding optimal management policy as given by the optimal solution of (2.4.5).
2. The aim of the second series of experiments was to check the influence of the “information basis” allowed
for the management policy, on the resulting management cost. Specifically, in our model as described in
the previous Section, when making decisions pi (t) at time period t, we can make these decisions depending
on the demands of periods r ∈ It , where It is a given subset of the segment {1, 2, ..., t}. The larger are these
subsets, the more flexible can be our decisions, and hopefully the less are the corresponding management
costs. In order to quantify this phenomenon, we considered 4 “information bases” of the decisions:
(a) It = {1, ..., t} (the richest “on-line” information basis);
(b) It = {1, ..., t − 1} (this standard information basis seems to be the most natural “information basis”:
past is known, present and future are unknown);
(c) It = {1, ..., t − 4} (the information about the demand is received with a four-period delay);
(d) It = ∅ (i.e., no adjusting of future decisions to actual demands at all. This “information basis”
corresponds exactly to the management policy yielded by the usual RC of our uncertain LP.).

The results of our experiments are as follows:
1. The influence of the uncertainty level on the management cost. Here we tested the robustly
adjustable management policy with the standard information basis against different levels of uncertainty, specifi-
cally, the levels of 20%, 10%, 5% and 2.5%. For every uncertainty level, we have computed the average (over 100
simulations) management costs when using the corresponding robustly adaptive management policy. We saved

the simulated demand trajectories and then used these trajectories to compute the ideal management costs. The
results are summarized in the table below. As expected, the less is the uncertainty, the closer are our management
costs to the ideal ones. What is surprising, is the low “price of robustness”: even at the 20% uncertainty level,
the average management cost for the robustly adjustable policy was just by 3.4% worse than the corresponding
ideal cost; the similar quantity for 2.5%-uncertainty in the demand was just 0.3%.

                      AARC                  Ideal case
Uncertainty     Mean        Std         Mean        Std         Price of robustness
2.5%            33974       190         33878       194         0.3%
5%              34063       432         33864       454         0.6%
10%             34471       595         34009       621         1.6%
20%             35121       1458        33958       1541        3.4%

Management costs vs. uncertainty level

2. The influence of the information basis. The influence of the information basis on the performance of the
robustly adjustable management policy is displayed in the following table:

Information basis for decision pi(t)            Management cost
(demand known in periods)                   Mean            Std
1, ..., t                                   34583           1475
1, ..., t − 1                               35121           1458
1, ..., t − 4                               Infeasible
∅                                           Infeasible

These experiments were carried out at the uncertainty level of 20%. We see that the poorer is the information
basis of our management policy, the worse are the results yielded by this policy. In particular, with 20% level
of uncertainty, there does not exist a robust non-adjustable management policy: the usual RC of our uncertain
LP is infeasible. In other words, in our illustrating example, passing from a priori decisions yielded by RC to
“adjustable” decisions yielded by AARC is indeed crucial.
An interesting question is what is the uncertainty level which still allows for a priori decisions. It turns out
that the RC is infeasible even at the 5% uncertainty level. Only at an uncertainty level as small as 2.5% does
the RC become feasible, yielding the following management costs:

                      RC                    Ideal cost
Uncertainty     Mean        Std         Mean        Std         Price of robustness
2.5%            35287       0           33842       172         4.3%

Note that even at this unrealistically small uncertainty level the price of robustness for the policy yielded by the
RC is by 4.3% larger than the ideal cost (while for the robustly adjustable management this difference is just
0.3%).

Comparison with Dynamic Programming. The Inventory problem we have considered is a
typical example of sequential decision-making under dynamical uncertainty, where the informa-
tion basis for the decision xt made at time t is the part of the uncertainty revealed at instant t.
This example allows for an instructive comparison of the AARC-based approach with Dynamic
Programming, which is the traditional technique for sequential decision-making under dynami-
cal uncertainty. Restricting ourselves with the case where the decision-making problem can be
modelled as a Linear Programming problem with the data affected by dynamical uncertainty, we
could say that (minimax-oriented) Dynamic Programming is a specific technique for solving the
ARC of this uncertain LP. Therefore when applicable, Dynamic Programming has a significant
advantage as compared to the above AARC-based approach, since it does not impose on the
adjustable variables an “ad hoc” restriction (motivated solely by the desire to end up with a
tractable problem) to be affine functions of the uncertain data. At the same time, the above “if

applicable” is highly restrictive: the computational effort in Dynamic Programming explodes
exponentially with the dimension of the state space of the dynamical system in question. For
example, the simple Inventory problem we have considered has 4-dimensional state space (the
current amount of product in the warehouse plus remaining total capacities of the three facto-
ries), which is already computationally too demanding for accurate implementation of Dynamic
Programming. The main advantage of the AARC-based dynamical decision-making as compared
with Dynamic Programming (as well as with Multi-Stage Stochastic Programming) comes from
the “built-in” computational tractability of the approach, which prevents the “curse of dimen-
sionality” and allows one to routinely process fairly complicated models with high-dimensional state
spaces and many stages.
By the way, it is instructive to compare the AARC approach with Dynamic Programming
when the latter is applicable. For example, let us reduce the number of factories in our Inventory
problem from 3 to 1, increasing the production capacity of this factory from the previous 567 to
1800 units per period, and let us make the cumulative capacity of the factory equal to 24 × 1800,
so that the restriction on cumulative production becomes redundant. The resulting dynamical
decision-making problem has just one-dimensional state space (all which matters for the future
is the current amount of product in the warehouse). Therefore we can easily find by Dynamic
Programming the “minimax optimal” inventory management cost (minimum over arbitrary
causal10) decision rules, maximum over the realizations of the demands from the uncertainty
set). With 20% uncertainty, this minimax optimal inventory management cost turns out to be
Opt∗ = 31269.69. The guarantees for the AARC-based inventory policy can be only worse than
for the minimax optimal one: we should pay a price for restricting the decision rules to be affine
in the demands. How large is this price? Computation shows that the optimal value in the
AARC is OptAARC = 31514.17, i.e., it is just 0.8% larger than the minimax optimal cost
Opt∗ . And all this – at the uncertainty level as large as 20%! We conclude that the AARC is
perhaps not as bad as one could think...
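The one-dimensional minimax Dynamic Programming computation described here is easy to sketch. The toy instance below is purely illustrative (its horizon, capacity, demand range and costs are invented, not the data of the Inventory problem above): the state is the current stock s, and V_t(s) = min_u max_d [stage cost + V_{t+1}(s + u − d)] is computed by backward recursion, i.e., the minimum is over causal decision rules and the maximum over demand realizations.

```python
from functools import lru_cache

# Toy minimax (worst-case demand) Dynamic Programming for a 1-factory
# inventory problem; all numbers are illustrative, not those of the text.
T = 6                                   # number of periods
CAP = 8                                 # per-period production capacity
D_LO, D_HI = 2, 5                       # uncertain integer demand range
C_ORD, C_HOLD, C_BACK = 1.0, 0.2, 3.0   # ordering / holding / backlog costs

def solve(d_lo, d_hi):
    @lru_cache(maxsize=None)
    def V(t, s):
        # V(t, s): worst-case cost-to-go from stock level s at period t
        if t == T:
            return 0.0
        best = float("inf")
        for u in range(CAP + 1):                          # causal decision u_t
            worst = max(C_ORD * u
                        + C_HOLD * max(s + u - d, 0)      # holding cost
                        + C_BACK * max(d - s - u, 0)      # backlog penalty
                        + V(t + 1, s + u - d)
                        for d in range(d_lo, d_hi + 1))   # adversarial demand
            best = min(best, worst)
        return best
    return V(0, 0)

cost_minimax = solve(D_LO, D_HI)
cost_wider = solve(D_LO, D_HI + 2)  # enlarging the uncertainty set
print(cost_minimax, cost_wider)
```

As expected from the minimax structure, enlarging the demand uncertainty set can only increase the guaranteed cost.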

2.5 Does Conic Quadratic Programming exist?


Of course it does. What is actually meant is whether CQP exists as an independent entity. Specifically,
we ask:
(?) Can a conic quadratic problem be “efficiently approximated” by a Linear
Programming one?
To pose the question formally, let us say that a system of linear inequalities

    P y + tp + Qu ≥ 0                                                     (LP)

approximates the conic quadratic inequality

    ‖y‖_2 ≤ t                                                             (CQI)

within accuracy ε (or, which is the same, is an ε-approximation of (CQI)), if

(i) Whenever (y, t) satisfies (CQI), there exists u such that (y, t, u) satisfies (LP);
(ii) Whenever (y, t, u) satisfies (LP), (y, t) “nearly satisfies” (CQI), namely,

    ‖y‖_2 ≤ (1 + ε)t.                                                     (CQI_ε)


10) That is, a decision at instant t depends solely on the demands at instants τ < t.

Note that given a conic quadratic program

    min_x { c^T x : ‖A_i x − b_i‖_2 ≤ c_i^T x − d_i, i = 1, ..., m }      (CQP)

with m_i × n matrices A_i, and ε-approximations

    P_i y_i + t_i p_i + Q_i u_i ≥ 0

of the conic quadratic inequalities

    ‖y_i‖_2 ≤ t_i    [dim y_i = m_i],

one can approximate (CQP) by the Linear Programming program

    min_{x,u} { c^T x : P_i (A_i x − b_i) + (c_i^T x − d_i) p_i + Q_i u_i ≥ 0, i = 1, ..., m };

if ε is small enough, this program is, for every practical purpose, “the same” as (CQP)11).

Now, in principle, any closed cone of the form

    {(y, t) : t ≥ φ(y)}

can be approximated, in the aforementioned sense, by a system of linear inequalities within
any accuracy ε > 0. The question of crucial importance, however, is how large the
approximating system should be – how many linear constraints and additional variables it requires.
With the naive approach to approximating L^{n+1} – “take tangent hyperplanes along a fine finite
grid of boundary directions and replace the Lorentz cone with the resulting polyhedral one” –
the number of linear constraints in, say, a 0.5-approximation blows up exponentially as n grows,
rapidly making the approximation completely meaningless. Surprisingly, there is a much smarter
way to approximate L^{n+1}:

Theorem 2.5.1 Let n be the dimension of y in (CQI), and let 0 < ε < 1/2. There exists
(and can be explicitly written) a system of no more than O(1) n ln(1/ε) linear inequalities of the
form (LP) with dim u ≤ O(1) n ln(1/ε) which is an ε-approximation of (CQI). Here the O(1)’s are
appropriate absolute constants.

To get an impression of the constant factors in the Theorem, look at the numbers I(n, ε) of
linear inequalities and V(n, ε) of additional variables u in an ε-approximation (LP) of the conic
quadratic inequality (CQI) with dim y = n:

       n     ε = 10^{-1}       ε = 10^{-6}        ε = 10^{-14}
            V(n,ε)  I(n,ε)   V(n,ε)  I(n,ε)    V(n,ε)  I(n,ε)
       4       6      17       31      69        70     148
      16      30      83      159     345       361     745
      64     133     363      677    1458      1520    3153
     256     543    1486     2711    5916      6169   12710
    1024    2203    6006    10899   23758     24773   51050
11) Note that standard computers do not distinguish between 1 and 1 ± 10^{-17}. It follows that “numerically
speaking”, with ε ∼ 10^{-17}, (CQI) is the same as (CQI_ε).

You can see that V(n, ε) ≈ 0.7 n ln(1/ε), I(n, ε) ≈ 2 n ln(1/ε).


The smart approximation described in Theorem 2.5.1 is incomparably better than the outlined
naive approximation. On closer inspection, the “power” of the smart approximation
comes from the fact that here we approximate the Lorentz cone by a projection of a simple
higher-dimensional polyhedral cone. When projecting a polyhedral cone living in R^N onto a
linear subspace of dimension ≪ N, you get a polyhedral cone whose number of facets can be
larger than the number of facets of the original cone by a factor exponential in N.
Thus, the projection of a simple (i.e., with a small number of facets) polyhedral cone onto a
subspace of smaller dimension can be a very complicated (with an astronomical number of facets)
polyhedral cone, and it is this fact that is exploited in the approximation scheme to follow.

2.5.1 Proof of Theorem 2.5.1


Let ε > 0 and a positive integer n be given. We intend to build a polyhedral ε-approximation
of the Lorentz cone L^{n+1}. Without loss of generality we may assume that n is an integer power
of 2: n = 2^κ, κ ∈ N.

1^0. “Tower of variables”. The first step of our construction is quite straightforward: we
introduce extra variables to represent a conic quadratic constraint

    √(y_1^2 + ... + y_n^2) ≤ t                                            (CQI)

of dimension n + 1 by a system of conic quadratic constraints of dimension 3 each. Namely,
let us call our original y-variables “variables of generation 0” and let us split them into pairs
(y_1, y_2), ..., (y_{n−1}, y_n). We associate with every one of these pairs its “successor” – an additional
variable “of generation 1”. We split the resulting 2^{κ−1} variables of generation 1 into pairs and
associate with every pair its successor – an additional variable of “generation 2”, and so on;
after κ − 1 steps we end up with two variables of generation κ − 1. Finally, the only variable
of generation κ is the variable t from (CQI).
To introduce convenient notation, let us denote by y_i^ℓ the i-th variable of generation ℓ, so that
y_1^0, ..., y_n^0 are our original y-variables y_1, ..., y_n, y_1^κ ≡ t is the original t-variable, and the “parents”
of y_i^ℓ are the variables y_{2i−1}^{ℓ−1}, y_{2i}^{ℓ−1}.
Note that the total number of all variables in the “tower of variables” we end up with is
2n − 1.
It is clear that the system of constraints

    √([y_{2i−1}^{ℓ−1}]^2 + [y_{2i}^{ℓ−1}]^2) ≤ y_i^ℓ,   i = 1, ..., 2^{κ−ℓ}, ℓ = 1, ..., κ      (2.5.1)

is a representation of (CQI) in the sense that a collection (y_1^0 ≡ y_1, ..., y_n^0 ≡ y_n, y_1^κ ≡ t) can be
extended to a solution of (2.5.1) if and only if (y, t) solves (CQI). Moreover, let Π_ℓ(x_1, x_2, x_3, u^ℓ)
be polyhedral ε_ℓ-approximations of the cone

    L^3 = {(x_1, x_2, x_3) : √(x_1^2 + x_2^2) ≤ x_3},

ℓ = 1, ..., κ. Consider the system of linear constraints in the variables y_i^ℓ, u_i^ℓ:

    Π_ℓ(y_{2i−1}^{ℓ−1}, y_{2i}^{ℓ−1}, y_i^ℓ, u_i^ℓ) ≥ 0,   i = 1, ..., 2^{κ−ℓ}, ℓ = 1, ..., κ.      (2.5.2)

Writing down this system of linear constraints as Π(y, t, u) ≥ 0, where Π is linear in its
arguments, y = (y_1^0, ..., y_n^0), t = y_1^κ, and u is the collection of all u_i^ℓ, ℓ = 1, ..., κ, and all y_i^ℓ,
ℓ = 1, ..., κ − 1, we immediately conclude that Π is a polyhedral ε-approximation of L^{n+1} with

    1 + ε = ∏_{ℓ=1}^{κ} (1 + ε_ℓ).                                        (2.5.3)

In view of this observation, we may focus on building polyhedral approximations of the Lorentz
cone L^3.
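The tower representation can be sanity-checked in a few lines: extending a given y by the minimal admissible values y_i^ℓ = √([y_{2i−1}^{ℓ−1}]^2 + [y_{2i}^{ℓ−1}]^2), the single variable of generation κ comes out equal to ‖y‖_2, and the tower indeed contains 2n − 1 variables. A quick illustrative sketch:

```python
import math, random

def tower_top(y):
    # Minimal extension of the "tower of variables" (2.5.1):
    # each variable of generation l is the Euclidean length of its two parents.
    assert (len(y) & (len(y) - 1)) == 0  # the dimension must be a power of 2
    level = [abs(v) for v in y]
    count = len(level)                   # running total of tower variables
    while len(level) > 1:
        level = [math.hypot(level[2 * i], level[2 * i + 1])
                 for i in range(len(level) // 2)]
        count += len(level)
    return level[0], count

random.seed(0)
y = [random.uniform(-1, 1) for _ in range(8)]
top, count = tower_top(y)
norm = math.sqrt(sum(v * v for v in y))
print(top, norm, count)
```

For n = 8 the count is 8 + 4 + 2 + 1 = 15 = 2n − 1, and the top variable matches ‖y‖_2 up to rounding.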

2^0. The polyhedral approximation of L^3 we intend to use is given by the following system of
linear inequalities (a positive integer ν is the parameter of the construction):

    (a)  ξ^0 ≥ |x_1|,
         η^0 ≥ |x_2|;

    (b)  ξ^j = cos(π/2^{j+1}) ξ^{j−1} + sin(π/2^{j+1}) η^{j−1},
         η^j ≥ |− sin(π/2^{j+1}) ξ^{j−1} + cos(π/2^{j+1}) η^{j−1}|,   j = 1, ..., ν;      (2.5.4)

    (c)  ξ^ν ≤ x_3,
         η^ν ≤ tg(π/2^{ν+1}) ξ^ν.

Note that (2.5.4) can be straightforwardly written down as a system of linear homogeneous
inequalities Π^{(ν)}(x_1, x_2, x_3, u) ≥ 0, where u is the collection of the 2(ν + 1) variables ξ^j, η^j, j = 0, ..., ν.

Proposition 2.5.1 Π^{(ν)} is a polyhedral δ(ν)-approximation of L^3 = {(x_1, x_2, x_3) : √(x_1^2 + x_2^2) ≤ x_3} with

    δ(ν) = 1/cos(π/2^{ν+1}) − 1.                                          (2.5.5)

Proof. We should prove that
(i) If (x_1, x_2, x_3) ∈ L^3, then the triple (x_1, x_2, x_3) can be extended to a solution to (2.5.4);
(ii) If a triple (x_1, x_2, x_3) can be extended to a solution to (2.5.4), then ‖(x_1, x_2)‖_2 ≤ (1 + δ(ν))x_3.
(i): Given (x_1, x_2, x_3) ∈ L^3, let us set ξ^0 = |x_1|, η^0 = |x_2|, thus ensuring (2.5.4.a). Note that
‖(ξ^0, η^0)‖_2 = ‖(x_1, x_2)‖_2 and that the point P^0 = (ξ^0, η^0) belongs to the first quadrant.

Now, for j = 1, ..., ν let us set

    ξ^j = cos(π/2^{j+1}) ξ^{j−1} + sin(π/2^{j+1}) η^{j−1},
    η^j = |− sin(π/2^{j+1}) ξ^{j−1} + cos(π/2^{j+1}) η^{j−1}|,

thus ensuring (2.5.4.b), and let P^j = (ξ^j, η^j). The point P^j is obtained from P^{j−1} by the
following construction: we rotate P^{j−1} clockwise by the angle φ_j = π/2^{j+1}, thus getting a point
Q^{j−1}; if this point is in the upper half-plane, we set P^j = Q^{j−1}, otherwise P^j is the reflection
of Q^{j−1} with respect to the x-axis. From this description it is clear that
(I) ‖P^j‖_2 = ‖P^{j−1}‖_2, so that all vectors P^j are of the same Euclidean norm as P^0, i.e., of
the norm ‖(x_1, x_2)‖_2;

(II) Since the point P^0 is in the first quadrant, the point Q^0 is in the angle −π/4 ≤ arg(P) ≤ π/4,
so that P^1 is in the angle 0 ≤ arg(P) ≤ π/4. The latter relation, in turn, implies that Q^1 is in
the angle −π/8 ≤ arg(P) ≤ π/8, whence P^2 is in the angle 0 ≤ arg(P) ≤ π/8. Similarly, P^3 is in the
angle 0 ≤ arg(P) ≤ π/16, and so on: P^j is in the angle 0 ≤ arg(P) ≤ π/2^{j+1}.
By (I), ξ^ν ≤ ‖P^ν‖_2 = ‖(x_1, x_2)‖_2 ≤ x_3, so that the first inequality in (2.5.4.c) is satisfied.
By (II), P^ν is in the angle 0 ≤ arg(P) ≤ π/2^{ν+1}, so that the second inequality in (2.5.4.c) also is
satisfied. We have extended a point from L^3 to a solution to (2.5.4).
(ii): Assume that (x_1, x_2, x_3) can be extended to a solution (x_1, x_2, x_3, {ξ^j, η^j}_{j=0}^{ν}) to (2.5.4). Let
us set P^j = (ξ^j, η^j). From (2.5.4.a, b) it follows that all vectors P^j have nonnegative coordinates. We have
‖P^0‖_2 ≥ ‖(x_1, x_2)‖_2 by (2.5.4.a). Now, (2.5.4.b) says that the coordinates of P^j are ≥ the absolute
values of the coordinates of P^{j−1} taken in a certain orthonormal system of coordinates, so that
‖P^j‖_2 ≥ ‖P^{j−1}‖_2. Thus, ‖P^ν‖_2 ≥ ‖(x_1, x_2)^T‖_2. On the other hand, by (2.5.4.c) one has

    ‖P^ν‖_2 ≤ (1/cos(π/2^{ν+1})) ξ^ν ≤ (1/cos(π/2^{ν+1})) x_3,

so that ‖(x_1, x_2)^T‖_2 ≤ (1 + δ(ν))x_3, as claimed.                                     2
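The proof translates directly into a numerical check: run the ξ/η recursion of (2.5.4), with the canonical values from part (i), on random points of L^3 and verify feasibility, together with the value of δ(ν) from (2.5.5). A sketch (tolerances added only for floating-point safety):

```python
import math, random

def feasible_extension(x1, x2, x3, nu):
    # Part (i) of the proof: canonical values of xi^j, eta^j for (2.5.4)
    xi, eta = abs(x1), abs(x2)
    for j in range(1, nu + 1):
        phi = math.pi / 2 ** (j + 1)          # clockwise rotation angle
        xi, eta = (math.cos(phi) * xi + math.sin(phi) * eta,
                   abs(-math.sin(phi) * xi + math.cos(phi) * eta))
    # the two constraints (2.5.4.c)
    return (xi <= x3 + 1e-12
            and eta <= math.tan(math.pi / 2 ** (nu + 1)) * xi + 1e-12)

nu = 3
delta = 1.0 / math.cos(math.pi / 2 ** (nu + 1)) - 1.0   # formula (2.5.5)

random.seed(1)
ok = all(feasible_extension(r * math.cos(t), r * math.sin(t), 1.0, nu)
         for r, t in ((random.random(), random.uniform(0, 2 * math.pi))
                      for _ in range(1000)))
print(ok, delta)
```

Every point of L^3 passes, confirming part (i); for ν = 3 the guaranteed accuracy δ(ν) = 1/cos(π/16) − 1 is already below 2%.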

Specifying in (2.5.2) the mappings Π_ℓ(·) as Π^{(ν_ℓ)}(·), we conclude that for every collection
of positive integers ν_1, ..., ν_κ one can point out a polyhedral β-approximation Π^{ν_1,...,ν_κ}(y, t, u) of
L^{n+1}, n = 2^κ:

    (a_{ℓ,i})  ξ_{ℓ,i}^0 ≥ |y_{2i−1}^{ℓ−1}|,
               η_{ℓ,i}^0 ≥ |y_{2i}^{ℓ−1}|;

    (b_{ℓ,i})  ξ_{ℓ,i}^j = cos(π/2^{j+1}) ξ_{ℓ,i}^{j−1} + sin(π/2^{j+1}) η_{ℓ,i}^{j−1},
               η_{ℓ,i}^j ≥ |− sin(π/2^{j+1}) ξ_{ℓ,i}^{j−1} + cos(π/2^{j+1}) η_{ℓ,i}^{j−1}|,   j = 1, ..., ν_ℓ;      (2.5.6)

    (c_{ℓ,i})  ξ_{ℓ,i}^{ν_ℓ} ≤ y_i^ℓ,
               η_{ℓ,i}^{ν_ℓ} ≤ tg(π/2^{ν_ℓ+1}) ξ_{ℓ,i}^{ν_ℓ};

               i = 1, ..., 2^{κ−ℓ}, ℓ = 1, ..., κ.

The approximation possesses the following properties:

1. The dimension of the u-vector (comprised of all variables in (2.5.6) except y_i = y_i^0 and
   t = y_1^κ) is

       p(n, ν_1, ..., ν_κ) ≤ n + O(1) ∑_{ℓ=1}^{κ} 2^{κ−ℓ} ν_ℓ;

2. The image dimension of Π^{ν_1,...,ν_κ}(·) (i.e., the # of linear inequalities plus twice the # of
   linear equations in (2.5.6)) is

       q(n, ν_1, ..., ν_κ) ≤ O(1) ∑_{ℓ=1}^{κ} 2^{κ−ℓ} ν_ℓ;

3. The quality β of the approximation is

       β = β(n; ν_1, ..., ν_κ) = [∏_{ℓ=1}^{κ} 1/cos(π/2^{ν_ℓ+1})] − 1.

3^0. Back to the general case. Given ε ∈ (0, 1] and setting

    ν_ℓ = ⌊O(1) ℓ ln(2/ε)⌋,   ℓ = 1, ..., κ,

with a properly chosen absolute constant O(1), we ensure that

    β(ν_1, ..., ν_κ) ≤ ε,
    p(n, ν_1, ..., ν_κ) ≤ O(1) n ln(2/ε),
    q(n, ν_1, ..., ν_κ) ≤ O(1) n ln(2/ε),

as required.                                                              2

2.6 Exercises for Lecture 2


Solutions to exercises/parts of exercises colored in cyan can be found in section 6.2.

2.6.1 Optimal control in discrete time linear dynamic system


Consider a discrete time linear dynamic system

    x(t) = A(t)x(t − 1) + B(t)u(t),   t = 1, 2, ..., T;
    x(0) = x_0.                                                           (S)

Here:

• t is the (discrete) time;

• x(t) ∈ Rl is the state vector: its value at instant t identifies the state of the controlled
plant;

• u(t) ∈ R^k is the exogenous input at time instant t; {u(t)}_{t=1}^{T} is the control;

• For every t = 1, ..., T, A(t) is a given l × l matrix, and B(t) a given l × k matrix.

A typical problem of optimal control associated with (S) is to minimize a given functional of
the trajectory x(·) under given restrictions on the control. As a simple problem of this type,
consider the optimization model

    min_x { c^T x(T) : (1/2) ∑_{t=1}^{T} u^T(t) Q(t) u(t) ≤ w },          (OC)

where the Q(t) are given positive definite symmetric matrices.
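Part 1) of the exercise below rests on unrolling (S): x(T) = Φ(T, 0)x_0 + ∑_{t=1}^{T} Φ(T, t)B(t)u(t) with Φ(T, t) = A(T)···A(t+1), so that x(T) is affine in the control. The sketch below (random illustrative 2×2 data) checks the unrolled form against direct simulation:

```python
import random

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def mat_mat(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

random.seed(2)
l, k, T = 2, 2, 4
A = [None] + [[[random.uniform(-1, 1) for _ in range(l)] for _ in range(l)] for _ in range(T)]
B = [None] + [[[random.uniform(-1, 1) for _ in range(k)] for _ in range(l)] for _ in range(T)]
u = [None] + [[random.uniform(-1, 1) for _ in range(k)] for _ in range(T)]
x0 = [1.0, -1.0]

# Direct simulation of (S)
x = x0
for t in range(1, T + 1):
    x = [a + b for a, b in zip(mat_vec(A[t], x), mat_vec(B[t], u[t]))]

# Unrolled affine form: x(T) = Phi(0) x0 + sum_t Phi(t) B(t) u(t),
# where Phi(t) = A(T) ... A(t+1)
def Phi(t):
    M = [[1.0 if i == j else 0.0 for j in range(l)] for i in range(l)]
    for s in range(t + 1, T + 1):
        M = mat_mat(A[s], M)
    return M

xT = mat_vec(Phi(0), x0)
for t in range(1, T + 1):
    xT = [a + b for a, b in zip(xT, mat_vec(Phi(t), mat_vec(B[t], u[t])))]

err = max(abs(a - b) for a, b in zip(x, xT))
print(err)
```

The two computations of x(T) agree to rounding error, confirming that the objective of (OC) is linear in u once (S) is eliminated.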

Exercise 2.1 1) Use (S) to express x(T) via the control and convert (OC) into a quadratically
constrained problem with a linear objective w.r.t. the u-variables.
2) Convert the resulting problem to a conic quadratic program.
3) Pass from the resulting problem to its dual and find the optimal solution to the latter problem.

2.6.2 Around stable grasp


Recall that the Stable Grasp Analysis problem is to check whether the system of constraints

    ‖F^i‖_2 ≤ μ (f^i)^T v^i,              i = 1, ..., N,
    (v^i)^T F^i = 0,                      i = 1, ..., N,
    ∑_{i=1}^{N} (f^i + F^i) + F^ext = 0,                                  (SG)
    ∑_{i=1}^{N} p^i × (f^i + F^i) + T^ext = 0

in the 3D vector variables F i is or is not solvable. Here the data are given by a number of 3D
vectors, namely,
• vectors v i – unit inward normals to the surface of the body at the contact points;
• contact points pi ;
• vectors f i – contact forces;
• vectors F ext and T ext of the external force and torque, respectively.
μ > 0 is a given friction coefficient; we assume that (f^i)^T v^i > 0 for all i.
Exercise 2.2 Regarding (SG) as the system of constraints of a maximization program with
trivial objective, build the dual problem.

2.6.3 Around randomly perturbed linear constraints


Consider a linear constraint

    a^T x ≥ b   [x ∈ R^n].                                                (2.6.1)
We have seen that if the coefficients a_j of the left hand side are subject to random perturbations:

    a_j = a_j^* + ε_j,                                                    (2.6.2)

where the ε_j are independent random variables with zero means taking values in the segments [−σ_j, σ_j],
then a “reliable version” of the constraint is

    ∑_j a_j^* x_j − α(x) ≥ b,   α(x) := ω √(∑_j σ_j^2 x_j^2),             (2.6.3)

where ω > 0 is a “safety parameter”. “Reliability” means that if a certain x satisfies (2.6.3), then
x is an “exp{−ω^2/4}-reliable solution to (2.6.1)”, that is, the probability that x fails to satisfy
a realization of the randomly perturbed constraint (2.6.1) does not exceed exp{−ω^2/4} (see
Proposition 2.4.1). Of course, there exists a possibility to build an “absolutely safe” version of
(2.6.1) – (2.6.2) (an analogue of the Robust Counterpart), that is, to require that

    min_{|ε_j| ≤ σ_j} ∑_j (a_j^* + ε_j) x_j ≥ b,

which is exactly the inequality

    ∑_j a_j^* x_j − β(x) ≥ b,   β(x) := ∑_j σ_j |x_j|.                    (2.6.4)

Whenever x satisfies (2.6.4), x satisfies all realizations of (2.6.1), and not just “all, up to exceptions
of small probability”. Since (2.6.4) ensures more guarantees than (2.6.3), it is natural to expect
the latter inequality to be “less conservative” than the former one, that is, to expect
that the solution set of (2.6.3) is larger than the solution set of (2.6.4). Is this indeed
the case? The answer depends on the value of the safety parameter ω: when ω ≤ 1, the
“safety term” α(x) in (2.6.3) is, for every x, not greater than the safety term β(x) in (2.6.4),
so that every solution to (2.6.4) satisfies (2.6.3). When √n > ω > 1, the “safety terms” in
our inequalities become “non-comparable”: depending on x, it may happen that α(x) ≤ β(x)
(which is typical when ω ≪ √n), same as it may happen that α(x) > β(x). Thus, in the
range 1 < ω < √n neither of the inequalities (2.6.3), (2.6.4) is more conservative than the other
one. Finally, when ω ≥ √n, we always have α(x) ≥ β(x) (why?), so that for “large” values
of ω, (2.6.3) is even more conservative than (2.6.4). The bottom line is that (2.6.3) is not a
completely satisfactory candidate for the role of a “reliable version” of the linear constraint (2.6.1)
affected by random perturbations (2.6.2): depending on the safety parameter, this candidate
is not necessarily less conservative than the “absolutely reliable” version (2.6.4).
The goal of the subsequent exercises is to build and investigate an improved version of
(2.6.3).
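The comparisons of the two safety terms are easy to confirm numerically: by the Cauchy–Schwarz inequality ∑_j σ_j|x_j| ≤ √n · √(∑_j σ_j^2 x_j^2), so ω ≥ √n forces α(x) ≥ β(x), while for ω ≤ 1 the chain ω‖·‖_2 ≤ ‖·‖_2 ≤ ‖·‖_1 gives α(x) ≤ β(x). A sketch on random illustrative data:

```python
import math, random

def alpha(x, sig, omega):
    # ellipsoidal safety term of (2.6.3)
    return omega * math.sqrt(sum((s * v) ** 2 for s, v in zip(sig, x)))

def beta(x, sig):
    # box safety term of (2.6.4)
    return sum(s * abs(v) for s, v in zip(sig, x))

random.seed(3)
n = 10
trials = [([random.uniform(-1, 1) for _ in range(n)],
           [random.uniform(0.1, 1) for _ in range(n)]) for _ in range(500)]

# omega <= 1: alpha never exceeds beta
small = all(alpha(x, s, 1.0) <= beta(x, s) + 1e-12 for x, s in trials)
# omega >= sqrt(n): alpha never falls below beta
large = all(alpha(x, s, math.sqrt(n)) >= beta(x, s) - 1e-12 for x, s in trials)
print(small, large)
```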
Exercise 2.3 1) Given x, assume that there exist u, v such that

    (a)  x = u + v,
    (b)  ∑_j a_j^* x_j − ∑_j σ_j |u_j| − ω √(∑_j σ_j^2 v_j^2) ≥ b.        (2.6.5)

Prove that then the probability for x to violate a realization of (2.6.1) is ≤ exp{−ω^2/4} (and is
≤ exp{−ω^2/2} in the case of symmetrically distributed ε_j).
2) Verify that the requirement “x can be extended, by properly chosen u, v, to a solution of
(2.6.5)” is weaker than every one of the requirements
(a) x satisfies (2.6.3)
(b) x satisfies (2.6.4)
The conclusion of Exercise 2.3 is:
A good “reliable version” of randomly perturbed constraint (2.6.1) – (2.6.2) is system
(2.6.5) of linear and conic quadratic constraints in variables x, u, v:
• whenever x can be extended to a solution of system (2.6.5), x is exp{−ω 2 /4}-
reliable solution to (2.6.1) (when the perturbations are symmetrically distributed,
you can replace exp{−ω 2 /4} with exp{−ω 2 /2});
• at the same time, “as far as x is concerned”, system (2.6.5) is less conservative
than every one of the inequalities (2.6.3), (2.6.4): if x solves one of these inequalities,
x can be extended to a feasible solution of the system.
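A Monte Carlo sanity check of the exp{−ω^2/4} guarantee is straightforward (all data below are illustrative): make x satisfy (2.6.3) as an equality by setting b accordingly, then estimate the violation probability under uniform perturbations ε_j ∈ [−σ_j, σ_j]. The observed frequency should stay, typically well, below the bound:

```python
import math, random

random.seed(4)
n, omega = 20, 2.0
a_star = [random.uniform(-1, 1) for _ in range(n)]
sigma = [random.uniform(0.05, 0.5) for _ in range(n)]
x = [random.uniform(-1, 1) for _ in range(n)]

# make x satisfy (2.6.3) with equality
alpha = omega * math.sqrt(sum((s * v) ** 2 for s, v in zip(sigma, x)))
b = sum(a * v for a, v in zip(a_star, x)) - alpha

N = 20000
violations = 0
for _ in range(N):
    # eps_j uniform on [-sigma_j, sigma_j]: zero mean, bounded as required
    lhs = sum((a + random.uniform(-s, s)) * v
              for a, s, v in zip(a_star, sigma, x))
    violations += lhs < b
freq = violations / N
bound = math.exp(-omega ** 2 / 4)   # the Proposition 2.4.1 guarantee
print(freq, bound)
```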
Recall that both (2.6.3) and (2.6.4) are Robust Counterparts

    min_{a∈U} a^T x ≥ b                                                   (2.6.6)

of (2.6.1) corresponding to certain choices of the uncertainty set U: (2.6.3) corresponds to the
ellipsoidal uncertainty set

    U = {a : a_j = a_j^* + σ_j ζ_j, ∑_j ζ_j^2 ≤ ω^2},

while (2.6.4) corresponds to the box uncertainty set

    U = {a : a_j = a_j^* + σ_j ζ_j, max_j |ζ_j| ≤ 1}.

What about (2.6.5)? Here is the answer:

(!) System (2.6.5) is (equivalent to) the Robust Counterpart (2.6.6) of (2.6.1), the
uncertainty set being the intersection of the above ellipsoid and box:

    U_* = {a : a_j = a_j^* + σ_j ζ_j, ∑_j ζ_j^2 ≤ ω^2, max_j |ζ_j| ≤ 1}.

Specifically, x can be extended to a feasible solution of (2.6.5) if and only if one has

    min_{a∈U_*} a^T x ≥ b.

Exercise 2.4 Prove (!) by demonstrating that

    max_z { p^T z : ∑_j z_j^2 ≤ R^2, |z_j| ≤ 1 ∀j } = min_{u,v} { ∑_j |u_j| + R‖v‖_2 : u + v = p }.

Exercise 2.5 Extend the above constructions and results to the case of an uncertain linear
inequality

    a^T x ≥ b

with certain b and the vector of coefficients a randomly perturbed according to the scheme

    a = a^* + Bε,

where B is deterministic, and the entries ε_1, ..., ε_N of ε are independent random variables with
zero means and such that |ε_i| ≤ σ_i for all i (the σ_i are deterministic).

2.6.4 Around Robust Antenna Design


Consider the Antenna Design problem as follows:
Given locations p_1, ..., p_k ∈ R^3 of k coherent harmonic oscillators, design an antenna
array which sends as much energy as possible in a given direction (which w.l.o.g.
may be taken as the positive direction of the x-axis).
Of course, this is an informal setting. The goal of the subsequent exercises is to build and process
the corresponding model.

Background. In what follows, you can take for granted the following facts:
1. The diagram of a “standardly invoked” harmonic oscillator placed at a point p ∈ R^3 is the
following function of a 3D unit direction δ:

    D_p(δ) = cos(2π p^T δ / λ) + i sin(2π p^T δ / λ)    [δ ∈ R^3, δ^T δ = 1]      (2.6.7)

where λ is the wavelength, and i is the imaginary unit.



2. The diagram of an array of oscillators placed at points p_1, ..., p_k is the function

    D(δ) = ∑_{ℓ=1}^{k} z_ℓ D_{p_ℓ}(δ),

where the z_ℓ are the “element weights” (which form the antenna design and can be arbitrary
complex numbers).

3. A natural way for engineers to measure the “concentration” of the energy sent by the antenna
around a given direction e (which from now on is the positive direction of the x-axis) is

   • to choose a θ > 0 and to define the corresponding sidelobe angle Δ_θ as the set of all
     unit 3D directions δ which are at angle ≥ θ with the direction e;

   • to measure the “energy concentration” by the index ρ = |D(e)| / max_{δ∈Δ_θ} |D(δ)|, where D(·) is the
     diagram of the antenna.

4. To make the index easily computable, let us replace in its definition the maximum over
the entire sidelobe angle with the maximum over a given “fine finite grid” Γ ⊂ Δ_θ, thus
arriving at the quantity

    ρ = |D(e)| / max_{δ∈Γ} |D(δ)|,

which we from now on call the concentration index.
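These definitions translate directly into code. The sketch below uses illustrative choices (uniform weights z_ℓ = 1, oscillators on the x-axis, an arbitrary sidelobe grid), not the data of the exercises that follow:

```python
import cmath, math

lam = 2.5
points = [(float(l), 0.0, 0.0) for l in range(8)]  # oscillators on the x-axis

def D_single(p, delta):
    # diagram (2.6.7) of one oscillator at p, in direction delta:
    # cos(2 pi p.delta / lam) + i sin(...) = exp(2i pi p.delta / lam)
    return cmath.exp(2j * math.pi * sum(a * b for a, b in zip(p, delta)) / lam)

def D_array(z, delta):
    return sum(w * D_single(p, delta) for w, p in zip(z, points))

e = (1.0, 0.0, 0.0)
# a crude sidelobe grid: xy-plane directions at angle >= 0.3 rad from e
grid = [(math.cos(t), math.sin(t), 0.0)
        for t in (0.3 + s * (2 * (math.pi - 0.3)) / 63 for s in range(64))]

z = [1.0] * len(points)  # illustrative uniform weights
d0 = abs(D_single(points[0], e))
rho = abs(D_array(z, e)) / max(abs(D_array(z, d)) for d in grid)
print(d0, rho)
```

Each single-oscillator diagram has unit modulus; the concentration index ρ of the uniform design is what the optimization in the next exercises tries to improve by choosing complex weights.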

Developments. Now we can formulate the Antenna Design problem as follows:

(*) Given

   • locations p_1, ..., p_k of harmonic oscillators,
   • wavelength λ,
   • a finite set Γ of unit 3D directions,

choose complex weights z_ℓ = x_ℓ + iy_ℓ, ℓ = 1, ..., k which maximize the index

    ρ = |∑_ℓ z_ℓ D_ℓ(e)| / max_{δ∈Γ} |∑_ℓ z_ℓ D_ℓ(δ)|,                    (2.6.8)

where the D_ℓ(·) are given by (2.6.7).

Exercise 2.6 1) Is the objective (2.6.8) a concave (and thus “easy to maximize”)
function?
2) Prove that (∗) is equivalent to the convex optimization program

    max_{x_ℓ,y_ℓ∈R} { Re(∑_ℓ (x_ℓ + iy_ℓ) D_ℓ(e)) : |∑_ℓ (x_ℓ + iy_ℓ) D_ℓ(δ)| ≤ 1, δ ∈ Γ }.      (2.6.9)

In order to carry out our remaining tasks, it makes sense to approximate (2.6.9) by a Linear
Programming problem. To this end, it suffices to approximate the modulus of a complex number
z (i.e., the Euclidean norm of a 2D vector) by the quantity

    π_J(z) = max_{j=1,...,J} Re(ω_j z)    [ω_j = cos(2πj/J) + i sin(2πj/J)]

(geometrically: we approximate the unit disk in C = R^2 by a circumscribed perfect J-sided
polygon).
Exercise 2.7 Which is larger – π_J(z) or |z|? Within which accuracy does the “polyhedral norm”
π_J(·) approximate the modulus?
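Treat the following as numerical evidence toward the exercise rather than a solution: since the unit ball of π_J is a circumscribed polygon containing the disk, one expects π_J(z) ≤ |z| ≤ π_J(z)/cos(π/J):

```python
import cmath, math

def pi_J(z, J):
    # polyhedral approximation of |z|: max over J rotations of the real part
    return max((cmath.exp(2j * math.pi * j / J) * z).real for j in range(J))

J = 10
zs = [cmath.exp(1j * t / 7.0) * (0.5 + 0.1 * t) for t in range(50)]
lower_ok = all(pi_J(z, J) <= abs(z) + 1e-12 for z in zs)
upper_ok = all(abs(z) <= pi_J(z, J) / math.cos(math.pi / J) + 1e-12 for z in zs)
worst_ratio = min(pi_J(z, J) / abs(z) for z in zs)
print(lower_ok, upper_ok, worst_ratio, math.cos(math.pi / J))
```

The observed worst-case ratio π_J(z)/|z| never drops below cos(π/J), consistent with the geometric picture.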
With the outlined approximation of the modulus, (2.6.9) becomes the optimization program

    max_{x_ℓ,y_ℓ∈R} { Re(∑_ℓ (x_ℓ + iy_ℓ) D_ℓ(e)) : Re(ω_j ∑_ℓ (x_ℓ + iy_ℓ) D_ℓ(δ)) ≤ 1, 1 ≤ j ≤ J, δ ∈ Γ }.      (2.6.10)

Operational Exercise 2.6.1 1) Verify that (2.6.10) is a Linear Programming program and
solve it numerically for the following two setups:
Data A:
   • k = 16 oscillators placed at the points p_ℓ = (ℓ − 1)e, ℓ = 1, ..., 16;
   • wavelength λ = 2.5;
   • J = 10;
   • sidelobe grid Γ: since with the oscillators located along the x-axis the
     diagram of the array is symmetric with respect to rotations around
     the x-axis, it suffices to look at the “sidelobe directions” from the xy-
     plane. To get Γ, we form the set of all directions which are at angle
     at least θ = 0.3 rad away from the positive direction of the x-axis, and
     take the 64-point equidistant grid in the resulting “arc of directions”, so
     that

         Γ = { δ_s = (cos(α + s dα), sin(α + s dα), 0)^T }_{s=0}^{63}    [α = 0.3, dα = 2(π − α)/63]

Data B: exactly as Data A, except for the wavelength, which is now λ = 5.
2) Assume that in reality the weights are affected by “implementation errors”:

    x_ℓ = x_ℓ^*(1 + σξ_ℓ),   y_ℓ = y_ℓ^*(1 + ση_ℓ),

where x_ℓ^*, y_ℓ^* are the “nominal optimal weights” obtained when solving (2.6.10), x_ℓ, y_ℓ are the
actual weights, σ > 0 is the “perturbation level”, and ξ_ℓ, η_ℓ are mutually independent random
perturbations uniformly distributed in [−1, 1].
2.1) Check by simulation what happens with the concentration index of the actual diagram as
a result of implementation errors. Carry out the simulations for the perturbation level σ taking
the values 1.e-4, 5.e-4, 1.e-3.
2.2) If you are not satisfied with the behaviour of the nominal design(s) in the presence of
implementation errors, use the Robust Counterpart methodology to replace the nominal designs
with robust ones. What is the “price of robustness” in terms of the index? What do you gain in
stability of the diagram w.r.t. implementation errors?
Lecture 3

Semidefinite Programming

In this lecture we study Semidefinite Programming – a generic conic program with an extremely
wide area of applications.

3.1 Semidefinite cone and Semidefinite programs


3.1.1 Preliminaries
Let S^m be the space of symmetric m × m matrices, and M^{m,n} be the space of rectangular m × n
matrices with real entries. In the sequel, we always think of these spaces as Euclidean spaces
equipped with the Frobenius inner product

    ⟨A, B⟩ ≡ Tr(AB^T) = ∑_{i,j} A_{ij} B_{ij},

and we may use in connection with these spaces all notions based upon the Euclidean structure,
e.g., the (Frobenius) norm of a matrix

    ‖X‖_2 = √⟨X, X⟩ = √(∑_{i,j=1}^{m} X_{ij}^2) = √Tr(X^T X),

and likewise the notions of orthogonality, orthogonal complement of a linear subspace, etc. Of
course, the Frobenius inner product of symmetric matrices can be written without the
transposition sign:

    ⟨X, Y⟩ = Tr(XY),   X, Y ∈ S^m.
Let us focus on the space S^m. After it is equipped with the Frobenius inner product, we may
speak about the cone dual to a given cone K ⊂ S^m:

    K_* = {Y ∈ S^m : ⟨Y, X⟩ ≥ 0 ∀X ∈ K}.

Among the cones in S^m, the one of special interest is the semidefinite cone S_+^m, the cone of
all symmetric positive semidefinite matrices1). It is easily seen that S_+^m indeed is a cone, and
moreover it is self-dual:

    (S_+^m)_* = S_+^m.
1) Recall that a symmetric m × m matrix A is called positive semidefinite if x^T Ax ≥ 0 for all x ∈ R^m; an
equivalent definition is that all eigenvalues of A are nonnegative.


Another simple fact is that the interior S_{++}^m of the semidefinite cone S_+^m is exactly the set of all
positive definite symmetric m × m matrices, i.e., symmetric matrices A for which x^T Ax > 0 for
all nonzero vectors x, or, which is the same, symmetric matrices with positive eigenvalues.
The semidefinite cone gives rise to a family of conic programs “minimize a linear objective
over the intersection of the semidefinite cone and an affine plane”; these are the semidefinite
programs we are about to study.
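Self-duality in particular implies ⟨X, Y⟩ ≥ 0 for every pair of positive semidefinite matrices (write X = C^T C and use Tr(C^T C Y) = Tr(C Y C^T) ≥ 0). A quick pure-Python illustration, with matrices represented as lists of lists:

```python
import random

def transpose(M):
    return [list(r) for r in zip(*M)]

def mat_mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def frob(A, B):
    # Frobenius inner product <A, B> = Tr(A B^T) = sum_ij A_ij B_ij
    return sum(A[i][j] * B[i][j] for i in range(len(A)) for j in range(len(A)))

def random_psd(m, rng):
    # C^T C is always symmetric positive semidefinite
    C = [[rng.uniform(-1, 1) for _ in range(m)] for _ in range(m)]
    return mat_mul(transpose(C), C)

rng = random.Random(5)
m = 4
pairs = [(random_psd(m, rng), random_psd(m, rng)) for _ in range(200)]
all_nonneg = all(frob(X, Y) >= -1e-12 for X, Y in pairs)
print(all_nonneg)
```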
Before writing down a generic semidefinite program, we should resolve a small difficulty
with notation. Normally we use lowercase Latin and Greek letters to denote vectors, and the
uppercase letters – to denote matrices; e.g., our usual notation for a conic problem is

    min_x { c^T x : Ax − b ≥_K 0 }.                                       (CP)

In the case of semidefinite programs, where K = S_+^m, the usual notation leads to a conflict with
the notation related to the space where S_+^m lives. Look at (CP): without additional remarks it
is unclear what A is – is it an m × m matrix from the space S^m, or is it a linear mapping acting
from the space of the design vectors – some R^n – to the space S^m? When speaking about a
conic problem on the cone S_+^m, we should have in mind the second interpretation of A, while the
standard notation in (CP) suggests the first (wrong!) interpretation. In other words, we face
the necessity to distinguish between linear mappings acting to/from S^m and elements of
S^m (which themselves are linear mappings from R^m to R^m). In order to resolve the difficulty,
we make the following

Notational convention: To denote a linear mapping acting from a linear space to a space
of matrices (or from a space of matrices to a linear space), we use uppercase script letters like
A, B,... Elements of usual vector spaces Rn are, as always, denoted by lowercase Latin/Greek
letters a, b, ..., z, α, ..., ζ, while elements of a space of matrices usually are denoted by uppercase
Latin letters A, B, ..., Z. According to this convention, a semidefinite program of the form (CP)
should be written as

    min_x { c^T x : Ax − B ≥_{S_+^m} 0 }.                                 (∗)

We also simplify the sign ≥_{S_+^m} to ⪰ and the sign >_{S_+^m} to ≻ (same as we write ≥ instead of ≥_{R_+^m}
and > instead of >_{R_+^m}). Thus, A ⪰ B (⇔ B ⪯ A) means that A and B are symmetric matrices
of the same size and A − B is positive semidefinite, while A ≻ B (⇔ B ≺ A) means that A, B
are symmetric matrices of the same size with A − B positive definite.
Our last convention is how to write down expressions of the type AAxB (A is a linear
mapping from some Rn to Sm , x ∈ Rn , A, B ∈ Sm ); what we are trying to denote is the result
of the following operation: we first take the value Ax of the mapping A at a vector x, thus
getting an m × m matrix Ax, and then multiply this matrix from the left and from the right by
the matrices A, B. In order to avoid misunderstandings, we write expressions of this type as
A[Ax]B
or as AA(x)B, or as AA[x]B.

How to specify a mapping A : R^n → S^m. Natural data specifying a linear mapping A :
R^n → R^m is a collection of n elements of the “destination space” – n vectors a_1, a_2, ..., a_n ∈ R^m
– such that

    Ax = ∑_{j=1}^{n} x_j a_j,   x = (x_1, ..., x_n)^T ∈ R^n.

Similarly, natural data specifying a linear mapping A : R^n → S^m is a collection A_1, ..., A_n of
n matrices from S^m such that

    Ax = ∑_{j=1}^{n} x_j A_j,   x = (x_1, ..., x_n)^T ∈ R^n.              (3.1.1)

In terms of these data, the semidefinite program (*) can be written as

    min_x { c^T x : x_1 A_1 + x_2 A_2 + ... + x_n A_n − B ⪰ 0 }.          (SDPr)

It is a simple exercise to verify that if A is represented as in (3.1.1), then the conjugate to
A, the linear mapping A^* : S^m → R^n, is given by

    A^* Λ = (Tr(ΛA_1), ..., Tr(ΛA_n))^T : S^m → R^n.                      (3.1.2)
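Formula (3.1.2) is the statement that ⟨Λ, Ax⟩ = (A^*Λ)^T x for all x and Λ; this identity can be checked directly on random data (a small pure-Python sketch):

```python
import random

rng = random.Random(6)
m, n = 3, 4

def sym(rng):
    M = [[rng.uniform(-1, 1) for _ in range(m)] for _ in range(m)]
    return [[(M[i][j] + M[j][i]) / 2 for j in range(m)] for i in range(m)]

def frob(A, B):
    # for symmetric A, B: <A, B> = Tr(AB) = sum_ij A_ij B_ij
    return sum(A[i][j] * B[i][j] for i in range(m) for j in range(m))

As = [sym(rng) for _ in range(n)]          # the data A_1, ..., A_n of the mapping
x = [rng.uniform(-1, 1) for _ in range(n)]
Lam = sym(rng)

# Ax = sum_j x_j A_j                        (3.1.1)
Ax = [[sum(x[j] * As[j][i][k] for j in range(n)) for k in range(m)]
      for i in range(m)]
# A* Lam = (Tr(Lam A_1), ..., Tr(Lam A_n))^T   (3.1.2)
A_star = [frob(Lam, As[j]) for j in range(n)]

lhs = frob(Lam, Ax)                        # <Lam, Ax>
rhs = sum(a * v for a, v in zip(A_star, x))  # (A* Lam)^T x
print(lhs, rhs)
```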

Linear Matrix Inequality constraints and semidefinite programs. In the case of conic
quadratic problems, we started with the simplest program of this type – the one with a single
conic quadratic constraint Ax − b ≥_{L^m} 0 – and then defined a conic quadratic program as a
program with finitely many constraints of this type, i.e., as a conic program on a direct product
of ice-cream cones. In contrast to this, when defining a semidefinite program, we impose on
the design vector just one Linear Matrix Inequality (LMI) Ax − B ⪰ 0. Now we indeed should
not bother about more than a single LMI, due to the following simple fact:

A system of finitely many LMI’s

    A_i x − B_i ⪰ 0,   i = 1, ..., k,

is equivalent to the single LMI

    Ax − B ⪰ 0,

with

    Ax = Diag(A_1 x, A_2 x, ..., A_k x),   B = Diag(B_1, ..., B_k);

here for a collection of symmetric matrices Q_1, ..., Q_k, Diag(Q_1, ..., Q_k) denotes the
block-diagonal matrix with the diagonal blocks Q_1, ..., Q_k.

Indeed, a block-diagonal symmetric matrix is positive (semi)definite if and only if all its
diagonal blocks are so.
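The “Indeed” rests on the identity x^T Diag(Q_1, ..., Q_k) x = ∑_i x_i^T Q_i x_i, where the x_i are the corresponding sub-blocks of x; a quick numerical check of this identity:

```python
import random

rng = random.Random(7)
sizes = [2, 3, 1]
Qs = [[[rng.uniform(-1, 1) for _ in range(s)] for _ in range(s)] for s in sizes]
N = sum(sizes)

# Build Diag(Q_1, ..., Q_k)
D = [[0.0] * N for _ in range(N)]
off = 0
for Q, s in zip(Qs, sizes):
    for i in range(s):
        for j in range(s):
            D[off + i][off + j] = Q[i][j]
    off += s

x = [rng.uniform(-1, 1) for _ in range(N)]
quad = sum(x[i] * D[i][j] * x[j] for i in range(N) for j in range(N))

# The same quantity, block by block
total, off = 0.0, 0
for Q, s in zip(Qs, sizes):
    xb = x[off:off + s]
    total += sum(xb[i] * Q[i][j] * xb[j] for i in range(s) for j in range(s))
    off += s
print(quad, total)
```

Since the quadratic form decomposes block-wise, it is nonnegative for all x exactly when every block form is.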

3.1.1.1 Dual to a semidefinite program (SDP)

Specifying the general concept of the conic dual of a conic program in the case when the latter is a
semidefinite program (*) and taking into account (3.1.2), along with the fact that the semidefinite
cone is self-dual, we see that the dual to (*) is the semidefinite program

    max_Λ { ⟨B, Λ⟩ ≡ Tr(BΛ) : Tr(A_i Λ) = c_i, i = 1, ..., n; Λ ⪰ 0 }.   (SDDl)

3.1.1.2 Conic Duality in the case of Semidefinite Programming

Let us see what we get from Conic Duality Theorem in the case of semidefinite programs. Strict
feasibility of (SDPr) means that there exists x such that Ax − B is positive definite, and strict
feasibility of (SDDl) means that there exists a positive definite Λ satisfying A∗ Λ = c. According
to the Refined Conic Duality Theorem, if both primal and dual are essentially strictly feasible,
both are solvable, the optimal values are equal to each other, and the complementary slackness
condition
    [Tr(Λ[Ax − B]) ≡] ⟨Λ, Ax − B⟩ = 0

is necessary and sufficient for a pair of a primal feasible solution x and a dual feasible solution
Λ to be optimal for the corresponding problems.
It is easily seen that for a pair X, Y of positive semidefinite symmetric matrices one has

Tr(XY ) = 0 ⇔ XY = Y X = 0;

in particular, in the case of essentially strictly feasible primal and dual problems, the “primal
slack” S∗ = Ax∗ − B corresponding to a primal optimal solution commutes with (any) dual
optimal solution Λ∗ , and the product of these two matrices is 0. Besides this, S∗ and Λ∗ ,
as a pair of commuting symmetric matrices, share a common eigenbasis, and the fact that
S∗ Λ∗ = 0 means that the eigenvalues of the matrices in this basis are “complementary”: for
every common eigenvector, either the eigenvalue of S∗ , or the one of Λ∗ , or both, are equal to 0
(cf. with complementary slackness in the LP case).
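The equivalence Tr(XY) = 0 ⇔ XY = YX = 0 for positive semidefinite X, Y is easy to probe numerically. A minimal pure-Python sketch (not from the text; the rotation angle and eigenvalues below are arbitrary choices) builds two PSD matrices sharing an eigenbasis with "complementary" eigenvalues and checks that both the trace of the product and the product itself vanish:

```python
import math

def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

# Common eigenbasis: rotate diagonal matrices by an arbitrary angle.
c, s = math.cos(0.3), math.sin(0.3)
Q = [[c, -s], [s, c]]                       # rotation (orthogonal) matrix
Qt = [[c, s], [-s, c]]                      # its transpose

def rotate(D):
    """Return Q * diag(D) * Q^T for D = [d1, d2]."""
    return matmul(matmul(Q, [[D[0], 0.0], [0.0, D[1]]]), Qt)

X = rotate([2.0, 0.0])   # PSD, eigenvalues 2, 0
Y = rotate([0.0, 3.0])   # PSD, eigenvalues 0, 3 -- complementary to X

P = matmul(X, Y)
print(abs(trace(P)) < 1e-12)                                          # True
print(all(abs(P[i][j]) < 1e-12 for i in range(2) for j in range(2)))  # True
```

Since X and Y commute and their eigenvalues are complementary on the common eigenbasis, XY is the zero matrix up to rounding.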

3.1.2 Comments.
Writing down an SDP program as a conic program with a single SDP constraint allows us to save notation; however, in actual applications of Semidefinite Programming, the “maiden” form of an SDP program in variables x ∈ R^n usually is

    Opt(SDP) = min_x { c^T x : Ai x − Bi ⪰ 0, i ≤ m, Rx = r }        (SDP)
    [ Ai x = Σ_{j=1}^n x_j A_i^j ,  A_i^j ∈ S^{k_i} , Bi ∈ S^{k_i} ]

Here is how we build the semidefinite dual of (SDP ) (cf. general considerations of this type in
Section 1.4.3):

• We assign the semidefinite constraints Ai x − Bi ⪰ 0 Lagrange multipliers (“weights”) Λi from the cone dual to the semidefinite cone S^{k_i}_+, that is, from the same semidefinite cone: Λi ⪰ 0, and the linear equality constraints Rx = r - with a Lagrange multiplier µ which is a vector of the same dimension as r;

• We multiply the constraints by the weights and sum up the results, thus arriving at the aggregated constraint

    [ Σ_i A_i^* Λi + R^T µ ]^T x ≥ Σ_i Tr(Bi Λi) + r^T µ        [ A_i^* Λi = [Tr(A_i^1 Λi); ...; Tr(A_i^n Λi)] ]

which by its origin is a consequence of the constraints of (SDP ).



• When the left hand side of the aggregated constraint is, identically in x, equal to c^T x, the right hand side of the constraint is a lower bound on Opt(SDP). The dual problem

    Opt(SDD) = max_{ {Λi}, µ } { Σ_i Tr(Bi Λi) + r^T µ : Σ_i A_i^* Λi + R^T µ = c, Λi ⪰ 0, i ≤ m }        (SDD)

is the problem of maximizing this lower bound over (legitimate) Lagrange multipliers.
Usually this “detailed” form of the dual allows for numerous simplifications (like analytical
elimination of some dual variables) and is much more instructive than the “economical”
single-constraint form of the dual.
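The weak-duality mechanism behind this construction can be replayed on a toy instance. In the hypothetical example below (numbers are arbitrary, not from the text) there is a single scalar variable x, one 2 × 2 LMI x·I − B ⪰ 0, and no equality constraints; for any primal feasible x and any dual feasible Λ (Λ ⪰ 0, Tr(A^1 Λ) = c with A^1 = I), the gap c·x − Tr(BΛ) equals Tr(Λ(xI − B)) ≥ 0:

```python
# Toy instance: min c*x s.t. x*I - B >= 0, with c = Tr(A^1 Lambda), A^1 = I.
B = [[1.0, 0.2], [0.2, 0.5]]
c = 1.0

def tr_prod(P, Q):
    """Tr(P Q) for 2x2 matrices."""
    return sum(P[i][j] * Q[j][i] for i in range(2) for j in range(2))

def is_psd2(M):
    """PSD test for a symmetric 2x2 matrix via its principal minors."""
    return (M[0][0] >= 0 and M[1][1] >= 0
            and M[0][0] * M[1][1] - M[0][1] * M[1][0] >= 0)

x = 1.5                                   # primal candidate (feasible: x >= lmax(B))
S = [[x - B[0][0], -B[0][1]], [-B[1][0], x - B[1][1]]]   # x*I - B
Lam = [[0.6, 0.1], [0.1, 0.4]]            # dual candidate, Tr(Lam) = 1 = c

assert is_psd2(S) and is_psd2(Lam)        # both sides feasible
gap = c * x - tr_prod(B, Lam)             # = Tr(Lam (x*I - B)) since Tr(Lam) = c
print(gap >= 0)                           # weak duality holds -> True
```

Here gap = 1.5 − 0.84 = 0.66; the dual value Tr(BΛ) = 0.84 indeed lower-bounds the primal value 1.5.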

3.2 What can be expressed via LMI’s?


As in the previous lecture, the first thing to realize when speaking about the “semidefinite
programming universe” is how to recognize that a convex optimization program

    min_x { c^T x : x ∈ X = ∩_{i=1}^m Xi }        (P)

can be cast as a semidefinite program. Just as in the previous lecture, this question actually
asks whether a given convex set/convex function is positive semidefinite representable (in short:
SDr). The definition of the latter notion is completely similar to the one of a CQr set/function:

We say that a convex set X ⊂ R^n is SDr, if there exists an affine mapping (x, u) →
A[x; u] − B : R^n_x × R^k_u → S^m such that

    x ∈ X ⇔ ∃u : A[x; u] − B ⪰ 0;

in other words, X is SDr, if there exists an LMI

    A[x; u] − B ⪰ 0,

in the original design vector x and a vector u of additional design variables such that
X is the projection of the solution set of the LMI onto the x-space. An LMI with this
property is called a Semidefinite Representation (SDR) of the set X. We say that this
SDR is strictly/essentially strictly feasible, if the conic constraint A[x; u] − B ⪰ 0
is so.
A convex function f : Rn → R ∪ {+∞} is called SDr, if its epigraph

{(x, t) | t ≥ f (x)}

is a SDr set. A SDR of the epigraph of f is called semidefinite representation of f .

By exactly the same reasons as in the case of conic quadratic problems, one has:

1. If f is a SDr function, then all its level sets {x | f (x) ≤ a} are SDr; the SDRs
of the level sets are explicitly given by (any) SDR of f ;

2. If all the sets Xi in problem (P) are SDr with known SDR’s, then the problem
can explicitly be converted to a semidefinite program.

In order to understand which functions/sets are SDr, we may use the same approach as in
Lecture 2. “The calculus”, i.e., the list of basic operations preserving SD-representability, is
exactly the same as in the case of conic quadratic problems; we just may repeat word by word
the relevant reasoning from Lecture 2, each time replacing “CQr” with “SDr,” and the family
SO of finite direct products of Lorentz cones with the family SD of finite direct products of
semidefinite cones (or, which for our purposes is the same, the family of semidefinite cones);
for more details on this subject, see Section 2.3.7. Thus, the only issue to be addressed is the
derivation of a catalogue of “simple” SDr functions/sets. Our first observation in this direction
is as follows:

1-17. 2) If a function/set is CQr, it is also SDr, and any CQR of the function/set can be
explicitly converted to its SDR.

Indeed, the notion of a CQr/SDr function is a “derivative” of the notion of a CQr/SDr set:
by definition, a function is CQr/SDr if and only if its epigraph is so. Now, CQr sets are
exactly those sets which can be obtained as projections of the solution sets of systems of
conic quadratic inequalities, i.e., as projections of inverse images, under affine mappings, of
direct products of ice-cream cones. Similarly, SDr sets are projections of the inverse images,
under affine mappings, of positive semidefinite cones. Consequently,
(i) in order to verify that a CQr set is SDr as well, it suffices to show that an inverse image,
under an affine mapping, of a direct product of ice-cream cones – a set of the form
    Z = { z | Az − b ∈ K = L^{k_1} × ... × L^{k_l} }

is the inverse image of a semidefinite cone under an affine mapping. To this end, in turn, it
suffices to demonstrate that
(ii) a direct product K = L^{k_1} × ... × L^{k_l} of ice-cream cones is an inverse image of a semidefinite cone
under an affine mapping.
Indeed, representing K as {y | Āy − B̄ ∈ S^m_+}, we get

    Z = {z | Az − b ∈ K} = {z | Âz − B̂ ∈ S^m_+},

where Âz − B̂ = Ā(Az − b) − B̄ is affine.


In turn, in order to prove (ii) it suffices to show that
(iii) Every ice-cream cone Lk is an inverse image of a semidefinite cone under an affine
mapping.

In fact the implication (iii) ⇒ (ii) is given by our calculus, since a direct product of SDr
sets is again SDr3).
2) We refer to Examples 1-17 of CQ-representable functions/sets from Section 2.3.
3) Just to recall where the calculus comes from, here is a direct verification:
Given a direct product K = L^{k_1} × ... × L^{k_l} of ice-cream cones and given that every factor in the product is the inverse

We have reached the point where no more reductions are necessary, and here is the demonstration of (iii). To see that the Lorentz cone L^k, k > 1, is SDr, it suffices to observe
that

                               ⎡ tI_{k−1}   x ⎤
    [x; t] ∈ L^k ⇔ A(x, t) =  ⎢              ⎥ ⪰ 0        (3.2.1)
                               ⎣   x^T      t ⎦

(x is (k − 1)-dimensional, t is scalar, I_{k−1} is the (k − 1) × (k − 1) unit matrix). (3.2.1) indeed
resolves the problem, since the matrix A(x, t) is linear in (x, t)!
It remains to verify (3.2.1), which is immediate. If (x, t) ∈ L^k, i.e., if ‖x‖2 ≤ t, then for
every y = [ξ; τ] ∈ R^k (ξ is (k − 1)-dimensional, τ is scalar) we have

    y^T A(x, t)y = τ²t + 2τx^T ξ + tξ^T ξ ≥ τ²t − 2|τ|‖x‖2 ‖ξ‖2 + t‖ξ‖2²
                 ≥ tτ² − 2t|τ|‖ξ‖2 + t‖ξ‖2²
                 = t(|τ| − ‖ξ‖2)² ≥ 0,

so that A(x, t) ⪰ 0. Vice versa, if A(x, t) ⪰ 0, then of course t ≥ 0. Assuming t = 0, we
immediately obtain x = 0 (since otherwise for y = [x; −1] we would have 0 ≤ y^T A(x, t)y =
−2‖x‖2²); thus, A(x, t) ⪰ 0 implies ‖x‖2 ≤ t in the case of t = 0. To see that the same
implication is valid for t > 0, let us set y = [−x; t] to get

    0 ≤ y^T A(x, t)y = tx^T x − 2tx^T x + t³ = t(t² − x^T x),

whence ‖x‖2 ≤ t, as claimed. □
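The equivalence (3.2.1) is easy to test numerically. In the sketch below (k = 3, arbitrary test points) the "arrow" matrix A(x, t) is checked for positive semidefiniteness by the principal-minor criterion, and the result is compared with ‖x‖2 ≤ t:

```python
def arrow_psd(x, t, tol=1e-12):
    """Check A(x,t) = [[t*I, x], [x^T, t]] >= 0 for x in R^2 (so A is 3x3).
    A symmetric 3x3 matrix is PSD iff all its principal minors are nonnegative."""
    x1, x2 = x
    A = [[t, 0.0, x1], [0.0, t, x2], [x1, x2, t]]
    minors = [A[i][i] for i in range(3)]                 # 1x1 principal minors
    for i in range(3):                                   # 2x2 principal minors
        for j in range(i + 1, 3):
            minors.append(A[i][i] * A[j][j] - A[i][j] * A[j][i])
    det = (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
           - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
           + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))
    minors.append(det)                                   # 3x3 determinant
    return all(m >= -tol for m in minors)

inside = arrow_psd([0.6, 0.8], 1.5)      # ||x||_2 = 1.0 <= 1.5
outside = arrow_psd([0.6, 0.8], 0.9)     # ||x||_2 = 1.0 >  0.9
print(inside, outside)                    # True False
```

For t > 0 the decisive minor is the determinant t(t² − ‖x‖2²), exactly as in the proof above.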

We see that the “expressive abilities” of semidefinite programming are even richer than
those of Conic Quadratic programming. In fact the gap is quite significant. The first new
possibility is the ability to handle eigenvalues, and the importance of this possibility can hardly
be overestimated.

3.2.1 SD-representability of functions of eigenvalues of symmetric matrices


Our first eigenvalue-related observation is as follows:

18. The largest eigenvalue λmax(X) regarded as a function of m × m symmetric matrix X is
SDr. Indeed, the epigraph of this function

    {(X, t) ∈ S^m × R | λmax(X) ≤ t}

is given by the LMI

    tIm − X ⪰ 0,

where Im is the unit m × m matrix.
image of a semidefinite cone under an affine mapping: L^{k_i} = {x_i ∈ R^{k_i} | A_i x_i − B_i ⪰ 0},
we can represent K as the inverse image of a semidefinite cone under an affine mapping, namely, as
K = {x = (x_1, ..., x_l) ∈ R^{k_1} × ... × R^{k_l} | Diag(A_1 x_1 − B_1, ..., A_l x_l − B_l) ⪰ 0}.

Indeed, the eigenvalues of tIm −X are t minus the eigenvalues of X, so that the matrix tIm −X
is positive semidefinite – all its eigenvalues are nonnegative – if and only if t dominates all
eigenvalues of X.
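For 2 × 2 symmetric matrices this claim can be checked against the closed-form largest eigenvalue. A quick sketch (arbitrary numbers, not from the text):

```python
import math

def lmax2(a, b, d):
    """Largest eigenvalue of the symmetric 2x2 matrix [[a, b], [b, d]]."""
    return (a + d) / 2 + math.sqrt(((a - d) / 2) ** 2 + b ** 2)

def shift_psd(a, b, d, t):
    """Is t*I - X positive semidefinite? (2x2 principal-minor test)"""
    m00, m11 = t - a, t - d
    return m00 >= 0 and m11 >= 0 and m00 * m11 - b * b >= 0

a, b, d = 1.0, 2.0, -1.0
lm = lmax2(a, b, d)                      # = sqrt(5) for these numbers
print(shift_psd(a, b, d, lm + 0.1))      # True: t above lambda_max
print(shift_psd(a, b, d, lm - 0.1))      # False: t below lambda_max
```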
The latter example admits a natural generalization. Let M, A be two symmetric m×m matrices,
and let M be positive definite. A real λ and a nonzero vector e are called eigenvalue and
eigenvector of the pencil [M, A], if Ae = λM e (in particular, the usual eigenvalues/eigenvectors
of A are exactly the eigenvalues/eigenvectors of the pencil [Im , A]). Clearly, λ is an eigenvalue
of [M, A] if and only if the matrix λM − A is singular, and nonzero vectors from the kernel of
the latter matrix are exactly the eigenvectors of [M, A] associated with the eigenvalue λ. The
eigenvalues of the pencil [M, A] are the usual eigenvalues of the matrix M −1/2 AM −1/2 , as can
be concluded from:
Det(λM − A) = 0 ⇔ Det(M 1/2 (λIm − M −1/2 AM −1/2 )M 1/2 ) = 0 ⇔ Det(λIm − M −1/2 AM −1/2 ) = 0.
The announced extension of Example 18 is as follows:

18a. [The maximum eigenvalue of a pencil]: Let M be a positive definite symmetric m × m
matrix, and let λmax(X : M) be the largest eigenvalue of the pencil [M, X], where X is a
symmetric m × m matrix. The inequality

    λmax(X : M) ≤ t

is equivalent to the matrix inequality

    tM − X ⪰ 0.

In particular, λmax(X : M), regarded as a function of X, is SDr.

18b. The spectral norm |X| of a symmetric m × m matrix X, i.e., the maximum of absolute values of the eigenvalues of X, is SDr. Indeed, an SDR of the epigraph

    {(X, t) | |X| ≤ t} = {(X, t) | λmax(X) ≤ t, λmax(−X) ≤ t}

of |X| is given by the pair of LMI's

    tIm − X ⪰ 0,    tIm + X ⪰ 0.
In spite of their simplicity, the indicated results are extremely useful. As a more complicated
example, let us build a SDr for the sum of the k largest eigenvalues of a symmetric matrix.
From now on, speaking about m × m symmetric matrix X, we denote by λi (X), i = 1, ..., m,
its eigenvalues counted with their multiplicities and arranged in a non-ascending order:
λ1 (X) ≥ λ2 (X) ≥ ... ≥ λm (X).
The vector of the eigenvalues (in the indicated order) will be denoted λ(X):
λ(X) = (λ1 (X), ..., λm (X))T ∈ Rm .
The question we are about to address is which functions of the eigenvalues are SDr. We already
know that this is the case for the largest eigenvalue λ1 (X). Other eigenvalues cannot be SDr
since they are not convex functions of X. And convexity, of course, is a necessary condition for
SD-representability (cf. Lecture 2). It turns out, however, that the m functions
    Sk(X) = Σ_{i=1}^k λi(X),    k = 1, ..., m,
are convex and, moreover, are SDr:

18c. Sums of largest eigenvalues of a symmetric matrix. Let X be an m × m symmetric matrix, and let k ≤ m. Then the function Sk(X) is SDr. Specifically, the epigraph

    {(X, t) | Sk(X) ≤ t}

of the function admits the SDR

    (a)  t − ks − Tr(Z) ≥ 0
    (b)  Z ⪰ 0                                              (3.2.2)
    (c)  Z − X + sIm ⪰ 0

where Z ∈ S^m and s ∈ R are additional variables.
We should prove that
(i) If a given pair X, t can be extended, by properly chosen s, Z, to a solution of the system
of LMI's (3.2.2), then Sk(X) ≤ t;
(ii) Vice versa, if Sk(X) ≤ t, then the pair X, t can be extended, by properly chosen s, Z, to
a solution of (3.2.2).
To prove (i), we use the following basic fact4):

    (W) The vector λ(X) is a ⪰-monotone function of X ∈ S^m:

        X ⪰ X′ ⇒ λ(X) ≥ λ(X′).

Assuming that (X, t, s, Z) is a solution to (3.2.2), we get X ⪯ Z + sIm, so that

    λ(X) ≤ λ(Z + sIm) = λ(Z) + s(1, ..., 1)^T,

whence

    Sk(X) ≤ Sk(Z) + sk.

Since Z ⪰ 0 (see (3.2.2.b)), we have Sk(Z) ≤ Tr(Z), and combining these inequalities we get

    Sk(X) ≤ Tr(Z) + sk.

The latter inequality, in view of (3.2.2.a), implies Sk(X) ≤ t, and (i) is proved.
To prove (ii), assume that we are given X, t with Sk (X) ≤ t, and let us set s = λk (X).
Then the k largest eigenvalues of the matrix X − sIm are nonnegative, and the remaining are
nonpositive. Let Z be a symmetric matrix with the same eigenbasis as X and such that the
k largest eigenvalues of Z are the same as those of X − sIm , and the remaining eigenvalues
are zeros. The matrices Z and Z − X + sIm are clearly positive semidefinite (the first by
construction, and the second since in the eigenbasis of X this matrix is diagonal with the first
k diagonal entries being 0 and the remaining being the same as those of the matrix sIm − X,
i.e., nonnegative). Thus, the matrix Z and the real s we have built satisfy (3.2.2.b, c). In
order to see that (3.2.2.a) is satisfied as well, note that by construction Tr(Z) = Sk(X) − sk,
whence t − sk − Tr(Z) = t − Sk(X) ≥ 0. □
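The certificate built in part (ii) is easy to trace numerically in the diagonal case, where all matrices involved are diagonal and can be represented by their eigenvalue lists. A sketch with arbitrary eigenvalues (not from the text):

```python
# For diagonal X the certificate is explicit: s = k-th largest eigenvalue,
# Z keeps the k largest eigenvalues of X - s*I and zeroes out the rest.
lams = [5.0, 3.0, 2.0, -1.0]             # eigenvalues of X, non-ascending
k = 2
s = lams[k - 1]                          # s = lambda_k = 3.0
z = [lams[i] - s if i < k else 0.0 for i in range(len(lams))]  # eigenvalues of Z

Sk = sum(lams[:k])                       # S_k(X) = 8.0
t = Sk                                   # tightest t with S_k(X) <= t

print(t - k * s - sum(z) >= 0)           # (3.2.2.a), holds with equality -> True
print(all(zi >= 0 for zi in z))          # (3.2.2.b): Z >= 0             -> True
print(all(z[i] - lams[i] + s >= 0        # (3.2.2.c): Z - X + s*I >= 0   -> True
          for i in range(len(lams))))
```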

4) which is an immediate corollary of the fundamental Variational Characterization of Eigenvalues of symmetric
matrices, see Section A.7.3: for a symmetric m × m matrix A,

    λi(A) = min_{E ∈ Ei} max_{e ∈ E: e^T e = 1} e^T Ae,

where Ei is the collection of all linear subspaces of dimension m − i + 1 in R^m.



In order to proceed, we need the following highly useful technical result:

Lemma 3.2.1 [Lemma on the Schur Complement] Let

    A = ⎡ B    C^T ⎤
        ⎣ C    D   ⎦

be a symmetric matrix with k × k block B and ℓ × ℓ block D. Assume that B is positive definite.
Then A is positive (semi)definite if and only if the matrix

    D − CB^{−1}C^T

is positive (semi)definite (this matrix is called the Schur complement of B in A).

Proof. The positive semidefiniteness of A is equivalent to the fact that

    0 ≤ (x^T, y^T) ⎡ B    C^T ⎤ ⎡ x ⎤  = x^T Bx + 2x^T C^T y + y^T Dy    ∀x ∈ R^k, y ∈ R^ℓ,
                   ⎣ C    D   ⎦ ⎣ y ⎦

or, which is the same, to the fact that

    inf_{x ∈ R^k} [ x^T Bx + 2x^T C^T y + y^T Dy ] ≥ 0    ∀y ∈ R^ℓ.

Since B is positive definite by assumption, the infimum in x can be computed explicitly for every
fixed y: the optimal x is −B^{−1}C^T y, and the optimal value is

    y^T Dy − y^T CB^{−1}C^T y = y^T [D − CB^{−1}C^T] y.

The positive definiteness/semidefiniteness of A is equivalent to the fact that the latter expression is, respectively, positive/nonnegative for every y ≠ 0, i.e., to the positive definiteness/semidefiniteness of the Schur complement of B in A. □
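The Lemma is straightforward to sanity-check numerically. In the 3 × 3 sketch below (arbitrary numbers, B the 2 × 2 positive definite block, C a row, D a scalar), the sign of the Schur complement is compared with a direct leading-principal-minor test of A, together with the determinant identity Det(A) = Det(B)·(D − CB^{−1}C^T):

```python
B = [[2.0, 0.5], [0.5, 1.0]]             # positive definite 2x2 block
C = [0.3, -0.7]                           # 1x2 block
D = 0.4                                   # 1x1 block

detB = B[0][0] * B[1][1] - B[0][1] * B[1][0]
Binv = [[ B[1][1] / detB, -B[0][1] / detB],
        [-B[1][0] / detB,  B[0][0] / detB]]
# Schur complement D - C B^{-1} C^T (a scalar here)
schur = D - sum(C[i] * Binv[i][j] * C[j] for i in range(2) for j in range(2))

# Direct positive-definiteness test of A = [[B, C^T], [C, D]] via leading minors
A = [[2.0, 0.5, 0.3], [0.5, 1.0, -0.7], [0.3, -0.7, 0.4]]
m1 = A[0][0]
m2 = A[0][0] * A[1][1] - A[0][1] * A[1][0]
m3 = (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
      - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
      + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

print((schur > 0) == (m1 > 0 and m2 > 0 and m3 > 0))   # the two tests agree -> True
```

With these numbers the Schur complement is negative, and correspondingly Det(A) < 0, so both tests reject positive definiteness.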

18d. “Determinant” of a symmetric positive semidefinite matrix. Let X be a symmetric
positive semidefinite m × m matrix. Although its determinant

    Det(X) = ∏_{i=1}^m λi(X)

is neither a convex nor a concave function of X (if m ≥ 2), it turns out that the function
Det^q(X) is concave in X whenever 0 ≤ q ≤ 1/m. Functions of this type are important in many
volume-related problems (see below); we are about to prove that

    if q is a rational number, 0 ≤ q ≤ 1/m, then the function

        fq(X) = { −Det^q(X),  X ⪰ 0
                { +∞,         otherwise

    is SDr.

Consider the following system of LMI's:

    ⎡ X     ∆    ⎤
    ⎣ ∆^T   D(∆) ⎦  ⪰ 0,        (D)

where ∆ is an m × m lower triangular matrix comprised of additional variables, and D(∆) is
the diagonal matrix with the same diagonal entries as those of ∆. Let diag(∆) denote the
vector of the diagonal entries of the square matrix ∆.
As we know from Lecture 2 (see Example 15), the set

    {(δ, t) ∈ R^m_+ × R | t ≤ (δ1...δm)^q}

admits an explicit CQR. Consequently, this set admits an explicit SDR as well. The latter
SDR is given by certain LMI S(δ, t; u) ⪰ 0, where u is the vector of additional variables of
the SDR, and S(δ, t; u) is a matrix affinely depending on the arguments. We claim that

    (!) The system of LMI's (D) & S(diag(∆), t; u) ⪰ 0 is an SDR for the set

        {(X, t) | X ⪰ 0, t ≤ Det^q(X)},

    which is basically the epigraph of the function fq (the latter is obtained from our set by
    reflection with respect to the plane t = 0).
To support our claim, recall that by Linear Algebra a matrix X is positive semidefinite if and
only if it can be factorized as X = ∆̂∆̂^T with a lower triangular ∆̂, diag(∆̂) ≥ 0; the resulting
matrix ∆̂ is called the Choleski factor of X. Now note that if X ⪰ 0 and t ≤ Det^q(X), then
(1) We can extend X by an appropriately chosen lower triangular matrix ∆ to a solution of (D)
in such a way that if δ = diag(∆), then ∏_{i=1}^m δi = Det(X).

Indeed, let ∆̂ be the Choleski factor of X. Let D̂ be the diagonal matrix with the same
diagonal entries as those of ∆̂, and let ∆ = ∆̂D̂, so that the diagonal entries δi of ∆ are
squares of the diagonal entries δ̂i of the matrix ∆̂. Thus, D(∆) = D̂². It follows that
for every ε > 0 one has ∆[D(∆) + εI]^{−1}∆^T = ∆̂D̂[D̂² + εI]^{−1}D̂∆̂^T ⪯ ∆̂∆̂^T = X. We
see that by the Schur Complement Lemma all matrices of the form

    ⎡ X     ∆         ⎤
    ⎣ ∆^T   D(∆) + εI ⎦

with ε > 0 are positive semidefinite, whence

    ⎡ X     ∆    ⎤
    ⎣ ∆^T   D(∆) ⎦  ⪰ 0.

Thus, (D) is indeed satisfied by (X, ∆). And of course X = ∆̂∆̂^T ⇒ Det(X) = Det²(∆̂) = ∏_{i=1}^m δ̂i² = ∏_{i=1}^m δi.

(2) Since δ = diag(∆) ≥ 0 and ∏_{i=1}^m δi = Det(X), we get t ≤ Det^q(X) = (∏_{i=1}^m δi)^q, so that we
can extend (t, δ) by a properly chosen u to a solution of the LMI S(diag(∆), t; u) ⪰ 0.
We conclude that if X ⪰ 0 and t ≤ Det^q(X), then one can extend the pair X, t by properly
chosen ∆ and u to a solution of the LMI (D) & S(diag(∆), t; u) ⪰ 0, which is the first part
of the proof of (!).
To complete the proof of (!), it suffices to demonstrate that if for a given pair X, t there
exist ∆ and u such that (D) and the LMI S(diag(∆), t; u) ⪰ 0 are satisfied, then X is
positive semidefinite and t ≤ Det^q(X). This is immediate: denoting δ = diag(∆) [≥ 0]
and applying the Schur Complement Lemma, we conclude that X ⪰ ∆[D(∆) + εI]^{−1}∆^T
for every ε > 0. Applying (W), we get λ(X) ≥ λ(∆[D(∆) + εI]^{−1}∆^T), whence of course
Det(X) ≥ Det(∆[D(∆) + εI]^{−1}∆^T) = ∏_{i=1}^m δi²/(δi + ε). Passing to limit as ε → 0, we get
∏_{i=1}^m δi ≤ Det(X). On the other hand, the LMI S(δ, t; u) ⪰ 0 takes place, which means that
t ≤ (∏_{i=1}^m δi)^q. Combining the resulting inequalities, we come to t ≤ Det^q(X), as required.
□
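The Choleski identity Det(X) = ∏_i δ̂i² used in the proof can be illustrated on a 2 × 2 example (arbitrary numbers; the factor is computed by the textbook Choleski recursion):

```python
import math

X = [[4.0, 2.0], [2.0, 5.0]]             # symmetric positive definite

# Choleski factor L (lower triangular, nonnegative diagonal) with X = L L^T
l11 = math.sqrt(X[0][0])
l21 = X[1][0] / l11
l22 = math.sqrt(X[1][1] - l21 ** 2)
L = [[l11, 0.0], [l21, l22]]

detX = X[0][0] * X[1][1] - X[0][1] * X[1][0]
print(abs(detX - (l11 ** 2) * (l22 ** 2)) < 1e-12)   # Det(X) = l11^2 * l22^2 -> True

# reconstruct X from the factor as a cross-check
rec = [[L[i][0] * L[j][0] + L[i][1] * L[j][1] for j in range(2)] for i in range(2)]
print(all(abs(rec[i][j] - X[i][j]) < 1e-12 for i in range(2) for j in range(2)))
```

Here L = [[2, 0], [1, 2]] and Det(X) = 16 = 2²·2², matching the identity.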

18e. Negative powers of the determinant. Let q be a positive rational. Then the function

    f(X) = { Det^{−q}(X),  X ≻ 0
           { +∞,           otherwise

of symmetric m × m matrix X is SDr.
The construction is completely similar to the one used in Example 18d. As we remember from
Lecture 2, Example 16, the function g(δ) = (δ1...δm)^{−q} of positive vector δ = (δ1, ..., δm)^T is
CQr and is therefore SDr as well. Let an SDR of the function be given by the LMI R(δ, t; u) ⪰ 0. The same arguments as in Example 18d demonstrate that the pair of LMI's (D) &
R(diag(∆), t; u) ⪰ 0 is an SDR for f.
In Examples 18 – 18e we discussed SD-representability of particular functions of
eigenvalues of a symmetric matrix. Here is a general statement of this type:
Proposition 3.2.1 Let g(x1, ..., xm) : R^m → R ∪ {+∞} be a symmetric (i.e., invariant with
respect to permutations of the coordinates x1, ..., xm) SD-representable function:

    t ≥ g(x) ⇔ ∃u : S(x, t, u) ⪰ 0,

with S affinely depending on x, t, u. Then the function

    f(X) = g(λ(X))

of symmetric m × m matrix X is SDr, with SDR given by the relation

    (a)  t ≥ f(X)
         ⇕
    (b)  ∃x1, ..., xm, u :
         S(x1, ..., xm, t, u) ⪰ 0                               (3.2.3)
         x1 ≥ x2 ≥ ... ≥ xm
         Sj(X) ≤ x1 + ... + xj, j = 1, ..., m − 1
         Tr(X) = x1 + ... + xm

(recall that the functions Sj(X) = Σ_{i=1}^j λi(X) are SDr, see Example 18c). Thus, the solution set
of (b) is SDr (as an intersection of SDr sets), which implies SD-representability of the projection
of this set onto the (X, t)-plane; by (3.2.3) the latter projection is exactly the epigraph of f.
Proof of Proposition 3.2.1 is the subject of Exercises in Section 3.8.1.4. This proof is based upon
an extremely useful result known as Birkhoff's Theorem5).
5) The Birkhoff Theorem, which, aside from other applications, implies a number of crucial facts about eigenvalues
of symmetric matrices, by itself does not even mention the word “eigenvalue” and reads: The extreme points of
the polytope P of doubly stochastic m × m matrices – those with nonnegative entries and unit sums of entries in
every row and every column – are exactly the permutation matrices (those with a single nonzero entry, equal to
1, in every row and every column). For a proof, see Section B.2.8.C.1.

As a corollary of Proposition 3.2.1, we see that the following functions of a symmetric m × m
matrix X are SDr:

• f(X) = −Det^q(X), X ⪰ 0, where q ≤ 1/m is a positive rational (this fact was already established
directly);
[here g(x1, ..., xm) = (x1...xm)^q : R^m_+ → R; a CQR (and thus – an SDR) of g is presented
in Example 15 of Lecture 2]

• f(X) = Det^{−q}(X), X ≻ 0, q is a positive rational (cf. Example 18e)
[here g(x1, ..., xm) = (x1...xm)^{−q} : R^m_{++} → R; a CQR of g is presented in Example 16,
Lecture 2]

• ‖X‖p = ( Σ_{i=1}^m |λi(X)|^p )^{1/p}, p ≥ 1 is rational
[here g(x) = ‖x‖p ≡ ( Σ_{i=1}^m |xi|^p )^{1/p}, see Example 17a, Lecture 2]

• ‖X+‖p = ( Σ_{i=1}^m max^p[λi(X), 0] )^{1/p}, p ≥ 1 is rational
[here g(x) = ‖x+‖p ≡ ( Σ_{i=1}^m max^p[xi, 0] )^{1/p}, see Example 17b, Lecture 2]

SD-representability of functions of singular values. Consider the space Mk,l of k × l
rectangular matrices and assume that k ≤ l. Given a matrix A ∈ Mk,l, consider the symmetric
positive semidefinite k × k matrix (AA^T)^{1/2}; its eigenvalues are called singular values of A and
are denoted by σ1(A), ..., σk(A): σi(A) = λi((AA^T)^{1/2}). According to the convention on how we
enumerate eigenvalues of a symmetric matrix, the singular values form a non-ascending sequence:

    σ1(A) ≥ σ2(A) ≥ ... ≥ σk(A).

The importance of the singular values comes from the Singular Value Decomposition Theorem
which states that a k × l matrix A (k ≤ l) can be represented as

    A = Σ_{i=1}^k σi(A) ei fi^T,

where {ei}_{i=1}^k and {fi}_{i=1}^k are orthonormal sequences in R^k and R^l, respectively; this is a
surrogate of the eigenvalue decomposition of a symmetric k × k matrix

    A = Σ_{i=1}^k λi(A) ei ei^T,

where {ei}_{i=1}^k form an orthonormal eigenbasis of A.
Among the singular values of a rectangular matrix, the most important is the largest σ1(A).
This is nothing but the operator (or spectral) norm of A:

    |A| = max{‖Ax‖2 | ‖x‖2 ≤ 1}.



For a symmetric matrix, the singular values are exactly the moduli of the eigenvalues, and our
new definition of the norm coincides with the one already given in 18b.
It turns out that the sum of a given number of the largest singular values of A,

    Σp(A) = Σ_{i=1}^p σi(A),

is a convex and, moreover, an SDr function of A. In particular, the operator norm of A is SDr:

19. The sum Σp(X) of the p largest singular values of a rectangular matrix X ∈ Mk,l is SDr. In
particular, the operator norm of a rectangular matrix is SDr:

    |X| ≤ t  ⇔  ⎡ tIl    −X^T ⎤
                ⎣ −X     tIk  ⎦  ⪰ 0.

Indeed, the result in question follows from the fact that the sums of p largest eigenvalues of
a symmetric matrix are SDr (Example 18c) due to the following

    Observation. The singular values σi(X) of a rectangular k × l matrix X (k ≤ l)
    for i ≤ k are equal to the eigenvalues λi(X̄) of the (k + l) × (k + l) symmetric
    matrix

        X̄ = ⎡ 0    X^T ⎤
            ⎣ X    0   ⎦ .

Since X̄ linearly depends on X, SDR's of the functions Sp(·) induce SDR's of the functions
Σp(X) = Sp(X̄) (Rule on affine substitution, Lecture 2; recall that all “calculus rules”
established in Lecture 2 for CQR's are valid for SDR's as well).
    Let us justify our observation. Let X = Σ_{i=1}^k σi(X) ei fi^T be a singular value decomposition of X. We claim that the 2k (k + l)-dimensional vectors gi^+ = [fi; ei]
    and gi^− = [fi; −ei] are orthogonal to each other, and they are eigenvectors of X̄
    with the eigenvalues σi(X) and −σi(X), respectively. Moreover, X̄ vanishes on
    the orthogonal complement of the linear span of these vectors. In other words,
    we claim that the eigenvalues of X̄, arranged in the non-ascending order, are as
    follows:

        σ1(X), σ2(X), ..., σk(X), 0, ..., 0 [l − k zeros], −σk(X), −σ_{k−1}(X), ..., −σ1(X);

    this, of course, proves our Observation.


    Now, the fact that the 2k vectors gi^±, i = 1, ..., k, are mutually orthogonal and
    nonzero is evident. Furthermore (we write σi instead of σi(X)),

        ⎡ 0    X^T ⎤ ⎡ fi ⎤   ⎡ Σ_{j=1}^k σj fj (ej^T ei) ⎤        ⎡ fi ⎤
        ⎣ X    0   ⎦ ⎣ ei ⎦ = ⎣ Σ_{j=1}^k σj ej (fj^T fi) ⎦ = σi  ⎣ ei ⎦

    (we have used that both {fj} and {ej} are orthonormal systems). Thus, gi^+ is an
    eigenvector of X̄ with the eigenvalue σi(X). Similar computation shows that gi^−
    is an eigenvector of X̄ with the eigenvalue −σi(X).
 
    It remains to verify that if h = [f; e] is orthogonal to all gi^± (f is l-dimensional,
    e is k-dimensional), then X̄h = 0. Indeed, the orthogonality assumption means
    that f^T fi ± e^T ei = 0 for all i, whence e^T ei = 0 and f^T fi = 0 for all i. Consequently,

        ⎡ 0    X^T ⎤ ⎡ f ⎤   ⎡ Σ_{i=1}^k σi fi (ei^T e) ⎤
        ⎣ X    0   ⎦ ⎣ e ⎦ = ⎣ Σ_{i=1}^k σi ei (fi^T f) ⎦ = 0.    □
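The Observation can be spot-checked on a small square example. For the hypothetical X = diag(3, 1) (k = l = 2, so there are no zero eigenvalues), the singular values are 3 and 1, and the vectors g_i^± are built from the standard basis:

```python
X = [[3.0, 0.0], [0.0, 1.0]]
# Xbar = [[0, X^T], [X, 0]] written out as a 4x4 matrix (ordering [f; e])
Xbar = [[0, 0, 3, 0],
        [0, 0, 0, 1],
        [3, 0, 0, 0],
        [0, 1, 0, 0]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

sigma = [3.0, 1.0]
f = [[1, 0], [0, 1]]                     # "f" vectors (standard basis here)
e = [[1, 0], [0, 1]]                     # "e" vectors

for i in range(2):
    g_plus = f[i] + e[i]                 # [f_i; e_i]
    g_minus = f[i] + [-x for x in e[i]]  # [f_i; -e_i]
    assert matvec(Xbar, g_plus) == [sigma[i] * x for x in g_plus]
    assert matvec(Xbar, g_minus) == [-sigma[i] * x for x in g_minus]
print("eigenpairs of Xbar verified")
```

So the spectrum of X̄ is {3, 1, −1, −3}, i.e., ±σ_i(X), as the Observation asserts.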

Looking at Proposition 3.2.1, we see that the fact that specific functions of eigenvalues of a
symmetric matrix X, namely, the sums Sk (X) of k largest eigenvalues of X, are SDr, underlies
the possibility to build SDR’s for a wide class of functions of the eigenvalues. The role of the
sums of k largest singular values of a rectangular matrix X is equally important:

Proposition 3.2.2 Let g(x1, ..., xk) : R^k_+ → R ∪ {+∞} be a symmetric monotone function:

    0 ≤ y ≤ x ∈ Dom g ⇒ g(y) ≤ g(x).

Assume that g is SDr:

    t ≥ g(x) ⇔ ∃u : S(x, t, u) ⪰ 0,

with S affinely depending on x, t, u. Then the function

    f(X) = g(σ(X))

of k × l (k ≤ l) rectangular matrix X is SDr, with SDR given by the relation

    (a)  t ≥ f(X)
         ⇕
    (b)  ∃x1, ..., xk, u :                                      (3.2.4)
         S(x1, ..., xk, t, u) ⪰ 0
         x1 ≥ x2 ≥ ... ≥ xk
         Σj(X) ≤ x1 + ... + xj, j = 1, ..., k

Note the difference between the symmetric (Proposition 3.2.1) and the non-symmetric
(Proposition 3.2.2) situations: in the former the function g(x) was assumed to be SDr and
symmetric only, while in the latter the monotonicity requirement is added.
The proof of Proposition 3.2.2 is outlined in Section 3.8.

“Nonlinear matrix inequalities”. There are several cases when matrix inequalities F (x) 
0, where F is a nonlinear function of x taking values in the space of symmetric m × m matrices,
can be “linearized” – expressed via LMI’s.

20a. General quadratic matrix inequality. Let X be a rectangular k × l matrix and

    F(X) = (AXB)(AXB)^T + CXD + (CXD)^T + E

be a “quadratic” matrix-valued function of X; here A, B, C, D, E = E^T are rectangular matrices
of appropriate sizes. Let m be the row size of the values of F. Consider the “⪰-epigraph” of
the (matrix-valued!) function F – the set

    {(X, Y) ∈ Mk,l × S^m | F(X) ⪯ Y}.

We claim that this set is SDr with the SDR

    ⎡ Ir     (AXB)^T                  ⎤
    ⎣ AXB    Y − E − CXD − (CXD)^T    ⎦  ⪰ 0        [B : l × r]

Indeed, by the Schur Complement Lemma our LMI is satisfied if and only if the Schur
complement of the North-Western block is positive semidefinite, which is exactly our original
“quadratic” matrix inequality.

20b. General “fractional-quadratic” matrix inequality. Let X be a rectangular k × l matrix,
and V be a positive definite symmetric l × l matrix. Then we can define the matrix-valued
function

    F(X, V) = XV^{−1}X^T

taking values in the space of k × k symmetric matrices. We claim that the closure of the
⪰-epigraph of this (matrix-valued!) function, i.e., the set

    E = cl {(X, V; Y) ∈ Mk,l × S^l_{++} × S^k | F(X, V) ≡ XV^{−1}X^T ⪯ Y}

is SDr, and an SDR of this set is given by the LMI

    ⎡ V    X^T ⎤
    ⎣ X    Y   ⎦  ⪰ 0.        (R)

Indeed, by the Schur Complement Lemma a triple (X, V, Y) with positive definite V belongs
to the “epigraph of F” – satisfies the relation F(X, V) ⪯ Y – if and only if it satisfies (R).
Now, if a triple (X, V, Y) belongs to E, i.e., it is the limit of a sequence of triples from
the epigraph of F, then it satisfies (R) (as a limit of triples satisfying (R)). Vice versa, if a
triple (X, V, Y) satisfies (R), then V is positive semidefinite (as a diagonal block in a positive
semidefinite matrix). The “regularized” triples (X, Vε = V + εIl, Y) associated with ε > 0
satisfy (R) along with the triple (X, V, Y); since, as we have just seen, V ⪰ 0, we have
Vε ≻ 0 for ε > 0. Consequently, the triples (X, Vε, Y) belong to the epigraph of F (by our very first
observation); since the triple (X, V, Y) is the limit of the regularized triples which, as we
have seen, all belong to the epigraph of F, the triple (X, V, Y) belongs to the closure E of
this epigraph. □

20c. Matrix inequality Y ⪯ (C^T X^{−1} C)^{−1}. In the case of scalars x, y the inequality y ≤
(cx^{−1}c)^{−1} in variables x, y is just an awkward way to write down the linear inequality y ≤ c^{−2}x,
but it leads naturally to the matrix analogy of the original inequality, namely, Y ⪯ (C^T X^{−1} C)^{−1},
with rectangular m × n matrix C and variable symmetric n × n matrix Y and m × m matrix X.
In order for the matrix inequality to make sense, we should assume that the rank of C equals n
(and thus m ≥ n). Under this assumption, the matrix (C^T X^{−1} C)^{−1} makes sense at least for a
positive definite X. We claim that the closure of the solution set of the resulting inequality –
the set

    𝒳 = cl {(X, Y) ∈ S^m × S^n | X ≻ 0, Y ⪯ (C^T X^{−1} C)^{−1}}

is SDr:

    𝒳 = {(X, Y) | ∃Z : Y ⪯ Z, Z ⪰ 0, X ⪰ CZC^T}.
Indeed, let us denote by 𝒳′ the set in the right hand side of the latter relation; we should
prove that 𝒳′ = 𝒳. By definition, 𝒳 is the closure of its intersection with the domain X ≻ 0.
It is clear that 𝒳′ also is the closure of its intersection with the domain X ≻ 0. Thus, all we
need to prove is that a pair (Y, X) with X ≻ 0 belongs to 𝒳 if and only if it belongs to 𝒳′.
“If” part: Assume that X ≻ 0 and (Y, X) ∈ 𝒳′. Then there exists Z such that Z ⪰ 0,
Z ⪰ Y and X ⪰ CZC^T. Let us choose a sequence Zi ≻ Z such that Zi → Z, i → ∞.
Since CZiC^T → CZC^T ⪯ X as i → ∞, we can find a sequence of matrices Xi such that
Xi → X, i → ∞, and Xi ≻ CZiC^T for all i. By the Schur Complement Lemma, the
matrices

    ⎡ Xi     C       ⎤
    ⎣ C^T    Zi^{−1} ⎦

are positive definite; applying this lemma again, we conclude that
Zi^{−1} ≻ C^T Xi^{−1} C. Note that the left and the right hand side matrices in the latter inequality

Lemma 3.2.2 Let U, V be positive definite matrices of the same size. Then

    U ⪯ V ⇔ U^{−1} ⪰ V^{−1}.

Proof. Note that we can multiply an inequality A ⪰ B by a matrix Q from the
left and Q^T from the right:

    A ⪰ B ⇒ QAQ^T ⪰ QBQ^T        [A, B ∈ S^m, Q ∈ Mk,m]

(why?) Thus, if 0 ≺ U ⪯ V, then V^{−1/2}UV^{−1/2} ⪯ V^{−1/2}VV^{−1/2} = I (note
that V^{−1/2} = [V^{−1/2}]^T), whence clearly V^{1/2}U^{−1}V^{1/2} = [V^{−1/2}UV^{−1/2}]^{−1} ⪰ I.
Thus, V^{1/2}U^{−1}V^{1/2} ⪰ I; multiplying this inequality from the left and from the
right by V^{−1/2} = [V^{−1/2}]^T, we get U^{−1} ⪰ V^{−1}. □
Applying Lemma 3.2.2 to the inequality Zi−1  C T Xi−1 C[ 0], we get Zi  (C T Xi−1 C)−1 .
As i → ∞, the left hand side in this inequality converges to Z, and the right hand side
converges to (C T X −1 C)−1 . Hence Z  (C T X −1 C)−1 , and since Y  Z, we get Y 
(C T X −1 C)−1 , as claimed.

“Only if” part: Let X  0 and Y  (C T X −1 C)−1 ; we should prove that there exists Z  0
such that Z  Y and X  CZC T . We claim that the required relations are satisfied by
Z = (C T X −1 C)−1 . The only nontrivial part of the claim is that X  CZC T , and here is the
required
 −1 justification:
 by its origin Z  0, and by the Schur Complement Lemma the matrix
Z CT
is positive semidefinite, whence, by the same Lemma, X  C(Z −1 )−1 C T =
C X
CZC T .

Nonnegative polynomials. Consider the problem of the best polynomial approximation –


given a function f on a certain interval, we want to find its best uniform (or Least Squares, etc.) approximation by a polynomial of a given degree. This problem arises typically as a subproblem in all kinds of signal processing problems. In some situations the approximating polynomial is required to be nonnegative (think, e.g., of the case where the resulting polynomial is an estimate of an unknown probability density); how to express the nonnegativity restriction? As was shown by Yu. Nesterov [44], it can be done via semidefinite programming:
The set of all nonnegative (on the entire axis, or on a given ray, or on a given segment)
polynomials of a given degree is SDr.
In this statement (and everywhere below) we identify a polynomial p(t) = Σ_{i=0}^{k} p_i t^i of degree (not exceeding) k with the (k+1)-dimensional vector Coef(p) = (p_0, p_1, ..., p_k)^T of the coefficients of p. Consequently, a set of polynomials of degree ≤ k becomes a set in R^{k+1}, and we may ask whether this set is or is not SDr.
Let us look at the SDRs of different sets of nonnegative polynomials. The key here is to get an SDR for the set P^+_{2k}(R) of polynomials of (at most) a given degree 2k which are nonnegative on the entire axis^{6)}.

21a. Polynomials nonnegative on the entire axis: The set P^+_{2k}(R) is SDr – it is the image of the semidefinite cone S^{k+1}_+ under the affine mapping

X ↦ Coef(e^T(t) X e(t)) : S^{k+1} → R^{2k+1},   e(t) = (1, t, t^2, ..., t^k)^T   (C)
+
First note that the fact that P + ≡ P2k (R) is an affine image of the semidefinite cone indeed
+
implies the SD-representability of P , see the “calculus” of conic representations in Lecture
2. Thus, all we need is to show that P + is exactly the same as the image, let it be called P ,
of Sk+1
+ under the mapping (C).
(1) The fact that P is contained in P^+ is immediate. Indeed, let X be a (k+1) × (k+1) positive semidefinite matrix. Then X is a sum of dyadic matrices:

X = Σ_{i=1}^{k+1} p^i (p^i)^T,   p^i = (p^i_0, p^i_1, ..., p^i_k)^T ∈ R^{k+1}

(why?). But then

e^T(t) X e(t) = Σ_{i=1}^{k+1} e^T(t) p^i [p^i]^T e(t) = Σ_{i=1}^{k+1} ( Σ_{j=0}^{k} p^i_j t^j )^2

is the sum of squares of other polynomials and therefore is nonnegative on the axis. Thus, the image of X under the mapping (C) belongs to P^+.
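The argument in (1) is easy to check numerically. A minimal sketch (with hypothetical coefficient vectors p, q; the helper functions are ours): for X = pp^T + qq^T, the coefficients of e^T(t) X e(t), which are exactly the antidiagonal sums of X, coincide with the coefficients of p(t)^2 + q(t)^2:

```python
def poly_mul(a, b):
    """Coefficients of the product of two polynomials (lowest degree first)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def coef_of_quadratic_form(X):
    """Coefficients of e^T(t) X e(t): the i-th one is the sum of X[j][l] over j + l = i."""
    k1 = len(X)
    out = [0.0] * (2 * k1 - 1)
    for j in range(k1):
        for l in range(k1):
            out[j + l] += X[j][l]
    return out

p = [1.0, -2.0, 0.5]            # p(t) = 1 - 2t + 0.5 t^2
q = [0.0, 1.0, 3.0]             # q(t) = t + 3 t^2
X = [[p[j] * p[l] + q[j] * q[l] for l in range(3)] for j in range(3)]  # X = pp^T + qq^T, PSD

lhs = coef_of_quadratic_form(X)
rhs = [u + v for u, v in zip(poly_mul(p, p), poly_mul(q, q))]  # Coef(p^2 + q^2)
assert all(abs(u - v) < 1e-12 for u, v in zip(lhs, rhs))
```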
Note that reversing our reasoning, we get the following result:
6) It is clear why we have restricted the degree to be even: a polynomial of an odd degree cannot be nonnegative on the entire axis!
3.2. WHAT CAN BE EXPRESSED VIA LMI’S? 179

(!) If a polynomial p(t) of degree ≤ 2k can be represented as a sum of squares of other polynomials, then the vector Coef(p) of the coefficients of p belongs to the image of S^{k+1}_+ under the mapping (C).

With (!), the remaining part of the proof – the demonstration that the image of S^{k+1}_+ contains P^+ – is readily given by the following well-known algebraic fact:

(!!) A polynomial is nonnegative on the axis if and only if it is a sum of squares of polynomials.
The proof of (!!) is so nice that we cannot resist the temptation to present it here. The
“if” part is evident. To prove the “only if” one, assume that p(t) is nonnegative on the
axis, and let the degree of p (it must be even) be 2k. Now let us look at the roots of p.
The real roots λ1 , ..., λr must be of even multiplicities 2m1 , 2m2 , ...2mr each (otherwise
p would alter its sign in a neighbourhood of a root, which contradicts the nonnegativity).
The complex roots of p can be arranged in conjugate pairs (µ1 , µ∗1 ), (µ2 , µ∗2 ), ..., (µs , µ∗s ),
and the factor of p

(t − μ_i)(t − μ*_i) = (t − ℜμ_i)^2 + (ℑμ_i)^2
corresponding to such a pair is a sum of two squares. Finally, the leading coefficient of p is positive. Consequently,

p(t) = ω^2 [(t − λ_1)^2]^{m_1} ··· [(t − λ_r)^2]^{m_r} [(t − μ_1)(t − μ*_1)] ··· [(t − μ_s)(t − μ*_s)]

is a product of sums of squares. But such a product is itself a sum of squares (open the parentheses)!
In fact we can say more: a nonnegative polynomial p is a sum of just two
squares! To see this, note that, as we have seen, p is a product of sums of two
squares and take into account the following fact (Liouville):
The product of sums of two squares is again a sum of two squares:

(a2 + b2 )(c2 + d2 ) = (ac − bd)2 + (ad + bc)2

(cf. with: “the modulus of a product of two complex numbers is the product
of their moduli”).
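The Liouville identity itself is a one-line computation that can be checked directly on a few arbitrary sample values:

```python
# Liouville's identity: (a^2 + b^2)(c^2 + d^2) = (ac - bd)^2 + (ad + bc)^2,
# i.e., the "modulus of a product of complex numbers" fact in disguise.
for (a, b, c, d) in [(1.0, 2.0, 3.0, 4.0), (0.5, -1.0, 2.0, 7.0), (3.0, 0.0, 0.0, 5.0)]:
    lhs = (a * a + b * b) * (c * c + d * d)
    rhs = (a * c - b * d) ** 2 + (a * d + b * c) ** 2
    assert abs(lhs - rhs) < 1e-9
```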

Equipped with the SDR of the set P^+_{2k}(R) of polynomials nonnegative on the entire axis, we can immediately obtain SDRs for the polynomials nonnegative on a given ray/segment:

21b. Polynomials nonnegative on a ray/segment.


1) The set P^+_k(R_+) of (coefficients of) polynomials of degree ≤ k which are nonnegative on the nonnegative ray, is SDr.
Indeed, this set is the inverse image of the SDr set P^+_{2k}(R) under the linear mapping of the spaces of (coefficients of) polynomials given by the mapping

p(t) ↦ p^+(t) ≡ p(t^2)

(recall that the inverse image of an SDr set is SDr).

2) The set P^+_k([0, 1]) of (coefficients of) polynomials of degree ≤ k which are nonnegative on the segment [0, 1], is SDr.
Indeed, a polynomial p(t) of degree ≤ k is nonnegative on [0, 1] if and only if the rational function

g(t) = p( t^2 / (1 + t^2) )

is nonnegative on the entire axis, or, which is the same, if and only if the polynomial

p^+(t) = (1 + t^2)^k g(t)

of degree ≤ 2k is nonnegative on the entire axis. The coefficients of p^+ depend linearly on the coefficients of p, and we conclude that P^+_k([0, 1]) is the inverse image of the SDr set P^+_{2k}(R) under certain linear mapping.
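The linear mapping behind 2) can be made concrete. A sketch (hypothetical helper names and sample coefficients): we build Coef(p^+) from Coef(p) via p^+(t) = Σ_i p_i t^{2i} (1 + t^2)^{k−i}, which is visibly linear in Coef(p), and check the identity p^+(t) = (1 + t^2)^k p(t^2/(1 + t^2)) at a few points:

```python
def poly_mul(a, b):
    """Coefficients of a product of polynomials (lowest degree first)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_eval(a, t):
    return sum(ci * t ** i for i, ci in enumerate(a))

def plus_coefs(p):
    """Coef(p+) for p+(t) = sum_i p_i * t^(2i) * (1 + t^2)^(k - i), k = len(p) - 1."""
    k = len(p) - 1
    out = [0.0] * (2 * k + 1)
    for i, pi in enumerate(p):
        term = [pi]
        for _ in range(k - i):
            term = poly_mul(term, [1.0, 0.0, 1.0])   # multiply by (1 + t^2)
        term = [0.0] * (2 * i) + term                # multiply by t^(2i)
        out = [x + y for x, y in zip(out, term)]
    return out

p = [0.1, 1.0, -0.5]       # hypothetical p(t) = 0.1 + t - 0.5 t^2, nonnegative on [0, 1]
pp = plus_coefs(p)
k = len(p) - 1
for t in [-2.0, -0.3, 0.0, 0.7, 1.5]:
    s = t * t / (1.0 + t * t)
    assert abs(poly_eval(pp, t) - (1.0 + t * t) ** k * poly_eval(p, s)) < 1e-9
```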

Our last example in this series deals with trigonometric polynomials

p(φ) = a_0 + Σ_{ℓ=1}^{k} [ a_ℓ cos(ℓφ) + b_ℓ sin(ℓφ) ].

Identifying such a polynomial with its vector of coefficients Coef(p) ∈ R^{2k+1}, we may ask how to express the set S^+_k(Δ) of those trigonometric polynomials of degree ≤ k which are nonnegative on a segment Δ ⊂ [0, 2π].

21c. Trigonometric polynomials nonnegative on a segment. The set S^+_k(Δ) is SDr.
Indeed, sin(ℓφ) and cos(ℓφ) are polynomials of sin(φ) and cos(φ), and the latter functions, in turn, are rational functions of ζ = tan(φ/2):

cos(φ) = (1 − ζ^2)/(1 + ζ^2),   sin(φ) = 2ζ/(1 + ζ^2)   [ζ = tan(φ/2)].

Consequently, a trigonometric polynomial p(φ) of degree ≤ k can be represented as a rational function of ζ = tan(φ/2):

p(φ) = p^+(ζ)/(1 + ζ^2)^k   [ζ = tan(φ/2)],

where the coefficients of the algebraic polynomial p^+ of degree ≤ 2k are linear functions of the coefficients of p. Now, the requirement for p to be nonnegative on a given segment Δ ⊂ [0, 2π] is equivalent to the requirement for p^+ to be nonnegative on a “segment” Δ^+ (which, depending on Δ, may be either the usual finite segment, or a ray, or the entire axis). We see that S^+_k(Δ) is the inverse image, under certain linear mapping, of the SDr set P^+_{2k}(Δ^+), so that S^+_k(Δ) itself is SDr.

Finally, we may ask which part of the above results can be saved when we pass from nonnegative polynomials of one variable to those of two or more variables. Unfortunately, not too much. E.g., among nonnegative polynomials of a given degree with r > 1 variables, exactly those which are sums of squares can be obtained as the image of a positive semidefinite cone under certain linear mapping similar to (D). The difficulty is that in the multi-dimensional case the nonnegativity of a polynomial is not equivalent to its representability as a sum of squares; thus, the positive semidefinite cone gives only part of the polynomials we are interested in describing.

3.3 Applications of Semidefinite Programming in Engineering


Due to its tremendous expressive abilities, Semidefinite Programming allows one to pose and process numerous highly nonlinear convex optimization programs arising in applications, in particular, in Engineering. We are about to outline briefly just a few instructive examples.

3.3.1 Dynamic Stability in Mechanics


“Free motions” of the so-called linearly elastic mechanical systems, i.e., their behaviour when no external forces are applied, are governed by systems of differential equations of the type

M (d²/dt²) x(t) = −A x(t),   (N)

where x(t) ∈ R^n is the state vector of the system at time t, M is the (generalized) “mass matrix”, and A is the “stiffness” matrix of the system. Basically, (N) is the Newton law for a system with the potential energy ½ x^T A x.
As a simple example, consider a system of k points of masses μ_1, ..., μ_k linked by springs with given elasticity coefficients; here x is the vector of the displacements x_i ∈ R^d of the points from their equilibrium positions e_i (d = 1/2/3 is the dimension of the model). The Newton equations become

μ_i (d²/dt²) x_i(t) = − Σ_{j≠i} ν_{ij} (e_i − e_j)(e_i − e_j)^T (x_i − x_j),   i = 1, ..., k,

with ν_{ij} given by

ν_{ij} = κ_{ij} / ‖e_i − e_j‖³₂,

where κ_{ij} > 0 are the elasticity coefficients of the springs. The resulting system is of the form (N) with a diagonal matrix M and a positive semidefinite symmetric matrix A. The well-known simplest system of this type is a pendulum (a single point capable of sliding along a given axis and linked by a spring to a fixed point on the axis):

[Figure: a pendulum of length l with displacement x; here (d²/dt²) x(t) = −ν x(t), ν = κ/l.]

Another example is given by trusses – mechanical constructions, like a railway bridge or the Eiffel Tower, built of thin elastic bars linked to each other.

Note that in the above examples both the mass matrix M and the stiffness matrix A are
symmetric positive semidefinite; in “nondegenerate” cases they are even positive definite, and
this is what we assume from now on. Under this assumption, we can pass in (N) from the
variables x(t) to the variables y(t) = M 1/2 x(t); in these variables the system becomes

(d²/dt²) y(t) = −Â y(t),   Â = M^{−1/2} A M^{−1/2}.   (N′)

It is well known that the space of solutions of system (N′) (where  is symmetric positive definite) is spanned by fundamental (perhaps complex-valued) solutions of the form exp{μt} f. A nontrivial (with f ≠ 0) function of this type is a solution to (N′) if and only if

(µ2 I + Â)f = 0,

so that the allowed values of μ² are the minus eigenvalues of the matrix Â, and the f’s are the corresponding eigenvectors of Â. Since the matrix  is symmetric positive definite, the only allowed values of μ are purely imaginary, with the imaginary parts ±√(λ_j(Â)). Recalling that the eigenvalues/eigenvectors of  are exactly the eigenvalues/eigenvectors of the pencil [M, A], we come to the following result:

(!) In the case of positive definite symmetric M, A, the solutions to (N) – the “free motions” of the corresponding mechanical system S – are of the form

x(t) = Σ_{j=1}^{n} [ a_j cos(ω_j t) + b_j sin(ω_j t) ] e_j,

where a_j, b_j are free real parameters, e_j are the eigenvectors of the pencil [M, A]:

(λ_j M − A) e_j = 0,

and ω_j = √λ_j. Thus, the “free motions” of the system S are mixtures of harmonic oscillations along the eigenvectors of the pencil [M, A], and the frequencies of the oscillations (“the eigenfrequencies of the system”) are the square roots of the corresponding eigenvalues of the pencil.

[Figure: “Nontrivial” modes of a spring triangle (3 unit masses linked by springs), with frequencies ω = 1.274, ω = 0.957, ω = 0.699. Shown are 3 “eigenmotions” (modes) of the spring triangle with nonzero frequencies; at each picture, the dashed lines depict two instant positions of the oscillating triangle. There are 3 more “eigenmotions” with zero frequency, corresponding to shifts and rotation of the triangle.]

From the engineering viewpoint, the “dynamic behaviour” of mechanical constructions such as buildings, electricity masts, bridges, etc., is the better, the larger the eigenfrequencies of the system are^{7)}. This is why a typical design requirement in mechanical engineering is a lower bound

λ_min(A : M) ≥ λ*   [λ* > 0]   (3.3.1)
on the smallest eigenvalue λmin (A : M ) of the pencil [M, A] comprised of the mass and the
stiffness matrices of the would-be system. In the case of positive definite symmetric mass
matrices (3.3.1) is equivalent to the matrix inequality

A − λ∗ M  0. (3.3.2)
7) Think about a building and an earthquake, or about sea waves and a lighthouse: in this case the external load acting on the system is time-varying and can be represented as a sum of harmonic oscillations of different (and low) frequencies; if some of these frequencies are close to the eigenfrequencies of the system, the system can be crushed by resonance. In order to avoid this risk, one is interested in moving the eigenfrequencies of the system away from 0 as far as possible.

If M and A are affine functions of the design variables (as is the case in, e.g., Truss Design), the
matrix inequality (3.3.2) is a linear matrix inequality on the design variables, and therefore it
can be processed via the machinery of semidefinite programming. Moreover, in the cases when
A is affine in the design variables, and M is constant, (3.3.2) is an LMI in the design variables
and λ∗ , and we may play with λ∗ , e.g., solve a problem of the type “given the mass matrix of
the system to be designed and a number of (SDr) constraints on the design variables, build a
system with the minimum eigenfrequency as large as possible”, which is a semidefinite program,
provided that the stiffness matrix is affine in the design variables.
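For a small pencil the equivalence of (3.3.1) and (3.3.2) can be checked by hand. A sketch with hypothetical 2×2 data (the generalized eigenvalues of the pencil are the roots of det(A − λM) = 0; the helper functions are ours):

```python
import math

def pencil_eigs(A, M):
    """Roots of det(A - lam*M) = 0 for symmetric 2x2 A, M with det(M) != 0."""
    (a11, a12), (_, a22) = A
    (m11, m12), (_, m22) = M
    # det(A - lam*M) = c2*lam^2 + c1*lam + c0
    c2 = m11 * m22 - m12 * m12
    c1 = -(a11 * m22 + a22 * m11 - 2 * a12 * m12)
    c0 = a11 * a22 - a12 * a12
    disc = math.sqrt(c1 * c1 - 4 * c2 * c0)
    return sorted([(-c1 - disc) / (2 * c2), (-c1 + disc) / (2 * c2)])

def is_psd2(S, tol=1e-12):
    (a, b), (_, d) = S
    return a >= -tol and d >= -tol and a * d - b * b >= -tol

M = [[2.0, 0.0], [0.0, 1.0]]    # hypothetical mass matrix, positive definite
A = [[3.0, 1.0], [1.0, 2.0]]    # hypothetical stiffness matrix, positive definite
lam_min = pencil_eigs(A, M)[0]  # smallest eigenvalue of the pencil [M, A]

for lam_star in (0.5 * lam_min, lam_min, 1.5 * lam_min):
    S = [[A[i][j] - lam_star * M[i][j] for j in range(2)] for i in range(2)]
    # (3.3.1) holds for lam_star exactly when A - lam_star*M is PSD, i.e., (3.3.2)
    assert is_psd2(S) == (lam_star <= lam_min + 1e-9)
```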

3.3.2 Truss Topology Design


A truss is a linearly elastic mechanical construction, like an electric mast, a railroad bridge, or the Eiffel Tower, comprised of thin elastic bars linked with each other at nodes:

[Figure: A console]

When a truss is subject to external load (collection of forces acting at the nodes), it deforms
until the reaction forces caused by elongations/contractions of bars compensate the external
force:

[Figure: Loaded console]
At the equilibrium, the deformed truss stores certain potential energy – the compliance of the truss w.r.t. the load. Compliance is a natural measure of the rigidity of the truss w.r.t. the load – the less the compliance, the better.
Mathematically:

• Displacements of a truss are identified with long vectors comprised of “physical” 2D/3D
displacements of m nodes allowed to be utilized in the construction; these displacements
form a linear space V = RM = V1 × V2 × ... × Vm , where M is the total number of degrees
of freedom of the nodal set, and Vi is the linear subspace of Rd (d = 2 for planar trusses
and d = 3 for spatial ones) comprised of displacements allowed for node i by its supports

(“boundary conditions”). For example, the above console is planar (d = 2), its leftmost nodes are fixed (V_i = {0}) and all other nodes are free (V_i = R^d).
• An external load acting at a truss is identified with a long vector f ∈ V comprised of
“physical” 2D/3D forces acting at the nodes
• Assuming deformation small, the reaction forces caused by the deformation form vector
A(t)v ∈ V , where v ∈ V is the displacement of the nodal set under the deformation, and
A(t) is the stiffness matrix of the truss. Mechanics says that
A(t) = Σ_{j=1}^{N} t_j B_j,

where t_j ≥ 0 is the volume of bar j, and B_j ⪰ 0 is a matrix (in fact, of rank 1) given by the geometry of the nodal set (specifically, positions of the nodes linked by the j-th bar).
• Equilibrium under load f exists if and only if f belongs to the image space of the positive
semidefinite matrix A(t), the corresponding displacement v of the nodal set solves the
equation
A(t)v = f
and the compliance is

Compl_f(t) = max_{u∈V} [ f^T u − ½ u^T A(t) u ] = ½ f^T v = ½ v^T A(t) v.
If the equilibrium equation has no solutions, the compliance w.r.t. the load is +∞, meaning
that the truss collapses under the load.
In the multi-load Truss Topology Design (TTD) problem one is given
• Ground Structure:
— the set of m tentative 2D/3D nodes along with boundary conditions specifying the spaces V_i, i ≤ m, of allowed displacements of these nodes;
— the set of tentative bars: N pairs of tentative nodes which are allowed to be linked by bars

• a collection F of K loading scenarios fk ∈ V , 1 ≤ k ≤ K


and seeks the truss of a given total weight with the minimum possible worst-case compliance over the loads from F.

[Figure: 9×9 nodal grid and load; N = 2,039 tentative bars.]


It is easily seen (check it!) that

Compl_f(t) = ½ min { τ :  [ τ    f^T  ]
                          [ f    A(t) ]  ⪰ 0 },

so that the multi-load TTD problem is nothing but the semidefinite program

min_{t,τ} { ½ τ :  t ≥ 0,  Σ_i t_i ≤ w,  [ τ     f_k^T       ]
                                         [ f_k   Σ_j t_j B_j ]  ⪰ 0,  1 ≤ k ≤ K }.
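The Schur-complement reformulation of compliance can be illustrated on a tiny example. In the sketch below (hypothetical stiffness matrix and load; the PSD test via principal minors is ours), with A positive definite, the block matrix [[τ, f^T], [f, A]] is PSD exactly when τ ≥ f^T A^{−1} f:

```python
from itertools import combinations

def minor(M, idx):
    """Determinant of the principal submatrix of M on index set idx (sizes 1..3)."""
    sub = [[M[i][j] for j in idx] for i in idx]
    n = len(sub)
    if n == 1:
        return sub[0][0]
    if n == 2:
        return sub[0][0] * sub[1][1] - sub[0][1] * sub[1][0]
    a, b, c = sub[0]
    d, e, f_ = sub[1]
    g, h, i = sub[2]
    return a * (e * i - f_ * h) - b * (d * i - f_ * g) + c * (d * h - e * g)

def is_psd(M, tol=1e-9):
    """A symmetric matrix is PSD iff all its principal minors are nonnegative."""
    n = len(M)
    return all(minor(M, idx) >= -tol
               for r in range(1, n + 1)
               for idx in combinations(range(n), r))

A = [[4.0, 1.0], [1.0, 3.0]]      # hypothetical A(t), positive definite
f = [1.0, 2.0]                    # hypothetical load
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
Ainv = [[A[1][1] / det, -A[0][1] / det], [-A[1][0] / det, A[0][0] / det]]
q = sum(f[i] * Ainv[i][j] * f[j] for i in range(2) for j in range(2))  # f^T A^{-1} f

def block(tau):
    """The block matrix [[tau, f^T], [f, A]]."""
    return [[tau, f[0], f[1]],
            [f[0], A[0][0], A[0][1]],
            [f[1], A[1][0], A[1][1]]]

assert is_psd(block(q + 1e-6))        # tau just above f^T A^{-1} f: PSD
assert not is_psd(block(q - 1e-3))    # tau just below: not PSD
```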

When solving a TTD problem, one starts with a dense nodal grid and allows for all pair connections of nodes by tentative bars. At the optimal solution, most of these tentative bars get zero volumes, and the design reveals optimal topology, not merely optimal sizing! This is where the “Topology Design” comes from.

[Figure: Optimal console (single-load design)]

3.3.3 Design of chips and Boyd’s time constant


Consider an RC-electric circuit, i.e., a circuit comprised of three types of elements: (1) resistors;
(2) capacitors; (3) resistors in a series combination with outer sources of voltage:
[Figure: A simple circuit with nodes O, A, B.
Element OA: outer supply of voltage V_OA and resistor with conductance σ_OA
Element AO: capacitor with capacitance C_AO
Element AB: resistor with conductance σ_AB
Element BO: capacitor with capacitance C_BO]

E.g., a chip is, electrically, a complicated circuit comprised of elements of the indicated type.
When designing chips, the following characteristics are of primary importance:

• Speed. In a chip, the outer voltages are switching at certain frequency from one constant value to another. Every switch is accompanied by a “transition period”; during this period, the potentials/currents in the elements are moving from their previous values (corresponding to the static steady state for the “old” outer voltages) to the values corresponding to the new static steady state. Since there are elements with “inertia” – capacitors – this transition period takes some time^{8)}. In order to ensure stable performance of the chip, the transition period should be much less than the time between subsequent switches in the outer voltages. Thus, the duration of the transition period is responsible for the speed at which the chip can perform.
• Dissipated heat. Resistors in the chip dissipate heat which should be eliminated, otherwise
the chip will not function. This requirement is very serious for modern “high-density”
chips. Thus, a characteristic of vital importance is the dissipated heat power.
The two objectives – high speed (i.e., a small transition period) and small dissipated heat – usually are conflicting. As a result, a chip designer faces tradeoff problems like “how to get a chip with a given speed and with the minimal dissipated heat”. It turns out that different optimization problems related to the tradeoff between the speed and the dissipated heat in an RC circuit belong to the “semidefinite universe”. We restrict ourselves to building an SDR for the speed.
Simple considerations, based on the Kirchhoff laws, demonstrate that the transition period in an RC circuit is governed by a linear system of differential equations as follows:

C (d/dt) w(t) = −S w(t) + R v.   (3.3.3)

Here
• The state vector w(·) is comprised of the potentials at all but one of the nodes of the circuit (the potential at the remaining node – “the ground” – is normalized to be identically zero);
• Matrix C ⪰ 0 is readily given by the topology of the circuit and the capacitances of the capacitors and is linear in these capacitances. Similarly, matrix S ⪰ 0 is readily given by the topology of the circuit and the conductances of the resistors and is linear in these conductances. Matrix R is given solely by the topology of the circuit;
• v is the vector of outer voltages; recall that this vector is set to certain constant value at
the beginning of the transition period.
As we have already mentioned, the matrices C and S, due to their origin, are positive semidefinite; in nondegenerate cases, they are even positive definite, which we assume from now on.
Let ŵ be the steady state of (3.3.3), so that S ŵ = R v. The difference δ(t) = w(t) − ŵ is then a solution of the homogeneous differential equation

C (d/dt) δ(t) = −S δ(t).   (3.3.4)
Setting γ(t) = C^{1/2} δ(t) (cf. Section 3.3.1), we get

(d/dt) γ(t) = −(C^{−1/2} S C^{−1/2}) γ(t).   (3.3.5)
8) From the purely mathematical viewpoint, the transition period takes infinite time – the currents/voltages approach the new steady state asymptotically, but never actually reach it. From the engineering viewpoint, however, we may think that the transition period is over when the currents/voltages become close enough to the new static steady state.

Since C and S are positive definite, all eigenvalues λi of the symmetric matrix C −1/2 SC −1/2 are
positive. It is clear that the space of solutions to (3.3.5) is spanned by the “eigenmotions”

γi (t) = exp{−λi t}ei ,

where {e_i} is an orthonormal eigenbasis of the matrix C^{−1/2} S C^{−1/2}. We see that all solutions to (3.3.5) (and thus to (3.3.4) as well) converge to 0 exponentially fast, or, which is the same, the state w(t) of the circuit approaches the steady state ŵ exponentially fast. The “time scale” of this transition is, essentially, defined by the quantity λ_min = min_i λ_i; a typical “decay rate” of a solution to (3.3.5) is nothing but T = λ_min^{−1}. S. Boyd has proposed to use T to quantify the length of the transition period, and to use its reciprocal – i.e., the quantity λ_min itself – as the quantitative measure of the speed. Technically, the main advantage of this definition is that the speed turns out to be the minimum eigenvalue of the matrix C^{−1/2} S C^{−1/2}, i.e., the minimum eigenvalue of the matrix pencil [C : S]. Thus, the speed in Boyd’s definition turns out to be efficiently computable (which is not the case for other, more sophisticated, “time constants” used by engineers). Even more important, with Boyd’s approach a typical design specification “the speed of a circuit should be at least such and such” is modelled by the matrix inequality

S ⪰ λ* C.   (3.3.6)
As was already mentioned, S and C are linear in the capacitances of the capacitors and the conductances of the resistors; in typical circuit design problems, the latter quantities are affine functions of the design parameters, and (3.3.6) becomes an LMI in the design parameters.
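Boyd's time constant is easy to illustrate in the diagonal case, where C^{−1/2} S C^{−1/2} = diag(s_i/c_i) and everything is explicit. A sketch (the numeric values are hypothetical):

```python
import math

c = [2.0, 1.0, 4.0]                      # hypothetical capacitances (diagonal of C)
s = [1.0, 3.0, 1.0]                      # hypothetical conductances (diagonal of S)
lam_min = min(si / ci for si, ci in zip(s, c))   # min eigenvalue of the pencil [C : S]

t = 2.5
w0 = [1.0, -2.0, 0.5]                    # initial deviation delta(0)
# exact solution of C delta' = -S delta in the diagonal case
w_t = [w * math.exp(-(si / ci) * t) for w, si, ci in zip(w0, s, c)]

norm0 = math.sqrt(sum(w * w for w in w0))
norm_t = math.sqrt(sum(w * w for w in w_t))
# every solution decays at least like exp(-lam_min * t), i.e., with time constant 1/lam_min
assert norm_t <= math.exp(-lam_min * t) * norm0 + 1e-12
```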

3.3.4 Lyapunov stability analysis/synthesis


3.3.4.1 Uncertain dynamical systems
Consider a time-varying uncertain linear dynamical system

(d/dt) x(t) = A(t) x(t),   x(0) = x_0.   (ULS)
Here x(t) ∈ Rn represents the state of the system at time t, and A(t) is a time-varying n × n
matrix. We assume that the system is uncertain in the sense that we have no idea of what x_0 is, and all we know about A(t) is that this matrix, at any time t, belongs to a given uncertainty set
U. Thus, (ULS) represents a wide family of linear dynamic systems rather than a single system;
and it makes sense to call a trajectory of the uncertain linear system (ULS) every function x(t)
which is an “actual trajectory” of a system from the family, i.e., is such that
(d/dt) x(t) = A(t) x(t)
for all t ≥ 0 and certain matrix-valued function A(t) taking all its values in U.
Note that we can model a nonlinear dynamic system

(d/dt) x(t) = f(t, x(t))   [x ∈ R^n]   (NLS)
with a given right hand side f(t, x) and a given equilibrium x(t) ≡ 0 (i.e., f(t, 0) = 0, t ≥ 0) as an uncertain linear system. Indeed, let us define the set U_f as the closed convex hull of the set of n × n matrices { (∂/∂x) f(t, x) | t ≥ 0, x ∈ R^n }. Then for every point x ∈ R^n we have

f(t, x) = f(t, 0) + ∫_0^1 [(∂/∂x) f(t, sx)] x ds = A_x(t) x,
A_x(t) = ∫_0^1 (∂/∂x) f(t, sx) ds ∈ U_f.

We see that every trajectory of the original nonlinear system (NLS) is also a trajectory of the
uncertain linear system (ULS) associated with the uncertainty set U = Uf (this trick is called
“global linearization”). Of course, the set of trajectories of the resulting uncertain linear
system can be much wider than the set of trajectories of (NLS); however, all “good news”
about the uncertain system (like “all trajectories of (ULS) share such and such property”)
are automatically valid for the trajectories of the “nonlinear system of interest” (NLS), and
only “bad news” about (ULS) (“such and such property is not shared by some trajectories
of (ULS)”) may say nothing about the system of interest (NLS).

3.3.4.2 Stability and stability certificates

The basic question about a dynamic system is the one of its stability. For (ULS), this question
sounds as follows:

(?) Is it true that (ULS) is stable, i.e., that

x(t) → 0 as t → ∞

for every trajectory of the system?

A sufficient condition for the stability of (ULS) is the existence of a quadratic Lyapunov function, i.e., a quadratic form L(x) = x^T X x with a symmetric positive definite matrix X such that

(d/dt) L(x(t)) ≤ −α L(x(t))   (3.3.7)

for certain α > 0 and all trajectories of (ULS):

Lemma 3.3.1 [Quadratic Stability Certificate] Assume (ULS) admits a quadratic Lyapunov
function L. Then (ULS) is stable.

Proof. If (3.3.7) is valid with some α > 0 for all trajectories of (ULS), then, by integrating this differential inequality, we get

L(x(t)) ≤ exp{−αt} L(x(0)) → 0 as t → ∞

(indeed, by (3.3.7) one has (d/dt)[exp{αt} L(x(t))] = exp{αt}[α L(x(t)) + (d/dt) L(x(t))] ≤ 0 for all t). Since L(·) is a positive definite quadratic form, L(x(t)) → 0 implies that x(t) → 0. 2
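Lemma 3.3.1 can be illustrated numerically. In the sketch below (hypothetical 2×2 data), X = I₂ is a common quadratic certificate for U = Conv{A1, A2}, since A_i^T + A_i is negative definite for both vertices; a forward-Euler simulation of a switching trajectory then shows L(x) = x^T x decreasing to 0:

```python
A1 = [[-1.0, 2.0], [0.0, -3.0]]   # A1 + A1^T is negative definite
A2 = [[-2.0, 0.0], [1.0, -1.0]]   # A2 + A2^T is negative definite

def step(A, x, h):
    """One forward-Euler step of x' = A x."""
    return [x[0] + h * (A[0][0] * x[0] + A[0][1] * x[1]),
            x[1] + h * (A[1][0] * x[0] + A[1][1] * x[1])]

x = [1.0, 1.0]
h = 0.001
L_prev = x[0] ** 2 + x[1] ** 2
for k in range(5000):
    A = A1 if (k // 500) % 2 == 0 else A2   # switch A(t) every 0.5 time units
    x = step(A, x, h)
    L = x[0] ** 2 + x[1] ** 2
    assert L <= L_prev                       # L(x) never increases along the trajectory
    L_prev = L
assert L_prev < 1e-2                         # and x(t) tends to 0
```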

Of course, the statement of Lemma 3.3.1 also holds for non-quadratic Lyapunov functions:
all we need is (3.3.7) plus the assumption that L(x) is smooth, nonnegative and is bounded
away from 0 outside every neighbourhood of the origin. The advantage of a quadratic Lyapunov
function is that we more or less know how to find such a function, if it exists:

Proposition 3.3.1 [Existence of Quadratic Stability Certificate] Let U be the uncertainty set associated with uncertain linear system (ULS). The system admits a quadratic Lyapunov function if and only if the optimal value of the “semi-infinite^{9)} semidefinite program”

minimize  s
s.t.      s I_n − A^T X − X A ⪰ 0   ∀A ∈ U        (Ly)
          X ⪰ I_n

with the design variables s ∈ R and X ∈ S^n, is negative. Moreover, every feasible solution to the problem with negative value of the objective provides a quadratic Lyapunov stability certificate for (ULS).

We shall refer to a positive definite matrix X ⪰ I_n which can be extended, by a properly chosen s < 0, to a feasible solution of (Ly), as a Lyapunov stability certificate for (ULS), the uncertainty set being U.
Proof of Proposition 3.3.1. The derivative (d/dt)[x^T(t) X x(t)] of the quadratic function x^T X x along a trajectory of (ULS) is equal to

[(d/dt) x(t)]^T X x(t) + x^T(t) X [(d/dt) x(t)] = x^T(t) [A^T(t) X + X A(t)] x(t).

If x^T X x is a Lyapunov function, then the resulting quantity must be at most −α x^T(t) X x(t), i.e., we should have

x^T(t) [ −α X − A^T(t) X − X A(t) ] x(t) ≥ 0

for every possible value of A(t) at any time t and for every possible value x(t) of a trajectory of the system at this time. Since possible values of x(t) fill the entire R^n and possible values of A(t) fill the entire U, we conclude that

−α X − A^T X − X A ⪰ 0   ∀A ∈ U.

By definition of a quadratic Lyapunov function, X ≻ 0 and α > 0; by normalization (dividing both X and α by the smallest eigenvalue of X), we get a pair ŝ > 0, X̂ ⪰ I_n such that

−ŝ X̂ − A^T X̂ − X̂ A ⪰ 0   ∀A ∈ U.

Since X̂ ⪰ I_n, we conclude that

−ŝ I_n − A^T X̂ − X̂ A ⪰ −ŝ X̂ − A^T X̂ − X̂ A ⪰ 0   ∀A ∈ U;

thus, (s = −ŝ, X̂) is a feasible solution to (Ly) with negative value of the objective. We have demonstrated that if (ULS) admits a quadratic Lyapunov function, then (Ly) has a feasible solution with negative value of the objective. Reversing the reasoning, we can verify the inverse implication. 2

9) I.e., with infinitely many LMI constraints.

Lyapunov stability analysis. According to Proposition 3.3.1, the existence of a Lyapunov


stability certificate is a sufficient, but, in general, not a necessary stability condition for (ULS).
When the condition is not satisfied (i.e., if the optimal value in (Ly) is nonnegative), then all
we can say is that the stability of (ULS) cannot be certified by a quadratic Lyapunov function,
although (ULS) still may be stable^{10)}. In this sense, the stability analysis based on quadratic
Lyapunov functions is conservative. This drawback, however, is in a sense compensated by the
fact that this kind of stability analysis is “implementable”: in many cases we can efficiently solve
(Ly), thus getting a quadratic “stability certificate”, provided that it exists, in a constructive
way. Let us look at two such cases.

Polytopic uncertainty set. The first “tractable case” of (Ly) is when U is a polytope
given as a convex hull of finitely many points:

U = Conv{A1 , ..., AN }.

In this case (Ly) is equivalent to the semidefinite program

min_{s,X} { s :  s I_n − A_i^T X − X A_i ⪰ 0, i = 1, ..., N;  X ⪰ I_n }   (3.3.8)

(why?).
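The "why?" is just linearity: the map A ↦ sI_n − A^T X − XA is affine in A, so validity of the LMI at the vertices of the polytope propagates to all convex combinations. A numeric sketch (hypothetical 2×2 data; the helper functions are ours):

```python
def lmi_matrix(A, X, s):
    """s*I - A^T X - X A for 2x2 A, X."""
    n = 2
    At = [[A[j][i] for j in range(n)] for i in range(n)]
    def mul(P, Q):
        return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]
    AtX, XA = mul(At, X), mul(X, A)
    return [[(s if i == j else 0.0) - AtX[i][j] - XA[i][j] for j in range(n)]
            for i in range(n)]

def is_psd2(S, tol=1e-9):
    return S[0][0] >= -tol and S[1][1] >= -tol and \
           S[0][0] * S[1][1] - S[0][1] * S[1][0] >= -tol

A1 = [[-1.0, 2.0], [0.0, -3.0]]
A2 = [[-2.0, 0.0], [1.0, -1.0]]
X = [[1.0, 0.0], [0.0, 1.0]]     # X = I with s < 0: a stability certificate here
s = -1.0

# LMI holds at both vertices of U = Conv{A1, A2} ...
assert is_psd2(lmi_matrix(A1, X, s)) and is_psd2(lmi_matrix(A2, X, s))
# ... hence at any convex combination, by affinity of A -> s*I - A^T X - X A
lam = 0.3
A = [[lam * A1[i][j] + (1 - lam) * A2[i][j] for j in range(2)] for i in range(2)]
assert is_psd2(lmi_matrix(A, X, s))
```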
The assumption that U is a polytope given as a convex hull of a finite set is crucial for a
possibility to get a “computationally tractable” equivalent reformulation of (Ly). If U is, say,
a polytope given by a list of linear inequalities (e.g., all we know about the entries of A(t) is
that they reside in certain intervals; this case is called “interval uncertainty”), (Ly) may become
as hard as a problem can be: it may happen that just to check whether a given pair (s, X) is
feasible for (Ly) is already a “computationally intractable” problem. The same difficulties may
occur when U is a general-type ellipsoid in the space of n × n matrices. There exists, however,
a specific type of “uncertainty ellipsoids” U for which (Ly) is “easy”. Let us look at this case.

Norm-bounded perturbations. In numerous applications the n × n matrices A forming


the uncertainty set U are obtained from a fixed “nominal” matrix A∗ by adding perturbations
of the form B∆C, where B ∈ Mn,k and C ∈ Ml,n are given rectangular matrices and ∆ ∈ Mk,l
is “the perturbation” varying in a “simple” set D:

U = { A = A_∗ + B∆C | ∆ ∈ D ⊂ M^{k,l} }   [B ∈ M^{n,k}, 0 ≠ C ∈ M^{l,n}]   (3.3.9)

As an instructive example, consider a controlled linear time-invariant dynamic system

(d/dt) x(t) = A x(t) + B u(t)
y(t) = C x(t)                          (3.3.10)

(x is the state, u is the control and y is the output we can observe) “closed” by a feedback

u(t) = K y(t).
10) The only case when the existence of a quadratic Lyapunov function is a criterion (i.e., a necessary and sufficient condition) for stability is the simplest case of a certain time-invariant linear system (d/dt) x(t) = A x(t) (U = {A}). This is the case which led Lyapunov to the general concept of what is now called “a Lyapunov function” and what is the basic approach to establishing convergence of different time-dependent processes to their equilibria. Note also that in the case of a time-invariant linear system there exists a straightforward algebraic stability criterion – all eigenvalues of A should have negative real parts. The advantage of the Lyapunov approach is that it can be extended to more general situations, which is not the case for the eigenvalue criterion.
3.3. APPLICATIONS OF SEMIDEFINITE PROGRAMMING IN ENGINEERING 191

[Figure: Open loop (left) and closed loop (right) controlled systems. In both, x′(t) = Ax(t) + Bu(t) and y(t) = Cx(t); in the closed loop system the feedback u(t) = Ky(t) closes the loop.]


The resulting “closed loop system” is given by

(d/dt) x(t) = Â x(t),   Â = A + BKC.   (3.3.11)

Now assume that A, B and C are constant and known, but the feedback K is drifting around certain nominal feedback K∗: K = K∗ + ∆. As a result, the matrix  of the closed loop system also drifts around its nominal value A∗ = A + BK∗C, and the perturbations in  are exactly of the form B∆C.
Note that we could get essentially the same kind of drift in  assuming, instead of additive perturbations, multiplicative perturbations C = (I_l + ∆)C∗ in the observer (or multiplicative disturbances in the actuator B).

Now assume that the input perturbations ∆ are of spectral norm |∆| not exceeding a given ρ (norm-bounded perturbations):

D = { ∆ ∈ M^{k,l} : |∆| ≤ ρ }.   (3.3.12)

Proposition 3.3.2 [18] In the case of uncertainty set (3.3.9), (3.3.12) the “semi-infinite”
semidefinite program (Ly) is equivalent to the usual semidefinite program

minimize   α
s.t.
[ αIn − AT∗ X − XA∗ − λC T C       ρXB ]
[           ρB T X                 λIk ]  ⪰ 0                     (3.3.13)
X ⪰ In

in the design variables α, λ, X.


When shrinking the set of perturbations (3.3.12) to the ellipsoid

E = {∆ ∈ Mk,l | ‖∆‖2 ≡ (Σi=1..k Σj=1..l ∆2ij )1/2 ≤ ρ},11)        (3.3.14)

we basically do not vary (Ly): in the case of the uncertainty set (3.3.9), (3.3.14), (Ly) is still equivalent
to (3.3.13).

Proof. It suffices to verify the following general statement:

11)
This indeed is a “shrinkage”: |∆| ≤ ‖∆‖2 for every matrix ∆ (prove it!)
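The inequality behind this footnote, |∆| ≤ ‖∆‖2 , can be spot-checked numerically (a sketch assuming Python with numpy, not part of the original text; the spectral norm |∆| is the largest singular value, while ‖∆‖2 of the notes is the Frobenius norm):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    D = rng.standard_normal((3, 4))
    spec = np.linalg.norm(D, 2)      # spectral norm |Delta|
    frob = np.linalg.norm(D, 'fro')  # Frobenius norm, ||Delta||_2 in the notes
    assert spec <= frob + 1e-12
```

The inequality holds since ‖∆‖2² is the sum of all squared singular values, while |∆|² is just the largest one.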
192 LECTURE 3. SEMIDEFINITE PROGRAMMING

Lemma 3.3.2 [18] Consider the matrix inequality

Y − QT ∆T P T Z T R − RT ZP ∆Q ⪰ 0                                (3.3.15)

where Y is a symmetric n × n matrix, ∆ is a k × l matrix and P , Q, Z, R are rectangular
matrices of appropriate sizes (i.e., q × k, l × n, p × q and p × n, respectively). Given
Y, P, Q, Z, R, with Q ≠ 0 (this is the only nontrivial case), this matrix inequality is
satisfied for all ∆ with |∆| ≤ ρ if and only if it is satisfied for all ∆ with ‖∆‖2 ≤ ρ,
and this is the case if and only if

[ Y − λQT Q      −ρRT ZP ]
[ −ρP T Z T R       λIk  ]  ⪰ 0

for a properly chosen real λ.

The statement of Proposition 3.3.2 is just a particular case of Lemma 3.3.2. For example, in
the case of uncertainty set (3.3.9), (3.3.12) a pair (α, X) is a feasible solution to (Ly) if and only
if X ⪰ In and (3.3.15) is valid for Y = αIn − AT∗ X − XA∗ , P = B, Q = C, Z = X, R = In ;
Lemma 3.3.2 provides us with an LMI reformulation of the latter property, and this LMI is
exactly what we see in the statement of Proposition 3.3.2.
Proof of Lemma. (3.3.15) is valid for all ∆ with |∆| ≤ ρ (let us call this property of (Y, P, Q, Z, R)
“Property 1”) if and only if

2[ξ T RT ZP ]∆[Qξ] ≤ ξ T Y ξ   ∀ξ ∈ Rn  ∀(∆ : |∆| ≤ ρ),

or, which is the same, if and only if

max∆:|∆|≤ρ 2[P T Z T Rξ]T ∆[Qξ] ≤ ξ T Y ξ   ∀ξ ∈ Rn .             (Property 2)

The maximum over ∆, |∆| ≤ ρ, of the quantity η T ∆ζ clearly is equal to ρ times the product of the
Euclidean norms of the vectors η and ζ (why?). Thus, Property 2 is equivalent to

ξ T Y ξ − 2ρ‖Qξ‖2 ‖P T Z T Rξ‖2 ≥ 0   ∀ξ ∈ Rn .                   (Property 3)
Here is the trick: Property 3 is clearly equivalent to the following

Property 4: Every pair ζ = (ξ, η) ∈ Rn × Rk which satisfies the quadratic inequality

ξ T QT Qξ − η T η ≥ 0                                             (I)

satisfies also the quadratic inequality

ξ T Y ξ − 2ρη T P T Z T Rξ ≥ 0.                                   (II)

Indeed, for a fixed ξ the minimum over η satisfying (I) of the left hand side in (II) is
nothing but the left hand side in (Property 3).

It remains to use the S-Lemma:

S-Lemma. Let A, B be symmetric n × n matrices, and assume that the quadratic inequality

xT Ax ≥ 0                                                         (A)

is strictly feasible: there exists x̄ such that x̄T Ax̄ > 0. Then the quadratic inequality

xT Bx ≥ 0                                                         (B)

is a consequence of (A) if and only if it is a linear consequence of (A), i.e., if and only if there
exists a nonnegative λ such that

B ⪰ λA.

(for a proof, see Section 3.5). Property 4 says that the quadratic inequality (II) with variables ξ, η is a
consequence of (I); by the S-Lemma (recall that Q ≠ 0, so that (I) is strictly feasible!) this is equivalent
to the existence of a nonnegative λ such that

[ Y           −ρRT ZP ]       [ QT Q    0   ]
[ −ρP T Z T R     0   ]  − λ  [   0    −Ik  ]  ⪰ 0,

which is exactly the statement of Lemma 3.3.2 for the case of |∆| ≤ ρ. The case of perturbations with
‖∆‖2 ≤ ρ is completely similar, since the equivalence between Properties 2 and 3 is valid independently
of which norm of ∆ – | · | or ‖ · ‖2 – is used.  □

It is worth mentioning that Lemma 3.3.2 admits the following useful


Corollary 3.3.1 Let a, b be two nonzero vectors from Rn and Y ∈ Sn . Relation

Y ⪰ ±[abT + baT ]                                                 (!)

holds true if and only if there exists λ > 0 such that

Y ⪰ λaaT + λ−1 bbT ,

or, which is the same by the Schur Complement Lemma, if and only if there exists λ
such that

[ Y − λaaT    b ]
[    bT       λ ]  ⪰ 0.                                           (3.3.16)

Note that the case n = 1 of the Corollary is the following well-known fact: For two
nonzero reals a, b, y ≥ 2|ab| if and only if there exists positive λ such that y ≥
λa2 + λ−1 b2 , or, which is the same: For two nonzero reals a, b, minλ>0 [λa2 + λ−1 b2 ]
is achieved and is equal to 2|ab|.

To prove the Corollary, note that (!) is exactly the same as the relation

Y − a∆T bT − b∆aT ⪰ 0   ∀(∆ ∈ R1×1 : |∆| ≤ 1),

that is, this is relation (3.3.15) with ρ = 1, Q = aT ∈ R1×n , R = bT ∈ R1×n and Z = P = 1 ∈
R1×1 (i.e., in the notation from Lemma 3.3.2, k = l = p = q = 1). By the Lemma, (!) takes place if
and only if there exists λ such that (3.3.16) takes place, and since b ≠ 0, this λ must be positive.
□
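The “if” part of Corollary 3.3.1 boils down to a rank-one identity: for every λ > 0,
λaaT + λ−1 bbT ∓ [abT + baT ] = (√λ a ∓ λ−1/2 b)(√λ a ∓ λ−1/2 b)T ⪰ 0. A numerical spot-check (an illustration assuming Python with numpy, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = rng.standard_normal(4), rng.standard_normal(4)
for lam in (0.3, 1.0, 2.5):
    Y = lam * np.outer(a, a) + np.outer(b, b) / lam
    for sign in (+1, -1):
        # Y -/+ (ab^T + ba^T) must be positive semidefinite ...
        R = Y - sign * (np.outer(a, b) + np.outer(b, a))
        assert np.linalg.eigvalsh(R).min() > -1e-9
        # ... and indeed it is the square of a rank-one matrix
        v = np.sqrt(lam) * a - sign * b / np.sqrt(lam)
        assert np.allclose(R, np.outer(v, v))
```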

3.3.4.3 Lyapunov Stability Synthesis


We have seen that under reasonable assumptions on the underlying uncertainty set the question
of whether a given uncertain linear system (ULS) admits a quadratic Lyapunov function can be
reduced to a semidefinite program. Now let us switch from the analysis question: “whether the
stability of an uncertain linear system may be certified by a quadratic Lyapunov function” to
the synthesis question which is as follows. Assume that we are given an uncertain open loop
controlled system
d/dt x(t) = A(t)x(t) + B(t)u(t)                                   (UOS)
y(t) = C(t)x(t);

all we know about the collection (A(t), B(t), C(t)) of time-varying n × n matrix A(t), n × k
matrix B(t) and l × n matrix C(t) is that this collection, at every time t, belongs to a given
uncertainty set U. The question is whether we can equip our uncertain “open loop” system
(UOS) with a linear feedback
u(t) = Ky(t)
in such a way that the resulting uncertain closed loop system
d/dt x(t) = [A(t) + B(t)KC(t)] x(t)                               (UCS)
will be stable and, moreover, such that its stability can be certified by a quadratic Lyapunov
function. In other words, now we are simultaneously looking for a “stabilizing controller” and a
quadratic Lyapunov certificate of its stabilizing ability.
With the “global linearization” trick we may use the results on uncertain controlled linear
systems to build stabilizing linear controllers for nonlinear controlled systems
d/dt x(t) = f (t, x(t), u(t))
y(t) = g(t, x(t))

Assuming f (t, 0, 0) = 0, g(t, 0) = 0 and denoting by U the closed convex hull of the set

{ ( ∂f /∂x (t, x, u),  ∂f /∂u (t, x, u),  ∂g/∂x (t, x) ) :  t ≥ 0, x ∈ Rn , u ∈ Rk },

we see that every trajectory of the original nonlinear system is a trajectory of the uncertain
linear system (UOS) associated with the set U. Consequently, if we are able to find a
stabilizing controller for (UOS) and certify its stabilizing property by a quadratic Lyapunov
function, then the resulting controller/Lyapunov function will stabilize the nonlinear system
and will certify the stability of the closed loop system, respectively.

Exactly the same reasoning as in the previous Section leads us to the following

Proposition 3.3.3 Let U be the uncertainty set associated with an uncertain open loop con-
trolled system (UOS). The system admits a stabilizing controller along with a quadratic Lya-
punov stability certificate for the resulting closed loop system if and only if the optimal value in
the optimization problem

minimize   s
s.t.
[A + BKC]T X + X[A + BKC] ⪯ sIn   ∀(A, B, C) ∈ U                  (LyS)
X ⪰ In ,

in design variables s, X, K, is negative. Moreover, every feasible solution to the problem with
negative value of the objective provides a stabilizing controller along with a quadratic Lyapunov
stability certificate for the resulting closed loop system.

The bad news about (LyS) is that it is much more difficult to rewrite this problem as a
semidefinite program than in the analysis case (i.e., the case of K = 0), since (LyS) is a semi-
infinite system of nonlinear matrix inequalities. There is, however, an important particular case
where this difficulty can be eliminated. This is the case of a feedback via the full state vector
– the case when y(t) = x(t) (i.e., C(t) is the unit matrix). In this case, all we need in order to

get a stabilizing controller along with a quadratic Lyapunov certificate of its stabilizing ability,
is to solve a system of strict matrix inequalities

[A + BK]T X + X[A + BK] ⪯ Z ≺ 0   ∀(A, B) ∈ U
X ≻ 0.                                                            (∗)

Indeed, given a solution (X, K, Z) to this system, we always can convert it by normalization of
X to a solution of (LyS). Now let us make the change of variables
Y = X −1 ,  L = KX −1 ,  W = X −1 ZX −1    [ ⇔ X = Y −1 , K = LY −1 , Z = Y −1 W Y −1 ].

With respect to the new variables Y, L, W system (*) becomes


{ [A + BLY −1 ]T Y −1 + Y −1 [A + BLY −1 ] ⪯ Y −1 W Y −1 ≺ 0
{ Y −1 ≻ 0
                            ⇕
{ LT B T + Y AT + BL + AY ⪯ W ≺ 0   ∀(A, B) ∈ U
{ Y ≻ 0

(we have multiplied all original matrix inequalities from the left and from the right by Y ).
What we end up with is a system of strict linear matrix inequalities with respect to our new
design variables L, Y, W ; the question of whether this system is solvable can be converted to the
question of whether the optimal value in a problem of the type (LyS) is negative, and we come
to the following
Proposition 3.3.4 Consider an uncertain controlled linear system with a full observer:
d/dt x(t) = A(t)x(t) + B(t)u(t)
y(t) = x(t)

and let U be the corresponding uncertainty set (which now is comprised of pairs (A, B) of possible
values of (A(t), B(t)), since C(t) ≡ In is certain).
The system can be stabilized by a linear controller

u(t) = Ky(t) [≡ Kx(t)]

in such a way that the resulting uncertain closed loop system


d/dt x(t) = [A(t) + B(t)K]x(t)
admits a quadratic Lyapunov stability certificate if and only if the optimal value in the optimiza-
tion problem

minimize   s
s.t.
BL + AY + LT B T + Y AT ⪯ sIn   ∀(A, B) ∈ U                       (Ly∗ )
Y ⪰ In

in the design variables s ∈ R, Y ∈ Sn , L ∈ Mk,n , is negative. Moreover, every feasible solution
to (Ly∗ ) with negative value of the objective provides a stabilizing linear controller along with a
related quadratic Lyapunov stability certificate.

In particular, in the polytopic case:

U = Conv{(A1 , B1 ), ..., (AN , BN )}

the Quadratic Lyapunov Stability Synthesis reduces to solving the semidefinite program
min s,Y,L { s : Bi L + Ai Y + Y ATi + LT BiT ⪯ sIn , i = 1, ..., N ;  Y ⪰ In }.
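The change of variables behind Proposition 3.3.4 is a congruence: multiplying M = [A + BK]T X + X[A + BK] by Y = X −1 from both sides yields Y M Y = LT B T + Y AT + BL + AY with L = KY, and a congruence preserves the signs of eigenvalues (the inertia). This can be spot-checked numerically (a sketch assuming Python with numpy; the matrices are random, purely illustrative data):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 3, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, k))
K = rng.standard_normal((k, n))
R = rng.standard_normal((n, n))
X = R @ R.T + np.eye(n)          # a positive definite X

Y = np.linalg.inv(X)             # new variables of Proposition 3.3.4
L = K @ Y

M1 = (A + B @ K).T @ X + X @ (A + B @ K)
M2 = L.T @ B.T + Y @ A.T + B @ L + A @ Y

# M2 = Y M1 Y is a congruence of M1, hence M1 < 0 iff M2 < 0
assert np.allclose(M2, Y @ M1 @ Y)
assert (np.linalg.eigvalsh(M1) < 0).all() == (np.linalg.eigvalsh(M2) < 0).all()
```

The point of the transformation is that M2 is affine in the new design variables (Y, L), while M1 is bilinear in (X, K).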

3.4 Semidefinite relaxations of intractable problems


One of the most challenging and promising applications of Semidefinite Programming is in
building tractable approximations of “computationally intractable” optimization problems. Let
us look at several applications of this type.

3.4.1 Semidefinite relaxations of combinatorial problems


3.4.1.1 Combinatorial problems and their relaxations
Numerous problems of planning, scheduling, routing, etc., can be posed as combinatorial opti-
mization problems, i.e., optimization programs with discrete design variables (integer or zero-
one). There are several “universal forms” of combinatorial problems, among them Linear Pro-
gramming with integer variables and Linear Programming with 0-1 variables; a problem given
in one of these forms can always be converted to any other universal form, so that in principle
it does not matter which form to use. Now, the majority of combinatorial problems are difficult
– we do not know theoretically efficient (in a certain precise meaning of the notion) algorithms
for solving these problems. What we do know is that nearly all these difficult problems are, in
a sense, equivalent to each other and are NP-complete. The exact meaning of the latter notion
will be explained in Lecture 4; for the time being it suffices to say that NP-completeness of a
problem P means that the problem is “as difficult as a combinatorial problem can be” – if we
knew an efficient algorithm for P , we would be able to convert it to an efficient algorithm for
any other combinatorial problem. NP-complete problems may look extremely “simple”, as it is
demonstrated by the following example:

(Stones) Given n stones of positive integer weights (i.e., given n positive integers
a1 , ..., an ), check whether you can partition these stones into two groups of equal
weight, i.e., check whether a linear equation
a1 x1 + a2 x2 + ... + an xn = 0

has a solution with xi = ±1.
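Despite its innocent look, the only generally valid way we know to decide (Stones) is essentially enumeration. A brute-force sketch (an illustration in Python, not part of the original text):

```python
from itertools import product

def can_partition(weights):
    """Check whether signs x_i = +/-1 exist with sum_i a_i x_i = 0."""
    return any(sum(s * w for s, w in zip(signs, weights)) == 0
               for signs in product((1, -1), repeat=len(weights)))

print(can_partition([3, 1, 1, 2, 2, 1]))  # True:  3 + 2 = 1 + 1 + 2 + 1
print(can_partition([1, 1, 3]))           # False: the total weight is odd
```

The running time is exponential in n, which is exactly the worst-case behavior NP-completeness leads one to expect.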

Theoretically difficult combinatorial problems happen to be difficult to solve in practice as well.


An important ingredient in basically all algorithms for combinatorial optimization is a technique
for building bounds for the unknown optimal value of a given (sub)problem. A typical way to
estimate the optimal value of an optimization program

f ∗ = minx {f (x) : x ∈ X}

from above is to present a feasible solution x̄; then clearly f ∗ ≤ f (x̄). And a typical way to
bound the optimal value from below is to pass from the problem to its relaxation

f∗ = minx {f (x) : x ∈ X ′ }

increasing the feasible set: X ⊂ X ′ . Clearly, f∗ ≤ f ∗ , so, whenever the relaxation is efficiently
solvable (to ensure this, we should take care of how we choose X ′ ), it provides us with a
“computable” lower bound on the actual optimal value.
When building a relaxation, one should take care of two issues: on one hand, we want the
relaxation to be “efficiently solvable”. On the other hand, we want the relaxation to be “tight”,
otherwise the lower bound we get may be by far “too optimistic” and therefore not useful. For a
long time, the only practical relaxations were the LP ones, since these were the only problems one
could solve efficiently. With recent progress in optimization techniques, nonlinear relaxations
become more and more “practical”; as a result, we are witnessing a growing theoretical and
computational activity in the area of nonlinear relaxations of combinatorial problems. These
developments mostly deal with semidefinite relaxations. Let us look how they emerge.

3.4.1.2 Shor’s Semidefinite Relaxation scheme


As it was already mentioned, there are numerous “universal forms” of combinatorial problems.
E.g., a combinatorial problem can be posed as minimizing a quadratic objective under quadratic
inequality constraints:

minimize in x ∈ Rn    f0 (x) = xT A0 x + 2bT0 x + c0
s.t.                                                              (3.4.1)
f i (x) = xT Ai x + 2bTi x + ci ≤ 0,  i = 1, ..., m.

To see that this form is “universal”, note that it covers the classical universal combinatorial
problem – a generic LP program with Boolean (0-1) variables:
minx { cT x : aTi x − bi ≤ 0, i = 1, ..., m;  xj ∈ {0, 1}, j = 1, ..., n }        (B)

Indeed, the fact that a variable xj must be Boolean can be expressed by the quadratic equality

x2j − xj = 0,

which can be represented by a pair of opposite quadratic inequalities, while a linear inequality
aTi x − bi ≤ 0 is a particular case of a quadratic inequality. Thus, (B) is equivalent to the problem

minx { cT x : aTi x − bi ≤ 0, i = 1, ..., m;  x2j − xj ≤ 0, −x2j + xj ≤ 0, j = 1, ..., n },

and this problem is of the form (3.4.1).


To bound from below the optimal value in (3.4.1), we may use the same technique we used for
building the dual problem (it is called the Lagrange relaxation). We choose somehow “weights”
λi ≥ 0, i = 1, ..., m, and add the constraints of (3.4.1) with these weights to the objective, thus
coming to the function
fλ (x) = f0 (x) + Σi=1..m λi fi (x) = xT A(λ)x + 2bT (λ)x + c(λ),                 (3.4.2)

where

A(λ) = A0 + Σi=1..m λi Ai ,   b(λ) = b0 + Σi=1..m λi bi ,   c(λ) = c0 + Σi=1..m λi ci .

By construction, the function fλ (x) is ≤ the actual objective f0 (x) on the feasible set of the
problem (3.4.1). Consequently, the unconstrained infimum of this function

a(λ) = inf x∈Rn fλ (x)

is a lower bound for the optimal value in (3.4.1). We come to the following simple result (cf.
the Weak Duality Theorem):

(*) Assume that λ ∈ Rm+ and ζ ∈ R are such that

fλ (x) − ζ ≥ 0   ∀x ∈ Rn                                          (3.4.3)

(i.e., that ζ ≤ a(λ)). Then ζ is a lower bound for the optimal value in (3.4.1).
It remains to clarify what it means that (3.4.3) holds. Recalling the structure of fλ , we see
that it means that the inhomogeneous quadratic form

gλ (x) = xT A(λ)x + 2bT (λ)x + c(λ) − ζ

is nonnegative on the entire space. Now, an inhomogeneous quadratic form

g(x) = xT Ax + 2bT x + c

is nonnegative everywhere if and only if a certain associated homogeneous quadratic form is
nonnegative everywhere. Indeed, given t ≠ 0 and x ∈ Rn , the fact that g(t−1 x) ≥ 0 means exactly
the nonnegativity of the homogeneous quadratic form G(x, t)

G(x, t) = xT Ax + 2tbT x + ct2

with (n + 1) variables x, t. We see that if g is nonnegative, then G is nonnegative whenever


t ≠ 0; by continuity, G then is nonnegative everywhere. Thus, if g is nonnegative, then G is,
and of course vice versa (since g(x) = G(x, 1)). Now, to say that G is nonnegative everywhere
is literally the same as to say that the matrix
[ c   bT ]
[ b    A ]                                                        (3.4.4)

is positive semidefinite.
It is worth cataloguing our simple observation:

Simple Lemma. A quadratic inequality with a (symmetric) n × n matrix A

xT Ax + 2bT x + c ≥ 0

is identically true – is valid for all x ∈ Rn – if and only if the matrix (3.4.4) is positive
semidefinite.

Applying this observation to gλ (x), we get the following equivalent reformulation of (*):

If (λ, ζ) ∈ Rm+ × R satisfy the LMI

[ c0 + Σi=1..m λi ci − ζ      bT0 + Σi=1..m λi bTi ]
[ b0 + Σi=1..m λi bi          A0 + Σi=1..m λi Ai   ]  ⪰ 0,

then ζ is a lower bound for the optimal value in (3.4.1).
Now, what is the best lower bound we can get with this scheme? Of course, it is the optimal
value of the semidefinite program

maxζ,λ { ζ :  [ c0 + Σi=1..m λi ci − ζ      bT0 + Σi=1..m λi bTi ]
              [ b0 + Σi=1..m λi bi          A0 + Σi=1..m λi Ai   ] ⪰ 0,   λ ≥ 0 }.        (3.4.5)
We have proved the following simple
Proposition 3.4.1 The optimal value in (3.4.5) is a lower bound for the optimal value in
(3.4.1).
The outlined scheme is extremely transparent, but it looks different from a relaxation scheme
as explained above – where is the extension of the feasible set of the original problem? In fact
the scheme is of this type. To see it, note that the value of a quadratic form at a point x ∈ Rn
can be written as the Frobenius inner product of a matrix defined by the problem data and the
dyadic matrix X(x) = (1; x)(1; x)T :

xT Ax + 2bT x + c = (1; x)T [ c  bT ; b  A ] (1; x) = Tr( [ c  bT ; b  A ] X(x) ).
Consequently, (3.4.1) can be written as
minx { Tr( [ c0  bT0 ; b0  A0 ] X(x) ) : Tr( [ ci  bTi ; bi  Ai ] X(x) ) ≤ 0, i = 1, ..., m }.        (3.4.6)
Thus, we may think of (3.4.6) as a problem with linear objective and linear constraints and with
the design vector X which is a symmetric (n + 1) × (n + 1) matrix running through
the nonlinear manifold X of dyadic matrices X(x), x ∈ Rn . Clearly, all points of X are positive
semidefinite matrices with North-Western entry 1. Now let X̄ be the set of all such matrices.
Replacing X by X̄ , we get a relaxation of (3.4.6) (the latter problem is, essentially, our original
problem (3.4.1)). This relaxation is the semidefinite program

minX { Tr(Ā0 X) : Tr(Āi X) ≤ 0, i = 1, ..., m;  X ⪰ 0;  X11 = 1 },
                                                                                          (3.4.7)
Āi = [ ci  bTi ; bi  Ai ],  i = 0, 1, ..., m.
We get the following
Proposition 3.4.2 The optimal value of the semidefinite program (3.4.7) is a lower bound for
the optimal value in (3.4.1).
One can easily verify that problem (3.4.5) is just the semidefinite dual of (3.4.7); thus, when
deriving (3.4.5), we were in fact implementing the idea of relaxation. This is why in the sequel
we call both (3.4.7) and (3.4.5) semidefinite relaxations of (3.4.1).
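The identity underlying the passage from (3.4.1) to (3.4.6), the value of an inhomogeneous quadratic form written as the Frobenius inner product Tr(Ā X(x)), can be spot-checked numerically (a sketch assuming Python with numpy, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # symmetric A
b, x = rng.standard_normal(n), rng.standard_normal(n)
c = rng.standard_normal()

Abar = np.block([[np.array([[c]]), b[None, :]],
                 [b[:, None], A]])
v = np.concatenate(([1.0], x))
Xx = np.outer(v, v)                                  # the dyadic matrix X(x)

lhs = x @ A @ x + 2 * b @ x + c
assert abs(lhs - np.trace(Abar @ Xx)) < 1e-9
# X(x) is PSD with North-Western entry 1 -- the two constraints kept in (3.4.7)
assert np.linalg.eigvalsh(Xx).min() > -1e-9 and abs(Xx[0, 0] - 1) < 1e-12
```

The relaxation (3.4.7) simply forgets that X must be dyadic (i.e., of rank one) and keeps only the two properties checked in the last line.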

3.4.1.3 When is the semidefinite relaxation exact?


In general, the optimal value in (3.4.7) is just a lower bound on the optimal value of (3.4.1).
There are, however, two cases when this bound is exact. These are

• Convex case, where all quadratic forms in (3.4.1) are convex (i.e., Ai ⪰ 0, i = 0, 1, ..., m).
Here, strict feasibility of the problem (i.e., the existence of a feasible solution x̄ with
fi (x̄) < 0, i = 1, ..., m) plus its boundedness from below imply that (3.4.7) is solvable with
the optimal value equal to the one of (3.4.1). This statement is a particular case of the
well-known Lagrange Duality Theorem in Convex Programming.

• The case of m = 1. Here the optimal value in (3.4.1) is equal to the one in (3.4.7), provided
that (3.4.1) is strictly feasible. This highly surprising fact (no convexity is assumed!) is
called Inhomogeneous S-Lemma; we shall prove it in Section 3.5.2.
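The m = 1 case can be observed on a tiny example. For f0 (x) = x² − 2x and f1 (x) = x² − 1 (so the feasible set is [−1, 1] and the true optimal value is −1, attained at x = 1), the Lagrange bound is a(λ) = inf x [(1 + λ)x² − 2x − λ] = −1/(1 + λ) − λ, which is maximized at λ = 0, where it equals the optimal value exactly. A numerical sketch (Python with numpy; the example problem is ours, not from the text):

```python
import numpy as np

f0 = lambda x: x**2 - 2*x

# True optimal value by fine brute force over the feasible set [-1, 1]
xs = np.linspace(-1, 1, 200001)
fstar = f0(xs).min()

# Lagrange bound: a(lam) = inf_x (1+lam) x^2 - 2 x - lam = -1/(1+lam) - lam
lams = np.linspace(0, 5, 5001)
bound = (-1.0 / (1 + lams) - lams).max()

print(fstar, bound)  # both equal -1.0: the bound is tight, as the
                     # Inhomogeneous S-Lemma predicts for m = 1
```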

Let us look at several interesting examples of Semidefinite relaxations.

3.4.1.4 Stability number, Shannon and Lovasz capacities of a graph


Stability number of a graph. Consider a (non-oriented) graph – a finite set of nodes linked
by arcs12) , like the simple 5-node graph C5 :

[Figure: the 5-node cycle graph C5 – nodes A, B, C, D, E linked by the arcs AB, BC, CD, DE, EA.]

Graph C5

One of the fundamental characteristics of a graph Γ is its stability number α(Γ) defined as the
maximum cardinality of an independent subset of nodes – a subset such that no two nodes
from it are linked by an arc. E.g., the stability number for the graph C5 is 2, and a maximal
independent set is, e.g., {A; C}.
The problem of computing the stability number of a given graph is NP-complete; this is why
it is important to know how to bound this number.

Shannon capacity of a graph. An upper bound on the stability number of a graph which
is interesting in its own right is the Shannon capacity Θ(Γ) defined as follows.
Let us treat the nodes of Γ as letters of certain alphabet, and the arcs as possible errors in
certain communication channel: you can send through the channel one letter per unit time, and
what arrives on the other end of the channel can be either the letter you have sent, or any letter
adjacent to it. Now assume that you are planning to communicate with an addressee through the
12)
One of the formal definitions of a (non-oriented) graph is as follows: an n-node graph is just an n × n symmetric
matrix A with entries 0, 1 and zero diagonal. The rows (and the columns) of the matrix are identified with the
nodes 1, 2, ..., n of the graph, and the nodes i, j are adjacent (i.e., linked by an arc) exactly for those i, j with
Aij = 1.

channel by sending n-letter words (n is fixed). You fix in advance a dictionary Dn of words to be
used and make this dictionary known to the addressee. What you are interested in when building
the dictionary is to get a good one, meaning that no word from it could be transformed by the
channel into another word from the dictionary. If your dictionary satisfies this requirement, you
may be sure that the addressee will never misunderstand you: whatever word from the dictionary
you send and whatever possible transmission errors occur, the addressee is able either to get the
correct message, or to realize that the message was corrupted during transmission, but there
is no risk that your “yes” will be read as “no!”. Now, in order to utilize the channel “at full
capacity”, you are interested in getting as large a dictionary as possible. How many words can it
include? The answer is clear: this is precisely the stability number of the graph Γn defined as follows:
the nodes of Γn are ordered n-element collections of the nodes of Γ – all possible n-letter words
in your alphabet; two distinct nodes (i1 , ..., in ) and (j1 , ..., jn ) are adjacent in Γn if and only if for
every l the l-th letters il and jl in the two words either coincide or are adjacent in Γ (i.e., two
distinct n-letter words are adjacent if the transmission can convert one of them into the other
one). Let us denote the maximum number of words in a “good” dictionary Dn (i.e., the stability
number of Γn ) by f (n). The function f (n) possesses the following nice property:

f (k)f (l) ≤ f (k + l), k, l = 1, 2, ... (∗)

Indeed, given the best (of the cardinality f (k)) good dictionary Dk and the best good
dictionary Dl , let us build a dictionary comprised of all (k + l)-letter words as follows: the
initial k-letter fragment of a word belongs to Dk , and the remaining l-letter fragment belongs
to Dl . The resulting dictionary is clearly good and contains f (k)f (l) words, and (*) follows.

Now, it is a simple exercise in analysis to see that for a nonnegative function f with property
(*) one has

limk→∞ (f (k))1/k = supk≥1 (f (k))1/k ∈ [0, +∞].

In our situation supk≥1 (f (k))1/k < ∞, since clearly f (k) ≤ nk , n being the number of letters (the
number of nodes in Γ). Consequently, the quantity

Θ(Γ) = limk→∞ (f (k))1/k

is well-defined; moreover, for every k the quantity (f (k))1/k is a lower bound for Θ(Γ). The
number Θ(Γ) is called the Shannon capacity of Γ. Our immediate observation is that

(!) The Shannon capacity Θ(Γ) upper-bounds the stability number of Γ:

α(Γ) ≤ Θ(Γ).

Indeed, as we remember, (f (k))1/k is a lower bound for Θ(Γ) for every k = 1, 2, ...; setting
k = 1 and taking into account that f (1) = α(Γ), we get the desired result.

We see that the Shannon capacity number is an upper bound on the stability number; and this
bound has a nice interpretation in terms of the Information Theory. The bad news is that we
do not know how to compute the Shannon capacity. E.g., what is it for the toy graph C5 ?
The stability number of C5 clearly is 2, so that our first observation is that

Θ(C5 ) ≥ α(C5 ) = 2.

To get a better estimate, let us look at the graph (C5 )2 (as we remember, Θ(Γ) ≥ (f (k))1/k =
(α(Γk ))1/k for every k). The graph (C5 )2 has 25 nodes, so that we do not draw it; it, however,
is not that difficult to find its stability number, which turns out to be 5. A good 5-element
dictionary (≡ a 5-node independent set in (C5 )2 ) is, e.g.,

AA, BC, CE, DB, ED.

Thus, we get

Θ(C5 ) ≥ (α((C5 )2 ))1/2 = √5.
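Both stability numbers used above, α(C5 ) = 2 and α((C5 )2 ) = 5, can be verified by exhaustive search; for the 25-node graph (C5 )2 a simple branching recursion suffices (a sketch in Python, not part of the original text):

```python
from itertools import product

def alpha(nodes, adj):
    """Exact stability number by branching on the first vertex."""
    def mis(verts):
        if not verts:
            return 0
        v, rest = verts[0], verts[1:]
        # either skip v, or take v and drop all its neighbours
        return max(mis(rest),
                   1 + mis([u for u in rest if u not in adj[v]]))
    return mis(list(nodes))

c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(alpha(range(5), c5))  # 2

# (C5)^2: nodes are 2-letter words; distinct words are adjacent iff in each
# position the letters coincide or are adjacent in C5
words = list(product(range(5), repeat=2))
adj2 = {w: {z for z in words
            if z != w and all(p == q or q in c5[p] for p, q in zip(w, z))}
        for w in words}
print(alpha(words, adj2))  # 5
```

A maximum independent set found this way is exactly a largest “good” dictionary of 2-letter words, such as the one displayed above.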

Attempts to compute the subsequent lower bounds (f (k))1/k , as long as they are implementable
(think how many vertices there are in (C5 )4 !), do not yield any improvements, and for more than
20 years it remained unknown whether Θ(C5 ) = √5 or is > √5. And this is for a toy graph!
The breakthrough in the area of upper bounds for the stability number is due to L. Lovasz who
in the early 70’s found a new – computable! – bound of this type.

Lovasz capacity number. Given an n-node graph Γ, let us associate with it an affine matrix-
valued function L(x) taking values in the space of n × n symmetric matrices, namely, as follows:
• For every pair i, j of indices (1 ≤ i, j ≤ n) such that the nodes i and j are not linked by
an arc, the ij-th entry of L is equal to 1;
• For a pair i < j of indices such that the nodes i, j are linked by an arc, the ij-th and the
ji-th entries in L are equal to xij – to the variable associated with the arc (i, j).
Thus, L(x) is indeed an affine function of N design variables xij , where N is the number of
arcs in the graph. E.g., for graph C5 the function L is as follows:

    [ 1     xAB   1     1     xEA ]
    [ xAB   1     xBC   1     1   ]
L = [ 1     xBC   1     xCD   1   ] .
    [ 1     1     xCD   1     xDE ]
    [ xEA   1     1     xDE   1   ]

Now, the Lovasz capacity number ϑ(Γ) is defined as the optimal value of the optimization
program

minx { λmax (L(x)) },

i.e., as the optimal value in the semidefinite program

minλ,x { λ : λIn − L(x) ⪰ 0 }.                                    (L)

Proposition 3.4.3 [Lovasz] The Lovasz capacity number is an upper bound for the Shannon
capacity:
ϑ(Γ) ≥ Θ(Γ)

and, consequently, for the stability number:

ϑ(Γ) ≥ Θ(Γ) ≥ α(Γ).



For the graph C5 , the Lovasz capacity can be easily computed analytically and turns out to
be exactly √5. Thus, a small byproduct of Lovasz’s result is a solution to the problem which
remained open for two decades.
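The value ϑ(C5 ) = √5 can be reproduced numerically without a general SDP solver. Since λmax is convex and the arcs of C5 form a single orbit of its cyclic symmetry, averaging over rotations cannot increase λmax , so it suffices to take all arc variables equal to a common value t; minimizing λmax (L(t)) over a fine grid of t then recovers √5 ≈ 2.236 (a sketch assuming Python with numpy, not part of the original text):

```python
import numpy as np

def lam_max(t):
    # L(x) for C5 with every arc variable x_ij set to the common value t
    L = np.ones((5, 5))
    for i in range(5):
        L[i, (i + 1) % 5] = L[(i + 1) % 5, i] = t
    return np.linalg.eigvalsh(L)[-1]

ts = np.linspace(-2.0, 1.0, 3001)
theta = min(lam_max(t) for t in ts)
print(theta, np.sqrt(5))  # theta equals sqrt(5) up to the grid resolution
```

With equal arc entries L becomes circulant, and the scalar minimization above is exactly problem (L) restricted to the symmetric slice.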
Let us look how the Lovasz bound on the stability number can be obtained from the general
relaxation scheme. To this end note that the stability number of an n-node graph Γ is the
optimal value of the following optimization problem with 0-1 variables:
maxx { eT x : xi xj = 0 whenever i, j are adjacent nodes;  xi ∈ {0, 1}, i = 1, ..., n },   e = (1, ..., 1)T ∈ Rn .

Indeed, 0-1 n-dimensional vectors can be identified with sets of nodes of Γ: the coordinates
xi of the vector x representing a set A of nodes are ones for i ∈ A and zeros otherwise. The
quadratic equality constraints xi xj = 0 for such a vector express equivalently the fact that the
corresponding set of nodes is independent, and the objective eT x counts the cardinality of this
set.
As we remember, the 0-1 restrictions on the variables can be represented equivalently by
quadratic equality constraints, so that the stability number of Γ is the optimal value of the
following problem with quadratic (in fact linear) objective and quadratic equality constraints:

maximize eT x
s.t.
(3.4.8)
xi xj = 0, (i, j) is an arc
2
xi − xi = 0, i = 1, ..., n.

The latter problem is in the form of (3.4.1), with the only difference that the objective should
be maximized rather than minimized. Switching from maximization of eT x to minimization of
(−e)T x and passing to (3.4.5), we get the problem

−ζ − 21 (e + µ)T
   
max ζ : 0 ,
ζ,µ − 12 (e + µ) A(µ, λ)

where µ is n-dimensional and A(µ, λ) is as follows:

• The diagonal entries of A(µ, λ) are µ1 , ..., µn ;

• The off-diagonal cells ij corresponding to non-adjacent nodes i, j (“empty cells”) are zeros;

• The off-diagonal cells ij, i < j, and the symmetric cells ji corresponding to adjacent nodes
i, j (“arc cells”) are filled with free variables λij .

Note that the optimal value in the resulting problem is a lower bound for minus the optimal
value of (3.4.8), i.e., for minus the stability number of Γ.
Passing in the resulting problem from the variable ζ to a new variable ξ = −ζ and again
switching from maximization of ζ = −ξ to minimization of ξ, we end up with the semidefinite
program
minξ,λ,µ { ξ :  [ ξ               − 1/2 (e + µ)T ]
                [ − 1/2 (e + µ)       A(µ, λ)    ] ⪰ 0 }.         (3.4.9)
The optimal value in this problem is minus the optimal value in the previous one, which, in
turn, is a lower bound on minus the stability number of Γ; consequently, the optimal value in
(3.4.9) is an upper bound on the stability number of Γ.

We have built a semidefinite relaxation (3.4.9) of the problem of computing the stability
number of Γ; the optimal value in the relaxation is an upper bound on the stability number.
To get the Lovasz relaxation, let us further fix the µ-variables at the level 1 (this may only
increase the optimal value in the problem, so that it still will be an upper bound for the stability
number)13) . With this modification, we come to the problem

minξ,λ { ξ :  [ ξ       −eT     ]
              [ −e     A(e, λ)  ] ⪰ 0 }.

In every feasible solution to the problem, ξ should be ≥ 1 (it is an upper bound for α(Γ) ≥ 1).
When ξ ≥ 1, the LMI

[ ξ      −eT    ]
[ −e    A(e, λ) ]  ⪰ 0

by the Schur Complement Lemma is equivalent to the LMI

A(e, λ) ⪰ (−e)ξ −1 (−e)T ,

or, which is the same, to the LMI

ξA(e, λ) − eeT ⪰ 0.

The left hand side matrix in the latter LMI is equal to ξIn − B(ξ, λ), where the matrix B(ξ, λ)
is as follows:

• The diagonal entries of B(ξ, λ) are equal to 1;

• The off-diagonal “empty cells” are filled with ones;

• The “arc cells” from a symmetric off-diagonal pair ij and ji (i < j) are filled with ξλij .

Passing from the design variables λ to the new ones xij = ξλij , we conclude that problem (3.4.9)
with µ’s set to ones is equivalent to the problem

minξ,x { ξ : ξIn − L(x) ⪰ 0 },

whose optimal value is exactly the Lovasz capacity number of Γ.


As a byproduct of our derivation, we get the easy part of the Lovasz Theorem – the inequality
ϑ(Γ) ≥ α(Γ); this inequality, however, could be easily obtained directly from the definition of
ϑ(Γ). The advantage of our derivation is that it demonstrates the origin of ϑ(Γ).

How good is the Lovasz capacity number? The Lovasz capacity number plays a crucial
role in numerous graph-related problems; there is an important sub-family of graphs – perfect
graphs – for which this number coincides with the stability number. However, for a general-type
graph Γ, ϑ(Γ) may be a fairly poor bound for α(Γ). Lovasz has proved that for any graph Γ with
n nodes, ϑ(Γ)ϑ(Γ̂) ≥ n, where Γ̂ is the complement to Γ (i.e., two distinct nodes are adjacent in
Γ̂ if and only if they are not adjacent in Γ). It follows that for an n-node graph Γ one always has
max[ϑ(Γ), ϑ(Γ̂)] ≥ √n. On the other hand, it turns out that for a random n-node graph Γ (the
arcs are drawn at random and independently of each other, with probability 0.5 to draw an arc
13) In fact, setting µi = 1, we do not vary the optimal value at all.
3.4. SEMIDEFINITE RELAXATIONS OF INTRACTABLE PROBLEMS 205

linking two given distinct nodes) max[α(Γ), α(Γ̂)] is “typically” (with probability approaching
1 as n grows) of order of ln n. It follows that for random n-node graphs a typical value of the
ratio ϑ(Γ)/α(Γ) is at least of order of n^{1/2}/ln n; as n grows, this ratio blows up to ∞.
A natural question arises: are there “difficult” (NP-complete) combinatorial problems admit-
ting “good” semidefinite relaxations – those with the quality of approximation not deteriorating
as the sizes of instances grow? Let us look at two breakthrough results in this direction.

3.4.1.5 The MAXCUT problem and maximizing quadratic form over a box
The MAXCUT problem. The maximum cut problem is as follows:
Problem 3.4.1 [MAXCUT] Let Γ be an n-node graph, and let the arcs (i, j) of the graph be
associated with nonnegative “weights” aij. The problem is to find a cut of the largest possible
weight, i.e., to partition the set of nodes in two parts S, S′ in such a way that the total weight
of all arcs “linking S and S′” (i.e., with one incident node in S and the other one in S′) is as
large as possible.
In the MAXCUT problem, we may assume that the weights aij = aji ≥ 0 are defined for every
pair i, j of indices; it suffices to set aij = 0 for pairs i, j of non-adjacent nodes.
In contrast to the minimum cut problem (where we should minimize the weight of a cut
instead of maximizing it), which is, basically, a nice LP program of finding the maximum flow
in a network and is therefore efficiently solvable, the MAXCUT problem is as difficult as a
combinatorial problem can be – it is NP-complete.

Theorem of Goemans and Williamson [24]. It is easy to build a semidefinite relaxation
of MAXCUT. To this end, let us pose MAXCUT as a quadratic problem with quadratic equality
constraints. Let Γ be an n-node graph. A cut (S, S′) – a partitioning of the set of nodes into
two disjoint parts S, S′ – can be identified with an n-dimensional vector x with coordinates ±1:
xi = 1 for i ∈ S, xi = −1 for i ∈ S′. The quantity (1/2) Σ_{i,j=1}^n aij xi xj is the total weight
of the arcs with both ends either in S or in S′ minus the weight of the cut (S, S′); consequently,
the quantity
both ends either in S or in S 0 minus the weight of the cut (S, S 0 ); consequently, the quantity
 
(1/2) [ (1/2) Σ_{i,j=1}^n aij − (1/2) Σ_{i,j=1}^n aij xi xj ] = (1/4) Σ_{i,j=1}^n aij (1 − xi xj)

is exactly the weight of the cut (S, S′).


We conclude that the MAXCUT problem can be posed as the following quadratic problem
with quadratic equality constraints:

max_x { (1/4) Σ_{i,j=1}^n aij (1 − xi xj) : xi² = 1, i = 1, ..., n }.   (3.4.10)

For this problem, the semidefinite relaxation (3.4.7) after evident simplifications becomes the
semidefinite program

maximize   (1/4) Σ_{i,j=1}^n aij (1 − Xij)
s.t.       X = [Xij]_{i,j=1}^n = X^T ⪰ 0,            (3.4.11)
           Xii = 1, i = 1, ..., n;
206 LECTURE 3. SEMIDEFINITE PROGRAMMING

the optimal value in the latter problem is an upper bound for the optimal value of MAXCUT.
The fact that (3.4.11) is a relaxation of (3.4.10) can be established directly, independently
of any “general theory”: (3.4.10) is the problem of maximizing the objective
(1/4) Σ_{i,j=1}^n aij − (1/4) Σ_{i,j=1}^n aij xi xj ≡ (1/4) Σ_{i,j=1}^n aij − (1/4) Tr(AX(x)),   X(x) = xx^T

over all rank 1 matrices X(x) = xx^T given by n-dimensional vectors x with entries ±1. All
these matrices are symmetric positive semidefinite with unit entries on the diagonal, i.e.,
they belong to the feasible set of (3.4.11). Thus, (3.4.11) indeed is a relaxation of (3.4.10).
The quality of the semidefinite relaxation (3.4.11) is given by the following brilliant result of
Goemans and Williamson (1995):
Theorem 3.4.1 Let OPT be the optimal value of the MAXCUT problem (3.4.10), and SDP
be the optimal value of the semidefinite relaxation (3.4.11). Then
OPT ≤ SDP ≤ α · OPT,   α = 1.138...   (3.4.12)
Proof. The left inequality in (3.4.12) is what we already know – it simply says that semidef-
inite program (3.4.11) is a relaxation of MAXCUT. To get the right inequality, Goemans and
Williamson act as follows. Let X = [Xij ] be a feasible solution to the semidefinite relaxation.
Since X is positive semidefinite, it is the covariance matrix of a Gaussian random vector ξ with
zero mean, so that E {ξi ξj } = Xij . Now consider the random vector ζ = sign[ξ] comprised of
signs of the entries in ξ. A realization of ζ is almost surely a vector with coordinates ±1, i.e., it
is a cut. What is the expected weight of this cut? A straightforward computation demonstrates
that E{ζi ζj} = (2/π) asin(Xij)^{14)}. It follows that

E{ (1/4) Σ_{i,j=1}^n aij (1 − ζi ζj) } = (1/4) Σ_{i,j=1}^n aij (1 − (2/π) asin(Xij)).   (3.4.13)

Now, it is immediately seen that

−1 ≤ t ≤ 1 ⇒ 1 − (2/π) asin(t) ≥ α^{−1}(1 − t),   α = 1.138...
In view of aij ≥ 0, the latter observation combines with (3.4.13) to imply that

E{ (1/4) Σ_{i,j=1}^n aij (1 − ζi ζj) } ≥ α^{−1} (1/4) Σ_{i,j=1}^n aij (1 − Xij).

The left hand side of this inequality, for evident reasons, is ≤ OPT. We have proved that the
value of the objective in (3.4.11) at every feasible solution X to the problem is ≤ α · OPT,
whence SDP ≤ α · OPT as well. □

Note that the proof of Theorem 3.4.1 provides a randomized algorithm for building a suboptimal,
within the factor α^{−1} = 0.878..., solution to MAXCUT: we find a (nearly) optimal
solution X to the semidefinite relaxation (3.4.11) of MAXCUT, generate a sample of, say, 100
realizations of the associated random cuts ζ, and choose the one with the maximum weight.
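The randomized rounding just described is easy to sketch in code. The snippet below is an illustration, not part of the text: the toy weight matrix and the trivially feasible choice X = I are assumptions made for the example (solving (3.4.11) itself would require an SDP solver). It samples Gaussian vectors with covariance X, rounds them to sign vectors, and keeps the best cut found:

```python
import numpy as np

def cut_weight(a, x):
    # weight of the cut given by a +/-1 vector x: (1/4) * sum_ij a_ij (1 - x_i x_j)
    return 0.25 * float(np.sum(a * (1.0 - np.outer(x, x))))

def best_random_cut(a, X, n_samples=100, seed=0):
    # Goemans-Williamson rounding: xi ~ N(0, X), zeta = sign(xi) is a.s. a cut
    rng = np.random.default_rng(seed)
    xi = rng.multivariate_normal(np.zeros(X.shape[0]), X, size=n_samples)
    zeta = np.sign(xi)
    zeta[zeta == 0] = 1.0            # guard against the probability-zero event sign(0)
    return max(cut_weight(a, z) for z in zeta)

# toy graph: a 4-cycle with unit weights; its maximum cut has weight 4
a = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    a[i, j] = a[j, i] = 1.0
w = best_random_cut(a, np.eye(4))    # X = I is feasible (PSD, unit diagonal) for (3.4.11)
```

With an SDP-optimal X instead of I, the expected weight of a single random cut is already within the factor 0.878... of OPT, by the computation in the proof.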
14) Recall that X ⪰ 0 is normalized by the requirement Xii = 1 for all i. Omitting this normalization, we
would get E{ζi ζj} = (2/π) asin( Xij / √(Xii Xjj) ).

3.4.1.6 Nesterov’s π/2 Theorem
In the MAXCUT problem, we are in fact maximizing the homogeneous quadratic form

x^T Ax ≡ Σ_{i=1}^n ( Σ_{j=1}^n aij ) xi² − Σ_{i,j=1}^n aij xi xj

over the set Sn of n-dimensional vectors x with coordinates ±1. It is easily seen that the matrix
A of this form is positive semidefinite and possesses a specific feature that the off-diagonal
entries are nonpositive, while the sum of the entries in every row is 0. What happens when
we are maximizing over Sn a quadratic form xT Ax with a general-type (symmetric) matrix A?
An extremely nice result in this direction was obtained by Yu. Nesterov. The cornerstone of
Nesterov’s construction relates to the case when A is positive semidefinite, and this is the case
we shall focus on. Note that the problem of maximizing a quadratic form xT Ax with positive
semidefinite (and, say, integer) matrix A over Sn , same as MAXCUT, is NP-complete.
The semidefinite relaxation of the problem

max_x { x^T Ax : x ∈ Sn }   [⇔ xi ∈ {−1, 1}, i = 1, ..., n]   (3.4.14)

can be built exactly in the same way as (3.4.11) and turns out to be the semidefinite program

maximize   Tr(AX)
s.t.       X = X^T = [Xij]_{i,j=1}^n ⪰ 0,        (3.4.15)
           Xii = 1, i = 1, ..., n.

The optimal value in this problem, let it again be called SDP, is ≥ the optimal value OPT in
the original problem (3.4.14). The ratio OPT/SDP, however, cannot be too large:

Theorem 3.4.2 [Nesterov’s π/2 Theorem] Let A be positive semidefinite. Then

OPT ≤ SDP ≤ (π/2) · OPT   [π/2 = 1.570...]
The proof utilizes the central idea of Goemans and Williamson in the following brilliant
reasoning:
The inequality SDP ≥ OP T is valid since (3.4.15) is a relaxation of (3.4.14). Let X be a feasible
solution to the relaxed problem; let, same as in the MAXCUT construction, ξ be a Gaussian random
vector with zero mean and the covariance matrix X, and let ζ = sign[ξ]. As we remember,
E{ζ^T Aζ} = Σ_{i,j} Aij (2/π) asin(Xij) = (2/π) Tr(A asin[X]),   (3.4.16)

where for a function f on the axis and a matrix X, f[X] denotes the matrix with the entries f(Xij). Now
– the crucial (although simple) observation:
For a positive semidefinite symmetric matrix X with diagonal entries equal to 1 (in fact, for any
positive semidefinite X with |Xij| ≤ 1) one has

asin[X] ⪰ X.   (3.4.17)
208 LECTURE 3. SEMIDEFINITE PROGRAMMING

The proof is immediate: denoting by [X]^k the matrix with the entries Xij^k and making use of the
Taylor series of asin (this series converges uniformly on [−1, 1]), for a matrix X with all entries
belonging to [−1, 1] we get

asin[X] − X = Σ_{k=1}^∞ [1×3×5×...×(2k−1)] / [2^k k!(2k+1)] · [X]^{2k+1},

and all we need is to note that all matrices in the right hand side are ⪰ 0 along with X^{15)}.

Combining (3.4.16), (3.4.17) and the fact that A is positive semidefinite, we conclude that

[OPT ≥] E{ζ^T Aζ} = (2/π) Tr(A asin[X]) ≥ (2/π) Tr(AX).

The resulting inequality is valid for every feasible solution X of (3.4.15), whence SDP ≤ (π/2) OPT. □
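The key observation (3.4.17) is easy to probe numerically. The minimal sketch below (an illustration only; the randomly generated correlation-type matrix is an assumption for the example) checks that the entrywise matrix asin[X] − X has no negative eigenvalues:

```python
import numpy as np

# Build a random PSD matrix with unit diagonal (so |X_ij| <= 1 by Cauchy-Schwarz),
# then verify asin[X] >= X in the Loewner (semidefinite) order.
rng = np.random.default_rng(0)
G = rng.standard_normal((6, 6))
X = G @ G.T                          # PSD
d = np.sqrt(np.diag(X))
X = X / np.outer(d, d)               # now X_ii = 1 and |X_ij| <= 1
D = np.arcsin(X) - X                 # entrywise asin, as in (3.4.17)
min_eig = np.linalg.eigvalsh(D).min()
```

The Taylor-series argument above explains why this holds: every term [X]^{2k+1} of the entrywise series is PSD along with X.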

The π/2 Theorem has a number of far-reaching consequences (see Nesterov’s papers [45, 46]),
for example, the following two:

Theorem 3.4.3 Let T be a convex compact subset of R^n_+ which intersects int R^n_+. Consider
the set
T = {x ∈ R^n : (x1², ..., xn²)^T ∈ T },
and let A be a symmetric n × n matrix. Then the quantities m_*(A) = min_{x∈T} x^T Ax and
m^*(A) = max_{x∈T} x^T Ax admit efficiently computable bounds

s_*(A) ≡ min_X { Tr(AX) : X ⪰ 0, (X11, ..., Xnn)^T ∈ T },
s^*(A) ≡ max_X { Tr(AX) : X ⪰ 0, (X11, ..., Xnn)^T ∈ T },

such that
s_*(A) ≤ m_*(A) ≤ m^*(A) ≤ s^*(A)
and
m^*(A) − m_*(A) ≤ s^*(A) − s_*(A) ≤ (π/(4 − π)) (m^*(A) − m_*(A))

(in the case of A ⪰ 0 and 0 ∈ T , the factor π/(4 − π) can be replaced with π/2).
Thus, the “variation” max_{x∈T} x^T Ax − min_{x∈T} x^T Ax of the quadratic form x^T Ax on T can be
efficiently bounded from above, and the bound is tight within an absolute constant factor.
Note that if T is given by an essentially strictly feasible SDR, then both (−s_*(A)) and s^*(A)
are SDr functions of A (semidefinite version of Proposition 2.4.4).

15) The fact that the entry-wise product of two positive semidefinite matrices is positive semidefinite is a standard
fact of Linear Algebra. The easiest way to understand it is to note that if P, Q are positive semidefinite
symmetric matrices of the same size, then they are Gram matrices: Pij = pi^T pj for a certain system of vectors pi
from a certain (no matter which exactly) R^N, and Qij = qi^T qj for a system of vectors qi from a certain R^M. But
then the entry-wise product of P and Q – the matrix with the entries Pij Qij = (pi^T pj)(qi^T qj) – also is a Gram
matrix, namely, the Gram matrix of the matrices pi qi^T ∈ M^{N,M} = R^{NM}. Since every Gram matrix is positive
semidefinite, the entry-wise product of P and Q is positive semidefinite.
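The Gram-matrix argument of the footnote can be sanity-checked numerically; the sketch below (an illustration with random PSD matrices, not from the text) verifies that the entrywise product stays positive semidefinite:

```python
import numpy as np

# Entrywise (Hadamard) product of two PSD matrices is again PSD (Schur product theorem).
rng = np.random.default_rng(1)
Gp = rng.standard_normal((5, 5))
Gq = rng.standard_normal((5, 5))
P = Gp @ Gp.T                        # random PSD matrix
Q = Gq @ Gq.T                        # random PSD matrix
H = P * Q                            # entrywise product
min_eig = np.linalg.eigvalsh(H).min()
```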

Theorem 3.4.4 Let p ∈ [2, ∞], r ∈ [1, 2], and let A be an m × n matrix. Consider the problem
of computing the operator norm ‖A‖_{p,r} of the linear mapping x ↦ Ax, considered as the mapping
from the space R^n equipped with the norm ‖·‖_p to the space R^m equipped with the norm ‖·‖_r:

‖A‖_{p,r} = max { ‖Ax‖_r : ‖x‖_p ≤ 1 };

note that it is difficult (NP-hard) to compute this norm, except for the case p = r = 2. The
“computationally intractable” quantity ‖A‖_{p,r} admits an efficiently computable upper bound

ω_{p,r}(A) = min_{λ∈R^m, µ∈R^n} { (1/2) [ ‖µ‖_{p/(p−2)} + ‖λ‖_{r/(2−r)} ] : [Diag{µ}, A^T; A, Diag{λ}] ⪰ 0 };

this bound is exact for a nonnegative matrix A, and for an arbitrary A the bound is tight within
the factor π/(2√3 − 2π/3) = 2.293...:

‖A‖_{p,r} ≤ ω_{p,r}(A) ≤ (π/(2√3 − 2π/3)) ‖A‖_{p,r}.

Moreover, when p ∈ [1, ∞) and r ∈ [1, 2] are rational (or p = ∞ and r ∈ [1, 2] is rational), the
bound ω_{p,r}(A) is an SDr function of A.

3.4.1.7 Shor’s semidefinite relaxation revisited


In retrospect, the Goemans–Williamson and Nesterov theorems on tightness of SDP relaxation
follow a certain approach which, to the best of our knowledge, underlies all other results of this
type. This approach stems from a “probabilistic interpretation” of Shor’s relaxation scheme.
Specifically, consider the quadratically constrained quadratic problem

minimize in x ∈ R^n   f0(x) = x^T A0 x + 2b0^T x + c0
s.t.                  fi(x) = x^T Ai x + 2bi^T x + ci ≤ 0, i = 1, ..., m,      (P)

and imagine that we are looking for a random solution x which should satisfy the constraints
on average and, under this restriction, minimize the average value of the objective. As far as the
averages of the objective and constraints are concerned, all that matters is the moment matrix
X = E{[1; x][1; x]^T} = [1, [E{x}]^T; E{x}, E{xx^T}]

of x comprised of the first and the second moments of random solution x. In terms of this
matrix the averages of fi (x) are Tr(Āi X), with Āi given by (3.4.7). Now, the moment matrices
of random vectors x of dimension n are exactly positive semidefinite (n + 1) × (n + 1) matrices X
with X11 = 1 – every matrix of this type can be represented as the moment matrix of a random
vector (with significant freedom in the distribution of the vector), and vice versa. As a result,
the “averaged” version of (P ) is exactly the semidefinite relaxation (3.4.7) of (P ). Note that in
the homogeneous case – the one where all quadratic forms in (P) have no linear terms (bi = 0,
0 ≤ i ≤ m) – there is no reason to care about the first order moments of a random solution x, and
the “pass to the averaged problem” approach results in the semidefinite relaxation

min_X { Tr(A0 X) : X ⪰ 0, Tr(Ai X) + ci ≤ 0, 1 ≤ i ≤ m }   [X = E{xx^T}]
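The characterization of moment matrices used above can be illustrated numerically: for any random vector x, the matrix E{[1; x][1; x]^T} is positive semidefinite with its (1,1) entry equal to 1. A minimal sketch (the sample distribution below is an arbitrary illustrative choice), using the empirical moment matrix of a sample:

```python
import numpy as np

rng = np.random.default_rng(2)
xs = rng.standard_normal((20000, 3)) + np.array([1.0, -0.5, 2.0])  # any distribution works
ext = np.hstack([np.ones((20000, 1)), xs])   # rows are the vectors [1; x]^T
X = ext.T @ ext / 20000.0                    # empirical moment matrix E{[1;x][1;x]^T}
ok_corner = abs(X[0, 0] - 1.0) < 1e-12       # X_11 = 1 by construction
min_eig = np.linalg.eigvalsh(X).min()        # PSD: X is a (scaled) Gram matrix
```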

The advantage of the probabilistic interpretation of the SDP relaxation of (P) is that it provides us
with a certain way to pass from an optimal or suboptimal solution X∗ of the relaxation to candidate
solutions to the problem of interest. Specifically, given X∗, we select somehow the distribution
of a random solution x with the moment matrix X∗ and generate from this distribution a sample
x^1, x^2, ..., x^N of candidate solutions to (P). Of course, there is no reason for these solutions
to be feasible, but in good cases we can correct x^i to get feasible solutions x̄^i to (P), and can
understand how much this correction “costs” in terms of the objective, thus upper-bounding
the conservatism of the relaxation. The reader is strongly advised to look at the proofs of the
Goemans–Williamson and Nesterov theorems through the lens just outlined: the theorems in
question operate with a homogeneous problem (P), and the design variable X in the SDP relaxation
is interpreted as E{xx^T}. To establish tightness bounds, we treat an optimal solution to the
SDP relaxation as the covariance matrix of a Gaussian zero mean random vector, generate from
this distribution samples x^i, and correct them to make them feasible for the problem of interest
(i.e., to be ±1 vectors) by passing from the vectors x^i to the vectors comprised of the signs of the
entries of x^i. We shall see this approach in action in several tightness results to follow.

3.4.2 Semidefinite relaxation on ellitopes and its applications


The problem we are interested in now is

Opt∗(C) = max_{x∈X} x^T Cx,   (!)

where C is an n × n symmetric matrix and X is a given compact subset of R^n. We restrict these
sets to be ellitopes.

3.4.2.1 Ellitopes

A basic ellitope is a set X represented as

X = {x ∈ Rn : ∃t ∈ T : xT Sk x ≤ tk , 1 ≤ k ≤ K} (3.4.18)

where:

• Sk ⪰ 0, k ≤ K, and Σ_k Sk ≻ 0;

• T is a convex computationally tractable compact subset of R^K_+ which has a nonempty
interior and is monotone, monotonicity meaning that every nonnegative vector dominated
by some vector from T also belongs to T : whenever 0 ≤ t ≤ t′ ∈ T , we have t ∈ T .

An ellitope is a set X represented as the linear image of a basic ellitope:

X = {x ∈ R^m : ∃(t ∈ T , z ∈ R^n) : x = Pz, z^T Sk z ≤ tk, k ≤ K}   (3.4.19)

with T and Sk as above.


Clearly, every ellitope is a convex compact set symmetric w.r.t. the origin; a basic ellitope,
in addition, has a nonempty interior.

Examples: A. A bounded intersection X of K centered at the origin ellipsoids/elliptic cylinders
{x : x^T Sk x ≤ 1} [Sk ⪰ 0] is a basic ellitope:
X = {x : ∃t ∈ T := [0, 1]^K : x^T Sk x ≤ tk, k ≤ K}.
In particular, the unit box {x : ‖x‖∞ ≤ 1} is a basic ellitope.
B. The ‖·‖p-ball in R^n with p ∈ [2, ∞] is a basic ellitope:
{x ∈ R^n : ‖x‖p ≤ 1} = {x : ∃t ∈ T = {t ∈ R^n_+ : ‖t‖_{p/2} ≤ 1} : xk² ≤ tk, k ≤ n}.
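Example B can be checked numerically: a minimal sketch (the value of p and the random x are illustrative assumptions) verifying that the vector t with tk = xk² is nonnegative and lands in T = {t ≥ 0 : ‖t‖_{p/2} ≤ 1} whenever ‖x‖p ≤ 1:

```python
import numpy as np

rng = np.random.default_rng(3)
p = 4.0
x = rng.standard_normal(6)
x /= np.linalg.norm(x, ord=p) * 1.01        # put x strictly inside the unit p-ball
t = x ** 2                                  # t_k = x_k^2, so x^T S_k x <= t_k with S_k = e_k e_k^T
t_norm = np.linalg.norm(t, ord=p / 2)       # equals ||x||_p^2, hence <= 1
```

The key identity is ‖t‖_{p/2} = ‖x‖p², which is why p ≥ 2 (so that p/2 ≥ 1 is a norm) is needed.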

In fact, ellitopes admit a fully algorithmic “calculus”: this family is closed with respect to basic
operations preserving convexity and symmetry w.r.t. the origin, like taking finite intersections,
linear images, inverse images under linear embeddings, direct products, arithmetic summation
(for details, see [30, Section 4.3]); what is missing is taking convex hulls of finite unions.

3.4.2.2 Construction and main result


Assume that the domain X in (!) is an ellitope given by (3.4.19), and let us build a computationally
tractable relaxation of (!). We have

(λ ∈ R^K_+ & P^T CP ⪯ Σ_k λk Sk & x ∈ X )
⇒ (λ ∈ R^K_+ & P^T CP ⪯ Σ_k λk Sk & x = Pz with z^T Sk z ≤ tk, k ≤ K, and t ∈ T )
⇒ x^T Cx = z^T P^T CP z ≤ z^T [Σ_k λk Sk] z ≤ Σ_k λk tk,
implying the validity of the implication

λ ≥ 0, P^T CP ⪯ Σ_k λk Sk ⇒ Opt∗(C) ≤ φT(λ) := max_{t∈T } λ^T t.

Note that φT(λ) is the support function of T ; it is real-valued, convex, and efficiently computable
(since T is a computationally tractable convex compact set), is positively homogeneous of degree
1: φT(sλ) = sφT(λ) when s ≥ 0, and, finally, is positive whenever λ ≥ 0 is nonzero (recall that T
is contained in R^K_+ and has a nonempty interior).
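For the simplest T – the unit box of Example A – the support function φT is explicit: the maximum of λ^T t over [0,1]^K is attained at a vertex, giving φT(λ) = Σ_k max(λk, 0). A minimal numerical check (the specific λ is an illustrative assumption):

```python
import numpy as np
from itertools import product

def phi_box(lam):
    # support function of T = [0,1]^K: phi_T(lam) = sum_k max(lam_k, 0)
    return float(np.clip(lam, 0.0, None).sum())

lam = np.array([1.5, -2.0, 0.5])
# brute force over the vertices of the box agrees with the closed form
brute = max(float(lam @ np.array(v)) for v in product([0.0, 1.0], repeat=3))
```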
We have arrived at the first claim of the following
Theorem 3.4.5 [30, Proposition 4.6] Given an ellitope
X = {x ∈ R^m : ∃(t ∈ T , z ∈ R^n) : x = Pz, z^T Sk z ≤ tk, k ≤ K}
and a symmetric matrix C, consider the quadratic maximization problem
Opt∗(C) = max_{x∈X} x^T Cx
along with its relaxation

Opt(C) = min_λ { φT(λ) : λ ≥ 0, P^T CP ⪯ Σ_k λk Sk }.   (3.4.20)

The latter problem is computationally tractable and solvable, and its optimal value is an efficiently
computable upper bound on Opt∗(C):
Opt∗(C) ≤ Opt(C).   (3.4.21)
This upper bound is reasonably tight:

Opt(C) ≤ 3 ln(√3 K) Opt∗(C).   (3.4.22)

Let us make several comments:

• When T is SDr with an essentially strictly feasible SDR, φT(·) is SDr as well (by the semidefinite
version of Corollary 2.3.5). Consequently, in this case (3.4.20) can be reformulated as an
SDP program.

• When T = [0, 1]^K, Opt∗(C) is the maximum of the quadratic form z^T [P^T CP] z over the
intersection of the centered at the origin ellipsoids/elliptic cylinders z^T Sk z ≤ 1. It is
immediately seen that Opt(C) as defined in (3.4.20) is nothing but the upper bound on
Opt∗(C) yielded by Shor’s semidefinite relaxation. It is shown in [37] that in this case the
ratio Opt/Opt∗ indeed can be as large as O(ln(K)), even when all Sk = ak ak^T are of rank 1
and X is the polytope {x : |ak^T x| ≤ 1, k ≤ K}.

• Pay attention to the fact that the tightness factor in (3.4.22) is “nearly independent” of
the “size” K of the ellitope X and is completely independent of all other parameters of
the situation.
Proof of Theorem 3.4.5. 1°. We start with rewriting the optimization problem in (3.4.20)
as a conic program. To this end let

T = cl{[t; τ] : τ > 0, t/τ ∈ T }

be the closed conic hull of T . Since T is a convex compact set with a nonempty interior, T is
a regular cone, the nonzero vectors from T are exactly the vectors of the form [t; τ] with τ > 0 and
t/τ ∈ T , and
T = {t : [t; 1] ∈ T}.
We claim that the cone T∗ dual to T is

T∗ = {[g; s] : s ≥ φT(−g)}.

Indeed, the nonzero vectors from T are positive multiples of vectors [t; 1] with t ∈ T , so that
[g; s] ∈ T∗ if and only if g^T t + s ≥ 0 for all t ∈ T , that is, if and only if

0 ≤ s + min_{t∈T } [g^T t] = s − max_{t∈T } [−g]^T t = s − φT(−g).

In view of these observations, (3.4.20) reads

Opt(C) = min_{λ,τ} { τ : λ ≥ 0, [−λ; τ] ∈ T∗, P^T CP ⪯ Σ_k λk Sk }.   (3.4.23)

This conic problem clearly is strictly feasible (recall that Σ_k Sk ≻ 0), so that the dual problem
is solvable with the same optimal value. Denoting the Lagrange multipliers for the constraints
in the latter problem by µ ≥ 0, [t; s] ∈ (T∗)∗ = T, X ⪰ 0, the dual problem reads

max_{µ,[t;s],X} { Tr(P^T CP X) : µ^T λ + τ s − λ^T t + Tr(X Σ_k λk Sk) = τ ∀(λ, τ), µ ≥ 0, [t; s] ∈ T, X ⪰ 0 },

or, which is the same,

max_{µ,[t;s],X} { Tr(BX) : s = 1, Tr(X Sk) + µk = tk, k ≤ K, µ ≥ 0, [t; s] ∈ T, X ⪰ 0 };   [B = P^T CP]

after eliminating the variables s and µ, and recalling that [t; 1] ∈ T is exactly the same as t ∈ T , and
that the optimal value in the dual problem is Opt(C), we arrive at

Opt(C) = max_{X,t} { Tr(BX) : X ⪰ 0, t ∈ T , Tr(X Sk) ≤ tk, k ≤ K }.   (3.4.24)

As we remember, the resulting problem is solvable; let X∗, t∗ be an optimal solution to the
problem.
2°. Let
B̃ := X∗^{1/2} B X∗^{1/2} = U Diag{µ} U^T   [U is orthogonal]
and let S̃k = U^T X∗^{1/2} Sk X∗^{1/2} U, so that

0 ⪯ S̃k,   Tr(S̃k) = Tr(X∗^{1/2} Sk X∗^{1/2}) = Tr(Sk X∗) ≤ t∗k.   (3.4.25)

Let ζ be a Rademacher random vector (independent entries taking values ±1 with probability
1/2), and let
ξ = X∗^{1/2} U ζ.
We have
E{ξ ξ^T} = E{X∗^{1/2} U ζ ζ^T U^T X∗^{1/2}} = X∗   (a)
ξ^T B ξ = ζ^T U^T X∗^{1/2} B X∗^{1/2} U ζ = ζ^T U^T B̃ U ζ = ζ^T Diag{µ} ζ = Σ_i µi = Tr(B̃) = Tr(B X∗) = Opt(C)   (b)      (3.4.26)
ξ^T Sk ξ = ζ^T U^T X∗^{1/2} Sk X∗^{1/2} U ζ = ζ^T S̃k ζ   (c)
Observe that

A: when k is such that t∗k = 0, we have S̃k = 0 by (3.4.25), whence ξ^T Sk ξ ≡ 0 by (3.4.26.c);

B: when k is such that t∗k > 0, we have Tr(S̃k/t∗k) ≤ 1, whence

E{ exp{ ξ^T Sk ξ / (3t∗k) } } = E{ exp{ ζ^T S̃k ζ / (3t∗k) } } ≤ √3,   (3.4.27)

where the equality is due to (3.4.26.c), and the concluding inequality is due to the following
fact to be proved later:

Lemma 3.4.1 Let Q be a positive semidefinite N × N matrix with trace ≤ 1 and
let ζ be an N-dimensional Rademacher random vector. Then
E{ exp{ ζ^T Qζ / 3 } } ≤ √3,

as applied with Q = S̃k/t∗k (the latter matrix satisfies the Lemma’s premise in view of (3.4.25)).

3°. By 2°.A–B, we have

Prob{ ξ^T Sk ξ > γ t∗k } ≤ √3 exp{−γ/3}   ∀γ ≥ 0.   (3.4.28)

Indeed, for k with t∗k > 0 this relation is readily given by (3.4.27), while for k with t∗k = 0, as we
have seen, it holds ξ^T Sk ξ ≡ 0. From (3.4.28) it follows that

Prob{ ∃k : ξ^T Sk ξ > 3 ln(√3 K) t∗k } < 1,

implying that there exists a realization ξ̄ of ξ such that ξ̄^T Sk ξ̄ ≤ 3 ln(√3 K) t∗k, k ≤ K, while by
(3.4.26.b) we have ξ̄^T B ξ̄ = Opt(C). Setting z = ξ̄ / √(3 ln(√3 K)), we get

z^T Sk z ≤ t∗k, k ≤ K   &   z^T B z = Opt(C)/[3 ln(√3 K)],

that is, x := Pz ∈ X and x^T Cx ≥ Opt(C)/[3 ln(√3 K)], implying the second inequality in
(3.4.22).
4°. It remains to prove Lemma 3.4.1. Let Q obey the premise of the Lemma, and let Q = Σ_i σi fi fi^T
be the eigenvalue decomposition of Q, so that fi^T fi = 1, σi ≥ 0, and Σ_i σi ≤ 1. The function

f(σ1, ..., σN) = E{ exp{ (1/3) ζ^T [Σ_i σi fi fi^T] ζ } }

is convex on the simplex {σ ≥ 0, Σ_i σi ≤ 1} and thus attains its maximum over the simplex at a
vertex, implying that for some f = fi, f^T f = 1, it holds

E{ exp{ (1/3) ζ^T Qζ } } ≤ E{ exp{ (1/3)(f^T ζ)² } }.

Let ξ ∼ N(0, 1) be independent of ζ. We have

Eζ{ exp{ (1/3)(f^T ζ)² } } = Eζ{ Eξ{ exp{ [√(2/3) f^T ζ] ξ } } }
= Eξ{ Eζ{ exp{ [√(2/3) f^T ζ] ξ } } } = Eξ{ Π_{j=1}^N Eζ{ exp{ √(2/3) ξ fj ζj } } }
= Eξ{ Π_{j=1}^N cosh(√(2/3) ξ fj) } ≤ Eξ{ Π_{j=1}^N exp{ ξ² fj² / 3 } }
= Eξ{ exp{ ξ² / 3 } } = √3.   □
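Lemma 3.4.1 can also be probed by simulation; the sketch below (with an arbitrary trace-normalized PSD matrix Q – an assumption for the example) estimates E exp{ζ^T Qζ/3} over Rademacher vectors ζ and compares it with the bound √3:

```python
import numpy as np

rng = np.random.default_rng(4)
G = rng.standard_normal((5, 5))
Q = G @ G.T
Q /= np.trace(Q)                     # normalize so that Tr(Q) = 1 (Lemma's premise)
zeta = rng.choice([-1.0, 1.0], size=(20000, 5))      # Rademacher sample
# quadratic forms zeta_i^T Q zeta_i for all sample rows at once
quad = np.einsum('ij,jk,ik->i', zeta, Q, zeta)
estimate = np.exp(quad / 3.0).mean()                 # Monte-Carlo E exp(zeta^T Q zeta / 3)
```

Since the diagonal of Q contributes Tr(Q) = 1 to every ζ^T Qζ, Jensen’s inequality already forces the estimate to be at least e^{1/3} ≈ 1.40; the Lemma caps it at √3 ≈ 1.73.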


Note: As shown in [30, Section 4.3], Semidefinite Relaxation and Theorem 3.4.5 can be extended
from ellitopes to an essentially wider family of spectratopes; the same holds true for the applications
and modifications of this Theorem we are about to consider below. A basic spectratope is a
bounded set represented as

X = {x ∈ R^n : ∃t ∈ T : Sk²[x] ⪯ tk I_{dk}, k ≤ K}

where Sk[x] = Σ_{j=1}^n xj S^{kj} with matrix coefficients S^{kj} ∈ S^{dk}, with the same restrictions on
T as for ellitopes. A spectratope is a set representable as the linear image of a basic spectratope.
Every spectratope is an ellitope, but not vice versa; for example, the “matrix box” {x ∈ R^{m×n} :
‖x‖_{2,2} ≤ 1}, ‖x‖_{2,2} being the spectral norm (maximum singular value) of the matrix x, is a “genuine”
spectratope. Spectratopes admit a fully algorithmic calculus completely similar to that of
ellitopes.

3.4.2.3 Application: Near-optimal linear estimation

Consider the following basic statistical problem: Given a noisy observation
ω = Ax + ξ
[A : m × n; ξ : standard (zero mean, unit covariance) Gaussian noise]
of an unknown signal x known to belong to a given “signal set” X ⊂ R^n, recover the linear image
Bx ∈ R^ν of x.
We quantify the performance of a candidate estimate x̂(·) by its risk

Risk[x̂|X ] = sup_{x∈X } Eξ{ ‖x̂(Ax + ξ) − Bx‖ },

where ‖·‖ is a given norm on R^ν.


The simplest estimates are the linear ones: x̂(ω) = x̂_H(ω) := H^T ω.
Noting that for a linear estimate x̂_H(ω) = H^T ω we have

‖Bx − x̂_H(Ax + ξ)‖ = ‖[B − H^T A]x − H^T ξ‖ ≤ ‖[B − H^T A]x‖ + ‖H^T ξ‖,

the risk of a linear estimate can be upper-bounded as follows:

Risk[x̂_H|X ] ≤ \overline{Risk}[x̂_H|X ] := max_{x∈X } ‖[B − H^T A]x‖ + E{‖H^T ξ‖},

the first term on the right being the “bias” term, and the second – the stochastic term.

It is easily seen that when X is symmetric w.r.t. the origin, which we assume from now on, this
bound is at most twice the actual risk. The minimum (within factor 2) risk linear estimate is
given by an optimal solution to the convex optimization problem

Opt∗ = min_H { Φ(H) + Ψ(H) },   Φ(H) := max_{x∈X } ‖[B − H^T A]x‖,   Ψ(H) := E{‖H^T ξ‖},

where the expectation is taken w.r.t. the standard (zero mean, unit covariance) Gaussian distribution.
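The bias/stochastic decomposition above is a plain triangle inequality, easy to verify on a toy instance (all of A, B, H, x and the noise realization below are illustrative assumptions, not from the text):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, nu = 3, 4, 2
A = rng.standard_normal((m, n))           # observation matrix
B = rng.standard_normal((nu, n))          # linear image to recover
H = rng.standard_normal((m, nu))          # an arbitrary candidate for the estimate H^T omega
x = rng.uniform(-1.0, 1.0, size=n)        # a signal from the unit box
xi = rng.standard_normal(m)               # one noise realization
# actual estimation error vs the bias + stochastic split for this realization
err = np.linalg.norm(B @ x - H.T @ (A @ x + xi))
bound = np.linalg.norm((B - H.T @ A) @ x) + np.linalg.norm(H.T @ xi)
```

Taking the max of the first term over x ∈ X and the expectation of the second over ξ gives exactly the quantities Φ(H) and Ψ(H) minimized in the problem above.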
The difficulty is that Φ and Ψ, while convex, are, in general, difficult to compute. For
example, as far as computing Φ is concerned, the only generic “easy cases” here are those of an
ellipsoid X , or of X given as the convex hull of a finite set.
Assume that
• the signal set X is a basic ellitope^{16}:

X = {x : ∃t ∈ T : x^T Sk x ≤ tk, k ≤ K};

• the unit ball B∗ of the norm ‖·‖∗ conjugate to ‖·‖ is an ellitope:

B∗ = {u : ‖u‖∗ ≤ 1} = {u : ∃z ∈ Z : u = Mz},   Z = {z : ∃r ∈ R : z^T Rℓ z ≤ rℓ, ℓ ≤ L},

as is the case, e.g., when ‖·‖ = ‖·‖p with 1 ≤ p ≤ 2,
where T , Sk, R, Rℓ are as required by the definition of an ellitope.
We are about to demonstrate that under these assumptions Φ and Ψ admit efficiently computable,
convex in H, upper bounds.
16 A formally more general case of an ellitope X reduces immediately (think how) to the case when the ellitope is
basic.

Upper-bounding Φ. Observe that

Φ(H) = max_{x∈X } ‖[B − H^T A]x‖ = max_{u∈B∗, x∈X } u^T [B − H^T A]x = max_{z∈Z, x∈X } z^T [M^T [B − H^T A]]x.

We see that Φ(H) is the maximum of a homogeneous quadratic form of [z; x] over [z; x] ∈ Z × X ,
and the set Z × X is an ellitope along with Z and X . Applying semidefinite relaxation, we arrive
at an efficiently computable upper bound

\overline{Φ}(H) := min_{λ,µ} { φT(λ) + φR(µ) : λ ≥ 0, µ ≥ 0, [Σ_ℓ µℓ Rℓ, (1/2) M^T [B − H^T A]; (1/2) [B^T − A^T H]M, Σ_k λk Sk] ⪰ 0 }

on Φ(H); here φT and φR are the support functions of T and R. Note that by Theorem 3.4.5
this bound is tight within the factor 3 ln(√3 (K + L)).

Upper-bounding Ψ. Observe that whenever a nonnegative vector θ, a symmetric matrix Θ, and
H satisfy the matrix inequality

[Σ_ℓ θℓ Rℓ, (1/2) M^T H^T; (1/2) HM, Θ] ⪰ 0,

we have Ψ(H) ≤ φR(θ) + Tr(Θ). Indeed, when the above matrix inequality takes place, we have
for every [z; ξ]
[Mz]^T H^T ξ ≤ z^T [Σ_ℓ θℓ Rℓ] z + ξ^T Θ ξ.

When z ∈ Z, the first term in the right hand side is ≤ φR(θ) (see the proof of Theorem 3.4.5),
and we arrive at
‖H^T ξ‖ = max_{z∈Z} [Mz]^T H^T ξ ≤ φR(θ) + ξ^T Θ ξ.

Taking expectation over ξ, we arrive at Ψ(H) ≤ φR(θ) + Tr(Θ), as claimed. The bottom line is
that

Ψ(H) ≤ \overline{Ψ}(H) := min_{θ,Θ} { φR(θ) + Tr(Θ) : θ ≥ 0, [Σ_ℓ θℓ Rℓ, (1/2) M^T H^T; (1/2) HM, Θ] ⪰ 0 }.

It turns out (see [30, Lemma 4.51]) that the upper bound \overline{Ψ}(H) on Ψ(H) is tight within the
factor O(1)√(ln(L + 1)).

The bottom line is as follows. Consider the convex optimization problem

Opt = min_{H,λ,µ,θ,Θ} { φT(λ) + φR(µ) + φR(θ) + Tr(Θ) :
    λ ≥ 0, µ ≥ 0, θ ≥ 0,
    [Σ_ℓ µℓ Rℓ, (1/2) M^T [B − H^T A]; (1/2) [B^T − A^T H]M, Σ_k λk Sk] ⪰ 0,
    [Σ_ℓ θℓ Rℓ, (1/2) M^T H^T; (1/2) HM, Θ] ⪰ 0 }

(this is nothing but the problem of minimizing \overline{Φ}(H) + \overline{Ψ}(H) over H). This problem is efficiently
solvable, and the linear estimate x̂_{H∗} yielded by the H-component H∗ of an optimal solution satisfies
the relation
Risk[x̂_{H∗}|X ] ≤ \overline{Risk}[x̂_{H∗}|X ] ≤ Opt.

From what was said on the tightness of the upper bounds \overline{Φ}, \overline{Ψ} it follows that the resulting linear
estimate is optimal, within the “moderate” factor O(1) ln(K + L), in terms of its risk among all
linear estimates. Surprisingly, it turns out (see [30, Proposition 4.50]) that the estimate x̂_{H∗} is
optimal, within the factor O(1)√(ln(K + 1) ln(L + 1)), in terms of its risk among all estimates,
linear and nonlinear alike.

3.4.2.4 Application: Tight bounding of operator norms


So far, we have utilized Semidefinite Relaxation on ellitopes in order to build “presumably good”
linear estimates of signals observed via the standard Signal Processing observation scheme. Now
we intend to use it for another purpose – upper-bounding operator norms.
Consider the following problem (cf. Theorem 3.4.4):

Given an m × n matrix A and norms π(·) on R^n and θ(·) on R^m, compute/tightly
upper-bound the operator norm of A induced by π, θ:

Φ_{π→θ}(A) = max_x { θ(Ax) : π(x) ≤ 1 }.

The problem in question is to maximize a specific convex function over a specific convex set,
and as such it can be computationally intractable. Whether this indeed is the case depends
on the norms π and θ; here is the list of known “easy cases”:

• π(·) and θ(·) are the standard Euclidean norms. In this case Φ_{π→θ}(A) – the spectral norm
of A – is the maximal singular value of A, and this quantity is efficiently computable.

• π(·) = ‖·‖1. In this case Φ_{π→θ}(A) = max_{j≤n} θ(Colj[A]) (why?).

• θ(·) = ‖·‖∞. In this case Φ_{π→θ}(A) = max_{i≤m} π∗(Rowi[A]), where Rowi^T[A] is the i-th row of A,
and
π∗(y) = max_x { y^T x : π(x) ≤ 1 }
is the norm conjugate to π.
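The π = ‖·‖1 “easy case” answers its “(why?)”: on the unit ℓ1-ball the convex function x ↦ θ(Ax) attains its maximum at an extreme point ±ej, i.e., at a (signed) column of A. A quick numerical illustration with θ = ‖·‖2 (the random matrix and the sampling scheme are assumptions for the example):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((4, 6))
# closed form for ||A||_{1->2}: the largest Euclidean column norm
col_max = max(np.linalg.norm(A[:, j]) for j in range(A.shape[1]))
# brute force over random unit-l1 vectors never exceeds the column bound
best = 0.0
for _ in range(2000):
    x = rng.standard_normal(6)
    x /= np.abs(x).sum()                  # now ||x||_1 = 1
    best = max(best, np.linalg.norm(A @ x))
```

The inequality ‖Ax‖ ≤ Σ_j |xj| ‖Colj[A]‖ ≤ max_j ‖Colj[A]‖ for ‖x‖1 = 1 is exactly what the loop confirms, and x = ej attains it.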

Note that the last two statements are straightforward reformulations of each other due to the
following immediate observation:

Let π∗, θ∗ be the norms conjugate to π, θ:

π∗(y) = max_x { x^T y : π(x) ≤ 1 }   [⇔ π(x) = max_y { x^T y : π∗(y) ≤ 1 } : (π∗)∗ = π]

θ∗(u) = max_v { v^T u : θ(v) ≤ 1 }   [⇔ θ(v) = max_u { u^T v : θ∗(u) ≤ 1 } : (θ∗)∗ = θ]

Then Φ_{π→θ}(A) = Φ_{θ∗→π∗}(A^T).


Indeed, we have

Φ_{π→θ}(A) = max_x { θ(Ax) : π(x) ≤ 1 } = max_x { max_v [ v^T Ax : θ∗(v) ≤ 1 ] : π(x) ≤ 1 }
= max_{v,x} { v^T Ax : θ∗(v) ≤ 1, π(x) ≤ 1 } = max_{x,v} { x^T A^T v : θ∗(v) ≤ 1, (π∗)∗(x) ≡ π(x) ≤ 1 }
= Φ_{θ∗→π∗}(A^T).

On the other hand, it is known that when π(·) = ‖·‖p, θ(·) = ‖·‖r with p > 2 and r < 2, then
computing Φ_{π→θ}(A) is NP-hard in general (i.e., when m, n, and A are considered as problem
data).
Observe that, as we have just seen, the operator norm of a matrix is the maximum of a quadratic
(in fact, even bilinear) form:

Φ_{π→θ}(A) = max_{v,x} { v^T Ax : θ∗(v) ≤ 1, π(x) ≤ 1 } = max_{[v;x]∈Θ∗×Π} [v; x]^T [0, (1/2)A; (1/2)A^T, 0] [v; x],

Π = {x : π(x) ≤ 1},   Θ∗ = {v : θ∗(v) ≤ 1}.

It follows that when, as we assume from now on, Π and Θ∗ are ellitopes:

Π = {x : ∃r ∈ R, z : x = Pz, z^T Rk z ≤ rk, k ≤ K},
Θ∗ = {v : ∃s ∈ S, w : v = Qw, w^T Sℓ w ≤ sℓ, ℓ ≤ L}   (3.4.29)

with R, S, Rk, Sℓ as required in the definition of ellitopes, Φ_{π→θ}(A) is the maximum of a
quadratic form on the ellitope Θ∗ × Π and as such can be tightly upper-bounded by Semidefinite
Relaxation. As is immediately seen, in our present situation and notation the construction from
Section 3.4.2.2 boils down to specifying the efficiently computable convex function

\overline{Φ}_{π→θ}(A) = min_{λ≥0, µ≥0} { φR(λ) + φS(µ) : [Σ_ℓ µℓ Sℓ, (1/2) Q^T AP; (1/2) P^T A^T Q, Σ_k λk Rk] ⪰ 0 }   (a)
   [φR, φS are the support functions of R and S]
= min_{λ,µ,p,q} { p + q : [−λ; p] ∈ R∗, [−µ; q] ∈ S∗, λ ≥ 0, µ ≥ 0, [Σ_ℓ µℓ Sℓ, (1/2) Q^T AP; (1/2) P^T A^T Q, Σ_k λk Rk] ⪰ 0 }   (b)
   (3.4.30)

where R∗ = {[g; p] : p ≥ φR(−g)} is the cone dual to the cone R = cl{[r; t] : t > 0, r/t ∈ R}, and
S∗ = {[h; q] : q ≥ φS(−h)} is the cone dual to the cone S = cl{[s; t] : t > 0, s/t ∈ S}. Theorem
3.4.5 states that
Φ_{π→θ}(A) ≤ \overline{Φ}_{π→θ}(A) ≤ 3 ln(√3 (K + L)) Φ_{π→θ}(A),
K + L being the size of the ellitope Θ∗ × Π. Our current goal is to slightly refine the latter
result:

Theorem 3.4.6 In the case of (3.4.29), one has

    Φπ→θ(A) ≤ Φ̄π→θ(A) ≤ 3√(ln(4K) ln(4L)) Φπ→θ(A).   (3.4.31)

Remark 3.4.1 When the quadratic forms z^T R_k z, w^T S_ℓ w are just squares of entries in z and w, Nesterov's results presented in Theorem 3.4.3 allow one to improve the tightness factor in (3.4.31) to an appropriate absolute constant, see Exercise 3.35.

Proof of Theorem 3.4.6 follows that of Theorem 3.4.5, utilizing at some point the bilinearity of the quadratic form we want to upper-bound on Θ∗ × Π. To save notation, let M and N be the dimensions of v, respectively z, in (3.4.29), and assume that M ≤ N, which is w.l.o.g.17

17 As is immediately seen, Φ̄ is intelligent enough to inherit from Φ the identity Φπ→θ(A) = Φθ∗→π∗(A^T), implying that we can always ensure M ≤ N by passing, if necessary, from π, θ, A to θ∗, π∗, A^T.
3.4. SEMIDEFINITE RELAXATIONS OF INTRACTABLE PROBLEMS 219

1⁰. Applying Conic Duality to the (clearly strictly feasible and bounded) conic representation of Φ̄π→θ(A) given by (3.4.30.b), we get

    Φ̄π→θ(A)
    = max_{r,s,U,V,W} { Tr(W^T Q^T AP) : r ∈ R, s ∈ S, Tr(S_ℓ U) ≤ s_ℓ ∀ℓ, Tr(R_k V) ≤ r_k ∀k,
                        [U, W; W^T, V] ⪰ 0 }
    = max_{U,V,Y,r,s} { Tr([U^{1/2} Y V^{1/2}]^T Q^T AP) : r ∈ R, s ∈ S, U ⪰ 0, V ⪰ 0, Y^T Y ⪯ I,
                        Tr(S_ℓ U) ≤ s_ℓ ∀ℓ, Tr(R_k V) ≤ r_k ∀k }
    = max_{U,V,r,s} { Σ_{i=1}^M σ_i(U^{1/2} Q^T AP V^{1/2}) : U ⪰ 0, V ⪰ 0, Tr(S_ℓ U) ≤ s_ℓ ∀ℓ,
                      Tr(R_k V) ≤ r_k ∀k, r ∈ R, s ∈ S }
    [σ_i(·) are the singular values of an M × N matrix; recall that M ≤ N]

At the last two steps of the above derivation, we have used the well known and easy to check (check them!) facts that

• [U, W; W^T, V] ⪰ 0 if and only if U ⪰ 0, V ⪰ 0 and W = U^{1/2} Y V^{1/2} with Y^T Y ⪯ I, and

• the maximum of the Frobenius inner products of a given matrix with matrices of spectral norm not exceeding 1 is the sum of the singular values of the matrix.
The concluding optimization problem in the above chain clearly is solvable; let U, V, r, s be an optimal solution, let σ_ι be the singular values, and let Σ_{ι=1}^M σ_ι e_ι f_ι^T be the singular value decomposition of U^{1/2} Q^T AP V^{1/2}, so that

    Φ̄π→θ(A) = Σ_{ι=1}^M σ_ι                                                  (a)
    U^{1/2} Q^T AP V^{1/2} = Σ_{ι=1}^M σ_ι e_ι f_ι^T                          (b)
    e_i^T e_j = {1, i = j ≤ M; 0, i ≠ j},  f_i^T f_j = {1, i = j ≤ N; 0, i ≠ j}   (c)   (3.4.32)
    Tr(U^{1/2} S_ℓ U^{1/2}) ≤ s_ℓ, ℓ ≤ L  &  s ∈ S                            (d.1)
    Tr(V^{1/2} R_k V^{1/2}) ≤ r_k, k ≤ K  &  r ∈ R                            (d.2)

Let ε_1, ..., ε_N be independent random variables taking values ±1 with probabilities 1/2, and let

    ξ = Σ_{i=1}^M ε_i e_i,   η = Σ_{j=1}^N ε_j f_j.

Then in view of (3.4.32) it holds, identically in ε_i = ±1, 1 ≤ i ≤ N:

    ξ^T U^{1/2} Q^T AP V^{1/2} η = Σ_{i,ι≤M, j≤N} ε_i ε_j σ_ι (e_i^T e_ι)(f_ι^T f_j) = Σ_{ι=1}^M σ_ι = Φ̄π→θ(A)   (3.4.33)

On the other hand, setting E = [e_1, ..., e_M], we get an orthogonal M × M matrix such that ξ = Eε, where ε = [ε_1; ...; ε_M] is a Rademacher vector, and

    ξ^T U^{1/2} S_ℓ U^{1/2} ξ = ε^T [E^T U^{1/2} S_ℓ U^{1/2} E] ε = ε^T S̄_ℓ ε,   S̄_ℓ := E^T U^{1/2} S_ℓ U^{1/2} E.

By construction, S̄_ℓ ⪰ 0 has the same trace as U^{1/2} S_ℓ U^{1/2}, that is, trace ≤ s_ℓ, see (3.4.32). For every ℓ such that s_ℓ > 0 we have Tr(s_ℓ^{-1} S̄_ℓ) ≤ 1, whence by Lemma 3.4.1

    E{exp{ξ^T [s_ℓ^{-1} U^{1/2} S_ℓ U^{1/2}] ξ / 3}} = E{exp{ε^T [s_ℓ^{-1} S̄_ℓ] ε / 3}} ≤ √3.

As a result, for every ℓ such that s_ℓ > 0 we have

    Prob{ξ^T U^{1/2} S_ℓ U^{1/2} ξ > 3 ln(4L) s_ℓ} < 1/(2L).

The latter relation holds true for those ℓ with s_ℓ = 0 as well, since for these ℓ one has U^{1/2} S_ℓ U^{1/2} = 0, due to the trace of the latter positive semidefinite matrix being ≤ s_ℓ. Similar reasoning with ε̄ = [ε_1; ...; ε_N] in the role of ε and the R_k's, r_k's in the roles of the S_ℓ's, s_ℓ's demonstrates that for every k we have

    Prob{η^T V^{1/2} R_k V^{1/2} η > 3 ln(4K) r_k} < 1/(2K).

Consequently, invoking (3.4.33), there exists a realization (ξ̄, η̄) of (ξ, η) such that

    ξ̄^T U^{1/2} Q^T AP V^{1/2} η̄ = Φ̄π→θ(A),
    ξ̄^T U^{1/2} S_ℓ U^{1/2} ξ̄ ≤ 3 ln(4L) s_ℓ ∀ℓ,   η̄^T V^{1/2} R_k V^{1/2} η̄ ≤ 3 ln(4K) r_k ∀k.

Setting v̄ = QU^{1/2} ξ̄, x̄ = PV^{1/2} η̄ and invoking (3.4.29), we get π(x̄) ≤ √(3 ln(4K)) and θ∗(v̄) ≤ √(3 ln(4L)), resulting in

    Φ̄π→θ(A) = ξ̄^T U^{1/2} Q^T AP V^{1/2} η̄ = v̄^T A x̄ ≤ π(x̄) θ∗(v̄) Φπ→θ(A),

that is,

    Φ̄π→θ(A) ≤ 3 √(ln(4K) ln(4L)) Φπ→θ(A).   □

Note that when π(·) = ‖·‖_p, θ(·) = ‖·‖_r with p ≥ 2, r ≤ 2, Nesterov's Theorem 3.4.4 states that the Semidefinite Relaxation bound ω_{p,r}(A) (which under the circumstances is nothing but Φ̄π→θ(A)) is tight within an absolute constant factor, which is stronger than what Theorem 3.4.6 states in the case in question. However, it can be proved that in the full scope of the latter theorem, logarithmic growth of the tightness factor with K, L is unavoidable.

3.4.3 Matrix Cube Theorem and interval stability analysis/synthesis


Consider the problem of Lyapunov Stability Analysis in the case of interval uncertainty:

U = Uρ = {A ∈ Mn,n | |Aij − A∗ij | ≤ ρDij , i, j = 1, ..., n}, (3.4.34)

where A∗ is the “nominal” matrix, D 6= 0 is a matrix with nonnegative entries specifying the
“scale” for perturbations of different entries, and ρ ≥ 0 is the “level of perturbations”. We deal
with a polytopic uncertainty, and as we remember from Section 3.3.4, to certify the stability
is the same as to find a feasible solution of the associated semidefinite program (3.3.8) with a
negative value of the objective. The difficulty, however, is that the number N of LMI constraints
in this problem is the number of vertices of the polytope (3.4.34), i.e., N = 2m , where m is the
number of uncertain entries in our interval matrix (≡the number of positive entries in D). For
5 × 5 interval matrices with “full uncertainty” m = 25, i.e., N = 225 = 33, 554, 432, which is “a
bit” too many; for “fully uncertain” 10 × 10 matrices, N = 2100 > 1.2 × 1030 ... Thus, the “brute
force” approach fails already for “pretty small” matrices affected by interval uncertainty.
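The vertex counts quoted above are elementary to reproduce:

```python
# N = 2**m vertices of the perturbation cube, m = number of uncertain entries;
# "full uncertainty" for an n x n interval matrix means m = n*n
for n in (5, 10):
    m = n * n
    print(n, m, 2 ** m)

assert 2 ** 25 == 33_554_432       # the 5 x 5 figure from the text
assert 2 ** 100 > 1.2e30           # the 10 x 10 figure from the text
```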
In fact, the difficulty we have encountered lies in the NP-hardness of the following problem:

Given a candidate Lyapunov stability certificate X ≻ 0 and ρ > 0, check whether X indeed certifies stability of all instances of U_ρ, i.e., whether X solves the semi-infinite system of LMI's

    A^T X + XA ⪯ −I  ∀A ∈ U_ρ.   (3.4.35)

(In fact, we are interested in the system "A^T X + XA ≺ 0 ∀A ∈ U_ρ", but this is a minor difference – the "system of interest" is homogeneous in X, and therefore every feasible solution of it can be converted to a solution of (3.4.35) just by scaling X ↦ tX.)

The above problem, in turn, is a particular case of the following problem:

"Matrix Cube": Given matrices A_0, A_1, ..., A_m ∈ S^n with A_0 ⪰ 0, find the largest ρ = R[A_1, ..., A_m : A_0] such that the set

    A_ρ = { A = A_0 + Σ_{i=1}^m z_i A_i : ‖z‖∞ ≤ ρ }   (3.4.36)

– the image of the m-dimensional cube {z ∈ R^m : ‖z‖∞ ≤ ρ} under the affine mapping z ↦ A_0 + Σ_{i=1}^m z_i A_i – is contained in the semidefinite cone S^n_+.

This is the problem we will focus on; what follows stems from [13].

3.4.3.1 The Matrix Cube Theorem


The problem "Matrix Cube" (MC for short) is NP-hard; this is true also for the "feasibility version" MC_ρ of MC, where, given ρ ≥ 0, we are interested to verify the inclusion A_ρ ⊂ S^n_+. However, we can point out a simple sufficient condition for the validity of the inclusion A_ρ ⊂ S^n_+:

Proposition 3.4.4 Assume that the system of LMI's

    (a) X^i ⪰ ρA_i,  X^i ⪰ −ρA_i,  i = 1, ..., m;
    (b) Σ_{i=1}^m X^i ⪯ A_0                                    (S_ρ)

in matrix variables X^1, ..., X^m ∈ S^n is solvable. Then A_ρ ⊂ S^n_+.

Proof. Let X^1, ..., X^m be a solution of (S_ρ). From (a) it follows that whenever ‖z‖∞ ≤ ρ, we have X^i ⪰ z_i A_i for all i, whence by (b)

    A_0 + Σ_{i=1}^m z_i A_i ⪰ A_0 − Σ_i X^i ⪰ 0.   □
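Proposition 3.4.4 is easy to sandbox numerically. The sketch below builds a toy instance with diagonal A_i (a hypothetical example, not from the text), takes the natural solution X^i = ρ|A_i| of (a), checks (b), and then confirms that A_0 + Σ z_i A_i ⪰ 0 on random points of the cube:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, rho = 4, 3, 0.5

# toy instance: identity-like nominal matrix, diagonal perturbation matrices
A0 = 2.0 * np.eye(n)
As = [np.diag(rng.uniform(-1, 1, size=n)) for _ in range(m)]

# for diagonal A_i, the choice X^i = rho*|A_i| satisfies (a): X^i >= +-rho*A_i
Xs = [rho * np.abs(Ai) for Ai in As]

# (b): sum_i X^i <= A0 (for diagonal matrices, entrywise = PSD comparison)
assert np.all(sum(Xs).diagonal() <= np.diag(A0) + 1e-12)

# hence the whole matrix cube A_rho lies in the PSD cone; spot-check it
for _ in range(200):
    z = rng.uniform(-rho, rho, size=m)
    A = A0 + sum(zi * Ai for zi, Ai in zip(z, As))
    assert np.linalg.eigvalsh(A).min() >= -1e-10
print("all sampled cube points are PSD")
```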

Our main result is that the sufficient condition for the inclusion Aρ ⊂ Sn+ stated by Proposition
3.4.4 is not too conservative:

Theorem 3.4.7 If the system of LMI's (S_ρ) is not solvable, then

    A_{ϑ(μ)ρ} ⊄ S^n_+;   (3.4.37)

here

    μ = max_{1≤i≤m} Rank(A_i)

(note "i ≥ 1" in the max!), and ϑ(·) is a universal function, specified in the proof to follow, such that

    ϑ(1) = 1, ϑ(2) = π/2, ϑ(3) = 1.7348..., ϑ(4) = 2  &  ϑ(k) ≤ π√k/2, k ≥ 1.   (3.4.38)
Proof. Below ζ ∼ N(0, I_n) means that ζ is a random Gaussian n-dimensional vector with zero mean and unit covariance matrix, and p_n(·) stands for the density of the corresponding probability distribution:

    p_n(u) = (2π)^{−n/2} exp{−u^T u/2},  u ∈ R^n.

Let us set

    ϑ(k) = 1 / min_{α ∈ R^k, ‖α‖₁ = 1} ∫ |α_1 u_1² + ... + α_k u_k²| p_k(u) du.   (3.4.39)
It suffices to verify that


(i): With the just defined ϑ(·), insolvability of (Sρ ) does imply (3.4.37);
(ii): ϑ(·) satisfies (3.4.38).
Let us prove (i).
1⁰. Assume that (S_ρ) has no solutions. It means that the optimal value of the semidefinite problem

    min_{t, {X^i}} { t : X^i ⪰ ρA_i, X^i ⪰ −ρA_i, i = 1, ..., m;  Σ_{i=1}^m X^i ⪯ A_0 + tI }   (3.4.40)

is positive. Since the problem is strictly feasible, its optimal value is positive if and only if the optimal value of the dual problem

    max_{W, {U^i,V^i}} { ρ Σ_{i=1}^m Tr([U^i − V^i] A_i) − Tr(W A_0) : U^i + V^i = W, i = 1, ..., m,
                         Tr(W) = 1,  U^i, V^i, W ⪰ 0 }

is positive. Thus, there exist matrices U^i, V^i, W such that

    (a) U^i, V^i, W ⪰ 0,
    (b) U^i + V^i = W, i = 1, ..., m,                                   (3.4.41)
    (c) ρ Σ_{i=1}^m Tr([U^i − V^i] A_i) > Tr(W A_0).

2⁰. Now let us use the following simple lemma:

Lemma 3.4.2 Let W, A ∈ S^n, W ⪰ 0. Then

    max_{U,V⪰0, U+V=W} Tr([U − V] A) = max_{X=X^T: ‖λ(X)‖∞ ≤ 1} Tr(X W^{1/2} A W^{1/2}) = ‖λ(W^{1/2} A W^{1/2})‖₁.   (3.4.42)

Proof of Lemma. We clearly have

    U, V ⪰ 0, U + V = W  ⇔  U = W^{1/2} P W^{1/2}, V = W^{1/2} Q W^{1/2},  P, Q ⪰ 0, P + Q = I,

whence

    max_{U,V: U,V⪰0, U+V=W} Tr([U − V] A) = max_{P,Q: P,Q⪰0, P+Q=I} Tr([P − Q] W^{1/2} A W^{1/2}).

When P, Q are linked by the relation P + Q = I and vary in {P ⪰ 0, Q ⪰ 0}, the matrix X = P − Q runs through the entire "interval" {−I ⪯ X ⪯ I} (why?); we have proved the first equality in (3.4.42). When proving the second equality, we may assume w.l.o.g. that the matrix W^{1/2} A W^{1/2} is diagonal, so that Tr(X W^{1/2} A W^{1/2}) = λ^T(W^{1/2} A W^{1/2}) Dg(X), where Dg(X) is the diagonal of X. When X runs through the "interval" {−I ⪯ X ⪯ I}, the diagonal of X runs through the entire unit cube {‖x‖∞ ≤ 1}, which immediately yields the second equality in (3.4.42). □

By Lemma 3.4.2, from (3.4.41) it follows that there exists W ⪰ 0 such that

    ρ Σ_{i=1}^m ‖λ(W^{1/2} A_i W^{1/2})‖₁ > Tr(W^{1/2} A_0 W^{1/2}).   (3.4.43)

3⁰. Now let us use the following observation:

Lemma 3.4.3 With ξ ∼ N(0, I_n), for every k and every symmetric n × n matrix A with Rank(A) ≤ k one has

    (a) E{ξ^T A ξ} = Tr(A),
    (b) E{|ξ^T A ξ|} ≥ (1/ϑ(Rank(A))) ‖λ(A)‖₁;                    (3.4.44)

here E stands for the expectation w.r.t. the distribution of ξ.

Proof of Lemma. (3.4.44.a) is evident:

    E{ξ^T A ξ} = Σ_{i,j=1}^n A_ij E{ξ_i ξ_j} = Tr(A).

To prove (3.4.44.b), by homogeneity it suffices to consider the case when ‖λ(A)‖₁ = 1, and by rotational invariance of the distribution of ξ – the case when A is diagonal, and the first Rank(A) diagonal entries of A are the nonzero eigenvalues of the matrix; with this normalization, the required relation immediately follows from the definition of ϑ(·). □
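Both parts of Lemma 3.4.3 can be sanity-checked by simulation. For the rank-2 matrix A = Diag(1/2, −1/2, 0, 0) (a toy choice, not from the text) the bound (3.4.44.b) is attained with equality, since E{|ξ^T A ξ|} = E{|ξ₁² − ξ₂²|}/2 = 2/π = ‖λ(A)‖₁/ϑ(2); a Monte Carlo sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
nsamp = 1_000_000

# rank-2 diagonal matrix with ||lambda(A)||_1 = 1
A = np.diag([0.5, -0.5, 0.0, 0.0])

xi = rng.standard_normal((nsamp, 4))
q = ((xi @ A) * xi).sum(axis=1)       # quadratic forms xi^T A xi

# (3.4.44.a): E{xi^T A xi} = Tr(A) = 0
assert abs(q.mean() - np.trace(A)) < 5e-3

# (3.4.44.b): E{|xi^T A xi|} >= ||lambda(A)||_1 / theta(2) = 2/pi,
# with equality for this particular A
assert abs(np.abs(q).mean() - 2 / np.pi) < 5e-3
print(np.abs(q).mean(), "vs 2/pi =", 2 / np.pi)
```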

4⁰. Now we are ready to prove (i). Let ξ ∼ N(0, I_n). We have

    E{ ρϑ(μ) Σ_{i=1}^m |ξ^T W^{1/2} A_i W^{1/2} ξ| }
      = ρϑ(μ) Σ_{i=1}^m E{ |ξ^T W^{1/2} A_i W^{1/2} ξ| }
      ≥ ρ Σ_{i=1}^m ‖λ(W^{1/2} A_i W^{1/2})‖₁
            [by (3.4.44.b) due to Rank(W^{1/2} A_i W^{1/2}) ≤ Rank(A_i) ≤ μ, i ≥ 1]
      > Tr(W^{1/2} A_0 W^{1/2})
            [by (3.4.43)]
      = E{ ξ^T W^{1/2} A_0 W^{1/2} ξ },
            [by (3.4.44.a)]

whence

    E{ ρϑ(μ) Σ_{i=1}^m |ξ^T W^{1/2} A_i W^{1/2} ξ| − ξ^T W^{1/2} A_0 W^{1/2} ξ } > 0.

It follows that there exists r ∈ R^n such that

    ϑ(μ) ρ Σ_{i=1}^m |r^T W^{1/2} A_i W^{1/2} r| > r^T W^{1/2} A_0 W^{1/2} r,

so that setting z_i = −ϑ(μ) ρ sign(r^T W^{1/2} A_i W^{1/2} r), we get

    r^T W^{1/2} [ A_0 + Σ_{i=1}^m z_i A_i ] W^{1/2} r < 0.

We see that the matrix A_0 + Σ_{i=1}^m z_i A_i is not positive semidefinite, while by construction ‖z‖∞ ≤ ϑ(μ)ρ. Thus, (3.4.37) holds true. (i) is proved.
To prove (ii), let α ∈ R^k be such that ‖α‖₁ = 1, and let

    J = ∫ |α_1 u_1² + ... + α_k u_k²| p_k(u) du.

Let β = [α; −α], and let ξ ∼ N(0, I_{2k}). We have

    E{ |Σ_{i=1}^{2k} β_i ξ_i²| } ≤ E{ |Σ_{i=1}^k β_i ξ_i²| } + E{ |Σ_{i=1}^k β_{i+k} ξ_{i+k}²| } = 2J.   (3.4.45)
 
On the other hand, let η_i = (ξ_i − ξ_{k+i})/√2, ζ_i = (ξ_i + ξ_{k+i})/√2, i = 1, ..., k, and let

    ω = [α_1 η_1; ...; α_k η_k],   ω̃ = [|α_1 η_1|; ...; |α_k η_k|],   ζ = [ζ_1; ...; ζ_k].

Observe that ζ and ω are independent and ζ ∼ N(0, I_k). We have

    E{ |Σ_{i=1}^{2k} β_i ξ_i²| } = 2 E{ |Σ_{i=1}^k α_i η_i ζ_i| } = 2 E{ |ω^T ζ| } = 2 E{‖ω‖₂} E{|ζ_1|},

where the concluding equality follows from the fact that ζ ∼ N(0, I_k) is independent of ω. We further have

    E{|ζ_1|} = ∫ |t| p_1(t) dt = 2/√(2π)

and

    E{‖ω‖₂} = E{‖ω̃‖₂} ≥ ‖E{ω̃}‖₂ = [∫ |t| p_1(t) dt] √(Σ_{i=1}^k α_i²).

Combining our observations, we come to

    E{ |Σ_{i=1}^{2k} β_i ξ_i²| } ≥ 2 (2/√(2π))² ‖α‖₂ = (4/π) ‖α‖₂ ≥ (4/π) ‖α‖₁/√k = 4/(π√k).

This relation combines with (3.4.45) to yield J ≥ 2/(π√k). Recalling the definition of ϑ(k), we come to ϑ(k) ≤ π√k/2, as required in (3.4.38).
It remains to prove that ϑ(2) = π/2. From the definition of ϑ(·) it follows that

    ϑ^{-1}(2) = min_{0≤θ≤1} ∫ |θu_1² − (1 − θ)u_2²| p_2(u) du ≡ min_{0≤θ≤1} f(θ).

The function f(θ) is clearly convex and satisfies the identity f(θ) = f(1 − θ), 0 ≤ θ ≤ 1, so that its minimum is attained at θ = 1/2. A direct computation says that f(1/2) = 2/π. Another straightforward, boring computation says that

    ϑ(3) = { min_{0≤γ≤1} [ 1 − 2γ + 2γ^{3/2}/√(2 − γ) ] }^{-1} = 1.7348...,

and verification of the fact that ϑ(4) = 2 is left to the reader.18  □

Corollary 3.4.1 Let the ranks of all matrices A_1, ..., A_m in MC be ≤ μ. Then the optimal value in the semidefinite problem

    ρ[A_1, ..., A_m : A_0] = max_{ρ, X^i} { ρ : X^i ⪰ ρA_i, X^i ⪰ −ρA_i, i = 1, ..., m;  Σ_{i=1}^m X^i ⪯ A_0 }   (3.4.46)

is a lower bound on R[A_1, ..., A_m : A_0], and the "true" quantity is at most ϑ(μ) times (see (3.4.39), (3.4.38)) larger than the bound:

    ρ[A_1, ..., A_m : A_0] ≤ R[A_1, ..., A_m : A_0] ≤ ϑ(μ) ρ[A_1, ..., A_m : A_0].   (3.4.47)

3.4.3.2 Application: Lyapunov Stability Analysis for an interval matrix


Now we are equipped to attack the problem of certifying the stability of an uncertain linear dynamical system with interval uncertainty. The problem we are interested in is as follows:

"Interval Lyapunov": Given a stable n × n matrix A∗ 19) and an n × n matrix D ≠ 0 with nonnegative entries, find the supremum R[A∗, D] of those ρ ≥ 0 for which all instances of the "interval matrix"

    U_ρ = {A ∈ M^{n,n} : |A_ij − (A∗)_ij| ≤ ρD_ij, i, j = 1, ..., n}

share a common quadratic Lyapunov function, i.e., the semi-infinite system of LMI's

    X ⪰ I;  A^T X + XA ⪯ −I  ∀A ∈ U_ρ   (Ly_ρ)

in the matrix variable X ∈ S^n is solvable.

Observe that X ⪰ I solves (Ly_ρ) if and only if the matrix cube

    A_ρ[X] = { B = A_0[X] + Σ_{(i,j)∈D} z_ij A^{ij}[X] : |z_ij| ≤ ρ, (i, j) ∈ D },
    A_0[X] = −I − A∗^T X − XA∗,   A^{ij}[X] = D_ij ([E^{ij}]^T X + X E^{ij}),
    D = {(i, j) : D_ij > 0},

is contained in S^n_+; here the E^{ij} are the "basic n × n matrices" (the ij-th entry of E^{ij} is 1, all other entries are zero). Note that the ranks of the matrices A^{ij}[X], (i, j) ∈ D, are at most 2. Therefore
from Proposition 3.4.4 and Theorem 3.4.7 we get the following result:
18 Here is a "semi-analytic" representation of ϑ(k) for k ≥ 2:

    ϑ(k) = { min_{1≤ℓ≤k/2} min_{0≤γ≤1} ∫_0^∞ ∫_0^∞ | (γ/(k−ℓ)) u − ((1−γ)/ℓ) v | π_{k−ℓ}(u) π_ℓ(v) du dv }^{-1},

where π_s(·) is the density of the χ²-distribution with s degrees of freedom.
19) I.e., with all eigenvalues in the open left half-plane, or, which is the same, such that A∗^T X + XA∗ ≺ 0 for certain X ≻ 0.

Proposition 3.4.5 Let ρ ≥ 0. Then

(i) If the system of LMI's

    X ⪰ I,
    X^{ij} ⪰ −ρD_ij ([E^{ij}]^T X + X E^{ij}),  X^{ij} ⪰ ρD_ij ([E^{ij}]^T X + X E^{ij}),  (i, j) ∈ D,   (A_ρ)
    Σ_{(i,j)∈D} X^{ij} ⪯ −I − A∗^T X − XA∗

in the matrix variables X, X^{ij}, (i, j) ∈ D, is solvable, then so is the system (Ly_ρ), and the X-component of a solution of the former system solves the latter system.

(ii) If the system of LMI's (A_ρ) is not solvable, then so is the system (Ly_{πρ/2}).

In particular, the supremum ρ[A∗, D] of those ρ for which (A_ρ) is solvable is a lower bound for R[A∗, D], and the "true" quantity is at most π/2 times larger than the bound:

    ρ[A∗, D] ≤ R[A∗, D] ≤ (π/2) ρ[A∗, D].
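The rank observation behind this proposition – [E^{ij}]^T X + X E^{ij} = e_j (X e_i)^T + (X e_i) e_j^T, a matrix of rank at most 2 – is easy to confirm numerically on a random symmetric X:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
X = rng.standard_normal((n, n))
X = X + X.T                      # a symmetric candidate certificate

for i in range(n):
    for j in range(n):
        Eij = np.zeros((n, n))
        Eij[i, j] = 1.0          # the "basic matrix" E^{ij}
        M = Eij.T @ X + X @ Eij  # = e_j (X e_i)^T + (X e_i) e_j^T
        assert np.linalg.matrix_rank(M, tol=1e-8) <= 2
print("every [E^ij]^T X + X E^ij has rank <= 2")
```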

Computing ρ[A∗, D]. The quantity ρ[A∗, D], in contrast to R[A∗, D], is "efficiently computable": applying dichotomy in ρ, we can find a high-accuracy approximation of ρ[A∗, D] via solving a small series of semidefinite feasibility problems (A_ρ). Note, however, that problem (A_ρ), although "computationally tractable", is not that simple: in the case of "full uncertainty" (D_ij > 0 for all i, j) it has n² + 1 matrix variables of size n × n each. It turns out [13] that one can reduce dramatically the sizes of the problem specifying ρ[A∗, D]. The resulting (equivalent!) description of the bound is:

    1/ρ[A∗, D] = inf_{λ, Y, X, {η_ℓ}} { λ :  X ⪰ I,

        [ Y − Σ_{ℓ=1}^m η_ℓ e_{j_ℓ} e_{j_ℓ}^T   Xe_{i_1}   Xe_{i_2}   ...   Xe_{i_m} ]
        [ [Xe_{i_1}]^T                           η_1                                  ]
        [ [Xe_{i_2}]^T                                      η_2                       ]
        [ ...                                                          ...            ]
        [ [Xe_{i_m}]^T                                                       η_m      ]  ⪰ 0,

        A_0[X] ≡ −I − A∗^T X − XA∗ ⪰ 0,  Y ⪯ λ A_0[X],  λ > 0 },        (3.4.48)

where (i_1, j_1), ..., (i_m, j_m) are the positions of the uncertain entries in our uncertain matrix (i.e., the pairs (i, j) such that D_ij > 0), e_1, ..., e_n are the standard basic orths in R^n, and the blank entries of the above "arrow" matrix are zeros.

Note that the optimization program in (3.4.48) has just two symmetric matrix variables X, Y, a single scalar variable λ and m ≤ n² scalar variables η_ℓ, i.e., totally at most 2n² + n + 2 scalar design variables, which, for large m, is much less than the design dimension of (A_ρ).

A good exercise is to pass from (A_ρ) to (3.4.48) by utilizing Corollary 3.3.1.

Remark 3.4.2 Note that our results on the Matrix Cube problem can be applied to the interval version of the Lyapunov Stability Synthesis problem, where we are interested in finding the supremum R of those ρ for which an uncertain controllable system

    d/dt x(t) = A(t)x(t) + B(t)u(t)

with interval uncertainty

    (A(t), B(t)) ∈ U_ρ = {(A, B) : |A_ij − (A∗)_ij| ≤ ρD_ij, |B_iℓ − (B∗)_iℓ| ≤ ρC_iℓ ∀i, j, ℓ}

admits a linear feedback

    u(t) = Kx(t)

such that all instances A(t) + B(t)K of the resulting closed loop system share a common quadratic Lyapunov function. Here our constructions should be applied to the semi-infinite system of LMI's

    Y ⪰ I,  BL + AY + L^T B^T + YA^T ⪯ −I  ∀(A, B) ∈ U_ρ

in the variables L, Y (see Proposition 3.3.4), and they yield an efficiently computable lower bound on R which is at most π/2 times less than R.

We have seen that the Matrix Cube Theorem allows one to build tight computationally tractable approximations to the semi-infinite systems of LMI's responsible for stability of uncertain linear dynamical systems affected by interval uncertainty. The same is true for many other semi-infinite systems of LMI's arising in Control in the presence of interval uncertainty, since in a typical Control-related LMI, a perturbation of a single entry in the underlying data results in a small-rank perturbation of the LMI – a situation well-suited for applying the Matrix Cube Theorem.

3.4.3.3 Application: Nesterov's π/2 Theorem revisited

Our results on the Matrix Cube problem give an alternative proof of Nesterov's π/2 Theorem (Theorem 3.4.2). Recall that in this theorem we are comparing the true maximum

    OPT = max_d { d^T Ad : ‖d‖∞ ≤ 1 }

of a positive semidefinite (A ⪰ 0) quadratic form on the unit n-dimensional cube and the semidefinite upper bound

    SDP = max_X { Tr(AX) : X ⪰ 0, X_ii ≤ 1, i = 1, ..., n }   (3.4.49)

on OPT; the theorem says that

    OPT ≤ SDP ≤ (π/2) OPT.   (3.4.50)
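For small n, the quantity OPT can be computed exactly by brute force: since d ↦ d^T Ad is convex, its maximum over the cube is attained at one of the 2^n vertices. A sketch on a random toy instance:

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
n = 6
G = rng.standard_normal((n, n))
A = G @ G.T                         # a PSD quadratic form (toy instance)

# d^T A d is convex, so its maximum over the cube sits at a vertex in {-1,1}^n
opt = max(np.array(d) @ A @ np.array(d)
          for d in itertools.product([-1.0, 1.0], repeat=n))

# sanity bounds: the vertex max dominates the average over vertices, Tr(A),
# and d^T A d <= lam_max(A) * ||d||_2^2 = n * lam_max(A) on the cube
assert opt >= np.trace(A) - 1e-9
assert opt <= n * np.linalg.eigvalsh(A).max() + 1e-9
print("OPT =", opt)
```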
To derive (3.4.50) from the Matrix Cube-related considerations, assume that A ≻ 0 rather than A ⪰ 0 (by continuity reasons, to prove (3.4.50) for the case of A ⪰ 0 is the same as to prove the relation for all A ≻ 0) and let us start with the following simple observation:

Lemma 3.4.4 Let A ≻ 0 and

    OPT = max_d { d^T Ad : ‖d‖∞ ≤ 1 }.

Then

    1/OPT = max { ρ : [1, d^T; d, A^{-1}] ⪰ 0  ∀(d : ‖d‖∞ ≤ ρ^{1/2}) }   (3.4.51)

and

    1/OPT = max { ρ : A^{-1} ⪰ X  ∀(X ∈ S^n : |X_ij| ≤ ρ ∀i, j) }.   (3.4.52)

Proof. To get (3.4.51), note that by the Schur Complement Lemma, all matrices of the form [1, d^T; d, A^{-1}] with ‖d‖∞ ≤ ρ^{1/2} are ⪰ 0 if and only if d^T (A^{-1})^{-1} d = d^T Ad ≤ 1 for all d, ‖d‖∞ ≤ ρ^{1/2}, i.e., if and only if ρ·OPT ≤ 1; we have derived (3.4.51). We now have

    (a) 1/OPT ≥ ρ
      ⇕ [by (3.4.51)]
    [1, d^T; d, A^{-1}] ⪰ 0  ∀(d : ‖d‖∞ ≤ ρ^{1/2})
      ⇕ [the Schur Complement Lemma]
    A^{-1} ⪰ ρ dd^T  ∀(d : ‖d‖∞ ≤ 1)
      ⇕
    x^T A^{-1} x ≥ ρ (d^T x)²  ∀x ∀(d : ‖d‖∞ ≤ 1)
      ⇕
    x^T A^{-1} x ≥ ρ ‖x‖₁²  ∀x
      ⇕
    (b) A^{-1} ⪰ ρY  ∀(Y = Y^T : |Y_ij| ≤ 1 ∀i, j),

where the concluding ⇕ is given by the evident relation

    ‖x‖₁² = max_Y { x^T Y x : Y = Y^T, |Y_ij| ≤ 1 ∀i, j }.

The equivalence (a) ⇔ (b) is exactly (3.4.52). □

By (3.4.52), 1/OPT is exactly the maximum R of those ρ for which the matrix cube

    C_ρ = { A^{-1} + Σ_{1≤i≤j≤n} z_ij S^{ij} : max_{i,j} |z_ij| ≤ ρ }

is contained in S^n_+; here the S^{ij} are the "basic symmetric matrices" (S^{ii} has a single nonzero entry, equal to 1, in the cell ii, and S^{ij}, i < j, has exactly two nonzero entries, equal to 1, in the cells ij and ji). Since the ranks of the matrices S^{ij} do not exceed 2, Proposition 3.4.4 and Theorem 3.4.7 say that the optimal value in the semidefinite program

    ρ(A) = max_{ρ, X^{ij}} { ρ : X^{ij} ⪰ ρS^{ij}, X^{ij} ⪰ −ρS^{ij}, 1 ≤ i ≤ j ≤ n;  Σ_{i≤j} X^{ij} ⪯ A^{-1} }   (S)

is a lower bound for R, and this bound coincides with R up to the factor π/2; consequently, 1/ρ(A) is an upper bound on OPT, and this bound is at most π/2 times larger than OPT. It remains to note that a direct computation demonstrates that 1/ρ(A) is exactly the quantity SDP given by (3.4.49).

3.4.3.4 Application: Bounding robust ellitopic norms of uncertain matrices with box uncertainty

Consider the following problem, which arises, e.g., in Robust Control:

Given the box-type uncertainty set

    U[ρ] = { A = Σ_{i=1}^N z_i A_i : ‖z‖∞ ≤ ρ }

in the space of m × n matrices, upper-bound the quantity

    Opt∗(ρ) = max_{A∈U[ρ]} |A|,

where |·| stands for the spectral norm of a matrix.

This problem can be immediately reduced to Matrix Cube; indeed, associating with an m × n matrix A the symmetric (m + n) × (m + n) matrix

    L[A] = [0, A; A^T, 0],

observe that |A| ≤ R if and only if R·I_{m+n} − L[A] ⪰ 0. Therefore the relation

    Opt∗(ρ) ≤ R   (3.4.53)

is equivalent to

    R·I_{m+n} + Σ_{i=1}^N z_i L[A_i] ⪰ 0  ∀(z : ‖z‖∞ ≤ ρ).
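The reduction rests on the spectral fact that the eigenvalues of L[A] are the singular values of A, their negatives, and |n − m| zeros, so that |A| is the top eigenvalue of L[A]; a quick check:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 3, 5
A = rng.standard_normal((m, n))

# L[A] = [[0, A], [A^T, 0]]
L = np.block([[np.zeros((m, m)), A], [A.T, np.zeros((n, n))]])

sigma = np.linalg.svd(A, compute_uv=False)   # singular values of A
eigs = np.sort(np.linalg.eigvalsh(L))

# spectrum of L[A]: {+-sigma_i} plus |n - m| zeros
expected = np.sort(np.concatenate([sigma, -sigma, np.zeros(n - m)]))
assert np.allclose(eigs, expected, atol=1e-8)

# hence |A| <= R iff R*I - L[A] >= 0: the spectral norm is the top eigenvalue
assert abs(eigs[-1] - sigma.max()) < 1e-8
print(eigs)
```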

Applying the Matrix Cube machinery, we conclude that an efficiently verifiable sufficient condition for the validity of the latter semi-infinite LMI is the solvability of the parametric system of LMIs

    R·I_{m+n} − ρ Σ_{i=1}^N U_i ⪰ 0,  U_i ⪰ ±L[A_i], 1 ≤ i ≤ N   (S[R, ρ])

in matrix variables U_i. Besides this, we clearly have Rank(L[A]) ≤ 2 Rank(A), so that the conservatism of the just presented sufficient condition for (3.4.53) can be quantified by the Matrix Cube Theorem, specifically:

Setting μ = max_{1≤i≤N} Rank(A_i), we have:

• (3.4.53) does take place when S[R, ρ] is feasible, and

• when S[R, ρ] is infeasible, one has Opt∗(ϑ(2μ)ρ) > R, with ϑ(·) from Theorem 3.4.7.

Our current goal is to extend this result from the spectral norm |·| to a more general "ellitopic" norm of a matrix.

Bounding the robust ellitopic norm of an uncertain matrix under box uncertainty: problem setting. Let

    X  = {x : ∃t ∈ T : x^T T_k x ≤ t_k, k ≤ K} ⊂ R^n
    B∗ = {y : ∃s ∈ S : y^T S_ℓ y ≤ s_ℓ, ℓ ≤ L} ⊂ R^m        (3.4.54)

be two basic ellitopes, see Section 3.4.2, and let A_i ∈ R^{m×n}, 1 ≤ i ≤ N. In the sequel we set

    ‖A‖ = 2 max_{y∈B∗, x∈X} y^T Ax = max_{[y;x]∈B∗×X} [y;x]^T [0, A; A^T, 0] [y;x] : R^{m×n} → R.   (3.4.55)

Our goal is to upper-bound, in a computationally efficient fashion, the quantity

    Opt∗ = max_{ε: ‖ε‖∞ ≤ 1} ‖ Σ_i ε_i A_i ‖   (3.4.56)

which we refer to as the robust ‖·‖-norm of the uncertain matrix A = { Σ_i ε_i A_i : ‖ε‖∞ ≤ 1 }.

Note that when both X and B∗ are unit Euclidean balls, ‖·‖ becomes twice the spectral norm. Other cases covered by our setup are those when X and B∗ are ‖·‖_p and ‖·‖_q balls, 2 ≤ p, q ≤ ∞, resulting in

    ‖A‖ = 2 max_x { ‖Ax‖_{q/(q−1)} : ‖x‖_p ≤ 1 }.

Processing the problem. Let S be the closed conic hull of S and T the closed conic hull of T (see the proof of Theorem 3.4.5), so that, by item 1⁰ of the proof of Theorem 3.4.5, we have

    S = {s : [s; 1] ∈ S},  S∗ = {[g; τ] : τ ≥ φ_S(−g)},  T = {t : [t; 1] ∈ T},  T∗ = {[g; τ] : τ ≥ φ_T(−g)},

where

    φ_H(λ) = max_{h∈H} λ^T h

is the support function of a set H.


Let

    Opt = min_{λ≥0, μ≥0, P_i, Q_i} { φ_S(λ) + φ_T(μ) : [P_i, A_i; A_i^T, Q_i] ⪰ 0, i ≤ N;
                                     Σ_i P_i ⪯ Σ_ℓ λ_ℓ S_ℓ,  Σ_i Q_i ⪯ Σ_k μ_k T_k }
        = min_{λ, μ, P_i, Q_i, p, q} { p + q : [P_i, A_i; A_i^T, Q_i] ⪰ 0, i ≤ N,
                                       [−λ; p] ∈ S∗,  [−μ; q] ∈ T∗,  λ ≥ 0, μ ≥ 0,
                                       Σ_i P_i ⪯ Σ_ℓ λ_ℓ S_ℓ,  Σ_i Q_i ⪯ Σ_k μ_k T_k }
        = max_{Y, X, W_i, s, t} { 2 Σ_i Tr(W_i^T A_i) : t ∈ T, s ∈ S,
                                  Tr(Y S_ℓ) ≤ s_ℓ, ℓ ≤ L,  Tr(X T_k) ≤ t_k, k ≤ K,
                                  [Y, W_i; W_i^T, X] ⪰ 0 ∀i ≤ N }         [conic duality]   (3.4.57)
    (a) = max_{Y, X, s, t} { 2 Σ_i ‖σ(Y^{1/2} A_i X^{1/2})‖₁ : Y ⪰ 0, X ⪰ 0, t ∈ T, s ∈ S,
                             Tr(Y S_ℓ) ≤ s_ℓ, ℓ ≤ L,  Tr(X T_k) ≤ t_k, k ≤ K }
    (b) = max_{Y, X, s, t} { Σ_i ‖λ(L[Y^{1/2} A_i X^{1/2}])‖₁ : Y ⪰ 0, X ⪰ 0, t ∈ T, s ∈ S,
                             Tr(Y S_ℓ) ≤ s_ℓ, ℓ ≤ L,  Tr(X T_k) ≤ t_k, k ≤ K }

where σ(A) is the vector of singular values of a matrix A, λ(A) is the vector of eigenvalues of a symmetric matrix A, and

    L[B] = [0, B; B^T, 0].

In (3.4.57), (a) follows from the two simple observations (cf. the proof of Theorem 3.4.6):

• the LMI [P, Q; Q^T, R] ⪰ 0 with a p × p matrix P and an r × r matrix R takes place if and only if P ⪰ 0, R ⪰ 0, and Q = P^{1/2} Y R^{1/2} with a p × r matrix Y such that Y^T Y ⪯ I_r, and

• for a p × r matrix A, one has max_Y {Tr(Y^T A) : Y ∈ R^{p×r}, Y^T Y ⪯ I_r} = ‖σ(A)‖₁,

while (b) stems from the fact that the eigenvalues of L[B] are the positive singular values of B, minus these positive singular values, and a number of zeros.
Note that Opt as defined in (3.4.57) clearly is a convex function of [A_1, ..., A_N].

Observe that Opt∗ ≤ Opt. Indeed, the problem specifying Opt clearly is solvable, and if λ ≥ 0, μ ≥ 0, P_i, Q_i is its optimal solution, we have for all y ∈ B∗, x ∈ X, ε_i = ±1:

    2ε_i y^T A_i x ≤ y^T P_i y + x^T Q_i x
      ⇒ 2 Σ_i ε_i y^T A_i x ≤ y^T [Σ_ℓ λ_ℓ S_ℓ] y + x^T [Σ_k μ_k T_k] x
      ⇒ 2 Σ_i ε_i y^T A_i x ≤ max_{s∈S, t∈T} [λ^T s + μ^T t] ≤ φ_S(λ) + φ_T(μ) = Opt.

This relation holds true for all x ∈ X, y ∈ B∗ and all ε_i = ±1, implying that Opt∗ ≤ Opt.
Now let X ⪰ 0, Y ⪰ 0, t, s be such that t ∈ T, s ∈ S, Tr(Y S_ℓ) ≤ s_ℓ, ℓ ≤ L, Tr(X T_k) ≤ t_k, k ≤ K, and

    Opt = Σ_i ‖λ(L[Y^{1/2} A_i X^{1/2}])‖₁.

By Lemma 3.4.3, if the ranks of all matrices A_i do not exceed a given κ, which we assume from now on, then for ω ∼ N(0, I_{m+n}) one has

    E{ |ω^T L[Y^{1/2} A_i X^{1/2}] ω| } ≥ ‖λ(L[Y^{1/2} A_i X^{1/2}])‖₁ / ϑ(2κ),

where ϑ(k) is the universal function from Theorem 3.4.7. It follows that

    Opt ≤ ϑ(2κ) E_{ω∼N(0,I_{m+n})}{ Σ_i |ω^T L[Y^{1/2} A_i X^{1/2}] ω| } = ϑ(2κ) E_{[η;ξ]∼N(0,Diag{Y,X})}{ 2 Σ_i |η^T A_i ξ| }.

Now let p(·) be the norm on R^n with the unit ball X, and q(·) be the norm on R^m with the unit ball B∗; then

    ∀(η ∈ R^m, ξ ∈ R^n):  2 Σ_i |η^T A_i ξ| = max_{ε_i=±1} 2η^T [Σ_i ε_i A_i] ξ ≤ q(η) p(ξ) Opt∗,

and we arrive at the relation

    Opt ≤ ϑ(2κ) Opt∗ E_{[η;ξ]∼N(0,Diag{Y,X})}{q(η) p(ξ)} = ϑ(2κ) Opt∗ E_{ξ∼N(0,X)}{p(ξ)} E_{η∼N(0,Y)}{q(η)}.   (3.4.58)

Lemma 3.4.5 Let

    Z = {z ∈ R^d : ∃w ∈ W : z^T Z_j z ≤ w_j, 1 ≤ j ≤ J} ⊂ R^d

be a basic ellitope, let W ⪰ 0 be a symmetric d × d matrix such that

    ∃w ∈ W : Tr(W Z_j) ≤ w_j, j ≤ J,

and let ω ∼ N(0, W). Denoting by r(·) the norm on R^d with the unit ball Z, we have

    E{r(ω)} ≤ υ(J) := { 1,               J = 1
                        (5/2)√(ln(2J)),  J > 1 }   (3.4.59)

Proof. Let us start with the case of J = 1. Setting w̄ = max{w : w ∈ W} and Z = Z_1/w̄, we have Tr(WZ) ≤ 1 and r(u) = ‖Z^{1/2} u‖₂. Setting W̄ = Z^{1/2} W Z^{1/2} and ω̄ = Z^{1/2} ω, we get ω̄ ∼ N(0, W̄), Tr(W̄) ≤ 1, and

    E{r(ω)} = E{‖ω̄‖₂} ≤ √(E{ω̄^T ω̄}) = √(Tr(W̄)) ≤ 1 = υ(1).

Now let J > 1. Observe that if Θ ⪰ 0 is a d × d matrix with trace ≤ 1, then for 0 ≤ θ < 1/2

    E_{ζ∼N(0,I_d)}{exp{θ ζ^T Θ ζ}} = E_{ζ∼N(0,I_d)}{exp{θ Σ_i ζ_i² λ_i(Θ)}}
        ≤ E_{s∼N(0,1)}{exp{θ s²}} = (1 − 2θ)^{-1/2}
        [by convexity of E_{ζ∼N(0,I_d)}{exp{θ Σ_i ζ_i² λ_i}} in λ]
    ⇒ ∀s ≥ 0:  Prob_{ζ∼N(0,I_d)}{ζ^T Θ ζ ≥ s²} ≤ (1 − 2θ)^{-1/2} exp{−θ s²}.

Under the premise of the Lemma, let w ∈ W be such that Tr(W Z_j) ≤ w_j for all j. For every j such that w_j > 0, setting Θ_j = W^{1/2} Z_j W^{1/2}/w_j, we get Θ_j ⪰ 0, Tr(Θ_j) ≤ 1, so that by the above

    ∀s > 0:  Prob_{ω∼N(0,W)}{ω^T Z_j ω > s² w_j} = Prob_{ζ∼N(0,I_d)}{ζ^T Θ_j ζ > s²} ≤ exp{−θ s²}/√(1 − 2θ).

The resulting inequality clearly holds true for j with w_j = 0 as well. Now, when ω and s > 0 are such that ω^T Z_j ω ≤ s² w_j for all j, we have r(ω) ≤ s. Combining our observations, we get

    Prob_{ω∼N(0,W)}{r(ω) > s} ≤ min{ 1, J exp{−θ s²}/√(1 − 2θ) },

implying that

    E_{ω∼N(0,W)}{r(ω)} ≤ ∫_0^∞ min{ 1, J exp{−θ s²}/√(1 − 2θ) } ds.

Optimizing in θ, we arrive at

    E_{ω∼N(0,W)}{r(ω)} ≤ (5/2)√(ln(2J)) = υ(J).  □
Applying the Lemma to Z = X, W = X, and to Z = B∗, W = Y, we get from (3.4.58) the following conclusion:

Proposition 3.4.6 In the situation described in the Problem Setting, assuming that the ranks of all A_i are ≤ κ, the efficiently computable quantity Opt as given by (3.4.57) is a reasonably tight upper bound on the quantity of interest Opt∗ as given by (3.4.56); specifically,

    Opt∗ ≤ Opt ≤ υ(K) υ(L) ϑ(2κ) Opt∗,
    ϑ(1) = 1, ϑ(2) = π/2, ϑ(4) = 2, ϑ(k) ≤ π√k/2,   (3.4.60)

with the same ϑ(·) as in Theorem 3.4.7, with K, L given by (3.4.54), and with υ(·) given by (3.4.59).

Remark 3.4.3 Assume that the matrices A_i = A_i[χ] are affine in some vector χ of control parameters. In this case, the quantities Opt∗ and Opt as defined by (3.4.56), resp. (3.4.57), become functions Opt∗(χ), Opt(χ) of χ, and it is immediately seen that both of them are convex. As a result, we can handle, to some extent, the problem of minimizing in χ the robust ‖·‖-norm of the uncertain matrix

    A[χ] = { A = Σ_i ε_i A_i[χ] : ‖ε‖∞ ≤ 1 }.

More precisely, we can minimize over χ the efficiently computable convex upper bound Opt(χ) on the robust norm Opt∗(χ) of A[χ], the bound being reasonably tight provided that the ranks of the matrices A_i[χ] are small for all χ in question.

Extension. The above results can be straightforwardly extended from what we called ellitopic norms of matrices to a more general class of matrix norms. Specifically, let

• X ⊂ R^n be a set with nonempty interior represented as

    X = Conv{ ∪_{i=1}^I P_i X_i } = { x = Σ_{i=1}^I λ_i P_i x_i : x_i ∈ X_i, λ_i ≥ 0, Σ_i λ_i = 1 }
      = { x = Σ_i P_i x_i : Σ_i ‖x_i‖_{X_i} ≤ 1 },   [‖·‖_{X_i}: norm on R^{n_i} with unit ball X_i]

where the X_i ⊂ R^{n_i} are basic ellitopes. Clearly, X is a convex compact set symmetric w.r.t. the origin and containing a neighbourhood of the origin, and as such is the unit ball of a certain norm ‖·‖_X on R^n.

Our assumption allows, of course, X to be an ellitope, but it allows also for a much wider family of convex sets and associated norms. For example, let p_i ∈ [2, ∞], X_i = {x_i ∈ R^{n_i} : ‖x_i‖_{p_i} ≤ 1}, and let

    P_i x_i = [0; ...; 0; x_i; 0; ...; 0]

be the natural embeddings of R^{n_i} into R^{n_1+...+n_I} = R^{n_1} × ... × R^{n_I}. In this case

    X = { [x_1; ...; x_I] : x_i ∈ R^{n_i}, i ≤ I, Σ_i ‖x_i‖_{p_i} ≤ 1 }  &  ‖[x_1; ...; x_I]‖_X = Σ_i ‖x_i‖_{p_i};

in particular, we get a handle on the block ℓ₁/ℓ₂ norm.
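A block norm of the kind just described is easy to implement; a small sketch (hypothetical block sizes and exponents) checking the norm axioms on random vectors:

```python
import numpy as np

# hypothetical block structure: R^7 = R^3 x R^4, p_1 = 2, p_2 = inf, so
# ||[x_1; x_2]||_X = ||x_1||_2 + ||x_2||_inf
blocks = [(slice(0, 3), 2), (slice(3, 7), np.inf)]

def block_norm(x):
    return sum(np.linalg.norm(x[sl], ord=p) for sl, p in blocks)

# the norm axioms hold, as they must for the unit ball Conv{U_i P_i X_i}
rng = np.random.default_rng(5)
for _ in range(100):
    u, v = rng.standard_normal(7), rng.standard_normal(7)
    assert block_norm(u + v) <= block_norm(u) + block_norm(v) + 1e-12
    assert abs(block_norm(-2.5 * u) - 2.5 * block_norm(u)) < 1e-10
print(block_norm(np.array([3.0, 4.0, 0.0, 1.0, -2.0, 0.0, 0.0])))
```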

• Y ⊂ Rm is the polar of a set of the structure just described:

    Y = {y ∈ R^m : max_{z∈Z} z^T y ≤ 1},   Z = {z = ∑_{j=1}^J μ_j Q_j z_j : z_j ∈ Z_j, μ_j ≥ 0, ∑_j μ_j = 1},

where Zj ⊂ Rmj are basic ellitopes and Qj , Zj are such that Z has a nonempty interior.
Y is the unit ball of some norm ‖·‖_Y on R^m; we clearly have

    ‖y‖_Y = max_{z∈Z} z^T y = max_j max_{z_j∈Z_j} [Q_j^T y]^T z_j = max_j ‖Q_j^T y‖_{Z_j,∗},

where k · kZj ,∗ is the norm conjugate to the norm on Rmj with the unit ball Zj .
For example, when Z_j = {z_j ∈ R^{m_j} : ‖z_j‖_{r_j} ≤ 1}, r_j ∈ [2, ∞], we get

    ‖y‖_Y = max_j ‖Q_j^T y‖_{q_j},   q_j = r_j/(r_j − 1).
234 LECTURE 3. SEMIDEFINITE PROGRAMMING

Now, when A ∈ R^{m×n}, the operator norm of A induced by the norms ‖·‖_X and ‖·‖_Y on the argument and image spaces is

    ‖A‖_{X→Y} = max_x { ‖Ax‖_Y : ‖x‖_X ≤ 1 }
              = max_x { max_j ‖Q_j^T Ax‖_{Z_j,∗} : ‖x‖_X ≤ 1 }
              = max_x { max_j max_{z_j} { z_j^T Q_j^T Ax : z_j ∈ Z_j } : ‖x‖_X ≤ 1 }
              = max_j max_{x,z_j} { z_j^T Q_j^T Ax : z_j ∈ Z_j, x ∈ X }
              = max_j max_{z_j,x} { z_j^T Q_j^T Ax : z_j ∈ Z_j, x ∈ Conv{∪_i P_i X_i} }
              = max_j max_i max_{z_j,x_i} { z_j^T Q_j^T A P_i x_i : z_j ∈ Z_j, x_i ∈ X_i }
              = max_{i,j} ‖Q_j^T A P_i‖_{ij},

where

    ‖Q_j^T A P_i‖_{ij} = max_{z_j∈Z_j, x_i∈X_i} z_j^T [Q_j^T A P_i] x_i.

As we know from Theorem 3.4.6, we can upper-bound ‖Q_j^T A P_i‖_{ij} by Φ_ij(Q_j^T A P_i) with convex and efficiently computable Φ_ij(·), the bound being tight within the factor 3√(ln(4K_i) ln(4L_j)), where K_i and L_j are the ellitopic sizes (numbers of quadratic constraints in the ellitopic descriptions) of X_i and Z_j. Besides this, the above reasoning combines with Proposition 3.4.6 to imply that if A = {∑_{k=1}^K ε_k A_k : ‖ε‖_∞ ≤ 1} is an uncertain m × n matrix with box uncertainty, we can efficiently
upper-bound the robust X,Y-norm

    ‖A‖_{X→Y} = max_{A∈A} ‖A‖_{X→Y}

of A by a properly defined quantity Φ_{X→Y}(A_1, ..., A_K), where Φ_{X→Y}(·) is an efficiently computable convex function, and this upper bound is tight within the factor

    max_{i,j} υ(K_i)υ(L_j)ϑ(2κ),

where κ is the maximum over k ≤ K of the ranks of A_k, with υ(·) defined by (3.4.59) and ϑ(·) defined in Theorem 3.4.7.
Illustration: Let

    m = n = ∑_{k=1}^K d_k,   I = 1,   J = K,
    X = X_1 = {x = [x_1; ...; x_K] ∈ R^{d_1} × ... × R^{d_K} : ‖x_k‖_2 ≤ 1, k ≤ K}
            = {x : ∃t ∈ T = [0,1]^K : x^T T_k x := x_k^T x_k ≤ t_k, k ≤ K},
    Q_j z_j = [0; ...; 0; z_j; 0; ...; 0] : R^{d_j} → R^n,  j ≤ K,
    Z = {z = ∑_{j=1}^J μ_j Q_j z_j : z_j ∈ Z_j := {z_j ∈ R^{d_j} : z_j^T z_j ≤ 1}, μ_j ≥ 0, ∑_j μ_j = 1}
      = {z = [z_1; ...; z_K] : z_k ∈ R^{d_k}, ∑_{k≤K} ‖z_k‖_2 ≤ 1},
    Y = {y ∈ R^m ≡ R^n : max_{z∈Z} z^T y ≤ 1} = {y = [y_1; ...; y_K] : y_k ∈ R^{d_k}, ‖y_k‖_2 ≤ 1, k ≤ K} = X,
    ‖[x_1; ...; x_K]‖_X = ‖[x_1; ...; x_K]‖_Y = max_{k≤K} ‖x_k‖_2.
3.5. S-LEMMA AND APPROXIMATE S-LEMMA 235

In this case, A ∈ R^{n×n} can be represented as A = [A_ij]_{i,j≤K} with d_i × d_j blocks A_ij, and

    ‖A‖_{X→Y} = ‖A‖_{X→X} = max_{i≤K} max_{y=[y_1;...;y_K]} { ‖∑_{j=1}^K A_ij y_j‖_2 : ‖y_j‖_2 ≤ 1, j ≤ K }.
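The exact value of this block norm is hard to compute in general. A crude Monte Carlo sketch (toy sizes of our own choosing; plain sampling in place of the semidefinite machinery of the text) sandwiches it between a sampled lower bound and the trivial upper bound max_i ∑_j ‖A_ij‖_{2,2}:

```python
import numpy as np

rng = np.random.default_rng(3)
K, d = 3, 4
A = rng.standard_normal((K * d, K * d))
blk = [[A[i*d:(i+1)*d, j*d:(j+1)*d] for j in range(K)] for i in range(K)]

# lower bound: sample y_j on the unit spheres (the sup is attained at ||y_j||_2 = 1)
lb = 0.0
for _ in range(2000):
    ys = [y / np.linalg.norm(y) for y in rng.standard_normal((K, d))]
    lb = max(lb, max(np.linalg.norm(sum(blk[i][j] @ ys[j] for j in range(K)))
                     for i in range(K)))

# trivial upper bound: max_i sum_j (spectral norm of A_ij)
ub = max(sum(np.linalg.norm(blk[i][j], 2) for j in range(K)) for i in range(K))
```

The true norm lies between `lb` and `ub`; the point of the semidefinite relaxation above is to replace the crude `ub` by an efficiently computable bound with a provable tightness factor.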

Toy experiment. Given an uncertain n × n matrix

    A = {A = ∑_{i=1}^N ε_i a_i b_i^T : ‖ε‖_∞ ≤ 1},

we want to lower-bound the supremum ρ∗ of reals ρ > 0 such that all instances of the uncertain matrix I_n + ρA are nonsingular. A sufficient condition for a given ρ > 0 to satisfy the latter condition is that ρ max_{A∈A} ‖A‖_{X→X} < 1, and different selections of X result, in general, in different lower bounds on ρ∗. In our experiment,

• we set n = 32, select at random columns a_i of the n × n Hadamard matrix and columns b_i of I_n, 1 ≤ i ≤ N = 14, and set

      A = {∑_{i=1}^N ε_i a_i b_i^T : ‖ε‖_∞ ≤ 1};

• for K = 2^ℓ, 0 ≤ ℓ ≤ 5, we applied the above machinery to upper-bound the robust norm of A induced by the norm ‖x‖_(K) = max_{1≤k≤K} ‖x_k‖_2, where x_1, ..., x_K are consecutive segments, with n/K entries each, of x ∈ R^n.

In our experiment, the resulting bounds on the robust norms of A were as follows:

K 1 2 4 8 16 32
bound 10.4525 9.7980 11.1589 11.6066 13.8419 14.0000

so that the best lower bound on ρ∗ corresponds to K = 2 and is equal to 1/9.7980 = 0.1021.
Brute force simulation demonstrates that ρ∗ ≤ 0.1722, thus our lower bound on ρ∗ is tight within
the factor 1.69.
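The setup of the experiment is easy to re-create. The sketch below (our own random column choices, with plain sign-vector sampling in place of the bounding machinery) brackets the robust spectral norm of A between a sampled lower estimate and the crude upper bound ∑_i ‖a_i‖_2‖b_i‖_2:

```python
import numpy as np

rng = np.random.default_rng(4)
n, N = 32, 14
H = np.array([[1.0]])
for _ in range(5):                  # Sylvester construction of the 32x32 Hadamard matrix
    H = np.block([[H, H], [H, -H]])
a = H[:, rng.choice(n, N, replace=False)]
b = np.eye(n)[:, rng.choice(n, N, replace=False)]

# max over ||eps||_inf <= 1 of ||sum_i eps_i a_i b_i^T||_2 is attained at sign
# vectors (the norm is convex in eps), so sampling signs gives a lower estimate
est = max(np.linalg.norm(sum(e[i] * np.outer(a[:, i], b[:, i]) for i in range(N)), 2)
          for e in rng.choice([-1.0, 1.0], size=(200, N)))
ub = sum(np.linalg.norm(a[:, i]) * np.linalg.norm(b[:, i]) for i in range(N))
```

Here ‖a_i‖_2 = √32 for Hadamard columns and ‖b_i‖_2 = 1, so the crude bound is 14√32 ≈ 79.2; the point of the experiment in the text is that the ellitopic machinery does dramatically better.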

3.5 S-Lemma and Approximate S-Lemma


3.5.1 S-Lemma
Let us look again at the Lagrange relaxation of a quadratically constrained quadratic problem,
but in the very special case when all the forms involved are homogeneous, and the right hand
sides of the inequality constraints are zero:

    minimize  x^T Bx
    s.t.      x^T A_i x ≥ 0,  i = 1, ..., m        (3.5.1)

(B, A1 , ..., Am are given symmetric m × m matrices). Assume that the problem is feasible. In
this case (3.5.1) is, at a first glance, a trivial problem: due to homogeneity, its optimal value
is either −∞ or 0, depending on whether there exists or does not exist a feasible vector x such
that xT Bx < 0. The challenge here is to detect which one of these two alternatives takes
place, i.e., to understand whether or not a homogeneous quadratic inequality xT Bx ≥ 0 is a

consequence of the system of homogeneous quadratic inequalities xT Ai x ≥ 0, or, which is the


same, to understand when the implication
(a) xT Ai x ≥ 0, i = 1, ..., m
⇓ (3.5.2)
(b) xT Bx ≥ 0
holds true.
In the case of homogeneous linear inequalities it is easy to recognize when an inequality x^T b ≥ 0 is a consequence of the system of inequalities x^T a_i ≥ 0, i = 1, ..., m: by the Farkas Lemma, it is the case if and only if the inequality is a linear consequence of the system, i.e.,
if b is representable as a linear combination, with nonnegative coefficients, of the vectors ai .
Now we are asking a similar question about homogeneous quadratic inequalities: when (b) is a
consequence of (a)?
In general, there is no analogy of the Farkas Lemma for homogeneous quadratic inequalities.
Note, however, that the easy “if” part of the Lemma can be extended to the quadratic case:
if the target inequality (b) can be obtained by linear aggregation of the inequalities (a) and a
trivial – identically true – inequality, then the implication in question is true. Indeed, a linear
aggregation of the inequalities (a) is an inequality of the type
    x^T ( ∑_{i=1}^m λ_i A_i ) x ≥ 0

with nonnegative weights λ_i, and a trivial – identically true – homogeneous quadratic inequality is of the form

    x^T Q x ≥ 0
with Q ⪰ 0. The fact that (b) can be obtained from (a) and a trivial inequality by linear aggregation means that B can be represented as B = ∑_{i=1}^m λ_i A_i + Q with λ_i ≥ 0, Q ⪰ 0, or, which is the same, that B ⪰ ∑_{i=1}^m λ_i A_i for certain nonnegative λ_i. If this is the case, then (3.5.2) is trivially true. We have arrived at the following simple

Proposition 3.5.1 Assume that there exist nonnegative λ_i such that B ⪰ ∑_i λ_i A_i. Then the implication (3.5.2) is true.
Proposition 3.5.1 is no more than a sufficient condition for the implication (3.5.2) to be true,
and in general this condition is not necessary. There is, however, an extremely fruitful particular
case when the condition is both necessary and sufficient – this is the case of m = 1, i.e., a single
quadratic inequality in the premise of (3.5.2):
Theorem 3.5.1 [S-Lemma] Let A, B be symmetric n × n matrices, and assume that the
quadratic inequality
xT Ax ≥ 0 (A)
is strictly feasible: there exists x̄ such that x̄T Ax̄ > 0. Then the quadratic inequality
xT Bx ≥ 0 (B)
is a consequence of (A) if and only if it is a linear consequence of (A), i.e., if and only if there exists a nonnegative λ such that

    B ⪰ λA.

We are about to present an “intelligent” proof of the S-Lemma based on the ideas of semidef-
inite relaxation.
In view of Proposition 3.5.1, all we need is to prove the “only if” part of the S-Lemma, i.e.,
to demonstrate that if the optimization problem
    min_x { x^T Bx : x^T Ax ≥ 0 }

is strictly feasible and its optimal value is ≥ 0, then B ⪰ λA for a certain λ ≥ 0. By homogeneity
reasons, it suffices to prove exactly the same statement for the optimization problem
    min_x { x^T Bx : x^T Ax ≥ 0, x^T x = n }        (P)

The standard semidefinite relaxation of (P) is the problem

    min_X { Tr(BX) : Tr(AX) ≥ 0, Tr(X) = n, X ⪰ 0 }        (P0)

If we could show that when passing from the original problem (P) to the relaxed problem (P0 )
the optimal value (which was nonnegative for (P)) remains nonnegative, we would be done.
Indeed, observe that (P0 ) is clearly bounded below (its feasible set is compact!) and is strictly
feasible (which is an immediate consequence of the strict feasibility of (A)). Thus, by the Conic
Duality Theorem the problem dual to (P0 ) is solvable with the same optimal value (let it be
called nθ∗ ) as the one in (P0 ). The dual problem is

    max_{μ,λ} { nμ : λA + μI ⪯ B, λ ≥ 0 },

and the fact that its optimal value is nθ∗ means that there exists a nonnegative λ such that

    B ⪰ λA + θ∗I.

If we knew that the optimal value nθ∗ in (P0) is nonnegative, we would conclude that B ⪰ λA for a certain nonnegative λ, which is exactly what we are aiming at. Thus, all we need is to prove that under the premise of the S-Lemma the optimal value in (P0) is nonnegative, and here is the proof:
Observe first that problem (P0) is feasible with a compact feasible set, and thus is solvable. Let X∗ be an optimal solution to the problem. Since X∗ ⪰ 0, there exists a matrix D such that X∗ = DD^T. Note that we have

    0 ≤ Tr(AX∗) = Tr(ADD^T) = Tr(D^T AD),
    nθ∗ = Tr(BX∗) = Tr(BDD^T) = Tr(D^T BD),        (∗)
    n = Tr(X∗) = Tr(DD^T) = Tr(D^T D).

It remains to use the following observation

(!) Let P, Q be symmetric matrices such that Tr(P ) ≥ 0 and Tr(Q) < 0. Then
there exists a vector e such that eT P e ≥ 0 and eT Qe < 0.

Indeed, let us believe that (!) is valid, and let us prove that θ∗ ≥ 0. Assume, on the contrary, that θ∗ < 0. Setting P = D^T AD and Q = D^T BD and taking into account (∗), we see that the matrices P, Q satisfy the premise in (!), whence, by (!), there exists a vector e such that 0 ≤ e^T P e = [De]^T A[De] and 0 > e^T Qe = [De]^T B[De], which contradicts the premise of the S-Lemma.

It remains to prove (!). Given P and Q as in (!), note that Q, as every symmetric matrix,
admits a representation
Q = U T ΛU
with an orthonormal U and a diagonal Λ. Note that θ ≡ Tr(Λ) = Tr(Q) < 0. Now let ξ be
a random n-dimensional vector with independent entries taking values ±1 with probabilities
1/2. We have

[U T ξ]T Q[U T ξ] = [U T ξ]T U T ΛU [U T ξ] = ξ T Λξ = Tr(Λ) = θ ∀ξ,

while
[U T ξ]T P [U T ξ] = ξ T [U P U T ]ξ,
and the expectation of the latter quantity over ξ is clearly Tr(U P U T ) = Tr(P ) ≥ 0. Since
the expectation is nonnegative, there is at least one realization ξ̄ of our random vector ξ such that

    0 ≤ [U^T ξ̄]^T P [U^T ξ̄].

We see that the vector e = U^T ξ̄ is a required one: e^T Qe = θ < 0 and e^T P e ≥ 0.    □
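The averaging argument behind (!) is easy to replay numerically. A toy sketch (the matrices P, Q below are our own random choices, shifted so that they satisfy the premise of (!)):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 6
P = rng.standard_normal((n, n)); P = P + P.T
P = P - (min(np.trace(P), 0.0) / n) * np.eye(n)   # shift so that Tr(P) >= 0
Q = rng.standard_normal((n, n)); Q = Q + Q.T
Q = Q - (np.trace(Q) / n + 1.0) * np.eye(n)       # shift so that Tr(Q) = -n < 0

lam, V = np.linalg.eigh(Q)                        # Q = V Diag(lam) V^T, V orthogonal
# for every sign vector xi, e = V xi gives e^T Q e = sum(lam) = Tr(Q) < 0;
# since the average of e^T P e over sign vectors equals Tr(P) >= 0, some xi works
for signs in itertools.product([-1.0, 1.0], repeat=n):
    e = V @ np.array(signs)
    if e @ P @ e >= 0.0:
        break
```

The loop is guaranteed to terminate: the maximum of e^T P e over sign vectors is at least its average, which is Tr(P) ≥ 0.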

3.5.2 Inhomogeneous S-Lemma


Proposition 3.5.2 [Inhomogeneous S-Lemma] Consider an optimization problem with quadratic objective and a single quadratic constraint:
    f∗ = min_x { f_0(x) ≡ x^T A_0 x + 2b_0^T x + c_0 : f_1(x) ≡ x^T A_1 x + 2b_1^T x + c_1 ≤ 0 }        (3.5.3)

Assume that the problem is strictly feasible and bounded below. Then the semidefinite relaxation
(3.4.5) of the problem is solvable with the optimal value f∗ .

Proof. By Proposition 3.4.1, the optimal value in (3.4.5) can be only ≤ f∗ . Thus, it suffices to
verify that (3.4.5) admits a feasible solution with the value of the objective ≥ f∗ , that is, that
there exists λ∗ ≥ 0 such that
    [ c_0 + λ∗c_1 − f∗   b_0^T + λ∗b_1^T ]
    [ b_0 + λ∗b_1        A_0 + λ∗A_1     ]  ⪰ 0.        (3.5.4)

To this end, let us associate with (3.5.3) a pair of homogeneous quadratic forms of the extended
vector of variables y = (t, x), where t ∈ R, specifically, the forms

    y^T P y ≡ x^T A_1 x + 2t b_1^T x + c_1 t²,   y^T Qy = −x^T A_0 x − 2t b_0^T x − (c_0 − f∗)t².

We claim that, first, there exist ε_0 > 0 and ȳ with ȳ^T P ȳ < −ε_0 ȳ^T ȳ and, second, that for every ε ∈ (0, ε_0] the implication

    y^T P y ≤ −ε y^T y ⇒ y^T Qy ≤ 0        (3.5.5)

holds true. The first claim is evident: by assumption, there exists x̄ such that f_1(x̄) < 0; setting ȳ = (1, x̄), we see that ȳ^T P ȳ = f_1(x̄) < 0, whence ȳ^T P ȳ < −ε_0 ȳ^T ȳ for appropriately chosen ε_0 > 0. To support the second claim, assume that y = (t, x) is such that y^T P y ≤ −ε y^T y, and let us prove that then y^T Qy ≤ 0.

• Case 1: t ≠ 0. Setting y′ = t^{−1}y = (1, x′), we have f_1(x′) = [y′]^T P y′ = t^{−2} y^T P y ≤ 0, whence f_0(x′) ≥ f∗, or, which is the same, [y′]^T Qy′ ≤ 0, so that y^T Qy ≤ 0, as required in (3.5.5).

• Case 2: t = 0. In this case, −εx^T x = −εy^T y ≥ y^T P y = x^T A_1 x and y^T Qy = −x^T A_0 x, and we should prove that the latter quantity is nonpositive. Assume, on the contrary, that this quantity is positive, that is, x^T A_0 x < 0. Then x ≠ 0 and therefore x^T A_1 x ≤ −εx^T x < 0. From x^T A_1 x < 0 and x^T A_0 x < 0 it follows that f_1(sx) → −∞ and f_0(sx) → −∞ as s → +∞, which contradicts the assumption that (3.5.3) is bounded below. Thus, y^T Qy ≤ 0.

Our observations combine with the S-Lemma (applied to the quadratic forms y^T[−P − εI]y and y^T[−Q]y) to imply that for every ε ∈ (0, ε_0] there exists λ = λ_ε ≥ 0 such that

    −Q ⪰ λ_ε(−P − εI),        (3.5.6)

whence, in particular,

    −ȳ^T Qȳ ≥ λ_ε ȳ^T[−P − εI]ȳ.

The latter relation, due to ȳ^T P ȳ < −ε_0 ȳ^T ȳ ≤ −ε ȳ^T ȳ, implies that λ_ε remains bounded as ε → +0. Thus, we have λ_{ε_i} → λ∗ ≥ 0 as i → ∞ for a properly chosen sequence ε_i → +0 of values of ε, and (3.5.6) implies that λ∗P − Q ⪰ 0. Recalling what P and Q are, we arrive at (3.5.4).    □

3.5.3 Approximate S-Lemma


In general, the S-Lemma fails to be true when there is more than a single quadratic form in
(3.5.2) (that is, when m > 1). Similarly, Inhomogeneous S-Lemma fails to be true for general
quadratically constrained quadratic problems with more than a single quadratic constraint.
There exists, however, a useful approximate version of the Inhomogeneous S-Lemma in the
“multi-constrained” case²⁰.
Consider a single-parametric family of ellitopes

    X[ρ] = {x ∈ R^n : ∃z ∈ Z[ρ] : x = P z},   Z[ρ] = {z ∈ R^N : ∃t ∈ T : z^T S_k z ≤ ρt_k, k ≤ K}        (3.5.7)

where ρ > 0 and S_k and T are as they should be to specify an ellitope, see Section 3.4.2, and let

    Opt∗(ρ) = max_{x∈X[ρ]} [x^T Ax + 2b^T x]
            = max_z { z^T Qz + 2q^T z : z ∈ Z[ρ] }                                  (3.5.8)
            = max_{z,t} { z^T Qz + 2q^T z : z^T S_k z ≤ ρt_k, k ≤ K, t ∈ T },
                                                        [Q := P^T AP, q := P^T b]

Let us set

    Q_+ = [ Q     q ]
          [ q^T   0 ],
and let

    φ_T(λ) = max_{t∈T} λ^T t

be the support function of T.

²⁰ What follows is an ellitopic version of the Approximate S-Lemma from [14].


The following proposition establishes the quality of the semidefinite relaxation upper bound
on the quantity Opt∗ (ρ).

Proposition 3.5.3 [Approximate S-Lemma] In the situation just defined, let

λ ∈ RK
 

 µ≥0
" ,P
+ # 

Opt[ρ] = min ρφT (λ) + µ : k λk Sk (3.5.9)
λ,µ  Q+  
 µ 

Then
Opt∗ [ρ] ≤ Opt[ρ] ≤ Opt∗ [κρ] (3.5.10)
with
κ = 3 ln(6K) (3.5.11)

Proof follows the lines of the proof of Theorem 3.4.5 (which, essentially, is the homogeneous case b = 0 of Proposition 3.5.3). Replacing T with ρT, we assume once and for all that ρ = 1.
1⁰. As we have seen when proving Theorem 3.4.5, setting

    𝐓 = cl {[t; τ] : τ > 0, t/τ ∈ T}

we get a regular cone with the dual cone

    𝐓_∗ = {[g; s] : s ≥ φ_T(−g)}

and such that

    T = {t : [t; 1] ∈ 𝐓}.
Problem (3.5.9) with ρ = 1 is the conic problem

    Opt[1] = min_{λ,τ,μ} { τ + μ :  λ ∈ R^K_+,  μ ≥ 0,  [−λ; τ] ∈ 𝐓_∗,  Diag{∑_k λ_k S_k, μ} − Q_+ ⪰ 0 }        (∗)

It is immediately seen that (∗) is strictly feasible and solvable, so that Opt[1] is the optimal value in the conic dual of (∗). The latter problem, as is immediately seen after straightforward simplifications, becomes

    Opt[1] = max_{V,v,t} { Tr(V Q) + 2v^T q :  t ∈ T,  Tr(S_k V) ≤ t_k, k ≤ K,
                           [ V     v ]
                           [ v^T   1 ]  ⪰ 0 }                                        (3.5.12)

By definition,

    Opt∗[1] = max_{z,t} { z^T Qz + 2q^T z : t ∈ T, z^T S_k z ≤ t_k, k ≤ K }.

If (z, t) is a feasible solution to the latter problem, then V = zz T , v = z, t is a feasible solution


to (3.5.12) with the same value of the objective, implying that Opt[1] ≥ Opt∗ [1], as stated in
the first relation in (3.5.10) (recall that we are in the case of ρ = 1).

2⁰. We have already stated that problem (3.5.12) is solvable. Let V∗, v∗, t∗ be its optimal solution, and let

    X∗ = [ V∗     v∗ ]
         [ v∗^T   1  ].

Let ζ be a Rademacher random vector of the same size as that of Q_+, let

    X∗^{1/2} Q_+ X∗^{1/2} = U Diag{µ} U^T

with orthogonal U, and let ξ = X∗^{1/2} U ζ = [η; τ], where τ is the last entry in ξ. Then

    ξ^T Q_+ ξ = ζ^T U^T X∗^{1/2} Q_+ X∗^{1/2} U ζ = ζ^T Diag{µ} ζ
              = ∑_i µ_i = Tr(X∗^{1/2} Q_+ X∗^{1/2}) = Tr(X∗ Q_+) = Opt[1].        (3.5.13)
Next, we have

    ξξ^T = [ ηη^T   τη ]  =  X∗^{1/2} U ζζ^T U^T X∗^{1/2},
           [ τη^T   τ² ]

whence

    E{ξξ^T} = [ E{ηη^T}   E{τη} ]  =  X∗  =  [ V∗     v∗ ]        (3.5.14)
              [ E{τη^T}   E{τ²} ]            [ v∗^T   1  ]
In particular,

    E{η^T S_k η} = Tr(S_k E{ηη^T}) = Tr(S_k V∗) ≤ t∗_k,  k ≤ K.

We have η = W ζ for a certain rectangular matrix W such that V∗ = E{ηη^T} = E{W ζζ^T W^T} = W W^T. Consequently, Tr(W^T S_k W) = E{ζ^T W^T S_k W ζ} = E{η^T S_k η} ≤ t∗_k. The relations W^T S_k W ⪰ 0, Tr(W^T S_k W) ≤ t∗_k combine with Lemma 3.4.1 to imply that

    Prob{η^T S_k η > rt∗_k} = Prob{ζ^T W^T S_k W ζ > rt∗_k} ≤ 3 exp{−r/3}   ∀r ≥ 0, 1 ≤ k ≤ K.        (3.5.15)
3⁰. Invoking (3.5.14) we have

    E{τ²} = E{[ξξ^T]_{N+1,N+1}} = 1        (3.5.16)

and

    τ = β^T ζ

for some vector β with ‖β‖_2 = 1 due to (3.5.16). Now let us use the following fact [14, Lemma A.1]:

Lemma 3.5.1 Let β be a deterministic ‖·‖_2-unit vector in R^N and ζ be an N-dimensional Rademacher random vector. Then Prob{|β^T ζ| ≤ 1} ≥ 1/3.
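Lemma 3.5.1 is easy to verify exhaustively for small N (a sanity check with a random unit vector of our own choosing):

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
N = 10
beta = rng.standard_normal(N)
beta /= np.linalg.norm(beta)

# enumerate all 2^N Rademacher vectors and count those with |beta^T zeta| <= 1
hits = sum(abs(beta @ np.array(s)) <= 1.0
           for s in itertools.product([-1.0, 1.0], repeat=N))
frac = hits / 2.0**N
```

The lemma guarantees `frac >= 1/3` for every unit β, not just this random one.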
Now, from the definition of κ it follows that

    3K exp{−κ/3} < 1/3.

By (3.5.15) as applied with r = κ and Lemma 3.5.1, there exists a realization ξ̄ = [η̄; τ̄] of ξ such that

    η̄^T S_k η̄ ≤ κt∗_k, k ≤ K   &   |τ̄| ≤ 1.        (3.5.17)

Invoking (3.5.13) and taking into account that |τ̄| ≤ 1, we have

    Opt[1] = ξ̄^T Q_+ ξ̄ = η̄^T Qη̄ + 2τ̄ q^T η̄ ≤ η̂^T Qη̂ + 2q^T η̂,

where η̂ = η̄ when q^T η̄ ≥ 0 and η̂ = −η̄ otherwise. In both cases, from the first relation in (3.5.17) we conclude that η̂ ∈ Z[κ], and we arrive at Opt[1] ≤ Opt∗[κ].    □

3.5.3.1 Application: Approximating Affinely Adjustable Robust Counterpart of


Uncertain Linear Programming problem with ellitopic uncertainty
The notion of Affinely Adjustable Robust Counterpart (AARC) of uncertain LP was introduced
and motivated in Section 2.4.5. As applied to uncertain LP
    LP = { min_x { c^T[ζ]x : A[ζ]x − b[ζ] ≥ 0 } : ζ ∈ Z }        (3.5.18)

affinely parameterized by perturbation vector ζ and with variables xj allowed to be affine func-
tions of Pj ζ:
xj = µj + νjT Pj ζ, (3.5.19)
the AARC is the following semi-infinite optimization program in variables t, μ_j, ν_j:

    min_{t,{μ_j,ν_j}} { t :  ∑_j c_j[ζ][μ_j + ν_j^T P_j ζ] ≤ t  ∀ζ ∈ Z,
                             ∑_j [μ_j + ν_j^T P_j ζ] A_j[ζ] − b[ζ] ≥ 0  ∀ζ ∈ Z }        (AARC)

It was explained that in the case of fixed recourse (cj [ζ] and Aj [ζ] are independent of ζ for
all j for which xj is adjustable, that is, Pj 6= 0), (AARC) is equivalent to an explicit conic
quadratic program, provided that the perturbation set Z is CQr with essentially strictly feasible
CQR. In fact CQ-representability plays no crucial role here (see Remark 2.4.1); in particular,
when Z is SDr with an essentially strictly feasible SDR, (AARC), in the case of fixed recourse, is
equivalent to an explicit semidefinite program. What indeed plays a crucial role is the assumption
of fixed recourse; it can be shown that when this assumption does not hold, (AARC) can be
computationally intractable. Our current goal is to demonstrate that even in this difficult case
(AARC) admits a “tight” computationally tractable approximation, provided that Z is taken
from a single-parametric family of ellitopes
    Z[ρ] = {ζ : ∃t ∈ T : ζ^T S_k ζ ≤ ρt_k, k ≤ K}

where the Sk and T are as required by the definition of ellitope (see Section 3.4.2) and ρ > 0 is
the “uncertainty level.”
As is immediately seen, the AARC of uncertain LP with perturbation set Z = Z[ρ] is an optimization problem of the form

    Opt∗[ρ] = min_{θ∈Θ} { c^T θ :  Opt∗_i[θ; ρ] := max_{ζ∈Z[ρ]} [ζ^T P_i[θ]ζ + 2p_i^T[θ]ζ] ≤ r_i[θ],  i ≤ I }        (AARC[ρ])

where
• θ ∈ RN is a collection of parameters of the affine decision rules we are looking for,

• Θ is a given nonempty subset of R^N (typically, Θ = R^N; from now on, we assume that Θ is closed, convex, and computationally tractable, e.g., given by an SDR),

• P_i[θ], p_i[θ], r_i[θ] are known affine in θ matrix-, vector-, and real-valued functions.
Applying the construction from Section 3.5.3 to every one of the quantities

    max_{ζ∈Z[ρ]} [ζ^T P_i[θ]ζ + 2p_i^T[θ]ζ]

and invoking Proposition 3.5.3, we arrive at the computationally tractable convex problem

    Opt[ρ] = min_{θ∈Θ} { c^T θ : Opt_i[θ; ρ] ≤ r_i[θ], i ≤ I },
                                                                                    (APPR[ρ])
    Opt_i[θ; ρ] = min_{λ^i,μ_i} { ρφ_T(λ^i) + μ_i :  λ^i ≥ 0, μ_i ≥ 0,
                                  [ ∑_k λ^i_k S_k − P_i[θ]   −p_i[θ] ]
                                  [ −p_i^T[θ]                 μ_i    ]  ⪰ 0 }
such that
∀(ρ > 0, θ, i) : Opt∗i [θ; ρ] ≤ Opti [θ; ρ] ≤ Opt∗i [θ; κρ]
(3.5.20)
[κ = 3 ln(6K)]
Note that when φT is SDr (which definitely is the case when T is a SDr set with essentially
strictly feasible SDR, see comments after Theorem 3.4.5), (APPR[ρ]) reduces to a semidefinite
program.
By (3.5.20), computationally tractable problem (APPR[ρ]) is a safe tractable approximation
of the problem of interest (AARC[ρ]), meaning that the objectives of the problems are identical
and every feasible solution θ of the approximating problem is feasible for the problem of interest
as well. We are about to demonstrate that as far as dependence on the uncertainty level is
concerned, this approximation is tight within the factor κ:

    Opt∗[ρ] ≤ Opt[ρ] ≤ Opt∗[κρ].        (3.5.21)
Indeed, we have already established the left inequality; to establish the right one, note that
every feasible solution θ to (AARC[κρ]) in view of (3.5.20) is a feasible solution to (APPR[ρ])
with the same value of the objective.
The “tightness result” has a quite transparent interpretation. In general, the problem of
interest (AARC[ρ]) is computationally intractable; in contrast, its safe approximation (APPR[ρ])
is tractable. This approximation is conservative (when feasible, it can result in a worse value of the objective than the problem of interest, or can be infeasible when the problem of interest is
feasible), but this conservatism can be “compensated” by moderate reduction of the uncertainty
level: if “in the nature” there exist affine decision rules which guarantee, in a robust w.r.t.
uncertainty of level ρ, some value v of the objective, our methodology finds in a computationally
efficient fashion affine decision rules which will guarantee the same value v of the objective
robustly w.r.t. reduced uncertainty level ρ/κ with a “moderate” κ.

3.5.3.2 Application: Robust Conic Quadratic Programming with ellitopic uncer-


tainty
The concept of robust counterpart of an optimization problem with uncertain data (see Section
2.4.1) is in no sense restricted to Linear Programming. Whenever we have an optimization
problem depending on some data, we may ask what happens when the data are uncertain and all
we know is an uncertainty set the data belong to. Given such an uncertainty set, we may require
from candidate solutions to be robust feasible – to satisfy the realizations of the constraints for
all data running through the uncertainty set. The robust counterpart of an uncertain problem
is the problem of minimizing the objective21) over the set of robust feasible solutions.
21)
Without loss of generality, we may assume that the objective is “certain” – is not affected by the data
uncertainty. Indeed, we can always ensure this situation by passing to an equivalent problem with linear (and
standard) objective:
min{f (x) : x ∈ X} 7→ min {t : f (x) − t ≤ 0, x ∈ X} .
x t,x

Now, we have seen in Section 2.4.1 that the “robust form” of an uncertain linear inequality
with the coefficients varying in an ellipsoid is a conic quadratic inequality; as a result, the robust
counterpart of an uncertain LP problem with ellipsoidal uncertainty (or, more general, with a
CQr uncertainty set) is a conic quadratic problem. What is the “robust form” of an uncertain
conic quadratic inequality

kAx + bk2 ≤ cT x + d [A ∈ Mm,n , b ∈ Rm , c ∈ Rn , d ∈ R] (3.5.22)

with uncertain data (A, b, c, d) ∈ U? The question is how to describe the set of all robust feasible
solutions of this inequality, i.e., the set of x’s such that

kAx + bk2 ≤ cT x + d ∀(A, b, c, d) ∈ U. (3.5.23)

We intend to focus on the case when the uncertainty is “side-wise” – the data (A, b) of the
left hand side and the data (c, d) of the right hand side of the inequality (3.5.22) independently
of each other run through respective uncertainty sets U left [ρ], U right (ρ ≥ 0 is the left hand side
uncertainty level). It suffices to assume the right hand side uncertainty set to be SDr with an
essentially strictly feasible SDR:

U right = {(c, d) | ∃u : Pc + Qd + Ru  S}. (3.5.24)

As for the left hand side uncertainty set, we assume that it is parameterized by a “perturbation”
ζ running through a single-parametric family of basic ellitopes:
    U^left[ρ] = {[A, b] = [A∗, b∗] + ∑_j ζ_j[A_j, b_j] : ζ ∈ Z[ρ]},   Z[ρ] = {z ∈ R^N : ∃t ∈ T : z^T S_k z ≤ ρt_k, k ≤ K}        (3.5.25)
where T and Sk are as required in the definition of an ellitope (see Section 3.4.2.1).
Since the left hand side and the right hand side data independently of each other run through
respective uncertainty sets, a point x is robust feasible if and only if there exists a real τ such
that
(a) τ ≤ cT x + d ∀(c, d) ∈ U right ,
(3.5.26)
(b) kAx + bk2 ≤ τ ∀[A, b] ∈ U left [ρ].
We know that the set of (τ, x) satisfying (3.5.26.a) is SDr (see Proposition 2.4.2 and Remark
2.4.1); it is easy to verify that the corresponding SDR is as follows:

    (a)  (x, τ) satisfies (3.5.26.a)
          ⇕                                                                        (3.5.27)
    (b)  ∃Λ :  Λ ⪰ 0,  P∗Λ = x,  Tr(QΛ) = 1,  R∗Λ = 0,  Tr(SΛ) ≥ τ.

As for building an SDR of the set of pairs (τ, x) satisfying (3.5.26.b), this is a much more difficult (and in many cases even hopeless) task, since (3.5.23) in general turns out to be NP-hard and
as such cannot be posed as an explicit semidefinite program. We can, however, build a kind of
safe (i.e., inner) approximation of the set in question utilizing semidefinite relaxation.
Observe that with side-wise uncertainty, we lose nothing when assuming that all we want is to build/safely approximate the robust counterpart

    ∀[A, b] ∈ U[ρ] : ‖Ax + b‖_2 ≤ τ



of the uncertain conic quadratic inequality

    {‖Ax + b‖_2 ≤ τ : [A, b] ∈ U[ρ]}

in variables x, τ (which we eventually will link by constraints like (3.5.26), but for the time being this is irrelevant). Note that with our uncertainty set, the robust counterpart can be written down as the constraint

    ‖A[x][ζ; 1]‖_2 ≤ τ   ∀ζ ∈ Z[ρ]        (RC[ρ])

in variables x, τ with A[x] affine in x. Equivalently, the constraint can be written down as

    ∀ζ ∈ Z[ρ] :  [ζ; 1]^T A^T[x] A[x] [ζ; 1] ≤ τ²   &   τ ≥ 0.

Note that by the Schur Complement Lemma, (τ, x) is feasible for the latter constraints if and only if (τ, x) can be augmented by a symmetric matrix X to yield a robust solution to the semi-infinite system of constraints

    [ X       A^T[x] ]
    [ A[x]    τI     ]  ⪰ 0                                      (a)

    [ζ; 1]^T X [ζ; 1] ≤ τ   ∀ζ ∈ Z[ρ]                            (b)
in variables X, x, τ . Constraint (a) in this system is an explicit semidefinite constraint, which
allows us to focus on the only troublemaker – the semiinfinite constraint (b). Representing
    X = [ V     v ]
        [ v^T   w ],

our task reduces to processing the semi-infinite constraint

    ζ^T V ζ + 2v^T ζ ≤ τ − w   ∀ζ ∈ Z[ρ]        (S[ρ])
in variables V, v, τ, w. The task at hand can be resolved by Approximate S-Lemma (Proposition
3.5.3) which says that the system of explicit convex constraints

" λ #∈ R"K
+, µ ≥ 0 #
P
V v k λ k Sk

vT µ (3.5.28)
h ρφ T (λ) + µ ≤ τ − w i
φT (λ) = maxt∈T λT t is the support function of T

in variables V, v, w, τ, λ, μ is a safe tractable approximation of (S[ρ]): whenever V, v, w, τ can be extended by properly selected λ, μ to a feasible solution to (3.5.28), V, v, w, τ satisfies (S[ρ]). Moreover, this approximation is tight within the factor κ = 3 ln(6K), meaning that when V, v, w, τ cannot be extended to a feasible solution of (3.5.28), V, v, w, τ is infeasible for (S[κρ]).
Combining (3.5.28) with our preceding observations, we end up with a system (S[ρ]) of
explicit convex constraints in “variables of interest” x, τ and additional variables ω which is a
safe approximation of RC[ρ]: whenever x, τ can be augmented by appropriate value of ω to a
feasible solution of (S[ρ]), (x, τ ) is feasible for RC[ρ]. And if such “augmentation” is impossible,
then x, τ perhaps are feasible for RC[ρ], but definitely are not feasible for RC[κρ]. Thus, we have
built computationally tractable safe approximation of the robust counterpart of uncertain conic
quadratic inequality (3.5.22) with ellitopic left hand side perturbation set Z[ρ], see (3.5.25),
with moderate – logarithmic in K – conservatism in terms of the uncertainty level ρ.
Finally, we note that

• When φT is SDr (which definitely is the case when T is SDr with essentially strictly feasible
SDR, see comments to Theorem 3.4.5), (S[ρ]) reduces to a semidefinite program.

• When K = 1 (i.e., Z[ρ] is a family of ellipsoids centered at the origin and proportional to each other), the Inhomogeneous S-Lemma in the role of the Approximate S-Lemma shows that our safe conservative approximation of the robust counterpart of an uncertain conic quadratic constraint is not conservative at all: (x, τ) can be extended to a feasible solution to (S[ρ]) if and only if (x, τ) is feasible for RC[ρ].

• Our uncertainty level ρ is a kind of energy of perturbation rather than its magnitude — increasing ρ by a constant factor θ multiplies Z[ρ] by √θ; as a result, the “true” conservatism of our safe approximation of RC[ρ] is √κ rather than κ: if (x, τ) cannot be extended to a feasible solution of (S[ρ]) at a certain uncertainty level, then, increasing an appropriate perturbation ζ ∈ Z[ρ] by the factor √κ to get the enlarged perturbation ζ′ = √κ ζ, we arrive at ‖A[x][ζ′; 1]‖_2 > τ.

Example: Antenna Synthesis revisited. To illustrate the potential of the Robust Opti-
mization methodology as applied to conic quadratic problems, consider the Circular Antenna
Design problem from Section 2.4.1. Assume that now we deal with 40 ring-type antenna elements, and that our goal is to minimize the (discretized) L_2-distance from the synthesized diagram ∑_{j=1}^{40} x_j D_{r_{j−1},r_j}(·) to the “ideal” diagram D∗(·), which is equal to 1 in the range 77° ≤ θ ≤ 90° and to 0 in the range 0° ≤ θ ≤ 70°. The associated problem is just the Least Squares
problem
 
    min_{τ,x} { τ :  ‖D∗ − D_x‖_2 := √( [ ∑_{θ∈Θ_cns} D_x²(θ) + ∑_{θ∈Θ_obj} (D_x(θ) − 1)² ] / card(Θ_cns ∪ Θ_obj) ) ≤ τ },
                                                                                    (3.5.29)
    D_x(θ) = ∑_{j=1}^{40} x_j D_{r_{j−1},r_j}(θ),

where Θcns and Θobj are the intersections of the 240-point grid on the segment 0 ≤ θ ≤ 90o with
the “angle of interest” 77o ≤ θ ≤ 90o and the “sidelobe angle” 0o ≤ θ ≤ 70o , respectively.
The Nominal Least Squares design obtained from the optimal solution to this problem is
completely unstable w.r.t. small implementation errors xj 7→ (1 + ξj )xj , |ξj | ≤ ρ:

[Figure: three polar diagrams. Dream: D, no errors, ‖D∗ − D‖_2 = 0.014. Reality: a realization of D, 0.1% errors, ‖D∗ − D‖_2 ∈ [0.17, 0.89]. Reality: a realization of D, 2% errors, ‖D∗ − D‖_2 ∈ [2.9, 19.6].]
Nominal Least Squares design: dream and reality.
Range of ‖D∗ − D‖_2 is obtained by simulating 100 diagrams affected by implementation errors.
In order to take into account implementation errors, we should treat (3.5.29) as an uncertain
conic quadratic problem

    min_{τ,x} { τ : ‖Ax − b‖_2 ≤ τ },   A ∈ U,

with the uncertainty set of the form

    U = { A = A∗ + A∗ Diag(ξ) : ξ_k² ≤ ρ ∀k },

which is a particular case of the ellitopic uncertainty with K = dim ξ. In the experiments to be
reported, we use ρ = (0.02)2 (2% implementation errors). The approximate Robust Counterpart
(S[ρ]) of our uncertain conic quadratic problem yields the Robust design as follows:

[Figure: Robust Least Squares design: dream and reality. Three polar plots of the diagram D:
  (a) "Dream": D, no errors, ‖D_* − D‖₂ = 0.025;
  (b) "Reality": a realization of D under 0.1% errors, ‖D_* − D‖₂ ≈ 0.025;
  (c) "Reality": a realization of D under 2% errors, ‖D_* − D‖₂ ≈ 0.025.
Data from a 100-element sample.]

3.6 Semidefinite Relaxation and Chance Constraints


3.6.1 Chance constraints
We have already touched, on two different occasions (Sections 2.4, 3.5.3.2), on the important area of optimization under uncertainty, that is, solving optimization problems with partially unknown ("uncertain") data. However, for the time being, we have dealt solely with uncertain-but-bounded data perturbations, that is, data perturbations running through a given uncertainty set, and with a specific interpretation of an uncertainty-affected constraint: we required the candidate solutions (non-adjustable or adjustable alike) to satisfy the constraints for all realizations of the data perturbations from the uncertainty set; this was called the Robust Optimization paradigm. In Optimization there exists another approach to treating data uncertainty, the Stochastic Optimization approach, historically by far preceding the RO one. With the Stochastic Optimization approach, the data perturbations are assumed to be random with completely or partially known probability distribution. In this situation, a natural way to handle data uncertainty is to pass to chance constrained forms of the uncertainty-affected constraints²², namely, to replace a constraint

    f(x, ζ) ≤ 0

(x is the decision vector, ζ stands for the data perturbation) with the constraint

    Prob_{ζ∼P} {ζ : f(x, ζ) ≤ 0} ≥ 1 − ε,

where ε ≪ 1 is a given tolerance, and P is the distribution of ζ. This formulation tacitly assumes that the distribution of ζ is known exactly, which typically is not the case in reality. In real life, even if we have reasons to believe that the data perturbations are random (which by itself is a "big if"), we usually are not informed enough to point out their distribution exactly. Instead, we usually are able to point out a family P of probability distributions which contains the true distribution of ζ. In this case, we pass to the ambiguously chance constrained version of the uncertain constraint:

    inf_{P∈P} Prob_{ζ∼P} {ζ : f(x, ζ) ≤ 0} ≥ 1 − ε.   (3.6.1)

The notion of a chance constraint is quite old (going back to the mid-1950's); over the years, chance constraints were the subject of intensive and fruitful research by numerous excellent scholars worldwide. With all due respect to this effort, chance constraints, even ones looking as simple as scalar linear inequalities with randomly perturbed coefficients, remain quite challenging computationally²³. The reason is twofold:

• Checking the validity of a chance constraint at a given point x, even with exactly known distribution of ζ, requires multi-dimensional (provided ζ is so) integration, which typically is a computationally intractable task. Of course, one can replace precise integration by Monte Carlo simulation: generate a long sample of independent realizations ζ^1, ..., ζ^N of ζ and replace the true probability with its empirical approximation, the frequency of the event f(x, ζ^t) ≤ 0 in the sample. However, this approach is inapplicable to the case of an ambiguous chance constraint and, in addition, does not work when ε is "really small," like 10⁻⁶ or less. The reason is that in order to decide reliably from simulations that the probability of the event f(x, ζ) > 0 indeed is ≤ ε, the sample size N should be of order of 1/ε and thus becomes impractically large when ε is "really small."

22) By adding a slack variable t and replacing minimization of the true objective f(x) with minimization of t under the added constraint f(x) ≤ t, we can always assume that the objective is linear and certain, i.e., not affected by data uncertainty. For this reason, we can restrict ourselves to the case where the only uncertainty-affected components of an optimization problem are the constraints.
23) By the way, this is perhaps the most important argument in favour of RO: as we have seen in Section 2.4, the RO approach, at least in the case of non-adjustable uncertain Linear Programming, results in a computationally tractable (provided the uncertainty set is so) robust counterpart of the uncertain problem of interest.

• Another potential difficulty with a chance constraint is that its feasible set can be non-convex already for a pretty simple (just affine in x) function f(x, ζ), which makes optimization of an objective over such a set highly problematic.
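The sample-size obstruction in the first difficulty is easy to quantify: the probability that N i.i.d. draws contain even one realization of an event of probability ε is 1 − (1 − ε)^N, which stays negligible until N reaches the order of 1/ε. A minimal sketch (the value ε = 10⁻⁶ and the sample sizes are illustrative only):

```python
# Chance that N i.i.d. draws contain at least one occurrence of an event
# of probability eps is 1 - (1 - eps)^N.  For eps = 1e-6 this is negligible
# until N is of order 1/eps, which is why naive Monte Carlo cannot certify
# "really small" violation probabilities.
eps = 1e-6
for N in (10**3, 10**5, 10**6, 10**7):
    p_hit = 1 - (1 - eps) ** N
    print(N, round(p_hit, 4))
```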

Essentially, the only generic situation where neither of the above two difficulties occurs is the case of a scalar linear constraint

    Σ_{i=1}^n ζ_i x_i ≤ 0

with Gaussian ζ = [ζ_1; ...; ζ_n]. Assuming that the distribution of ζ is known exactly and denoting by µ, Σ the expectation and the covariance matrix of ζ, we have

    Prob{ Σ_i ζ_i x_i ≤ 0 } ≥ 1 − ε  ⇔  Σ_i µ_i x_i + Φ⁻¹(ε) sqrt(x^T Σ x) ≤ 0,

where Φ⁻¹(·) is the inverse error function given by

    ∫_{Φ⁻¹(ε)}^∞ (1/√(2π)) exp{−s²/2} ds = ε.

When ε ≤ 1/2, the above "deterministic equivalent" of the chance constraint is an explicit Conic Quadratic inequality.
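The equivalence is easy to check numerically. The sketch below uses illustrative data µ, Σ, x and ε = 0.05 (all chosen here for the example), the stdlib NormalDist for the (1 − ε)-quantile, and a Monte Carlo estimate of the chance constraint:

```python
import numpy as np
from statistics import NormalDist

# Illustrative data: zeta ~ N(mu, Sigma), a candidate x, tolerance eps
mu = np.array([-1.0, -0.5])
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])
x = np.array([1.0, 1.0])
eps = 0.05

# Deterministic equivalent: mu^T x + Phi^{-1}(eps) * sqrt(x^T Sigma x) <= 0,
# where Phi^{-1}(eps) is the (1 - eps)-quantile of the standard normal
lhs = mu @ x + NormalDist().inv_cdf(1 - eps) * np.sqrt(x @ Sigma @ x)

# Monte Carlo check of Prob{ zeta^T x <= 0 } >= 1 - eps
rng = np.random.default_rng(0)
zeta = rng.multivariate_normal(mu, Sigma, size=200_000)
freq = np.mean(zeta @ x <= 0)
print(lhs <= 0, freq >= 1 - eps)
```

For this data the deterministic left-hand side is negative and the empirical frequency comfortably exceeds 1 − ε, as the equivalence predicts.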

3.6.2 Safe tractable approximations of chance constraints


When a chance constraint “as it is” is computationally intractable, one can look for the “second
best thing” — a system S of efficiently computable convex constraints in variable x and perhaps
additional slack variables u which “safely approximates” the chance constraint, meaning that
whenever x can be extended, by a properly chosen u, to a feasible solution of S, x is feasible for
the chance constraint of interest. For a survey of state-of-the-art techniques for safe tractable
approximations of chance constraints, we refer an interested reader to [39, 15] and references
therein. In the sequel, we concentrate on a particular technique of this type, originating from
[17, 16] and heavily exploiting semidefinite relaxation; the results to follow are taken from [39].

3.6.3 Situation and goal


In the sequel, we focus on the chance constrained form of a quadratically perturbed scalar linear constraint. A generic form of such a chance constraint is

    Prob_{ζ∼P} { Tr(W Z[ζ]) ≤ 0 } ≥ 1 − ε  ∀P ∈ P,   Z[ζ] = [ 1, ζ^T ; ζ, ζζ^T ],   (3.6.2)

where the data perturbation ζ ∈ R^d, and the symmetric matrix W is affine in the decision variables; we lose nothing by assuming that W itself is the decision variable.
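The matrix Z[ζ] is just [1; ζ][1; ζ]^T, so Tr(W Z[ζ]) is the generic quadratic form in ζ with coefficient matrix W. A quick numerical confirmation with random illustrative data:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
zeta = rng.standard_normal(d)
v = np.concatenate(([1.0], zeta))   # v = [1; zeta]
Z = np.outer(v, v)                  # Z[zeta] = [[1, zeta^T], [zeta, zeta zeta^T]]
W = rng.standard_normal((d + 1, d + 1))
W = (W + W.T) / 2                   # symmetric W
# Tr(W Z[zeta]) equals the quadratic form [1; zeta]^T W [1; zeta]
match = bool(np.isclose(np.trace(W @ Z), v @ W @ v))
print(match)
```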
We start with the description of P. Specifically, we assume that our a priori information on
the distribution P of the uncertain data can be summarized as follows:

P.1 We know that the marginals Pi of P (i.e., the distributions of the entries ζi in ζ) belong
to given families Pi of probability distributions on the axis;

P.2 The matrix VP = Eζ∼P {Z[ζ]} of the first and the second moments of P is known to belong
to a given convex closed subset V of the positive semidefinite cone;

P.3 P is supported on a set S given by a finite system of quadratic constraints:

    S = {ζ : Tr(A_ℓ Z[ζ]) ≤ 0, 1 ≤ ℓ ≤ L}.

The above assumptions model well enough a priori information on uncertain data in typical
applications of decision-making origin.

3.6.4 Approximating chance constraints via Lagrangian relaxation


The approach, which we refer to in the sequel as Lagrangian approximation, implements the following idea. Assume that, given W, we have a systematic way to generate pairs (γ(·) : R^d → R, θ > 0) such that (I) γ(·) ≥ 0 on S, (II) γ(·) ≥ θ on the part of S where Tr(W Z[ζ]) ≥ 0, and (III) we have at our disposal a functional Γ[γ] such that Γ[γ] ≥ Γ_*[γ] := sup_{P∈P} E_{ζ∼P}{γ(ζ)}. Since γ(·) is nonnegative on the support of ζ and is ≥ θ on the part of this support where the body Tr(W Z[ζ]) of the chance constraint is positive, we clearly have Γ[γ] ≥ E_{ζ∼P}{γ(ζ)} ≥ θ Prob_{ζ∼P}{Tr(W Z[ζ]) > 0} for every P ∈ P, so that the condition

    Γ[γ] ≤ εθ

clearly is sufficient for the validity of (3.6.2), and we can further optimize this sufficient condition over the pairs γ, θ produced by our hypothetical mechanism.
The question is how to build the required mechanism, and here is an answer. Let us start with building Γ[·]. Under our assumptions on P, the most natural family of functions γ(·) for which one can bound Γ_*[γ] from above is comprised of functions of the form

    γ(ζ) = Tr(QZ[ζ]) + Σ_{i=1}^d γ_i(ζ_i)   (3.6.3)

with Q ∈ S^{d+1}. For such a γ, one can set

    Γ[γ] = sup_{V∈V} Tr(QV) + Σ_{i=1}^d sup_{P_i∈P_i} ∫ γ_i(ζ_i) dP_i(ζ_i).

Further, the simplest way to ensure (I) is to use Lagrangian relaxation, specifically, to require from the function γ(·) given by (3.6.3) to be such that, with properly chosen µ_ℓ ≥ 0, one has

    Tr(QZ[ζ]) + Σ_{i=1}^d γ_i(ζ_i) + Σ_{ℓ=1}^L µ_ℓ Tr(A_ℓ Z[ζ]) ≥ 0  ∀ζ ∈ R^d.

In turn, the simplest way to ensure the latter relation is to impose on γ_i(ζ_i) the restrictions

    (a) γ_i(ζ_i) ≥ p_i ζ_i² + 2q_i ζ_i + r_i  ∀ζ_i ∈ R,
    (b) Tr(QZ[ζ]) + Σ_{i=1}^d [p_i ζ_i² + 2q_i ζ_i + r_i] + Σ_{ℓ=1}^L µ_ℓ Tr(A_ℓ Z[ζ]) ≥ 0  ∀ζ ∈ R^d;   (3.6.4)

note that (b) reduces to the LMI

    Q + [ Σ_i r_i , q^T ; q , Diag{p} ] + Σ_{ℓ=1}^L µ_ℓ A_ℓ ⪰ 0   (3.6.5)

in variables p = [p_1; ...; p_d], q = [q_1; ...; q_d], r = [r_1; ...; r_d], Q, {µ_ℓ ≥ 0}_{ℓ=1}^L.
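The reduction of (3.6.4.b) to (3.6.5) is simply the identity between a quadratic form in ζ and a bilinear form in [1; ζ]. A small numerical sanity check (all data below are illustrative, and we take L = 0 so that no A_ℓ terms appear):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 2
Q = np.eye(d + 1)                  # some symmetric Q
p = np.array([1.0, 2.0])
q = np.array([0.3, -0.2])
r = np.array([0.5, 0.5])

# M = Q + [[sum r, q^T], [q, Diag(p)]]: the matrix of LMI (3.6.5) with L = 0
M = Q.copy()
M[0, 0] += r.sum()
M[0, 1:] += q
M[1:, 0] += q
M[1:, 1:] += np.diag(p)
assert np.all(np.linalg.eigvalsh(M) >= -1e-9)   # the LMI holds for this data

# ...hence the scalar inequality (3.6.4.b) holds identically in zeta,
# since its left-hand side equals [1; zeta]^T M [1; zeta]
ok = True
for _ in range(1000):
    zeta = rng.standard_normal(d)
    v = np.concatenate(([1.0], zeta))
    lhs = np.trace(Q @ np.outer(v, v)) + np.sum(p * zeta**2 + 2 * q * zeta + r)
    ok = ok and (lhs >= -1e-9) and bool(np.isclose(lhs, v @ M @ v))
print(ok)
```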
Similarly, a sufficient condition for (II) is the existence of p′, q′, r′ ∈ R^d and nonnegative ν_ℓ such that

    (a) γ_i(ζ_i) ≥ p′_i ζ_i² + 2q′_i ζ_i + r′_i  ∀ζ_i ∈ R,
    (b) Tr(QZ[ζ]) + Σ_{i=1}^d [p′_i ζ_i² + 2q′_i ζ_i + r′_i] + Σ_{ℓ=1}^L ν_ℓ Tr(A_ℓ Z[ζ]) − Tr(W Z[ζ]) ≥ θ  ∀ζ ∈ R^d,   (3.6.6)

with (b) reducing to the LMI

    Q + [ Σ_i r′_i − θ , [q′]^T ; q′ , Diag{p′} ] + Σ_{ℓ=1}^L ν_ℓ A_ℓ − W ⪰ 0   (3.6.7)

in variables W, p′, q′, r′ ∈ R^d, Q, {ν_ℓ ≥ 0}_{ℓ=1}^L and θ > 0. Finally, observe that under the restrictions (3.6.4.a), (3.6.6.a), the best (resulting in the smallest possible Γ[γ]) choice of γ_i(·) is

    γ_i(ζ_i) = max[ p_i ζ_i² + 2q_i ζ_i + r_i , p′_i ζ_i² + 2q′_i ζ_i + r′_i ].

We have arrived at the following result:

Proposition 3.6.1 Let (S) be the system of constraints in variables W, p, q, r, p′, q′, r′ ∈ R^d, Q ∈ S^{d+1}, {ν_ℓ ∈ R}_{ℓ=1}^L, θ ∈ R, {µ_ℓ ∈ R}_{ℓ=1}^L comprised of the LMIs (3.6.5), (3.6.7) augmented by the constraints

    µ_ℓ ≥ 0 ∀ℓ,  ν_ℓ ≥ 0 ∀ℓ,  θ > 0   (3.6.8)

and

    sup_{V∈V} Tr(QV) + Σ_{i=1}^d sup_{P_i∈P_i} ∫ max[ p_i ζ_i² + 2q_i ζ_i + r_i , p′_i ζ_i² + 2q′_i ζ_i + r′_i ] dP_i ≤ εθ.   (3.6.9)

(S) is a system of convex constraints which is a safe approximation of the chance constraint (3.6.2), meaning that whenever W can be extended to a feasible solution of (S), W is feasible for the ambiguous chance constraint (3.6.2), P being given by P.1-3. This approximation is tractable, provided that the suprema in (3.6.9) are efficiently computable.

Note that the strict inequality θ > 0 in (3.6.8) can be expressed by the LMI [ θ, 1 ; 1, λ ] ⪰ 0 in an additional slack variable λ.

3.6.4.1 Illustration I

Consider the following situation: there are d = 15 assets with yearly returns r_i = 1 + µ_i + σ_i ζ_i, where µ_i is the expected profit of the i-th asset, σ_i is the return's variability, and ζ_i is a random factor with zero mean supported on [−1, 1]. The quantities µ_i, σ_i used in our illustration are shown on the left plot of figure 3.1. The goal is to distribute $1 between the assets in order to maximize the value-at-1%-risk (the lower 1%-quantile) of the yearly profit. This is the ambiguously chance constrained problem

    Opt = max_{t,x} { t : Prob_{ζ∼P}{ Σ_{i=1}^{15} µ_i x_i + Σ_{i=1}^{15} ζ_i σ_i x_i ≥ t } ≥ 0.99 ∀P ∈ P,  x ≥ 0,  Σ_{i=1}^{15} x_i = 1 }   (3.6.10)

Consider three hypotheses A, B, C about P. In all of them, the ζ_i are zero mean and supported on [−1, 1], so that the domain information is given by the quadratic inequalities ζ_i² ≤ 1, 1 ≤ i ≤ 15;

    Hypothesis | Approximation | Guaranteed profit-at-1%-risk
    -----------|---------------|-----------------------------
    A          | Bernstein     | 0.0552
    B, C       | Lagrangian    | 0.0101

Table 3.1: Optimal values in various approximations of (3.6.10).

[Figure 3.1: Data and results for portfolio allocation. Left plot: expectations and ranges of returns. Right plot: optimal portfolios, diversified (hypothesis A) and single-asset (hypotheses B, C).]

this is exactly what is stated by C. In addition, A says that the ζ_i are independent, and B says that the covariance matrix of ζ is proportional to the unit matrix. Thus, the sets V associated with the hypotheses are, respectively,

    {V ∈ S^{d+1}_+ : V_ii ≤ V_00 = 1, V_ij = 0, i ≠ j},
    {V ∈ S^{d+1}_+ : 1 = V_00 ≥ V_11 = V_22 = ... = V_dd, V_ij = 0, i ≠ j},
    {V ∈ S^{d+1}_+ : V_ii ≤ V_00 = 1, V_0j = 0, 1 ≤ j ≤ d},

where S^k_+ is the cone of positive semidefinite symmetric k × k matrices. Solving the associated safe tractable approximations of the problem, specifically, the Bernstein approximation in the case of A, and the Lagrangian approximations in the cases of B, C, we arrive at the results displayed in table 3.1 and on figure 3.1.
Note that in our illustration, the (identical to each other) single-asset portfolios yielded by the Lagrangian approximation under hypotheses B, C are exactly optimal under the circumstances. Indeed, on closer inspection, there exists a distribution P_* compatible with hypothesis B (and therefore with C as well) such that the probability of "crisis," where all ζ_i simultaneously are equal to −1, is ≥ 0.01. It follows that under hypotheses B, C, the worst-case, over P ∈ P, profit at 1% risk of any portfolio cannot be better than the profit of this portfolio in the case of crisis, and the latter quantity is maximized by the single-asset portfolio depicted on figure 3.1. Note that the Lagrangian approximation turns out to be "intelligent enough" to discover this phenomenon and to infer its consequences. A couple of other instructive observations are as follows:

• the diversified portfolio yielded by the Bernstein approximation²⁴ in the case of crisis exhibits negative profit, meaning that under hypotheses B, C its worst-case profit at 1% risk is negative;

• assume that the yearly returns are observed on a year-by-year basis, and the year-by-year realizations of ζ are independent and identically distributed. It turns out that it takes over 100 years to distinguish, with reliability 0.99, between hypothesis A and the "bad" distribution P_* via the historical data.

24) See, e.g., [39]. This is a specific safe tractable approximation of an affinely perturbed linear scalar chance constraint which heavily exploits the assumption that the entries ζ_i of ζ are mutually independent.
To put these observations into proper perspective, note that it is extremely time-consuming to
identify, to reasonable accuracy and with reasonable reliability, a multi-dimensional distribution
directly from historical data, so that in applications one usually postulates certain parametric
form of the distribution with a relatively small number of parameters to be estimated from
the historical data. When dim ζ is large, the requirement on the distribution to admit a low-
dimensional parameterization usually results in postulating some kind of independence. While in
some applications (e.g., in telecommunications) this independence in many cases can be justified
via the “physics” of the uncertain data, in Finance and other decision-making applications
postulating independence typically is an “act of faith” which is difficult to justify experimentally,
and we believe a decision-maker should be well aware of the dangers related to these “acts of
faith.”

3.6.5 Modification

When building a safe tractable approximation of an affinely perturbed scalar linear chance constraint

    ∀P ∈ P :  Prob_{ζ∼P}{ w_0 + Σ_{i=1}^d ζ_i w_i ≤ 0 } ≥ 1 − ε,   (3.6.11)

one, essentially, is interested in efficiently bounding from above the quantity

    p(w) = sup_{P∈P} E_{ζ∼P}{ f(w_0 + Σ_{i=1}^d ζ_i w_i) }

for a very specific function f(s), namely, f(s) equal to 0 when s ≤ 0 and to 1 when s > 0. There are situations when we are interested in bounding a similar quantity for other functions f, specifically, for a piecewise linear convex function f(s) = max_{1≤j≤J}[a_j + b_j s], see, e.g., [21]. Here again
one can use Lagrange relaxation, which in fact is able to cope with a more general problem of
bounding from above the quantity

    Ψ[W] = sup_{P∈P} E_{ζ∼P}{ f(W, ζ) },   f(W, ζ) = max_{1≤j≤J} Tr(W^j Z[ζ]);

here the matrices W^j ∈ S^{d+1} are affine in the decision variables and W = [W^1, ..., W^J]. Specifically, with the assumptions P.1-3 in force, observe that if, for a given W, a matrix Q ∈ S^{d+1} and vectors p^j, q^j, r^j ∈ R^d, 1 ≤ j ≤ J, are such that the relations

    Tr(QZ[ζ]) + Σ_{i=1}^d [p^j_i ζ_i² + 2q^j_i ζ_i + r^j_i] ≥ Tr(W^j Z[ζ])  ∀ζ ∈ S   (I_j)

take place for 1 ≤ j ≤ J, then the function

    γ(ζ) = Tr(QZ[ζ]) + Σ_{i=1}^d max_{1≤j≤J} [p^j_i ζ_i² + 2q^j_i ζ_i + r^j_i]   (3.6.12)

upper-bounds f(W, ζ) when ζ ∈ S, and therefore the quantity

    F(W, Q, p, q, r) = sup_{V∈V} Tr(QV) + Σ_{i=1}^d sup_{P_i∈P_i} ∫ max_{1≤j≤J} [p^j_i ζ_i² + 2q^j_i ζ_i + r^j_i] dP_i(ζ_i)   (3.6.13)

is an upper bound on Ψ[W]. Using Lagrange relaxation, a sufficient condition for the validity of (I_j), 1 ≤ j ≤ J, is the existence of nonnegative µ^j_ℓ such that

    Q + [ Σ_i r^j_i , [q^j]^T ; q^j , Diag{p^j} ] − W^j + Σ_{ℓ=1}^L µ^j_ℓ A_ℓ ⪰ 0,  1 ≤ j ≤ J.   (3.6.14)

We have arrived at the following result:

Proposition 3.6.2 Consider the system S of constraints in the variables W = [W^1, ..., W^J], t and slack variables Q ∈ S^{d+1}, {p^j, q^j, r^j ∈ R^d : 1 ≤ j ≤ J}, {µ^j_ℓ : 1 ≤ ℓ ≤ L, 1 ≤ j ≤ J} comprised of the LMIs (3.6.14) augmented by the constraints

    (a) µ^j_ℓ ≥ 0  ∀ℓ, j,
    (b) sup_{V∈V} Tr(QV) + Σ_{i=1}^d sup_{P_i∈P_i} ∫ max_{1≤j≤J} [p^j_i ζ_i² + 2q^j_i ζ_i + r^j_i] dP_i(ζ_i) ≤ t.   (3.6.15)

The constraints in the system are convex; they are efficiently computable, provided that the suprema in (3.6.15.b) are efficiently computable, and whenever W, t can be extended to a feasible solution of S, one has Ψ[W] ≤ t. In particular, when the suprema in (3.6.15.b) are efficiently computable, the efficiently computable quantity

    Opt[W] = min_{t,Q,p,q,r,µ} { t : W, t, Q, p, q, r, µ satisfy S }   (3.6.16)

is a convex in W upper bound on Ψ[W].

3.6.5.1 Illustration II

Consider a special case of the above situation where all we know about ζ are the marginal distributions P_i of the ζ_i, with well defined first order moments; in this case, the P_i = {P_i} are singletons, and we lose nothing by setting V = S^{d+1}_+, S = R^d. Let a piecewise linear convex function on the axis

    f(s) = max_{1≤j≤J} [a_j + b_j s]

be given, and let our goal be to bound from above the quantity

    ψ(w) = sup_{P∈P} E_{ζ∼P}{ f(ζ^w) },   ζ^w = w_0 + Σ_{i=1}^d ζ_i w_i.

This is a special case of the problem we have considered, corresponding to

    W^j = [ a_j + b_j w_0    (1/2) b_j w^T ]
          [ (1/2) b_j w           0        ],    w = [w_1; ...; w_d],

i.e., to the "arrow" matrices with zero entries outside the first row and column.

In this case, system S from Proposition 3.6.2, where we set p^j_i = 0, Q = 0, reads

    (a) 2q^j_i = b_j w_i,  1 ≤ i ≤ d,  1 ≤ j ≤ J,    Σ_{i=1}^d r^j_i ≥ a_j + b_j w_0,  1 ≤ j ≤ J,
    (b) Σ_{i=1}^d ∫ max_{1≤j≤J} [2q^j_i ζ_i + r^j_i] dP_i(ζ_i) ≤ t,

so that the upper bound (3.6.16) on sup_{P∈P} E_{ζ∼P}{f(ζ^w)} implies that

    Opt[w] = min_{ {r^j_i} } { Σ_{i=1}^d ∫ max_{1≤j≤J} [b_j w_i ζ_i + r^j_i] dP_i(ζ_i) : Σ_i r^j_i = a_j + b_j w_0, 1 ≤ j ≤ J }   (3.6.17)

is an upper bound on ψ(w). A surprising fact is that in the situation in question (i.e., when P is comprised of all probability distributions with given marginals P_1, ..., P_d), the upper bound Opt[w] on ψ(w) is equal to ψ(w) [15, Proposition 4.5.4]. This result offers an alternative (and simpler) proof of the following remarkable fact established by Dhaene et al [21]: if f is a convex function on the axis and η_1, ..., η_d are scalar random variables with distributions P_1, ..., P_d possessing first order moments, then sup_{P∈P} E_{η∼P}{f(η_1 + ... + η_d)}, P being the family of all distributions on R^d with marginals P_1, ..., P_d, is achieved when η_1, ..., η_d are comonotone, that is, are deterministic monotone transformations of a single random variable uniformly distributed on [0, 1].
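The comonotone maximality in the Dhaene et al result can be observed empirically: for a convex f, pairing the samples of η_1 and η_2 in sorted order (the comonotone coupling of the empirical marginals) yields a value of E f(η_1 + η_2) no smaller than any other pairing of the same samples. A small sketch with illustrative samples and an illustrative piecewise linear convex f:

```python
import numpy as np

rng = np.random.default_rng(3)
# an illustrative piecewise linear convex f (maximum of affine functions)
f = lambda s: np.maximum(np.maximum(0.2 + s, -0.5 * s), 2.0 * s - 1.0)

a = rng.standard_normal(5000)   # empirical marginal of eta_1
b = rng.standard_normal(5000)   # empirical marginal of eta_2

# comonotone coupling: both coordinates increase with the same driver,
# i.e., the sorted samples are paired together
e_comonotone = f(np.sort(a) + np.sort(b)).mean()
e_arbitrary = f(a + b).mean()   # some other coupling of the same marginals
print(e_comonotone >= e_arbitrary)
```

For convex f the map (x, y) ↦ f(x + y) is supermodular, so the sorted pairing maximizes the sum over all pairings of the two sample sets, which is why the comparison above holds deterministically.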

3.7 Extremal ellipsoids


We have already met, on different occasions, the notion of an ellipsoid: a set E in R^n which can be represented as the image of the unit Euclidean ball under an affine mapping:

    E = {x = Au + c | u^T u ≤ 1}   [A ∈ M^{n,q}]   (Ell)

Ellipsoids are very convenient mathematical entities:

• it is easy to specify an ellipsoid: one just points out the corresponding matrix A and vector c;

• the family of ellipsoids is closed with respect to affine transformations: the image of an ellipsoid under an affine mapping is again an ellipsoid;

• there are many operations, like minimization of a linear form, computation of volume, etc., which are easy to carry out when the set in question is an ellipsoid, and are difficult to carry out for more general convex sets.
For these reasons, ellipsoids play an important role in different areas of applied mathematics; in particular, people use ellipsoids to approximate more complicated sets. Just as a simple motivating example, consider a discrete-time linear time invariant controlled system:

    x(t + 1) = Ax(t) + Bu(t), t = 0, 1, ...
    x(0) = 0

and assume that the control is norm-bounded:

    ‖u(t)‖₂ ≤ 1 ∀t.

The question is: what is the set X_T of all states "reachable in a given time T", i.e., the set of all possible values of x(T)? We can easily write down the answer:

    X_T = {x = Bu_{T−1} + ABu_{T−2} + A²Bu_{T−3} + ... + A^{T−1}Bu_0 | ‖u_t‖₂ ≤ 1, t = 0, ..., T − 1},

but this answer is not "explicit"; just to check whether a given vector x belongs to X_T requires solving a nontrivial conic quadratic problem, whose complexity grows with T. In fact the geometry of X_T may be very complicated, so that there is no possibility to get a "tractable" explicit description of the set. This is why in many applications it makes sense to use "simple" (ellipsoidal) approximations of X_T; as we shall see, approximations of this type can be computed in a recurrent and computationally efficient fashion.
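As a cheap numerical illustration of outer approximation of X_T (here a crude outer ball rather than a good ellipsoid, and the system matrices A, B are illustrative), every reachable state obeys ‖x(T)‖₂ ≤ Σ_{t<T} ‖A^t B‖:

```python
import numpy as np

rng = np.random.default_rng(4)
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])
B = np.array([[1.0],
              [0.5]])
T = 10

# sample states x(T) reached by random admissible controls |u(t)| <= 1
samples = []
for _ in range(200):
    x = np.zeros(2)
    for t in range(T):
        u = rng.uniform(-1.0, 1.0, size=1)   # scalar control, |u| <= 1
        x = A @ x + B @ u
    samples.append(x)

# crude outer bound: ||x(T)||_2 <= sum_t ||A^t B||  (spectral norms)
R = sum(np.linalg.norm(np.linalg.matrix_power(A, t) @ B, 2) for t in range(T))
inside = all(np.linalg.norm(x) <= R + 1e-9 for x in samples)
print(inside)
```

The recurrent ellipsoidal approximations discussed later are much tighter than this ball; the point here is only that outer bounds on X_T are cheap to certify while exact membership is not.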
It turns out that the natural framework for different problems of the “best possible” approx-
imation of convex sets by ellipsoids is given by semidefinite programming. In this Section we
intend to consider a number of basic problems of this type.

3.7.1 Preliminaries on ellipsoids


According to our definition, an ellipsoid in R^n is the image of the unit Euclidean ball in some R^q under an affine mapping; e.g., for us a segment in R^100 is an ellipsoid; indeed, it is the image of the one-dimensional Euclidean ball under an affine mapping. In contrast to this, in geometry an ellipsoid in R^n is usually defined as the image of the n-dimensional unit Euclidean ball under an invertible affine mapping, i.e., as the set of the form (Ell) with the additional requirements that q = n, i.e., that the matrix A is square, and that it is nonsingular. In order to avoid confusion, let us call these "true" ellipsoids full-dimensional. Note that a full-dimensional ellipsoid E admits two nice representations:

• First, E can be represented in the form (Ell) with positive definite symmetric A:

E = {x = Au + c | uT u ≤ 1} [A ∈ Sn++ ] (3.7.1)

Indeed, it is clear that if a matrix A represents, via (Ell), a given ellipsoid E, the matrix AU , U
being an orthogonal n × n matrix, represents E as well. It is known from Linear Algebra that by
multiplying a nonsingular square matrix from the right by a properly chosen orthogonal matrix,
we get a positive definite symmetric matrix, so that we always can parameterize a full-dimensional
ellipsoid by a positive definite symmetric A.

• Second, E can be given by a strictly convex quadratic inequality:

E = {x | (x − c)T D(x − c) ≤ 1} [D ∈ Sn++ ]. (3.7.2)

Indeed, one may take D = A−2 , where A is the matrix from the representation (3.7.1).

Note that the set (3.7.2) makes sense and is convex when the matrix D is positive semidefinite rather than positive definite. When D ⪰ 0 is not positive definite, the set (3.7.2) is, geometrically, an "elliptic cylinder": a shift of the direct product of a full-dimensional ellipsoid in the range space of D and the linear subspace complementary to this range (the kernel of D).
In the sequel we deal a lot with volumes of full-dimensional ellipsoids. Since an invertible affine transformation x ↦ Ax + b : R^n → R^n multiplies the volumes of n-dimensional domains by |Det A|, the volume of a full-dimensional ellipsoid E given by (3.7.1) is κ_n Det A, where κ_n is the volume of the n-dimensional unit Euclidean ball. In order to avoid meaningless constant factors, it makes sense to pass from the usual n-dimensional volume mes_n(G) of a domain G to its normalized volume

    Vol(G) = κ_n^{−1} mes_n(G),

i.e., to choose, as the unit of volume, the volume of the unit ball rather than that of the cube with unit edges. From now on, speaking about volumes of n-dimensional domains, we always mean their normalized volume (and omit the word "normalized"). With this convention, the volume of a full-dimensional ellipsoid E given by (3.7.1) is just
volume of a full-dimensional ellipsoid E given by (3.7.1) is just

Vol(E) = DetA,

while for an ellipsoid given by (3.7.2) the volume is

    Vol(E) = [Det D]^{−1/2}.
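The two volume formulas agree: for D = A^{−2} one has Det D = (Det A)^{−2}, so [Det D]^{−1/2} = Det A. A one-line numerical check (with a random illustrative positive definite A):

```python
import numpy as np

rng = np.random.default_rng(5)
G = rng.standard_normal((3, 3))
A = G @ G.T + 3.0 * np.eye(3)        # a positive definite symmetric A
D = np.linalg.inv(A @ A)             # D = A^{-2}, as in (3.7.2)
agree = bool(np.isclose(np.linalg.det(A), np.linalg.det(D) ** -0.5))
print(agree)
```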

3.7.2 Outer and inner ellipsoidal approximations


It was already mentioned that our current goal is to realize how to solve basic problems of “the
best” ellipsoidal approximation E of a given set S. There are two types of these problems:
• Outer approximation, where we are looking for the “smallest” ellipsoid E containing the
set S;

• Inner approximation, where we are looking for the “largest” ellipsoid E contained in the
set S.
In both these problems, a natural way to say when one ellipsoid is “smaller” than another one
is to compare the volumes of the ellipsoids. The main advantage of this viewpoint is that it
results in affine-invariant constructions: an invertible affine transformation multiplies volumes
of all domains by the same constant and therefore preserves ratios of volumes of the domains.
Thus, what we are interested in are the largest volume ellipsoid(s) contained in a given set
S and the smallest volume ellipsoid(s) containing a given set S. In fact these extremal ellipsoids
are unique, provided that S is a solid – a closed and bounded convex set with a nonempty
interior, and are not too bad approximations of the set:
Theorem 3.7.1 [Löwner – Fritz John] Let S ⊂ R^n be a solid. Then
(i) There exists and is uniquely defined the largest volume full-dimensional ellipsoid E_in contained in S. The concentric to E_in n times larger (in linear sizes) ellipsoid contains S; if S is central-symmetric, then already the √n times larger concentric ellipsoid contains S.
(ii) There exists and is uniquely defined the smallest volume full-dimensional ellipsoid E_out containing S. The concentric to E_out n times smaller (in linear sizes) ellipsoid is contained in S; if S is central-symmetric, then already the √n times smaller concentric ellipsoid is contained in S.
The proof is the subject of Exercise 3.37.
The existence of extremal ellipsoids is, of course, good news; but how can one compute these ellipsoids? The possibility to compute efficiently (nearly) extremal ellipsoids heavily depends on the description of S. Let us start with two simple examples.

3.7.2.1 Inner ellipsoidal approximation of a polytope


Let S be a polyhedral set given by a number of linear inequalities:

S = {x ∈ Rn | aTi x ≤ bi , i = 1, ..., m}.



Proposition 3.7.1 Assume that S is a full-dimensional polytope (i.e., is bounded and possesses
a nonempty interior). Then the largest volume ellipsoid contained in S is

E = {x = Z∗ u + z∗ | uT u ≤ 1},

where Z∗ , z∗ are given by an optimal solution to the following semidefinite program:

    maximize t
    s.t.  (a) t ≤ (Det Z)^{1/n},
          (b) Z ⪰ 0,                                        (In)
          (c) ‖Z a_i‖₂ ≤ b_i − a_i^T z,  i = 1, ..., m,

with the design variables Z ∈ Sn , z ∈ Rn , t ∈ R.


Note that (In) indeed is a semidefinite program: both (In.a) and (In.c) can be represented by
LMIs, see Examples 18d and 1-17 in Section 3.2.

Proof. Indeed, an ellipsoid (3.7.1) is contained in S if and only if

    a_i^T (Au + c) ≤ b_i  ∀u : u^T u ≤ 1,

or, which is the same, if and only if

    ‖A a_i‖₂ + a_i^T c = max_{u: u^T u ≤ 1} [a_i^T Au + a_i^T c] ≤ b_i.

Thus, (In.b − c) just express the fact that the ellipsoid {x = Zu + z | u^T u ≤ 1} is contained in S, so that (In) is nothing but the problem of maximizing (a positive power of) the volume of an ellipsoid over the ellipsoids contained in S. 2
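The containment criterion (In.c) is easy to test numerically. A sketch for the box [−1, 1]² and a candidate ball of radius 0.9 (illustrative data), cross-checked by sampling boundary points of the candidate ellipsoid:

```python
import numpy as np

rng = np.random.default_rng(6)
# S = {x : a_i^T x <= b_i}: the box [-1, 1]^2
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.ones(4)
# candidate ellipsoid {Zu + z : ||u||_2 <= 1}: the ball of radius 0.9 at 0
Z = 0.9 * np.eye(2)
z = np.zeros(2)

# criterion (In.c): ||Z a_i||_2 <= b_i - a_i^T z for every i
ok = all(np.linalg.norm(Z @ a) <= bi - a @ z for a, bi in zip(A, b))

# sanity check: boundary points of the ellipsoid satisfy all inequalities
u = rng.standard_normal((1000, 2))
u /= np.linalg.norm(u, axis=1, keepdims=True)   # unit vectors
pts = u @ Z.T + z
inside_all = bool(np.all(pts @ A.T <= b + 1e-12))
print(ok, inside_all)
```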

We see that if S is a polytope given by a set of linear inequalities, then the problem of the
best inner ellipsoidal approximation of S is an explicit semidefinite program and as such can be
efficiently solved. In contrast to this, if S is a polytope given as the convex hull of a finite set:

S = Conv{x1 , ..., xm },

then the problem of the best inner ellipsoidal approximation of S is “computationally in-
tractable” – in this case, it is difficult just to check whether a given candidate ellipsoid is
contained in S.

3.7.2.2 Outer ellipsoidal approximation of a finite set

Let S be a polyhedral set given as a convex hull of a finite set of points:

S = Conv{x1 , ..., xm }.

Proposition 3.7.2 Assume that S is a full-dimensional polytope (i.e., possesses a nonempty interior). Then the smallest volume ellipsoid containing S is

    E = {x | (x − c_*)^T D_* (x − c_*) ≤ 1},

where c_*, D_* are given by an optimal solution (t_*, Z_*, z_*, s_*) to the semidefinite program

    maximize t
    s.t.  (a) t ≤ (Det Z)^{1/n},
          (b) Z ⪰ 0,                                        (Out)
          (c) [ s , z^T ; z , Z ] ⪰ 0,
          (d) x_i^T Z x_i − 2x_i^T z + s ≤ 1,  i = 1, ..., m,

with the design variables Z ∈ S^n, z ∈ R^n, t, s ∈ R, via the relations

    D_* = Z_*;   c_* = Z_*^{−1} z_*.

Note that (Out) indeed is a semidefinite program, cf. Proposition 3.7.1.

Proof. Indeed, let us pass in the description (3.7.2) from the "parameters" D, c to the parameters Z = D, z = Dc, thus coming to the representation

    E = {x | x^T Z x − 2x^T z + z^T Z^{−1} z ≤ 1}.   (!)

An ellipsoid of the latter type contains the points x_1, ..., x_m if and only if

    x_i^T Z x_i − 2x_i^T z + z^T Z^{−1} z ≤ 1,  i = 1, ..., m,

or, which is the same, if and only if there exists s ≥ z^T Z^{−1} z such that

    x_i^T Z x_i − 2x_i^T z + s ≤ 1,  i = 1, ..., m.

Recalling the Lemma on the Schur Complement, we see that the constraints (Out.b − d) say exactly that the ellipsoid (!) contains the points x_1, ..., x_m. Since the volume of such an ellipsoid is (Det Z)^{−1/2}, (Out) is the problem of maximizing a negative power of the volume of an ellipsoid containing the finite set {x_1, ..., x_m}, i.e., the problem of finding the smallest volume ellipsoid containing this finite set. It remains to note that an ellipsoid is convex, so that to say that it contains a finite set {x_1, ..., x_m} is exactly the same as to say that it contains the convex hull of this finite set. 2
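The change of parameters behind the proof rests on the identity x^T Z x − 2x^T z + z^T Z^{−1} z = (x − c)^T Z (x − c) with c = Z^{−1} z; a quick numerical confirmation with random illustrative data:

```python
import numpy as np

rng = np.random.default_rng(7)
d = 3
G = rng.standard_normal((d, d))
Z = G @ G.T + np.eye(d)            # Z positive definite
z = rng.standard_normal(d)
c = np.linalg.solve(Z, z)          # c = Z^{-1} z
x = rng.standard_normal(d)

lhs = x @ Z @ x - 2 * x @ z + z @ c       # z @ c = z^T Z^{-1} z
rhs = (x - c) @ Z @ (x - c)
same = bool(np.isclose(lhs, rhs))
print(same)
```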

We see that if S is a polytope given as a convex hull of a finite set, then the problem of
the best outer ellipsoidal approximation of S is an explicit semidefinite program and as such
can be efficiently solved. In contrast to this, if S is a polytope given by a list of inequality
constraints, then the problem of the best outer ellipsoidal approximation of S is “computationally
intractable” – in this case, it is difficult just to check whether a given candidate ellipsoid contains
S.

3.7.3 Ellipsoidal approximations of unions/intersections of ellipsoids


Speaking informally, Proposition 3.7.1 deals with inner ellipsoidal approximation of the intersec-
tion of “degenerate” ellipsoids, namely, half-spaces (a half-space is just a very large Euclidean
ball!) Similarly, Proposition 3.7.2 deals with the outer ellipsoidal approximation of the union
of degenerate ellipsoids, namely, points (a point is just a ball of zero radius!). We are about
to demonstrate that when passing from “degenerate” ellipsoids to the “normal” ones, we still
have a possibility to reduce the corresponding approximation problems to explicit semidefinite
programs. The key observation here is as follows:
Proposition 3.7.3 [18] An ellipsoid

    E = E(Z, z) ≡ {x = Zu + z | uT u ≤ 1}    [Z ∈ Mn,q ]

is contained in the full-dimensional ellipsoid

    W = W (Y, y) ≡ {x | (x − y)T Y T Y (x − y) ≤ 1}    [Y ∈ Mn,n , DetY ≠ 0]

if and only if there exists λ such that

    [      In        Y (z − y)    Y Z ]
    [ (z − y)T Y T     1 − λ       0  ]  ⪰ 0                                  (3.7.3)
    [    ZT Y T          0        λIq ]

as well as if and only if there exists λ such that

    [ Y −1 (Y −1 )T    z − y      Z  ]
    [   (z − y)T       1 − λ      0  ]  ⪰ 0                                   (3.7.4)
    [      ZT            0       λIq ]
Proof. We clearly have

        E ⊂ W
    ⇕
        uT u ≤ 1 ⇒ (Zu + z − y)T Y T Y (Zu + z − y) ≤ 1
    ⇕
        uT u ≤ t2 ⇒ (Zu + t(z − y))T Y T Y (Zu + t(z − y)) ≤ t2
    ⇕  [S-Lemma]
        ∃λ ≥ 0 : [t2 − (Zu + t(z − y))T Y T Y (Zu + t(z − y))] − λ[t2 − uT u] ≥ 0  ∀(u, t)
    ⇕
        ∃λ ≥ 0 : [ 1 − λ − (z − y)T Y T Y (z − y)    −(z − y)T Y T Y Z ]  ⪰ 0
                 [      −Z T Y T Y (z − y)          λIq − Z T Y T Y Z  ]
    ⇕
        ∃λ ≥ 0 : [ 1 − λ       ]   [ (z − y)T Y T ]
                 [         λIq ] − [    Z T Y T   ] ( Y (z − y)   Y Z )  ⪰ 0

Now note that in view of Lemma on the Schur Complement the matrix

    [ 1 − λ       ]   [ (z − y)T Y T ]
    [         λIq ] − [    Z T Y T   ] ( Y (z − y)   Y Z )
is positive semidefinite if and only if the matrix in (3.7.3) is so. Thus, E ⊂ W if and only if
there exists a nonnegative λ such that the matrix in (3.7.3), let it be called P (λ), is positive
semidefinite. Since the latter matrix can be positive semidefinite only when λ ≥ 0, we have
proved the first statement of the proposition. To prove the second statement, note that the
matrix in (3.7.4), let it be called Q(λ), is closely related to P (λ):

    Q(λ) = S P (λ) S T ,    S = Diag( Y −1 , 1, Iq )

(a nonsingular block-diagonal matrix),

so that Q(λ) is positive semidefinite if and only if P (λ) is so. 2
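Proposition 3.7.3 is easy to sanity-check numerically in the simplest co-centered case in R2 : for E the ball of radius r and W the ball of radius R (i.e., Z = rI2 , z = y = 0, Y = R−1 I2 ) the matrix in (3.7.3) is positive semidefinite for some λ exactly when r ≤ R (any λ ∈ [(r/R)2 , 1] works). The following sketch (an illustration with sample data, not part of the original text) verifies this, testing positive semidefiniteness via nonnegativity of all principal minors (cf. Exercise 3.1 below):

```python
from itertools import combinations

def det(M):
    # recursive determinant by first-row expansion (fine for tiny matrices)
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def is_psd(M, tol=1e-9):
    # a symmetric matrix is PSD iff ALL its principal minors are nonnegative
    n = len(M)
    return all(det([[M[i][j] for j in S] for i in S]) >= -tol
               for k in range(1, n + 1) for S in combinations(range(n), k))

def P(lam, r, R):
    # the 5x5 matrix of (3.7.3) for Z = r*I2, z = y = 0, Y = (1/R)*I2
    c = r / R  # the off-diagonal blocks Y*Z = (r/R)*I2
    return [[1, 0, 0,       c,   0],
            [0, 1, 0,       0,   c],
            [0, 0, 1 - lam, 0,   0],
            [c, 0, 0,       lam, 0],
            [0, c, 0,       0,   lam]]

def contained(r, R):
    # E (ball radius r) lies in W (ball radius R) iff some lam in [0,1] works,
    # which here reduces to lam in [(r/R)**2, 1]; scan a grid of lam's
    return any(is_psd(P(k / 100.0, r, R)) for k in range(101))

print(contained(0.5, 1.0), contained(1.2, 1.0))
```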

Here are some consequences of Proposition 3.7.3.

3.7.3.1 Inner ellipsoidal approximation of the intersection of full-dimensional el-


lipsoids

Let
Wi = {x | (x − ci )T Bi^2 (x − ci ) ≤ 1}    [Bi ∈ Sn++ ],

i = 1, ..., m, be given full-dimensional ellipsoids in Rn ; assume that the intersection W of


these ellipsoids possesses a nonempty interior. Then the problem of the best inner ellipsoidal
approximation of W is the explicit semidefinite program

maximize  t
s.t.
          t ≤ (DetZ)1/n ,

          [      In          Bi (z − ci )    Bi Z  ]
          [ (z − ci )T Bi      1 − λi         0    ]  ⪰ 0,   i = 1, ..., m,   (InEll)
          [     ZBi              0          λi In  ]

          Z ⪰ 0

with the design variables Z ∈ Sn , z ∈ Rn , λi , t ∈ R. The largest ellipsoid contained in
W = ∩_{i=1}^m Wi is given by an optimal solution (Z∗ , z∗ , t∗ , {λ∗i }) of (InEll) via the relation

    E = {x = Z∗ u + z∗ | uT u ≤ 1}.

Indeed, by Proposition 3.7.3 the LMIs

    [      In          Bi (z − ci )    Bi Z  ]
    [ (z − ci )T Bi      1 − λi         0    ]  ⪰ 0,   i = 1, ..., m
    [     ZBi              0          λi In  ]

express the fact that the ellipsoid {x = Zu + z | uT u ≤ 1} with Z ⪰ 0 is contained in


every one of the ellipsoids Wi , i.e., is contained in the intersection W of these ellipsoids.
Consequently, (InEll) is exactly the problem of maximizing (a positive power of) the volume
of an ellipsoid over the ellipsoids contained in W .

3.7.3.2 Outer ellipsoidal approximation of the union of ellipsoids


Let
Wi = {x = Ai u + ci | uT u ≤ 1} [Ai ∈ Mn,ki ],
i = 1, ..., m, be given ellipsoids in Rn ; assume that the convex hull W of the union of these
ellipsoids possesses a nonempty interior. Then the problem of the best outer ellipsoidal approx-
imation of W is the explicit semidefinite program

maximize  t
s.t.
          t ≤ (DetY )1/n ,

          [      In         Y ci − z     Y Ai   ]
          [ (Y ci − z)T      1 − λi       0     ]  ⪰ 0,   i = 1, ..., m,      (OutEll)
          [     ATi Y           0        λi Iki ]

          Y ⪰ 0

with the design variables Y ∈ Sn , z ∈ Rn , λi , t ∈ R. The smallest ellipsoid containing
W = Conv( ∪_{i=1}^m Wi ) is given by an optimal solution (Y∗ , z∗ , t∗ , {λ∗i }) of (OutEll) via
the relation

    E = {x | (x − y∗ )T Y∗2 (x − y∗ ) ≤ 1},    y∗ = Y∗−1 z∗ .

Indeed, by Proposition 3.7.3, for Y ≻ 0 the LMIs

    [      In         Y ci − z     Y Ai   ]
    [ (Y ci − z)T      1 − λi       0     ]  ⪰ 0,   i = 1, ..., m
    [     ATi Y           0        λi Iki ]

express the fact that the ellipsoid E = {x | (x − Y −1 z)T Y 2 (x − Y −1 z) ≤ 1} contains every


one of the ellipsoids Wi , i.e., contains the convex hull W of the union of these ellipsoids.
The volume of the ellipsoid E is (DetY )−1 ; consequently, (OutEll) is exactly the problem
of maximizing a negative power (i.e., of minimizing a positive power) of the volume of an
ellipsoid over the ellipsoids containing W .

3.7.4 Approximating sums of ellipsoids


Let us come back to our motivating example, where we were interested in building an ellipsoidal
approximation of the set XT of all states x(T ) to which a given discrete time invariant linear
system
x(t + 1) = Ax(t) + Bu(t), t = 0, ..., T − 1
x(0) = 0
can be driven in time T by a control u(·) satisfying the norm bound

ku(t)k2 ≤ 1, t = 0, ..., T − 1.

How could we build such an approximation recursively? Let Xt be the set of all states where
the system can be driven in time t ≤ T , and assume that we have already built inner and outer
ellipsoidal approximations Ein^t and Eout^t of the set Xt :

    Ein^t ⊂ Xt ⊂ Eout^t .

Let also
E = {x = Bu | uT u ≤ 1}.
Then the set

    Fin^{t+1} = A Ein^t + E ≡ {x = Ay + z | y ∈ Ein^t , z ∈ E}

clearly is contained in Xt+1 , so that a natural recurrent way to define an inner ellipsoidal ap-
proximation of Xt+1 is to take as Ein^{t+1} the largest volume ellipsoid contained in Fin^{t+1} .
Similarly, the set

    Fout^{t+1} = A Eout^t + E ≡ {x = Ay + z | y ∈ Eout^t , z ∈ E}

clearly covers Xt+1 , and the natural recurrent way to define an outer ellipsoidal approximation
of Xt+1 is to take as Eout^{t+1} the smallest volume ellipsoid containing Fout^{t+1} .
Note that the sets Fin^{t+1} and Fout^{t+1} are of the same structure: each of them is the arithmetic
sum {x = v + w | v ∈ V, w ∈ W } of two ellipsoids V and W . Thus, we come to the problem
as follows: Given two ellipsoids W, V , find the best inner and outer ellipsoidal approximations
of their arithmetic sum W + V . In fact, it makes sense to consider a little bit more general
problem:

Given m ellipsoids W1 , ..., Wm in Rn , find the best inner and outer ellipsoidal ap-
proximations of the arithmetic sum

W = {x = w1 + w2 + ... + wm | wi ∈ Wi , i = 1, ..., m}

of the ellipsoids W1 , ..., Wm .

In fact, we have posed two different problems: the one of inner approximation of W (let this
problem be called (I)) and the other one, let it be called (O), of outer approximation. It seems
that in general both these problems are difficult (at least when m is not fixed once and for all).
There exist, however, “computationally tractable” approximations of both (I) and (O) we are
about to consider.
In considerations to follow we assume, for the sake of simplicity, that the ellipsoids W1 , ..., Wm
are full-dimensional (which is not a severe restriction – a “flat” ellipsoid can be easily approxi-
mated by a “nearly flat” full-dimensional ellipsoid). Besides this, we may assume without loss
of generality that all our ellipsoids Wi are centered at the origin. Indeed, we have Wi = ci + Vi ,
where ci is the center of Wi and Vi = Wi − ci is centered at the origin; consequently,

W1 + ... + Wm = (c1 + ... + cm ) + (V1 + ... + Vm ),

so that the problems (I) and (O) for the ellipsoids W1 , ..., Wm can be straightforwardly reduced
to similar problems for the centered at the origin ellipsoids V1 , ..., Vm .

3.7.4.1 Problem (O)


Let the ellipsoids W1 , ..., Wm be represented as

Wi = {x ∈ Rn | xT Bi x ≤ 1}    [Bi ≻ 0].

Our strategy to approximate (O) is very natural: we intend to build a parametric family of
ellipsoids in such a way that, first, every ellipsoid from the family contains the arithmetic sum
W1 + ... + Wm of given ellipsoids, and, second, the problem of finding the smallest volume
ellipsoid within the family is a “computationally tractable” problem (specifically, is an explicit


semidefinite program)25) . The seemingly simplest way to build the desired family was proposed
in [18] and is based on the idea of semidefinite relaxation. Let us start with the observation that
an ellipsoid
W [Z] = {x | xT Zx ≤ 1}    [Z ≻ 0]
contains W1 + ... + Wm if and only if the following implication holds:
    ∀ ( {xi ∈ Rn }_{i=1}^m ) :  [xi ]T Bi xi ≤ 1, i = 1, ..., m  ⇒  (x1 + ... + xm )T Z(x1 + ... + xm ) ≤ 1.   (∗)

Now let B i be the (nm) × (nm) block-diagonal matrix with m diagonal blocks of the size n × n
each, such that all diagonal blocks, except the i-th one, are zero, and the i-th block is the n × n
matrix Bi . Let also M [Z] denote the (mn) × (mn) block matrix with m2 blocks of the size n × n
each, every one of these blocks being the matrix Z. This is how B 1 , B 2 and M [Z] look in the
case of m = 2:

    B 1 = [ B1  0 ] ,    B 2 = [ 0  0  ] ,    M [Z] = [ Z  Z ] .
          [ 0   0 ]            [ 0  B2 ]              [ Z  Z ]
Validity of implication (∗) clearly is equivalent to the following fact:
(*.1) For every (mn)-dimensional vector x such that

    xT B i x ≡ Tr(B i X[x]) ≤ 1,   i = 1, ..., m,    where X[x] = xxT ,

one has

    xT M [Z]x ≡ Tr(M [Z]X[x]) ≤ 1.
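The block construction is easy to mechanize. The sketch below (an illustration with sample data, not part of the text) builds B i and M [Z] for m = 2, n = 2 as nested lists and verifies the identities xT B i x = [xi ]T Bi xi and xT M [Z]x = (x1 + x2 )T Z(x1 + x2 ) underlying (∗):

```python
def zeros(n):
    return [[0.0] * n for _ in range(n)]

def block_Bi(Bs, i):
    # (mn)x(mn) block-diagonal matrix whose only nonzero diagonal block is B_i
    m, n = len(Bs), len(Bs[0])
    M = zeros(m * n)
    for r in range(n):
        for c in range(n):
            M[i * n + r][i * n + c] = Bs[i][r][c]
    return M

def block_M(Z, m):
    # (mn)x(mn) matrix with every n x n block equal to Z
    n = len(Z)
    M = zeros(m * n)
    for bi in range(m):
        for bj in range(m):
            for r in range(n):
                for c in range(n):
                    M[bi * n + r][bj * n + c] = Z[r][c]
    return M

def quad(M, x):  # the quadratic form x^T M x
    return sum(M[i][j] * x[i] * x[j]
               for i in range(len(x)) for j in range(len(x)))

# sample data (chosen here only for illustration)
B1, B2 = [[2.0, 0.0], [0.0, 1.0]], [[1.0, 0.5], [0.5, 1.0]]
Z = [[1.0, 0.25], [0.25, 0.5]]
x1, x2 = [0.3, -0.7], [1.1, 0.4]
x = x1 + x2                         # stacked (mn)-vector
s = [x1[k] + x2[k] for k in range(2)]

print(abs(quad(block_Bi([B1, B2], 0), x) - quad(B1, x1)) < 1e-12)
print(abs(quad(block_M(Z, 2), x) - quad(Z, s)) < 1e-12)
```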

Now we can use the standard trick: the rank one matrix X[x] is positive semidefinite, so that
we for sure enforce the validity of the above fact when enforcing the following stronger fact:
(*.2) For every (mn) × (mn) symmetric positive semidefinite matrix X such that

Tr(B i X) ≤ 1, i = 1, ..., m,

one has
Tr(M [Z]X) ≤ 1.

We have arrived at the following result.


(D) Let a positive definite n × n matrix Z be such that the optimal value in the
semidefinite program
    max_X { Tr(M [Z]X) | Tr(B i X) ≤ 1, i = 1, ..., m,  X ⪰ 0 }              (SDP)

is ≤ 1. Then the ellipsoid

W [Z] = {x | xT Zx ≤ 1}

contains the arithmetic sum W1 + ... + Wm of the ellipsoids Wi = {x | xT Bi x ≤ 1}.


25)
Note that we, in general, do not pretend that our parametric family includes all ellipsoids containing
W1 + ... + Wm , so that the ellipsoid we end with should be treated as nothing more than a “computable surrogate”
of the smallest volume ellipsoid containing the sum of Wi ’s.

We are basically done: the set of those symmetric matrices Z for which the optimal value in
(SDP) is ≤ 1 is SD-representable; indeed, the problem is clearly strictly feasible, and Z affects,
in a linear fashion, the objective of the problem only. On the other hand, the optimal value in an
essentially strictly feasible semidefinite maximization program is an SDr function of the objective
(“semidefinite version” of Proposition 2.4.4). Consequently, the set of those Z for which the
optimal value in (SDP) is ≤ 1 is SDr (as the inverse image, under affine mapping, of the level set
of an SDr function). Thus, the “parameter” Z of those ellipsoids W [Z] which satisfy the premise
in (D) and thus contain W1 + ... + Wm varies in an SDr set Z. Consequently, the problem of
finding the smallest volume ellipsoid in the family {W [Z]}Z∈Z is equivalent to the problem of
maximizing a positive power of Det(Z) over the SDr set Z, i.e., is equivalent to a semidefinite
program.
It remains to build the aforementioned semidefinite program. By the Conic Duality Theorem
the optimal value in the (clearly strictly feasible) maximization program (SDP) is ≤ 1 if and
only if the dual problem
    min_λ { Σ_{i=1}^m λi  |  Σ_{i=1}^m λi B i ⪰ M [Z],  λi ≥ 0, i = 1, ..., m }

admits a feasible solution with the value of the objective ≤ 1, or, which is clearly the same
(why?), admits a feasible solution with the value of the objective equal to 1. In other words,
whenever Z ≻ 0 is such that M [Z] ⪯ some convex combination of the matrices B i , the set

W [Z] = {x | xT Zx ≤ 1}

(which is an ellipsoid when Z ≻ 0) contains the set W1 + ... + Wm . We have arrived at the
following result (see [18], Section 3.7.4):
Proposition 3.7.4 Given m centered at the origin full-dimensional ellipsoids

Wi = {x ∈ Rn | xT Bi x ≤ 1}    [Bi ≻ 0],

i = 1, ..., m, in Rn , let us associate with these ellipsoids the semidefinite program

    max_{t,Z,λ}  t
    s.t.
                 t ≤ Det1/n (Z)
                 Σ_{i=1}^m λi B i ⪰ M [Z]
                 λi ≥ 0, i = 1, ..., m                                        (Õ)
                 Σ_{i=1}^m λi = 1
                 Z ⪰ 0
where B i is the (mn) × (mn) block-diagonal matrix with blocks of the size n × n and the only
nonzero diagonal block (the i-th one) equal to Bi , and M [Z] is the (mn)×(mn) matrix partitioned
into m2 blocks, every one of them being Z. Every feasible solution (Z, ...) to this program with
positive value of the objective produces the ellipsoid

W [Z] = {x | xT Zx ≤ 1}

which contains W1 + ... + Wm , and the volume of this ellipsoid is at most t−n/2 . The smallest
volume ellipsoid which can be obtained in this way is given by (any) optimal solution of (Õ).
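A sanity check one can run on (Õ) in the smallest possible case (an illustration with sample data, not part of the text): for n = 1, m = 2 the ellipsoids are intervals Wi = [−ai , ai ], i.e., Bi = 1/ai^2 , and the exact smallest "ellipsoid" containing W1 + W2 corresponds to Z = 1/(a1 + a2 )2 . The weights λi = ai /(a1 + a2 ) then satisfy the key constraint Σ λi B i ⪰ M [Z] (with zero determinant slack), so the relaxation is exact here:

```python
def psd2(M, tol=1e-9):
    # PSD test for a symmetric 2x2 matrix: nonnegative diagonal and determinant
    return (M[0][0] >= -tol and M[1][1] >= -tol
            and M[0][0] * M[1][1] - M[0][1] * M[1][0] >= -tol)

def certified(a1, a2, Z):
    # n = 1, m = 2: B^1 = Diag(1/a1^2, 0), B^2 = Diag(0, 1/a2^2), and M[Z] is
    # the all-Z 2x2 matrix; look for weights l1 + l2 = 1, l1, l2 >= 0 with
    # l1*B^1 + l2*B^2 - M[Z] PSD (the key constraint of the program above)
    cands = [a1 / (a1 + a2)] + [k / 200.0 for k in range(201)]
    for l1 in cands:
        D = [[l1 / a1 ** 2 - Z, -Z], [-Z, (1 - l1) / a2 ** 2 - Z]]
        if psd2(D):
            return True
    return False

a1, a2 = 1.0, 2.0
Z_exact = 1.0 / (a1 + a2) ** 2      # the interval [-(a1+a2), a1+a2] itself
print(certified(a1, a2, Z_exact))          # the exact sum is certified
print(certified(a1, a2, 1.5 * Z_exact))    # a strictly smaller interval is not
```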

How “conservative” is (Õ) ? The ellipsoid W [Z ∗ ] given by the optimal solution of (Õ)
contains the arithmetic sum W of the ellipsoids Wi , but is not necessarily the smallest volume
ellipsoid containing W ; all we know is that this ellipsoid is the smallest volume one in a certain
subfamily of the family of all ellipsoids containing W . “In the nature” there exists the “true”
smallest volume ellipsoid W [Z ∗∗ ] = {x | xT Z ∗∗ x ≤ 1}, Z ∗∗ ≻ 0, containing W . It is natural
to ask how large the ratio

    ϑ = Vol(W [Z ∗ ]) / Vol(W [Z ∗∗ ])

could be.
The answer is as follows:
Proposition 3.7.5 One has ϑ ≤ (π/2)n/2 .

Note that the bound stated by Proposition 3.7.5 is not as bad as it looks: the natural way to
compare the “sizes” of two n-dimensional bodies E ′ , E ′′ is to look at the ratio of their average
linear sizes (Vol(E ′ )/Vol(E ′′ ))1/n (it is natural to assume that when shrinking a body by a
certain factor, say 2, we reduce the “size” of the body exactly by this factor, and not by 2n ).
With this approach, the “level of non-optimality” of W [Z ∗ ] is no more than √(π/2) = 1.253...,
i.e., is within a 25% margin.
Proof of Proposition 3.7.5: Since W [Z ∗∗ ] contains W , the implication (*.1) holds true, i.e.,
one has

    max_{x∈Rmn} { xT M [Z ∗∗ ]x | xT B i x ≤ 1, i = 1, ..., m } ≤ 1.

Since the matrices B i , i = 1, ..., m, commute and M [Z ∗∗ ] ⪰ 0, we can apply Proposition 3.8.4
(see Section 3.8.7.3) to conclude that there exist nonnegative µi , i = 1, ..., m, such that

    M [Z ∗∗ ] ⪯ Σ_{i=1}^m µi B i ,    Σ_i µi ≤ π/2 .

It follows that setting λi = (Σ_j µj )−1 µi , Z = (Σ_j µj )−1 Z ∗∗ , t = Det1/n (Z), we get a feasible
solution of (Õ). Recalling the origin of Z ∗ , we come to

    Vol(W [Z ∗ ]) ≤ Vol(W [Z]) = (Σ_j µj )n/2 Vol(W [Z ∗∗ ]) ≤ (π/2)n/2 Vol(W [Z ∗∗ ]),

as claimed. 2

Problem (O), the case of “co-axial” ellipsoids. Consider the co-axial case – the one when
there exist coordinates (not necessarily orthogonal) such that all m quadratic forms defining the
ellipsoids Wi are diagonal in these coordinates, or, which is the same, there exists a nonsingular
matrix C such that all the matrices C T Bi C, i = 1, ..., m, are diagonal. Note that the case of
m = 2 always is co-axial – Linear Algebra says that every two homogeneous quadratic forms, at
least one of the forms being positive outside of the origin, become diagonal in properly chosen
coordinates.
We are about to prove that

(E) In the “co-axial” case, (Õ) yields the smallest in volume ellipsoid containing
W1 + ... + Wm .

Consider the co-axial case. Since we are interested in volume-related issues, and the ratio
of volumes remains unchanged under affine transformations, we can assume w.l.o.g. that the
matrices Bi defining the ellipsoids Wi = {x | xT Bi x ≤ 1} are positive definite and diagonal; let
bi` be the `-th diagonal entry of Bi , ` = 1, ..., n.
By the Fritz John Theorem, “in the nature” there exists a unique smallest volume ellipsoid
W∗ which contains W1 + ... + Wm ; from uniqueness combined with the fact that the sum of our
ellipsoids is symmetric w.r.t. the origin it follows that this optimal ellipsoid W∗ is centered at
the origin:
W∗ = {x | xT Z∗ x ≤ 1}
with certain positive definite matrix Z∗ .
Our next observation is that the matrix Z∗ is diagonal. Indeed, let E be a diagonal matrix
with diagonal entries ±1. Since all Bi ’s are diagonal, the sum W1 + ... + Wm remains invariant
under multiplication by E:

x ∈ W1 + ... + Wm ⇔ Ex ∈ W1 + ... + Wm .

It follows that the ellipsoid E(W∗ ) = {x | xT (E T Z∗ E)x ≤ 1} covers W1 + ... + Wm along with
W∗ and of course has the same volume as W∗ ; from the uniqueness of the optimal ellipsoid it
follows that E(W∗ ) = W∗ , whence E T Z∗ E = Z∗ (why?). Since the concluding relation should
be valid for all diagonal matrices E with diagonal entries ±1, Z∗ must be diagonal.
Now assume that the set

W (z) = {x | xT Diag(z)x ≤ 1} (3.7.5)

given by a nonnegative vector z contains W1 + ... + Wm . Then the following implication holds
true:
    ∀ {x_ℓ^i}_{i=1,...,m; ℓ=1,...,n} :  Σ_{ℓ=1}^n b_{iℓ} (x_ℓ^i)2 ≤ 1, i = 1, ..., m  ⇒  Σ_{ℓ=1}^n z_ℓ (x_ℓ^1 + x_ℓ^2 + ... + x_ℓ^m )2 ≤ 1.   (3.7.6)

Denoting y_ℓ^i = (x_ℓ^i)2 and taking into account that z_ℓ ≥ 0, we see that the validity of (3.7.6)
implies the validity of the implication

    ∀ {y_ℓ^i ≥ 0}_{i=1,...,m; ℓ=1,...,n} :  Σ_{ℓ=1}^n b_{iℓ} y_ℓ^i ≤ 1, i = 1, ..., m  ⇒  Σ_{ℓ=1}^n z_ℓ [ Σ_{i=1}^m y_ℓ^i + 2 Σ_{1≤i<j≤m} √(y_ℓ^i y_ℓ^j) ] ≤ 1.   (3.7.7)
Now let Y be an (mn) × (mn) symmetric matrix satisfying the relations

    Y ⪰ 0;   Tr(Y B i ) ≤ 1, i = 1, ..., m.                                   (3.7.8)

Let us partition Y into m2 square blocks, and let Y_ℓ^{ij} be the ℓ-th diagonal entry of the ij-th
block of Y . For all i, j with 1 ≤ i < j ≤ m, and all ℓ, 1 ≤ ℓ ≤ n, the 2 × 2 matrix

    [ Y_ℓ^{ii}   Y_ℓ^{ij} ]
    [ Y_ℓ^{ij}   Y_ℓ^{jj} ]

is a principal submatrix of Y and therefore is positive semidefinite along with Y , whence

    Y_ℓ^{ij} ≤ √( Y_ℓ^{ii} Y_ℓ^{jj} ).                                        (3.7.9)

In view of (3.7.8), the numbers y_ℓ^i ≡ Y_ℓ^{ii} satisfy the premise in the implication (3.7.7),
so that

    1 ≥ Σ_{ℓ=1}^n z_ℓ [ Σ_{i=1}^m Y_ℓ^{ii} + 2 Σ_{1≤i<j≤m} √( Y_ℓ^{ii} Y_ℓ^{jj} ) ]   [by (3.7.7)]
      ≥ Σ_{ℓ=1}^n z_ℓ [ Σ_{i=1}^m Y_ℓ^{ii} + 2 Σ_{1≤i<j≤m} Y_ℓ^{ij} ]                 [since z ≥ 0 and by (3.7.9)]
      = Tr(Y M [Diag(z)]).

Thus, (3.7.8) implies the inequality Tr(Y M [Diag(z)]) ≤ 1, i.e., the implication

    Y ⪰ 0, Tr(Y B i ) ≤ 1, i = 1, ..., m   ⇒   Tr(Y M [Diag(z)]) ≤ 1

holds true. Since the premise in this implication is strictly feasible, the validity of the implication,
by Semidefinite Duality, implies the existence of nonnegative λi , Σ_i λi ≤ 1, such that

    M [Diag(z)] ⪯ Σ_i λi B i .
Combining our observations, we come to the conclusion as follows:
In the case of diagonal matrices Bi , if the set (3.7.5), given by a nonnegative vector
z, contains W1 + ... + Wm , then the matrix Diag(z) can be extended to a feasible
solution of the problem (Õ). Consequently, in the case in question the approximation
scheme given by (Õ) yields the minimum volume ellipsoid containing W1 + ... + Wm
(since the latter ellipsoid, as we have seen, is of the form (3.7.5) with z ≥ 0).
It remains to note that the approximation scheme associated with (Õ) is affine-invariant, so that
the above conclusion remains valid when we replace in its premise “the case of diagonal matrices
Bi ” with “the co-axial case”.
Remark 3.7.1 In fact, (E) is an immediate consequence of the following fact (which, essentially,
is proved in the above reasoning):
Let A1 , ..., Am , B be symmetric matrices such that the off-diagonal entries of all Ai ’s are
nonpositive, and the off-diagonal entries of B are nonnegative. Assume also that the system of
inequalities
xT Ai x ≤ ai , i = 1, ..., m (S)
is strictly feasible. Then the inequality
xT Bx ≤ b
is a consequence of the system (S) if and only if it is a “linear consequence” of (S), i.e., if and
only if there exist nonnegative weights λi such that
    B ⪯ Σ_i λi Ai ,    Σ_i λi ai ≤ b.
In other words, in the case in question the optimization program
    max_x { xT Bx | xT Ai x ≤ ai , i = 1, ..., m }

and its standard semidefinite relaxation

    max_X { Tr(BX) | X ⪰ 0, Tr(Ai X) ≤ ai , i = 1, ..., m }
share the same optimal value.

3.7.4.2 Problem (I)


Let us represent the given centered at the origin ellipsoids Wi as

Wi = {x = Ai u | uT u ≤ 1}    [Det(Ai ) ≠ 0].

We start from the following observation:

(F) An ellipsoid E[Z] = {x = Zu | uT u ≤ 1} [Det(Z) ≠ 0] is contained in the sum
W1 + ... + Wm of the ellipsoids Wi if and only if one has

    ∀x :  ‖Z T x‖2 ≤ Σ_{i=1}^m ‖ATi x‖2 .                                     (3.7.10)

Indeed, assume, first, that there exists a vector x∗ such that the inequality in (3.7.10) is
violated at x = x∗ , and let us prove that in this case E[Z] is not contained in the set
W = W1 + ... + Wm . We have

    max_{x∈Wi} xT∗ x = max_u { xT∗ Ai u | uT u ≤ 1 } = ‖ATi x∗ ‖2 ,   i = 1, ..., m,

and similarly

    max_{x∈E[Z]} xT∗ x = ‖Z T x∗ ‖2 ,

whence

    max_{x∈W} xT∗ x = max_{xi ∈Wi} xT∗ (x1 + ... + xm ) = Σ_{i=1}^m max_{xi ∈Wi} xT∗ xi
                    = Σ_{i=1}^m ‖ATi x∗ ‖2 < ‖Z T x∗ ‖2 = max_{x∈E[Z]} xT∗ x,

and we see that E[Z] cannot be contained in W . Vice versa, assume that E[Z] is not
contained in W , and let y ∈ E[Z]\W . Since W is a convex compact set and y ∉ W , there
exists a vector x∗ such that xT∗ y > max_{x∈W} xT∗ x, whence, due to the previous computation,

    ‖Z T x∗ ‖2 = max_{x∈E[Z]} xT∗ x ≥ xT∗ y > max_{x∈W} xT∗ x = Σ_{i=1}^m ‖ATi x∗ ‖2 ,

and we have found a point x = x∗ at which the inequality in (3.7.10) is violated. Thus, E[Z]
is not contained in W if and only if (3.7.10) is not true, which is exactly what should be
proved.
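Both sides of (3.7.10) are support functions (of E[Z] and of W1 + ... + Wm , respectively), so criterion (F) can be probed numerically on a grid of directions. A small illustration (with sample data chosen here, not part of the text) uses two diagonal 2 × 2 ellipsoids and the candidate Z = A1 + A2 , for which (3.7.10) holds by the triangle inequality:

```python
import math

def norm2(v):
    return math.sqrt(sum(t * t for t in v))

def matTvec(A, x):  # A^T x for a 2x2 matrix
    return [A[0][0] * x[0] + A[1][0] * x[1],
            A[0][1] * x[0] + A[1][1] * x[1]]

# sample diagonal ellipsoid matrices (illustration only)
A1 = [[2.0, 0.0], [0.0, 0.5]]
A2 = [[1.0, 0.0], [0.0, 1.0]]
Z = [[A1[i][j] + A2[i][j] for j in range(2)] for i in range(2)]

ok = True
for k in range(360):  # unit directions x suffice, by homogeneity of (3.7.10)
    x = [math.cos(math.radians(k)), math.sin(math.radians(k))]
    lhs = norm2(matTvec(Z, x))                           # support fn of E[Z]
    rhs = norm2(matTvec(A1, x)) + norm2(matTvec(A2, x))  # support fn of W1+W2
    ok = ok and lhs <= rhs + 1e-12
print(ok)
```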

A natural way to generate ellipsoids satisfying (3.7.10) is to note that whenever Xi are n × n
matrices with spectral norms

    |Xi | ≡ √(λmax (XiT Xi )) = √(λmax (Xi XiT )) = max_x { ‖Xi x‖2 | ‖x‖2 ≤ 1 }

not exceeding 1, the matrix

    Z = Z(X1 , ..., Xm ) = A1 X1 + A2 X2 + ... + Am Xm

satisfies (3.7.10):

    ‖Z T x‖2 = ‖[X1T AT1 + ... + XmT ATm ]x‖2 ≤ Σ_{i=1}^m ‖XiT ATi x‖2 ≤ Σ_{i=1}^m |XiT | ‖ATi x‖2 ≤ Σ_{i=1}^m ‖ATi x‖2 .

Thus, every collection of square matrices Xi with spectral norms not exceeding 1 produces
an ellipsoid satisfying (3.7.10) and thus contained in W , and we could use the largest volume
ellipsoid of this form (i.e., the one corresponding to the largest |Det(A1 X1 + ... + Am Xm )|) as a
surrogate of the largest volume ellipsoid contained in W . Recall that we know how to express a
bound on the spectral norm of a matrix via LMI:
    |X| ≤ t  ⇔  [ tIn   −X T ]
                [ −X    tIn  ]  ⪰ 0        [X ∈ Mn,n ]

(item 16 of Section 3.2). The difficulty, however, is that the matrix Σ_{i=1}^m Ai Xi specifying the
ellipsoid E(X1 , ..., Xm ), although being linear in the “design variables” Xi , is not necessarily
symmetric positive semidefinite, and we do not know how to maximize the determinant over
general-type square matrices. We may, however, use the following fact from Linear Algebra:
Lemma 3.7.1 Let Y = S + C be a square matrix represented as the sum of a symmetric matrix
S and a skew-symmetric (i.e., C T = −C) matrix C. Assume that S is positive definite. Then
|Det(Y )| ≥ Det(S).
Proof. We have Y = S + C = S 1/2 (I + Σ)S 1/2 , where Σ = S −1/2 CS −1/2 is skew-symmetric along
with C. We have |Det(Y )| = Det(S)|Det(I + Σ)|; it remains to note that all eigenvalues of the skew-
symmetric matrix Σ are purely imaginary, so that the eigenvalues of I + Σ are ≥ 1 in absolute value,
whence |Det(I + Σ)| ≥ 1. 2
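In the 2 × 2 case the Lemma can be seen directly: for symmetric S and C = [[0, c], [−c, 0]] one has Det(S + C) = Det(S) + c2 , so the determinant can only grow in magnitude. A quick numerical confirmation (an illustration with sample data, not part of the text):

```python
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

S = [[2.0, 0.5], [0.5, 1.0]]           # symmetric positive definite sample
for c in [0.0, 0.3, 1.0, 5.0]:
    C = [[0.0, c], [-c, 0.0]]          # skew-symmetric
    Y = [[S[i][j] + C[i][j] for j in range(2)] for i in range(2)]
    # Lemma 3.7.1: |Det(S + C)| >= Det(S); here Det(Y) = Det(S) + c^2
    assert abs(det2(Y)) >= det2(S) - 1e-12
print("lemma holds on all samples")
```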

In view of the Lemma, it makes sense to impose on X1 , ..., Xm , besides the requirement that
their spectral norms are ≤ 1, also the requirement that the “symmetric part”

    S(X1 , ..., Xm ) = (1/2) [ Σ_{i=1}^m Ai Xi + Σ_{i=1}^m XiT Ai ]

of the matrix Σ_i Ai Xi is positive semidefinite, and to maximize under these constraints the
quantity Det(S(X1 , ..., Xm )) – a lower bound on the volume of the ellipsoid E[Z(X1 , ..., Xm )].
With this approach, we come to the following result:
Proposition 3.7.6 Let Wi = {x = Ai u | uT u ≤ 1}, Ai ≻ 0, i = 1, ..., m. Consider the
semidefinite program

    maximize  t
    s.t.
    (a)   t ≤ Det1/n ( (1/2) Σ_{i=1}^m [XiT Ai + Ai Xi ] )
    (b)   Σ_{i=1}^m [XiT Ai + Ai Xi ] ⪰ 0                                     (Ĩ)
    (c)   [ In    −XiT ]
          [ −Xi    In  ]  ⪰ 0,   i = 1, ..., m

with design variables X1 , ..., Xm ∈ Mn,n , t ∈ R. Every feasible solution ({Xi }, t) to this problem
produces the ellipsoid

    E(X1 , ..., Xm ) = {x = ( Σ_{i=1}^m Ai Xi ) u | uT u ≤ 1}

contained in the arithmetic sum W1 + ... + Wm of the original ellipsoids, and the volume of
this ellipsoid is at least tn . The largest volume ellipsoid which can be obtained in this way is
associated with (any) optimal solution to (Ĩ).

In fact, problem (I) is equivalent to the problem

    |Det( Σ_{i=1}^m Ai Xi )| → max   subject to   |Xi | ≤ 1, i = 1, ..., m,   (3.7.11)

we have started with, since the latter problem always has an optimal solution {Xi∗ } with
a positive semidefinite symmetric matrix G∗ = Σ_{i=1}^m Ai Xi∗ . Indeed, let {Xi+ } be an
optimal solution of the problem. The matrix G+ = Σ_{i=1}^m Ai Xi+ , as every n × n square
matrix, admits a representation G+ = G∗ U , where G∗ is positive semidefinite symmetric,
and U is an orthogonal matrix. Setting Xi∗ = Xi+ U T , we convert {Xi+ } into a new feasible
solution of (3.7.11); for this solution Σ_{i=1}^m Ai Xi∗ = G∗ ⪰ 0, and |Det(G+ )| = Det(G∗ ),
so that the new solution is optimal along with {Xi+ }.
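The polar factorization G+ = G∗ U invoked above is computable in closed form for 2 × 2 matrices: G∗ = (G+ (G+ )T )1/2 and U = G∗−1 G+ . A quick sketch (an illustration with a sample matrix, not part of the text), using the standard closed-form square root of a 2 × 2 symmetric positive definite matrix:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def sqrt_spd(M):
    # closed-form square root of a 2x2 symmetric positive definite matrix:
    # sqrt(M) = (M + sqrt(det M) I) / sqrt(tr M + 2 sqrt(det M))
    s = math.sqrt(M[0][0] * M[1][1] - M[0][1] * M[1][0])
    t = math.sqrt(M[0][0] + M[1][1] + 2 * s)
    return [[(M[i][j] + (s if i == j else 0.0)) / t for j in range(2)]
            for i in range(2)]

def inv2(M):
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

G_plus = [[1.0, 2.0], [-0.5, 1.5]]           # a nonsingular, non-symmetric G+
G_star = sqrt_spd(matmul(G_plus, transpose(G_plus)))  # symmetric PSD factor
U = matmul(inv2(G_star), G_plus)             # then G+ = G_star * U

UUt = matmul(U, transpose(U))                # should be the identity
err = max(abs(UUt[i][j] - (1.0 if i == j else 0.0))
          for i in range(2) for j in range(2))
print(err < 1e-9)
```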

Problem (I), the co-axial case. We are about to demonstrate that in the co-axial case,
when in properly chosen coordinates in Rn the ellipsoids Wi can be represented as

Wi = {x = Ai u | uT u ≤ 1}

with positive definite diagonal matrices Ai , the above scheme yields the best (the largest volume)
ellipsoid among those contained in W = W1 + ... + Wm . Moreover, this ellipsoid can be pointed
out explicitly – it is exactly the ellipsoid E[Z] with Z = Z(In , ..., In ) = A1 + ... + Am !
The announced fact is nearly evident. Assuming that Ai are positive definite and diagonal,
consider the parallelotope

    Ŵ = {x ∈ Rn | |xj | ≤ ℓj = Σ_{i=1}^m [Ai ]jj , j = 1, ..., n}.

This parallelotope clearly contains W (why?), and the largest volume ellipsoid contained in
Ŵ clearly is the ellipsoid

    {x | Σ_{j=1}^n ℓj^{−2} xj^2 ≤ 1},

i.e., is nothing else but the ellipsoid E[A1 + ... + Am ]. As we know from our previous
considerations, the latter ellipsoid is contained in W , and since it is the largest volume
ellipsoid among those contained in the set Ŵ ⊃ W , it is the largest volume ellipsoid contained
in W as well.

Example. In the example to follow we are interested in understanding what is the domain DT
in the 2D plane which can be reached by a trajectory of the differential equation

    d/dt [ x1 (t) ] = [ −0.8147   −0.4163 ] [ x1 (t) ] + [     u1 (t)    ] ,   [ x1 (0) ] = [ 0 ] ,
         [ x2 (t) ]   [  0.8167   −0.1853 ] [ x2 (t) ]   [ 0.7071 u2 (t) ]     [ x2 (0) ]   [ 0 ]

where A denotes the above 2 × 2 system matrix, in T sec under a piecewise-constant control
u(t) = (u1 (t), u2 (t))T which switches from one constant value to another one every ∆t = 0.01 sec
and is subject to the norm bound

    ‖u(t)‖2 ≤ 1   ∀t.

The system is stable (the eigenvalues of A are −0.5 ± 0.4909i). In order to build DT , note
that the states of the system at the time instants k∆t, k = 0, 1, 2, ..., are the same as the states
x[k] = (x1 (k∆t), x2 (k∆t))T of the discrete time system

    x[k + 1] = S x[k] + B u[k],    x[0] = (0, 0)T ,                           (3.7.12)

    S = exp{A∆t},    B = [ ∫_0^{∆t} exp{As} ds ] [ 1      0     ] ,
                                                 [ 0   0.7071   ]
where u[k] is the value of the control on the “continuous time” interval (k∆t, (k + 1)∆t).
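The data of (3.7.12) are straightforward to reproduce. The sketch below (an illustration, not part of the text) checks the stated eigenvalues of A via the characteristic polynomial and builds S = exp{A∆t} and the integral factor in B by truncated power series:

```python
import cmath

A = [[-0.8147, -0.4163], [0.8167, -0.1853]]
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = cmath.sqrt(tr * tr - 4 * det)
eigs = [(tr + disc) / 2, (tr - disc) / 2]   # roots of t^2 - tr*t + det
print(eigs)   # approx -0.5 +/- 0.4909i, so the system is stable

def madd(X, Y):   return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]
def mscale(X, a): return [[X[i][j] * a for j in range(2)] for i in range(2)]
def mmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def exp_and_integral(A, dt, terms=25):
    # S = exp(A dt) = sum_k A^k dt^k / k!
    # Q = int_0^dt exp(As) ds = sum_k A^k dt^(k+1) / (k+1)!
    S = [[0.0, 0.0], [0.0, 0.0]]
    Q = [[0.0, 0.0], [0.0, 0.0]]
    P, fact = [[1.0, 0.0], [0.0, 1.0]], 1.0   # P = A^k, fact = k!
    for k in range(terms):
        S = madd(S, mscale(P, dt ** k / fact))
        Q = madd(Q, mscale(P, dt ** (k + 1) / (fact * (k + 1))))
        P, fact = mmul(P, A), fact * (k + 1)
    return S, Q

S, Q = exp_and_integral(A, 0.01)
B = mmul(Q, [[1.0, 0.0], [0.0, 0.7071]])      # the matrix B of (3.7.12)
print(S[0][0], B[0][0])
```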
We build the inner Ik and the outer Ok ellipsoidal approximations of the domains Dk = D_{k∆t}
in a recurrent manner:
• the ellipses I0 and O0 are just the singletons (the origin);
• Ik+1 is the best (the largest in area) ellipse contained in the set

    S Ik + B W,    W = {u ∈ R2 | ‖u‖2 ≤ 1},

which is the sum of two ellipses;
• Ok+1 is the best (the smallest in area) ellipse containing the set

    S Ok + B W,

which again is the sum of two ellipses.
Here is the picture we get:

[Figure: Outer and inner approximations of the “reachability domains”
D_{10ℓ} = D_{0.1ℓ sec} , ℓ = 1, 2, ..., 10, for system (3.7.12).]

• Ten pairs of ellipses are the outer and inner approximations of the domains
D1 , ..., D10 (look how close the ellipses in each pair are to each other!);
• Four curves are sample trajectories of the system (dots correspond to time instants
0.1ℓ sec in continuous time, i.e., time instants 10ℓ in discrete time, ℓ = 0, 1, ..., 10).

3.8 Exercises for Lecture 3


Solutions to exercises/parts of exercises colored in cyan can be found in section 6.3.

3.8.1 Around positive semidefiniteness, eigenvalues and -ordering


3.8.1.1 Criteria for positive semidefiniteness
Recall the criterion of positive definiteness of a symmetric matrix:

[Sylvester] A symmetric m × m matrix A = [aij ]_{i,j=1}^m is positive definite if and only
if all angular minors

    Det ( [aij ]_{i,j=1}^k ) ,   k = 1, ..., m,

are positive.
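For instance (a small numerical illustration, not part of the text), the tridiagonal matrix A = [[2,1,0],[1,2,1],[0,1,2]] has angular minors 2, 3, 4, all positive, so it is positive definite by the Sylvester criterion:

```python
def det(M):
    # recursive determinant by first-row expansion (fine for tiny matrices)
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def angular_minors(A):
    # determinants of the k x k upper-left ("angular") submatrices
    return [det([row[:k] for row in A[:k]]) for k in range(1, len(A) + 1)]

A = [[2.0, 1.0, 0.0], [1.0, 2.0, 1.0], [0.0, 1.0, 2.0]]
print(angular_minors(A))   # [2.0, 3.0, 4.0]: all positive, so A is PD
```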

Exercise 3.1 Prove that a symmetric m × m matrix A is positive semidefinite if and only if all
its principal minors (i.e., determinants of square sub-matrices symmetric w.r.t. the diagonal)
are nonnegative.

Hint: look at the angular minors of the matrices A + εIn for small positive ε.

Demonstrate by an example that nonnegativity of angular minors of a symmetric matrix is not


sufficient for the positive semidefiniteness of the matrix.

Exercise 3.2 [Diagonal-dominant matrices] Let a symmetric matrix A = [aij ]_{i,j=1}^m satisfy
the relation

    aii ≥ Σ_{j≠i} |aij | ,   i = 1, ..., m.

Prove that A is positive semidefinite.

3.8.1.2 Variational characterization of eigenvalues


The basic fact about eigenvalues of a symmetric matrix is the following

Variational Characterization of Eigenvalues [Theorem A.7.3] Let A be a sym-


metric m × m matrix and λ(A) = (λ1 (A), ..., λm (A)) be the vector of eigenvalues of
A taken with their multiplicities and arranged in non-ascending order:

λ1 (A) ≥ λ2 (A) ≥ ... ≥ λm (A).

Then for every i = 1, ..., m one has:

    λi (A) = min_{E∈Ei}  max_{v∈E, vT v=1}  vT Av,

where Ei is the family of all linear subspaces of Rm of dimension m − i + 1.
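For i = 1 the characterization reads λ1 (A) = max_{vT v=1} vT Av (the only subspace of dimension m is Rm itself). A quick 2 × 2 numerical check (an illustration with a sample matrix, not part of the text):

```python
import math

A = [[2.0, 1.0], [1.0, 3.0]]
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
lam1 = (tr + math.sqrt(tr * tr - 4 * det)) / 2   # largest eigenvalue, closed form

# maximize v^T A v over unit vectors v = (cos a, sin a) on a fine angle grid
best = max(
    A[0][0] * math.cos(a) ** 2 + 2 * A[0][1] * math.cos(a) * math.sin(a)
    + A[1][1] * math.sin(a) ** 2
    for a in (math.pi * k / 10000 for k in range(10000))
)
print(lam1 - best)   # tiny: the grid maximum approaches lambda_1(A)
```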

Singular values of rectangular matrices also admit variational description:



Variational Characterization of Singular Values Let A be an m × n matrix,


m ≤ n, and let σ(A) = λ((AAT )1/2 ) be the vector of singular values of A. Then for
every i = 1, ..., m one has:

    σi (A) = min_{E∈Ei}  max_{v∈E, vT v=1}  ‖Av‖2 ,

where Ei is the family of all linear subspaces of Rn of dimension n − i + 1.

Exercise 3.3 Derive the Variational Characterization of Singular Values from the Variational
Characterization of Eigenvalues.

Exercise 3.4 Derive from the Variational Characterization of Eigenvalues the following facts:
(i) [Monotonicity of the vector of eigenvalues] If A  B, then λ(A) ≥ λ(B);
(ii) The functions λ1 (X), λm (X) of X ∈ Sm are convex and concave, respectively.
(iii) If ∆ is a convex subset of the real axis, then the set of all matrices X ∈ Sm with spectrum
from ∆ is convex.

Recall now the definition of a function of a symmetric matrix. Let A be a symmetric m × m


matrix and
    p(t) = Σ_{i=0}^k pi ti

be a real polynomial on the axis. By definition,

    p(A) = Σ_{i=0}^k pi Ai ∈ Sm .

This definition is compatible with the arithmetic of real polynomials: when you add/multiply
polynomials, you add/multiply the “values” of these polynomials at every fixed symmetric ma-
trix:
(p + q)(A) = p(A) + q(A); (p · q)(A) = p(A)q(A).
A nice feature of this definition is that
(A) For A ∈ Sm , the matrix p(A) depends only on the restriction of p on the spectrum
(set of eigenvalues) of A: if p and q are two polynomials such that p(λi (A)) = q(λi (A))
for i = 1, ..., m, then p(A) = q(A).
Indeed, we can represent a symmetric matrix A as A = U T ΛU , where U is orthogonal and Λ
is diagonal with the eigenvalues of A on its diagonal. Since U U T = I, we have Ai = U T Λi U ;
consequently,
p(A) = U T p(Λ)U,
and since the matrix p(Λ) depends on the restriction of p on the spectrum of A only, the
result follows.
As a byproduct of our reasoning, we get an “explicit” representation of p(A) in terms
of the spectral decomposition A = U T ΛU (U is orthogonal, Λ is diagonal with the
diagonal λ(A)):
(B) The matrix p(A) is just U T Diag(p(λ1 (A)), ..., p(λn (A)))U .
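For example (an illustration with sample data, not part of the text), for A = [[2,1],[1,2]] one has λ(A) = (3, 1) with orthonormal eigenvectors (1, 1)/√2 and (1, −1)/√2, and for p(t) = t2 the recipe (B) indeed reproduces A2 = [[5,4],[4,5]]:

```python
import math

def mmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2.0, 1.0], [1.0, 2.0]]
r = 1.0 / math.sqrt(2.0)
U = [[r, r], [r, -r]]                    # rows = orthonormal eigenvectors of A
D = [[3.0 ** 2, 0.0], [0.0, 1.0 ** 2]]   # Diag(p(3), p(1)) for p(t) = t^2

Ut = [[U[j][i] for j in range(2)] for i in range(2)]
pA = mmul(Ut, mmul(D, U))                # U^T Diag(p(lambda)) U
print(pA)   # ~ [[5, 4], [4, 5]] = A^2
```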
(A) allows us to define arbitrary functions of matrices, not necessarily polynomials:

Let A be a symmetric matrix and f be a real-valued function defined at least on the
spectrum of A. By definition, the matrix f(A) is defined as p(A), where p is a
polynomial coinciding with f on the spectrum of A. (The definition makes sense,
since by (A) p(A) depends only on the restriction of p on the spectrum of A, i.e.,
every "polynomial continuation" p(·) of f from the spectrum of A to the entire axis
results in the same p(A).)
The “calculus of functions of a symmetric matrix” is fully compatible with the usual arithmetic
of functions, e.g:

(f + g)(A) = f (A) + g(A); (µf )(A) = µf (A); (f · g)(A) = f (A)g(A); (f ◦ g)(A) = f (g(A)),

provided that the functions in question are well-defined on the spectrum of the corre-
sponding matrix. And of course the spectral decomposition of f (A) is just f (A) =
U T Diag(f (λ1 (A)), ..., f (λm (A)))U , where A = U T Diag(λ1 (A), ..., λm (A))U is the spectral de-
composition of A.
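The spectral recipe above is easy to check numerically. Below is a minimal numpy sketch (an illustration, not part of the original text); note that numpy's `eigh` returns A = V Diag(λ) V^T, i.e., U = V^T in the notation of the text.

```python
import numpy as np

# Evaluating a function of a symmetric matrix via its spectral
# decomposition, as in (B). First f(t) = t^2, checked against A @ A.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
lam, V = np.linalg.eigh(A)              # A = V @ diag(lam) @ V.T
f_of_A = V @ np.diag(lam**2) @ V.T
assert np.allclose(f_of_A, A @ A)

# The same recipe with f(t) = sqrt(t) yields the matrix square root:
sqrt_A = V @ np.diag(np.sqrt(lam)) @ V.T
assert np.allclose(sqrt_A @ sqrt_A, A)
```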
Note that the "calculus of functions of symmetric matrices" becomes very unusual when we
try to operate with functions of several (non-commuting) matrices. E.g., it is generally
not true that exp{A + B} = exp{A} exp{B} (the right hand side matrix may even be non-
symmetric!). It is also generally not true that if f is monotone and A ⪰ B, then f(A) ⪰ f(B),
etc.
Exercise 3.5 Demonstrate by an example that the relation 0 ⪯ A ⪯ B does not necessarily
imply that A^2 ⪯ B^2.
By the way, the relation 0 ⪯ A ⪯ B does imply that 0 ⪯ A^{1/2} ⪯ B^{1/2}.
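A concrete counterexample of the kind Exercise 3.5 asks for, checked numerically (the specific 2 × 2 matrices are our own choice, not taken from the text):

```python
import numpy as np

def is_psd(M, tol=1e-9):
    """Check positive semidefiniteness via the smallest eigenvalue."""
    return np.min(np.linalg.eigvalsh(M)) >= -tol

def mat_sqrt(M):
    """Matrix square root of a PSD matrix via spectral decomposition."""
    lam, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(np.clip(lam, 0.0, None))) @ V.T

A = np.array([[1.0, 1.0], [1.0, 1.0]])
B = np.array([[2.0, 1.0], [1.0, 1.0]])
assert is_psd(A) and is_psd(B - A)           # 0 <= A <= B in the matrix order
assert not is_psd(B @ B - A @ A)             # yet A^2 <= B^2 FAILS
assert is_psd(mat_sqrt(B) - mat_sqrt(A))     # while A^{1/2} <= B^{1/2} holds
```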
Sometimes, however, we can get “weak” matrix versions of usual arithmetic relations. E.g.,
Exercise 3.6 Let f be a nondecreasing function on the real line, and let A ⪰ B. Prove that
λ(f(A)) ≥ λ(f(B)).
The strongest (and surprising) “weak” matrix version of a usual (“scalar”) inequality is as
follows.
Let f (t) be a closed convex function on the real line; by definition, it means that f is a
function on the axis taking real values and the value +∞ such that
– the set Dom f of the values of argument where f is finite is convex and nonempty;
– if a sequence {t_i ∈ Dom f} converges to a point t and the sequence f(t_i) has a limit, then
t ∈ Dom f and f(t) ≤ lim_{i→∞} f(t_i) (this property is called "lower semicontinuity").
E.g., the function

    f(t) = { 0,  0 ≤ t ≤ 1;  +∞, otherwise }

is closed. In contrast to this, the functions

    g(t) = { 0,  0 < t ≤ 1;  1,  t = 0;  +∞, for all remaining t }

and

    h(t) = { 0,  0 < t < 1;  +∞, otherwise }
are not closed, although they are convex: a closed function cannot “jump up” at an endpoint
of its domain, as it is the case for g, and it cannot take value +∞ at a point, if it takes values
≤ a < ∞ in a neighbourhood of the point, as it is the case for h.
276 LECTURE 3. SEMIDEFINITE PROGRAMMING

For a convex function f, its Legendre transformation f∗ (also called the conjugate, or the
Fenchel dual of f) is defined as

    f∗(s) = sup_t [ts − f(t)].
It turns out that the Legendre transformation of a closed convex function is also closed and
convex, and that the Legendre transformation, taken twice, returns the original closed convex function.
The Legendre transformation (which, by the way, can be defined for convex functions on Rn
as well) underlies many standard inequalities. Indeed, by definition of f∗ we have
f∗ (s) + f (t) ≥ st ∀s, t; (L)
For specific choices of f , we can derive from the general inequality (L) many useful inequalities.
E.g.,
• If f(t) = (1/2)t^2, then f∗(s) = (1/2)s^2, and (L) becomes the standard inequality

    st ≤ (1/2)t^2 + (1/2)s^2  ∀s, t ∈ R;

• If 1 < p < ∞ and f(t) = { t^p/p, t ≥ 0; +∞, t < 0 }, then f∗(s) = { s^q/q, s ≥ 0; +∞, s < 0 },
with q given by 1/p + 1/q = 1, and (L) becomes the Young inequality

    ts ≤ t^p/p + s^q/q  ∀(s, t ≥ 0),  1 < p, q < ∞,  1/p + 1/q = 1.
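A quick numerical spot-check of the Young inequality on a grid of nonnegative s, t (illustration only):

```python
import numpy as np

# Young inequality: t*s <= t^p/p + s^q/q for conjugate exponents
# 1/p + 1/q = 1, tested on a grid of nonnegative values.
p = 3.0
q = p / (p - 1.0)                      # conjugate exponent of p
assert abs(1.0 / p + 1.0 / q - 1.0) < 1e-12
grid = np.linspace(0.0, 5.0, 101)
for t in grid:
    for s in grid:
        assert t * s <= t**p / p + s**q / q + 1e-12
```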
Now, what happens with (L) if s, t are symmetric matrices? Of course, both sides of (L) still
make sense and are matrices, but we have no hope of saying anything reasonable about the relation
between these matrices (e.g., the right hand side in (L) is not necessarily symmetric). However,

Exercise 3.7 Let f∗ be a closed convex function with the domain Dom f∗ ⊂ R_+, and let f be
the Legendre transformation of f∗. Then for every pair of symmetric matrices X, Y of the same
size with the spectrum of X belonging to Dom f and the spectrum of Y belonging to Dom f∗ one
has

    λ(f(X)) ≥ λ(Y^{1/2} XY^{1/2} − f∗(Y)).^{26)}

3.8.1.3 Birkhoff's Theorem


Surprisingly enough, one of the most useful facts about eigenvalues of symmetric matrices is the
following, essentially combinatorial, statement (it does not mention the word “eigenvalue” at
all).
Birkhoff's Theorem. Consider the set S_m of double-stochastic m × m matrices,
i.e., square matrices [p_ij]_{i,j=1}^m satisfying the relations

    p_ij ≥ 0, i, j = 1, ..., m;
    Σ_{i=1}^m p_ij = 1, j = 1, ..., m;
    Σ_{j=1}^m p_ij = 1, i = 1, ..., m.
^{26)} In the scalar case, our inequality reads f(x) ≥ y^{1/2} x y^{1/2} − f∗(y), which is an equivalent form of (L) when
Dom f∗ ⊂ R_+.

A matrix P belongs to S_m if and only if it can be represented as a convex combination
of m × m permutation matrices:

    P ∈ S_m  ⇔  ∃(λ_i ≥ 0, Σ_i λ_i = 1):  P = Σ_i λ_i Π_i,

where all Π_i are permutation matrices (i.e., with exactly one nonzero element, equal
to 1, in every row and every column).
For proof, see Section B.2.8.C.1.
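The "easy" direction of the theorem, that a convex combination of permutation matrices is double-stochastic, is immediate to verify numerically; the sketch below (with arbitrarily chosen weights) builds such a combination and checks the defining relations:

```python
import numpy as np
from itertools import permutations

# All 3x3 permutation matrices, obtained by permuting the rows of I.
m = 3
perm_mats = [np.eye(m)[list(p)] for p in permutations(range(m))]

# A convex combination of them with random weights summing to 1.
rng = np.random.default_rng(0)
w = rng.random(len(perm_mats))
w /= w.sum()
P = sum(wi * Pi for wi, Pi in zip(w, perm_mats))

# P satisfies the defining relations of a double-stochastic matrix.
assert np.all(P >= 0)
assert np.allclose(P.sum(axis=0), 1.0)
assert np.allclose(P.sum(axis=1), 1.0)
```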

An immediate corollary of the Birkhoff Theorem is the following fact:

(C) Let f : Rm → R ∪ {+∞} be a convex symmetric function (symmetry means


that the value of the function remains unchanged when we permute the coordinates
in an argument), let x ∈ Dom f and P ∈ Sm . Then

f (P x) ≤ f (x).

The proof is immediate: by Birkhoff's Theorem, Px is a convex combination of a number of
permutations x_i of x. Since f is convex, we have

    f(Px) ≤ max_i f(x_i) = f(x),

the concluding equality resulting from the symmetry of f.

The role of (C) in numerous questions related to eigenvalues is based upon the following
simple

Observation. Let A be a symmetric m × m matrix. Then the diagonal Dg(A) of the


matrix A is the image of the vector λ(A) of the eigenvalues of A under multiplication
by a double stochastic matrix:

Dg(A) = P λ(A) for some P ∈ Sm

Indeed, consider the spectral decomposition of A:

A = U T Diag(λ1 (A), ..., λm (A))U

with orthogonal U = [uij ]. Then


m
X
Aii = u2ji λj (A) ≡ (P λ(A))i ,
j=1

where the matrix P = [u2ji ]m


i,j=1 is double stochastic.
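A numerical illustration of the Observation (using numpy's `eigh` convention A = V Diag(λ) V^T, so the double-stochastic matrix is P_{ij} = V_{ij}^2):

```python
import numpy as np

# Dg(A) = P lam(A) with P_{ij} = V_{ij}^2 doubly stochastic, where the
# columns of V are orthonormal eigenvectors of the symmetric matrix A.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
A = A + A.T                              # make A symmetric
lam, V = np.linalg.eigh(A)
P = V**2                                 # entrywise squares of V
assert np.allclose(np.diag(A), P @ lam)  # the diagonal is P times lam(A)
assert np.allclose(P.sum(axis=0), 1.0)   # P is double stochastic ...
assert np.allclose(P.sum(axis=1), 1.0)   # ... by orthonormality of V
```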

Combining the Observation and (C), we conclude that if f is a convex symmetric function on
Rm , then for every m × m symmetric matrix A one has

f (Dg(A)) ≤ f (λ(A)).

Moreover, let Om be the set of all orthogonal m × m matrices. For every V ∈ Om , the matrix
V T AV has the same eigenvalues as A, so that for a convex symmetric f one has

f (Dg(V T AV )) ≤ f (λ(V T AV )) = f (λ(A)),

whence

    f(λ(A)) ≥ max_{V∈O_m} f(Dg(V^T AV)).

In fact the inequality here is equality, since for properly chosen V ∈ Om we have Dg(V T AV ) =
λ(A). We have arrived at the following result:
(D) Let f be a symmetric convex function on R^m. Then for every symmetric m × m
matrix A one has

    f(λ(A)) = max_{V∈O_m} f(Dg(V^T AV)),

O_m being the set of all m × m orthogonal matrices.
In particular, the function

    F(A) = f(λ(A))

is convex in A ∈ S^m (as the maximum of a family of functions F_V(A) = f(Dg(V^T AV)),
V ∈ O_m, each of which is convex in A).
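A spot-check of (D) with the convex symmetric function f(x) = max_i x_i (illustration only): for any orthogonal V, every diagonal entry of V^T AV is at most λ_max(A), with equality attained for V diagonalizing A.

```python
import numpy as np

# For f(x) = max_i x_i, (D) says lam_max(A) = max_V max_i (V^T A V)_{ii}.
# We check the "<=" part on random orthogonal V (from QR factorization).
rng = np.random.default_rng(7)
A = rng.standard_normal((5, 5))
A = A + A.T
lam_max = np.max(np.linalg.eigvalsh(A))
for _ in range(50):
    V, _ = np.linalg.qr(rng.standard_normal((5, 5)))   # random orthogonal V
    assert np.max(np.diag(V.T @ A @ V)) <= lam_max + 1e-9
```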

Exercise 3.8 Let g(t) : R → R ∪ {+∞} be a convex function, and let Fn be the set of all
matrices X ∈ Sn with the spectrum belonging to Dom g. Prove that the function Tr(g(X)) is
convex on Fn .
Hint: Apply (D) to the function f (x1 , ..., xn ) = g(x1 ) + ... + g(xn ).

Exercise 3.9 Let A = [a_ij] be a symmetric m × m matrix. Prove that
(i) Whenever p ≥ 1, one has Σ_{i=1}^m |a_ii|^p ≤ Σ_{i=1}^m |λ_i(A)|^p;
(ii) Whenever A is positive semidefinite, Π_{i=1}^m a_ii ≥ Det(A);
(iii) For x ∈ R^m, let the function S_k(x) be the sum of the k largest entries of x (i.e., the sum
of the first k entries in the vector obtained from x by writing down the coordinates of x in the
non-ascending order). Prove that S_k(x) is a convex symmetric function of x and derive from
this observation that

    S_k(Dg(A)) ≤ S_k(λ(A)).

Hint: note that S_k(x) = max_{1≤i_1<i_2<...<i_k≤m} Σ_{l=1}^k x_{i_l}.

(iv) [Trace inequality] Whenever A, B ∈ S^m, one has

    λ^T(A)λ(B) ≥ Tr(AB).
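Both (iii) and (iv) are easy to test numerically on random symmetric matrices (a sanity check, not a proof):

```python
import numpy as np

# Check S_k(Dg(A)) <= S_k(lam(A)) and lam(A)^T lam(B) >= Tr(AB)
# on random 5x5 symmetric matrices; eigenvalues sorted non-ascending.
rng = np.random.default_rng(2)
for _ in range(100):
    A = rng.standard_normal((5, 5)); A = A + A.T
    B = rng.standard_normal((5, 5)); B = B + B.T
    lamA = np.sort(np.linalg.eigvalsh(A))[::-1]
    lamB = np.sort(np.linalg.eigvalsh(B))[::-1]
    diagA = np.sort(np.diag(A))[::-1]
    for k in range(1, 6):                      # S_k for k = 1, ..., m
        assert diagA[:k].sum() <= lamA[:k].sum() + 1e-9
    assert lamA @ lamB >= np.trace(A @ B) - 1e-9
```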

Exercise 3.10 Prove that if A ∈ S^m and p, q ∈ [1, ∞] are such that 1/p + 1/q = 1, then

    max_{B∈S^m: ||λ(B)||_q =1} Tr(AB) = ||λ(A)||_p.

In particular, ||λ(·)||_p is a norm on S^m, and the conjugate of this norm is ||λ(·)||_q, 1/p + 1/q = 1.
Exercise 3.11 Let

    X = [ X_11     X_12     ...  X_1m ]
        [ X_12^T   X_22     ...  X_2m ]
        [ ...      ...      ...  ...  ]
        [ X_1m^T   X_2m^T   ...  X_mm ]

be an n × n symmetric matrix which is partitioned into m^2 blocks X_ij in a symmetric, w.r.t.
the diagonal, fashion (so that the blocks X_jj are square), and let

    X̂ = Diag(X_11, X_22, ..., X_mm)

be the block-diagonal matrix with the same diagonal blocks.
1) Let F: S^n → R ∪ {+∞} be a convex "rotation-invariant" function: for all Y ∈ S^n and
all orthogonal matrices U one has F(U^T YU) = F(Y). Prove that

    F(X̂) ≤ F(X).

Hint: Represent the matrix X̂ as a convex combination of the rotations U^T XU, U^T U = I,
of X.

2) Let f: R^n → R ∪ {+∞} be a convex function, symmetric w.r.t. permutations of the entries
in its argument, and let F(Y) = f(λ(Y)), Y ∈ S^n. Prove that

    F(X̂) ≤ F(X).

3) Let g: R → R ∪ {+∞} be a convex function on the real line which is finite on the set of
eigenvalues of X, and let F_n ⊂ S^n be the set of all n × n symmetric matrices with all eigenvalues
belonging to the domain of g. Assume that the mapping

    Y ↦ g(Y): F_n → S^n

is ⪰-convex:

    g(λ'Y' + λ''Y'') ⪯ λ'g(Y') + λ''g(Y'')  ∀(Y', Y'' ∈ F_n, λ', λ'' ≥ 0, λ' + λ'' = 1).

Prove that

    (g(X))_ii ⪰ g(X_ii), i = 1, ..., m,

where the partition of g(X) into the blocks (g(X))_ij is identical to the partition of X into the
blocks X_ij.

Exercise 3.11 gives rise to a number of interesting inequalities. Let X, X̂ be the same as in the
Exercise, and let [Y] denote the northwest block, of the same size as X_11, of an n × n matrix Y.
Then

1. (Σ_{i=1}^m ||λ(X_ii)||_p^p)^{1/p} ≤ ||λ(X)||_p, 1 ≤ p < ∞
   [Exercise 3.11.2), f(x) = ||x||_p];

2. If X ⪰ 0, then Det(X) ≤ Π_{i=1}^m Det(X_ii)
   [Exercise 3.11.2), f(x) = −(x_1...x_n)^{1/n} for x ≥ 0];


3. [X^2] ⪰ X_11^2
   [This inequality is nearly evident; it follows also from Exercise 3.11.3) with g(t) = t^2 (the
   ⪰-convexity of g(Y) is stated in Exercise 3.21.1))];

4. If X ≻ 0, then X_11^{−1} ⪯ [X^{−1}]
   [Exercise 3.11.3) with g(t) = t^{−1} for t > 0; the ⪰-convexity of g(Y) on S^n_{++} is stated by
   Exercise 3.21.2)];

5. For every X ⪰ 0, [X^{1/2}] ⪯ X_11^{1/2}
   [Exercise 3.11.3) with g(t) = −√t; the ⪰-convexity of g(Y) is stated by Exercise 3.21.4)].
   Extension: If X ⪰ 0, then for every α ∈ (0, 1) one has [X^α] ⪯ X_11^α
   [Exercise 3.11.3) with g(t) = −t^α; the function −Y^α of Y ⪰ 0 is known to be ⪰-convex];

6. If X ≻ 0, then [ln(X)] ⪯ ln(X_11)
   [Exercise 3.11.3) with g(t) = −ln t, t > 0; the ⪰-convexity of g(Y) is stated by Exercise
   3.21.5)].
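Inequality 2 (a block Hadamard/Fischer-type inequality) can be spot-checked numerically; the sketch below uses a 2 + 2 block partition of random positive definite 4 × 4 matrices:

```python
import numpy as np

# Det(X) <= Det(X11) * Det(X22) for X > 0 partitioned into two
# 2x2 diagonal blocks; tested on random positive definite matrices.
rng = np.random.default_rng(3)
for _ in range(50):
    G = rng.standard_normal((4, 4))
    X = G @ G.T + 0.1 * np.eye(4)       # random positive definite X
    d = np.linalg.det
    assert d(X) <= d(X[:2, :2]) * d(X[2:, 2:]) + 1e-9
```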

Exercise 3.12 1) Let A = [a_ij]_{i,j} ⪰ 0, let α ≥ 0, and let B ≡ [b_ij]_{i,j} = A^α. Prove that

    b_ii ≤ a_ii^α when α ≤ 1,  and  b_ii ≥ a_ii^α when α ≥ 1.

2) Let A = [a_ij]_{i,j} ≻ 0, and let B ≡ [b_ij]_{i,j} = A^{−1}. Prove that b_ii ≥ a_ii^{−1}.
3) Let [A] denote the northwest 2 × 2 block of a square matrix. Which of the implications

    (a) A ⪰ 0 ⇒ [A^4] ⪰ [A]^4
    (b) A ⪰ 0 ⇒ [A^4]^{1/4} ⪰ [A]

are true?

3.8.1.4 Semidefinite representations of functions of eigenvalues


The goal of the subsequent series of exercises is to prove Proposition 3.2.1.
We start with a description (important by its own right) of the convex hull of permutations
of a given vector. Let x ∈ Rm , and let X[x] be the set of all convex combinations of m! vectors
obtained from x by all permutations of the coordinates.

Claim: ["Majorization principle"] X[x] is exactly the solution set of the following
system of inequalities in variables y ∈ R^m:

    S_j(y) ≤ S_j(x), j = 1, ..., m − 1;
    y_1 + ... + y_m = x_1 + ... + x_m        (+)

(recall that S_j(y) is the sum of the largest j entries of a vector y).
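The "easy" inclusion of the claim can be illustrated numerically: for y = Px with P double-stochastic (hence y ∈ X[x] by Birkhoff's Theorem), the system (+) indeed holds.

```python
import numpy as np

# y = P x for doubly stochastic P: check S_j(y) <= S_j(x) for j < m
# and equality of the total sums, i.e. the system (+).
x = np.array([5.0, 3.0, 1.0, -2.0])
P = 0.6 * np.eye(4) + 0.4 * np.eye(4)[[1, 0, 3, 2]]   # doubly stochastic
y = P @ x
Sx = np.cumsum(np.sort(x)[::-1])        # S_1(x), ..., S_m(x)
Sy = np.cumsum(np.sort(y)[::-1])
assert np.all(Sy[:-1] <= Sx[:-1] + 1e-12)
assert np.isclose(Sy[-1], Sx[-1])       # y_1+...+y_m = x_1+...+x_m
```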

Exercise 3.13 [Easy part of the claim] Let Y be the solution set of (+). Prove that Y ⊃ X[x].

Hint: Use (C) and the convexity of the functions Sj (·).



Exercise 3.14 [Difficult part of the claim] Let Y be the solution set of (+). Prove that Y ⊂
X[x].

Sketch of the proof: Let y ∈ Y . We should prove that y ∈ X[x]. By symmetry, we may
assume that the vectors x and y are ordered: x1 ≥ x2 ≥ ... ≥ xm , y1 ≥ y2 ≥ ... ≥ ym .
Assume that y 6∈ X[x], and let us lead this assumption to a contradiction.
1) Since X[x] clearly is a convex compact set and y ∉ X[x], there exists a linear functional
c(z) = Σ_{i=1}^m c_i z_i which separates y and X[x]:

    c(y) > max_{z∈X[x]} c(z).

Prove that such a functional can be chosen “to be ordered”: c1 ≥ c2 ≥ ... ≥ cm .


2) Verify that

    c(y) ≡ Σ_{i=1}^m c_i y_i = Σ_{i=1}^{m−1} (c_i − c_{i+1}) Σ_{j=1}^i y_j + c_m Σ_{j=1}^m y_j

(Abel’s formula – a discrete version of integration by parts). Use this observation along with
“orderedness” of c(·) and the inclusion y ∈ Y to conclude that c(y) ≤ c(x), thus coming to
the desired contradiction.

Exercise 3.15 Use the Majorization principle to prove Proposition 3.2.1.

The next pair of exercises is aimed at proving Proposition 3.2.2.

Exercise 3.16 Let x ∈ R^m, and let X_+[x] be the set of all vectors x' dominated by a vector
from X[x]:

    X_+[x] = {y | ∃z ∈ X[x]: y ≤ z}.

1) Prove that X+ [x] is a closed convex set.


2) Prove the following characterization of X+ [x]:

X+ [x] is exactly the set of solutions of the system of inequalities Sj (y) ≤ Sj (x),
j = 1, ..., m, in variables y.

Hint: Modify appropriately the construction outlined in Exercise 3.14.

Exercise 3.17 Derive Proposition 3.2.2 from the result of Exercise 3.16.2).

3.8.1.5 Cauchy’s inequality for matrices


The standard Cauchy's inequality says that

    |Σ_i x_i y_i| ≤ √(Σ_i x_i^2) · √(Σ_i y_i^2)        (3.8.1)

for reals x_i, y_i, i = 1, ..., n; this inequality is exact in the sense that for every collection x_1, ..., x_n
there exists a collection y_1, ..., y_n with Σ_i y_i^2 = 1 which makes (3.8.1) an equality.

Exercise 3.18 (i) Prove that whenever X_i, Y_i ∈ M^{p,q}, one has

    σ(Σ_i X_i^T Y_i) ≤ λ([Σ_i X_i^T X_i]^{1/2}) · ||λ(Σ_i Y_i^T Y_i)||_∞^{1/2},        (∗)

where σ(A) = λ([AA^T]^{1/2}) is the vector of singular values of a matrix A arranged in the non-
ascending order.
Prove that for every collection X_1, ..., X_n ∈ M^{p,q} there exists a collection Y_1, ..., Y_n ∈ M^{p,q}
with Σ_i Y_i^T Y_i = I_q which makes (∗) an equality.
(ii) Prove the following "matrix version" of the Cauchy inequality: whenever X_i, Y_i ∈ M^{p,q},
one has

    |Tr(Σ_i X_i^T Y_i)| ≤ Tr([Σ_i X_i^T X_i]^{1/2}) · ||λ(Σ_i Y_i^T Y_i)||_∞^{1/2},        (∗∗)

and for every collection X_1, ..., X_n ∈ M^{p,q} there exists a collection Y_1, ..., Y_n ∈ M^{p,q} with
Σ_i Y_i^T Y_i = I_q which makes (∗∗) an equality.

Here is another exercise of the same flavour:

Exercise 3.19 For nonnegative reals a_1, ..., a_m and a real α > 1 one has

    (Σ_{i=1}^m a_i^α)^{1/α} ≤ Σ_{i=1}^m a_i.

Both sides of this inequality make sense when the nonnegative reals ai are replaced with positive
semidefinite n × n matrices Ai . What happens with the inequality in this case?
Consider the following four statements (where α > 1 is a real and m, n > 1):
1)
    ∀(A_i ∈ S^n_+):  (Σ_{i=1}^m A_i^α)^{1/α} ⪯ Σ_{i=1}^m A_i.

2)
    ∀(A_i ∈ S^n_+):  λ_max((Σ_{i=1}^m A_i^α)^{1/α}) ≤ λ_max(Σ_{i=1}^m A_i).

3)
    ∀(A_i ∈ S^n_+):  Tr((Σ_{i=1}^m A_i^α)^{1/α}) ≤ Tr(Σ_{i=1}^m A_i).

4)
    ∀(A_i ∈ S^n_+):  Det((Σ_{i=1}^m A_i^α)^{1/α}) ≤ Det(Σ_{i=1}^m A_i).

Among these 4 statements, exactly 2 are true. Identify and prove the true statements.

3.8.1.6 ⪰-convexity of some matrix-valued functions


Consider a function F(x) defined on a convex set X ⊂ R^n and taking values in S^m. We say
that such a function is ⪰-convex, if

    F(αx + (1 − α)y) ⪯ αF(x) + (1 − α)F(y)

for all x, y ∈ X and all α ∈ [0, 1]. F is called ⪰-concave, if −F is ⪰-convex.

A function F: Dom F → S^m defined on a set Dom F ⊂ S^k is called ⪰-monotone, if

    x, y ∈ Dom F, x ⪰ y ⇒ F(x) ⪰ F(y);

F is called ⪰-antimonotone, if −F is ⪰-monotone.

Exercise 3.20 1) Prove that a function F: X → S^m, X ⊂ R^n, is ⪰-convex if and only if its
"epigraph"

    {(x, Y) ∈ R^n × S^m | x ∈ X, F(x) ⪯ Y}

is a convex set.
2) Prove that a function F: X → S^m with convex domain X ⊂ R^n is ⪰-convex if and only
if for every A ∈ S^m_+ the function Tr(AF(x)) is convex on X.
3) Let X ⊂ R^n be a convex set with a nonempty interior and F: X → S^m be a function
continuous on X which is twice differentiable in int X. Prove that F is ⪰-convex if and only if
the second directional derivative of F

    D^2 F(x)[h, h] ≡ (d^2/dt^2)|_{t=0} F(x + th)

is ⪰ 0 for every x ∈ int X and every direction h ∈ R^n.

4) Let F: Dom F → S^m be defined and continuously differentiable on an open convex subset
of S^k. Prove that the necessary and sufficient condition for F to be ⪰-monotone is the validity
of the implication

    h ∈ S^k_+, x ∈ Dom F ⇒ DF(x)[h] ⪰ 0.

5) Let F be ⪰-convex and S ⊂ S^m be a convex set which is ⪰-antimonotone, i.e., whenever
Y' ⪯ Y and Y ∈ S, one has Y' ∈ S. Prove that the set F^{−1}(S) = {x ∈ X | F(x) ∈ S} is
convex.
6) Let G: Dom G → S^k and F: Dom F → S^m, let G(Dom G) ⊂ Dom F, and let H(x) =
F(G(x)): Dom G → S^m.
a) Prove that if G and F are ⪰-convex and F is ⪰-monotone, then H is ⪰-convex.
b) Prove that if G and F are ⪰-concave and F is ⪰-monotone, then H is ⪰-concave.
7) Let F_i: G → S^m, and assume that for every x ∈ G there exists

    F(x) = lim_{i→∞} F_i(x).

Prove that if all functions from the sequence {F_i} are (a) ⪰-convex, or (b) ⪰-concave, or (c)
⪰-monotone, or (d) ⪰-antimonotone, then so is F.

The goal of the next exercise is to establish the ⪰-convexity of several matrix-valued functions.

Exercise 3.21 Prove that the following functions are ⪰-convex:
1) F(x) = xx^T : M^{p,q} → S^p;
2) F(x) = x^{−1} : int S^m_+ → int S^m_+;
3) F(u, v) = u^T v^{−1} u : M^{p,q} × int S^p_+ → S^q.

Prove that the following functions are ⪰-concave and ⪰-monotone:
4) F(x) = x^{1/2} : S^m_+ → S^m;
5) F(x) = ln x : int S^m_+ → S^m;
6) F(x) = (Ax^{−1}A^T)^{−1} : int S^n_+ → S^m, provided that A is an m × n matrix of rank m.

3.8.2 SD representations of epigraphs of convex polynomials


Mathematically speaking, the central question concerning the “expressive abilities” of Semidef-
inite Programming is how wide is the family of convex sets which are SDr. By definition, an
SDr set is the projection of the inverse image of Sm
+ under affine mapping. In other words, every
SDr set is a projection of a convex set given by a number of polynomial inequalities (indeed,
the cone Sm+ is a convex set given by polynomial inequalities saying that all principal minors of
matrix are nonnegative). Consequently, the inverse image of Sm + under an affine mapping is also
a convex set given by a number of (non-strict) polynomial inequalities. And it is known that
every projection of such a set is also given by a number of polynomial inequalities (both strict
and non-strict). We conclude that
A SD-representable set always is a convex set given by finitely many polynomial
inequalities (strict and non-strict).
A natural (and seemingly very difficult) question is whether the converse is true – whether a
convex set given by a number of polynomial inequalities is always SDr. This question can be
simplified in many ways – we may fix the dimension of the set, we may assume the polynomials
participating in inequalities to be convex, we may fix the degrees of the polynomials, etc.; to
the best of our knowledge, all these questions are open.
The goal of the subsequent exercises is to answer affirmatively the simplest question of the
above series:
Let π(x) be a convex polynomial of one variable. Then its epigraph
{(t, x) ∈ R2 | t ≥ π(x)}
is SDr.
Let us fix a nonnegative integer k and consider the curve

    p(x) = (1, x, x^2, ..., x^{2k})^T ∈ R^{2k+1}.
Let Πk be the closure of the convex hull of values of the curve. How can one describe Πk ?
A convenient way to answer this question is to pass to a matrix representation of all objects
involved. Namely, let us associate with a vector ξ = (ξ_0, ξ_1, ..., ξ_{2k}) ∈ R^{2k+1} the (k + 1) × (k + 1)
symmetric matrix

    M(ξ) = [ ξ_0      ξ_1      ξ_2      ξ_3      ···  ξ_k     ]
           [ ξ_1      ξ_2      ξ_3      ξ_4      ···  ξ_{k+1} ]
           [ ξ_2      ξ_3      ξ_4      ξ_5      ···  ξ_{k+2} ]
           [ ξ_3      ξ_4      ξ_5      ξ_6      ···  ξ_{k+3} ]
           [ ···      ···      ···      ···      ···  ···     ]
           [ ξ_k      ξ_{k+1}  ξ_{k+2}  ξ_{k+3}  ···  ξ_{2k}  ],

so that

    [M(ξ)]_{ij} = ξ_{i+j}, i, j = 0, ..., k.
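For ξ = p(x), the matrix M(p(x)) has entries x^{i+j}, i.e., it is the rank-one matrix vv^T with v = (1, x, ..., x^k)^T, which makes its positive semidefiniteness evident. A numerical illustration:

```python
import numpy as np

# The moment matrix M(p(x)) with entries x^{i+j} equals v v^T for
# v = (1, x, ..., x^k)^T, hence it is PSD with top-left entry 1.
k, x = 3, 1.7
v = x ** np.arange(k + 1)
M = np.array([[x ** (i + j) for j in range(k + 1)] for i in range(k + 1)])
assert np.allclose(M, np.outer(v, v))                 # rank-one structure
assert np.min(np.linalg.eigvalsh(M)) >= -1e-9         # M is PSD
assert np.isclose(M[0, 0], 1.0)                       # M_{00} = 1
```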
The transformation ξ 7→ M(ξ) : R2k+1 → Sk+1 is a linear embedding; the image of Πk under
this embedding is the closure of the convex hull of values of the curve

P (x) = M(p(x)).

It follows that the image Π̂_k of Π_k under the mapping M possesses the following properties:
(i) Π̂_k belongs to the image of M, i.e., to the subspace H_k of S^{k+1} comprised of Hankel
matrices – matrices with entries depending on the sum of indices only:

    H_k = { X ∈ S^{k+1} | i + j = i' + j' ⇒ X_{ij} = X_{i'j'} };

(ii) Π̂_k ⊂ S^{k+1}_+ (indeed, every matrix M(p(x)) is positive semidefinite);
(iii) For every X ∈ Π̂_k one has X_{00} = 1.
It turns out that properties (i) – (iii) characterize Π̂_k:

(G) A symmetric (k + 1) × (k + 1) matrix X belongs to Π̂_k if and only if it possesses
the properties (i) – (iii): its entries depend on the sum of indices only (i.e., X ∈ H_k), X
is positive semidefinite, and X_{00} = 1.
(G) is a particular case of the classical results related to the so called “moment problem”. The
goal of the subsequent exercises is to give a simple alternative proof of this statement.
Note that the mapping M∗: S^{k+1} → R^{2k+1} conjugate to the mapping M is as follows:

    (M∗X)_l = Σ_{i+j=l} X_{ij}, l = 0, 1, ..., 2k,

and we know something about this mapping: Example 21a of this Lecture says that

(H) The image of the cone S^{k+1}_+ under the mapping M∗ is exactly the cone of
coefficients of polynomials of degree ≤ 2k which are nonnegative on the entire real
line.

Exercise 3.22 Derive (G) from (H).

(G), among other useful things, implies the result we need:

(I) Let π(x) = π_0 + π_1 x + π_2 x^2 + ... + π_{2k} x^{2k} be a convex polynomial of degree 2k.
Then the epigraph of π is SDr:

    {(t, x) ∈ R^2 | t ≥ π(x)} = X[π],

where X[π] is the set of all pairs (t, x) for which there exist x_2, ..., x_{2k} such that

    [ 1        x        x_2      x_3      ···  x_k     ]
    [ x        x_2      x_3      x_4      ···  x_{k+1} ]
    [ x_2      x_3      x_4      x_5      ···  x_{k+2} ]  ⪰ 0
    [ x_3      x_4      x_5      x_6      ···  x_{k+3} ]
    [ ···      ···      ···      ···      ···  ···     ]
    [ x_k      x_{k+1}  x_{k+2}  x_{k+3}  ···  x_{2k}  ]

and

    π_0 + π_1 x + π_2 x_2 + π_3 x_3 + ... + π_{2k} x_{2k} ≤ t.

Exercise 3.23 Prove (I).

Note that the set X[π] makes sense for an arbitrary polynomial π, not necessarily a convex
one. What is the projection of this set onto the (t, x)-plane? The answer is surprisingly nice:
it is the convex hull of the epigraph of the polynomial π!

Exercise 3.24 Let π(x) = π_0 + π_1 x + ... + π_{2k} x^{2k} with π_{2k} > 0, and let

    G[π] = Conv{(t, x) ∈ R^2 | t ≥ π(x)}

be the convex hull of the epigraph of π (the set of all convex combinations of points from the
epigraph of π).
1) Prove that G[π] is a closed convex set.
2) Prove that
G[π] = X [π].

3.8.3 Around the Lovasz capacity number and semidefinite relaxations of combinatorial problems
Recall that the Lovasz capacity number Θ(Γ) of an n-node graph Γ is the optimal value of the
following semidefinite program:

    min_{λ,x} { λ : λI_n − L(x) ⪰ 0 }        (L)

where the symmetric n × n matrix L(x) is defined as follows:

• the dimension of x is equal to the number of arcs in Γ, and the coordinates of x are indexed
by these arcs;

• the element of L(x) in an “empty” cell ij (one for which the nodes i and j are not linked
by an arc in Γ) is 1;

• the elements of L(x) in a pair of symmetric “non-empty” cells ij, ji (those for which
the nodes i and j are linked by an arc) are equal to the coordinate of x indexed by the
corresponding arc.

As we remember, the importance of Θ(Γ) comes from the fact that Θ(Γ) is a computable upper
bound on the stability number α(Γ) of the graph. We have seen also that the Shor semidefinite
relaxation of the problem of finding the stability number of Γ leads to a "seemingly stronger"
upper bound on α(Γ), namely, the optimal value σ(Γ) in the semidefinite program

    min_{λ,μ,ν} { λ :  [ λ               −(1/2)(e + μ)^T ]
                       [ −(1/2)(e + μ)    A(μ, ν)        ]  ⪰ 0 }        (Sh)

where e = (1, ..., 1)T ∈ Rn and A(µ, ν) is the matrix as follows:

• the dimension of ν is equal to the number of arcs in Γ, and the coordinates of ν are indexed
by these arcs;

• the diagonal entries of A(µ, ν) are µ1 , ..., µn ;

• the off-diagonal entries of A(µ, ν) corresponding to “empty cells” are zeros;



• the off-diagonal entries of A(µ, ν) in a pair of symmetric “non-empty” cells ij, ji are equal
to the coordinate of ν indexed by the corresponding arc.

We have seen that (L) can be obtained from (Sh) when the variables µi are set to 1, so that
σ(Γ) ≤ Θ(Γ). Thus,
α(Γ) ≤ σ(Γ) ≤ Θ(Γ). (3.8.2)

Exercise 3.25 1) Prove that if (λ, µ, ν) is a feasible solution to (Sh), then there exists a sym-
metric n × n matrix A such that λIn − A  0 and at the same time the diagonal entries of
A and the off-diagonal entries in the “empty cells” are ≥ 1. Derive from this observation that
the optimal value in (Sh) is not less than the optimal value Θ0 (Γ) in the following semidefinite
program:
    min_{λ,X} { λ : λI_n − X ⪰ 0, X_{ij} ≥ 1 whenever i, j are not adjacent in Γ }        (Sc)

2) Prove that Θ0 (Γ) ≥ α(Γ).

Hint: Demonstrate that if all entries of a symmetric k × k matrix are ≥ 1, then the maximum
eigenvalue of the matrix is at least k. Derive from this observation and the Interlacing
Eigenvalues Theorem (Exercise 3.4.(ii)) that if a symmetric matrix contains a principal k × k
submatrix with entries ≥ 1, then the maximum eigenvalue of the matrix is at least k.
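The first claim of the hint follows from λ_max(M) ≥ e^T Me / e^T e with e = (1, ..., 1)^T, since e^T Me ≥ k^2 when all entries are ≥ 1. A numerical illustration (random matrices of our own choosing):

```python
import numpy as np

# If every entry of a symmetric k x k matrix M is >= 1, then
# lam_max(M) >= k, because e^T M e >= k^2 and e^T e = k.
rng = np.random.default_rng(6)
k = 5
M = 1.0 + rng.random((k, k))     # entries in [1, 2]
M = (M + M.T) / 2                # symmetrize; entries stay >= 1
assert np.all(M >= 1.0)
assert np.max(np.linalg.eigvalsh(M)) >= k - 1e-9
```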

The upper bound Θ0 (Γ) on the stability number of Γ is called the Schrijver capacity of graph Γ.
Note that we have
α(Γ) ≤ Θ0 (Γ) ≤ σ(Γ) ≤ Θ(Γ).
A natural question is which inequalities in this chain may happen to be strict. In order to answer
it, we have computed the quantities in question for about 2,000 random graphs with the number of
nodes varying from 8 to 20. In our experiments, the stability number was computed – by brute force
– for graphs with ≤ 12 nodes; for all these graphs, the integral part of Θ(Γ) was equal to α(Γ).
Furthermore, Θ(Γ) was non-integer in 156 of our 2,000 experiments, and in 27 of these 156 cases
the Schrijver capacity number Θ0 (Γ) was strictly less than Θ(Γ). The quantities Θ0 (·), σ(·), Θ(·)
for 13 of these 27 cases are listed in the table below:

Graph # # of nodes α Θ0 σ Θ
1 20 ? 4.373 4.378 4.378
2 20 ? 5.062 5.068 5.068
3 20 ? 4.383 4.389 4.389
4 20 ? 4.216 4.224 4.224
5 13 ? 4.105 4.114 4.114
6 20 ? 5.302 5.312 5.312
7 20 ? 6.105 6.115 6.115
8 20 ? 5.265 5.280 5.280
9 9 3 3.064 3.094 3.094
10 12 4 4.197 4.236 4.236
11 8 3 3.236 3.302 3.302
12 12 4 4.236 4.338 4.338
13 10 3 3.236 3.338 3.338

Graphs # 13 (left) and # 8 (right); all nodes are on circumferences.

Exercise 3.26 Compute the stability numbers of the graphs # 8 and # 13.

Exercise 3.27 Prove that σ(Γ) = Θ(Γ).

The chromatic number ξ(Γ) of a graph Γ is the minimal number of colours such that one can
colour the nodes of the graph in such a way that no two adjacent (i.e., linked by an arc) nodes
get the same colour27) . The complement Γ̄ of a graph Γ is the graph with the same set of nodes,
and two distinct nodes in Γ̄ are linked by an arc if and only if they are not linked by an arc in
Γ.
Lovasz proved that for every graph

Θ(Γ) ≤ ξ(Γ̄) (∗)

so that
α(Γ) ≤ Θ(Γ) ≤ ξ(Γ̄)

(Lovasz’s Sandwich Theorem).

Exercise 3.28 Prove (*).

Hint: Let us colour the vertices of Γ in k = ξ(Γ̄) colours in such a way that no two vertices
of the same colour are adjacent in Γ̄, i.e., every two nodes of the same colour are adjacent
in Γ. Set λ = k, and let x be such that

    [L(x)]_ij = { −(k − 1),  i ≠ j, i, j are of the same colour;  1, otherwise }.

Prove that (λ, x) is a feasible solution to (L).


27)
E.g., when colouring a geographic map, it is convenient not to use the same colour for a pair of countries
with a common border. It was observed that to meet this requirement for actual maps, 4 colours are sufficient.
The famous “4-colour” Conjecture claims that this is so for every geographic map. Mathematically, you can
represent a map by a graph, where the nodes represent the countries, and two nodes are linked by an arc if and
only if the corresponding countries have common border. A characteristic feature of such a graph is that it is
planar – you may draw it on 2D plane in such a way that the arcs will not cross each other, meeting only at the
nodes. Thus, mathematical form of the 4-colour Conjecture is that the chromatic number of any planar graph is
at most 4. This is indeed true, but it took about 100 years to prove the conjecture!

Now let us switch from the Lovasz capacity number to semidefinite relaxations of combinatorial
problems, specifically to those of maximizing a quadratic form over the vertices of the unit cube,
and over the entire cube:
n o
(a) max xT Ax : x ∈ Vrt(Cn ) = {x ∈ Rn | xi = ±1 ∀i}
x n o (3.8.3)
(b) max xT Ax : x ∈ Cn = {x ∈ Rn | −1 ≤ xi ≤ 1, ∀i}
x

The standard semidefinite relaxations of the problems are, respectively, the problems

    (a) max_X { Tr(AX) : X ⪰ 0, X_ii = 1, i = 1, ..., n },
    (b) max_X { Tr(AX) : X ⪰ 0, X_ii ≤ 1, i = 1, ..., n };        (3.8.4)

the optimal value of a relaxation is an upper bound for the optimal value of the respective
original problem.

Exercise 3.29 Let A ∈ Sn . Prove that

    max_{x: x_i=±1, i=1,...,n} x^T Ax ≥ Tr(A).

Develop an efficient algorithm which, given A, generates a point x with coordinates ±1 such that
xT Ax ≥ Tr(A).
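One possible greedy scheme (our own sketch, offered as an assumption; the exercise asks the reader to design such an algorithm): fix the signs x_1, x_2, ... one at a time, each time choosing x_i so that the newly added cross terms 2 x_i Σ_{j<i} a_ij x_j are nonnegative; then x^T Ax = Tr(A) + 2 Σ_{i>j} a_ij x_i x_j ≥ Tr(A).

```python
import numpy as np

def greedy_signs(A):
    """Pick x_i = sign of sum_{j<i} a_ij x_j, so that every group of
    cross terms added at step i is nonnegative; hence x^T A x >= Tr(A)."""
    n = A.shape[0]
    x = np.ones(n)
    for i in range(1, n):
        s = A[i, :i] @ x[:i]
        x[i] = 1.0 if s >= 0 else -1.0
    return x

rng = np.random.default_rng(4)
for _ in range(100):
    A = rng.standard_normal((6, 6)); A = A + A.T
    x = greedy_signs(A)
    assert np.all(np.abs(x) == 1.0)
    assert x @ A @ x >= np.trace(A) - 1e-9
```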

Exercise 3.30 Prove that if the diagonal entries of A are nonnegative, then the optimal values
in (3.8.4.a) and (3.8.4.b) are equal to each other. Thus, in the case in question, the relaxations
“do not understand” whether we are maximizing over the vertices of the cube or over the entire
cube.

Exercise 3.31 Prove that the problems dual to (3.8.4.a, b) are, respectively,

    (a) min_Λ { Tr(Λ) : Λ ⪰ A, Λ is diagonal },
    (b) min_Λ { Tr(Λ) : Λ ⪰ A, Λ ⪰ 0, Λ is diagonal };        (3.8.5)

the optimal values in these problems are equal to those of the respective problems in (3.8.4) and
are therefore upper bounds on the optimal values of the respective combinatorial problems from
(3.8.3).

The latter claim is quite transparent, since the problems (3.8.5) can be obtained as follows:
• In order to bound from above the optimal value of a quadratic form xT Ax on a given set
S, we look at those quadratic forms xT Λx which can be easily maximized over S. For the case
of S = Vrt(Cn ) these are quadratic forms with diagonal matrices Λ, and for the case of S = Cn
these are quadratic forms with diagonal and positive semidefinite matrices Λ; in both cases, the
respective maxima are merely Tr(Λ).
• Having specified a family F of quadratic forms x^T Λx "easily optimizable over S", we then
look at those forms from F which dominate everywhere the original quadratic form x^T Ax, and
take among these forms the one with the minimal max_{x∈S} x^T Λx, thus coming to the problem

    min_Λ { max_{x∈S} x^T Λx : Λ ⪰ A, Λ ∈ F }.        (!)

It is evident that the optimal value in this problem is an upper bound on max_{x∈S} x^T Ax. It is also
immediately seen that in the case of S = Vrt(C_n) the problem (!), with F specified as the set
D of all diagonal matrices, is equivalent to (3.8.5.a), while in the case of S = C_n the problem (!), with F
specified as the set D_+ of positive semidefinite diagonal matrices, is nothing but (3.8.5.b).
Given the direct and quite transparent road leading to (3.8.5.a, b), we can try to move a
little bit further along this road. To this end observe that there are trivial upper bounds on the
maximum of an arbitrary quadratic form xT Λx over Vrt(Cn ) and Cn , specifically:
    max_{x∈Vrt(C_n)} x^T Λx ≤ Tr(Λ) + Σ_{i≠j} |Λ_ij|,    max_{x∈C_n} x^T Λx ≤ Σ_{i,j} |Λ_ij|.
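The first of these bounds can be verified by brute force for small n (a sanity check on random symmetric matrices):

```python
import numpy as np
from itertools import product

# Brute-force check of max_{x in Vrt(C_n)} x^T L x <= Tr(L) + sum_{i!=j}|L_ij|
# by enumerating all 2^n vertices of the cube for n = 4.
rng = np.random.default_rng(5)
n = 4
for _ in range(20):
    L = rng.standard_normal((n, n)); L = L + L.T
    best = max(np.array(x) @ L @ np.array(x)
               for x in product([-1.0, 1.0], repeat=n))
    bound = np.trace(L) + np.abs(L).sum() - np.abs(np.diag(L)).sum()
    assert best <= bound + 1e-9
```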

For the above families D, D+ of matrices Λ for which xT Λx is “easily optimizable” over Vrt(Cn ),
respectively, Cn , the above bounds are equal to the precise values of the respective maxima. Now
let us update (!) as follows: we eliminate the restriction Λ ∈ F, replacing simultaneously the
objective max_{x∈S} xT Λx with its upper bound, thus coming to the pair of problems

    (a)  min_Λ { Tr(Λ) + Σ_{i≠j} |Λij| : Λ ⪰ A }        [S = Vrt(Cn)]
                                                                        (3.8.6)
    (b)  min_Λ { Σ_{i,j} |Λij| : Λ ⪰ A }                [S = Cn]

From the origin of the problems it is clear that they still yield upper bounds on the optimal
values of the respective problems (3.8.3.a, b), and that these bounds are at least as good as the
bounds yielded by the standard relaxations (3.8.4.a, b):

(a) Opt(3.8.3.a) ≤ Opt(3.8.6.a) ≤(∗) Opt(3.8.4.a) = Opt(3.8.5.a),
                                                                        (3.8.7)
(b) Opt(3.8.3.b) ≤ Opt(3.8.6.b) ≤(∗∗) Opt(3.8.4.b) = Opt(3.8.5.b),

(the labels (∗), (∗∗) mark the second inequality in each chain),

where Opt(·) means the optimal value of the corresponding problem.
Indeed, consider the problem (3.8.6.a). Whenever Λ is a feasible solution of this problem,
the quadratic form xT Λx dominates everywhere the form xT Ax, so that max_{x∈Vrt(Cn)} xT Ax ≤
max_{x∈Vrt(Cn)} xT Λx; the latter quantity, in turn, is upper-bounded by Tr(Λ) + Σ_{i≠j} |Λij|, whence
the value of the objective of the problem (3.8.6.a) at every feasible solution of the problem
upper-bounds the quantity max_{x∈Vrt(Cn)} xT Ax. Thus, the optimal value in (3.8.6.a) is an upper
bound on the maximum of xT Ax over the vertices of the cube Cn. At the same time,
when passing from the (dual form of the) standard relaxation (3.8.5.a) to our new bounding
problem (3.8.6.a), we only extend the feasible set and do not vary the objective on the “old”
feasible set; as a result of such a modification, the optimal value may only decrease. Thus,
the upper bound on the maximum of xT Ax over Vrt(Cn) yielded by (3.8.6.a) is at least as
good as the (equal to each other) bounds yielded by the standard relaxations (3.8.4.a),
(3.8.5.a), as required in (3.8.7.a). Similar reasoning proves (3.8.7.b).
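The dominance argument above lends itself to a quick numeric sanity check. The sketch below is an illustration only: the choice Λ = A + I is an assumed, trivially feasible solution of (3.8.6.a) (not the optimal one), and the maximum of xT Ax over the vertices of the cube is brute-forced and compared against the bound Tr(Λ) + Σ_{i≠j}|Λij|:

```python
import itertools
import random

random.seed(0)
n = 4
# a random symmetric matrix A
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i, n):
        A[i][j] = A[j][i] = random.uniform(-1.0, 1.0)

def quad(M, x):
    # the quadratic form x^T M x
    return sum(M[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# brute-force maximum of x^T A x over the 2^n vertices of the cube
opt_vrt = max(quad(A, x) for x in itertools.product((-1.0, 1.0), repeat=n))

# Lambda = A + I satisfies Lambda - A = I >= 0, i.e. Lambda is feasible for (3.8.6.a)
Lam = [[A[i][j] + (1.0 if i == j else 0.0) for j in range(n)] for i in range(n)]
bound = sum(Lam[i][i] for i in range(n)) \
      + sum(abs(Lam[i][j]) for i in range(n) for j in range(n) if i != j)

assert opt_vrt <= bound + 1e-12
```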

Note that problems (3.8.6) are equivalent to semidefinite programs and thus are of the same status
of “computational tractability” as the standard SDP relaxations (3.8.5) of the combinatorial
problems in question. At the same time, our new bounding problems are more difficult than the
standard SDP relaxations. Can we justify this by getting an improvement in the quality of the
bounds?
Exercise 3.32 Find out whether the problems (3.8.6.a, b) yield better bounds than the respective
problems (3.8.5.a, b), i.e., whether the inequalities (*), (**) in (3.8.7) can be strict.
Hint: Look at the problems dual to (3.8.6.a, b).

Exercise 3.33 Let D be a given subset of Rn+. Consider the following pair of optimization
problems:

    max_x { xT Ax : (x1², x2², ..., xn²)T ∈ D }                         (P)
    max_X { Tr(AX) : X ⪰ 0, Dg(X) ∈ D }                                 (R)

(Dg(X) is the diagonal of a square matrix X). Note that when D = {(1, ..., 1)T }, (P ) is the
problem of maximizing a quadratic form over the vertices of Cn , while (R) is the standard
semidefinite relaxation of (P ); when D = {x ∈ Rn | 0 ≤ xi ≤ 1 ∀i}, (P ) is the problem of
maximizing a quadratic form over the cube Cn , and (R) is the standard semidefinite relaxation
of the latter problem.
1) Prove that if D is semidefinite-representable, then (R) can be reformulated as a semidefi-
nite program.
2) Prove that (R) is a relaxation of (P ), i.e., that

Opt(P ) ≤ Opt(R).

3) [Nesterov] Let A ⪰ 0. Prove that then

    Opt(P) ≤ Opt(R) ≤ (π/2) Opt(P).

Hint: Use Nesterov’s π/2 Theorem (Theorem 3.4.2).

Exercise 3.34 Let A ∈ Sm+. Prove that

    max{ xT Ax | xi = ±1, i = 1, ..., m } = max{ (2/π) Σ_{i,j=1}^m aij asin(Xij) | X ⪰ 0, Xii = 1, i = 1, ..., m }.

3.8.4 Around operator norms


Exercise 3.35 1) Let T ⊂ Rn be a convex compact set with a nonempty interior, and let

    T̂ = cl{[x; t] ∈ Rn × R : t > 0, t−1x ∈ T}

be the closed conic hull of T. Prove that T̂ is a regular cone such that

    T = {x : [x; 1] ∈ T̂},

and the cone dual to T̂ is

    T̂∗ = {[g; s] : s ≥ φT(−g)},

where

    φT(y) = max_{x∈T} xT y

is the support function of T.
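For a concrete feeling of the support function φT, here is a small sketch for an assumed choice of T – the unit box [0, 1]^n, which is not part of the exercise – where φT(y) = Σ_i max(0, yi); the closed form is checked against brute-force maximization over the vertices of the box:

```python
import itertools
import random

random.seed(1)
n = 3

def phi_box(y):
    # support function of the unit box [0,1]^n: max_{x in T} x^T y = sum_i max(0, y_i)
    return sum(max(0.0, yi) for yi in y)

for _ in range(200):
    y = [random.uniform(-2.0, 2.0) for _ in range(n)]
    # a linear form attains its maximum over the box at one of the 2^n vertices
    brute = max(sum(v[i] * y[i] for i in range(n))
                for v in itertools.product((0.0, 1.0), repeat=n))
    assert abs(phi_box(y) - brute) < 1e-12
```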

2) Let T be a convex compact subset of Rm+ with int T ≠ ∅, and let

    X[T] = {X ∈ Sm+ : Dg(X) ∈ T},

where, as always, Dg(X) is the vector of diagonal entries of matrix X. Prove that X[T]
is a convex compact subset of Sm+, the closed conic hull X̂[T] of X[T] is the regular cone

    X̂[T] = {(X, t) ∈ Sm+ × R : [Dg(X); t] ∈ T̂},

where T̂ is the closed conic hull of T, and the cone dual to X̂[T] is

    X̂∗[T] = {[Y; r] ∈ Sm × R : ∃h ∈ Rm : Diag{h} + Y ⪰ 0, r ≥ φT(h)},

with the same φT(·) as in 1).

3) Under the premise and in the notation of Theorem 3.4.3, let T possess a nonempty interior.
Prove that

    (a) s_*(A) = −min_u {φT(−u) : A ⪰ Diag{u}}
                                                                        (!)
    (b) s^*(A) = min_w {φT(w) : Diag{w} ⪰ A}

4) In the situation of 3), let the matrix A from 3) be of the special structure

        [ 0    B  ]
    A = [ BT   0  ]

with a p × q matrix B. Prove that in this case

    a. One has s_*(A) = −s^*(A) and m_*(A) = −m^*(A);

    b. One has m^*(A) = max_{x∈T} |xT Ax| ≤ s^*(A) = min_h {φT(h) : Diag{h} ⪰ A} ≤ (π/(4−π)) m^*(A);

c. Derive from Theorem 3.4.3 the following “partial refinement” of Theorem 3.4.6:

Theorem 3.8.1 Consider a special case of the situation considered in Theorem 3.4.6,
specifically, the one where in (3.4.29) K = dim z, zT Rk z = zk², k ≤ K, and similarly
L = dim w and wT Sℓ w = wℓ², ℓ ≤ L. Then the efficiently computable upper bound

    Φ̄π→θ(C) = min_{λ≥0, µ≥0} { φR(λ) + φS(µ) :  [ Diag{µ}       ½ QT CP ]
                                                 [ ½ PT CT Q     Diag{λ} ]  ⪰ 0 }       (3.8.8)

(cf. (3.4.30)) on the operator norm Φπ→θ(C) = max_{x: π(x)≤1} θ(Cx) of a matrix C is tight
within an absolute constant factor:

    Φπ→θ(C) ≤ Φ̄π→θ(C) ≤ (π/(4−π)) Φπ→θ(C),                              (3.8.9)

cf. (3.4.31).
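For the simplest specialization of the norm Φπ→θ covered by Theorem 3.8.1 – π(x) = ‖x‖∞, θ(y) = ‖y‖1, an assumed concrete choice – the norm itself can be brute-forced for tiny matrices, since the maximum of the convex function ‖Cx‖1 over the box is attained at a vertex:

```python
import itertools
import random

random.seed(2)
p, q = 3, 3
C = [[random.uniform(-1.0, 1.0) for _ in range(q)] for _ in range(p)]

def norm_inf_to_1(C):
    # Phi(C) = max{ ||C x||_1 : ||x||_inf <= 1 }, attained at some x in {-1,1}^q
    best = 0.0
    for x in itertools.product((-1.0, 1.0), repeat=q):
        best = max(best, sum(abs(sum(C[i][j] * x[j] for j in range(q)))
                             for i in range(p)))
    return best

val = norm_inf_to_1(C)
# sanity: the norm dominates ||C e_j||_1 for every coordinate vector e_j
for j in range(q):
    assert val >= sum(abs(C[i][j]) for i in range(p)) - 1e-12
```

Computing this quantity exactly is NP-hard in general, which is exactly why the tractable bound (3.8.8) is of interest.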

3.8.5 Around Lyapunov Stability Analysis

A natural mathematical model of a swing is the linear time invariant dynamic system

    y′′(t) = −ω² y(t) − 2µ y′(t)                                        (S)

with positive ω² and 0 ≤ µ < ω (the term 2µy′(t) represents friction). A general solution to this
equation is

    y(t) = a cos(ω′t + φ0) exp{−µt},   ω′ = √(ω² − µ²),

with free parameters a and φ0 , i.e., this is a decaying oscillation. Note that the equilibrium

y(t) ≡ 0

is stable – every solution to (S) converges to 0, along with its derivative, exponentially fast.
After stability is observed, an immediate question arises: how is it possible to swing on
a swing? Everybody knows from practice that it is possible. On the other hand, since the
equilibrium is stable, it looks as if it was impossible to swing, without somebody’s assistance,
for a long time. The reason which makes swinging possible is highly nontrivial – parametric
resonance. A swinging child does not sit on the swing in a once forever fixed position; what he
does is shown below:

A swinging child

As a result, the “effective length” of the swing l – the distance from the point where the rope
is fixed to the center of gravity of the system – is varying with time: l = l(t). Basic mechanics
says that ω² = g/l, g being the gravity acceleration. Thus, the actual swing is a time-varying
linear dynamic system:

    y′′(t) = −ω²(t) y(t) − 2µ y′(t),                                    (S′)

and it turns out that for properly varied ω(t) the equilibrium y(t) ≡ 0 is not stable. A swinging
child is just varying l(t) in a way which results in an unstable dynamic system (S′), and this

instability is in fact what the child enjoys...


0.04 0.1

0.08
0.03

0.06

0.02
0.04

0.02
0.01

0
−0.02

−0.04
−0.01

−0.06

−0.02
−0.08

−0.03 −0.1
0 2 4 6 8 10 12 14 16 18 0 2 4 6 8 10 12 14 16 18

g
y 00 (t) = − l+h sin(2ωt) y(t) − 2µy 0 (t), y(0) = 0, y 0 (0) = 0.1
h i
m 1
p
l = 1 [m], g = 10 [ sec 2 ], µ = 0.15[ sec ], ω = g/l
Graph of y(t)
left: h = 0.125: this child is too small; he should grow up...
right: h = 0.25: this child can already swing...

Exercise 3.36 Assume that you are given parameters l (“nominal length of the swing rope”),
h > 0 and µ > 0, and it is known that a swinging child can vary the “effective length” of the rope
within the bounds l ± h, i.e., his/her movement is governed by the uncertain linear time-varying
system
    y′′(t) = −a(t) y(t) − 2µ y′(t),   a(t) ∈ [ g/(l+h), g/(l−h) ].
Try to identify the domain in the 3D-space of parameters l, µ, h where the system is stable, as
well as the domain where its stability can be certified by a quadratic Lyapunov function. What
is “the difference” between these two domains?
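Before attacking the exercise analytically, one can reproduce the two pictures above numerically. The sketch below is a rough illustration: the integrator, step size and the “last 6 seconds” window are assumed choices. It integrates the swing equation with the data from the figure and confirms that h = 0.25 excites the parametric resonance while h = 0.125 does not:

```python
import math

def peak_amplitude(h, t_end=18.0, dt=1e-3):
    # integrate y'' = -(g/(l + h*sin(2*w*t))) y - 2*mu*y', y(0)=0, y'(0)=0.1
    g, l, mu = 10.0, 1.0, 0.15
    w = math.sqrt(g / l)

    def f(t, y, v):
        return v, -(g / (l + h * math.sin(2.0 * w * t))) * y - 2.0 * mu * v

    y, v, t, peak = 0.0, 0.1, 0.0, 0.0
    while t < t_end:
        # classical 4th-order Runge-Kutta step
        k1y, k1v = f(t, y, v)
        k2y, k2v = f(t + dt / 2, y + dt / 2 * k1y, v + dt / 2 * k1v)
        k3y, k3v = f(t + dt / 2, y + dt / 2 * k2y, v + dt / 2 * k2v)
        k4y, k4v = f(t + dt, y + dt * k3y, v + dt * k3v)
        y += dt / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
        if t > t_end - 6.0:          # record the amplitude over the last 6 seconds
            peak = max(peak, abs(y))
    return peak

small, big = peak_amplitude(0.125), peak_amplitude(0.25)
assert big > small                   # the larger modulation destabilizes the swing
```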

3.8.6 Around ellipsoidal approximations


Exercise 3.37 Prove the Löwner – Fritz John Theorem (Theorem 3.7.1).

3.8.6.1 More on ellipsoidal approximations of sums of ellipsoids


The goal of two subsequent exercises is to get in an alternative way the problem (Õ) “generating”
a parametric family of ellipsoids containing the arithmetic sum of m given ellipsoids (Section
3.7.4).
Exercise 3.38 Let Pi be nonsingular, and Λi be positive definite n × n matrices, i = 1, ..., m.
Prove that for every collection x1, ..., xm of vectors from Rn one has

    [x1 + ... + xm]T [ Σ_{i=1}^m [PiT]^{-1} Λi^{-1} Pi^{-1} ]^{-1} [x1 + ... + xm] ≤ Σ_{i=1}^m [xi]T Pi Λi PiT xi.      (3.8.10)

Hint: Consider the (nm + n) × (nm + n) symmetric matrix

        [ P1 Λ1 P1T                                          In ]
        [              ...                                   ...]
    A = [                        Pm Λm PmT                   In ]
        [ In           ···       In     Σ_{i=1}^m [PiT]^{-1} Λi^{-1} Pi^{-1} ]

and apply twice the Schur Complement Lemma: first time – to prove that the matrix is
positive semidefinite, and the second time – to get from the latter fact the desired inequality.
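In the scalar case n = 1, inequality (3.8.10) reduces to the Cauchy–Schwarz inequality (x1 + ... + xm)² ≤ (Σi Pi²Λi xi²)(Σi 1/(Pi²Λi)), which a randomized sketch can confirm (an illustration of the statement, not a substitute for the Schur-complement proof):

```python
import random

random.seed(3)
for _ in range(1000):
    m = random.randint(2, 6)
    P = [random.choice((-1.0, 1.0)) * random.uniform(0.1, 2.0) for _ in range(m)]
    Lam = [random.uniform(0.1, 2.0) for _ in range(m)]          # positive "matrices"
    x = [random.uniform(-1.0, 1.0) for _ in range(m)]
    # (3.8.10) with n = 1: (sum x_i)^2 / sum 1/(P_i^2 Lam_i) <= sum P_i^2 Lam_i x_i^2
    lhs = sum(x) ** 2 / sum(1.0 / (P[i] ** 2 * Lam[i]) for i in range(m))
    rhs = sum(P[i] ** 2 * Lam[i] * x[i] ** 2 for i in range(m))
    assert lhs <= rhs + 1e-9
```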

Exercise 3.39 Assume you are given m full-dimensional ellipsoids centered at the origin

    Wi = {x ∈ Rn | xT Bi x ≤ 1}, i = 1, ..., m    [Bi ≻ 0]

in Rn.
1) Prove that for every collection Λ of positive definite n × n matrices Λi such that

    Σ_i λmax(Λi) ≤ 1,

the ellipsoid

    EΛ = {x | xT [ Σ_{i=1}^m Bi^{-1/2} Λi^{-1} Bi^{-1/2} ]^{-1} x ≤ 1}

contains the sum W1 + ... + Wm of the ellipsoids Wi.
2) Prove that in order to find the smallest volume ellipsoid in the family {EΛ }Λ defined in
1) it suffices to solve the semidefinite program

    maximize t  s.t.
      (a) t ≤ Det^{1/n}(Z),
      (b) Diag{Λ1, ..., Λm} ⪰ [ Bi^{-1/2} Z Bj^{-1/2} ]_{i,j=1}^m
          (the right-hand side is the m × m block matrix with n × n blocks Bi^{-1/2} Z Bj^{-1/2}),
      (c) Z ⪰ 0,
      (d) Λi ⪯ λi In, i = 1, ..., m,
      (e) Σ_{i=1}^m λi ≤ 1
                                                                        (3.8.11)
in variables Z, Λi ∈ Sn , t, λi ∈ R; the smallest volume ellipsoid in the family {EΛ }Λ is EΛ∗ ,
where Λ∗ is the “Λ-part” of an optimal solution of the problem.
Hint: Use example 20c.

3) Demonstrate that the optimal value in (3.8.11) remains unchanged when the matrices Λi
are further restricted to be scalar: Λi = λi In . Prove that with this additional constraint problem
(3.8.11) becomes equivalent to problem (Õ) from Section 3.7.4.

Remark 3.8.1 Exercise 3.39 demonstrates that the approximating scheme for solving problem
(O) presented in Proposition 3.7.4 is equivalent to the following one:
    Given m positive reals λi with unit sum, one defines the ellipsoid
    E(λ) = {x | xT [ Σ_{i=1}^m λi^{-1} Bi^{-1} ]^{-1} x ≤ 1}. This ellipsoid contains the
    arithmetic sum W of the ellipsoids {x | xT Bi x ≤ 1}, and in order to approximate the
    smallest volume ellipsoid containing W, we merely minimize Det(E(λ)) over λ varying in
    the standard simplex {λ ≥ 0, Σ_i λi = 1}.

In this form, the approximation scheme in question was proposed by Schweppe (1975).
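In dimension n = 1 the containment claimed in Remark 3.8.1 is easy to check directly: Wi is the interval of radius ri = Bi^{-1/2}, W has radius Σ ri, and E(λ) has radius (Σ ri²/λi)^{1/2}. A randomized sketch under this assumed 1-D reduction:

```python
import math
import random

random.seed(4)
for _ in range(1000):
    m = random.randint(2, 5)
    r = [random.uniform(0.1, 2.0) for _ in range(m)]            # r_i = B_i^(-1/2)
    lam = [random.uniform(0.01, 1.0) for _ in range(m)]
    s = sum(lam)
    lam = [v / s for v in lam]                                  # normalize to the simplex
    radius_W = sum(r)                                           # radius of W_1+...+W_m
    radius_E = math.sqrt(sum(ri * ri / li for ri, li in zip(r, lam)))
    assert radius_W <= radius_E + 1e-9                          # W lies inside E(lam)
    # at lam_i proportional to r_i the ellipsoid E(lam) shrinks exactly onto W
    lam_opt = [ri / radius_W for ri in r]
    radius_opt = math.sqrt(sum(ri * ri / li for ri, li in zip(r, lam_opt)))
    assert abs(radius_opt - radius_W) < 1e-9
```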

Exercise 3.40 Let Ai be nonsingular n × n matrices, i = 1, ..., m, and let Wi = {x = Ai u | uT u ≤ 1}
be the associated ellipsoids in Rn. Let ∆m = {λ ∈ Rm+ | Σ_i λi = 1}. Prove that
1) Whenever λ ∈ ∆m and A ∈ Mn,n is such that

    AAT ⪰ F(λ) ≡ Σ_{i=1}^m λi^{-1} Ai AiT,

the ellipsoid E[A] = {x = Au | uT u ≤ 1} contains W = W1 + ... + Wm.

Hint: Use the result of Exercise 3.39.1)

2) Whenever A ∈ Mn,n is such that

    AAT ⪯ F(λ)  ∀λ ∈ ∆m,

the ellipsoid E[A] is contained in W1 + ... + Wm, and vice versa.


Hint: Note that

    ( Σ_{i=1}^m |αi| )² = min_{λ∈∆m} Σ_{i=1}^m αi²/λi

and use statement (F) from Section 3.7.4.

3.8.6.2 “Simple” ellipsoidal approximations of sums of ellipsoids


Let Wi = {x = Ai u | uT u ≤ 1}, i = 1, ..., m, be full-dimensional ellipsoids in Rn (so that Ai
are nonsingular n × n matrices), and let W = W1 + ... + Wm be the arithmetic sum of these
ellipsoids. Observe that W is the image of the set
 
    B = { u = [u[1]; ...; u[m]] ∈ Rnm | uT[i] u[i] ≤ 1, i = 1, ..., m }

under the linear mapping

    u ↦ Au = Σ_{i=1}^m Ai u[i] : Rnm → Rn.

It follows that

    Whenever an nm-dimensional ellipsoid 𝒲 contains B, the set A(𝒲), which is an
    n-dimensional ellipsoid (why?), contains W, and whenever 𝒲 is contained in B, the
    ellipsoid A(𝒲) is contained in W.

In view of this observation, we can try to approximate W from inside and from outside by the
ellipsoids W− ≡ A(𝒲−) and W+ ≡ A(𝒲+), where 𝒲− and 𝒲+ are, respectively, the largest
and the smallest volume nm-dimensional ellipsoids contained in / containing B.

Exercise 3.41 1) Prove that

    𝒲− = {u ∈ Rnm | Σ_{i=1}^m uT[i]u[i] ≤ 1},
    𝒲+ = {u ∈ Rnm | Σ_{i=1}^m uT[i]u[i] ≤ m},

so that

    W ⊃ W− ≡ {x = Σ_{i=1}^m Ai u[i] | Σ_{i=1}^m uT[i]u[i] ≤ 1},
    W ⊂ W+ ≡ {x = Σ_{i=1}^m Ai u[i] | Σ_{i=1}^m uT[i]u[i] ≤ m} = √m · W−.
2) Prove that W− can be represented as

    W− = {x = Bu | u ∈ Rn, uT u ≤ 1}

with a matrix B ⪰ 0 representable as

    B = Σ_{i=1}^m Ai Xi

with square matrices Xi of norms |Xi| ≤ 1.
Derive from this observation that the “level of conservativeness” of the inner ellipsoidal
approximation of W given by Proposition 3.7.6 is at most √m: if W∗ is this inner ellipsoidal
approximation and W∗∗ is the largest volume ellipsoid contained in W, then

    ( Vol(W∗∗)/Vol(W∗) )^{1/n} ≤ ( Vol(W)/Vol(W∗) )^{1/n} ≤ √m.
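In dimension n = 1 (scalars Ai = ri > 0, an assumed reduction used purely for illustration) the sandwich of part 1) becomes the elementary inequality (Σ ri²)^{1/2} ≤ Σ ri ≤ (m Σ ri²)^{1/2}, which the following sketch confirms:

```python
import math
import random

random.seed(5)
for _ in range(1000):
    m = random.randint(1, 6)
    r = [random.uniform(0.1, 3.0) for _ in range(m)]
    inner = math.sqrt(sum(ri * ri for ri in r))     # radius of the inner approximation W_-
    outer = math.sqrt(m) * inner                    # radius of W_+ = sqrt(m) * W_-
    W = sum(r)                                      # radius of W = W_1 + ... + W_m
    assert inner <= W + 1e-12                       # W_- is inside W
    assert W <= outer + 1e-12                       # W is inside W_+
```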

3.8.6.3 Invariant ellipsoids


Exercise 3.42 Consider a discrete time controlled dynamic system
x(t + 1) = Ax(t) + bu(t), t = 0, 1, 2, ...
x(0) = 0,
where x(t) ∈ Rn is the state vector and u(t) ∈ [−1, 1] is the control at time t. An ellipsoid
centered at the origin
W = {x | xT Zx ≤ 1}    [Z ≻ 0]
is called invariant, if
x ∈ W ⇒ Ax ± b ∈ W.
Prove that
1) If W is an invariant ellipsoid and x(t) ∈ W for some t, then x(t0 ) ∈ W for all t0 ≥ t.
2) Assume that the vectors b, Ab, A2 b, ..., An−1 b are linearly independent. Prove that an
invariant ellipsoid exists if and only if A is stable (the absolute values of all eigenvalues of A
are < 1).
3) Assuming that A is stable, prove that an ellipsoid {x | xT Zx ≤ 1} [Z ≻ 0] is invariant if
and only if there exists λ ≥ 0 such that

    [ 1 − bT Zb − λ     −bT ZA     ]
    [ −AT Zb            λZ − AT ZA ]  ⪰ 0.
How could one use this fact to approximate numerically the smallest volume invariant ellipsoid?
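A 1-D instance makes the criterion of 3) tangible. With the assumed data a = 0.5, b = 0.4, Z = 1 (so W = [−1, 1]) the 2×2 matrix above is positive semidefinite for λ = 0.5, and invariance can be confirmed by sampling:

```python
import random

a, b, Z, lam = 0.5, 0.4, 1.0, 0.5
M = [[1.0 - b * b * Z - lam, -b * Z * a],
     [-a * Z * b, lam * Z - a * a * Z]]
# a symmetric 2x2 matrix is PSD iff its diagonal entries and determinant are >= 0
assert M[0][0] >= 0.0 and M[1][1] >= 0.0
assert M[0][0] * M[1][1] - M[0][1] * M[1][0] >= 0.0

random.seed(6)
for _ in range(1000):
    x = random.uniform(-1.0, 1.0)        # x in W, i.e. Z x^2 <= 1
    for sign in (-1.0, 1.0):
        xn = a * x + sign * b            # one step of the dynamics with u = +/-1
        assert Z * xn * xn <= 1.0 + 1e-12
```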

3.8.6.4 Greedy infinitesimal ellipsoidal approximations


Consider a linear time-varying controlled system

    (d/dt) x(t) = A(t)x(t) + B(t)u(t) + v(t)                            (3.8.12)
with continuous matrix-valued functions A(t), B(t), continuous vector-valued function v(·) and
norm-bounded control:
    ‖u(·)‖2 ≤ 1.                                                        (3.8.13)
Assume that the initial state of the system belongs to a given ellipsoid:

    x(0) ∈ E(0) = {x | (x − x^0)T G^0 (x − x^0) ≤ 1}    [G^0 = [G^0]T ≻ 0].    (3.8.14)

Our goal is to build, in an “on-line” fashion, a system of ellipsoids

    E(t) = {x | (x − xt)T Gt (x − xt) ≤ 1}    [Gt = GtT ≻ 0]            (3.8.15)

in such a way that if u(·) is a control satisfying (3.8.13) and x(0) is an initial state satisfying
(3.8.14), then for every t ≥ 0 it holds
x(t) ∈ E(t).
We are interested in minimizing the volumes of the resulting ellipsoids.
There is no difficulty with the path xt of centers of the ellipsoids: it “obviously” should
satisfy the requirements
satisfy the requirements

    (d/dt) xt = A(t)xt + v(t),  t ≥ 0;   x_0 = x^0.                     (3.8.16)
Let us take this choice for granted and focus on how should we define the positive definite
matrices Gt . Let us look for a continuously differentiable matrix-valued function Gt , taking
values in the set of positive definite symmetric matrices, with the following property:
(L) For every t ≥ 0 and every point xt ∈ E(t) (see (3.8.15)), every trajectory x(τ),
τ ≥ t, of the system

    (d/dτ) x(τ) = A(τ)x(τ) + B(τ)u(τ) + v(τ),   x(t) = xt

with ‖u(·)‖2 ≤ 1 satisfies x(τ) ∈ E(τ) for all τ ≥ t.
Note that (L) is a sufficient (but in general not necessary) condition for the system of ellipsoids
E(t), t ≥ 0, “to cover” all trajectories of (3.8.12) – (3.8.13). Indeed, when formulating (L), we
act as if we were sure that the states x(t) of our system run through the entire ellipsoid E(t),
which is not necessarily the case. The advantage of (L) is that this condition can be converted
into an “infinitesimal” form:
Exercise 3.43 Prove that if Gt ≻ 0 is continuously differentiable and satisfies (L), then

    ∀ (t ≥ 0, x, u : xT Gt x = 1, uT u ≤ 1) :
        2uT BT(t)Gt x + xT [ (d/dt)Gt + AT(t)Gt + Gt A(t) ] x ≤ 0.      (3.8.17)

Vice versa, if Gt is a continuously differentiable function taking values in the set of positive
definite symmetric matrices and satisfying (3.8.17) and the initial condition G_0 = G^0, then the
associated system of ellipsoids {E(t)} satisfies (L).

The result of Exercise 3.43 provides us with a kind of description of the families of ellipsoids
{E(t)} we are interested in. Now let us take care of the volumes of these ellipsoids. The latter
can be done via a “greedy” (locally optimal) policy: given E(t), let us try to minimize, under
restriction (3.8.17), the derivative of the volume of the ellipsoid at time t. Note that this locally
optimal policy does not necessarily yield the smallest volume ellipsoids satisfying (L) (achieving
“instant reward” is not always the best way to happiness); nevertheless, this policy makes sense.
We have 2 ln vol(E(t)) = −ln Det(Gt) + const, whence

    2 (d/dt) ln vol(E(t)) = −Tr( Gt^{-1} (d/dt)Gt );

thus, our greedy policy requires us to choose Ht ≡ (d/dt)Gt as a solution to the optimization problem

    max_{H=HT} { Tr(Gt^{-1} H) :  2uT BT(t)Gt x + xT [ H + AT(t)Gt + Gt A(t) ] x ≤ 0
                                  ∀ (x, u : xT Gt x = 1, uT u ≤ 1) }.

Exercise 3.44 Prove that the outlined greedy policy results in the solution Gt to the differential
equation

    (d/dt) Gt = −AT(t)Gt − Gt A(t) − √( n / Tr(Gt B(t)BT(t)) ) · Gt B(t)BT(t) Gt
                                   − √( Tr(Gt B(t)BT(t)) / n ) · Gt,    t ≥ 0;    G_0 = G^0.

Prove that the solution to this equation is symmetric and positive definite for all t > 0, provided
that G^0 = [G^0]T ≻ 0.

Exercise 3.45 Modify the previous reasoning to demonstrate that the “locally optimal” policy
for building an inner ellipsoidal approximation of the set

    X(t) = {x(t) | ∃x0 ∈ E(0) ≡ {x | (x − x^0)T G^0 (x − x^0) ≤ 1}, ∃u(·), ‖u(·)‖2 ≤ 1 :
            (d/dτ) x(τ) = A(τ)x(τ) + B(τ)u(τ) + v(τ), 0 ≤ τ ≤ t, x(0) = x0}

results in the family of ellipsoids

    E(t) = {x | (x − xt)T Wt (x − xt) ≤ 1},

where xt is given by (3.8.16) and Wt is the solution of the differential equation

    (d/dt) Wt = −AT(t)Wt − Wt A(t) − 2 Wt^{1/2} ( Wt^{1/2} B(t)BT(t) Wt^{1/2} )^{1/2} Wt^{1/2},
                t ≥ 0;    W_0 = G^0.

3.8.7 Around S-Lemma


The S-Lemma is a kind of Theorem on Alternative – more specifically, a “quadratic” analog
of the Homogeneous Farkas Lemma:

Homogeneous Farkas Lemma: A homogeneous linear inequality aT x ≥ 0 is a consequence
of a system of homogeneous linear inequalities biT x ≥ 0, i = 1, ..., m, if and
only if it is a “linear consequence” of the system, i.e., if and only if

    ∃(λ ≥ 0) :  a = Σ_i λi bi.

S-Lemma: A homogeneous quadratic inequality xT Ax ≥ 0 is a consequence of
a strictly feasible system of homogeneous quadratic inequalities xT Bi x ≥ 0,
i = 1, ..., m, with m = 1, if and only if it is a “linear consequence” of the system
and a trivial – identically true – quadratic inequality, i.e., if and only if

    ∃(λ ≥ 0, ∆ ⪰ 0) :  A = Σ_i λi Bi + ∆.

We see that the S-Lemma is indeed similar to the Farkas Lemma, up to a (severe!) restriction
that now the system in question must contain a single quadratic inequality (and up to the mild
“regularity assumption” of strict feasibility).
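A toy instance of the m = 1 case (the diagonal matrices below are an assumed illustration, not part of the text): with B = diag(1, −1) and A = diag(2, −1) the certificate A − λB ⪰ 0 holds for λ = 1, and sampling confirms the implied implication xT Bx ≥ 0 ⇒ xT Ax ≥ 0:

```python
import random

B = (1.0, -1.0)        # diagonal of B: x^T B x = x1^2 - x2^2
A = (2.0, -1.0)        # diagonal of A: x^T A x = 2 x1^2 - x2^2
lam = 1.0
# A - lam*B = diag(1, 0) is positive semidefinite
assert all(A[i] - lam * B[i] >= 0.0 for i in range(2))

random.seed(7)
for _ in range(10000):
    x = (random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0))
    if B[0] * x[0] ** 2 + B[1] * x[1] ** 2 >= 0.0:
        assert A[0] * x[0] ** 2 + A[1] * x[1] ** 2 >= 0.0
```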
The Homogeneous Farkas Lemma gives rise to the Theorem on Alternative for systems
of linear inequalities; and as a matter of fact, this Lemma is the basis of the entire Convex
Analysis and the reason why Convex Programming problems are “easy” (see Lecture 4). The
fact that a similar statement for quadratic inequalities – i.e., S-Lemma – fails to be true for
a “multi-inequality” system is very unpleasant and finally is the reason for the existence of
“simple-looking” computationally intractable (NP-complete) optimization problems.
Given the crucial role of different “Theorems on Alternative” in Optimization, it is vitally
important to understand extent to which the “linear” Theorem on Alternative can be generalized
onto the case of non-linear inequalities. The “standard” generalization of this type is as follows:

The Lagrange Duality Theorem (LDT): Let f0 be a convex, and f1, ..., fm be concave
functions on Rn such that the system of inequalities

    fi(x) ≥ 0, i = 1, ..., m                                            (S)

is strictly feasible (i.e., fi(x̄) > 0 for some x̄ and all i = 1, ..., m). The inequality

    f0(x) ≥ 0

is a consequence of the system (S) if and only if it can be obtained, in a linear fashion,
from (S) and a “trivially” true – valid on the entire Rn – inequality, i.e., if and only
if there exist m nonnegative weights λi such that

    f0(x) ≥ Σ_{i=1}^m λi fi(x)   ∀x.

The Lagrange Duality Theorem plays the central role in “computationally tractable Optimiza-
tion”, i.e., in Convex Programming (for example, the “plain” (not refined!) Conic Duality
Theorem from Lecture 1 is just a reformulation of the LDT). This theorem, however, imposes
severe convexity-concavity restrictions on the inequalities in question. E.g., in the case when all
the inequalities are homogeneous quadratic, LDT is “empty”. Indeed, a homogeneous quadratic

function xT Bx is concave if and only if B ⪯ 0, and is convex if and only if B ⪰ 0. It follows
that in the case of fi = xT Ai x, i = 0, ..., m, the premise in the LDT is “empty” (a system
of homogeneous quadratic inequalities xT Ai x ≥ 0 with Ai ⪯ 0, i = 1, ..., m, simply cannot be
strictly feasible), and the conclusion in the LDT is trivial (if f0(x) = xT A0 x with A0 ⪰ 0, then
f0(x) ≥ Σ_{i=1}^m 0 × fi(x), whatever the fi’s are). Comparing the S-Lemma to the LDT, we see that
the former statement is, in a sense, “complementary” to the second one: the S-Lemma, when
applicable, provides us with information which definitely cannot be extracted from the LDT.
Given this “unique role” of the S-Lemma, it surely deserves the effort to understand what are
possibilities and limitations of extending the Lemma to the case of “multi-inequality system”,
i.e., to address the question as follows:

(SL.?) We are given a homogeneous quadratic inequality

xT Ax ≥ 0 (I)

along with a strictly feasible system of homogeneous quadratic inequalities

xT Bi x ≥ 0, i = 1, ..., m. (S)

Consider the following two statements:


(i) (I) is a consequence of (S), i.e., (I) is satisfied at every solution of (S).
(ii) (I) is a “linear consequence” of (S) and a trivial – identically true – homogeneous
quadratic inequality:
m
X
∃(λ ≥ 0, ∆  0) : A= λi Bi + ∆.
i=1

What is the “gap” between (i) and (ii)?

One obvious fact is expressed in the following

Exercise 3.46 [“Inverse” S-Lemma] Prove the implication (ii)⇒(i).

In what follows, we focus on less trivial results concerning the aforementioned “gap”.

3.8.7.1 A straightforward proof of the standard S-Lemma


The goal of the subsequent exercises is to work out a straightforward proof of the S-Lemma
instead of the “tricky”, although elegant, proof presented in Lecture 3. The “if” part of the
Lemma is evident, and we focus on the “only if” part. Thus, we are given two quadratic
forms xT Ax and xT Bx with symmetric matrices A, B such that x̄T Ax̄ > 0 for some x̄ and the
implication
xT Ax ≥ 0 ⇒ xT Bx ≥ 0 (⇒)
is true. Our goal is to prove that

(SL.A) There exists λ ≥ 0 such that B ⪰ λA.

The main tool we need is the following



Theorem 3.8.2 [General Helley Theorem] Let {Aα }α∈I be a family of closed convex sets in Rn
such that

1. Every n + 1 sets from the family have a point in common;

2. There is a finite sub-family of the family such that the intersection of the sets from the
sub-family is bounded.

Then all sets from the family have a point in common.

Exercise 3.47 Prove the General Helley Theorem.

Exercise 3.48 Show that (SL.A) is a corollary of the following statement:

(SL.B) Let xT Ax, xT Bx be two quadratic forms such that x̄T Ax̄ > 0 for certain x̄
and

    xT Ax ≥ 0, x ≠ 0 ⇒ xT Bx > 0.                                       (⇒′)

Then there exists λ ≥ 0 such that B ⪰ λA.

Exercise 3.49 Given data A, B satisfying the premise of (SL.B), define the sets

Qx = {λ ≥ 0 : xT Bx ≥ λxT Ax}.

1) Prove that every one of the sets Qx is a closed nonempty convex subset of the real line;
2) Prove that at least one of the sets Qx is bounded;
3) Prove that every two sets Qx0 , Qx00 have a point in common.
4) Derive (SL.B) from 1) – 3), thus concluding the proof of the S-Lemma.

3.8.7.2 S-Lemma with a multi-inequality premise


The goal of the subsequent exercises is to present a number of cases when, under appropriate
additional assumptions on the data (I), (S), of the question (SL.?), statements (i) and (ii) are
equivalent, even if the number m of homogeneous quadratic inequalities in (S) is > 1.
Our first exercise demonstrates that certain additional assumptions are definitely necessary.

Exercise 3.50 Demonstrate by example that if xT Ax, xT Bx, xT Cx are three quadratic forms
with symmetric matrices such that

    ∃x̄ : x̄T Ax̄ > 0, x̄T B x̄ > 0;
    xT Ax ≥ 0, xT Bx ≥ 0 ⇒ xT Cx ≥ 0,                                   (3.8.18)

then not necessarily there exist λ, µ ≥ 0 such that C ⪰ λA + µB.


Hint: Clearly there do not exist nonnegative λ, µ such that C ⪰ λA + µB when

    Tr(A) ≥ 0, Tr(B) ≥ 0, Tr(C) < 0.                                    (3.8.19)

Thus, to build the required example it suffices to find A, B, C satisfying both (3.8.18) and
(3.8.19).

Seemingly the simplest way to ensure (3.8.18) is to build 2 × 2 matrices A, B, C such that the
associated quadratic forms fA(x) = xT Ax, fB(x) = xT Bx, fC(x) = xT Cx are as follows
(we write (x, y) for the two coordinates):
• The set XA = {x | fA(x) ≥ 0} is the union of an angle D symmetric w.r.t. the x-axis
and the angle −D: fA(x) = λ²x² − y² with λ > 0;
• The set XB = {x | fB(x) ≥ 0} looks like a clockwise rotation of XA by a small angle:
fB(x) = (µx − y)(νx + y) with 0 < µ < λ and ν > λ;
• The set XC = {x | fC(x) ≥ 0} is the intersection of XA and XB: fC(x) = (µx − y)(λx + y).
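Carrying out the hint with the assumed parameters λ = 1, µ = 1/2, ν = 3 gives a concrete candidate counterexample, which the following sketch verifies: (3.8.18) holds, while Tr(A) ≥ 0, Tr(B) ≥ 0, Tr(C) < 0 rule out C ⪰ λ′A + µ′B for nonnegative λ′, µ′:

```python
import math

lam, mu, nu = 1.0, 0.5, 3.0
fA = lambda x, y: lam * lam * x * x - y * y
fB = lambda x, y: (mu * x - y) * (nu * x + y)
fC = lambda x, y: (mu * x - y) * (lam * x + y)

assert fA(1.0, 0.0) > 0.0 and fB(1.0, 0.0) > 0.0   # strict feasibility at (1, 0)
# traces of the associated symmetric 2x2 matrices
trA, trB, trC = lam * lam - 1.0, mu * nu - 1.0, mu * lam - 1.0
assert trA >= 0.0 and trB >= 0.0 and trC < 0.0     # this is (3.8.19)

# the implication fA >= 0, fB >= 0 => fC >= 0, checked on the unit circle
N = 100000
for k in range(N):
    t = 2.0 * math.pi * k / N
    x, y = math.cos(t), math.sin(t)
    if fA(x, y) >= 0.0 and fB(x, y) >= 0.0:
        assert fC(x, y) >= -1e-12
```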

Surprisingly, there exists a “semi-extension” of the S-Lemma to the case of m = 2 in (SL.?):

(SL.C) Let n ≥ 3, and let A, B, C be three symmetric n × n matrices such that

(i) a certain linear combination of the matrices A, B, C is positive definite,
and
(ii) the system of inequalities

    xT Ax ≥ 0,  xT Bx ≥ 0                                               (3.8.20)

is strictly feasible, i.e., ∃x̄: x̄T Ax̄ > 0, x̄T B x̄ > 0.

Then the inequality

    xT Cx ≥ 0

is a consequence of the system (3.8.20) if and only if there exist nonnegative λ, µ
such that

    C ⪰ λA + µB.

The proof of (SL.C) uses a nice convexity result which is interesting by its own right:

(SL.D) [B. Polyak] Let n ≥ 3, and let fi(x) = xT Ai x, i = 1, 2, 3, be three homogeneous
quadratic forms on Rn (here Ai, i = 1, 2, 3, are symmetric n × n matrices).
Assume that a certain linear combination of the matrices Ai is positive definite. Then
the image of Rn under the mapping F(x) = (f1(x), f2(x), f3(x))T is a closed convex set.

Exercise 3.51 Derive (SL.C) from (SL.D).


Hint for the only nontrivial “only if” part of (SL.C): By (SL.C.i) and (SL.D), the set

    Y = {y ∈ R3 | ∃x : y = (xT Ax, xT Bx, xT Cx)T}

is a closed and convex set in R3. Prove that if xT Cx ≥ 0 for every solution of (3.8.20),
then Y does not intersect the convex set Z = {y = (y1, y2, y3)T | y1, y2 ≥ 0, y3 < 0}.
Applying the Separation Theorem, conclude that there exist nonnegative weights θ, λ, µ,
not all of them zero, such that the matrix θC − λA − µB is positive semidefinite. Use (SL.C.ii)
to demonstrate that θ > 0.

Now let us prove (SL.D). We start with a number of simple topological facts. Recall that a
metric space X is called connected, if there does not exist a pair of nonempty open sets V, U ⊂ X
such that U ∩ V = ∅ and U ∪ V = X. The simplest facts about connectivity are as follows:

(C.1) If a metric space Y is linearly connected: for every two points x, y ∈ Y there exists a
continuous curve linking x and y, i.e., a continuous function γ : [0, 1] → Y such that
γ(0) = x and γ(1) = y, then Y is connected. In particular, a line segment in Rk, as well as
every other convex subset of Rk, is connected (from now on, a set Y ⊂ Rk is treated as a
metric space, the metric coming from the standard metric on Rk );

(C.2) Let F : Y → Z be a continuous mapping from a connected metric space to a metric space
Z. Then the image F (Y ) of the mapping (regarded as a metric space, the metric coming
from Z) is connected.

We see that the connectivity of a set Y ⊂ Rk is a much weaker property than convexity.
There is, however, a simple case where these properties are equivalent: the one-dimensional
case k = 1:

Exercise 3.52 Prove that a set Y ⊂ R is connected if and only if it is convex.

To proceed, recall the notion of the n-dimensional projective space Pn . A point in this space
is a line in Rn+1 passing through the origin. In order to define the distance between two
points of this type, i.e., between two lines `, `0 in Rn+1 passing through the origin, we take the
intersections of the lines with the unit Euclidean sphere in Rn+1 ; let the first intersection be
comprised of the points ±e, and the second – of the points ±e0 . The distance between ` and `0 is,
by definition, min{ke + e0 k2 , ke − e0 k2 } (it is clear that the resulting quantity is well-defined and
that it is a metric). Note that there exists a natural mapping Φ (“the canonical projection”) of
the unit sphere S n ⊂ Rn+1 onto Pn – the mapping which maps a unit vector e ∈ S n onto the
line spanned by e. It is immediately seen that this mapping is continuous and maps points ±e,
e ∈ S n , onto the same point of Pn . In what follows we will make use of the following simple
facts:

Proposition 3.8.1 Let Y ⊂ S n be a set with the following property: for every two points
x, x0 ∈ Y there exists a point y ∈ Y such that both x, x0 can be linked by continuous curves in Y
with the set {y; −y} (i.e., we can link in Y (1) both x and x0 with y, or (2) x with y, and x0 with
−y, or (3) both x and x0 with −y, or (4) x with −y, and x0 with y). Then the set Φ(Y ) ⊂ Pn
is linearly connected (and thus – connected).

Proposition 3.8.2 Let F : Y → Rk be a continuous mapping defined on a central-symmetric


subset (Y = −Y ) of the unit sphere S n ⊂ Rn+1 , and let the mapping be even: F (y) = F (−y)
for every y ∈ Y . Let Z = Φ(Y ) be the image of Y in Pn under the canonical projection, and let
the mapping G : Z → Rk be defined as follows: in order to specify G(z) for z ∈ Z, we choose
somehow a point y ∈ Y such that Φ(y) = z and set G(z) = F (y). Then the mapping G is
well-defined and continuous on Z.

Exercise 3.53 Prove Proposition 3.8.1.

Exercise 3.54 Prove Proposition 3.8.2.



The key argument in the proof of (SL.D) is the following fact:


Proposition 3.8.3 Let f (x) = xT Qx be a homogeneous quadratic form on Rn , n ≥ 3. Assume
that the set Y = {x ∈ S n−1 : f (x) = 0} is nonempty. Then the set Y is central-symmetric, and
its image Z under the canonical projection Φ : S n−1 → Pn−1 is connected.
The goal of the next exercise is to prove Proposition 3.8.3. In what follows f, Y, Z are as in the
Proposition. The relation Y = −Y is evident, so that all we need to prove is the connectedness
of Z. W.l.o.g. we may assume that
n
X
f (x) = λi x2i , λ1 ≥ λ2 ≥ ... ≥ λn ;
i=1

since Y is nonempty, we have λ_1 ≥ 0, λ_n ≤ 0. Replacing, if necessary, f with −f (which does not vary Y), we may further assume that λ_1 ≥ |λ_n|. The case λ_1 = λ_n = 0 is trivial, since in this case f ≡ 0, whence Y = S^{n−1}; thus, Y (and therefore Z, see (C.2)) is connected. Thus, we may assume that λ_1 ≥ |λ_n| and λ_1 > 0 ≥ λ_n. Finally, it is convenient to set θ_1 = λ_1, θ_2 = −λ_n;
reordering the coordinates of x, we come to the situation as follows:
(a) f(x) = θ_1 x_1² − θ_2 x_2² + Σ_{i=3}^n θ_i x_i²,
(b) θ_1 ≥ θ_2 ≥ 0, θ_1 + θ_2 > 0;        (3.8.21)
(c) −θ_2 ≤ θ_i ≤ θ_1, i = 3, ..., n.
Exercise 3.55 1) Let x ∈ Y . Prove that x can be linked in Y by a continuous curve with a
point x0 such that the coordinates of x0 with indices 3, 4, ..., n vanish.
Hint: Setting d = (0, 0, x3 , ..., xn )T , prove that there exists a continuous curve µ(t), 0 ≤ t ≤ 1,
in Y such that
µ(t) = (x1 (t), x2 (t), 0, 0, ..., 0)T + (1 − t)d, 0 ≤ t ≤ 1,
and x1 (0) = x1 , x2 (0) = x2 .
2) Prove that there exists a point z + = (z1 , z2 , z3 , 0, 0, ..., 0)T ∈ Y such that
(i) z1 z2 = 0;
(ii) given a point u = (u1 , u2 , 0, 0, ..., 0)T ∈ Y , you can either (ii.1) link u by continuous
curves in Y both to z + and to z̄ + = (z1 , z2 , −z3 , 0, 0, ..., 0)T ∈ Y , or (ii.2) link u both to z − =
(−z1 , −z2 , z3 , 0, 0, ..., 0)T and z̄ − = (−z1 , −z2 , −z3 , 0, 0, ..., 0)T (note that z + = −z̄ − , z̄ + = −z − ).
Hint: Given a point u ∈ Y , u3 = u4 = ... = un = 0, build a continuous curve µ(t) ∈ Y of the
type
µ(t) = (x1 (t), x2 (t), t, 0, 0, ..., 0)T ∈ Y
such that µ(0) = u and look what can be linked with u by such a curve.
3) Conclude from 1-2) that Y satisfies the premise of Proposition 3.8.1 and thus complete
the proof of Proposition 3.8.3.
Now we are ready to prove (SL.D).
Exercise 3.56 Let Ai , i = 1, 2, 3, satisfy the premise of (SL.D).
1) Demonstrate that in order to prove (SL.D), it suffices to prove the statement in the
particular case A1 = I.
Hint: The validity status of the conclusion in (SL.D) remains the same when we replace our initial quadratic forms f_i(x), i = 1, 2, 3, by the forms g_i(x) = Σ_{j=1}^3 c_{ij} f_j(x), i = 1, 2, 3, provided that the matrix [c_{ij}] is nonsingular. Taking into account the premise in (SL.D), we can choose such a transformation to get, as g_1, a positive definite quadratic form. W.l.o.g. we may therefore assume from the very beginning that A_1 ≻ 0. Now, passing from the quadratic forms given by the matrices A_1, A_2, A_3 to those given by the matrices I, A_1^{−1/2} A_2 A_1^{−1/2}, A_1^{−1/2} A_3 A_1^{−1/2}, we do not vary the set H at all. Thus, we can restrict ourselves to the case A_1 = I.
2) Assuming A_1 = I, prove that the set

H_1 = {(v_1, v_2)^T ∈ R² | ∃x ∈ S^{n−1} : v_1 = f_2(x), v_2 = f_3(x)}

is convex.
Hint: Prove that the intersection of H_1 with every line ℓ ⊂ R² is the image of a connected set in P^{n−1} under a continuous mapping and is therefore connected by (C.2). Then apply the result of Exercise 3.52.
3) Assuming A_1 = I, let H̃_1 = {(1, v_1, v_2)^T ∈ R³ | (v_1, v_2)^T ∈ H_1}, and let H = F(R^n), where F(x) = (f_1(x), f_2(x), f_3(x))^T. Prove that H is the closed convex hull of H̃_1:

H = cl {y | ∃t > 0, u ∈ H̃_1 : y = tu}.
Use this fact and the result of 2) to prove that H is closed and convex, thus completing the proof
of (SL.D).
Note that the restriction n ≥ 3 in (SL.D) and (SL.C) is essential:
Exercise 3.57 Demonstrate by example that (SL.C) does not necessarily remain valid when the assumption “n ≥ 3” in the premise is skipped.
Hint: An example can be obtained via the construction outlined in the Hint to Exercise 3.50.
In order to extend (SL.C) to the case of 2 × 2 matrices, it suffices to strengthen a bit the
premise:
(SL.E) Let A, B, C be three 2 × 2 symmetric matrices such that
(i) a certain linear combination of the matrices A, B is positive definite,
and
(ii) the system of inequalities (3.8.20) is strictly feasible.
Then the inequality

x^T Cx ≥ 0

is a consequence of the system (3.8.20) if and only if there exist nonnegative λ, µ such that

C ⪰ λA + µB.
Exercise 3.58 Let A, B, C be three 2×2 symmetric matrices such that the system of inequalities x^T Ax ≥ 0, x^T Bx ≥ 0 is strictly feasible and the inequality x^T Cx ≥ 0 is a consequence of the system.
1) Assume that there exists a nonsingular matrix Q such that both the matrices QAQT and
QBQ^T are diagonal. Prove that then there exist λ, µ ≥ 0 such that C ⪰ λA + µB.
2) Prove that if a linear combination of two symmetric matrices A, B (not necessarily 2 × 2
ones) is positive definite, then there exists a system of (not necessarily orthogonal) coordinates
where both quadratic forms xT Ax, xT Bx are diagonal, or, equivalently, that there exists a non-
singular matrix Q such that both QAQT and QBQT are diagonal matrices. Combine this fact
with 1) to prove (SL.E).
We have seen that the “2-inequality-premise” version of the S-Lemma is valid (under an addi-
tional mild assumption that a linear combination of the three matrices in question is positive
definite). In contrast, the “3-inequality-premise” version of the Lemma is hopeless:
Exercise 3.59 Consider four matrices
A_1 = Diag{2+ε, −1, −1},   A_2 = Diag{−1, 2+ε, −1},   A_3 = Diag{−1, −1, 2+ε},

        [ 1    1.1  1.1 ]
B   =   [ 1.1  1    1.1 ] .
        [ 1.1  1.1  1   ]
1) Prove that if ε > 0 is small enough, then the matrices satisfy the conditions

(a) ∀(x : x^T A_i x ≥ 0, i = 1, 2, 3) : x^T Bx ≥ 0,
(b) ∃x̄ : x̄^T A_i x̄ > 0, i = 1, 2, 3.
2) Prove that whenever ε ≥ 0, there do not exist nonnegative λ_i, i = 1, 2, 3, such that

B ⪰ Σ_{i=1}^3 λ_i A_i.
Thus, an attempt to extend the S-Lemma to the case of three quadratic inequalities in the premise
fails already when the matrices of these three quadratic forms are diagonal.
Hint: Note that if there exists a collection of nonnegative weights λ_i ≥ 0, i = 1, 2, 3, such that B ⪰ Σ_{i=1}^3 λ_i A_i, then the same property is shared by any collection of weights obtained from the original one by a permutation. Conclude that under the above “if” one should have B ⪰ θ Σ_{i=1}^3 A_i with some θ ≥ 0, which in fact is not the case.
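The obstruction in the Hint is easy to check numerically; below is a sketch (the sample value ε = 0.1 and the test vector x = (1, −1, 0) are our own choices, and any ε ≥ 0 behaves the same way):

```python
eps = 0.1  # arbitrary sample value of epsilon

# A_i is diagonal with 2+eps in position i and -1 in the other diagonal slots
A = [[[(2 + eps) if (j == k and j == i) else (-1 if j == k else 0)
       for k in range(3)] for j in range(3)] for i in range(3)]
B = [[1.0, 1.1, 1.1], [1.1, 1.0, 1.1], [1.1, 1.1, 1.0]]

def quad(M, x):
    """Quadratic form x^T M x."""
    return sum(M[i][j] * x[i] * x[j] for i in range(3) for j in range(3))

# A_1 + A_2 + A_3 = eps * I, so B >= theta*(A_1+A_2+A_3) would require
# x^T B x >= theta*eps*||x||^2 >= 0 for every x and every theta >= 0 ...
S = [[sum(A[i][j][k] for i in range(3)) for k in range(3)] for j in range(3)]
x = [1.0, -1.0, 0.0]
# ... yet x^T B x = -0.2 < 0 while x^T S x = 2*eps >= 0:
print(quad(B, x), quad(S, x))
```

Since the quadratic form of B is already negative at x while that of θ(A_1+A_2+A_3) is nonnegative for every θ ≥ 0, no nonnegative combination of the A_i can be ⪯ B.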
Exercise 3.59 demonstrates that when considering (SL.?) for m = 3, even the assumption that all quadratic forms in (S) are diagonal does not necessarily imply an equivalence between (i) and (ii). Note that under the stronger assumption that all quadratic forms in question are diagonal, (i) is equivalent to (ii) for all m:
Exercise 3.60 Let A, B_1, ..., B_m be diagonal matrices. Prove that the inequality

x^T Ax ≥ 0

is a consequence of the system of inequalities

x^T B_i x ≥ 0, i = 1, ..., m
if and only if it is a linear consequence of the system and an identically true quadratic inequality, i.e., if and only if

∃(λ ≥ 0, Δ ⪰ 0) : A = Σ_i λ_i B_i + Δ.
Hint: Pass to new variables yi = x2i and apply the Homogeneous Farkas Lemma.
3.8.7.3 Relaxed versions of S-Lemma
In Exercises 3.51 – 3.60 we were interested in understanding under which additional assumptions on the data of (SL.?) we can be sure that (i) is equivalent to (ii). In the exercises to follow, we are interested in what the “gap” between (i) and (ii) can be “in general”. An example of such a “gap” statement is as follows:
(SL.F) Consider the situation of (SL.?) and assume that (i) holds. Then (ii) is valid on a subspace of codimension ≤ m − 1, i.e., there exist nonnegative weights λ_i such that the symmetric matrix

Δ = A − Σ_{i=1}^m λ_i B_i

has at most m − 1 negative eigenvalues (counted with their multiplicities).
Note that in the case m = 1 this statement becomes exactly the S-Lemma.
The idea of the proof of (SL.F) is very simple. To say that (I) is a consequence of (S) is basically the same as to say that the optimal value in the optimization problem

min_x { f_0(x) ≡ x^T Ax : f_i(x) ≡ x^T B_i x ≥ ε, i = 1, ..., m }    (P_ε)

is positive whenever ε > 0. Assume that the problem is solvable with an optimal solution x_ε, and let I_ε = {i ≥ 1 | f_i(x_ε) = ε}. Assume, in addition, that the gradients {∇f_i(x_ε) | i ∈ I_ε} are linearly independent. Then the second-order necessary optimality conditions are satisfied at x_ε, i.e., there exist nonnegative Lagrange multipliers λ_i^ε, i ∈ I_ε, such that for the function

L_ε(x) = f_0(x) − Σ_{i∈I_ε} λ_i^ε f_i(x)

one has:

∇L_ε(x_ε) = 0,
∀(d ∈ E_ε = {d : d^T ∇f_i(x_ε) = 0, i ∈ I_ε}) : d^T ∇²L_ε(x_ε) d ≥ 0.

In other words, setting D_ε = A − Σ_{i∈I_ε} λ_i^ε B_i, we have

D_ε x_ε = 0;   d^T D_ε d ≥ 0 ∀d ∈ E_ε.

We conclude that d^T D_ε d ≥ 0 for all d ∈ E_ε^+ = E_ε + Rx_ε, and it is easily seen that the codimension of E_ε^+ is at most m − 1. Consequently, the number of negative eigenvalues of D_ε, counted with their multiplicities, is at most m − 1.
The outlined “proof” is, of course, incomplete: we should justify all the assumptions made
along the way. This indeed can be done (and the “driving force” of the justification is the
Sard Theorem: if f : Rn → Rk , n ≥ k, is a C∞ mapping, then the image under f of the set
of points x where the rank of f 0 (x) is < k, is of the k-dimensional Lebesgue measure 0).
We should confess that we do not know any useful applications of (SL.F), which is not the
case for other “relaxations” of the S-Lemma we are about to consider. All these relaxations
have to do with “inhomogeneous” versions of the Lemma, like the one which follows:
(SL.??) Consider a system of quadratic inequalities of the form
xT Bi x ≤ di , i = 1, ..., m, (S0 )
where all di are ≥ 0, and a homogeneous quadratic form
f (x) = xT Ax;
we are interested in evaluating the maximum f* of the latter form on the solution set of (S0).
The standard semidefinite relaxation of the optimization problem

max_x { f(x) : x^T B_i x ≤ d_i, i = 1, ..., m }    (P)

is the problem

max_X { F(X) ≡ Tr(AX) : Tr(B_i X) ≤ d_i, i = 1, ..., m, X = X^T ⪰ 0 },    (SDP)

and the optimal value F* in this problem is an upper bound on f* (why?). How large can the difference F* − f* be?
The relation between (SL.??) and (SL.?) is as follows. Assume that the only solution to the system of inequalities

x^T B_i x ≤ 0, i = 1, ..., m

is x = 0. Then (P) is equivalent to the optimization problem

min_θ { θ : x^T Ax ≤ θt² whenever x^T B_i x ≤ d_i t², i = 1, ..., m }    (P′)

in the sense that both problems have the same optimal value f* (why?). In other words,
(J) f* is the smallest value of a parameter θ such that the homogeneous quadratic inequality

z^T Â_θ z ≥ 0,  where z = (x, t) and Â_θ = Diag{−A, θ},    (C)

is a consequence of the system of homogeneous quadratic inequalities

z^T B̂_i z ≥ 0, i = 1, ..., m,  B̂_i = Diag{−B_i, d_i}.    (H)
Now let us assume that (P) is strictly feasible, so that (SDP) is also strictly feasible (why?), and that (SDP) is bounded above. By the Conic Duality Theorem, the semidefinite dual of (SDP),

min_λ { Σ_{i=1}^m λ_i d_i : Σ_{i=1}^m λ_i B_i ⪰ A, λ ≥ 0 },    (SDD)

is solvable and has the same optimal value F* as (SDP). On the other hand, it is immediately seen that the optimal value in (SDD) is the smallest θ such that there exist nonnegative weights λ_i satisfying the relation

Â_θ ⪰ Σ_{i=1}^m λ_i B̂_i.
Thus,
(K) F* is the smallest value of θ such that Â_θ is ⪰ a combination, with nonnegative weights, of the B̂_i’s, or, which is the same, F* is the smallest value of the parameter θ for which (C) is a “linear consequence” of (H).
Comparing (J) and (K), we see that our question (SL.??) is closely related to the question of what is the “gap” between (i) and (ii) in (SL.?): in (SL.??), we are considering a parameterized family z^T Â_θ z ≥ 0 of quadratic inequalities and ask ourselves what is the gap between
(a) the smallest value f* of the parameter θ for which the inequality z^T Â_θ z ≥ 0 is a consequence of the system (H) of homogeneous quadratic inequalities,
and
(b) the smallest value F* of θ for which the inequality z^T Â_θ z ≥ 0 is a linear consequence of (H).
The goal of the subsequent exercises is to establish the following result related to (SL.??):

Proposition 3.8.4 [Nesterov; Ye] Consider (SL.??), and assume that

1. The matrices B_1, ..., B_m commute with each other;

2. System (S0) is strictly feasible, and there exists a combination of the matrices B_i with nonnegative coefficients which is positive definite;

3. A ⪰ 0.

Then f* ≥ 0, (SDD) is solvable with the optimal value F*, and

F* ≤ (π/2) f*.    (3.8.22)
Exercise 3.61 Derive Proposition 3.8.4 from the result of Exercise 3.33.
Hint: Observe that since Bi are commuting symmetric matrices, they share a common or-
thogonal eigenbasis, so that w.l.o.g. we can assume that all Bi ’s are diagonal.
3.8.8 Around Chance constraints
Exercise 3.62 Here you will learn how to verify claims like “distinguishing, with reliability 0.99, between distributions A and B takes at least so many observations.”
1. Problem’s setting. Let P and Q be two probability distributions on the same space Ω with
densities p(·), q(·) with respect to some measure µ.
Those with limited experience in measure theory will lose nothing by assuming whenever
possible that Ω is the finite set {1, ..., N } and µ is the “counting measure” (the µ-mass
of every point from Ω is 1). In this case, the density of a probability distribution P
on Ω is just the function p(·) on the N -point set Ω (i.e., N -dimensional vector) with
p(i) = Probω∼P {ω = i} (that is, p(i) is the probability mass which is assigned by P to
a point i ∈ Ω). In this case (in the sequel, we refer to it as the discrete one), the
density of a probability distribution on Ω is just a probabilistic vector from RN — a
nonnegative vector with entries summing up to 1.
Given an observation ω ∈ Ω drawn at random from one (we do not know in advance from
which one) of the distributions P, Q, we want to decide what is the underlying distribution;
this is called distinguishing between two simple hypotheses28 . A (deterministic) decision
rule clearly should be as follows: we specify a subset ΩP ⊂ Ω and accept the hypothesis
HP that the distribution from which ω is drawn is P if and only if ω ∈ ΩP ; otherwise we
accept the alternative hypothesis HQ saying that the “actual” distribution is Q.
A decision rule for distinguishing between the hypotheses can be characterized by two error probabilities: ε_P (to accept H_Q when H_P is true) and ε_Q (to accept H_P when H_Q is true). We clearly have

ε_P = ∫_{ω∉Ω_P} p(ω) dµ(ω),   ε_Q = ∫_{ω∈Ω_P} q(ω) dµ(ω).
Task 1: Prove29 that for every decision rule it holds that

ε_P + ε_Q ≥ ∫ min[p(ω), q(ω)] dµ(ω).

Prove that this lower bound is achieved for the maximum likelihood test, where Ω_P = {ω : p(ω) ≥ q(ω)}.
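In the discrete case, both the lower bound and its attainment by the maximum likelihood test are easy to verify numerically; a sketch (the two four-point densities below are arbitrary choices of ours):

```python
# Two arbitrary densities on the 4-point set Omega = {0, 1, 2, 3}.
p = [0.5, 0.2, 0.2, 0.1]
q = [0.1, 0.4, 0.3, 0.2]

# The universal lower bound on eps_P + eps_Q: sum_w min(p(w), q(w)).
lower_bound = sum(min(pi, qi) for pi, qi in zip(p, q))

# Maximum likelihood test: accept H_P exactly on Omega_P = {w : p(w) >= q(w)}.
omega_p = [w for w in range(4) if p[w] >= q[w]]
eps_p = sum(p[w] for w in range(4) if w not in omega_p)  # reject H_P wrongly
eps_q = sum(q[w] for w in omega_p)                       # accept H_P wrongly
print(eps_p + eps_q, lower_bound)  # the two numbers coincide
```

Any other choice of Ω_P can only increase ε_P + ε_Q, which is exactly the claim of Task 1.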
Follow-up: Essentially, the only multidimensional case where it is easy to compute the total error ε_P + ε_Q of the maximum likelihood test is when P and Q are Gaussian distributions on Ω = R^k with common covariance matrix (which we for simplicity set to the unit matrix) and different expectations, say, a and b, so that the densities (taken w.r.t. the usual Lebesgue measure dµ(ω_1, ..., ω_k) = dω_1...dω_k) are p(ω) = (2π)^{−k/2} exp{−(ω − a)^T(ω − a)/2} and q(ω) = (2π)^{−k/2} exp{−(ω − b)^T(ω − b)/2}. Prove that in this case the likelihood test reduces to accepting P when ‖ω − a‖_2 ≤ ‖ω − b‖_2 and accepting Q otherwise.
Prove that the total error of the above test is 2Φ(‖a − b‖_2/2), where

Φ(s) = (1/√(2π)) ∫_s^∞ exp{−t²/2} dt

is the error function.
2. Kullback-Leibler divergence. The question we address next is how to compute the outlined lower bound on ε_P + ε_Q (its explicit representation as an integral does not help much – how could we compute this integral in a multi-dimensional case?)
One of the useful approaches to the task at hand is based on the Kullback-Leibler divergence, defined as

H(p, q) = ∫ p(ω) ln(p(ω)/q(ω)) dµ(ω)   [= Σ_{ω∈Ω} p(ω) ln(p(ω)/q(ω)) in the discrete case].
In this definition, we set 0 ln(0/a) = 0 whenever a ≥ 0 and a ln(a/0) = +∞ whenever a > 0.
Task 2:
28 “simple” expresses the fact that every one of the two hypotheses specifies exactly the distribution of ω, in contrast to the general case where a hypothesis can specify only a family of distributions from which the “actual one,” underlying the generation of the observed ω, is chosen.
29 You are allowed to provide the proofs of this and the subsequent claims in the discrete case only.
(a) Compute the Kullback-Leibler divergence between two Gaussian distributions on R^k with unit covariance matrices and expectations a, b.
(b) Prove that H(p, q) is a convex function of (p(·), q(·)) ≥ 0.
Hint: It suffices to prove that the function t ln(t/s) is a convex function of t, s ≥ 0. To
this end, it suffices to note that this function is the projective transformation sf (t/s)
of the (clearly convex) function f (t) = t ln(t) with the domain t ≥ 0.
(c) Prove that when p, q are probability densities, H(p, q) ≥ 0.
(d) Let k be a positive integer, and for 1 ≤ s ≤ k let p^s be a probability density on Ω_s taken w.r.t. a measure µ_s. Let us define the “direct product” p¹ × ... × p^k of the densities p^s as the density of the k-element sample ω_1, ..., ω_k with independent ω_s drawn according to p^s, s = 1, ..., k; the product density is taken w.r.t. the measure µ = µ_1 × ... × µ_k (i.e., dµ(ω_1, ..., ω_k) = dµ_1(ω_1)...dµ_k(ω_k)) on the space Ω_1 × ... × Ω_k where the sample lives. For example, in the discrete case the p^s are probability vectors of dimensions N_s, s = 1, ..., k, and p¹ × ... × p^k is the probability vector of dimension N_1 ··· N_k with the entries

p_{i_1,...,i_k} = p¹_{i_1} p²_{i_2} ... p^k_{i_k},  1 ≤ i_s ≤ N_s, 1 ≤ s ≤ k.
Prove that if p^s, q^s are probability densities on Ω_s taken w.r.t. measures µ_s on these spaces, then

H(p¹ × ... × p^k, q¹ × ... × q^k) = Σ_{s=1}^k H(p^s, q^s).
In particular, when all ps coincide with some p (in this case, we denote p1 × ... × pk
by p⊗k ) and q 1 , ..., q k coincide with some q, then
H(p⊗k , q ⊗k ) = kH(p, q).
Follow-up: What is the Kullback-Leibler divergence between two k-dimensional Gaussian distributions N(a, I_k) and N(b, I_k) (Ω = R^k, dµ(ω_1, ..., ω_k) = dω_1...dω_k, the density of N(a, I_k) being (2π)^{−k/2} exp{−(ω − a)^T(ω − a)/2})?
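In the discrete case, the properties in items (c)–(d) can be checked directly in a few lines (a sketch; the two-point densities below are arbitrary choices of ours):

```python
import math

def kl(p, q):
    # H(p, q) = sum_w p(w) ln(p(w)/q(w)), with the convention 0*ln(0/a) = 0
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def product(p, q):
    # direct product of two discrete densities, as in item (d)
    return [pi * qj for pi in p for qj in q]

p, q = [0.3, 0.7], [0.6, 0.4]
print(kl(p, q))                      # item (c): this is nonnegative
# item (d), tensorization: H(p x p, q x q) = 2 H(p, q)
print(abs(kl(product(p, p), product(q, q)) - 2 * kl(p, q)) < 1e-12)
```

The same two functions extend verbatim to k-fold products, reproducing H(p^{⊗k}, q^{⊗k}) = kH(p, q).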
(e) Let p, q be probability densities on (Ω, µ), and let A ⊂ Ω, B = Ω\A. Let pA , pB =
1 − pA be the p-probability masses of A and B, and let qA , qB = 1 − qA be the
q-probability masses of A, B. Prove that
H(p, q) ≥ p_A ln(p_A/q_A) + p_B ln(p_B/q_B).
(f) In the notation from the Problem’s setting, prove that if the hypotheses H_P, H_Q can be distinguished from an observation with probabilities of the errors satisfying ε_P + ε_Q ≤ 2ε < 1/2, then

H(p, q) ≥ (1 − 4ε) ln(1/(2ε)).    (3.8.23)
Follow-up: Let p and q be Gaussian densities on R^k with unit covariance matrices and expectations a, b. Given that ‖b − a‖_2 = 1, how large should a sample of independent realizations (drawn either all from p, or all from q) be in order to distinguish between P and Q with total error 2·10⁻⁶? Give a lower bound on the sample size based on (3.8.23), and find its true minimum size (to find it, use the result of the Follow-up in item 1; note that a sample of vectors drawn independently from a Gaussian distribution is itself a large Gaussian vector).
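Combined with item (d), the bound (3.8.23) immediately yields a sample-size lower bound: to achieve total error 2ε from k i.i.d. observations one needs kH(p, q) ≥ (1 − 4ε) ln(1/(2ε)). A sketch in the discrete case (the biased-coin densities and the value ε = 0.01 are our arbitrary choices):

```python
import math

def kl(p, q):
    # discrete Kullback-Leibler divergence H(p, q)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# fair coin vs. slightly biased coin
p, q = [0.5, 0.5], [0.55, 0.45]
eps = 0.01   # target: total error eps_P + eps_Q <= 2*eps

# H(p^{(x)k}, q^{(x)k}) = k*H(p, q) must be >= (1 - 4*eps)*ln(1/(2*eps))
k_min = (1 - 4 * eps) * math.log(1 / (2 * eps)) / kl(p, q)
print(math.ceil(k_min))  # lower bound on the number of coin tosses
```

For these numbers the bound says that several hundred tosses are unavoidable, even though each single toss is almost uninformative.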
3. Hellinger affinity. The Hellinger affinity of P and Q is defined as

Hel(p, q) = ∫ √(p(ω) q(ω)) dµ(ω)   [= Σ_{ω∈Ω} √(p(ω) q(ω)) in the discrete case].
Task 3:
(a) Prove that the Hellinger affinity is nonnegative, is concave in (p, q) ≥ 0, does not exceed 1 when p, q are probability densities (and is equal to one if and only if p = q), and possesses the following two properties:

Hel(p, q) ≤ √( 2 ∫ min[p(ω), q(ω)] dµ(ω) )

and

Hel(p¹ × ... × p^k, q¹ × ... × q^k) = Hel(p¹, q¹) · ... · Hel(p^k, q^k).
(b) Derive from the previous item that the total error 2ε in distinguishing two hypotheses on the distribution of ω^k = (ω_1, ..., ω_k) ∈ Ω × ... × Ω, the first stating that the density of ω^k is p^{⊗k}, and the second stating that this density is q^{⊗k}, admits the lower bound

4ε ≥ (Hel(p, q))^{2k}.
Follow-up: Compute the Hellinger affinity of two Gaussian densities on Rk , both with
covariance matrices Ik , the means of the densities being a and b. Use this result to derive
a lower bound on the sample size considered in the previous Follow-up.
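The discrete-case properties of Task 3, and the sample-size bound they imply, can be checked in a few lines (a sketch; the two-point densities and ε = 0.01 below are arbitrary choices of ours):

```python
import math

def hellinger(p, q):
    # Hel(p, q) = sum_w sqrt(p(w) q(w)) in the discrete case
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

def product(p, q):
    # direct product of two discrete densities
    return [pi * qj for pi in p for qj in q]

p, q = [0.5, 0.5], [0.8, 0.2]
hel = hellinger(p, q)
print(0 < hel < 1)                      # distinct densities: strictly below 1
# multiplicativity: Hel(p x p, q x q) = Hel(p, q)^2
print(abs(hellinger(product(p, p), product(q, q)) - hel ** 2) < 1e-12)
# Hel(p, q) <= sqrt(2 * sum_w min(p(w), q(w)))
print(hel <= math.sqrt(2 * sum(min(pi, qi) for pi, qi in zip(p, q))))

# bound 4*eps >= Hel(p, q)^(2k) gives k >= ln(1/(4*eps)) / (2*ln(1/Hel))
eps = 0.01
k_min = math.log(1 / (4 * eps)) / (2 * math.log(1 / hel))
print(k_min)
```

Since Hel(p, q) is strictly below 1 for distinct densities, the bound forces the observation count k to grow as ε shrinks.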
4. Experiment.
Task 4: Carry out the experiment as follows:
(a) Use the scheme represented in Proposition 3.6.1 to reproduce the results presented in the Illustration for the hypotheses B and C. Use the following data on the returns:

n = 15,  r_i = 1 + µ_i + σ_i ζ_i,  µ_i = 0.001 + 0.9·(i−1)/(n−1),  σ_i = [0.9 + 0.2·(i−1)/(n−1)] µ_i,  1 ≤ i ≤ n,

where ζ_i are zero mean random perturbations supported on [−1, 1].
(b) Find numerically the probability distribution p of ζ ∈ B = [−1, 1]¹⁵ for which the probability of “crisis” ζ_i = −1, 1 ≤ i ≤ 15, is as large as possible under the restrictions that
— p is supported on the set of 215 vertices of the box B (i.e., the factors ζi take values
±1 only);
— the marginal distributions of ζi induced by p are the uniform distributions on
{−1; 1} (i.e., every ζi takes values ±1 with probabilities 1/2);
— the covariance matrix of ζ is I15 .
(c) After p is found, take its convex combination with the uniform distribution on the
vertices of B to get a distribution P∗ on the vertices of B for which the probability
of the crisis P∗ ({ζ = [−1; ...; −1]}) is exactly 0.01.
(d) Use the Kullback-Leibler and the Hellinger bounds to bound from below the observa-
tion time needed to distinguish, with the total error 2 · 0.01, between two probability
distributions on the vertices of B, namely, P∗ and the uniform one (the latter corre-
sponds to the case where ζi , 1 ≤ i ≤ 15, are independent and take values ±1 with
probabilities 1/2).
Lecture 4

Polynomial Time Interior Point algorithms for LP, CQP and SDP

4.1 Complexity of Convex Programming
When we attempt to solve any problem, we would like to know whether it is possible to find a correct solution in a “reasonable time”. Had we known that the solution would not be reached in the next 30 years, we would think (at least) twice before starting to solve it. This is, of course, an extreme case, but undoubtedly it is highly desirable to distinguish between “computationally tractable” problems – those that can be solved efficiently – and problems which are “computationally intractable”. The corresponding complexity theory was first developed in Computer Science for combinatorial (discrete) problems, and was later extended to continuous computational problems, including those of Continuous Optimization. In this Section, we outline the main concepts of CT – Combinatorial Complexity Theory – along with their adaptations to Continuous Optimization.

4.1.1 Combinatorial Complexity Theory


A generic combinatorial problem is a family P of problem instances of a “given structure”,
each instance (p) ∈ P being identified by a finite-dimensional data vector Data(p), specifying
the particular values of the coefficients of “generic” analytic expressions. The data vectors are
assumed to be Boolean vectors – with entries taking values 0, 1 only, so that the data vectors
are, actually, finite binary words.

The model of computations in CT: an idealized computer capable of storing only integers (i.e., finite binary words), whose operations are bitwise: we are allowed to multiply, add and compare integers. Adding and comparing two ℓ-bit integers takes O(ℓ) “bitwise” elementary operations, and multiplying a pair of ℓ-bit integers costs O(ℓ²) elementary operations (the cost of multiplication can be reduced to O(ℓ ln ℓ), but this does not matter).

In CT, a solution to an instance (p) of a generic problem P is a finite binary word y such
that the pair (Data(p),y) satisfies certain “verifiable condition” A(·, ·). Namely, it is assumed
that there exists a code M for the above “Integer Arithmetic computer” such that executing the
code on every input pair x, y of finite binary words, the computer after finitely many elementary


operations terminates and outputs either “yes”, if A(x, y) is satisfied, or “no”, if A(x, y) is not
satisfied. Thus, P is the problem

Given x, find y such that

A(x, y) = true,    (4.1.1)

or detect that no such y exists.

For example, the problem Stones:

Given n positive integers a1 , ..., an , find a vector x = (x1 , ..., xn )T with coordinates
±1 such that xi ai = 0, or detect that no such vector exists
P
i

is a generic combinatorial problem. Indeed, the data of the instance of the problem, same as
candidate solutions to the instance, can be naturally encoded by finite sequences of integers. In
turn, finite sequences of integers can be easily encoded by finite binary words. And, of course,
for this problem you can easily point out a code for the “Integer Arithmetic computer” which,
given on input two binary words x = Data(p), y encoding the data vector of an instance (p)
of the problem and a candidate solution, respectively, verifies in finitely many “bit” operations
whether y represents or does not represent a solution to (p).
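Concretely, checking a candidate solution to Stones takes one pass over the data, while the naive solution algorithm enumerates all 2^n sign vectors; a sketch (the small instances are our own examples):

```python
from itertools import product

def verify(a, x):
    # the "easy" part: check a candidate solution in O(n) additions
    return len(x) == len(a) and all(xi in (1, -1) for xi in x) \
        and sum(xi * ai for xi, ai in zip(x, a)) == 0

def solve_stones(a):
    # the "hard" part: brute force over all 2^n sign vectors
    for x in product((1, -1), repeat=len(a)):
        if sum(xi * ai for xi, ai in zip(x, a)) == 0:
            return list(x)
    return None  # no solution exists

print(solve_stones([1, 2, 3]))   # a solution exists: 1 + 2 - 3 = 0
print(solve_stones([1, 2, 4]))   # odd total sum, so no solution
```

The gap between the cheap `verify` and the exponential `solve_stones` is exactly the gap the classes P and NP below formalize.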

A solution algorithm for a generic problem P is a code S for the Integer Arithmetic computer
which, given on input the data vector Data(p) of an instance (p) ∈ P, after finitely many
operations either returns a solution to the instance, or a (correct!) claim that no solution exists.
The running time T_S(p) of the solution algorithm on instance (p) is exactly the number of elementary (i.e., bit) operations performed in the course of executing S on Data(p).

A solvability test for a generic problem P is defined similarly to a solution algorithm, but
now all we want of the code is to say (correctly!) whether the input instance is or is not solvable,
just “yes” or “no”, without constructing a solution in the case of the “yes” answer.

The complexity of a solution algorithm/solvability test S is defined as

Compl_S(ℓ) = max{T_S(p) | (p) ∈ P, length(Data(p)) ≤ ℓ},

where length(x) is the bit length (i.e., number of bits) of a finite binary word x. The algorithm/test is called polynomial time if its complexity is bounded from above by a polynomial of ℓ.
Finally, a generic problem P is called polynomially solvable if it admits a polynomial time solution algorithm. If P admits a polynomial time solvability test, we say that P is polynomially verifiable.

Classes P and NP. A generic problem P is said to belong to the class NP, if the corresponding
condition A, see (4.1.1), possesses the following two properties:

I. A is polynomially computable, i.e., the running time T(x, y) (measured, of course, in elementary “bit” operations) of the associated code M is bounded from above by a polynomial of the bit length length(x) + length(y) of the input:

T(x, y) ≤ χ (length(x) + length(y))^χ  ∀(x, y). 1)

Thus, the first property of an NP problem states that given the data Data(p) of a problem instance p and a candidate solution y, it is easy to check whether y is an actual solution of (p) – to verify this fact, it suffices to compute A(Data(p), y), and this computation requires polynomial in length(Data(p)) + length(y) time.
The second property of an NP problem makes its instances even easier:

II. A solution to an instance (p) of a problem cannot be “too long” as compared to the data of the instance: there exists χ such that

length(y) > χ length^χ(x) ⇒ A(x, y) = “no”.

A generic problem P is said to belong to the class P, if it belongs to the class NP and is
polynomially solvable.

NP-completeness is defined as follows:
Definition 4.1.1 (i) Let P, Q be two problems from NP. Problem Q is said to be polynomially reducible to P if there exists a polynomial time algorithm M (i.e., a code for the Integer
Arithmetic computer with the running time bounded by a polynomial of the length of the input)
with the following property. Given on input the data vector Data(q) of an instance (q) ∈ Q, M
converts this data vector to the data vector Data(p[q]) of an instance of P such that (p[q]) is
solvable if and only if (q) is solvable.
(ii) A generic problem P from NP is called NP-complete, if every other problem Q from NP
is polynomially reducible to P.
The importance of the notion of an NP-complete problem comes from the following fact:
If a particular NP-complete problem is polynomially verifiable (i.e., admits a poly-
nomial time solvability test), then every problem from NP is polynomially solvable:
P = NP.

The question whether P=NP – whether NP-complete problems are or are not polynomially solvable – is qualified as “the most important open problem in Theoretical Computer Science” and has remained open for about 30 years. One of the most basic results of Theoretical Computer Science is that NP-complete problems do exist (the Stones problem is an example). Many of these problems are of huge practical importance, and have therefore been the subject, over decades, of intensive study by thousands of excellent researchers. However, no polynomial time algorithm for any of these problems has been found. Given the huge total effort invested in this research, we should conclude that it is “highly improbable” that NP-complete problems are polynomially solvable. Thus, at the “practical level” the fact that a certain problem is NP-complete is sufficient to qualify the problem as “computationally intractable”, at least at our present level of knowledge.
1) Here and in what follows, we denote by χ positive “characteristic constants” associated with the predicates/problems in question. The particular values of these constants are of no importance; the only thing that matters is their existence. Note that in different places of even the same equation χ may have different values.

4.1.2 Complexity in Continuous Optimization


It is convenient to represent continuous optimization problems as Mathematical Programming problems, i.e., programs of the following form:

min_x { p_0(x) : x ∈ X(p) ⊂ R^{n(p)} }    (p)

where

• n(p) is the design dimension of program (p);

• X(p) ⊂ R^{n(p)} is the feasible domain of the program;

• p_0(x) : R^{n(p)} → R is the objective of (p).

Families of optimization programs. We want to speak about methods for solving opti-
mization programs (p) “of a given structure” (for example, Linear Programming ones). All
programs (p) “of a given structure”, like in the combinatorial case, form certain family P, and
we assume that every particular program in this family – every instance (p) of P – is specified by
its particular data Data(p). However, now the data is a finite-dimensional real vector; one may
think of the entries of this data vector as particular values of coefficients of “generic”
(specific for P) analytic expressions for p0 (x) and X(p). The maximum of the design dimension
n(p) of an instance p and the dimension of the vector Data(p) will be called the size of the
instance (p):
Size(p) = max[n(p), dim Data(p)].

The model of computations. This is what is known as the “Real Arithmetic Model of Computations”, as opposed to the “Integer Arithmetic Model” in CT. We assume that the computations are carried out by an idealized version of the usual computer which is capable of storing countably many reals and can perform with them the standard exact real arithmetic operations – the four basic arithmetic operations, evaluating elementary functions, like cos and exp, and making comparisons.

Accuracy of approximate solutions. We assume that a generic optimization problem P is equipped with an “infeasibility measure” Infeas_P(x, p) – a real-valued function of p ∈ P and x ∈ R^{n(p)} which quantifies the infeasibility of vector x as a candidate solution to (p). In our general considerations, all we require from this measure is that

• InfeasP (x, p) ≥ 0, and InfeasP (x, p) = 0 when x is feasible for (p) (i.e., when x ∈ X(p)).

Given an infeasibility measure, we can proceed to define the notion of an ε-solution to an instance (p) ∈ P, namely, as follows. Let

Opt(p) ∈ {−∞} ∪ R ∪ {+∞}

be the optimal value of the instance (i.e., the infimum of the values of the objective on the feasible set, if the instance is feasible, and +∞ otherwise). A point x ∈ R^{n(p)} is called an ε-solution to (p), if

Infeas_P(x, p) ≤ ε and p_0(x) − Opt(p) ≤ ε,
i.e., if x is both “ε-feasible” and “ε-optimal” for the instance.


It is convenient to define the number of accuracy digits in an ε-solution to (p) as the quantity

Digits(p, ε) = ln( (Size(p) + ‖Data(p)‖1 + ε²) / ε ).
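As a quick illustration, here is a throwaway Python helper evaluating this quantity; the instance parameters (Size(p) = 100, ‖Data(p)‖1 = 50) are made up for the example:

```python
import math

def accuracy_digits(size_p, data_norm1, eps):
    # Digits(p, eps) = ln((Size(p) + ||Data(p)||_1 + eps^2) / eps)
    return math.log((size_p + data_norm1 + eps**2) / eps)

# hypothetical instance with Size(p) = 100 and ||Data(p)||_1 = 50:
d1 = accuracy_digits(100, 50.0, 1e-6)
d2 = accuracy_digits(100, 50.0, 5e-7)
# for small eps the quantity behaves like ln(1/eps) plus a mild
# size-dependent constant, so halving eps gains about ln 2 digits
```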

Solution methods. A solution method M for a given family P of optimization programs is a code for the idealized Real Arithmetic computer. When solving an instance (p) ∈ P, the computer first inputs the data vector Data(p) of the instance and a real ε > 0 – the accuracy to which the instance should be solved, and then executes the code M on this input. We assume that the execution, on every input (Data(p), ε > 0) with (p) ∈ P, takes finitely many elementary operations of the computer, let this number be denoted by ComplM(p, ε), and results in one of the following three possible outputs:
– an n(p)-dimensional vector ResM(p, ε) which is an ε-solution to (p),
– a correct message "(p) is infeasible",
– a correct message "(p) is unbounded below".
We measure the efficiency of a method by its running time ComplM(p, ε) – the number of elementary operations performed by the method when solving instance (p) within accuracy ε. By definition, the fact that M is "efficient" (polynomial time) on P means that there exists a polynomial π(s, τ) such that

ComplM(p, ε) ≤ π(Size(p), Digits(p, ε))   ∀(p) ∈ P ∀ε > 0.   (4.1.2)

Informally speaking, polynomiality of M means that when we increase the size of an instance and the required number of accuracy digits by absolute constant factors, the running time increases by no more than another absolute constant factor.
We call a family P of optimization problems polynomially solvable (or, which is the same,
computationally tractable), if it admits a polynomial time solution method.

4.1.3 Computational tractability of convex optimization problems


A generic optimization problem P is called convex, if, for every instance (p) ∈ P, both the
objective p0 (x) of the instance and the infeasibility measure InfeasP (x, p) are convex functions
of x ∈ Rn(p) . One of the major complexity results in Continuous Optimization is that a generic
convex optimization problem, under mild computability and regularity assumptions, is polyno-
mially solvable (and thus “computationally tractable”). To formulate the precise result, we start
with specifying the aforementioned “mild assumptions”.

Polynomial computability. Let P be a generic convex program, and let InfeasP (x, p) be
the corresponding measure of infeasibility of candidate solutions. We say that our family is
polynomially computable, if there exist two codes Cobj , Ccons for the Real Arithmetic computer
such that
1. For every instance (p) ∈ P, the computer, when given on input the data vector of the
instance (p) and a point x ∈ Rn(p) and executing the code Cobj , outputs the value p0 (x) and a
subgradient e(x) ∈ ∂p0 (x) of the objective p0 of the instance at the point x, and the running
320 LECTURE 4. POLYNOMIAL TIME INTERIOR POINT METHODS

time (i.e., total number of operations) of this computation Tobj(x, p) is bounded from above by a polynomial of the size of the instance:

∀ (p) ∈ P, x ∈ Rn(p):  Tobj(x, p) ≤ χ Size^χ(p)   [Size(p) = max[n(p), dim Data(p)]].   (4.1.3)

(recall that in our notation, χ is a common name of characteristic constants associated with P).
2. For every instance (p) ∈ P, the computer, when given on input the data vector of the instance (p), a point x ∈ Rn(p) and an ε > 0 and executing the code Ccons, reports on output whether InfeasP(x, p) ≤ ε, and if it is not the case, outputs a linear form a which separates the point x from all those points y where InfeasP(y, p) ≤ ε:

∀ (y : InfeasP(y, p) ≤ ε) :  aT x > aT y,   (4.1.4)

the running time Tcons(x, ε, p) of the computation being bounded by a polynomial of the size of the instance and of the "number of accuracy digits":

∀ (p) ∈ P, x ∈ Rn(p), ε > 0 :  Tcons(x, ε, p) ≤ χ (Size(p) + Digits(p, ε))^χ.   (4.1.5)

Note that the vector a in (4.1.4) is not supposed to be nonzero; when it is 0, (4.1.4) simply says that there are no points y with InfeasP(y, p) ≤ ε.
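To make requirement 2 concrete, here is a minimal Python sketch of what a code Ccons could look like for (hypothetical) instances whose constraints are Ax ≥ b, with the infeasibility measure taken to be the largest constraint violation; the separator construction mirrors (4.1.4):

```python
import numpy as np

def cons_oracle(A, b, x, eps):
    """Check Infeas(x) <= eps for constraints Ax >= b, where
    Infeas(x) = max(0, max_i (b - Ax)_i); if not, return a linear
    form a separating x from {y : Infeas(y) <= eps} as in (4.1.4)."""
    violation = b - A @ x                  # positive entries = violated rows
    i = int(np.argmax(violation))
    if violation[i] <= eps:
        return True, None                  # Infeas(x) <= eps
    # Row i is violated by more than eps. For every y with Infeas(y) <= eps
    # we have A_i y >= b_i - eps > A_i x, so a = -A_i gives a^T x > a^T y.
    return False, -A[i]

A = np.array([[1.0, 0.0], [0.0, 1.0]])     # constraints x1 >= 0, x2 >= 0
b = np.zeros(2)
ok, a = cons_oracle(A, b, np.array([-1.0, 2.0]), eps=0.1)   # x1 = -1 violates
```

Note that the running time here is linear in the number of constraints, well within the polynomial bound (4.1.5).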

Polynomial growth. We say that a generic convex program P is with polynomial growth, if the objectives and the infeasibility measures, as functions of x, grow polynomially with ‖x‖1, the degree of the polynomial being a power of Size(p):

∀ (p) ∈ P, x ∈ Rn(p):
|p0(x)| + InfeasP(x, p) ≤ ( χ [Size(p) + ‖x‖1 + ‖Data(p)‖1] )^(χ Size^χ(p)).   (4.1.6)

Polynomial boundedness of feasible sets. We say that a generic convex program P has polynomially bounded feasible sets, if the feasible set X(p) of every instance (p) ∈ P is bounded, and is contained in the Euclidean ball, centered at the origin, of "not too large" radius:

∀ (p) ∈ P:
X(p) ⊂ { x ∈ Rn(p) : ‖x‖2 ≤ ( χ [Size(p) + ‖Data(p)‖1] )^(χ Size^χ(p)) }.   (4.1.7)

Example. Consider generic optimization problems LPb, CQPb, SDPb with instances in the conic form

min_{x∈Rn(p)} { p0(x) ≡ cT(p) x : x ∈ X(p) ≡ {x : A(p) x − b(p) ∈ K(p), ‖x‖2 ≤ R} };   (4.1.8)

where K is a cone belonging to a characteristic for the generic program family K of cones,
specifically,

• the family of nonnegative orthants for LP b ,

• the family of direct products of Lorentz cones for CQP b ,

• the family of semidefinite cones for SDP b .



The data of an instance (p) of the type (4.1.8) is the collection

Data(p) = (n(p), c(p), A(p), b(p), R, ⟨size(s) of K(p)⟩),

with naturally defined size(s) of a cone K from the family K associated with the generic program
under consideration: the sizes of Rn+ and of Sn+ equal n, and the size of a direct product of Lorentz
cones is the sequence of the dimensions of the factors.
The generic conic programs in question are equipped with the infeasibility measure

Infeas(x, p) = min { t ≥ 0 : t e[K(p)] + A(p) x − b(p) ∈ K(p) },   (4.1.9)

where e[K] is a naturally defined “central point” of K ∈ K, specifically,

• the n-dimensional vector of ones when K = Rn+ ,

• the vector em = (0, ..., 0, 1)T ∈ Rm when K(p) is the Lorentz cone Lm , and the direct sum
of these vectors, when K is a direct product of Lorentz cones,

• the unit matrix of appropriate size when K is a semidefinite cone.

In the sequel, we refer to the three generic problems we have just defined as Linear, Conic Quadratic and Semidefinite Programming problems with ball constraints, respectively. It is
immediately seen that the generic programs LP b , CQP b and SDP b are convex and possess the
properties of polynomial computability, polynomial growth and polynomially bounded feasible
sets (the latter property is ensured by making the ball constraint kxk2 ≤ R a part of program’s
formulation).
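As a sanity check of (4.1.9), the infeasibility measure admits a closed form for the simplest cones above; a small Python sketch, with v standing for A(p)x − b(p):

```python
import numpy as np

def infeas_orthant(v):
    # e[R^n_+] = (1,...,1): min{t >= 0 : t*1 + v >= 0} = max(0, max_i(-v_i))
    return max(0.0, float(np.max(-v)))

def infeas_lorentz(v):
    # e[L^m] = (0,...,0,1): t*e_m + v in L^m  iff  t + v_m >= ||(v_1..v_{m-1})||_2,
    # so the smallest admissible t >= 0 is max(0, ||v_bar||_2 - v_m)
    return max(0.0, float(np.linalg.norm(v[:-1]) - v[-1]))

v1 = np.array([1.0, -0.3])       # one negative coordinate -> infeasibility 0.3
v2 = np.array([3.0, 4.0, 2.0])   # ||(3,4)||_2 - 2 = 3.0
```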

Computational Tractability of Convex Programming. The role of the properties we have introduced becomes clear from the following result:

Theorem 4.1.1 Let P be a family of convex optimization programs equipped with infeasibility
measure InfeasP (·, ·). Assume that the family is polynomially computable, with polynomial growth
and with polynomially bounded feasible sets. Then P is polynomially solvable.
In particular, the generic Linear, Conic Quadratic and Semidefinite programs with ball con-
straints LP b , CQP b , SDP b are polynomially solvable.

4.1.3.1 What is inside Theorem 4.1.1: Black-box represented convex programs and the Ellipsoid method
Theorem 4.1.1 is a more or less straightforward corollary of a result related to the so-called Information-Based Complexity of black-box represented convex programs. This result is interesting in its own right, which is why we reproduce it here:
Consider a Convex Programming program

min_x { f(x) : x ∈ X }   (4.1.10)

where

• X is a convex compact set in Rn with a nonempty interior



• f is a continuous convex function on X.

Assume that our “environment” when solving (4.1.10) is as follows:

1. We have access to a Separation Oracle Sep(X) for X – a routine which, given on input a
point x ∈ Rn , reports on output whether or not x ∈ int X, and in the case of x 6∈ int X,
returns a separator – a nonzero vector e such that

eT x ≥ max_{y∈X} eT y   (4.1.11)

(the existence of such a separator is guaranteed by the Separation Theorem for convex
sets);

2. We have access to a First Order oracle which, given on input a point x ∈ int X, returns
the value f (x) and a subgradient f 0 (x) of f at x (Recall that a subgradient f 0 (x) of f at
x is a vector such that
f (y) ≥ f (x) + (y − x)T f 0 (x) (4.1.12)
for all y; a convex function possesses subgradients at every relative interior point of its domain, see Section C.6.2.);

3. We are given two positive reals R ≥ r such that X is contained in the Euclidean ball,
centered at the origin, of the radius R and contains a Euclidean ball of the radius r (not
necessarily centered at the origin).
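For concreteness, the two oracles can be sketched in a few lines of Python for a toy environment of my own choosing – X a Euclidean ball and f a smooth convex quadratic (so the subgradient is just the gradient):

```python
import numpy as np

def sep_ball(x, R):
    """Separation oracle for X = {y : ||y||_2 <= R}: returns None when
    x is in int X, else a vector e with e^T x >= max_{y in X} e^T y
    as required by (4.1.11)."""
    if np.linalg.norm(x) < R:
        return None
    return x.copy()   # e = x works: e^T x = ||x||^2 >= R*||x|| = max_y e^T y

def first_order(x, c):
    """First order oracle for f(y) = ||y - c||_2^2: value and gradient."""
    return float((x - c) @ (x - c)), 2.0 * (x - c)

e = sep_ball(np.array([3.0, 4.0]), R=1.0)          # a point outside the unit ball
val, g = first_order(np.array([1.0, 0.0]), np.zeros(2))
```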

The result we are interested in is as follows:

Theorem 4.1.2 In the outlined "working environment", for every given ε > 0 it is possible to find an ε-solution to (4.1.10), i.e., a point xε ∈ X with

f(xε) ≤ min_{x∈X} f(x) + ε

in no more than N(ε) subsequent calls to the Separation and the First Order oracles plus no more than O(1) n² N(ε) arithmetic operations to process the answers of the oracles, with

N(ε) = O(1) n² ln( 2 + VarX(f) R / (ε r) ).   (4.1.13)

Here
VarX(f) = max_X f − min_X f.

4.1.3.2 Proof of Theorem 4.1.2: the Ellipsoid method

Assume that we are interested in solving the convex program (4.1.10) and that we have access to a separation oracle Sep(X) for the feasible domain of (4.1.10) and to a first order oracle O(f) for the objective of (4.1.10). How could we solve the problem via these "tools"? An extremely transparent way is given by the Ellipsoid method, which can be viewed as a multi-dimensional extension of the usual bisection.

Ellipsoid method: the idea. Assume that we have already found an n-dimensional ellipsoid

E = {x = c + Bu | uT u ≤ 1}   [B ∈ Mn,n, Det B ≠ 0]

which contains the optimal set X∗ of (4.1.10) (note that X∗ ≠ ∅, since the feasible set X of (4.1.10) is assumed to be compact, and the objective f – to be convex on the entire Rn and therefore continuous, see Theorem C.4.1). How could we construct a smaller ellipsoid containing X∗?
The answer is immediate.
1) Let us call the separation oracle Sep(X), the center c of the current ellipsoid being the
input. There are two possible cases:
1.a) Sep(X) reports that c ∉ X and returns a separator a:

a ≠ 0,  aT c ≥ sup_{y∈X} aT y.   (4.1.14)

In this case we can replace our current "localizer" E of the optimal set X∗ by a smaller one – namely, by the "half-ellipsoid"

Ê = {x ∈ E | aT x ≤ aT c}.

Indeed, by assumption X∗ ⊂ E; when passing from E to Ê, we cut off all points x of E where aT x > aT c, and by (4.1.14) all these points are outside of X and therefore outside of X∗ ⊂ X. Thus, X∗ ⊂ Ê.
1.b) Sep(X) reports that c ∈ X. In this case we call the first order oracle O(f), c being the input; the oracle returns the value f(c) and a subgradient a = f′(c) of f at c. Again, two cases are possible:
1.b.1) a = 0. In this case we are done – c is a minimizer of f on X. Indeed, c ∈ X, and (4.1.12) reads

f(y) ≥ f(c) + 0T(y − c) = f(c)  ∀y ∈ Rn.

Thus, c is a minimizer of f on Rn, and since c ∈ X, c minimizes f on X as well.
1.b.2) a ≠ 0. In this case (4.1.12) reads

aT(x − c) > 0 ⇒ f(x) > f(c),

so that replacing the ellipsoid E with the half-ellipsoid

Ê = {x ∈ E | aT x ≤ aT c}

we ensure the inclusion X∗ ⊂ Ê. Indeed, X∗ ⊂ E by assumption, and when passing from E to Ê, we cut off all points of E where aT x > aT c and, consequently, where f(x) > f(c); since c ∈ X, none of these points can belong to the set X∗ of minimizers of f on X.
2) We have seen that as a result of the operations described in 1.a–b) we either terminate with an exact minimizer of f on X, or obtain a "half-ellipsoid"

Ê = {x ∈ E | aT x ≤ aT c}   [a ≠ 0]

containing X∗. It remains to use the following simple geometric fact:



(*) Let E = {x = c + Bu | uT u ≤ 1} (Det B ≠ 0) be an n-dimensional ellipsoid and Ê = {x ∈ E | aT x ≤ aT c} (a ≠ 0) be a "half" of E. If n > 1, then Ê is contained in the ellipsoid

E+ = {x = c+ + B+ u | uT u ≤ 1},

c+ = c − (1/(n+1)) Bp,
B+ = B [ (n/√(n²−1)) (In − ppT) + (n/(n+1)) ppT ] = (n/√(n²−1)) B + [ n/(n+1) − n/√(n²−1) ] (Bp) pT,
p = BT a / √(aT B BT a)
   (4.1.15)

and if n = 1, then the set Ê is contained in the ellipsoid (which now is just a segment)

E+ = {x = c+ + B+ u | |u| ≤ 1},
c+ = c − (1/2) Ba/|Ba|,
B+ = (1/2) B.

In all cases, the n-dimensional volume Vol(E+) of the ellipsoid E+ is less than the one of E:

Vol(E+) = ( n/√(n²−1) )^(n−1) ( n/(n+1) ) Vol(E) ≤ exp{−1/(2n)} Vol(E)   (4.1.16)

(in the case of n = 1, the factor ( n/√(n²−1) )^(n−1) is set to 1).

(*) says that there exists (and can be explicitly specified) an ellipsoid E+ ⊃ Ê with volume a constant times less than the one of E. Since E+ covers Ê, and the latter set, as we have seen, covers X∗, E+ covers X∗. Now we can iterate the above construction, thus obtaining a sequence of ellipsoids E, E+, (E+)+, ... with volumes going to 0 at a linear rate (depending on the dimension n only) which "collapses" to the set X∗ of optimal solutions of our problem – exactly as in the usual bisection!
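The update of (4.1.15) is short to code, and the volume factor (4.1.16) can be checked numerically; the following Python sketch (a minimal illustration, not tuned for numerical robustness) does both for a random cut direction:

```python
import numpy as np

def half_ellipsoid_update(c, B, a):
    """Minimum volume ellipsoid E+ = {c+ + B+ u : ||u|| <= 1} containing
    the half-ellipsoid {x in E : a^T x <= a^T c}, per (4.1.15), n > 1."""
    n = len(c)
    p = B.T @ a / np.sqrt(a @ B @ B.T @ a)
    c_plus = c - (B @ p) / (n + 1)
    B_plus = (n / np.sqrt(n**2 - 1)) * B + \
             (n / (n + 1) - n / np.sqrt(n**2 - 1)) * np.outer(B @ p, p)
    return c_plus, B_plus

n = 5
rng = np.random.default_rng(0)
c, B = np.zeros(n), np.eye(n)
a = rng.standard_normal(n)
c2, B2 = half_ellipsoid_update(c, B, a)
shrink = abs(np.linalg.det(B2)) / abs(np.linalg.det(B))   # Vol(E+)/Vol(E)
factor = (n / np.sqrt(n**2 - 1))**(n - 1) * n / (n + 1)   # per (4.1.16)
```

One run confirms both claims of (*): the volume ratio matches the factor in (4.1.16) (and is below exp{−1/(2n)}), and boundary points of the half-ellipsoid land inside E+.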

Note that (*) is just an exercise in elementary calculus. Indeed, the ellipsoid E is given as the image of the unit Euclidean ball W = {u | uT u ≤ 1} under the one-to-one affine mapping u ↦ c + Bu; the half-ellipsoid Ê is then the image, under the same mapping, of the half-ball

Ŵ = {u ∈ W | pT u ≤ 0},

p being the unit vector from (4.1.15); indeed, if x = c + Bu, then aT x ≤ aT c if and only if aT Bu ≤ 0, or, which is the same, if and only if pT u ≤ 0. Now, instead of covering Ê by a small-in-volume ellipsoid E+, we may cover the half-ball Ŵ by a small ellipsoid W+ and then take E+ to be the image of W+ under our affine mapping:

E+ = {x = c + Bu | u ∈ W+}.   (4.1.17)

Indeed, if W+ contains Ŵ, then the image of W+ under our affine mapping u ↦ c + Bu contains the image of Ŵ, i.e., contains Ê. And since the ratio of volumes of two bodies remains invariant under affine mappings (passing from a body to its image under an affine mapping u ↦ c + Bu, we just multiply the volume by |Det B|), we have

Vol(E+)/Vol(E) = Vol(W+)/Vol(W).


Figure 4.1: Finding minimum volume ellipsoid containing half-ellipsoid (left: W, Ŵ and W+; right: E, Ê and E+, related by the mapping x = c + Bu)

Thus, the problem of finding a "small" ellipsoid E+ containing the half-ellipsoid Ê can be reduced to the one of finding a "small" ellipsoid W+ containing the half-ball Ŵ, as shown on Fig. 4.1. Now, the problem of finding a small ellipsoid containing Ŵ is very simple: our "geometric data" are invariant with respect to rotations around the p-axis, so that we may look for W+ possessing the same rotational symmetry. Such an ellipsoid W+ is given by just 3 parameters: its center should belong to our symmetry axis, i.e., should be −hp for certain h, one of the half-axes of the ellipsoid (let its length be µ) should be directed along p, and the remaining n − 1 half-axes should be of the same length λ and be orthogonal to p. For our 3 parameters h, µ, λ we have 2 equations expressing the fact that the boundary of W+ should pass through the "South pole" −p of W and through the "equator" {u | uT u = 1, pT u = 0}; indeed, W+ should contain Ŵ and thus both the pole and the equator, and since we are looking for W+ with the smallest possible volume, both the pole and the equator should be on the boundary of W+. Using our 2 equations to express µ and λ via h, we end up with a single "free" parameter h, and the volume of W+ (i.e., const(n)µλ^(n−1)) becomes an explicit function of h; minimizing this function in h, we find the "optimal" ellipsoid W+, check that it indeed contains Ŵ (i.e., that our geometric intuition was correct) and then convert W+ into E+ according to (4.1.17), thus coming to the explicit formulas (4.1.15) – (4.1.16); implementation of the outlined scheme takes from 10 to 30 minutes, depending on how many miscalculations are made...
It should be mentioned that although the indicated scheme is quite straightforward and
elementary, the fact that it works is not evident a priori: it might happen that the smallest
volume ellipsoid containing a half-ball is just the original ball! This would be the death of
our idea – instead of a sequence of ellipsoids collapsing to the solution set X∗ , we would get
a “stationary” sequence E, E, E... Fortunately, it is not happening, and this is a great favour
Nature does to Convex Optimization...

Ellipsoid method: the construction. There is a small problem with implementing our idea of "trapping" the optimal set X∗ of (4.1.10) by a "collapsing" sequence of ellipsoids. The only thing we can ensure is that all our ellipsoids contain X∗ and that their volumes rapidly (at a linear rate) converge to 0. However, the linear sizes of the ellipsoids need not go to 0 – it may happen that the ellipsoids are shrinking in some directions and are not shrinking (or even become larger) in other directions (look what happens if we apply the construction to minimizing a function of 2 variables which in fact depends only on the first coordinate). Thus, for the moment it is unclear how to build a sequence of points converging to X∗. This difficulty, however, can be easily resolved: as we shall see, we can form this sequence from the best feasible solutions generated so far. Another issue which remains open for the moment is when to terminate the method; as we shall see in a while, this issue also can be settled satisfactorily.
The precise description of the Ellipsoid method as applied to (4.1.10) is as follows (in this
description, we assume that n ≥ 2, which of course does not restrict generality):
The Ellipsoid Method.
Initialization. Recall that when formulating (4.1.10) it was assumed that the
feasible set X of the problem is contained in the ball E0 = {x | kxk2 ≤ R} of a given
radius R and contains an (unknown) Euclidean ball of a known radius r > 0. The
ball E0 will be our initial ellipsoid; thus, we set

c0 = 0, B0 = RI, E0 = {x = c0 + B0 u | uT u ≤ 1};

note that E0 ⊃ X.
We also set
ρ0 = R, L0 = 0.
The quantities ρt will be the “radii” of the ellipsoids Et to be built, i.e., the radii
of the Euclidean balls of the same volumes as Et ’s. The quantities Lt will be our
guesses for the variation

VarR(f) = max_{x∈E0} f(x) − min_{x∈E0} f(x)

of the objective on the initial ellipsoid E0. We shall use these guesses in the termination test.
Finally, we input the accuracy ε > 0 to which we want to solve the problem.
Step t, t = 1, 2, .... At the beginning of step t, we have the previous ellipsoid

Et−1 = {x = ct−1 + Bt−1 u | uT u ≤ 1}   [ct−1 ∈ Rn, Bt−1 ∈ Mn,n, Det Bt−1 ≠ 0]

(i.e., have ct−1 , Bt−1 ) along with the quantities Lt−1 ≥ 0 and

ρt−1 = |Det Bt−1|^(1/n).

At step t, we act as follows (cf. the preliminary description of the method):


1) We call the separation oracle Sep(X), ct−1 being the input. It is possible that
the oracle reports that ct−1 6∈ X and provides us with a separator

a ≠ 0 :  aT ct−1 ≥ sup_{y∈X} aT y.

In this case we call step t non-productive, set

at = a, Lt = Lt−1

and go to rule 3) below. Otherwise – i.e., when ct−1 ∈ X – we call step t productive
and go to rule 2).
2) We call the first order oracle O(f ), ct−1 being the input, and get the value
f (ct−1 ) and a subgradient a ≡ f 0 (ct−1 ) of f at the point ct−1 . It is possible that
a = 0; in this case we terminate and claim that ct−1 is an optimal solution to (4.1.10).
In the case of a 6= 0 we set
at = a,

compute the quantity

ℓt = max_{y∈E0} [aTt y − aTt ct−1] = R‖at‖2 − aTt ct−1,

update L by setting
Lt = max{Lt−1 , `t }

and go to rule 3).


3) We set

Êt = {x ∈ Et−1 | aTt x ≤ aTt ct−1}

(cf. the definition of Ê in our preliminary description of the method) and define the new ellipsoid

Et = {x = ct + Bt u | uT u ≤ 1}

by setting (see (4.1.15))

pt = BTt−1 at / √(aTt Bt−1 BTt−1 at),
ct = ct−1 − (1/(n+1)) Bt−1 pt,   (4.1.18)
Bt = (n/√(n²−1)) Bt−1 + [ n/(n+1) − n/√(n²−1) ] (Bt−1 pt) pTt.

We also set

ρt = |Det Bt|^(1/n) = ( n/√(n²−1) )^((n−1)/n) ( n/(n+1) )^(1/n) ρt−1

(see (4.1.16)) and go to rule 4).


4) [Termination test]. We check whether the inequality

ρt / r < ε / (Lt + ε)   (4.1.19)

is satisfied. If it is the case, we terminate and output, as the result of the solution
process, the best (i.e., with the smallest value of f ) of the “search points” cτ −1
associated with productive steps τ ≤ t (we shall see that these productive steps
indeed exist, so that the result of the solution process is well-defined). If (4.1.19) is
not satisfied, we go to step t + 1.
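Rules 1)–4) translate into code almost line by line. The following Python sketch is a compact (and unoptimized) implementation of the method for n ≥ 2; the test problem – the box X = [−1, 1]² with a smooth convex objective – is made up for the demonstration:

```python
import numpy as np

def ellipsoid_method(sep, first_order, n, R, r, eps, max_steps=100000):
    c, B = np.zeros(n), R * np.eye(n)        # E0 = {x : ||x||_2 <= R}
    rho, L = R, 0.0
    best_x, best_f = None, np.inf
    for _ in range(max_steps):
        a = sep(c)                           # rule 1): separation oracle
        if a is None:                        # productive step: c in X
            f_c, a = first_order(c)          # rule 2): first order oracle
            if f_c < best_f:
                best_x, best_f = c.copy(), f_c
            if np.linalg.norm(a) == 0:
                return c, f_c                # c is a global minimizer
            L = max(L, R * np.linalg.norm(a) - a @ c)    # ell_t, then L_t
        # rule 3): pass to the smaller ellipsoid (4.1.18) containing E-hat
        p = B.T @ a / np.sqrt(a @ B @ B.T @ a)
        c = c - (B @ p) / (n + 1)
        B = (n / np.sqrt(n**2 - 1)) * B + \
            (n / (n + 1) - n / np.sqrt(n**2 - 1)) * np.outer(B @ p, p)
        rho *= (n / np.sqrt(n**2 - 1))**((n - 1) / n) * (n / (n + 1))**(1 / n)
        if rho / r < eps / (L + eps):        # rule 4): termination test
            return best_x, best_f
    return best_x, best_f

def sep_box(x):                              # separation oracle for [-1,1]^2
    i = int(np.argmax(np.abs(x)))
    return np.sign(x[i]) * np.eye(2)[i] if abs(x[i]) >= 1.0 else None

target = np.array([0.5, -0.3])               # minimizer, interior of the box
fo = lambda x: (float((x - target) @ (x - target)), 2.0 * (x - target))
x_hat, f_hat = ellipsoid_method(sep_box, fo, n=2, R=np.sqrt(2), r=1.0, eps=1e-6)
```

On this toy instance the termination test fires after a hundred or so steps, and the returned point is the best feasible search point encountered, exactly as in the description above.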

Just to get some feeling for how the method works, here is a 2D illustration. The problem is

min_{−1≤x1,x2≤1} f(x),
f(x) = (1/2)(1.443508244 x1 + 0.623233851 x2 − 7.957418455)²
       + 5(−0.350974738 x1 + 0.799048618 x2 + 2.877831823)⁴,

the optimal solution is x∗1 = 1, x∗2 = −1, and the exact optimal value is 70.030152768...
The values of f at the best (i.e., with the smallest value of the objective) feasible solutions
found in course of first t steps of the method, t = 1, 2, ..., 256, are shown in the following table:

Ellipses Et−1 and search points ct−1 , t = 1, 2, 3, 4, 16


Arrows: gradients of the objective f (x)
Unmarked segments: tangents to the level lines of f (x)

Figure 4.2: Trajectory of Ellipsoid method

t best value t best value


1 374.61091739 16 76.838253451
2 216.53084103 ... ...
3 146.74723394 32 70.901344815
4 112.42945457 ... ...
5 93.84206347 64 70.031633483
6 82.90928589 ... ...
7 82.90928589 128 70.030154192
8 82.90928589 ... ...
... ... 256 70.030152768
The initial phase of the process looks as shown on Fig. 4.2.

Ellipsoid method: complexity analysis. We are about to establish our key result (which,
in particular, immediately implies Theorem 4.1.2):
Theorem 4.1.3 Let the Ellipsoid method be applied to convex program (4.1.10) of dimension
n ≥ 2 such that the feasible set X of the problem contains a Euclidean ball of a given radius
r > 0 and is contained in the ball E0 = {kxk2 ≤ R} of a given radius R. For every input
accuracy ε > 0, the Ellipsoid method terminates after no more than

N(ε) = Ceil( 2n² [ ln(R/r) + ln( (ε + VarR(f))/ε ) ] + 1 )   (4.1.20)

steps, where
VarR(f) = max_{E0} f − min_{E0} f,
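For a feeling of the numbers, the bound can be evaluated directly (a throwaway Python helper; the sample parameters are arbitrary):

```python
import math

def ellipsoid_step_bound(n, R, r, var_f, eps):
    # N(eps) = Ceil( 2 n^2 [ ln(R/r) + ln((eps + Var_R(f))/eps) ] + 1 )
    return math.ceil(2 * n**2 * (math.log(R / r) + math.log((eps + var_f) / eps)) + 1)

small = ellipsoid_step_bound(n=2, R=10.0, r=1.0, var_f=1.0, eps=1e-6)      # 130
large = ellipsoid_step_bound(n=1000, R=10.0, r=1.0, var_f=1.0, eps=1e-6)   # ~3.2e7
```

The n² factor dominates: passing from 2 to 1000 variables multiplies the bound by 250,000 while the logarithmic term is unchanged.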

and Ceil(a) is the smallest integer ≥ a. Moreover, the result x̂ generated by the method is a feasible ε-solution to (4.1.10):

x̂ ∈ X and f(x̂) − min_X f ≤ ε.   (4.1.21)

Proof. We should prove the following pair of statements:
(i) the method terminates in course of the first N(ε) steps;
(ii) the result x̂ is a feasible ε-solution to the problem.
1°. Comparing the preliminary and the final description of the Ellipsoid method and taking into account the initialization rule, we see that if the method does not terminate before step t or terminates at this step according to rule 4), then

(a) E0 ⊃ X;
(b) Eτ ⊃ Êτ = {x ∈ Eτ−1 | aTτ x ≤ aTτ cτ−1}, τ = 1, ..., t;   (4.1.22)
(c) Vol(Eτ) = (ρτ/R)^n Vol(E0) = ( n/√(n²−1) )^(n−1) ( n/(n+1) ) Vol(Eτ−1)
    ≤ exp{−1/(2n)} Vol(Eτ−1), τ = 1, ..., t.

Note that from (c) it follows that

ρτ ≤ exp{−τ /(2n2 )}R, τ = 1, ..., t. (4.1.23)

2°. We claim that

If the Ellipsoid method terminates at a certain step t, then the result x̂ is well-defined and is a feasible ε-solution to (4.1.10).

Indeed, there are only two possible reasons for termination. First, it may happen that ct−1 ∈ X and f′(ct−1) = 0 (see rule 2)). From our preliminary considerations we know that in this case ct−1 is an optimal solution to (4.1.10), which is even more than what we have claimed. Second, it may happen that at step t relation (4.1.19) is satisfied. Let us prove that the claim of 2° takes place in this case as well.
2°.a) Let us set

ν = ε/(ε + Lt) ∈ (0, 1].

By (4.1.19), we have ρt/r < ν, so that there exists ν′ such that

ρt/r < ν′ < ν [≤ 1].   (4.1.24)
Let x∗ be an optimal solution to (4.1.10), and X+ be the "ν′-shrinkage" of X to x∗:

X+ = x∗ + ν′(X − x∗) = {x = (1 − ν′)x∗ + ν′z | z ∈ X}.   (4.1.25)

We have

Vol(X+) = (ν′)^n Vol(X) ≥ ( rν′/R )^n Vol(E0)   (4.1.26)
(the last inequality is given by the fact that X contains a Euclidean ball of the radius r), while
Vol(Et) = ( ρt/R )^n Vol(E0)   (4.1.27)

by definition of ρt. Comparing (4.1.26) and (4.1.27) and taking into account that ρt < rν′ by (4.1.24), we conclude that Vol(Et) < Vol(X+) and, consequently, X+ cannot be contained in Et. Thus, there exists a point y which belongs to X+:

y = (1 − ν′)x∗ + ν′z   [z ∈ X],   (4.1.28)

and does not belong to Et.

2°.b) Since y does not belong to Et and at the same time belongs to X ⊂ E0 along with x∗ and z (X is convex!), we see that there exists a τ ≤ t such that y ∈ Eτ−1 and y ∉ Eτ. By (4.1.22.b), every point x from the complement of Eτ in Eτ−1 satisfies the relation aTτ x > aTτ cτ−1. Thus, we have

aTτ y > aTτ cτ−1.   (4.1.29)
2°.c) Observe that the step τ is surely productive. Indeed, otherwise, by construction of the method, aτ would separate X from cτ−1, and (4.1.29) would be impossible (we know that y ∈ X!). Notice that in particular we have just proved that if the method terminates at a step t, then at least one of the steps 1, ..., t is productive, so that the result is well-defined.
Since step τ is productive, aτ is a subgradient of f at cτ−1 (description of the method!), so that
f (u) ≥ f (cτ −1 ) + aTτ (u − cτ −1 )
for all u ∈ X, and in particular for u = x∗ . On the other hand, z ∈ X ⊂ E0 , so that by the
definition of `τ and Lτ we have

aTτ (z − cτ −1 ) ≤ `τ ≤ Lτ .

Thus,
f (x∗ ) ≥ f (cτ −1 ) + aTτ (x∗ − cτ −1 )
Lτ ≥ aTτ (z − cτ −1 )
Multiplying the first inequality by (1 − ν′), the second by ν′, and adding the results, we get

(1 − ν′)f(x∗) + ν′Lτ ≥ (1 − ν′)f(cτ−1) + aTτ([(1 − ν′)x∗ + ν′z] − cτ−1)
                     = (1 − ν′)f(cτ−1) + aTτ(y − cτ−1)   [see (4.1.28)]
                     ≥ (1 − ν′)f(cτ−1)   [see (4.1.29)]

and we come to

f(cτ−1) ≤ f(x∗) + ν′Lτ/(1 − ν′)
        ≤ f(x∗) + ν′Lt/(1 − ν′)   [since Lτ ≤ Lt in view of τ ≤ t]
        ≤ f(x∗) + ε   [by definition of ν and since ν′ < ν]
        = min_X f + ε.

We see that there exists a productive (i.e., with feasible cτ−1) step τ ≤ t such that the corresponding search point cτ−1 is ε-optimal. Since we are in the situation where the result x̂ is the best of the feasible search points generated in course of the first t steps, x̂ is also feasible and ε-optimal, as claimed in 2°.

3°. It remains to verify that the method does terminate in course of the first N = N(ε) steps. Assume, on the contrary, that this is not the case, and let us lead this assumption to a contradiction.
First, observe that for every productive step t we have ct−1 ∈ X and at = f′(ct−1), whence, by the definition of a subgradient and of the variation VarR(f),

u ∈ E0 ⇒ VarR(f) ≥ f(u) − f(ct−1) ≥ aTt(u − ct−1),

whence
ℓt ≡ max_{u∈E0} aTt(u − ct−1) ≤ VarR(f).

Looking at the description of the method, we conclude that

Lt ≤ VarR(f) ∀t.   (4.1.30)
Since we have assumed that the method does not terminate in course of the first N steps, we have

ρN/r ≥ ε/(ε + LN).   (4.1.31)

The right hand side in this inequality is ≥ ε/(ε + VarR(f)) by (4.1.30), while the left hand side is ≤ exp{−N/(2n²)}R/r by (4.1.23). We get

exp{−N/(2n²)}R/r ≥ ε/(ε + VarR(f)) ⇒ N ≤ 2n² [ ln(R/r) + ln( (ε + VarR(f))/ε ) ],

which is the desired contradiction (see the definition of N = N(ε) in (4.1.20)). □

4.1.4 Difficult continuous optimization problems


Real Arithmetic Complexity Theory can borrow from Combinatorial Complexity Theory techniques for detecting "computationally intractable" problems. Consider the following situation: we are given a family P of optimization programs and want to understand whether the family is computationally tractable. An affirmative answer can be obtained from Theorem 4.1.1; but how could we justify that the family is intractable? A natural course of action here is to demonstrate that a certain difficult (NP-complete) combinatorial problem Q can be reduced to P in such a way that the possibility to solve P in polynomial time would imply a similar possibility for Q. Assume that the objectives of the instances of P are polynomially computable, and that we can point out a generic combinatorial problem Q known to be NP-complete which can be reduced to P in the following sense:
There exists a CT-polynomial time algorithm M which, given on input the data vector Data(q) of an instance (q) ∈ Q, converts it into a triple (Data(p[q]), ε(q), µ(q)) comprised of the data vector of an instance (p[q]) ∈ P, a positive rational ε(q) and a rational µ(q) such that (p[q]) is solvable and
— if (q) is unsolvable, then the value of the objective of (p[q]) at every ε(q)-solution to this problem is ≤ µ(q) − ε(q);
— if (q) is solvable, then the value of the objective of (p[q]) at every ε(q)-solution to this problem is ≥ µ(q) + ε(q).

We claim that in the case in question we have all reasons to qualify P as a "computationally intractable" problem. Assume, on the contrary, that P admits a polynomial time solution method S, and let us look at what happens if we apply this algorithm to solve (p[q]) within accuracy ε(q). Since (p[q]) is solvable, the method must produce an ε(q)-solution x̂ to (p[q]). With additional "polynomial time effort" we may compute the value of the objective of (p[q]) at x̂ (recall that the objectives of instances from P are assumed to be polynomially computable). Now we can compare the resulting value of the objective with µ(q); by definition of reducibility, if this value is ≤ µ(q), q is unsolvable, otherwise q is solvable. Thus, we get a correct "Real Arithmetic"
solvability test for Q. By definition of a Real Arithmetic polynomial time algorithm, the running
time of the test is bounded by a polynomial of s(q) = Size(p[q]) and of the quantity
d(q) = Digits((p[q]), ε(q)) = ln( (Size(p[q]) + ‖Data(p[q])‖1 + ε²(q)) / ε(q) ).

Now note that if ℓ = length(Data(q)), then the total number of bits in Data(p[q]) and in ε(q) is bounded by a polynomial of ℓ (since the transformation Data(q) ↦ (Data(p[q]), ε(q), µ(q)) takes CT-polynomial time). It follows that both s(q) and d(q) are bounded by polynomials in ℓ, so that our "Real Arithmetic" solvability test for Q takes a polynomial in length(Data(q)) number of arithmetic operations.
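The solvability test constructed in this argument is mechanical; in the Python sketch below, reduce_to_p, solver_S and objective are hypothetical placeholders for the reduction M, the assumed polynomial time method S, and the (polynomially computable) objective of (p[q]), and the toy instantiation at the end is purely illustrative:

```python
def solvability_test(data_q, reduce_to_p, solver_S, objective):
    data_p, eps_q, mu_q = reduce_to_p(data_q)   # (Data(p[q]), eps(q), mu(q))
    x_hat = solver_S(data_p, eps_q)             # an eps(q)-solution to (p[q])
    value = objective(data_p, x_hat)
    # by the reduction: value <= mu(q) - eps(q) when (q) is unsolvable,
    #                   value >= mu(q) + eps(q) when (q) is solvable
    return value > mu_q                         # True iff (q) is solvable

# toy instantiation: (q) declared "solvable" iff its data contains a 1,
# with the "instance", "solver" and "objective" rigged to match the scheme
toy_reduce = lambda q: (q, 0.25, 0.5)
toy_solver = lambda p, eps: p
toy_value = lambda p, x: float(max(x))
```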
Recall that Q was assumed to be an NP-complete generic problem, so that it would be “highly
improbable” to find a polynomial time solvability test for this problem, while we have managed
to build such a test. We conclude that the polynomial solvability of P is highly improbable as
well.

4.2 Interior Point Polynomial Time Methods for LP, CQP and
SDP
4.2.1 Motivation
Theorem 4.1.1 states that generic convex programs, under mild computability and bounded-
ness assumptions, are polynomially solvable. This result is extremely important theoretically;
however, from the practical viewpoint it is, essentially, no more than “an existence theorem”.
Indeed, the "universal" complexity bounds coming from Theorem 4.1.2, although polynomial, are not that attractive: by Theorem 4.1.1, when solving problem (4.1.10) with n design variables, the "price" of an accuracy digit (what it costs to reduce the current inaccuracy ε by factor 2) is O(n²) calls to the first order and the separation oracles plus O(n⁴) arithmetic operations to process the answers of the oracles. Thus, even for the simplest objectives to be minimized over the simplest feasible sets, the arithmetic price of an accuracy digit is O(n⁴); think how long it will take to solve a problem with, say, 1,000 variables (which is still a "small" size for many applications).
applications). The good news about the methods underlying Theorem 4.1.2 is their universality: all they need is a Separation oracle for the feasible set and the possibility to compute the objective and its subgradient at a given point, which is not that much. The bad news about these methods has the same source as the good news: the methods are “oracle-oriented” and capable of using only local information on the program they are solving, in contrast to the fact that when solving instances of well-structured programs, like LP, we have at our disposal, from the very beginning, a complete global description of the instance. And of course it is ridiculous to use complete global knowledge of the instance just to mimic the local-in-nature first order and separation oracles. What we would like to have is an optimization technique capable of “utilizing efficiently” our global knowledge of the instance and thus allowing us to get a solution
much faster than it is possible for “nearly blind” oracle-oriented algorithms. The major event
in the “recent history” of Convex Optimization, called sometimes “Interior Point revolution”,
was the invention of these “smart” techniques.

4.2.2 Interior Point methods


The Interior Point revolution was started by the seminal work of N. Karmarkar (1984) where
the first interior point method for LP was proposed; in 18 years since then, interior point (IP)
polynomial time methods have become an extremely deep and rich theoretically and highly
promising computationally area of Convex Optimization. A somewhat detailed overview of the history and the current state of this area is beyond the scope of this course; an interested reader is referred to [52, 55, 42] and references therein. All we intend to do is to give an idea of what the IP methods are, skipping nearly all (sometimes highly instructive and nontrivial) technicalities.
The simplest way to get a proper impression of the (most of) IP methods is to start with a
quite traditional interior penalty scheme for solving optimization problems.

4.2.2.1 The Newton method and the Interior penalty scheme


Unconstrained minimization and the Newton method. Seemingly the simplest convex
optimization problem is the one of unconstrained minimization of a smooth strongly convex
objective:
min_x { f(x) : x ∈ Rⁿ } ;   (UC)

“smooth strongly convex” in this context means a three times continuously differentiable convex function f such that f(x) → ∞ as ‖x‖₂ → ∞, and such that the Hessian matrix f′′(x) = [∂²f(x)/∂xᵢ∂xⱼ] of f is positive definite at every point x. Among numerous techniques for solving (UC), the
most remarkable one is the Newton method. In its pure form, the Newton method is extremely
transparent and natural: given a current iterate x, we approximate our objective f by its
second-order Taylor expansion at the iterate – by the quadratic function

f_x(y) = f(x) + (y − x)ᵀ f′(x) + (1/2) (y − x)ᵀ f′′(x) (y − x)
– and choose as the next iterate x+ the minimizer of this quadratic approximation. Thus, the
Newton method merely iterates the updating

x ↦ x₊ = x − [f′′(x)]⁻¹ f′(x).   (Nwt)

In the case of a (strongly convex) quadratic objective, the approximation coincides with
the objective itself, so that the method reaches the exact solution in one step. It is natural to
guess (and indeed is true) that in the case when the objective is smooth and strongly convex
(although not necessary quadratic) and the current iterate x is close enough to the minimizer
x∗ of f , the next iterate x+ , although not being x∗ exactly, will be “much closer” to the exact
minimizer than x. The precise (and easy) result is that the Newton method converges locally
quadratically, i.e., that
‖x₊ − x∗‖₂ ≤ C‖x − x∗‖₂²,
334 LECTURE 4. POLYNOMIAL TIME INTERIOR POINT METHODS

provided that ‖x − x∗‖₂ ≤ r with small enough value of r > 0 (both this value and C depend
on f ). Quadratic convergence means essentially that eventually every new step of the process
increases by a constant factor the number of accuracy digits in the approximate solution.
When started not “close enough” to the minimizer, the “pure” Newton method (Nwt) can
demonstrate weird behaviour (look, e.g., what happens when the method is applied to the univariate function f(x) = √(1 + x²)). The simplest way to overcome this drawback is to pass
from the pure Newton method to its damped version

x ↦ x₊ = x − γ(x)[f′′(x)]⁻¹ f′(x),   (NwtD)

where the stepsize γ(x) > 0 is chosen in a way which, on one hand, ensures global convergence of
the method and, on the other hand, enforces γ(x) → 1 as x → x∗ , thus ensuring fast (essentially
the same as for the pure Newton method) asymptotic convergence of the process2) .
Practitioners thought the (properly modified) Newton method to be the fastest, in terms
of the iteration count, routine for smooth (not necessarily convex) unconstrained minimization,
although sometimes “too heavy” for practical use: the practical drawbacks of the method are
both the necessity to invert the Hessian matrix at each step, which is computationally costly in
the large-scale case, and especially the necessity to compute this matrix (think how difficult it is
to write a code computing 5,050 second order derivatives of a messy function of 100 variables).
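The contrast between (Nwt) and (NwtD) is easy to see numerically. The sketch below is our own toy illustration (the function and the backtracking step rule are choices made for this example, not prescribed by the text): it runs both updates on the univariate function f(x) = √(1 + x²) mentioned above.

```python
import numpy as np

def f(x):   return np.sqrt(1.0 + x * x)
def df(x):  return x / np.sqrt(1.0 + x * x)
def d2f(x): return (1.0 + x * x) ** -1.5

def newton(x0, damped, iters=60):
    x = x0
    for _ in range(iters):
        e = -df(x) / d2f(x)              # Newton direction -[f''(x)]^{-1} f'(x)
        gamma = 1.0
        if damped:
            # crude backtracking choice of gamma(x): halve until f decreases
            while f(x + gamma * e) >= f(x) and gamma > 1e-12:
                gamma *= 0.5
        x = x + gamma * e
        if abs(x) > 1e6:                 # the pure method may escape to infinity
            break
    return x

x_pure = newton(2.0, damped=False)       # diverges: here the pure update is x -> -x**3
x_damp = newton(2.0, damped=True)        # converges to the minimizer x* = 0
```

Started at x₀ = 2, the pure update iterates x ↦ −x³ and blows up, while the damped version settles at the minimizer.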

Classical interior penalty scheme: the construction. Now consider a constrained convex
optimization program. As we remember, one can w.l.o.g. make its objective linear, moving, if
necessary, the actual objective to the list of constraints. Thus, let the problem be
min_x { cᵀx : x ∈ X ⊂ Rⁿ },   (C)

where X is a closed convex set, which we assume to possess a nonempty interior. How could we
solve the problem?
Traditionally it was thought that the problems of smooth convex unconstrained minimization
are “easy”; thus, a quite natural desire was to reduce the constrained problem (C) to a series
of smooth unconstrained optimization programs. To this end, let us choose somehow a barrier
(another name – “an interior penalty function”) F (x) for the feasible set X – a function which
is well-defined (and is smooth and strongly convex) on the interior of X and “blows up” as a
point from int X approaches a boundary point of X :

xᵢ ∈ int X, x ≡ lim_{i→∞} xᵢ ∈ ∂X  ⇒  F(xᵢ) → ∞ as i → ∞,

and let us look at the one-parametric family of functions generated by our objective and the
barrier:
F_t(x) = t cᵀx + F(x) : int X → R.
Here the penalty parameter t is assumed to be nonnegative.
It is easily seen that under mild regularity assumptions (e.g., in the case of bounded X ,
which we assume from now on)
2)
There are many ways to provide the required behaviour of γ(x), e.g., to choose γ(x) by a linesearch in the direction e(x) = −[f′′(x)]⁻¹ f′(x) of the Newton step: γ(x) = argmin_t f(x + t e(x)).

• Every function Ft (·) attains its minimum over the interior of X , the minimizer x∗ (t) being
unique;
• The central path x∗ (t) is a smooth curve, and all its limiting, t → ∞, points belong to the
set of optimal solutions of (C).
This fact is quite clear intuitively. To minimize F_t(·) for large t is the same as to minimize the function f_ρ(x) = cᵀx + ρF(x) for small ρ = 1/t. When ρ is small, the function f_ρ is very close to cᵀx everywhere in X, except in a narrow strip along the boundary of X, the strip becoming thinner and thinner as ρ → 0. Therefore we have every reason to believe that the minimizer of F_t for large t (i.e., the minimizer of f_ρ for small ρ) must be close to the set of minimizers of cᵀx on X.

We see that the central path x∗ (t) is a kind of Ariadne’s thread which leads to the solution set of
(C). On the other hand, to reach, given a value t ≥ 0 of the penalty parameter, the point x∗ (t)
on this path is the same as to minimize a smooth strongly convex function Ft (·) which attains
its minimum at an interior point of X. The latter problem is a “nearly unconstrained” one, up to
the fact that its objective is not everywhere defined. However, we can easily adapt the methods
of unconstrained minimization, including the Newton one, to handle “nearly unconstrained”
problems. We see that constrained convex optimization in a sense can be reduced to the “easy”
unconstrained one. The conceptually simplest way to make use of this observation would be to
choose a “very large” value t̄ of the penalty parameter, like t̄ = 10⁶ or t̄ = 10¹⁰, and to run an
unconstrained minimization routine, say, the Newton method, on the function Ft̄ , thus getting
a good approximate solution to (C) “in one shot”. This policy, however, is impractical: since
we have no idea where x∗ (t̄) is, we normally will start our process of minimizing Ft̄ very far
from the minimizer of this function, and thus for a long time will be unable to exploit fast local
convergence of the method for unconstrained minimization we have chosen. A smarter way to
use our Ariadne’s thread is exactly the one used by Theseus: to follow the thread. Assume, e.g.,
that we know in advance the minimizer of F0 ≡ F , i.e., the point x∗ (0)3) . Thus, we know where
the central path starts. Now let us follow this path: at i-th step, standing at a point xi “close
enough” to some point x∗ (ti ) of the path, we
• first, increase a bit the current value ti of the penalty parameter, thus getting a new “target
point” x∗ (ti+1 ) on the path,
and
• second, approach our new target point x∗ (ti+1 ) by running, say, the Newton method,
started at our current iterate xi , on the function Fti+1 , until a new iterate xi+1 “close enough”
to x∗ (ti+1 ) is generated.
As a result of such a step, we restore the initial situation – we again stand at a point which
is close to a point on the central path, but this latter point has been moved along the central
path towards the optimal set of (C). Iterating this updating and strengthening appropriately
our “close enough” requirements as the process goes on, we, same as the central path, approach
the optimal set. A conceptual advantage of this “path-following” policy as compared to the
“brute force” attempt to reach a target point x∗ (t̄) with large t̄ is that now we have a hope to
exploit all the time the strongest feature of our “working horse” (the Newton method) – its fast
local convergence. Indeed, assuming that xi is close to x∗ (ti ) and that we do not increase the
penalty parameter too rapidly, so that x∗ (ti+1 ) is close to x∗ (ti ) (recall that the central path is
3)
There is no difficulty in ensuring this assumption: given an arbitrary barrier F and an arbitrary starting point x̄ ∈ int X, we can pass from F to a new barrier F̄(x) = F(x) − (x − x̄)ᵀF′(x̄) which attains its minimum exactly at x̄, and then use the new barrier F̄ instead of our original barrier F; for the traditional approach we are following for the time being, F has absolutely no advantages as compared to F̄.

smooth!), we conclude that xi is close to our new target point x∗ (ti+1 ). If all our “close enough”
and “not too rapidly” are properly controlled, we may ensure xi to be in the domain of the
quadratic convergence of the Newton method as applied to F_{t_{i+1}}, and then it will take quite a small number of steps of the method to recover closeness to our new target point.
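The path-following recipe just described can be prototyped in a few lines. The sketch below is our own illustration, not the scheme analyzed later in this lecture: the instance, the penalty growth factor 1.5, and the Newton settings are arbitrary choices. It traces the central path of min{cᵀx : x ≥ 0, x₁ + x₂ ≤ 1} using the log-barrier F(x) = −ln x₁ − ln x₂ − ln(1 − x₁ − x₂).

```python
import numpy as np

c = np.array([1.0, 2.0])                 # minimize c^T x over the triangle
                                         # X = {x >= 0, x1 + x2 <= 1}; Opt = 0 at the origin

def Ft(x, t):                            # F_t(x) = t c^T x + F(x), F the log-barrier of X
    return t * (c @ x) - np.log(x[0]) - np.log(x[1]) - np.log(1.0 - x[0] - x[1])

def grad_hess(x, t):
    s = 1.0 - x[0] - x[1]
    g = t * c - 1.0 / x + 1.0 / s        # gradient of F_t
    H = np.diag(1.0 / x ** 2) + np.ones((2, 2)) / s ** 2   # Hessian of F_t
    return g, H

def center(x, t, steps=25):
    """Damped Newton method aimed at the path point x_*(t)."""
    for _ in range(steps):
        g, H = grad_hess(x, t)
        e = -np.linalg.solve(H, g)       # Newton direction
        gamma = 1.0                      # backtrack: stay strictly feasible and descend
        while gamma > 1e-12 and (
            min(x[0] + gamma * e[0], x[1] + gamma * e[1],
                1.0 - (x + gamma * e).sum()) <= 0.0
            or Ft(x + gamma * e, t) > Ft(x, t)
        ):
            gamma *= 0.5
        x = x + gamma * e
    return x

x = np.array([0.2, 0.2])                 # a strictly feasible start
t = 1.0
for _ in range(40):                      # follow the thread: increase t, then re-center
    t *= 1.5
    x = center(x, t)

gap = c @ x                              # inaccuracy of x; it decays roughly like 1/t
```

All iterates stay strictly inside the triangle, and the inaccuracy cᵀx − Opt shrinks at essentially the rate 1/t as the penalty parameter grows.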

Classical interior penalty scheme: the drawbacks. At a qualitative “common sense”


level, the interior penalty scheme looks quite attractive and extremely flexible: for the majority of optimization problems treated by classical optimization, there are plenty of ways to build a relatively simple barrier meeting all the requirements imposed by the scheme, there is huge room to play with the policies for increasing the penalty parameter and controlling closeness to
the central path, etc. And the theory says that under quite mild and general assumptions on the
choice of the numerous “free parameters” of our construction, it still is guaranteed to converge
to the optimal set of the problem we have to solve. All looks wonderful, until we realize that
the convergence ensured by the theory is completely “unqualified”, it is a purely asymptotical
phenomenon: we are promised to reach eventually a solution of whatever accuracy we wish, but how long it will take to reach a given accuracy is a question that the “classical” optimization theory, with its “convergence” and “asymptotic linear/superlinear/quadratic convergence”, neither posed nor answered. And since our life in this world is finite (moreover, usually more finite than we would like it to be), “asymptotical promises” are perhaps better than nothing, but definitely are not all we would like to know. What is vitally important for us in theory (and to some extent also in practice) is the issue of complexity: given an instance of such and such generic optimization problem and a desired accuracy ε, how large is the computational effort (# of arithmetic operations) needed to get an ε-solution of the instance? And we would like the answer to be a polynomial time complexity bound, and not a quantity depending on “unobservable and uncontrollable” properties of the instance, like the “level of regularity” of the boundary of X at the (unknown!) optimal solution of the instance.
It turns out that the intuitively nice classical theory we have outlined is unable to say a single
word on the complexity issues (this is how it should be: a reasoning in purely qualitative terms
like “smooth”, “strongly convex”, etc., definitely cannot yield a quantitative result...) Moreover,
from the complexity viewpoint just the very philosophy of the classical convex optimization turns
out to be wrong:
• As far as the complexity is concerned, for nearly all “black box represented” classes of
unconstrained convex optimization problems (those where all we know is that the objective is
called f (x), is (strongly) convex and 2 (3,4,5...) times continuously differentiable, and can be
computed, along with its derivatives up to order ... at every given point), there is no such
phenomenon as “local quadratic convergence”, the Newton method (which uses the second
derivatives) has no advantages as compared to the methods which use only the first order
derivatives, etc.;
• The very idea of reducing “black-box-represented” constrained convex problems to unconstrained ones is misguided: from the complexity viewpoint, the unconstrained problems are not easier than the constrained ones...

4.2.3 But...
Luckily, the pessimistic analysis of the classical interior penalty scheme is not the “final truth”.
It turned out that what prevents this scheme from yielding a polynomial time method is not the structure of the scheme, but the huge amount of freedom it allows for its elements (too much

freedom is another word for anarchy...). After some order is added, the scheme becomes a
polynomial time one! Specifically, it was understood that

1. There is a (completely non-traditional) class of “good” (self-concordant4) ) barriers. Every


barrier F of this type is associated with a “self-concordance parameter” θ(F ), which is a
real ≥ 1;

2. Whenever a barrier F underlying the interior penalty scheme is self-concordant, one can
specify the notion of “closeness to the central path” and the policy for updating the penalty
parameter in such a way that a single Newton step

x_i ↦ x_{i+1} = x_i − [∇²F_{t_{i+1}}(x_i)]⁻¹ ∇F_{t_{i+1}}(x_i)   (4.2.1)

suffices to update a “close to x∗ (ti )” iterate xi into a new iterate xi+1 which is close, in
the same sense, to x∗ (ti+1 ). All “close to the central path” points belong to int X , so that
the scheme keeps all the iterates strictly feasible.

3. The penalty updating policy mentioned in the previous item is quite simple:

t_i ↦ t_{i+1} = ( 1 + 0.1/√θ(F) ) t_i ;

in particular, it does not “slow down” as t_i grows and ensures linear growth of the penalty, with the ratio ( 1 + 0.1/√θ(F) ). This is vitally important due to the following fact:

4. The inaccuracy of a point x which is close to some point x∗(t) of the central path, taken as an approximate solution to (C), is inversely proportional to t:

cᵀx − min_{y∈X} cᵀy ≤ 2θ(F)/t.

It follows that

(!) After we have managed once to get close to the central path – have built a point x₀ which is close to a point x∗(t₀), t₀ > 0, on the path – every O(√θ(F)) steps of the scheme improve the quality of approximate solutions generated by the scheme by an absolute constant factor. In particular, it takes no more than

O(1) √θ(F) ln( 2 + θ(F)/(t₀ ε) )

steps to generate a strictly feasible ε-solution to (C).

Note that with our simple penalty updating policy, all that is needed to perform a step of the interior penalty scheme is to compute the gradient and the Hessian of the underlying barrier at a single point and to invert the resulting Hessian.
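The arithmetic behind (!) is simple enough to sanity-check. The following throwaway sketch (ours, not from the text) runs the penalty updating policy of item 3 and counts the steps until the bound of item 4 certifies a desired accuracy ε; quadrupling θ(F) should roughly double the count, in line with the O(√θ(F)) scaling.

```python
import math

def steps_to_accuracy(theta, t0, eps):
    """Count updates t <- (1 + 0.1/sqrt(theta)) * t until 2*theta/t <= eps."""
    t, n = t0, 0
    while 2.0 * theta / t > eps:
        t *= 1.0 + 0.1 / math.sqrt(theta)
        n += 1
    return n

n100 = steps_to_accuracy(theta=100.0, t0=1.0, eps=1e-6)
n400 = steps_to_accuracy(theta=400.0, t0=1.0, eps=1e-6)
ratio = n400 / n100                      # expected to be close to sqrt(400/100) = 2
```

The ratio is close to, though not exactly, 2: the logarithmic factor ln(2θ/(t₀ε)) in the bound also grows (slowly) with θ.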
4)
We do not intend to explain here what a “self-concordant barrier” is; for our purposes it suffices to say that
this is a three times continuously differentiable convex barrier F satisfying a pair of specific differential inequalities
linking the first, the second and the third directional derivatives of F .

Items 3, 4 say that essentially all we need to derive from the just listed general results a polyno-
mial time method for a generic convex optimization problem is to be able to equip every instance
of the problem with a “good” barrier in such a way that both the parameter of self-concordance
of the barrier θ(F ) and the arithmetic cost at which we can compute the gradient and the Hes-
sian of this barrier at a given point are polynomial in the size of the instance5) . And it turns
out that we can meet the latter requirement for all interesting “well-structured” generic convex
programs, in particular, for Linear, Conic Quadratic, and Semidefinite Programming. Moreover,
“the heroes” of our course – LP, CQP and SDP – are especially nice application fields of the
general theory of interior point polynomial time methods; in these particular applications, the
theory can be simplified, on the one hand, and strengthened, on the other.

4.3 Interior point methods for LP, CQP, and SDP: building
blocks
We are about to explain what the interior point methods for LP, CQP, SDP look like.

4.3.1 Canonical cones and canonical barriers


We will be interested in a generic conic problem
min_x { cᵀx : Ax − B ∈ K }   (CP)

associated with a cone K given as a direct product of m “basic” cones, each of them being either
a second-order, or a semidefinite cone:
K = S_+^{k_1} × ... × S_+^{k_p} × L^{k_{p+1}} × ... × L^{k_m} ⊂ E = S^{k_1} × ... × S^{k_p} × R^{k_{p+1}} × ... × R^{k_m}.   (Cone)

Of course, the generic problem in question covers LP (no Lorentz factors, all semidefinite factors
are of dimension 1), CQP (no semidefinite factors) and SDP (no Lorentz factors).
Now, we shall equip the semidefinite and the Lorentz cones with “canonical barriers”:
• The canonical barrier for a semidefinite cone S_+^k is

S_k(X) = − ln Det(X) : int S_+^k → R;

the parameter of this barrier, by definition, is θ(S_k) = k 6).


• the canonical barrier for a Lorentz cone L^k = {x ∈ R^k | x_k ≥ √(x_1² + ... + x_{k−1}²)} is

L_k(x) = − ln(x_k² − x_1² − ... − x_{k−1}²) = − ln(xᵀ J_k x),   J_k = Diag(−I_{k−1}, 1);

the parameter of this barrier is θ(L_k) = 2.
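Both canonical barriers are one-liners to evaluate, and the claimed parameters θ(S_k) = k and θ(L_k) = 2 can be read off numerically from the logarithmic-homogeneity relation F(ty) = F(y) − θ(F) ln t established in Proposition 4.3.1 below. The points used here are arbitrary interior points (our own illustration):

```python
import numpy as np

def S(X):                     # canonical barrier for S^k_+: -ln Det(X)
    return -np.log(np.linalg.det(X))

def L(x):                     # canonical barrier for L^k: -ln(x_k^2 - x_1^2 - ... - x_{k-1}^2)
    return -np.log(x[-1] ** 2 - np.sum(x[:-1] ** 2))

k = 4
X = np.eye(k) + 0.1 * np.ones((k, k))       # positive definite, so X in int S^4_+
x = np.array([0.3, -0.2, 0.1, 2.0])         # x_4 > ||(x_1,x_2,x_3)||_2, so x in int L^4

# F(t*y) = F(y) - theta * ln t; taking t = e makes theta pop out directly
theta_S = S(X) - S(np.e * X)                # should equal k = 4
theta_L = L(x) - L(np.e * x)                # should equal 2
```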


5)
Another requirement is to be able to get close, just once, to a point x∗(t₀) on the central path with a not “disastrously small” value of t₀ – we should somehow initialize our path-following method! It turns out that such an initialization is a minor problem – it can be carried out via the same path-following technique, provided we are given in advance a strictly feasible solution to our problem.
6)
The barrier S k , same as the canonical barrier Lk for the Lorentz cone Lk , indeed are self-concordant
(whatever it means), and the parameters they are assigned here by definition are exactly their parameters of
self-concordance.

• The canonical barrier K for the cone K given by (Cone), by definition, is the direct sum
of the canonical barriers of the factors:

K(X) = S_{k_1}(X_1) + ... + S_{k_p}(X_p) + L_{k_{p+1}}(X_{p+1}) + ... + L_{k_m}(X_m),  where X_i ∈ int S_+^{k_i} for i ≤ p and X_i ∈ int L^{k_i} for p < i ≤ m;

from now on, we use upper case Latin letters, like X, Y, Z, to denote elements of the space E;
for such an element X, Xi denotes the projection of X onto i-th factor in the direct product
representation of E as shown in (Cone).
The parameter of the barrier K, again by definition, is the sum of parameters of the basic
barriers involved:

θ(K) = θ(S_{k_1}) + ... + θ(S_{k_p}) + θ(L_{k_{p+1}}) + ... + θ(L_{k_m}) = Σ_{i=1}^p k_i + 2(m − p).

Recall that all direct factors in the direct product representation (Cone) of our “universe” E
are Euclidean spaces; the matrix factors S^{k_i} are endowed with the Frobenius inner product

⟨X_i, Y_i⟩_{S^{k_i}} = Tr(X_i Y_i),

while the “arithmetic factors” R^{k_i} are endowed with the usual inner product

⟨X_i, Y_i⟩_{R^{k_i}} = X_iᵀ Y_i;

E itself will be regarded as a Euclidean space endowed with the direct sum of inner products
on the factors:
⟨X, Y⟩_E = Σ_{i=1}^p Tr(X_i Y_i) + Σ_{i=p+1}^m X_iᵀ Y_i.

It is clearly seen that our basic barriers, same as their direct sum K, indeed are barriers for
the corresponding cones: they are C∞ -smooth on the interiors of their domains, blow up to
∞ along every sequence of points from these interiors converging to a boundary point of the
corresponding domain and are strongly convex. To verify the latter property, it makes sense to
compute explicitly the first and the second directional derivatives of these barriers (we need the
corresponding formulae in any case); to simplify notation, we write down the derivatives of the
basic functions Sk , Lk at a point x from their domain along a direction h (you should remember
that in the case of S_k both the point and the direction, in spite of their lower-case notation, are k × k symmetric matrices):



DS_k(x)[h] ≡ (d/dt)|_{t=0} S_k(x + th) = −Tr(x⁻¹h) = −⟨x⁻¹, h⟩_{S_k},  i.e.  ∇S_k(x) = −x⁻¹;

D²S_k(x)[h, h] ≡ (d²/dt²)|_{t=0} S_k(x + th) = Tr(x⁻¹hx⁻¹h) = ⟨x⁻¹hx⁻¹, h⟩_{S_k},  i.e.  [∇²S_k(x)]h = x⁻¹hx⁻¹;
                                                                                                  (4.3.1)
DL_k(x)[h] ≡ (d/dt)|_{t=0} L_k(x + th) = −2 (hᵀJ_k x)/(xᵀJ_k x),  i.e.  ∇L_k(x) = −(2/(xᵀJ_k x)) J_k x;

D²L_k(x)[h, h] ≡ (d²/dt²)|_{t=0} L_k(x + th) = 4 (hᵀJ_k x)²/(xᵀJ_k x)² − 2 (hᵀJ_k h)/(xᵀJ_k x),
  i.e.  ∇²L_k(x) = (4/(xᵀJ_k x)²) J_k x xᵀ J_k − (2/(xᵀJ_k x)) J_k.

From the expression for D2 Sk (x)[h, h] we see that

D²S_k(x)[h, h] = Tr(x⁻¹hx⁻¹h) = Tr([x^{−1/2} h x^{−1/2}]²),

so that D2 Sk (x)[h, h] is positive whenever h 6= 0. It is not difficult to prove that the same is true
for D2 Lk (x)[h, h]. Thus, the canonical barriers for semidefinite and Lorentz cones are strongly
convex, and so is their direct sum K(·).
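The formulas (4.3.1) are easy to cross-check by finite differences. Here is such a check for the S_k case at a randomly generated positive definite point (a throwaway numerical sketch of ours, with loosely chosen tolerances):

```python
import numpy as np

rng = np.random.default_rng(0)
k = 4
A = rng.standard_normal((k, k))
x = A @ A.T + k * np.eye(k)                 # a point of int S^4_+
h = rng.standard_normal((k, k))
h = (h + h.T) / 2.0                         # a symmetric direction

S = lambda X: -np.log(np.linalg.det(X))
xinv = np.linalg.inv(x)
eps = 1e-5

# first directional derivative vs. -Tr(x^{-1} h)
num1 = (S(x + eps * h) - S(x - eps * h)) / (2 * eps)
ana1 = -np.trace(xinv @ h)

# second directional derivative vs. Tr(x^{-1} h x^{-1} h); positivity = strong convexity
num2 = (S(x + eps * h) - 2 * S(x) + S(x - eps * h)) / eps ** 2
ana2 = np.trace(xinv @ h @ xinv @ h)
```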
It makes sense to illustrate the relatively general concepts and results to follow by how they look in the particular case when K is the semidefinite cone S_+^k; we shall refer to this situation as the “SDP case”. The essence of the matter in our general case is exactly the same as in this particular one, but “straightforward computations” which are easy in the SDP case become nearly impossible in the general case; and we have no possibility to explain here how it is possible (it is!) to get the desired results with a minimum amount of computations.
Due to the role played by the SDP case in our exposition, we use for this case special notation, along with the just introduced “general” one. Specifically, we denote the standard – the Frobenius – inner product on E = S^k by ⟨·, ·⟩_F, although we feel free, if necessary, to use our “general” notation ⟨·, ·⟩_E as well; the associated norm is denoted by ‖·‖₂, so that ‖X‖₂ = √Tr(X²), X being a symmetric matrix.

4.3.2 Elementary properties of canonical barriers


Let us establish a number of simple and useful properties of canonical barriers.

Proposition 4.3.1 A canonical barrier, let it be denoted F (F can be either Sk , or Lk , or the


direct sum K of several copies of these “elementary” barriers), possesses the following properties:
(i) F is logarithmically homogeneous, the parameter of logarithmic homogeneity being −θ(F ),
i.e., the following identity holds:

t > 0, x ∈ Dom F ⇒ F (tx) = F (x) − θ(F ) ln t.



• In the SDP case, i.e., when F(x) = S_k(x) = − ln Det(x) and x is a k × k positive definite matrix, (i) claims that
− ln Det(tx) = − ln Det(x) − k ln t,
which of course is true.
(ii) Consequently, the following two equalities hold identically in x ∈ Dom F :
(a) ⟨∇F(x), x⟩ = −θ(F);
(b) [∇²F(x)]x = −∇F(x).
• In the SDP case, ∇F(x) = ∇S_k(x) = −x⁻¹ and [∇²F(x)]h = ∇²S_k(x)h = x⁻¹hx⁻¹ (see (4.3.1)). Here (a) becomes the identity ⟨x⁻¹, x⟩_F ≡ Tr(x⁻¹x) = k, and (b) kindly informs us that x⁻¹xx⁻¹ = x⁻¹.
(iii) Consequently, the k-th differential D^k F(x) of F, k ≥ 1, is homogeneous, of degree −k, in x ∈ Dom F:
∀(x ∈ Dom F, t > 0, h_1, ..., h_k):
D^k F(tx)[h_1, ..., h_k] ≡ ∂^k F(tx + s_1 h_1 + ... + s_k h_k)/∂s_1 ∂s_2 ... ∂s_k |_{s_1=...=s_k=0} = t^{−k} D^k F(x)[h_1, ..., h_k].   (4.3.2)

Proof. (i): it is immediately seen that Sk and Lk are logarithmically homogeneous with parameters of
logarithmic homogeneity −θ(Sk ), −θ(Lk ), respectively; and of course the property of logarithmic homo-
geneity is stable with respect to taking direct sums of functions: if Dom Φ(u) and Dom Ψ(v) are closed
w.r.t. the operation of multiplying a vector by a positive scalar, and both Φ and Ψ are logarithmi-
cally homogeneous with parameters α, β, respectively, then the function Φ(u) + Ψ(v) is logarithmically
homogeneous with the parameter α + β.
(ii): To get (ii.a), it suffices to differentiate the identity
F (tx) = F (x) − θ(F ) ln t
in t at t = 1:
⟨∇F(tx), x⟩ = (d/dt) F(tx) = −θ(F) t⁻¹,

and it remains to set t = 1 in the concluding identity.
Similarly, to get (ii.b), it suffices to differentiate the identity
⟨∇F(x + th), x + th⟩ = −θ(F)

(which is just (ii.a)) in t at t = 0, thus arriving at

⟨[∇²F(x)]h, x⟩ + ⟨∇F(x), h⟩ = 0;

since ⟨[∇²F(x)]h, x⟩ = ⟨[∇²F(x)]x, h⟩ (symmetry of partial derivatives!) and since the resulting equality

⟨[∇²F(x)]x, h⟩ + ⟨∇F(x), h⟩ = 0

holds true identically in h, we come to [∇²F(x)]x = −∇F(x).
(iii): Differentiating k times the identity
F(tx) = F(x) − θ(F) ln t

in x, we get

t^k D^k F(tx)[h_1, ..., h_k] = D^k F(x)[h_1, ..., h_k].  □
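Identities (ii.a) and (ii.b) can also be confirmed mechanically for the Lorentz barrier, using the gradient and Hessian expressions from (4.3.1); the test point below is an arbitrary interior point of L⁵ (our own check):

```python
import numpy as np

k = 5
Jk = np.diag(np.r_[-np.ones(k - 1), 1.0])    # J_k = Diag(-I_{k-1}, 1)
x = np.array([0.5, -0.3, 0.2, 0.1, 3.0])     # x_5 > ||(x_1..x_4)||_2: interior point

q = x @ Jk @ x                               # x^T J_k x > 0 inside the cone
grad = -(2.0 / q) * (Jk @ x)                 # gradient of L_k at x, from (4.3.1)
hess = (4.0 / q ** 2) * np.outer(Jk @ x, Jk @ x) - (2.0 / q) * Jk   # Hessian

ii_a = grad @ x                              # should equal -theta(L_k) = -2
ii_b = hess @ x                              # should equal -grad
```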

An especially nice specific feature of the barriers Sk , Lk and K is their self-duality:



Proposition 4.3.2 A canonical barrier, let it be denoted F (F can be either Sk , or Lk , or the


direct sum K of several copies of these “elementary” barriers), possesses the following property:
for every x ∈ Dom F , −∇F (x) belongs to Dom F as well, and the mapping x 7→ −∇F (x) :
Dom F → Dom F is self-inverse:
−∇F (−∇F (x)) = x ∀x ∈ Dom F. (4.3.3)
Besides this, the mapping x 7→ −∇F (x) is homogeneous of degree -1:
t > 0, x ∈ Dom F ⇒ −∇F(tx) = −t⁻¹ ∇F(x).   (4.3.4)
• In the SDP case, i.e., when F = S_k and x is a k × k positive definite matrix, ∇F(x) = ∇S_k(x) = −x⁻¹, see (4.3.1), so that the above statements merely say
that the mapping x 7→ x−1 is a self-inverse one-to-one mapping of the interior of
the semidefinite cone onto itself, and that −(tx)−1 = −t−1 x−1 , both claims being
trivially true.
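For the Lorentz barrier, the self-inverse property (4.3.3) can be checked directly: since J_k² = I, applying x ↦ −∇L_k(x) twice returns the original point, and the image of an interior point stays interior. A small numerical confirmation (our own, with an arbitrary test point):

```python
import numpy as np

k = 4
Jk = np.diag(np.r_[-np.ones(k - 1), 1.0])

def neg_grad_L(z):                           # -grad of L_k at z: (2/(z^T J_k z)) J_k z
    return (2.0 / (z @ Jk @ z)) * (Jk @ z)

x = np.array([1.0, -0.5, 0.25, 2.0])         # x in int L^4
y = neg_grad_L(x)                            # image point
back = neg_grad_L(y)                         # self-inverse: back should equal x

in_cone = y[-1] > np.linalg.norm(y[:-1])     # the image stays in int L^4
```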

4.4 Primal-dual pair of problems and primal-dual central path


4.4.1 The problem(s)
It makes sense to consider simultaneously the “problem of interest” (CP) and its conic dual;
since K is a direct product of self-dual cones, this dual is a conic problem on the same cone K.
As we remember from Lecture 1, the primal-dual pair associated with (CP) is
min_x { cᵀx : Ax − B ∈ K }   (CP)
max_S { ⟨B, S⟩_E : A∗S = c, S ∈ K }   (CD)

Assume from now on that KerA = {0}, so that we can write down our primal-dual pair in a
symmetric geometric form (Lecture 1, Section 1.4.4):
min_X { ⟨C, X⟩_E : X ∈ (L − B) ∩ K }   (P)
max_S { ⟨B, S⟩_E : S ∈ (L⊥ + C) ∩ K }   (D)

where L is a linear subspace in E (the image space of the linear mapping x 7→ Ax), L⊥ is the
orthogonal complement to L in E, and C ∈ E satisfies A∗ C = c, i.e., hC, AxiE ≡ cT x.
To simplify things, from now on we assume that both problems (CP) and (CD) are strictly
feasible. In terms of (P) and (D) this assumption means that both the primal feasible plane
L − B and the dual feasible plane L⊥ + C intersect the interior of the cone K.
Remark 4.4.1 By Conic Duality Theorem (Theorem 1.4.2), both (CP) and (D) are solvable
with equal optimal values:
Opt(CP) = Opt(D)
(recall that we have assumed strict primal-dual feasibility). Since (P) is equivalent to (CP), (P) is solvable as well, and the optimal value of (P) differs from the one of (CP) by ⟨C, B⟩_E 7). It follows that the optimal values of (P) and (D) are linked by the relation

Opt(P) − Opt(D) + ⟨C, B⟩_E = 0.   (4.4.1)

7)
Indeed, the values of the respective objectives cT x and hC, Ax − BiE at the corresponding to each other

4.4.2 The central path(s)


The canonical barrier K of K induces a barrier for the feasible set X = {x | Ax − B ∈ K} of
the problem (CP) written down in the form of (C), i.e., as
min_x { cᵀx : x ∈ X };

this barrier is

K̂(x) = K(Ax − B) : int X → R   (4.4.2)
and is indeed a barrier. Now we can apply the interior penalty scheme to trace the central
path x∗ (t) associated with the resulting barrier; with some effort it can be derived from the
primal-dual strict feasibility that this central path is well-defined (i.e., that the minimizer of

K̂_t(x) = t cᵀx + K̂(x)

on int X exists for every t ≥ 0 and is unique)8). What is important for us for the moment is
the central path itself, not how to trace it. Moreover, it is highly instructive to pass from the
central path x∗ (t) in the space of design variables to its image

X∗ (t) = Ax∗ (t) − B

in E. The resulting curve has a name – it is called the primal central path of the primal-dual
pair (P), (D); by its origin, it is a curve comprised of strictly feasible solutions of (P) (since it
is the same – to say that x belongs to the (interior of) the set X and to say that X = Ax − B
is a (strictly) feasible solution of (P)). A simple and very useful observation is that the primal
central path can be defined solely in terms of (P), (D) and thus is a “geometric entity” – it is
independent of a particular parameterization of the primal feasible plane L − B by the design
vector x:
(*) A point X∗ (t) of the primal central path is the minimizer of the aggregate

P_t(X) = t⟨C, X⟩_E + K(X)

on the set (L − B) ∩ int K of strictly feasible solutions of (P).


This observation is just a tautology: x∗(t) is the minimizer on int X of the aggregate

K̂_t(x) ≡ t cᵀx + K̂(x) = t⟨C, Ax⟩_E + K(Ax − B) = P_t(Ax − B) + t⟨C, B⟩_E;

we see that the function P̂_t(x) = P_t(Ax − B) of x ∈ int X differs from the function K̂_t(x) by a constant (depending on t) and has therefore the same minimizer x∗(t) as the function K̂_t(x). Now, when x runs through int X, the point X = Ax − B runs exactly through the
set of strictly feasible solutions of (P), so that the minimizer X∗ of Pt on the latter set and
the minimizer x∗ (t) of the function Pbt (x) = Pt (Ax − B) on int X are linked by the relation
X∗ = Ax∗ (t) − B.
feasible solutions x of (CP) and X = Ax − B of (P) differ from each other by exactly ⟨C, B⟩_E:

cᵀx − ⟨C, X⟩_E = cᵀx − ⟨C, Ax − B⟩_E = [cᵀx − ⟨A∗C, x⟩] + ⟨C, B⟩_E = ⟨C, B⟩_E,

where cᵀx − ⟨A∗C, x⟩ = 0 due to A∗C = c.

8)
In Section 4.2.1, there was no problem with the existence of the central path, since there X was assumed to
be bounded; in our present context, X is not necessarily bounded.

The “analytic translation” of the above observation is as follows:


(*′) A point X∗(t) of the primal central path is exactly the strictly feasible solution
X to (P) such that the vector tC + ∇K(X) ∈ E is orthogonal to L (i.e., belongs to
L⊥ ).

Indeed, we know that X∗ (t) is the unique minimizer of the smooth convex function Pt (X) =
thC, XiE +K(X) on the intersection of the primal feasible plane L−B and the interior of the
cone K; a necessary and sufficient condition for a point X of this intersection to minimize
Pt over the intersection is that ∇Pt must be orthogonal to L.

• In the SDP case, a point X_*(t), t > 0, of the primal central path is uniquely defined by the following two requirements: (1) X_*(t) ≻ 0 should be feasible for (P), and (2) the k × k matrix

tC − X_*^{-1}(t) = tC + ∇S_k(X_*(t))

(see (4.3.1)) should belong to L⊥, i.e., should be orthogonal, w.r.t. the Frobenius inner product, to every matrix of the form Ax.

The dual problem (D) is in no sense “worse” than the primal problem (P) and thus also possesses a central path, now called the dual central path S_*(t), t ≥ 0, of the primal-dual pair (P), (D). Similarly to (*), (*′), the dual central path can be characterized as follows:

(**′) A point S_*(t), t ≥ 0, of the dual central path is the unique minimizer of the aggregate

D_t(S) = −t⟨B, S⟩_E + K(S)

on the set of strictly feasible solutions of (D)9). S_*(t) is exactly the strictly feasible solution S to (D) such that the vector −tB + ∇K(S) is orthogonal to L⊥ (i.e., belongs to L).

• In the SDP case, a point S_*(t), t > 0, of the dual central path is uniquely defined by the following two requirements: (1) S_*(t) ≻ 0 should be feasible for (D), and (2) the k × k matrix

−tB − S_*^{-1}(t) = −tB + ∇S_k(S_*(t))

(see (4.3.1)) should belong to L, i.e., should be representable in the form Ax for some x.

From Proposition 4.3.2 we can derive a wonderful connection between the primal and the dual central paths:

Theorem 4.4.1 For t > 0, the primal and the dual central paths X_*(t), S_*(t) of a (strictly feasible) primal-dual pair (P), (D) are linked by the relations

S_*(t) = −t^{-1}∇K(X_*(t)),
X_*(t) = −t^{-1}∇K(S_*(t)).       (4.4.3)
9) Note the slight asymmetry between the definitions of the primal aggregate P_t and the dual aggregate D_t: in the former, the linear term is t⟨C, X⟩_E, while in the latter it is −t⟨B, S⟩_E. This asymmetry is in complete accordance with the fact that we write (P) as a minimization and (D) as a maximization problem; to write (D) in exactly the same form as (P), we would have to replace B with −B, thus making the formula for D_t completely similar to the one for P_t.
4.4. PRIMAL-DUAL PAIR OF PROBLEMS AND PRIMAL-DUAL CENTRAL PATH 345

Proof. By (*′), the vector tC + ∇K(X_*(t)) belongs to L⊥, so that the vector S = −t^{-1}∇K(X_*(t)) belongs to the dual feasible plane L⊥ + C. On the other hand, by Proposition 4.4.3 the vector −∇K(X_*(t)) belongs to Dom K, i.e., to the interior of K; since K is a cone and t > 0, the vector S = −t^{-1}∇K(X_*(t)) belongs to the interior of K as well. Thus, S is a strictly feasible solution of (D). Now let us compute the gradient of the aggregate D_t at the point S:

∇D_t(S) = −tB + ∇K(−t^{-1}∇K(X_*(t)))
        = −tB + t∇K(−∇K(X_*(t)))     [we have used (4.3.4)]
        = −tB − tX_*(t)              [we have used (4.3.3)]
        = −t(B + X_*(t)) ∈ L         [since X_*(t) is primal feasible]

Thus, S is strictly feasible for (D) and ∇D_t(S) ∈ L. But by (**′) these properties characterize S_*(t); thus, S_*(t) = S ≡ −t^{-1}∇K(X_*(t)). This relation, in view of Proposition 4.3.2, implies that X_*(t) = −t^{-1}∇K(S_*(t)). Another way to get the latter relation from S_*(t) = −t^{-1}∇K(X_*(t)) is just to refer to the primal-dual symmetry. □
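The identities of the type (4.3.3), (4.3.4) driving this proof are easy to probe numerically for the SDP barrier S_k(X) = −ln det X, for which ∇S_k(X) = −X^{-1} (see (4.3.1)); the sample point below is a random positive definite matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
k, t = 4, 2.5
M = rng.standard_normal((k, k))
X = M @ M.T + k*np.eye(k)           # a random point in the interior of S^k_+

grad = lambda Y: -np.linalg.inv(Y)  # gradient of the SDP barrier -ln det

# The identity grad K(-grad K(X)) = -X used in the proof:
assert np.allclose(grad(-grad(X)), -X)

# Hence the link (4.4.3): with S = -t^{-1} grad K(X) = t^{-1} X^{-1},
# we recover X back as X = -t^{-1} grad K(S)
S = -grad(X) / t
assert np.allclose(-grad(S) / t, X)
```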

In fact, the connection between the primal and the dual central paths stated by Theorem 4.4.1 can be used to characterize both paths:

Theorem 4.4.2 Let (P), (D) be a strictly feasible primal-dual pair.
For every t > 0, there exists a unique strictly feasible solution X of (P) such that −t^{-1}∇K(X) is a feasible solution to (D), and this solution X is exactly X_*(t).
Similarly, for every t > 0, there exists a unique strictly feasible solution S of (D) such that −t^{-1}∇K(S) is a feasible solution of (P), and this solution S is exactly S_*(t).

Proof. By primal-dual symmetry, it suffices to prove the first claim. We already know (Theorem 4.4.1) that X = X_*(t) is a strictly feasible solution of (P) such that −t^{-1}∇K(X) is feasible for (D); all we need to prove is that X_*(t) is the only point with these properties, which is immediate: if X is a strictly feasible solution of (P) such that −t^{-1}∇K(X) is dual feasible, then −t^{-1}∇K(X) ∈ L⊥ + C, or, which is the same, ∇K(X) ∈ L⊥ − tC, or, which again is the same, ∇P_t(X) = tC + ∇K(X) ∈ L⊥. And we already know from (*′) that the latter property, taken together with strict primal feasibility, is characteristic of X_*(t). □

4.4.2.1 On the central path

As we have seen, the primal and the dual central paths are intrinsically linked to one another, and it makes sense to think of them as a single entity – the primal-dual central path of the primal-dual pair (P), (D). The primal-dual central path is just the curve (X_*(t), S_*(t)) in E × E such that the projection of the curve on the primal space is the primal central path, and the projection of it on the dual space is the dual central path.
To save words, from now on we refer to the primal-dual central path simply as to the central
path.
The central path possesses a number of extremely nice properties; let us list some of them.

Characterization of the central path. By Theorem 4.4.2, the points (X∗ (t), S∗ (t)) of the
central path possess the following properties:

(CentralPath):

1. [Primal feasibility] The point X_*(t) is strictly primal feasible.

2. [Dual feasibility] The point S_*(t) is strictly dual feasible.

3. [“Augmented complementary slackness”] The points X_*(t) and S_*(t) are linked by the relation

S_*(t) = −t^{-1}∇K(X_*(t)) [⇔ X_*(t) = −t^{-1}∇K(S_*(t))].

• In the SDP case, ∇K(U) = ∇S_k(U) = −U^{-1} (see (4.3.1)), and the augmented complementary slackness relation takes the nice form

X_*(t)S_*(t) = t^{-1}I, (4.4.4)

where I, as usual, is the unit matrix.

In fact, the indicated properties fully characterize the central path: whenever two points X, S
possess the properties 1) - 3) with respect to some t > 0, X is nothing but X∗ (t), and S is
nothing but S∗ (t) (this again is said by Theorem 4.4.2).

Duality gap along the central path. Recall that for an arbitrary primal-dual feasible pair (X, S) of the (strictly feasible!) primal-dual pair of problems (P), (D), the duality gap

DualityGap(X, S) ≡ [⟨C, X⟩_E − Opt(P)] + [Opt(D) − ⟨B, S⟩_E] = ⟨C, X⟩_E − ⟨B, S⟩_E + ⟨C, B⟩_E

(see (4.4.1)), which measures the “total inaccuracy” of X, S as approximate solutions of the respective problems, can be written down equivalently as ⟨S, X⟩_E (see statement (!) in Section 1.4.5). Now, what is the duality gap along the central path? The answer is immediate:

DualityGap(X_*(t), S_*(t)) = ⟨S_*(t), X_*(t)⟩_E
                           = ⟨−t^{-1}∇K(X_*(t)), X_*(t)⟩_E    [see (4.4.3)]
                           = t^{-1}θ(K)                       [see Proposition 4.3.1.(ii)]

We have arrived at a wonderful result10):

Proposition 4.4.1 Under the assumption of primal-dual strict feasibility, the duality gap along the central path is inversely proportional to the penalty parameter, the proportionality coefficient being the parameter of the canonical barrier K:

DualityGap(X_*(t), S_*(t)) = θ(K)/t.
10) Which, among other, much more important consequences, explains the name “augmented complementary slackness” of property 3) in (CentralPath): at the primal-dual pair of optimal solutions X*, S* the duality gap should be zero: ⟨S*, X*⟩_E = 0. Property 3), as we just have seen, implies that the duality gap at a primal-dual pair (X_*(t), S_*(t)) from the central path, although nonzero, is “controllable” – equal to θ(K)/t – and becomes small as t grows.
In particular, both X_*(t) and S_*(t) are strictly feasible (θ(K)/t)-approximate solutions to their respective problems:

⟨C, X_*(t)⟩_E − Opt(P) ≤ θ(K)/t,
Opt(D) − ⟨B, S_*(t)⟩_E ≤ θ(K)/t.

• In the SDP case, K = S^k_+ and θ(K) = θ(S_k) = k.


We see that

All we need in order to get “quickly” good primal and dual approximate solutions is to trace the central path fast; if we were interested in solving only one of the problems (P), (D), it would be sufficient to trace fast the associated – primal or dual – component of this path. The quality guarantees we get in such a process depend – in a completely universal fashion! – solely on the value t of the penalty parameter we have managed to achieve and on the value of the parameter of the canonical barrier K, and are completely independent of other elements of the data.
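For the SDP case this universality is easy to probe numerically: any central-path pair satisfies S = t^{-1}X^{-1}, and the duality gap ⟨S, X⟩_F then equals k/t regardless of any other data. A sketch (the matrix X below is an arbitrary positive definite sample, standing in for a central-path point of some instance):

```python
import numpy as np

rng = np.random.default_rng(1)
k, t = 5, 10.0
M = rng.standard_normal((k, k))
X = M @ M.T + k*np.eye(k)        # stand-in for X_*(t): any PD matrix will do here
S = np.linalg.inv(X) / t         # (4.4.3) then forces S_*(t) = t^{-1} X_*(t)^{-1}

assert np.allclose(X @ S, np.eye(k)/t)   # augmented complementary slackness (4.4.4)
gap = np.trace(X @ S)                    # DualityGap = <S, X>_F
print(gap, k/t)                          # both equal k/t = 0.5
```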

4.4.2.2 Near the central path

The conclusion we have just made is a bit too optimistic: well, our life when moving along the central path would be just fine (at the very least, we would know how good the solutions we already have are), but how could we move exactly along the path? Among the relations (CentralPath.1-3) defining the path, the first two are “simple” – just linear – but the third is in fact a system of nonlinear equations, and we have no hope of satisfying these equations exactly. Thus, we arrive at the crucial question which, a bit informally, sounds as follows:

How close (and in what sense close) should we be to the path in order for our life to be essentially as nice as if we were exactly on the path?

There are several ways to answer this question; we will present the simplest one.

A distance to the central path. Our canonical barrier K(·) is a strongly convex smooth function on int K; in particular, its Hessian matrix ∇²K(Y), taken at a point Y ∈ int K, is positive definite. We can use the inverse of this matrix to measure distances between points of E, thus arriving at the norm

‖H‖_Y = (⟨[∇²K(Y)]^{-1}H, H⟩_E)^{1/2}.

It turns out that

A good measure of proximity of a strictly feasible primal-dual pair Z = (X, S) to a point Z_*(t) = (X_*(t), S_*(t)) from the primal-dual central path is the quantity

dist(Z, Z_*(t)) ≡ ‖tS + ∇K(X)‖_X ≡ (⟨[∇²K(X)]^{-1}(tS + ∇K(X)), tS + ∇K(X)⟩_E)^{1/2}

Although written in a form which is non-symmetric w.r.t. X, S, this quantity is in fact symmetric in X, S: it turns out that

‖tS + ∇K(X)‖_X = ‖tX + ∇K(S)‖_S (4.4.5)

for all t > 0 and S, X ∈ int K.



Observe that dist(Z, Z∗ (t)) ≥ 0, and dist(Z, Z∗ (t)) = 0 if and only if S = −t−1 ∇K(X), which,
for a strictly primal-dual feasible pair Z = (X, S), means that Z = Z∗ (t) (see the characterization
of the primal-dual central path); thus, dist(Z, Z∗ (t)) indeed can be viewed as a kind of distance
from Z to Z∗ (t).

In the SDP case X, S are k × k symmetric matrices, and

dist²(Z, Z_*(t)) = ‖tS + ∇S_k(X)‖²_X = ⟨[∇²S_k(X)]^{-1}(tS + ∇S_k(X)), tS + ∇S_k(X)⟩_F
                = Tr(X(tS − X^{-1})X(tS − X^{-1}))    [see (4.3.1)]
                = Tr([tX^{1/2}SX^{1/2} − I]²),

so that

dist²(Z, Z_*(t)) = Tr(X(tS − X^{-1})X(tS − X^{-1})) = ‖tX^{1/2}SX^{1/2} − I‖²₂. (4.4.6)

Besides this,

‖tX^{1/2}SX^{1/2} − I‖²₂ = Tr([tX^{1/2}SX^{1/2} − I]²)
                         = Tr(t²X^{1/2}SX^{1/2}X^{1/2}SX^{1/2} − 2tX^{1/2}SX^{1/2} + I)
                         = Tr(t²X^{1/2}SXSX^{1/2}) − 2tTr(X^{1/2}SX^{1/2}) + Tr(I)
                         = Tr(t²XSXS − 2tXS + I)
                         = Tr(t²SXSX − 2tSX + I)
                         = Tr(t²S^{1/2}XS^{1/2}S^{1/2}XS^{1/2} − 2tS^{1/2}XS^{1/2} + I)
                         = Tr([tS^{1/2}XS^{1/2} − I]²),

i.e., (4.4.5) indeed is true.
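The symmetry identity (4.4.5) can also be checked numerically via (4.4.6) and its S-X counterpart; below, X and S are arbitrary positive definite samples, and matrix square roots are taken via eigenvalue decomposition:

```python
import numpy as np

def sqrtm_psd(A):
    # symmetric PSD square root via eigendecomposition
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(w)) @ V.T

rng = np.random.default_rng(2)
k, t = 4, 3.0
X = (lambda M: M @ M.T + k*np.eye(k))(rng.standard_normal((k, k)))
S = (lambda M: M @ M.T + k*np.eye(k))(rng.standard_normal((k, k)))

# Both sides of (4.4.5), written via (4.4.6) and its S-X counterpart
lhs = np.linalg.norm(t * sqrtm_psd(X) @ S @ sqrtm_psd(X) - np.eye(k), 'fro')
rhs = np.linalg.norm(t * sqrtm_psd(S) @ X @ sqrtm_psd(S) - np.eye(k), 'fro')
assert np.isclose(lhs, rhs)
print(lhs, rhs)
```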

In a moderate dist(·, Z_*(·))-neighbourhood of the central path. It turns out that in such a neighbourhood everything is essentially as fine as on the central path itself:

A. Whenever Z = (X, S) is a pair of primal-dual strictly feasible solutions to (P), (D) such that

dist(Z, Z_*(t)) ≤ 1, (Close)

Z is “essentially as good as Z_*(t)”; namely, the duality gap at (X, S) is essentially as small as at the point Z_*(t):

DualityGap(X, S) = ⟨S, X⟩_E ≤ 2 DualityGap(Z_*(t)) = 2θ(K)/t. (4.4.7)
Let us check A in the SDP case. Let (t, X, S) satisfy the premise of A. The duality gap at the pair (X, S) of strictly primal-dual feasible solutions is

DualityGap(X, S) = ⟨X, S⟩_F = Tr(XS),

while by (4.4.6) the relation dist((X, S), Z_*(t)) ≤ 1 means that

‖tX^{1/2}SX^{1/2} − I‖₂ ≤ 1,

whence

‖X^{1/2}SX^{1/2} − t^{-1}I‖₂ ≤ 1/t.

Denoting by δ the vector of eigenvalues of the symmetric matrix X^{1/2}SX^{1/2}, we conclude that Σ_{i=1}^k (δ_i − t^{-1})² ≤ t^{-2}, whence

DualityGap(X, S) = Tr(XS) = Tr(X^{1/2}SX^{1/2}) = Σ_{i=1}^k δ_i
                 ≤ kt^{-1} + Σ_{i=1}^k |δ_i − t^{-1}|
                 ≤ kt^{-1} + √k (Σ_{i=1}^k (δ_i − t^{-1})²)^{1/2}
                 ≤ kt^{-1} + √k t^{-1} ≤ 2kt^{-1},

and (4.4.7) follows.
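A numerical illustration of A: below we manufacture a pair (X, S) satisfying the premise (Close) by prescribing tX^{1/2}SX^{1/2} = I + E with ‖E‖₂ = 0.9 (Frobenius norm, in the notation of (4.4.6)), and observe the bound (4.4.7); all data are random illustrative samples:

```python
import numpy as np

def sqrtm_psd(A):
    # symmetric PSD square root via eigendecomposition
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(w)) @ V.T

rng = np.random.default_rng(3)
k, t = 6, 4.0
M = rng.standard_normal((k, k))
X = M @ M.T + k*np.eye(k)                      # strictly primal feasible stand-in

# Prescribe t X^{1/2} S X^{1/2} = I + E with ||E||_2 = 0.9 <= 1 (so I + E is PD)
E = rng.standard_normal((k, k)); E = 0.5*(E + E.T)
E *= 0.9 / np.linalg.norm(E, 'fro')
Xih = np.linalg.inv(sqrtm_psd(X))
S = Xih @ (np.eye(k) + E) @ Xih / t            # symmetric positive definite

d = np.linalg.norm(t * sqrtm_psd(X) @ S @ sqrtm_psd(X) - np.eye(k), 'fro')
assert d <= 1.0                                # the premise (Close)
assert np.trace(X @ S) <= 2*k/t                # the conclusion (4.4.7)
print(np.trace(X @ S), 2*k/t)
```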

It follows from A that

For our purposes, it is essentially the same – to move along the primal-dual central path, or to trace this path staying in its “time-space” neighbourhood

N_κ = {(t, X, S) | X ∈ L − B, S ∈ L⊥ + C, t > 0, dist((X, S), (X_*(t), S_*(t))) ≤ κ} (4.4.8)

with certain κ ≤ 1.

Most of the interior point methods for LP, CQP, and SDP, including those most powerful in practice, solve the primal-dual pair (P), (D) by tracing the central path11), although not all of them keep the iterates in N_{O(1)}; some of the methods work in much wider neighbourhoods of the central path, in order to avoid slowing down when passing “highly curved” segments of the path. At the level of ideas, these “long step path following methods” do not essentially differ from the “short step” ones – those keeping the iterates in N_{O(1)}; this is why in the analysis part of our forthcoming presentation we restrict ourselves to the short-step methods. It should be added that as far as theoretical efficiency estimates are concerned, the short-step methods yield the best complexity bounds known so far for LP, CQP and SDP, and are essentially better than the long-step methods (although in practice the long-step methods usually outperform their short-step counterparts).

4.5 Tracing the central path

4.5.1 The path-following scheme

Assume we are solving a strictly feasible primal-dual pair of problems (P), (D) and intend to trace the associated central path. Essentially all we need is a mechanism for updating a current iterate (t̄, X̄, S̄) – such that t̄ > 0, X̄ is strictly primal feasible, S̄ is strictly dual feasible, and (X̄, S̄) is, in a certain precise sense, a “good” approximation of the point Z_*(t̄) = (X_*(t̄), S_*(t̄)) on the central path – into a new iterate (t₊, X₊, S₊) with similar properties and a larger value t₊ > t̄ of the penalty parameter. Given such an updating and iterating it, we indeed shall trace the central path, with all the benefits (see above) coming from the latter fact12). How could we construct the required updating? Recalling the description of the central path, we see that our question is:
11) There exist also potential reduction interior point methods which do not take explicit care of tracing the central path; an example is the very first IP method for LP – the method of Karmarkar. The potential reduction IP methods are beyond the scope of our course, which is not a big loss for a practically oriented reader, since, as practical tools, these methods are thought to be obsolete.
12) Of course, besides knowing how to trace the central path, we should also know how to initialize this process – how to come close to the path to be able to start tracing it. There are different techniques to resolve this

Given a triple (t̄, X̄, S̄) which satisfies the relations

X ∈ L − B,
S ∈ L⊥ + C       (4.5.1)

(which is in fact a system of linear equations) and approximately satisfies the system of nonlinear equations

G_t(X, S) ≡ S + t^{-1}∇K(X) = 0,       (4.5.2)

update it into a new triple (t₊, X₊, S₊) with the same properties and t₊ > t̄.

Since the left hand side G_t(·) in our system of nonlinear equations is smooth around (t̄, X̄, S̄) (recall that X̄ was assumed to be strictly primal feasible), the most natural way, from the viewpoint of Computational Mathematics, to achieve our target is as follows:

1. We choose somehow a desired new value t+ > t̄ of the penalty parameter;

2. We linearize the left hand side G_{t₊}(X, S) of the system of nonlinear equations (4.5.2) at the point (X̄, S̄), and replace (4.5.2) with the linearized system of equations

G_{t₊}(X̄, S̄) + (∂G_{t₊}(X̄, S̄)/∂X)(X − X̄) + (∂G_{t₊}(X̄, S̄)/∂S)(S − S̄) = 0       (4.5.3)

3. We define the corrections ∆X, ∆S from the requirement that the updated pair X₊ = X̄ + ∆X, S₊ = S̄ + ∆S must satisfy (4.5.1) and the linearized version (4.5.3) of (4.5.2). In other words, the corrections should solve the system

∆X ∈ L,
∆S ∈ L⊥,
G_{t₊}(X̄, S̄) + (∂G_{t₊}(X̄, S̄)/∂X)∆X + (∂G_{t₊}(X̄, S̄)/∂S)∆S = 0       (4.5.4)

4. Finally, we define X₊ and S₊ as

X₊ = X̄ + ∆X,
S₊ = S̄ + ∆S.       (4.5.5)

The primal-dual IP methods we are describing basically fit the outlined scheme, up to the
following two important points:

• If the current iterate (X̄, S̄) is not close enough to Z_*(t̄), and/or if the desired improvement t₊ − t̄ is too large, the corrections given by the outlined scheme may be too large; as a result, the updating (4.5.5) as it is may be inappropriate – e.g., X₊, or S₊, or both, may be kicked out of the cone K. (Why not: the linearized system (4.5.3) approximates the “true” system (4.5.2) well only locally, and we have no reason to trust corrections coming from the linearized system when these corrections are large.)
“initialization difficulty”, and basically all of them achieve the goal by using the same path-tracing technique, now applied to an appropriate auxiliary problem where the “initialization difficulty” does not arise at all. Thus, at the level of ideas the initialization techniques do not add anything essentially new, which allows us to skip all initialization-related issues in our presentation.

There is a standard way to overcome the outlined difficulty – to use the corrections in a damped fashion, namely, to replace the updating (4.5.5) with

X₊ = X̄ + α∆X,
S₊ = S̄ + β∆S,       (4.5.6)

and to choose the stepsizes α > 0, β > 0 from additional “safety” considerations, like ensuring that the updated pair (X₊, S₊) resides in the interior of K, or enforcing it to stay in a desired neighbourhood of the central path, or whatever else. In IP methods, the solution (∆X, ∆S) of (4.5.4) plays the role of the search direction (and this is how it is called), and the actual corrections are proportional to the search directions rather than exactly equal to them. In this sense the situation is completely similar to the one with the Newton method from Section 4.2.2 (which is natural: the latter method is exactly the linearization method for solving the Fermat equation ∇f(x) = 0).

• The “augmented complementary slackness” system (4.5.2) can be written down in many different forms which are equivalent to each other in the sense that they share a common solution set. E.g., we have the same reasons to express the augmented complementary slackness requirement by the nonlinear system (4.5.2) as to express it by the system

Ĝ_t(X, S) ≡ X + t^{-1}∇K(S) = 0,

not speaking about other possibilities. And although all systems of nonlinear equations

H_t(X, S) = 0

expressing the augmented complementary slackness are “equivalent” in the sense that they share a common solution set, their linearizations are different and thus lead to different search directions and, finally, to different path-following methods. Choosing an appropriate (in general, even varying from iteration to iteration) analytic representation of the augmented complementary slackness requirement, one can gain a lot in the performance of the resulting path-following method, and the IP machinery facilitates this flexibility (see “SDP case examples” below).

4.5.2 Speed of path-tracing

In the LP-CQP-SDP situation, the speed at which the best, from the theoretical viewpoint, path-following methods manage to trace the path is inversely proportional to the square root of the parameter θ(K) of the underlying canonical barrier. It means the following. Started at a point (t⁰, X⁰, S⁰) from the neighbourhood N_{0.1} of the central path, the method after O(1)√θ(K) steps reaches a point (t¹ = 2t⁰, X¹, S¹) from the same neighbourhood, after the same O(1)√θ(K) steps more reaches a point (t² = 2²t⁰, X², S²) from the neighbourhood, and so on – it takes the method a fixed number O(1)√θ(K) of steps to increase the current value of the penalty parameter by factor 2, staying all the time in N_{0.1}. By (4.4.7) it means that every O(1)√θ(K) steps of the method reduce the (upper bound on the) inaccuracy of the current approximate solutions by factor 2, or, which is the same, add a fixed number of accuracy digits to these solutions. Thus, “the cost of an accuracy digit” for the (best) path-following methods is O(1)√θ(K) steps. To realize what this indeed means, we should, of course, know how “heavy” a step is – what its arithmetic cost is. Well, the arithmetic cost of a step for the “cheapest among the fastest” IP methods as applied to (CP) is as if all operations carried out at a step were those required by

1. Assembling, given a point X ∈ int K, the symmetric n × n matrix (n = dim x)

H = A*[∇²K(X)]A;

2. Subsequent Choleski factorization of the matrix H (which, due to its origin, is symmetric positive definite and thus admits a Choleski decomposition H = DDᵀ with lower triangular D).
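For the SDP case these two operations are easy to make concrete: there ∇²K(X)[H] = X^{-1}HX^{-1}, so that H_{ij} = Tr(X^{-1}A_iX^{-1}A_j), where A_i are the coefficient matrices of the mapping A, i.e., Ax = Σ_i x_iA_i. A sketch with random illustrative data:

```python
import numpy as np

rng = np.random.default_rng(4)
k, n = 5, 3
# A x = sum_i x_i A_i with symmetric coefficient matrices A_i (random sample data)
A_mats = [0.5*(M + M.T) for M in rng.standard_normal((n, k, k))]
X = (lambda M: M @ M.T + k*np.eye(k))(rng.standard_normal((k, k)))  # X in int K
Xi = np.linalg.inv(X)

# Step 1: assemble H = A*[grad^2 K(X)]A; for the SDP barrier,
# grad^2 K(X)[H] = X^{-1} H X^{-1}, whence H_ij = Tr(X^{-1} A_i X^{-1} A_j)
H = np.array([[np.trace(Xi @ Ai @ Xi @ Aj) for Aj in A_mats] for Ai in A_mats])

# Step 2: Choleski factorization H = D D^T with lower triangular D
D = np.linalg.cholesky(H)
assert np.allclose(D @ D.T, H)
```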
Looking at (Cone), (CP) and (4.3.1), we immediately conclude that the arithmetic cost of assembling and factorizing H is polynomial in the size dim Data(·) of the data defining (CP), and that the parameter θ(K) also is polynomial in this size. Thus, the cost of an accuracy digit for the methods in question is polynomial in the size of the data, as is required of polynomial time methods13). Explicit complexity bounds for LP_b, CQP_b, SDP_b are given in Sections 4.6.1, 4.6.2, 4.6.3, respectively.

4.5.3 The primal and the dual path-following methods

The simplest way to implement the path-following scheme from Section 4.5.1 is to linearize the augmented complementary slackness equations (4.5.2) as they are, ignoring the option to rewrite these equations equivalently before linearization. Let us look at the resulting method in more detail. Linearizing (4.5.2) at a current iterate X̄, S̄, we get the vector equation

t₊(S̄ + ∆S) + ∇K(X̄) + [∇²K(X̄)]∆X = 0,

where t₊ is the target value of the penalty parameter. The system (4.5.4) now becomes
(a) ∆X ∈ L  ⇔  (a′) ∆X = A∆x [∆x ∈ Rⁿ],
(b) ∆S ∈ L⊥  ⇔  (b′) A*∆S = 0,       (4.5.7)
(c) t₊[S̄ + ∆S] + ∇K(X̄) + [∇²K(X̄)]∆X = 0;

the unknowns here are ∆X, ∆S and ∆x. To process the system, we eliminate ∆X via (a′) and multiply both sides of (c) by A*, thus getting the equation

H∆x + [t₊A*[S̄ + ∆S] + A*∇K(X̄)] = 0,  where H = A*[∇²K(X̄)]A.       (4.5.8)

Note that A*[S̄ + ∆S] = c is the objective of (CP) (indeed, S̄ ∈ L⊥ + C, i.e., A*S̄ = c, while A*∆S = 0 by (b′)). Consequently, (4.5.8) becomes the primal Newton system

H∆x = −[t₊c + A*∇K(X̄)]. (4.5.9)

Solving this system (which is possible – it is easily seen that the n × n matrix H is positive definite), we get ∆x and then set

∆X = A∆x,
∆S = −t₊^{-1}[∇K(X̄) + [∇²K(X̄)]∆X] − S̄,       (4.5.10)
13) Strictly speaking, the outlined complexity considerations are applicable to the “highway” phase of the solution process, after we have once reached the neighbourhood N_{0.1} of the central path. However, the results of our considerations remain unchanged after the initialization expenses are taken into account, see Section 4.6.

thus getting a solution to (4.5.7). Restricting ourselves to the stepsizes α = β = 1 (see (4.5.6)), we come to the “closed form” description of the method:

(a) t ↦ t₊ > t,
(b) x ↦ x₊ = x + ∆x,  ∆x = −[A*(∇²K(X))A]^{-1}[t₊c + A*∇K(X)],       (4.5.11)
(c) S ↦ S₊ = −t₊^{-1}[∇K(X) + [∇²K(X)]A∆x],

where x is the current iterate in the space Rⁿ of design variables and X = Ax − B is its image in the space E.
The resulting scheme admits a quite natural explanation. Consider the function

F(x) = K(Ax − B);

you can immediately verify that this function is a barrier for the feasible set of (CP). Let also

F_t(x) = tc^Tx + F(x)

be the associated barrier-generated family of penalized objectives. Relation (4.5.11.b) says that the iterates in the space of design variables are updated according to

x ↦ x₊ = x − [∇²F_{t₊}(x)]^{-1}∇F_{t₊}(x),

i.e., the process in the space of design variables is exactly the process (4.2.1) from Section 4.2.3.
Note that (4.5.11) is, essentially, a purely primal process (this is where the name of the method comes from). Indeed, the dual iterates S, S₊ just do not appear in the formulas for x₊, X₊, and in fact the dual solutions are no more than “shadows” of the primal ones.
Remark 4.5.1 When constructing the primal path-following method, we started with the augmented complementary slackness equations in the form (4.5.2). Needless to say, we could start our developments with the same conditions written down in the “swapped” form

X + t^{-1}∇K(S) = 0

as well, thus coming to what is called the “dual path-following method”. Of course, as applied to a given pair (P), (D), the dual path-following method differs from the primal one. However, the constructions and results related to the dual path-following method require no special care – they can be obtained from their “primal counterparts” just by swapping “primal” and “dual” entities.
The complexity analysis of the primal path-following method can be summarized in the following

Theorem 4.5.1 Let 0 < χ ≤ κ ≤ 0.1. Assume that we are given a starting point (t₀, x₀, S₀) such that t₀ > 0 and the point

(X₀ = Ax₀ − B, S₀)

is κ-close to Z_*(t₀):

dist((X₀, S₀), Z_*(t₀)) ≤ κ.

Starting with (t₀, x₀, X₀, S₀), let us iterate process (4.5.11) equipped with the penalty updating policy

t₊ = (1 + χ/√θ(K)) t,       (4.5.12)

i.e., let us build the iterates (t_i, x_i, X_i, S_i) according to

t_i = (1 + χ/√θ(K)) t_{i−1},
x_i = x_{i−1} + ∆x_i,  ∆x_i = −[A*(∇²K(X_{i−1}))A]^{-1}[t_i c + A*∇K(X_{i−1})],
X_i = Ax_i − B,
S_i = −t_i^{-1}[∇K(X_{i−1}) + [∇²K(X_{i−1})]A∆x_i].

The resulting process is well-defined and generates strictly primal-dual feasible pairs (X_i, S_i) such that (t_i, X_i, S_i) stay in the neighbourhood N_κ of the primal-dual central path.

The theorem says that with properly chosen κ, χ (e.g., κ = χ = 0.1) we can, having once gotten close to the primal-dual central path, trace it by the primal path-following method, keeping the iterates in the N_κ-neighbourhood of the path and increasing the penalty parameter by an absolute constant factor every O(√θ(K)) steps – exactly as claimed in Sections 4.2.3, 4.5.2.
This fact is extremely important theoretically; in particular, it underlies the polynomial time complexity bounds for LP, CQP and SDP from Section 4.6 below. As a practical tool, the primal and the dual path-following methods, at least in their short-step form presented above, are not that attractive. The computational power of the methods can be improved by passing to appropriate large-step versions of the algorithms, but even these versions are thought to be inferior as compared to “true” primal-dual path-following methods (those which “indeed work with both (P) and (D)”, see below). There are, however, cases when the primal or the dual path-following scheme seems to be unavoidable; these are, essentially, the situations where the pair (P), (D) is “highly asymmetric”, e.g., (P) and (D) have design dimensions dim L, dim L⊥ differing by orders of magnitude. Here it becomes too expensive computationally to treat (P), (D) in a “nearly symmetric way”, and it is better to focus solely on the problem with the smaller design dimension.
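To see the process (4.5.11)-(4.5.12) in action, here is a sketch on a one-variable toy SDP; all data below are illustrative choices, not the example from the text. We minimize cx over x with X = xI₂ + I₂ ⪰ 0, where c = 3 = Tr C for C = diag(1, 2), so that Opt(P) = −3; the starting point is taken exactly on the central path for t₀ = 1 (for this instance x_*(t) = 2/(3t) − 1):

```python
import numpy as np

# Toy instance (illustrative data): n = 1, k = 2, A x = x*I, B = -I,
# c = 3 = <C, I>_F for C = diag(1, 2); Opt(P) = -3 (attained as x -> -1).
k, c, theta, chi = 2, 3.0, 2.0, 0.1
A = lambda x: x * np.eye(k)
Astar = lambda U: np.trace(U)          # adjoint of A

t, x = 1.0, 2.0/3.0 - 1.0              # exactly on the central path for t0 = 1

for _ in range(150):
    t *= 1 + chi/np.sqrt(theta)        # penalty updating policy (4.5.12)
    X = A(x) + np.eye(k)               # X = A x - B
    Xi = np.linalg.inv(X)
    H = np.trace(Xi @ Xi)              # H = A*[grad^2 K(X)]A (a 1x1 "matrix" here)
    dx = -(t*c + Astar(-Xi)) / H       # primal Newton system (4.5.9)
    S = -(-Xi + Xi @ A(dx) @ Xi) / t   # dual update, rule (c) of (4.5.11)
    x += dx

assert np.isclose(Astar(S), c)         # S remains dual feasible: A*S = c
assert abs(c*x - (-3.0)) < 1e-2        # objective approaches Opt(P) = -3
print(c*x, np.trace((A(x) + np.eye(k)) @ S))   # objective and gap ~ theta/t
```

With χ = 0.1 the penalty grows by the factor 1 + χ/√2 ≈ 1.07 per step, and the duality gap decays as θ(K)/t = 2/t – the same qualitative picture as in the table below.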
To get an impression of how the primal path-following method works, here is a picture: the 2D feasible set of a toy SDP (K = S³₊). The “continuous curve” is the primal central path; the dots are the iterates x_i of the algorithm. We cannot draw the dual solutions, since they “live” in a 4-dimensional space (dim L⊥ = dim S³ − dim L = 6 − 2 = 4).

Here are the corresponding numbers:

Itr#  Objective   Duality Gap    Itr#  Objective   Duality Gap
  1   -0.100000   2.96             7   -1.359870   8.4e-4
  2   -0.906963   0.51             8   -1.360259   2.1e-4
  3   -1.212689   0.19             9   -1.360374   5.3e-5
  4   -1.301082   6.9e-2          10   -1.360397   1.4e-5
  5   -1.349584   2.1e-2          11   -1.360404   3.8e-6
  6   -1.356463   4.7e-3          12   -1.360406   9.5e-7

4.5.4 The SDP case

In what follows, we specialize the primal-dual path-following scheme to the SDP case and carry out its complexity analysis.

4.5.4.1 The path-following scheme in SDP

Let us look at the outlined scheme in the SDP case. Here the system of nonlinear equations (4.5.2) becomes (see (4.3.1))

G_t(X, S) ≡ S − t^{-1}X^{-1} = 0,       (4.5.13)

X, S being positive definite k × k symmetric matrices.
Recall that our generic scheme of a path-following IP method suggests, given a current triple (t̄, X̄, S̄) with positive t̄ and strictly primal, respectively dual, feasible X̄ and S̄, to update this triple into a new triple (t₊, X₊, S₊) of the same type as follows:
(i) First, we somehow rewrite the system (4.5.13) as an equivalent system

Ḡ_t(X, S) = 0;       (4.5.14)

(ii) Second, we choose somehow a new value t₊ > t̄ of the penalty parameter and linearize system (4.5.14) (with t set to t₊) at the point (X̄, S̄), thus coming to the system of linear equations

(∂Ḡ_{t₊}(X̄, S̄)/∂X)∆X + (∂Ḡ_{t₊}(X̄, S̄)/∂S)∆S = −Ḡ_{t₊}(X̄, S̄)       (4.5.15)

for the “corrections” (∆X, ∆S);
We add to (4.5.15) the system of linear equations on ∆X, ∆S expressing the requirement that a shift of (X̄, S̄) in the direction (∆X, ∆S) should preserve the validity of the linear constraints in (P), (D), i.e., the equations saying that ∆X ∈ L, ∆S ∈ L⊥. These linear equations can be written down as

∆X = A∆x [⇔ ∆X ∈ L]
A*∆S = 0 [⇔ ∆S ∈ L⊥]       (4.5.16)

(iii) We solve the system of linear equations (4.5.15), (4.5.16), thus obtaining a primal-dual search direction (∆X, ∆S), and update the current iterates according to

X₊ = X̄ + α∆X,  S₊ = S̄ + β∆S,

where the primal and the dual stepsizes α, β are given by certain “side requirements”.
The major “degree of freedom” of the construction comes from (i) – from how we construct the system (4.5.14). A very popular way to handle (i), the way which indeed leads to primal-dual methods, starts from rewriting (4.5.13) in a form symmetric w.r.t. X and S. To this end we first observe that (4.5.13) is equivalent to every one of the following two matrix equations:

XS = t^{-1}I;  SX = t^{-1}I.

Adding these equations, we get a “symmetric” w.r.t. X, S matrix equation

XS + SX = 2t^{-1}I,       (4.5.17)

which, by its origin, is a consequence of (4.5.13). On closer inspection, it turns out that (4.5.17), regarded as a matrix equation with positive definite symmetric matrices, is equivalent to (4.5.13). It is possible to use in the role of (4.5.14) the matrix equation (4.5.17) as it is; this policy leads to the so-called AHO (Alizadeh-Haeberly-Overton) search direction and the “XS + SX” primal-dual path-following method.
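The “closer inspection” can be mimicked numerically: fixing a positive definite X and solving the linear matrix equation (4.5.17) for S, one recovers exactly the solution S = t^{-1}X^{-1} of (4.5.13). (The solution of (4.5.17) for a given PD matrix X is unique, since X and −X share no eigenvalues.) A sketch using the Kronecker-product representation of the equation, with random illustrative data:

```python
import numpy as np

rng = np.random.default_rng(5)
k, t = 4, 2.0
M = rng.standard_normal((k, k))
X = M @ M.T + k*np.eye(k)              # a positive definite X

# Row-major vec identity: vec(A S B^T) = (A kron B) vec(S); with symmetric X,
# vec(XS + SX) = (kron(X, I) + kron(I, X)) vec(S)
I = np.eye(k)
Lyap = np.kron(X, I) + np.kron(I, X)
S = np.linalg.solve(Lyap, (2.0/t) * I.flatten()).reshape(k, k)

# The unique solution of (4.5.17) is precisely the solution of (4.5.13):
assert np.allclose(S, np.linalg.inv(X)/t)
```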
It is also possible to use a “scaled” version of (4.5.17). Namely, let us choose somehow a positive definite scaling matrix Q and observe that our original matrix equation (4.5.13) says that S = t^{-1}X^{-1}, which is exactly the same as to say that Q^{-1}SQ^{-1} = t^{-1}(QXQ)^{-1}; the latter, in turn, is equivalent to every one of the matrix equations

QXSQ^{-1} = t^{-1}I;  Q^{-1}SXQ = t^{-1}I.

Adding these equations, we get the scaled version of (4.5.17):

QXSQ^{-1} + Q^{-1}SXQ = 2t^{-1}I,       (4.5.18)

which, same as (4.5.17) itself, is equivalent to (4.5.13).

With (4.5.18) playing the role of (4.5.14), we get a quite flexible scheme with huge freedom in choosing the scaling matrix Q, which in particular can be varied from iteration to iteration. As we shall see in a while, this freedom reflects the intrinsic (and extremely important in the interior-point context) symmetries of the semidefinite cone.
Analysis of the path-following methods based on search directions coming from (4.5.18)
(“Zhang’s family of search directions”) simplifies a lot when at every iteration we choose its own
scaling matrix and ensure that the matrices

S̃ = Q^{−1}S̄Q^{−1},   X̂ = QX̄Q

commute (X̄, S̄ are the iterates to be updated); we call such a policy a “commutative scaling”.
Popular commutative scalings are:

1. Q = S̄^{1/2} (S̃ = I, X̂ = S̄^{1/2}X̄S̄^{1/2}) (the “XS” method);

2. Q = X̄^{−1/2} (S̃ = X̄^{1/2}S̄X̄^{1/2}, X̂ = I) (the “SX” method);

3. Q such that S̃ = X̂ (the NT (Nesterov-Todd) method, extremely attractive and deep).

   If X̄ and S̄ were just positive reals, the formula for Q would be simple: Q = (S̄/X̄)^{1/4}.
   In the matrix case this simple formula becomes a bit more complicated (to make our
   life easier, below we write X instead of X̄ and S instead of S̄):

   Q = P^{1/2},   P = X^{−1/2}(X^{1/2}SX^{1/2})^{−1/2}X^{1/2}S.


4.5. TRACING THE CENTRAL PATH 357

We should verify that (a) P is symmetric positive definite, so that Q is well-defined,
and that (b) Q^{−1}SQ^{−1} = QXQ.

(a): Let us first verify that P is symmetric:

P ?=? P^T
    ⇕
X^{−1/2}(X^{1/2}SX^{1/2})^{−1/2}X^{1/2}S ?=? SX^{1/2}(X^{1/2}SX^{1/2})^{−1/2}X^{−1/2}
    ⇕
X^{−1/2}(X^{1/2}SX^{1/2})^{−1/2}X^{1/2}S [X^{1/2}(X^{1/2}SX^{1/2})^{1/2}X^{−1/2}S^{−1}] ?=? I
    ⇕
X^{−1/2}(X^{1/2}SX^{1/2})^{−1/2}(X^{1/2}SX^{1/2})(X^{1/2}SX^{1/2})^{1/2}X^{−1/2}S^{−1} ?=? I
    ⇕
X^{−1/2}(X^{1/2}SX^{1/2})X^{−1/2}S^{−1} ?=? I

and the concluding ?=? indeed is =.


Now let us verify that P is positive definite. Recall that the spectrum of the product of
two square matrices, symmetric or not, remains unchanged when swapping the factors.
Therefore, denoting by σ(A) the spectrum of A, we have

σ(P) = σ( X^{−1/2}(X^{1/2}SX^{1/2})^{−1/2}X^{1/2}S )
     = σ( (X^{1/2}SX^{1/2})^{−1/2}X^{1/2}SX^{−1/2} )
     = σ( (X^{1/2}SX^{1/2})^{−1/2}(X^{1/2}SX^{1/2})X^{−1} )
     = σ( (X^{1/2}SX^{1/2})^{1/2}X^{−1} )
     = σ( X^{−1/2}(X^{1/2}SX^{1/2})^{1/2}X^{−1/2} ),

and the argument of the concluding σ(·) clearly is a positive definite symmetric matrix.
Thus, the spectrum of the symmetric matrix P is positive, i.e., P is positive definite.
(b): To verify that QXQ = Q^{−1}SQ^{−1}, i.e., that P^{1/2}XP^{1/2} = P^{−1/2}SP^{−1/2}, is the
same as to verify that P XP = S. The latter equality is given by the following computation:

P XP = [X^{−1/2}(X^{1/2}SX^{1/2})^{−1/2}X^{1/2}S] X [X^{−1/2}(X^{1/2}SX^{1/2})^{−1/2}X^{1/2}S]
     = X^{−1/2}(X^{1/2}SX^{1/2})^{−1/2}(X^{1/2}SX^{1/2})(X^{1/2}SX^{1/2})^{−1/2}X^{1/2}S
     = X^{−1/2}X^{1/2}S
     = S.

You should not think that Nesterov and Todd guessed the formula for this scaling ma-
trix. They did much more: they have developed an extremely deep theory (covering the
general LP-CQP-SDP case, not just the SDP one!) which, among other things, guar-
antees that the desired scaling matrix exists (and even is unique). After the existence
is established, it becomes much easier (although still not that easy) to find an explicit
formula for Q.
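The computation above is easy to check numerically. The following sketch (assuming NumPy is available; the random matrices and the size n = 4 are arbitrary illustrative choices) builds P for random positive definite X, S and verifies claims (a) and (b):

```python
import numpy as np

def sym_pow(M, p):
    # power of a symmetric positive definite matrix via its eigendecomposition
    w, V = np.linalg.eigh(M)
    return (V * w ** p) @ V.T

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)); X = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, n)); S = B @ B.T + n * np.eye(n)

Xh = sym_pow(X, 0.5)                                   # X^{1/2}
P = sym_pow(X, -0.5) @ sym_pow(Xh @ S @ Xh, -0.5) @ Xh @ S

assert np.allclose(P, P.T)                             # (a): P is symmetric ...
assert np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0)   # ... and positive definite
Q = sym_pow((P + P.T) / 2, 0.5)                        # Q = P^{1/2}
Qi = np.linalg.inv(Q)
assert np.allclose(Qi @ S @ Qi, Q @ X @ Q)             # (b): S-tilde = X-hat
```

The last assertion is exactly the NT property: the scaled primal and dual iterates coincide.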

4.5.4.2 Complexity analysis


We are about to carry out the complexity analysis of the primal-dual path-following methods based
on “commutative” Zhang scalings. This analysis, although not that difficult, is more technical than
anything else in our course, and an uninterested reader may skip it without any harm.

Scalings. We have already mentioned what a scaling of S^k_+ is: this is the linear one-to-one transformation
of S^k given by the formula

H 7→ QHQ^T,   (Scl)

where Q is a nonsingular scaling matrix. It is immediately seen that (Scl) is a symmetry of the semidefinite
cone Sk+ – it maps the cone onto itself. This family of symmetries is quite rich: for every pair of points
A, B from the interior of the cone, there exists a scaling which maps A onto B, e.g., the scaling
H 7→ (B^{1/2}A^{−1/2}) H (A^{−1/2}B^{1/2})   [i.e., H 7→ QHQ^T with Q = B^{1/2}A^{−1/2}].

Essentially, it is exactly the existence of this rich family of symmetries of the underlying cones that
makes SDP (same as LP and CQP, where the cones also are “perfectly symmetric”) especially well suited
for IP methods.
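A quick numerical illustration of this symmetry (a sketch assuming NumPy; the matrices are random choices): Q = B^{1/2}A^{−1/2} indeed maps A onto B under (Scl), and a symmetric positive definite scaling leaves Tr(XS) unchanged when applied as a primal-dual scaling (a fact used below):

```python
import numpy as np

def sym_pow(M, p):
    # power of a symmetric positive definite matrix via its eigendecomposition
    w, V = np.linalg.eigh(M)
    return (V * w ** p) @ V.T

rng = np.random.default_rng(2)
n = 3
def rand_pd():
    G = rng.standard_normal((n, n))
    return G @ G.T + n * np.eye(n)

A, B = rand_pd(), rand_pd()
Q = sym_pow(B, 0.5) @ sym_pow(A, -0.5)      # Q = B^{1/2} A^{-1/2} (not symmetric)
assert np.allclose(Q @ A @ Q.T, B)          # the scaling (Scl) maps A onto B

X, S, Q0 = rand_pd(), rand_pd(), rand_pd()  # Q0 > 0 symmetric
Q0i = np.linalg.inv(Q0)
assert np.isclose(np.trace(X @ S),
                  np.trace((Q0 @ X @ Q0) @ (Q0i @ S @ Q0i)))  # Tr(XS) preserved
```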
In what follows we will be interested in scalings associated with positive definite scaling matrices.
The scaling given by such a matrix Q (X, S, ...) will be denoted by Q (resp., X, S, ...):

Q[H] = QHQ.

Given a problem of interest (CP) (where K = S^k_+) and a scaling matrix Q ≻ 0, we can scale the problem,
i.e., pass from it to the problem

min_x { c^T x : Q[Ax − B] ⪰ 0 }   (Q(CP))

which, of course, is equivalent to (CP) (since Q[H] is positive semidefinite iff H is so). In terms of the
“geometric reformulation” (P) of (CP), this transformation is nothing but the substitution of variables

QXQ = Y ⇔ X = Q^{−1}YQ^{−1};

with respect to the Y-variables, (P) is the problem

min_Y { Tr(C[Q^{−1}YQ^{−1}]) : Y ∈ Q(L) − Q[B], Y ⪰ 0 },

i.e., the problem

min_Y { Tr(C̃Y) : Y ∈ L̂ − B̂, Y ⪰ 0 }   (P̂)

[ C̃ = Q^{−1}CQ^{−1},  B̂ = QBQ,  L̂ = Im(QA) = Q(L) ]

The problem dual to (P̂) is

max_Z { Tr(B̂Z) : Z ∈ L̂⊥ + C̃, Z ⪰ 0 }.   (D̃)

It is immediate to realize what L̂⊥ is:

⟨Z, QXQ⟩_F = Tr(ZQXQ) = Tr(QZQX) = ⟨QZQ, X⟩_F;

thus, Z is orthogonal to every matrix from L̂, i.e., to every matrix of the form QXQ with X ∈ L, iff the
matrix QZQ is orthogonal to every matrix from L, i.e., iff QZQ ∈ L⊥. It follows that

L̂⊥ = Q^{−1}(L⊥).
Thus, when acting on the primal-dual pair (P), (D) of SDP’s, a scaling, given by a matrix Q  0, converts
it into another primal-dual pair of problems, and this new pair is as follows:
• The “primal” geometric data – the subspace L and the primal shift B (which has a part-time job
to be the dual objective as well) – are replaced with their images under the mapping Q;
• The “dual” geometric data – the subspace L⊥ and the dual shift C (it is the primal objective as
well) – are replaced with their images under the mapping Q−1 inverse to Q; this inverse mapping again
is a scaling, the scaling matrix being Q−1 .
We see that it makes sense to speak about primal-dual scaling which acts on both the primal and
the dual variables and maps a primal variable X onto QXQ, and a dual variable S onto Q−1 SQ−1 .
Formally speaking, the primal-dual scaling associated with a matrix Q  0 is the linear transformation
(X, S) 7→ (QXQ, Q−1 SQ−1 ) of the direct product of two copies of Sk (the “primal” and the “dual” ones).
A primal-dual scaling acts naturally on different entities associated with a primal-dual pair (P), (D), in
particular, at:

• the pair (P), (D) itself – it is converted into another primal-dual pair of problems (P̂), (D̃);

• a primal-dual feasible pair (X, S) of solutions to (P), (D) – it is converted to the pair
(X̂ = QXQ, S̃ = Q^{−1}SQ^{−1}), which, as is immediately seen, is a pair of feasible solutions to (P̂), (D̃).
Note that the primal-dual scaling preserves strict feasibility and the duality gap:

DualityGap_{P,D}(X, S) = Tr(XS) = Tr(QXSQ^{−1}) = Tr(X̂S̃) = DualityGap_{P̂,D̃}(X̂, S̃);

• the primal-dual central path (X∗(·), S∗(·)) of (P), (D); it is converted into the curve (X̂∗(t) =
QX∗(t)Q, S̃∗(t) = Q^{−1}S∗(t)Q^{−1}), which is nothing but the primal-dual central path Ẑ(t) of the
primal-dual pair (P̂), (D̃).
The latter fact can be easily derived from the characterization of the primal-dual central path; a
more instructive derivation is based on the fact that our “hero” – the barrier S_k(·) – is “semi-invariant”
w.r.t. scaling:

S_k(Q(X)) = − ln Det(QXQ) = − ln Det(X) − 2 ln Det(Q) = S_k(X) + const(Q).

Now, a point on the primal central path of the problem (P̂) associated with penalty parameter t,
let this point be temporarily denoted by Y(t), is the unique minimizer of the aggregate

S_k^t(Y) = t⟨Q^{−1}CQ^{−1}, Y⟩_F + S_k(Y) ≡ t Tr(Q^{−1}CQ^{−1}Y) + S_k(Y)

over the set of strictly feasible solutions of (P̂). The latter set is exactly the image of the set of
strictly feasible solutions of (P) under the transformation Q, so that Y(t) is the image, under the
same transformation, of the point, let it be called X(t), which minimizes the aggregate

S_k^t(QXQ) = t Tr((Q^{−1}CQ^{−1})(QXQ)) + S_k(QXQ) = t Tr(CX) + S_k(X) + const(Q)

over the set of strictly feasible solutions to (P). We see that X(t) is exactly the point X∗(t) on
the primal central path associated with problem (P). Thus, the point Y(t) of the primal central
path associated with (P̂) is nothing but X̂∗(t) = QX∗(t)Q. Similarly, the point of the central path
associated with the problem (D̃) is exactly S̃∗(t) = Q^{−1}S∗(t)Q^{−1}.
• the neighbourhood N_κ of the primal-dual central path Z(·) associated with the pair of problems
(P), (D) (see (4.4.8)). As you can guess, the image of N_κ is exactly the neighbourhood N̂_κ, given
by (4.4.8), of the primal-dual central path Ẑ(·) of (P̂), (D̃).
The latter fact is immediate: for a pair (X, S) of strictly feasible primal and dual solutions to (P),
(D) and a t > 0 we have (see (4.4.6)):

dist²((X̂, S̃), Ẑ∗(t)) = Tr( [QXQ](tQ^{−1}SQ^{−1} − [QXQ]^{−1})[QXQ](tQ^{−1}SQ^{−1} − [QXQ]^{−1}) )
                      = Tr( QX(tS − X^{−1})X(tS − X^{−1})Q^{−1} )
                      = Tr( X(tS − X^{−1})X(tS − X^{−1}) )
                      = dist²((X, S), Z∗(t)).
Primal-dual short-step path-following methods based on commutative scalings.


Path-following methods we are about to consider trace the primal-dual central path of (P), (D), staying
in Nκ -neighbourhood of the path; here κ ≤ 0.1 is fixed. The path is traced by iterating the following
updating:
(U): Given a current pair of strictly feasible primal and dual solutions (X̄, S̄) such that the
triple

( t̄ = k/Tr(X̄S̄), X̄, S̄ )   (4.5.19)

belongs to N_κ, i.e. (see (4.4.6))

‖t̄ X̄^{1/2}S̄X̄^{1/2} − I‖_2 ≤ κ,   (4.5.20)
we

1. Choose the new value t+ of the penalty parameter according to

   t+ = (1 − χ/√k)^{−1} t̄,   (4.5.21)

   where χ ∈ (0, 1) is a parameter of the method;

2. Choose somehow the scaling matrix Q ≻ 0 such that the matrices X̂ = QX̄Q and
   S̃ = Q^{−1}S̄Q^{−1} commute with each other;

3. Linearize the equation

   QXSQ^{−1} + Q^{−1}SXQ = (2/t+) I

   at the point (X̄, S̄), thus coming to the equation

   Q[∆X S̄ + X̄ ∆S]Q^{−1} + Q^{−1}[∆S X̄ + S̄ ∆X]Q = (2/t+) I − [QX̄S̄Q^{−1} + Q^{−1}S̄X̄Q];   (4.5.22)

4. Add to (4.5.22) the linear equations

   ∆X ∈ L,  ∆S ∈ L⊥;   (4.5.23)

5. Solve system (4.5.22), (4.5.23), thus getting the “primal-dual search direction” (∆X, ∆S);

6. Update the current primal-dual solutions (X̄, S̄) into a new pair (X+, S+) according to

   X+ = X̄ + ∆X,  S+ = S̄ + ∆S.
We already have explained the ideas underlying (U), up to the fact that in our previous explanations we
dealt with three “independent” entities t̄ (current value of the penalty parameter), X̄, S̄ (current primal
and dual solutions), while in (U) t̄ is a function of X̄, S̄:
t̄ = k/Tr(X̄S̄).   (4.5.24)
The reason for establishing this dependence is very simple: if (t, X, S) were on the primal-dual central
path: XS = t^{−1}I, then, taking traces, we would indeed get t = k/Tr(XS). Thus, (4.5.24) is a reasonable
way to reduce the number of “independent entities” we deal with.
Note also that (U) is a “pure Newton scheme” – here the primal and the dual stepsizes are equal to
1 (cf. (4.5.6)).
The major element of the complexity analysis of path-following polynomial time methods for SDP is
as follows:
Theorem 4.5.2 Let the parameters κ, χ of (U) satisfy the relations
0 < χ ≤ κ ≤ 0.1. (4.5.25)
Let, further, (X̄, S̄) be a pair of strictly feasible primal and dual solutions to (P), (D) such that the triple
(4.5.19) satisfies (4.5.20). Then the updated pair (X+ , S+ ) is well-defined (i.e., system (4.5.22), (4.5.23)
is solvable with a unique solution), X+ , S+ are strictly feasible solutions to (P), (D), respectively,
t+ = k/Tr(X+S+)
and the triple (t+ , X+ , S+ ) belongs to Nκ .
The theorem says that with properly chosen κ, χ (say, κ = χ = 0.1), updating (U) converts a close
to the primal-dual central path, in the sense of (4.5.20), strictly primal-dual feasible iterate (X̄, S̄) into
a new strictly primal-dual feasible iterate with the same closeness-to-the-path property and larger, by
factor (1 + O(1)k^{−1/2}), value of the penalty parameter. Thus, after we get close to the path – reach
its 0.1-neighbourhood N_{0.1} – we are able to trace this path, staying in N_{0.1} and increasing the
penalty parameter by an absolute constant factor in O(√k) = O(√θ(K)) steps, exactly as announced in
Section 4.5.2.
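To make the updating (U) concrete, here is a minimal numerical sketch of one step in the situation the proof below reduces the general case to: Q = I and diagonal X̄, S̄ (stored as lists of diagonal entries — effectively the LP case). The subspace L = span{a} and all numeric data are hypothetical choices for illustration only:

```python
import math

# One step of (U) for diagonal X, S with Q = I; L = span{a}.
k, kappa, chi = 3, 0.1, 0.1
x = [1.0, 1.0, 1.0]                 # diagonal of the current primal solution
s = [1.0, 1.05, 0.95]               # diagonal of the current dual solution
a = [1.0, 2.0, 1.0]                 # L = span{a},  L^perp = {w : a^T w = 0}

t_bar = k / sum(xi * si for xi, si in zip(x, s))                  # (4.5.24)
assert math.sqrt(sum((t_bar * xi * si - 1) ** 2
                     for xi, si in zip(x, s))) <= kappa           # (4.5.20)

mu_plus = (1 - chi / math.sqrt(k)) / t_bar                        # 1/t_+, cf. (4.5.21)
r = [mu_plus - xi * si for xi, si in zip(x, s)]
# Diagonal version of (4.5.27.c): s_i dx_i + x_i ds_i = mu_+ - x_i s_i,
# with dx = u*a in L and ds subject to a^T ds = 0 (i.e. ds in L^perp).
u = sum(ai * ri / xi for ai, ri, xi in zip(a, r, x)) \
    / sum(ai * ai * si / xi for ai, si, xi in zip(a, s, x))
dx = [u * ai for ai in a]
ds = [(ri - si * dxi) / xi for ri, si, dxi, xi in zip(r, s, dx, x)]
assert abs(sum(ai * dsi for ai, dsi in zip(a, ds))) < 1e-12       # ds in L^perp

x_new = [xi + dxi for xi, dxi in zip(x, dx)]
s_new = [si + dsi for si, dsi in zip(s, ds)]
assert all(v > 0 for v in x_new + s_new)                          # strict feasibility
assert abs(sum(xi * si for xi, si in zip(x_new, s_new)) - mu_plus * k) < 1e-12
assert math.sqrt(sum((xi * si / mu_plus - 1) ** 2
                     for xi, si in zip(x_new, s_new))) <= kappa   # still in N_kappa
```

As the theorem predicts, the full Newton step (stepsizes equal to 1) keeps the iterate strictly feasible, makes Tr(X+S+) = µ+k exactly, and stays in the N_κ-neighbourhood of the path.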

Proof of Theorem 4.5.2. 1⁰. Observe, first (this observation is crucial!) that it suffices to prove our
Theorem in the particular case when X̄, S̄ commute with each other and Q = I. Indeed, it is immediately
seen that the updating (U) can be represented as follows:
1. We first scale by Q the “input data” of (U) – the primal-dual pair of problems (P), (D) and the
strictly feasible pair X̄, S̄ of primal and dual solutions to these problems, as explained in sect.
“Scaling”. Note that the resulting entities – a pair of primal-dual problems and a strictly feasible
pair of primal-dual solutions to these problems – are linked with each other exactly in the same
fashion as the original entities, due to scaling invariance of the duality gap and the neighbourhood
Nκ . In addition, the scaled primal and dual solutions commute;
2. We apply to the “scaled input data” yielded by the previous step the updating (Û), completely
   similar to (U) but using the unit matrix in the role of Q;
3. We “scale back” the result of the previous step, i.e., subject this result to the scaling associated
   with Q^{−1}, thus obtaining the updated iterate (X+, S+).
Given that the second step of this procedure preserves primal-dual strict feasibility, w.r.t. the scaled
primal-dual pair of problems, of the iterate and keeps the iterate in the κ-neighbourhood Nκ of the
corresponding central path, we could use once again the “scaling invariance” reasoning to assert that the
result (X+, S+) of (U) is well-defined, is strictly feasible for (P), (D) and is close to the original central
path, as claimed in the Theorem. Thus, all we need is to justify the above “Given”, and this is exactly
the same as to prove the theorem in the particular case of Q = I and commuting X̄, S̄. In the rest of
the proof we assume that Q = I and that the matrices X̄, S̄ commute with each other. Due to the latter
property, X̄, S̄ are diagonal in a properly chosen orthonormal basis; representing all matrices from Sk in
this basis, we can reduce the situation to the case when X̄ and S̄ are diagonal. Thus, we may (and do)
assume in the sequel that X̄ and S̄ are diagonal, with diagonal entries xi ,si , i = 1, ..., k, respectively, and
that Q = I. Finally, to simplify notation, we write t, X, S instead of t̄, X̄, S̄, respectively.
2⁰. Our situation and goals now are as follows. We are given affine planes L − B, L⊥ + C in S^k,
orthogonal to each other, and two positive definite diagonal matrices X = Diag({x_i}) ∈ L − B,
S = Diag({s_i}) ∈ L⊥ + C. We set

µ = 1/t = Tr(XS)/k

and know that

‖tX^{1/2}SX^{1/2} − I‖_2 ≤ κ.
We further set

µ+ = 1/t+ = (1 − χk^{−1/2})µ   (4.5.26)
and consider the system of equations w.r.t. unknown symmetric matrices ∆X, ∆S:

(a) ∆X ∈ L
(b) ∆S ∈ L⊥ (4.5.27)
(c) ∆XS + X∆S + ∆SX + S∆X = 2µ+ I − 2XS

We should prove that the system has a unique solution such that the matrices

X+ = X + ∆X, S+ = S + ∆S

are
(i) positive definite,
(ii) belong, respectively, to L − B, L⊥ + C and satisfy the relation

Tr(X+ S+ ) = µ+ k; (4.5.28)

(iii) satisfy the relation

Ω ≡ ‖µ+^{−1} X+^{1/2} S+ X+^{1/2} − I‖_2 ≤ κ.   (4.5.29)

Observe that the situation can be reduced to the one with µ = 1. Indeed, let us pass from the matrices
X, S, ∆X, ∆S, X+, S+ to X, S′ = µ^{−1}S, ∆X, ∆S′ = µ^{−1}∆S, X+, S′+ = µ^{−1}S+. Now the “we are given”
part of our situation becomes as follows: we are given two diagonal positive definite matrices X, S′ such
that X ∈ L − B, S′ ∈ L⊥ + C′, C′ = µ^{−1}C,

Tr(XS′) = k × 1

and

‖X^{1/2}S′X^{1/2} − I‖_2 = ‖µ^{−1}X^{1/2}SX^{1/2} − I‖_2 ≤ κ.

The “we should prove” part becomes: to verify that the system of equations

(a) ∆X ∈ L
(b) ∆S′ ∈ L⊥
(c) ∆XS′ + X∆S′ + ∆S′X + S′∆X = 2(1 − χk^{−1/2})I − 2XS′

has a unique solution and that the matrices X+ = X + ∆X, S′+ = S′ + ∆S′ are positive definite, are
contained in L − B, respectively, L⊥ + C′, and satisfy the relations

Tr(X+S′+) = µ+k/µ = (1 − χk^{−1/2})k

and

‖(1 − χk^{−1/2})^{−1} X+^{1/2} S′+ X+^{1/2} − I‖_2 ≤ κ.

Thus, the general situation indeed can be reduced to the one with µ = 1, µ+ = 1 − χk^{−1/2}, and we lose
nothing assuming, in addition to what was already postulated, that

µ ≡ t^{−1} ≡ Tr(XS)/k = 1,   µ+ = 1 − χk^{−1/2},
whence

[Tr(XS) =] Σ_{i=1}^k x_i s_i = k   (4.5.30)

and

[‖tX^{1/2}SX^{1/2} − I‖_2² ≡] Σ_{i=1}^k (x_i s_i − 1)² ≤ κ².   (4.5.31)

3⁰. We start with proving that (4.5.27) indeed has a unique solution. It is convenient to pass in
(4.5.27) from the unknowns ∆X, ∆S to the unknowns

δX = X^{−1/2}∆XX^{−1/2} ⇔ ∆X = X^{1/2}δXX^{1/2},
δS = X^{1/2}∆SX^{1/2}   ⇔ ∆S = X^{−1/2}δSX^{−1/2}.   (4.5.32)

With respect to the new unknowns, (4.5.27) becomes

(a) X^{1/2}δXX^{1/2} ∈ L,
(b) X^{−1/2}δSX^{−1/2} ∈ L⊥,
(c) X^{1/2}δXX^{1/2}S + X^{1/2}δSX^{−1/2} + X^{−1/2}δSX^{1/2} + SX^{1/2}δXX^{1/2} = 2µ+ I − 2XS
    ⇕
(d) L(δX, δS) ≡ [ φ_ij (δX)_ij + ψ_ij (δS)_ij ]_{i,j=1}^k = 2[ (µ+ − x_i s_i) δ_ij ]_{i,j=1}^k,
    where φ_ij = √(x_i x_j)(s_i + s_j),  ψ_ij = √(x_i/x_j) + √(x_j/x_i),   (4.5.33)

0, i 6= j
where δij = are the Kronecker symbols.
1, i = j
We first claim that (4.5.33), regarded as a system with unknown symmetric matrices δX, δS, has a
unique solution. Observe that (4.5.33) is a system with 2 dim S^k ≡ 2N scalar unknowns and 2N scalar
linear equations. Indeed, (4.5.33.a) is a system of N′ ≡ N − dim L linear equations, (4.5.33.b) is a system
of N″ = N − dim L⊥ = dim L linear equations, and (4.5.33.c) has N equations, so that the total #
of linear equations in our system is N′ + N″ + N = (N − dim L) + dim L + N = 2N. Now, to verify
that the square system of linear equations (4.5.33) has exactly one solution, it suffices to prove that the
homogeneous system
X^{1/2}δXX^{1/2} ∈ L,   X^{−1/2}δSX^{−1/2} ∈ L⊥,   L(δX, δS) = 0

has only the trivial solution. Let (δX, δS) be a solution to the homogeneous system. The relation
L(δX, δS) = 0 means that

(δX)_ij = −(ψ_ij/φ_ij)(δS)_ij,   (4.5.34)

whence

Tr(δXδS) = −Σ_{i,j} (ψ_ij/φ_ij)(δS)²_ij.   (4.5.35)

Representing δX, δS via ∆X, ∆S according to (4.5.32), we get


Tr(δXδS) = Tr(X −1/2 ∆XX −1/2 X 1/2 ∆SX 1/2 ) = Tr(X −1/2 ∆X∆SX 1/2 ) = Tr(∆X∆S),
and the latter quantity is 0 due to ∆X = X 1/2 δXX 1/2 ∈ L and ∆S = X −1/2 δSX −1/2 ∈ L⊥ . Thus, the
left hand side in (4.5.35) is 0; since φij > 0, ψij > 0, (4.5.35) implies that δS = 0. But then δX = 0 in
view of (4.5.34). Thus, the homogeneous version of (4.5.33) has the trivial solution only, so that (4.5.33)
is solvable with a unique solution.
4⁰. Let δX, δS be the unique solution to (4.5.33), and let ∆X, ∆S be linked to δX, δS according to
(4.5.32). Our local goal is to bound from above the Frobenius norms of δX and δS.
From (4.5.33.c) it follows (cf. the derivation of (4.5.35)) that

(a) (δX)_ij = −(ψ_ij/φ_ij)(δS)_ij + 2((µ+ − x_i s_i)/φ_ii) δ_ij,  i, j = 1, ..., k;
(b) (δS)_ij = −(φ_ij/ψ_ij)(δX)_ij + 2((µ+ − x_i s_i)/ψ_ii) δ_ij,  i, j = 1, ..., k.   (4.5.36)

Same as in the concluding part of 3⁰, relations (4.5.33.a-b) imply that

Tr(∆X∆S) = Tr(δXδS) = Σ_{i,j} (δX)_ij (δS)_ij = 0.   (4.5.37)

Multiplying (4.5.36.a) by (δS)_ij and summing over i, j, we get, in view of (4.5.37), the relation

Σ_{i,j} (ψ_ij/φ_ij)(δS)²_ij = 2 Σ_i ((µ+ − x_i s_i)/φ_ii)(δS)_ii;   (4.5.38)

by a “symmetric” reasoning, we get

Σ_{i,j} (φ_ij/ψ_ij)(δX)²_ij = 2 Σ_i ((µ+ − x_i s_i)/ψ_ii)(δX)_ii.   (4.5.39)

Now let

θ_i = x_i s_i,   (4.5.40)

so that in view of (4.5.30) and (4.5.31) one has

(a) Σ_i θ_i = k,
(b) Σ_i (θ_i − 1)² ≤ κ².   (4.5.41)

Observe that

φ_ij = √(x_i x_j)(s_i + s_j) = √(x_i x_j)(θ_i/x_i + θ_j/x_j) = θ_j √(x_i/x_j) + θ_i √(x_j/x_i).

Thus,

φ_ij = θ_j √(x_i/x_j) + θ_i √(x_j/x_i),
ψ_ij = √(x_i/x_j) + √(x_j/x_i);   (4.5.42)

since 1 − κ ≤ θ_i ≤ 1 + κ by (4.5.41.b), we get

1 − κ ≤ φ_ij/ψ_ij ≤ 1 + κ.   (4.5.43)

By the geometric-arithmetic mean inequality we have ψ_ij ≥ 2, whence, in view of (4.5.43),

φ_ij ≥ (1 − κ)ψ_ij ≥ 2(1 − κ)  ∀ i, j.   (4.5.44)

We now have

(1 − κ) Σ_{i,j} (δX)²_ij ≤ Σ_{i,j} (φ_ij/ψ_ij) (δX)²_ij
                                                [see (4.5.43)]
    ≤ 2 Σ_i ((µ+ − x_i s_i)/ψ_ii) (δX)_ii
                                                [see (4.5.39)]
    ≤ 2 √( Σ_i (µ+ − x_i s_i)² ψ_ii^{−2} ) √( Σ_i (δX)²_ii )
    ≤ √( Σ_i ((1 − θ_i)² − 2χk^{−1/2}(1 − θ_i) + χ²k^{−1}) ) √( Σ_{i,j} (δX)²_ij )
                                                [see (4.5.44)]
    ≤ √( χ² + Σ_i (1 − θ_i)² ) √( Σ_{i,j} (δX)²_ij )
                                                [since Σ_i (1 − θ_i) = 0 by (4.5.41.a)]
    ≤ √( χ² + κ² ) √( Σ_{i,j} (δX)²_ij ),
                                                [see (4.5.41.b)]

and from the resulting inequality it follows that

‖δX‖_2 ≤ ρ ≡ √(χ² + κ²)/(1 − κ).   (4.5.45)
Similarly,

(1 + κ)^{−1} Σ_{i,j} (δS)²_ij ≤ Σ_{i,j} (ψ_ij/φ_ij) (δS)²_ij
                                                [see (4.5.43)]
    ≤ 2 Σ_i ((µ+ − x_i s_i)/φ_ii) (δS)_ii
                                                [see (4.5.38)]
    ≤ 2 √( Σ_i (µ+ − x_i s_i)² φ_ii^{−2} ) √( Σ_i (δS)²_ii )
    ≤ (1 − κ)^{−1} √( Σ_i (µ+ − θ_i)² ) √( Σ_{i,j} (δS)²_ij )
                                                [see (4.5.44)]
    ≤ (1 − κ)^{−1} √( χ² + κ² ) √( Σ_{i,j} (δS)²_ij ),
                                                [same as above]

and from the resulting inequality it follows that

‖δS‖_2 ≤ (1 + κ)√(χ² + κ²)/(1 − κ) = (1 + κ)ρ.   (4.5.46)
5⁰. We are ready to prove 2⁰.(i-ii). We have

X+ = X + ∆X = X^{1/2}(I + δX)X^{1/2},

and the matrix I + δX is positive definite due to (4.5.45) (indeed, the right hand side ρ of (4.5.45) is ≤ 1,
whence the Frobenius norm, and therefore the maximum of the moduli of the eigenvalues, of δX is less
than 1). Note that by the just indicated reasons I + δX ⪯ (1 + ρ)I, whence

X+ ⪯ (1 + ρ)X.   (4.5.47)

Similarly, the matrix

S+ = S + ∆S = X^{−1/2}(X^{1/2}SX^{1/2} + δS)X^{−1/2}

is positive definite. Indeed, the eigenvalues of the matrix X^{1/2}SX^{1/2} are ≥ min_i θ_i ≥ 1 − κ, while
the moduli of the eigenvalues of δS, by (4.5.46), do not exceed (1 + κ)√(χ² + κ²)/(1 − κ) < 1 − κ. Thus,
the matrix X^{1/2}SX^{1/2} + δS is positive definite, whence S+ also is so. We have proved 2⁰.(i).
2⁰.(ii) is easy to verify. First, by (4.5.33), we have ∆X ∈ L, ∆S ∈ L⊥, and since X ∈ L − B,
S ∈ L⊥ + C, we have X+ ∈ L − B, S+ ∈ L⊥ + C. Second, we have

Tr(X+S+) = Tr(XS + X∆S + ∆XS + ∆X∆S)
         = Tr(XS + X∆S + ∆XS)
              [since Tr(∆X∆S) = 0 due to ∆X ∈ L, ∆S ∈ L⊥]
         = µ+ k
              [take the trace of both sides in (4.5.27.c)]

2⁰.(ii) is proved.
6⁰. It remains to verify 2⁰.(iii). We should bound from above the quantity

Ω = ‖µ+^{−1} X+^{1/2} S+ X+^{1/2} − I‖_2 = ‖X+^{1/2}(µ+^{−1} S+ − X+^{−1})X+^{1/2}‖_2,

and our plan is first to bound from above the “close” quantity

Ω̂ = ‖X^{1/2}(µ+^{−1} S+ − X+^{−1})X^{1/2}‖_2 = µ+^{−1}‖Z‖_2,   Z = X^{1/2}(S+ − µ+ X+^{−1})X^{1/2},   (4.5.48)

and then to bound Ω in terms of Ω̂.
6⁰.1. Bounding Ω̂. We have

Z = X^{1/2}(S+ − µ+ X+^{−1})X^{1/2}
  = X^{1/2}(S + ∆S)X^{1/2} − µ+ X^{1/2}[X + ∆X]^{−1}X^{1/2}
  = XS + δS − µ+ X^{1/2}[X^{1/2}(I + δX)X^{1/2}]^{−1}X^{1/2}
              [see (4.5.32)]
  = XS + δS − µ+ (I + δX)^{−1}
  = XS + δS − µ+ (I − δX) − µ+ [(I + δX)^{−1} − I + δX]
  = [XS + δS + δX − µ+ I] + [(µ+ − 1)δX] + µ+ [I − δX − (I + δX)^{−1}]
  =          Z¹           +      Z²      +           Z³,

so that

‖Z‖_2 ≤ ‖Z¹‖_2 + ‖Z²‖_2 + ‖Z³‖_2.   (4.5.49)

We are about to bound separately all three terms in the right hand side of the latter inequality.
We are about to bound separately all 3 terms in the right hand side of the latter inequality.

Bounding ‖Z²‖_2: We have

‖Z²‖_2 = |µ+ − 1| ‖δX‖_2 ≤ χk^{−1/2} ρ   (4.5.50)

(see (4.5.45) and take into account that µ+ − 1 = −χk^{−1/2}).


Bounding ‖Z³‖_2: Let λ_i be the eigenvalues of δX. We have

‖Z³‖_2 = ‖µ+ [(I + δX)^{−1} − I + δX]‖_2
       ≤ ‖(I + δX)^{−1} − I + δX‖_2
              [since |µ+| ≤ 1]
       = √( Σ_i (1/(1 + λ_i) − 1 + λ_i)² )
              [pass to the orthonormal eigenbasis of δX]
       = √( Σ_i λ_i⁴/(1 + λ_i)² )
       ≤ √( Σ_i ρ²λ_i²/(1 − ρ)² )
              [see (4.5.45) and note that Σ_i λ_i² = ‖δX‖_2² ≤ ρ²]
       ≤ ρ²/(1 − ρ)   (4.5.51)

Bounding ‖Z¹‖_2: This is a bit more involved. We have

Z¹_ij = (XS)_ij + (δS)_ij + (δX)_ij − µ+ δ_ij
      = (δX)_ij + (δS)_ij + (x_i s_i − µ+)δ_ij
      = (δX)_ij [1 − φ_ij/ψ_ij] + [2(µ+ − x_i s_i)/ψ_ii + x_i s_i − µ+]δ_ij
              [we have used (4.5.36.b)]
      = (δX)_ij [1 − φ_ij/ψ_ij]
              [since ψ_ii = 2, see (4.5.42)]

whence, in view of (4.5.43),

|Z¹_ij| ≤ |1 − 1/(1 − κ)| |(δX)_ij| = (κ/(1 − κ)) |(δX)_ij|,

so that

‖Z¹‖_2 ≤ (κ/(1 − κ)) ‖δX‖_2 ≤ (κ/(1 − κ)) ρ   (4.5.52)

(the concluding inequality is given by (4.5.45)).


Assembling (4.5.50), (4.5.51), (4.5.52) and (4.5.49), we come to

‖Z‖_2 ≤ ρ [ χ/√k + ρ/(1 − ρ) + κ/(1 − κ) ],

whence, by (4.5.48),

Ω̂ ≤ ρ/(1 − χk^{−1/2}) [ χ/√k + ρ/(1 − ρ) + κ/(1 − κ) ].   (4.5.53)

6⁰.2. Bounding Ω. We have

Ω² = ‖µ+^{−1} X+^{1/2} S+ X+^{1/2} − I‖_2²
   = ‖X+^{1/2} Θ X+^{1/2}‖_2²,   Θ ≡ µ+^{−1} S+ − X+^{−1} = Θ^T
   = Tr( X+^{1/2} Θ X+ Θ X+^{1/2} )
   ≤ (1 + ρ) Tr( X+^{1/2} Θ X Θ X+^{1/2} )
              [see (4.5.47)]
   = (1 + ρ) Tr( X+^{1/2} Θ X^{1/2} X^{1/2} Θ X+^{1/2} )
   = (1 + ρ) Tr( X^{1/2} Θ X+^{1/2} X+^{1/2} Θ X^{1/2} )
   = (1 + ρ) Tr( X^{1/2} Θ X+ Θ X^{1/2} )
   ≤ (1 + ρ)² Tr( X^{1/2} Θ X Θ X^{1/2} )
              [the same (4.5.47)]
   = (1 + ρ)² ‖X^{1/2} Θ X^{1/2}‖_2²
   = (1 + ρ)² ‖X^{1/2} [µ+^{−1} S+ − X+^{−1}] X^{1/2}‖_2²
   = (1 + ρ)² Ω̂²
              [see (4.5.48)]

so that

Ω ≤ (1 + ρ) Ω̂ = ρ(1 + ρ)/(1 − χk^{−1/2}) [ χ/√k + ρ/(1 − ρ) + κ/(1 − κ) ],   ρ = √(χ² + κ²)/(1 − κ)   (4.5.54)

(see (4.5.53) and (4.5.45)).


It is immediately seen that if 0 < χ ≤ κ ≤ 0.1, the right hand side in the resulting bound for Ω is
≤ κ, as required in 2⁰.(iii). □
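The concluding claim can also be double-checked numerically. The sketch below evaluates the right hand side of (4.5.54); k = 1 is the worst case, since the bound decreases as k grows:

```python
import math

# Right hand side of (4.5.54) as a function of chi, kappa, k.
def omega_bound(chi, kappa, k):
    rho = math.sqrt(chi ** 2 + kappa ** 2) / (1 - kappa)
    return (rho * (1 + rho) / (1 - chi / math.sqrt(k))
            * (chi / math.sqrt(k) + rho / (1 - rho) + kappa / (1 - kappa)))

# Check the claim over a grid of 0 < chi <= kappa <= 0.1 (worst case k = 1).
for i in range(1, 101):
    kappa = 0.001 * i
    for j in range(1, i + 1):
        chi = 0.001 * j
        assert omega_bound(chi, kappa, 1) <= kappa
```

For the extreme pair χ = κ = 0.1 the bound evaluates to about 0.08, comfortably below κ.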

Remark 4.5.2 We have carried out the complexity analysis for a large group of primal-dual
path-following methods for SDP (i.e., for the case of K = S^k_+). In fact, the constructions and
the analysis we have presented can be extended word by word to the case when K is a direct
product of semidefinite cones – you should just bear in mind that all symmetric matrices we
deal with, like the primal and dual solutions X, S, the scaling matrices Q, the primal-dual
search directions ∆X, ∆S, etc., are block-diagonal with a common block-diagonal structure. In
particular, our constructions and analysis work for the case of LP – this is the case when K
is a direct product of one-dimensional semidefinite cones. Note that in the case of LP Zhang’s
family of primal-dual search directions reduces to a single direction: since now X, S, Q are
diagonal matrices, the scaling (4.5.17) 7→ (4.5.18) does not vary the equations of augmented
complementary slackness.
The recipe to translate all we have presented for the case of SDP to the case of LP is very
simple: in the above text, you should assume all matrices like X, S,... to be diagonal and look
what the operations with these matrices required by the description of the method do with their
diagonals. By the way, one of the very first approaches to the design and the analysis of IP
methods for SDP was exactly opposite: you take an IP scheme for LP, replace in its description
the words “nonnegative vectors” with “positive semidefinite diagonal matrices” and then erase
the adjective “diagonal”.
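The collapse of Zhang's family in the LP case can be seen in two lines: for diagonal X, S, Q the left hand side of the scaled equation (4.5.18) does not depend on Q at all. A toy sketch (diagonal matrices stored as lists of diagonal entries; all numbers arbitrary):

```python
# Diagonal of Q X S Q^{-1} + Q^{-1} S X Q for diagonal Q, X, S: the scaling
# matrix Q cancels entrywise, so the equation is the same for every Q.
x, s = [1.0, 2.0, 0.5], [0.5, 0.25, 4.0]

def scaled_lhs(q):
    return [qi * xi * si / qi + si * xi * qi / qi
            for qi, xi, si in zip(q, x, s)]

assert scaled_lhs([1.0, 1.0, 1.0]) == scaled_lhs([3.0, 0.2, 7.5]) == \
       [2 * xi * si for xi, si in zip(x, s)]
```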

4.6 Complexity bounds for LP, CQP, SDP


In what follows we list the best known so far complexity bounds for LP, CQP and SDP. These
bounds are yielded by IP methods and, essentially, say that the Newton complexity of finding an
ε-solution to an instance – the total # of steps of a “good” IP algorithm before an ε-solution
is found – is O(1)√(θ(K)) ln(1/ε). This is what should be expected in view of the discussion in Section
4.5.2; note, however, that the complexity bounds to follow take into account the necessity to
“reach the highway” – to come close to the central path before tracing it, while in Section
4.5.2 we were focusing on how fast could we reduce the duality gap after the central path (“the
highway”) is reached.
Along with complexity bounds expressed in terms of the Newton complexity, we present the
bounds on the number of operations of Real Arithmetic required to build an ε-solution. Note
that these latter bounds typically are conservative – when deriving them, we assume the data
of an instance to be “completely unstructured”, which is usually not the case (cf. Warning in Section
4.5.2); exploiting the structure of the data, one usually can significantly reduce the computational effort
per step of an IP method and, consequently, the arithmetic cost of an ε-solution.

4.6.1 Complexity of LP_b
Family of problems:

Problem instance: a program


min_x { c^T x : a_i^T x ≤ b_i, i = 1, ..., m; ‖x‖_2 ≤ R }   [x ∈ R^n];   (p)

Data:
Data(p) = [m; n; c; a1 , b1 ; ...; am , bm ; R],
Size(p) = dim Data(p) = (m + 1)(n + 1) + 2.

ε-solution: an x ∈ R^n such that

‖x‖_∞ ≤ R,
a_i^T x ≤ b_i + ε, i = 1, ..., m,
c^T x ≤ Opt(p) + ε
(as always, the optimal value of an infeasible problem is +∞).

Newton complexity of ε-solution: 14)

Compl^{Nwt}(p, ε) = O(1)√(m + n) Digits(p, ε),

where

Digits(p, ε) = ln( (Size(p) + ‖Data(p)‖_1 + ε²) / ε )

is the number of accuracy digits in an ε-solution, see Section 4.1.2.
14) In what follows, the precise meaning of the statement “the Newton/arithmetic complexity of finding an ε-solution
of an instance (p) does not exceed N” is as follows: as applied to the input (Data(p), ε), the method underlying our
bound terminates in no more than N steps (respectively, N arithmetic operations) and outputs either a vector
which is an ε-solution to the instance, or the correct conclusion “(p) is infeasible”.

Arithmetic complexity of ε-solution:

Compl(p, ε) = O(1)(m + n)^{3/2} n² Digits(p, ε).
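As an illustration, the sketch below evaluates Size(p), Digits(p, ε) and the two complexity bounds (with the O(1) constants dropped, so the numbers are only indicative) for a hypothetical toy instance of (p):

```python
import math

# Toy LP instance (p): m = 2 constraints, n = 2 variables (made-up data).
m, n = 2, 2
c = [1.0, -1.0]
a = [[1.0, 0.0], [0.0, 1.0]]
b = [1.0, 1.0]
R = 10.0
eps = 1.0e-6

# Data(p) = [m; n; c; a_1, b_1; ...; a_m, b_m; R]
data = [m, n] + c + sum(([*ai, bi] for ai, bi in zip(a, b)), []) + [R]
size = (m + 1) * (n + 1) + 2                 # Size(p) = dim Data(p)
assert len(data) == size

digits = math.log((size + sum(abs(v) for v in data) + eps ** 2) / eps)
newton = math.sqrt(m + n) * digits           # ~ Compl^Nwt(p, eps), up to O(1)
arithmetic = (m + n) ** 1.5 * n ** 2 * digits  # ~ Compl(p, eps), up to O(1)
assert digits > 0 and newton < arithmetic
```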

4.6.2 Complexity of CQP_b


Family of problems:

Problem instance: a program


min_x { c^T x : ‖A_i x + b_i‖_2 ≤ c_i^T x + d_i, i = 1, ..., m; ‖x‖_2 ≤ R }   [x ∈ R^n, b_i ∈ R^{k_i}]   (p)

Data:
Data(p) = [m; n; k_1, ..., k_m; c; A_1, b_1, c_1, d_1; ...; A_m, b_m, c_m, d_m; R],
Size(p) = dim Data(p) = (m + Σ_{i=1}^m k_i)(n + 1) + m + n + 3.

ε-solution: an x ∈ R^n such that

‖x‖_2 ≤ R,
‖A_i x + b_i‖_2 ≤ c_i^T x + d_i + ε, i = 1, ..., m,
c^T x ≤ Opt(p) + ε.

Newton complexity of ε-solution:

Compl^{Nwt}(p, ε) = O(1)√(m + 1) Digits(p, ε).

Arithmetic complexity of ε-solution:

Compl(p, ε) = O(1)(m + 1)^{1/2} n (n² + m + Σ_{i=1}^m k_i²) Digits(p, ε).

4.6.3 Complexity of SDP_b


Family of problems:

Problem instance: a program


 
min_x { c^T x : A_0 + Σ_{j=1}^n x_j A_j ⪰ 0, ‖x‖_2 ≤ R }   [x ∈ R^n],   (p)

where A_j, j = 0, 1, ..., n, are symmetric block-diagonal matrices with m diagonal blocks A_j^{(i)} of
sizes k_i × k_i, i = 1, ..., m.

Data:
Data(p) = [m; n; k_1, ..., k_m; c; A_0^{(1)}, ..., A_0^{(m)}; ...; A_n^{(1)}, ..., A_n^{(m)}; R],
Size(p) = dim Data(p) = ( Σ_{i=1}^m k_i(k_i + 1)/2 )(n + 1) + m + n + 3.

ε-solution: an x such that

‖x‖_2 ≤ R,
A_0 + Σ_{j=1}^n x_j A_j ⪰ −εI,
c^T x ≤ Opt(p) + ε.

Newton complexity of ε-solution:

Compl^{Nwt}(p, ε) = O(1)(1 + Σ_{i=1}^m k_i)^{1/2} Digits(p, ε).

Arithmetic complexity of ε-solution:

Compl(p, ε) = O(1)(1 + Σ_{i=1}^m k_i)^{1/2} n (n² + n Σ_{i=1}^m k_i² + Σ_{i=1}^m k_i³) Digits(p, ε).

4.7 Concluding remarks


We have discussed IP methods for LP, CQP and SDP as “mathematical animals”, with emphasis
on the ideas underlying the algorithms and on the theoretical complexity bounds ensured by the
methods. Now it is time to say a couple of words on software implementations of IP algorithms
and on practical performance of the resulting codes.
As far as the performance of recent IP software is concerned, the situation heavily depends
on whether we are speaking about codes for LP, or those for CQP and SDP.
• There exists extremely powerful commercial IP software for LP, capable of reliably handling
really large-scale LPs and quite competitive with the best Simplex-type codes for Linear Programming.
E.g., one of the best modern LP solvers – CPLEX – allows the user to choose between
Simplex-type and IP modes of execution, and in many cases the second option reduces the
running time by orders of magnitude. With a state-of-the-art computer, CPLEX is capable of
routinely solving real-world LPs with tens and hundreds of thousands of variables and constraints; in
the case of favourably structured constraint matrices, the number of variables and constraints
can become as large as a few million.
• There already exists very powerful commercial software for CQP – MOSEK (Erling Andersen,
http://www.mosek.com). I would say that as far as LP (and even mixed integer programming)
is concerned, MOSEK compares favourably to CPLEX, and it allows one to solve really large
CQPs of favourable structure.
• For the time being, IP software for SDPs is not as well-polished, reliable and powerful as
the LP software. I would say that the codes available at the moment are capable of solving SDPs
with no more than 1,000 - 1,500 design variables.
There are two groups of reasons why the power of the SDP software available at the moment is so
inferior to the capabilities of interior point LP and CQP solvers – the “historical”
and the “intrinsic” ones. The “historical” aspect is simple: the development of IP software
for LP, on one hand, and for SDP, on the other, started, respectively, in the mid-eighties
and the mid-nineties; for the time being (2002), this is definitely a difference. Well, being too
young is the only shortcoming which for sure passes away... Unfortunately, there are intrinsic
problems with IP algorithms for large-scale (many thousands of variables) SDPs. Recall that

the influence of the size of an SDP/CQP program on the complexity of its solving by an IP method is twofold:
– first, the size affects the Newton complexity of the process. Theoretically, the number of steps required to reduce the duality gap by a constant factor, say, factor 2, is proportional to √θ(K) (θ(K) is twice the total # of conic quadratic inequalities for CQP and the total row size of the LMIs for SDP). Thus, we could expect an unpleasant growth of the iteration count with θ(K). Fortunately, the iteration count for good IP methods usually is much less than the one given by the worst-case complexity analysis and is typically about a few tens, independently of θ(K).
– second, the larger the instance, the larger the system of linear equations one should solve to generate a new primal (or primal-dual) search direction, and, consequently, the larger the computational effort per step (this effort is dominated by the necessity to assemble and to solve the linear system). Now, the system to be solved depends, of course, on the IP method we are speaking about, but it never is simpler (and for most of the methods, not more complicated either) than the system (4.5.8) arising in the primal path-following method:

[A*[∇²K(X̄)]A] ∆x = −[t_+ c + A*∇K(X̄)]   (Nwt)

(the matrix of the system is H = A*[∇²K(X̄)]A, and its right hand side is h = −[t_+ c + A*∇K(X̄)]).

The size n of this system is exactly the design dimension of problem (CP).
In order to process (Nwt), one should assemble the system (compute H and h) and then solve it. Whatever the cost of assembling (Nwt), you should be able to store the resulting matrix H in memory and to factorize the matrix in order to get the solution. Both these problems – storing and factorizing H – become prohibitively expensive when H is a large dense15) matrix. (Think how happy you will be with the necessity to store 5000·5001/2 = 12,502,500 reals representing a dense 5000 × 5000 symmetric matrix H and with the necessity to perform ≈ 5000³/6 ≈ 2.08 × 10¹⁰ arithmetic operations to find its Choleski factor.)
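The two counts just quoted are easy to reproduce; the sketch below is plain arithmetic, nothing specific to IP methods (the function names are ours, chosen for the illustration):

```python
# Back-of-the-envelope check of the storage/factorization costs quoted above
# for a dense symmetric n x n matrix H (illustration only).

def dense_sym_storage(n):
    # number of reals in the lower triangle, diagonal included
    return n * (n + 1) // 2

def choleski_flops(n):
    # leading term of the Choleski operation count, ~ n^3/6
    return n ** 3 // 6

n = 5000
print(dense_sym_storage(n))   # 12502500 reals
print(choleski_flops(n))      # 20833333333, i.e. about 2.08e10 operations
```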
The necessity to assemble and to solve large-scale systems of linear equations is intrinsic to IP methods as applied to large-scale optimization programs, and in this respect there is no difference between LP and CQP, on one hand, and SDP, on the other hand. The difference is in how difficult it is to handle these large-scale linear systems. In real life LPs-CQPs-SDPs, the structure of the data allows one to assemble (Nwt) at a cost negligibly small as compared to the cost of factorizing H, which is good news. Another piece of good news is that in typical real-world LPs, and to some extent in real-world CQPs, H turns out to be "very well-structured", which dramatically reduces the expense of factorizing the matrix and storing the Choleski factor. All practical IP solvers for LP and CQP utilize these favourable properties of real life problems, and this is where their ability to solve problems with tens/hundreds of thousands of variables and constraints comes from. Spoil the structure of the problem – and an IP method will be unable to solve an LP with just a few thousand variables. Now, in contrast to real life LPs and CQPs, real life SDPs typically result in dense matrices H, and this is where the severe limitations on the sizes of "tractable in practice" SDPs come from. In this respect, real life CQPs are somewhere in-between LPs and SDPs, so that the sizes of "tractable in practice" CQPs can be significantly larger than in the case of SDPs.
15) I.e., with O(n²) nonzero entries.

It should be mentioned that assembling the matrices of the linear systems we are interested in and solving these systems by standard Linear Algebra techniques is not the only possible way to implement an IP method. Another option is to solve these linear systems by iterative methods. With this approach, all we need to solve a system like (Nwt) is the possibility to multiply a given vector by the matrix of the system, and this does not require assembling and storing the matrix itself in memory. E.g., to multiply a vector ∆x by H, we can use the multiplicative representation of H as given in (Nwt). Theoretically, the outlined iterative schemes, as applied to real life SDPs, allow one to reduce by orders of magnitude the arithmetic cost of building search directions and to avoid the necessity to assemble and store huge dense matrices, which is an extremely attractive opportunity. The difficulty, however, is that iterative schemes are much more affected by rounding errors than the usual Linear Algebra techniques; as a result, for the time being an "iterative-Linear-Algebra-based" implementation of IP methods is no more than a challenging goal.
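A minimal sketch of the matrix-free idea just described: solve H∆x = h by conjugate gradients, using only products with A, A* and a positive diagonal D standing in for ∇²K (as it does for the LP cone). The data and the hand-rolled CG routine are purely illustrative; H is never formed:

```python
import numpy as np

# Matrix-free conjugate gradients for H dx = h with H = A^T D A:
# only matvecs with A, A^T and the diagonal D are needed; the matrix
# H itself is never assembled or stored.

def cg_matvec(apply_H, h, tol=1e-10, maxit=1000):
    x = np.zeros_like(h)
    r = h.copy()          # residual of the zero initial guess
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        Hp = apply_H(p)
        alpha = rs / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 8))
D = rng.uniform(1.0, 2.0, size=40)          # positive diagonal => H is SPD
h = rng.standard_normal(8)
apply_H = lambda v: A.T @ (D * (A @ v))     # multiplicative form of H
dx = cg_matvec(apply_H, h)
assert np.allclose((A.T * D) @ A @ dx, h)   # agrees with the assembled system
```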
Although the sizes of SDPs which can be solved with the existing codes are not as impressive as those of LPs, the possibilities offered to a practitioner by SDP IP methods can hardly be overestimated. Just ten years ago we could not even dream of solving an SDP with more than a few tens of variables, while today we can routinely solve SDPs 20-25 times larger, and we have every reason to believe in further significant progress in this direction.

4.8 Exercises for Lecture 4


Solutions to exercises/parts of exercises colored in cyan can be found in section 6.4.

4.8.1 Around canonical barriers


Exercise 4.1 Prove that the canonical barrier for the Lorentz cone is strongly convex.

Hint: Rewrite the barrier equivalently as

L_k(x) = − ln(t − xᵀx/t) − ln t

and use the fact that the function t − xᵀx/t is concave in (x, t).
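One can probe the claim numerically before proving it. The sketch below, for k = 4, assembles the Hessian of L_k(x, t) = −ln(t² − xᵀx) in closed form (the blocks shown in the comments come from direct differentiation, not from the book) and checks positive definiteness at random interior points; the sampling scheme is arbitrary:

```python
import numpy as np

# Numeric companion to Exercise 4.1 (sketch): with q = t^2 - x^T x,
# direct differentiation of -ln q gives the Hessian blocks
#   d2/dx2  = (2/q) I + (4/q^2) x x^T
#   d2/dxdt = -(4t/q^2) x
#   d2/dt2  = -2/q + 4 t^2 / q^2
# which should be positive definite at every interior point of L^4.

rng = np.random.default_rng(4)
for _ in range(50):
    x = rng.standard_normal(3)
    t = np.linalg.norm(x) + rng.uniform(0.1, 2.0)   # interior point of L^4
    q = t**2 - x @ x
    H = np.zeros((4, 4))
    H[:3, :3] = (2 / q) * np.eye(3) + (4 / q**2) * np.outer(x, x)
    H[:3, 3] = H[3, :3] = -(4 * t / q**2) * x
    H[3, 3] = -2 / q + 4 * t**2 / q**2
    assert np.linalg.eigvalsh(H).min() > 0
```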

Exercise 4.2 Prove Proposition 4.3.2.

Hint: Note that the property to be proved is "stable w.r.t. taking direct products", so that it suffices to verify it in the cases of K = L^k and K = S^k_+; to this end, you can use (4.3.1).

Exercise 4.3 Let K be a direct product of Lorentz and semidefinite cones, and let K(·) be the
canonical barrier for K. Prove that whenever X ∈ int K and S = −∇K(X), the matrices
∇2 K(X) and ∇2 K(S) are inverses of each other.

Hint: Differentiate the identity

−∇K(−∇K(X)) = X

given by Proposition 4.3.2.
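For a quick numeric sanity check of the statement of Exercise 4.3, here is a sketch for the single factor K = S^k_+ with K(X) = −ln Det X, for which ∇K(X) = −X^{-1} and the Hessian acts as H ↦ X^{-1}HX^{-1}; the random test data are illustrative only:

```python
import numpy as np

# Numeric illustration of Exercise 4.3 for K = S^4_+, K(X) = -ln det X:
# S = -grad K(X) = X^{-1}, and the Hessians act as
#   nabla^2 K(X)[H] = X^{-1} H X^{-1},  nabla^2 K(S)[H] = S^{-1} H S^{-1} = X H X,
# which are inverse linear maps of each other.

rng = np.random.default_rng(1)
G = rng.standard_normal((4, 4))
X = G @ G.T + 4 * np.eye(4)           # a point in int S^4_+
H = rng.standard_normal((4, 4))
H = H + H.T                           # a symmetric direction

Xi = np.linalg.inv(X)                 # = S = -grad K(X)
hess_X = lambda M: Xi @ M @ Xi        # action of nabla^2 K(X)
hess_S = lambda M: X @ M @ X          # action of nabla^2 K(S)

assert np.allclose(hess_S(hess_X(H)), H)
assert np.allclose(hess_X(hess_S(H)), H)
```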



4.8.2 Scalings of canonical cones


We already know that the semidefinite cone S^k_+ is "highly symmetric": given two interior points X, X′ of the cone, there exists a symmetry of S^k_+ – an affine transformation of the space where the cone lives – which maps the cone onto itself and maps X onto X′. The Lorentz cone possesses the same property, and its symmetries are Lorentz transformations. Writing vectors from R^k as x = (u; t) with u ∈ R^{k−1}, t ∈ R, we can write down a Lorentz transformation as

(u; t) ↦ α ( U[u − (µt − (√(1+µ²) − 1)eᵀu)e] ; √(1+µ²) t − µeᵀu ).   (LT)

Here α > 0, µ ∈ R, a unit vector e ∈ R^{k−1} (eᵀe = 1), and an orthogonal (k−1) × (k−1) matrix U are the parameters of the transformation.
The first question is whether (LT) is indeed a symmetry of L^k. Note that (LT) is the product of three linear mappings: we first act on the vector x = (u; t) by the special Lorentz transformation

L_{µ,e}: (u; t) ↦ ( u − [µt − (√(1+µ²) − 1)eᵀu]e ; √(1+µ²) t − µeᵀu ),   (∗)

then "rotate" the result by U around the t-axis, and finally multiply the result by α > 0. The second and the third transformations clearly map the Lorentz cone onto itself. Thus, in order to verify that the transformation (LT) maps L^k onto itself, it suffices to establish the same property for the transformation (∗).

Exercise 4.4 Prove that

1) Whenever e ∈ R^{k−1} is a unit vector and µ ∈ R, the linear transformation (∗) maps the cone L^k onto itself. Moreover, the transformation (∗) preserves the "space-time interval" xᵀJ_k x ≡ −x₁² − ... − x_{k−1}² + x_k²:

[L_{µ,e}x]ᵀ J_k [L_{µ,e}x] = xᵀJ_k x  ∀x ∈ R^k   [⇔ L_{µ,e}ᵀ J_k L_{µ,e} = J_k],

and L_{µ,e}^{−1} = L_{µ,−e}.

2) Given a point x̄ = (ū; t̄) ∈ int L^k and specifying a unit vector e and a real µ according to

ū = ‖ū‖₂ e,   µ = ‖ū‖₂ / √(t̄² − ūᵀū),

the resulting special Lorentz transformation L_{µ,e} maps x̄ onto the point (0_{k−1}; √(t̄² − ūᵀū)) on the "axis" {x = (0_{k−1}; τ) | τ ≥ 0} of the cone L^k. Consequently, the transformation √(2/(t̄² − ūᵀū)) L_{µ,e} maps x̄ onto the "central point" e(L^k) = (0_{k−1}; √2) of the axis – the point where ∇²L_k(·) is the unit matrix.
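Part 1) of the exercise is easy to probe numerically before proving it; the sketch below assembles the matrix of L_{µ,e} for k = 4 and checks both the preservation of the interval and the inverse formula (the particular µ and e are arbitrary):

```python
import numpy as np

# Numeric check of Exercise 4.4.1 (sketch): the special Lorentz
# transformation L_{mu,e} preserves the form x^T J_k x and satisfies
# L_{mu,e}^{-1} = L_{mu,-e}.  Here k = 4.

def lorentz(mu, e):
    g = np.sqrt(1 + mu**2)
    k1 = len(e)
    L = np.zeros((k1 + 1, k1 + 1))
    # u |-> u + (g-1)(e^T u)e - mu*t*e ;  t |-> g*t - mu*e^T u
    L[:k1, :k1] = np.eye(k1) + (g - 1) * np.outer(e, e)
    L[:k1, k1] = -mu * e
    L[k1, :k1] = -mu * e
    L[k1, k1] = g
    return L

e = np.array([1.0, 0.0, 0.0])
mu = 0.7
J = np.diag([-1.0, -1.0, -1.0, 1.0])
L = lorentz(mu, e)

assert np.allclose(L.T @ J @ L, J)                   # interval preserved
assert np.allclose(lorentz(mu, -e) @ L, np.eye(4))   # inverse is L_{mu,-e}
```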

By Exercise 4.4, given two points x′, x″ ∈ int L^k, we can find two symmetries L′, L″ of L^k such that L′x′ = e(L^k) and L″x″ = e(L^k), so that the linear mapping (L″)^{−1}L′, which is a symmetry of L^k since both L′, L″ are, maps x′ onto x″. In fact the product (L″)^{−1}L′ is again a Lorentz transformation – these transformations form a subgroup in the group of all linear transformations of R^k.
The importance of Lorentz transformations for us comes from the following fact:
Proposition 4.8.1 The canonical barrier L_k(x) = −ln(xᵀJ_k x) of the Lorentz cone is semi-invariant w.r.t. Lorentz transformations: if L is such a transformation, then L_k(Lx) = L_k(x) + const(L).

Exercise 4.5 Prove Proposition 4.8.1.


As explained, the semidefinite cone S^k_+ also possesses a rich group of symmetries (which here are of the form X ↦ HXHᵀ, Det H ≠ 0); as in the case of the Lorentz cone, "richness" means that there are enough symmetries to map any interior point of S^k_+ onto any other interior point of the cone. Recall also that the canonical barrier for the semidefinite cone is semi-invariant w.r.t. these symmetries.
Since our "basic components" L^k and S^k_+ possess rich groups of symmetries, so do all canonical cones – those which are direct products of Lorentz and semidefinite cones. Given such a cone
K = S^{k_1}_+ × ... × S^{k_p}_+ × L^{k_{p+1}} × ... × L^{k_m} ⊂ E = S^{k_1} × ... × S^{k_p} × R^{k_{p+1}} × ... × R^{k_m},   (Cone)

let us call a scaling of K a linear transformation Q of E such that

Q(X₁; ...; X_m) = (Q₁X₁; ...; Q_mX_m)

and every Q_i is either a Lorentz transformation, if the corresponding direct factor of K is a Lorentz cone (in our notation this is the case when i > p), or a "semidefinite scaling" Q_iX_i = H_iX_iH_iᵀ, Det H_i ≠ 0, if the corresponding direct factor of K is the semidefinite cone (i.e., if i ≤ p).
Exercise 4.6 Prove that
1) If Q is a scaling of the cone K, then Q is a symmetry of K, i.e., it maps K onto itself, and the canonical barrier K(·) of K is semi-invariant w.r.t. Q:

K(QX) = K(X) + const(Q).

2) For every pair X′, X″ of interior points of K, there exists a scaling Q of K which maps X′ onto X″. In particular, for every point X ∈ int K there exists a scaling Q which maps X onto the "central point" e(K) of K defined as

e(K) = ( I_{k_1}; ...; I_{k_p}; (0_{k_{p+1}−1}; √2); ...; (0_{k_m−1}; √2) ),

where the Hessian of K(·) is the unit matrix:

⟨[∇²K(e(K))]X, Y⟩_E = ⟨X, Y⟩_E.

Those readers who have passed through Section 4.5.4 may guess that scalings play a key role in the LP-CQP-SDP interior point constructions and proofs. The reason is simple: in order to realize what happens with canonical barriers and related entities, like central paths, etc., at a certain interior point X of the cone K in question, we apply an appropriate scaling to convert our point into a "simple" one, such as the central point e(K) of the cone K, and look at what happens at this simple-to-analyze point. We then use the semi-invariance of canonical barriers w.r.t. scalings to "transfer" our conclusions to the original case of interest. Let us look at a couple of instructive examples.

4.8.3 The Dikin ellipsoid


Let K be a canonical cone, i.e., a direct product of Lorentz and semidefinite cones, E be the space where K lives (see (Cone)), and K(·) be the canonical barrier for K. Given X ∈ int K, we can define a "local Euclidean norm"

‖H‖_X = √⟨[∇²K(X)]H, H⟩_E

on E.

Exercise 4.7 Prove that ‖·‖_X "conforms" with scalings: if X is an interior point of K and Q is a scaling of K, then

‖QH‖_{QX} = ‖H‖_X  ∀H ∈ E ∀X ∈ int K.

In other words, if X ∈ int K and Y ∈ E, then the ‖·‖_X-distance between X and Y equals the ‖·‖_{QX}-distance between QX and QY.

Hint: Use the semi-invariance of K(·) w.r.t. Q to show that

D^k K(QX)[QH₁, ..., QH_k] = D^k K(X)[H₁, ..., H_k],

and then set k = 2, H₁ = H₂ = H.
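For the single semidefinite factor with the −ln Det barrier, this "conformity with scalings" can be checked numerically; in that case ‖M‖_X = √(Tr(X^{-1}MX^{-1}M)) and the scalings are X ↦ GXGᵀ with Det G ≠ 0 (random data, illustration only):

```python
import numpy as np

# Numeric illustration of Exercise 4.7 for K = S^5_+ with the -ln det
# barrier: under a semidefinite scaling Q(X) = G X G^T the local norm
# ||M||_X = sqrt(tr(X^{-1} M X^{-1} M)) satisfies ||QM||_{QX} = ||M||_X.

rng = np.random.default_rng(6)
k = 5

def local_norm(X, M):
    Xi = np.linalg.inv(X)
    return np.sqrt(np.trace(Xi @ M @ Xi @ M))

Gm = rng.standard_normal((k, k))
X = Gm @ Gm.T + np.eye(k)              # X in int S^5_+
M = rng.standard_normal((k, k))
M = M + M.T                            # a symmetric direction
G = rng.standard_normal((k, k))        # a nonsingular scaling matrix (a.s.)

assert np.isclose(local_norm(G @ X @ G.T, G @ M @ G.T), local_norm(X, M))
```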

Exercise 4.8 For X ∈ int K, the Dikin ellipsoid of the point X is defined as the set

W_X = {Y | ‖Y − X‖_X ≤ 1}.

Prove that W_X ⊂ K.

Hint: Note that the property to be proved is "stable w.r.t. taking direct products", so that it suffices to verify it in the cases of K = L^k and K = S^k_+. Further, use the result of Exercise 4.7 to verify that the Dikin ellipsoid "conforms with scalings": the image of W_X under a scaling Q is exactly W_{QX}. Use this observation to reduce the general case to the one where X = e(K) is the central point of the cone in question, and verify straightforwardly that W_{e(K)} ⊂ K.
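A numeric illustration of the statement for the single factor K = S^k_+, where the local norm induced by −ln Det X is ‖H‖_X = ‖X^{-1/2}HX^{-1/2}‖_F; we sample random boundary points of the Dikin ellipsoid and check that they remain positive semidefinite (sketch, illustration only):

```python
import numpy as np

# Numeric illustration of Exercise 4.8 for K = S^5_+: every Y with
# ||Y - X||_X = ||X^{-1/2}(Y - X)X^{-1/2}||_F <= 1 stays in the cone.

rng = np.random.default_rng(2)
k = 5
G = rng.standard_normal((k, k))
X = G @ G.T + np.eye(k)                  # X in int S^5_+

w, V = np.linalg.eigh(X)
Xmh = V @ np.diag(w ** -0.5) @ V.T       # symmetric X^{-1/2}

for _ in range(100):
    H = rng.standard_normal((k, k))
    H = H + H.T
    H /= np.linalg.norm(Xmh @ H @ Xmh, 'fro')   # now ||H||_X = 1
    Y = X + H                                   # a boundary point of W_X
    assert np.linalg.eigvalsh(Y).min() >= -1e-10
```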

Figure 4.3: Dikin ellipsoids – a 2D cross-section of S³_+ and cross-sections of 3 Dikin ellipsoids.

According to Exercise 4.8, the Dikin ellipsoid of a point X ∈ int K is contained in K; in other words, the distance from X to the boundary of K, measured in the ‖·‖_X-norm, is not too small (it is at least 1). Can this distance be too large? The answer is "no" – the θ(K)-enlargement of the Dikin ellipsoid contains a "significant" part of the boundary of K. Specifically, given X ∈ int K, let us look at the vector −∇K(X). By Proposition 4.3.2, this vector belongs to the interior of K, and since K is self-dual, it means that the vector has positive inner products with all nonzero vectors from K. It follows that the set (called a "conic cap")

K_X = {Y ∈ K | ⟨−∇K(X), X − Y⟩_E ≥ 0}

– the part of K "below" the affine hyperplane which is tangent to the level surface of K(·) passing through X – is a convex compact subset of K which contains an intersection of K and a small ball centered at the origin, see Fig. 4.4.
Exercise 4.9 Let X ∈ int K. Prove that
1) The conic cap K_X "conforms with scalings": if Q is a scaling of K, then Q(K_X) = K_{QX}.

Hint: From the semi-invariance of the canonical barrier w.r.t. scalings it is clear that the image, under a scaling, of a hyperplane tangent to a level surface of K is again a hyperplane tangent to (perhaps, another) level surface of K.

2) Whenever Y ∈ K, one has

⟨∇K(X), Y − X⟩_E ≤ θ(K).

3) X is orthogonal to the hyperplane {H | ⟨∇K(X), H⟩_E = 0} in the local Euclidean structure associated with X, i.e.,

⟨∇K(X), H⟩_E = 0 ⇔ ⟨[∇²K(X)]X, H⟩_E = 0.



Figure 4.4: Conic cap of L3 associated with X = [0.3; 0; 1]

4) The conic cap K_X is contained in the ‖·‖_X-ball, centered at X, with radius θ(K):

Y ∈ K_X ⇒ ‖Y − X‖_X ≤ θ(K).

Hint to 2-4): Use 1) to reduce the situation to the one where X is the central point of K.

4.8.4 More on canonical barriers


Equipped with scalings, we can establish two additional useful properties of canonical barriers.
Let K be a canonical cone, and K(·) be the associated canonical barrier.

Exercise 4.10 Prove that if X ∈ int K, then

max{⟨∇K(X), H⟩_E | ‖H‖_X ≤ 1} ≤ √θ(K).

Hint: Verify that the statement to be proved is "scaling invariant", so that it suffices to prove it in the particular case when X is the central point e(K) of the canonical cone K. To verify the statement in this particular case, use Proposition 4.3.1.
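For the single factor K = S^k_+ (so that θ(K) = k and K(X) = −ln Det X) the bound can be checked numerically, and in fact it then holds with equality: the maximum equals the dual local norm √⟨[∇²K(X)]^{-1}∇K(X), ∇K(X)⟩_E = √(Tr I_k) = √k for every interior X (sketch, random data):

```python
import numpy as np

# Numeric companion to Exercise 4.10 (sketch) for K = S^6_+ with the
# -ln det barrier, theta(K) = k: the maximum of <grad K(X), H> over
# ||H||_X <= 1 is the dual local norm of grad K(X) = -X^{-1}, and since
# [nabla^2 K(X)]^{-1}[M] = X M X this norm equals sqrt(tr I) = sqrt(k).

rng = np.random.default_rng(5)
k = 6
G = rng.standard_normal((k, k))
X = G @ G.T + np.eye(k)                   # X in int S^6_+
gradK = -np.linalg.inv(X)
val = np.sqrt(np.trace(X @ gradK @ X @ gradK.T))   # dual local norm
assert np.isclose(val, np.sqrt(k))
```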

Exercise 4.11 Prove that if X ∈ int K and H ∈ K, H ≠ 0, then

⟨∇K(X), H⟩_E < 0

and

inf_{t≥0} K(X + tH) = −∞.
Derive from the second statement that



Proposition 4.8.2 If N is an affine plane which intersects the interior of K, then K is bounded below on the intersection N ∩ K if and only if the intersection is bounded.

Hint: The first statement is an immediate corollary of Proposition 4.3.2. To prove the second fact, observe first that it is "scaling invariant", so that it suffices to verify it in the case when X is the central point of K, and then carry out the required verification.

4.8.5 Around the primal path-following method


We have seen that the primal path-following method (Section 4.5.3) is a "strange" entity: it is a purely primal process for solving the conic problem

min_x { cᵀx : X ≡ Ax − B ∈ K },   (CP)

where K is a canonical cone, which iterates the updating

t ↦ t_+ > t,
X ↦ X_+ = X − Aδx,  δx = [A*[∇²K(X)]A]^{−1}[t_+ c + A*∇K(X)].   (U)

In spite of its "primal nature", the method is capable of producing dual approximate solutions; the corresponding formula is

S_+ = −t_+^{−1}[∇K(X) − [∇²K(X)]Aδx].   (S)
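The updating (U) and the formula (S) are easy to exercise numerically in the simplest case K = R^n_+ with K(X) = −∑_i ln X_i, for which ∇K(X) = −1/X componentwise and ∇²K(X) = Diag(1/X²). A checkable invariant of the resulting formulas is A*S_+ = c, i.e., S_+ always lies in the dual feasible plane; the random data below are purely illustrative:

```python
import numpy as np

# One step of the updating (U), (S) for the cone K = R^8_+ with the
# barrier K(X) = -sum ln X_i (sketch; grad K(X) = -1/X, diagonal Hessian).
# Invariant to check: the dual iterate S_+ satisfies A^T S_+ = c exactly.

rng = np.random.default_rng(3)
n, m = 8, 3
A = rng.standard_normal((n, m))
x = rng.standard_normal(m)
B = A @ x - np.ones(n)                  # makes X = Ax - B the all-ones vector
c = rng.standard_normal(m)

X = A @ x - B                           # strictly feasible primal point
t_plus = 1.5
g = -1.0 / X                            # grad K(X)
Hd = 1.0 / X**2                         # diagonal of nabla^2 K(X)
dx = np.linalg.solve(A.T @ (Hd[:, None] * A), t_plus * c + A.T @ g)
X_plus = X - A @ dx                     # primal update of (U)
S_plus = -(g - Hd * (A @ dx)) / t_plus  # dual iterate of (S)

assert np.allclose(A.T @ S_plus, c)     # S_+ lies in the dual feasible plane
```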

What is the “geometric meaning” of (S)?


The answer is simple. Given a strictly feasible primal solution Y = Ay − B and a value τ > 0 of the penalty parameter, let us think how we could extend (τ, Y) by a strictly feasible dual solution S to a triple (τ, Y, S) which is as close as possible, w.r.t. the distance dist(·, ·), to the point Z∗(τ) of the primal-dual central path. Recall that the distance in question is

dist((Y, S), Z∗(τ)) = √⟨[∇²K(Y)]^{−1}(τS + ∇K(Y)), τS + ∇K(Y)⟩_E.   (dist)

The simplest way to resolve our question is to choose in the dual feasible plane

L⊥ + C = {S | A*(C − S) = 0}   [C : A*C = c]

the point S which minimizes the right hand side of (dist). If we are lucky to get the resulting point in the interior of K, we get the best possible completion of (τ, Y) to an "admissible" triple (τ, Y, S) – the one where S is strictly dual feasible, and the triple is "as close as possible" to Z∗(τ).
Now, there is no difficulty in finding the above S – this is just the Least Squares problem

min_S { ⟨[∇²K(Y)]^{−1}(τS + ∇K(Y)), τS + ∇K(Y)⟩_E : A*S = c [≡ A*C] }.

Exercise 4.12 1) Let τ > 0 and Y = Ay − B ∈ int K. Prove that the solution to the above Least Squares problem is

S∗ = −τ^{−1}[∇K(Y) − [∇²K(Y)]Aδ],  δ = [A*[∇²K(Y)]A]^{−1}[τc + A*∇K(Y)],   (∗)

and that the squared optimal value in the problem is

λ²(τ, y) ≡ [τc + A*∇K(Y)]ᵀ[A*[∇²K(Y)]A]^{−1}[τc + A*∇K(Y)]
         = ‖S∗ + τ^{−1}∇K(Y)‖²_{−τ^{−1}∇K(Y)}   (4.8.1)
         = ‖τS∗ + ∇K(Y)‖²_{−∇K(Y)}.

2) Derive from 1) and the result of Exercise 4.8 that if the Newton decrement λ(τ, y) of (τ, y) is < 1, then we are lucky – S∗ is in the interior of K.
3) Derive from 1-2) the following

Corollary 4.8.1 Let (CP) and its dual be strictly feasible, let Z∗(·) be the primal-dual central path associated with (CP), and let (τ, Y, S) be a triple comprised of τ > 0, a strictly feasible primal solution Y = Ay − B, and a strictly feasible dual solution S. Assume that dist((Y, S), Z∗(τ)) < 1. Then

λ(τ, y) = dist((Y, S∗), Z∗(τ)) ≤ dist((Y, S), Z∗(τ)),

where S∗ is the strictly feasible dual solution given by 1) - 2).

Hint: When proving the second equality in (4.8.1), use the result of Exercise 4.3.

The result stated in Exercise 4.12 is very instructive. First, we see that S_+ in the primal path-following method is exactly the "best possible completion" of (t_+, X) to an admissible triple (t_+, X, S). Second, we see that

Proposition 4.8.3 If (CP) is strictly feasible and there exist τ > 0 and a strictly feasible solution y of (CP) such that λ(τ, y) < 1, then the problem is strictly dual feasible.

Indeed, the above scheme, as applied to (τ, y), produces a strictly feasible dual solution!

In fact, given that (CP) is strictly feasible, the existence of τ > 0 and a strictly feasible solution y to (CP) such that λ(τ, y) < 1 is a necessary and sufficient condition for (CP) to be strictly primal-dual feasible.

Indeed, the sufficiency of the condition was already established. To prove its necessity, note that if (CP) is primal-dual strictly feasible, then the primal central path is well-defined16); and if (τ, Y∗(τ) = Ay(τ) − B) is on the primal central path, then, of course, λ(τ, y(τ)) = 0.

16) In this Lecture, we have just announced this fact, but did not prove it; an interested reader is asked to give a proof by him/herself.

Note that the Newton decrement admits a nice and instructive geometric interpretation:

Let K be a canonical cone, K be the associated canonical barrier, Y = Ay − B be a strictly feasible solution to (CP) and τ > 0. Consider the barrier-generated family

B_t(x) = tcᵀx + B(x),  B(x) = K(Ax − B).

Then

λ(τ, y) = max{hᵀ∇B_τ(y) | hᵀ[∇²B_τ(y)]h ≤ 1}
        = max{⟨τC + ∇K(Y), H⟩_E | ‖H‖_Y ≤ 1, H ∈ Im A}   [Y = Ay − B].   (4.8.2)
Exercise 4.13 Prove (4.8.2).

Exercise 4.14 Let K be a canonical cone, K be the associated canonical barrier, and let N be an affine plane intersecting int K such that the intersection U = N ∩ int K is unbounded. Prove that for every X ∈ U one has

max{⟨∇K(X), Y − X⟩_E | ‖Y − X‖_X ≤ 1, Y ∈ N} ≥ 1.

Hint: Assume that the opposite inequality holds true for some X ∈ U and use (4.8.2) and Proposition 4.8.3 to conclude that the problem with trivial objective

min_X {⟨0, X⟩_E : X ∈ N ∩ K}

and its conic dual are strictly feasible. Then use the result of Exercise 1.16 to get a contradiction.

The concluding exercise in this series deals with the “toy example” of application of the primal
path-following method described at the end of Section 4.5.3:

Exercise 4.15 Looking at the data in the table at the end of Section 4.5.3, do you believe that
the corresponding method is exactly the short-step primal path-following method from Theorem
4.5.1 with the stepsize policy (4.5.12)?

In fact, the data at the end of Section 4.5.3 are given by a simple modification of the short-step path-following method: instead of the penalty updating policy (4.5.12), we increase the value of the penalty at each step in the largest ratio satisfying the requirement λ(t_{i+1}, x_i) ≤ 0.95.

4.8.6 Infeasible start path-following method


In our presentation of the interior point path-following methods, we have completely ignored the initialization issue – how to come close to the path in order to start tracing it. There are several techniques for accomplishing this task; we are about to outline one of them – the infeasible start path-following scheme (originating from C. Roos & T. Terlaky and from Yu. Nesterov). Among other attractive properties, a good "pedagogical" feature of this technique is that its analysis heavily relies on the results of the exercises in Sections 4.8.3, 4.8.4, 4.8.5, thus illustrating the extreme importance of facts which at first glance look a bit esoteric.

Situation and goals. Consider the following situation. We are interested in solving a conic problem

min_x { cᵀx : X ≡ Ax − B ∈ K },   (CP)

where K is a canonical cone. The corresponding primal-dual pair, in its geometric form, is

min_X { ⟨C, X⟩_E : X ∈ (L − B) ∩ K }   (P)
max_S { ⟨B, S⟩_E : S ∈ (L⊥ + C) ∩ K }   (D)
[L = Im A, L⊥ = Ker A*, A*C = c]

From now on we assume the primal-dual pair (P), (D) to be strictly primal-dual feasible.
To proceed, it is convenient to “normalize” the data as follows: when we shift B along the
subspace L, (P) remains unchanged, while (D) is replaced with an equivalent problem (since
when shifting B along L, the dual objective, restricted to the dual feasible set, gets a constant
additive term). Similarly, when we shift C along L⊥ , the dual problem (D) remains unchanged,
and the primal (P) is replaced with an equivalent problem. Thus, we can shift B along L and C
along L⊥ , while not varying the primal-dual pair (P), (D) (or, better to say, converting it to an
equivalent primal-dual pair). With appropriate shift of B along L we can enforce B ∈ L⊥ , and
with appropriate shift of C along L⊥ we can enforce C ∈ L. Thus, we can assume that from the
very beginning the data are normalized by the requirements

B ∈ L⊥ , C ∈ L, (Nrm)

which, in particular, implies that ⟨C, B⟩_E = 0, so that the duality gap at a pair (X, S) of primal-dual feasible solutions becomes

DualityGap(X, S) = ⟨X, S⟩_E = ⟨C, X⟩_E − ⟨B, S⟩_E  [= ⟨C, X⟩_E − ⟨B, S⟩_E + ⟨C, B⟩_E].

Our goal is rather ambitious: to develop an interior point method for solving (P), (D) which
requires neither a priori knowledge of a primal-dual strictly feasible pair of solutions, nor a
specific initialization phase.

The scheme. The construction we are about to present achieves the announced goal as follows.
1. We write down the following system of conic constraints in variables X, S and additional scalar variables τ, σ:

(a) X + τB − P ∈ L;
(b) S − τC − D ∈ L⊥;
(c) ⟨C, X⟩_E − ⟨B, S⟩_E + σ − d = 0;
(e) X ∈ K;   (C)
(f) S ∈ K;
(g) σ ≥ 0;
(h) τ ≥ 0.

Here P, D, d are certain fixed entities which we choose in such a way that
(i) We can easily point out a strictly feasible solution Ŷ = (X̂, Ŝ, σ̂, τ̂ = 1) to the system;
(ii) The solution set Y of (C) is unbounded; moreover, whenever Y_i = (X_i, S_i, σ_i, τ_i) ∈ Y is an unbounded sequence, we have τ_i → ∞.

2. Imagine that we have a mechanism which allows us to "run away to ∞ along Y", i.e., to generate a sequence of points Y_i = (X_i, S_i, σ_i, τ_i) ∈ Y such that ‖Y_i‖ ≡ √(‖X_i‖²_E + ‖S_i‖²_E + σ_i² + τ_i²) → ∞. In this case, by (ii), τ_i → ∞ as i → ∞. Let us define the normalizations

X̃_i = τ_i^{−1}X_i,  S̃_i = τ_i^{−1}S_i

of X_i, S_i. Since (X_i, S_i, σ_i, τ_i) is a solution to (C), these normalizations satisfy the relations

(a) X̃_i ∈ (L − B + τ_i^{−1}P) ∩ K;
(b) S̃_i ∈ (L⊥ + C + τ_i^{−1}D) ∩ K;   (C′)
(c) ⟨C, X̃_i⟩_E − ⟨B, S̃_i⟩_E ≤ τ_i^{−1}d.

Since τ_i → ∞, relations (C′) say that as i → ∞, the normalizations X̃_i, S̃_i simultaneously approach primal-dual feasibility for (P), (D) (see (C′.a−b)) and primal-dual optimality (see (C′.c), and recall that the duality gap, with our normalization ⟨C, B⟩_E = 0, is ⟨C, X⟩_E − ⟨B, S⟩_E).

3. The issue, of course, is how to build a mechanism which allows us to run away to ∞ along Y. The mechanism we intend to use is as follows. (C) can be rewritten in the generic form

Y ≡ (X; S; σ; τ) ∈ (M + R) ∩ K̃,   (G)

where

K̃ = K × K × S¹_+ × S¹_+   [S¹_+ = R_+],

M = {(U; V; s; r) : U + rB ∈ L, V − rC ∈ L⊥, ⟨C, U⟩_E − ⟨B, V⟩_E + s = 0}

is a linear subspace in the space Ẽ where the cone K̃ lives, and

R = (P; D; d − ⟨C, P⟩_E + ⟨B, D⟩_E; 0) ∈ Ẽ.

The cone K̃ is a canonical cone along with K; as such, it is equipped with the corresponding canonical barrier K̃(·). Let Ŷ = (X̂; Ŝ; σ̂; τ̂ = 1) be the strictly feasible solution to (G) given by 1.(i), and let

C̃ = −∇K̃(Ŷ).

Consider the auxiliary problem

min_Y { ⟨C̃, Y⟩_Ẽ : Y ∈ (M + R) ∩ K̃ }.   (Aux)

By the origin of C̃, the point Ŷ lies on the primal central path Ỹ∗(t) of this auxiliary problem:

Ŷ = Ỹ∗(1).
Let us trace the primal central path Ỹ∗(·), but decreasing the value of the penalty instead of increasing it, thus enforcing the penalty to approach 0. What will happen in this process? Recall that the point Ỹ∗(t) of the primal central path of (Aux) minimizes the aggregate

t⟨C̃, Y⟩_Ẽ + K̃(Y)

over Y ∈ Y. When t is small, we, essentially, are trying to minimize just K̃(Y). But the canonical barrier, restricted to an unbounded intersection of an affine plane and the associated canonical cone, is not bounded below on this intersection (see Proposition 4.8.2). Therefore, if we were minimizing the barrier K̃ over Y, the minimum "would be achieved at infinity"; it is natural to guess (and this is indeed true) that when minimizing a slightly perturbed barrier, the minimum will run away to infinity as the level of perturbation goes to 0. Thus, we may expect (and again it is indeed true) that ‖Ỹ∗(t)‖ → ∞ as t → +0, so that when tracing the path Ỹ∗(t) as t → 0, we achieve our goal of running away to infinity along Y.

Now let us implement the outlined approach.

Specifying P, D, d. Given the data of (CP), let us choose somehow P >_K B, D >_K −C, σ̂ > 0 and set

d = ⟨C, P − B⟩_E − ⟨B, D + C⟩_E + σ̂.

Exercise 4.16 Prove that with the above setup, the point

Ŷ = (X̂ = P − B; Ŝ = C + D; σ̂; τ̂ = 1)

is a strictly feasible solution to (Aux). Thus, our setup ensures 1.(i).

Verifying 1.(ii). This step is crucial:

Exercise 4.17 Let (Aux′) be the problem dual to (Aux). Prove that (Aux), (Aux′) is a strictly primal-dual feasible pair of problems.

Hint: By construction, (Aux) is strictly feasible; to prove that (Aux′) is also strictly feasible, use Proposition 4.8.3.

Exercise 4.18 Prove that with the outlined setup the feasible set Y of (Aux) is unbounded.

Hint: Use the criterion of boundedness of the feasible set of a feasible conic problem (Exercise 1.15), which as applied to (Aux) reads as follows: the feasible set of (Aux) is bounded if and only if M⊥ intersects the interior of the cone dual to K̃ (since K̃ is a canonical cone, the cone dual to it is K̃ itself).

The result of Exercise 4.18 establishes the major part of 1.(ii). The remaining part of the latter property is given by

Exercise 4.19 Let X̄, S̄ be a strictly feasible pair of primal-dual solutions to (P), (D) (recall that the latter pair of problems was assumed to be strictly primal-dual feasible), so that there exists γ ∈ (0, 1] such that

γ‖X‖_E ≤ ⟨S̄, X⟩_E  ∀X ∈ K,
γ‖S‖_E ≤ ⟨X̄, S⟩_E  ∀S ∈ K.

Prove that if Y = (X; S; σ; τ) is feasible for (Aux), then

‖Y‖_Ẽ ≤ ατ + β,
α = γ^{−1}(⟨X̄, C⟩_E − ⟨S̄, B⟩_E) + 1,   (4.8.3)
β = γ^{−1}(⟨X̄ + B, D⟩_E + ⟨S̄ − C, P⟩_E + d).

Use this result to complete the verification of 1.(ii).

Tracing the path Ỹ∗(t) as t → 0. The path Ỹ∗(t) is the primal central path of a certain strictly primal-dual feasible pair of conic problems associated with a canonical cone (see Exercise 4.17). The only difference with the situation already discussed in this Lecture is that now we are interested in tracing the path as t → +0, starting the process from the point Ŷ = Ỹ∗(1) given by 1.(i), rather than in tracing the path as t → ∞. It turns out that we have exactly the same possibilities to trace the path Ỹ∗(t) as the penalty parameter approaches 0 as when tracing the path as t → ∞; in particular, we can use short-step primal and primal-dual path-following methods with stepsize policies "opposite" to those mentioned, respectively, in Theorem 4.5.1 and Theorem 4.5.2 ("opposite" means that instead of increasing the penalty at each iteration in a certain ratio, we decrease it in exactly the same ratio). It can be verified (take it for granted!) that the results of Theorems 4.5.1, 4.5.2 remain valid in this new situation as well. Thus, in order to generate a triple (t, Y, U) such that t ∈ (0, 1), Y is strictly feasible for (Aux), U is strictly feasible for the problem (Aux′) dual to (Aux), and dist((Y, U), Z̃∗(t)) ≤ κ ≤ 0.1, it suffices to carry out

N(t) = O(1)√θ(K̃) ln(1/t) = O(1)√θ(K) ln(1/t)

steps of the path-following method; here Z̃∗(·) is the primal-dual central path of the primal-dual pair of problems (Aux), (Aux′), and dist from now on is the distance to this path, as defined in Section 4.4.2.2. Thus, we understand the cost of arriving at a close-to-the-path triple (t, Y, U) with a desired value t ∈ (0, 1) of the penalty. Further, our original scheme explains how to convert the Y-component of such a triple into a pair X_t, S_t of approximate solutions to (P), (D):

X_t = (1/τ[Y]) X[Y];  S_t = (1/τ[Y]) S[Y],

where

Y = (X[Y]; S[Y]; σ[Y]; τ[Y]).

What we do not know for the moment is

(?) What is the quality of the resulting pair (Xt , St ) of approximate solutions to (P),
(D) as a function of t?

Looking at (C0 ), we see that (?) is, essentially, the question of how rapidly the component
τ [Y ] of our “close-to-the-path triple (t, Y, U )” blows up when t approaches 0. In view of the
bound (4.8.3), the latter question, in turn, becomes "how large is ‖Y‖_Ẽ when t is small". The
answers to all these questions are given in the following two exercises:

Exercise 4.20 Let (t, Y, U ) be a “close-to-the-path” triple, so that t > 0, Y is strictly feasible
for (Aux), U is strictly feasible for the dual to (Aux) problem (Aux0 ) and

dist((Y, U ), Ze∗ (t)) ≤ κ ≤ 0.1.

Verify that

    (a)  max{⟨−∇K̃(Y), H⟩_E : H ∈ M, ‖H‖_Y ≤ 1} ≥ 1,
                                                                  (4.8.4)
    (b)  max{⟨tC̃ + ∇K̃(Y), H⟩_E : H ∈ M, ‖H‖_Y ≤ 1} ≤ κ ≤ 0.1.

Conclude from these relations that


    max{⟨−tC̃, H⟩_E : H ∈ M, ‖H‖_Y ≤ 1} ≥ 0.9.     (4.8.5)

Hint: To verify (4.8.4.a), use the result of Exercise 4.14. To verify (4.8.4.b), use Corollary
4.8.1 (with (Aux) playing the role of (CP), t playing the role of τ and U playing the role of
S) and the result of Exercise 4.13.

Now consider the following geometric construction. Given a triple (t, Y, U ) satisfying the premise
of Exercise 4.20, let us denote by W 1 the intersection of the Dikin ellipsoid of Yb with the feasible
plane of (Aux), and by W t the intersection of the Dikin ellipsoid of Y with the same feasible
plane. Let us also extend the line segment [Yb , Y ] to the left of Yb until it crosses the boundary
of W¹ at a certain point P. Further, let us choose H ∈ M such that ‖H‖_Y = 1 and

    ⟨−tC̃, H⟩_E ≥ 0.9

(such an H exists in view of (4.8.5)) and set

    M = Y + H;   N = Ŷ + ωH,   ω = ‖Ŷ − P‖₂ / ‖Y − P‖₂.

The cross-section of the entities involved by the 2D plane passing through Q, Y, M looks as
shown on Fig. 4.5.

Exercise 4.21 1) Prove that the points P, M, N belong to the feasible set of (Aux).

[Figure 4.5 (illustration for the considerations on p. 385): the 2D cross-section through the points P, Ŷ, Y.]

Hint: Use Exercise 4.8 to prove that P, M are feasible for (Aux); note that N is a convex
combination of P, M.

2) Prove that

    ⟨∇K̃(Ŷ), N − Ŷ⟩_E = (ω/t) ⟨−tC̃, H⟩_E ≥ 0.9 ω/t.

Hint: Recall that by definition C̃ = −∇K̃(Ŷ).

3) Derive from 1)–2) that

    ω ≤ θ(K̃) t / 0.9.

Conclude from the resulting bound on ω that

    ‖Y‖_Ẽ ≥ Ω/t − Ω′,   Ω = 0.9 min_D {‖D‖_Ẽ : ‖D‖_Ŷ = 1} / θ(K̃),   Ω′ = max_D {‖D‖_Ẽ : ‖D‖_Ŷ = 1}.     (4.8.6)

Note that Ω and Ω′ are positive quantities depending on our "starting point" Ŷ and completely
independent of t!

Hint: To derive the bound on ω, use the result of Exercise 4.9.2.

Exercise 4.22 Derive from the results of Exercises 4.21, 4.19 that there exists a positive con-
stant Θ (depending on the data of (Aux)) such that

(#) Whenever a triple (t, Y, U) is "close-to-the-path" (see Exercise 4.20) and
Y = (X; S; σ; τ), one has

    τ ≥ 1/(Θt) − Θ.

Consequently, when t ≤ 1/(2Θ²), the pair (X_τ = τ⁻¹X, S_τ = τ⁻¹S) satisfies the relations
(cf. (C0))

    X_τ ∈ K ∩ (L − B + 2tΘP)            ["primal O(t)-feasibility"]
    S_τ ∈ K ∩ (L⊥ + C + 2tΘD)           ["dual O(t)-feasibility"]     (+)
    ⟨C, X_τ⟩_E − ⟨B, S_τ⟩_E ≤ 2tΘd      ["O(t)-duality gap"]

(#) says that in order to get an "ε-primal-dual feasible ε-optimal" solution to (P), (D), it suffices
to trace the primal central path of (Aux), starting at the point Ŷ (penalty parameter
equals 1), until a close-to-the-path point with penalty parameter O(ε) is reached, which requires
O(√θ(K) ln(1/ε)) iterations. Thus, we arrive at a process with the same complexity character-
istics as for the path-following methods discussed in this Lecture; note, however, that now we
have absolutely no troubles with how to start tracing the path.
At this point, a careful reader should protest: relations (+) do say that when t is small, X_τ
is nearly feasible for (P) and S_τ is nearly feasible for (D); but how do we know that X_τ, S_τ are
nearly optimal for the respective problems? What pretends to ensure the latter property is the
"O(t)-duality gap" relation in (+), and indeed, the left hand side of this inequality looks like the
duality gap, while the right hand side is O(t). But in fact the relation

    DualityGap(X, S) ≡ [⟨C, X⟩_E − Opt(P)] + [Opt(D) − ⟨B, S⟩_E] = ⟨C, X⟩_E − ⟨B, S⟩_E 17)

is valid only for primal-dual feasible pairs (X, S), while our X_τ, S_τ are only O(t)-feasible.
Here is the missing element:

Exercise 4.23 Let the primal-dual pair of problems (P), (D) be strictly primal-dual feasible and
be normalized by ⟨C, B⟩_E = 0, let (X_*, S_*) be a primal-dual optimal solution to the pair, and let
X, S "ε-satisfy" the feasibility and optimality conditions for (P), (D), i.e.,

    (a)  X ∈ K ∩ (L − B + ∆X),   ‖∆X‖_E ≤ ε,
    (b)  S ∈ K ∩ (L⊥ + C + ∆S),  ‖∆S‖_E ≤ ε,
    (c)  ⟨C, X⟩_E − ⟨B, S⟩_E ≤ ε.

Prove that

    ⟨C, X⟩_E − Opt(P) ≤ ε(1 + ‖X_* + B‖_E),
    Opt(D) − ⟨B, S⟩_E ≤ ε(1 + ‖S_* − C‖_E).

Exercise 4.24 Implement the infeasible-start path-following method.

17) In fact, in the right hand side there should also be the term ⟨C, B⟩_E; recall, however, that with our setup
this term is zero.
Lecture 5

Simple methods for large-scale problems

5.1 Motivation: Why simple methods?


The polynomial time Interior Point methods, same as all other polynomial time methods for
Convex Programming known so far, have a not that pleasant common feature: the arithmetic
cost C of an iteration in such a method grows nonlinearly with the design dimension n of the
problem, unless the problem possesses a very favourable structure. E.g., in IP methods, an
iteration requires solving a system of linear equations with (at least) n unknowns. Solving
this auxiliary problem costs at least O(n²) operations (with traditional Linear Algebra
– even O(n3 ) operations), except for the cases when the matrix of the system is very sparse
and, moreover, possesses a well-structured sparsity pattern. The latter indeed is the case when
solving most of LPs of decision-making origin, but often is not the case for LPs coming from
Engineering, and nearly never is the case for SDPs. For other known polynomial time methods,
the situation is similar – the arithmetic cost of an iteration, even in the case of extremely simple
objectives and feasible sets, is at least O(n²). With n of the order of tens and hundreds of thousands,
the computational effort of O(n²), not speaking about O(n³), operations per iteration becomes
prohibitively large – basically, you will never finish the very first iteration of your method... On
the other hand, design dimensions of tens and hundreds of thousands are exactly what is met
in many applications, like SDP relaxations of combinatorial problems involving large graphs or
Structural Design (especially for 3D structures). As another important application of this type,
consider the 3D Medical Imaging problem arising in Positron Emission Tomography.

Positron Emission Tomography (PET) is a powerful, non-invasive, medical diagnostic imaging


technique for measuring the metabolic activity of cells in the human body. It has been in clinical use
since the early 1990s. PET imaging is unique in that it shows the chemical functioning of organs and
tissues, while other imaging techniques - such as X-ray, computerized tomography (CT) and magnetic
resonance imaging (MRI) - show anatomic structures.
A PET scan involves the use of a radioactive tracer – a fluid with a small amount of a radioactive
material which has the property of emitting positrons. When the tracer is administered to a patient,
either by injection or inhalation of gas, it distributes within the body. For a properly chosen tracer, this
distribution “concentrates” in desired locations, e.g., in the areas of high metabolic activity where cancer
tumors can be expected.
The radioactive component of the tracer disintegrates, emitting positrons. Such a positron nearly
immediately annihilates with a near-by electron, giving rise to two γ-quants flying at the speed of light off
the point of annihilation in nearly opposite directions along a line with a completely random orientation
(i.e., line’s direction is drawn at random from the uniform distribution on the unit sphere in 3D). The

389

γ-quants penetrate the surrounding tissue and are registered outside the patient by a PET scanner
consisting of circular arrays (rings) of gamma radiation detectors. Since the two gamma rays are emitted
simultaneously and travel in almost exactly opposite directions, we can say a lot on the location of their
source: when a pair of detectors register high-energy γ-quants within a short (∼ 10−8 sec) time window
(“a coincidence event”), we know that the photons came from a disintegration act, and that the act took
place on the line (“line of response” (LOR)) linking the detectors. The measured data set is the collection
of numbers of coincidences counted by different pairs of detectors (“bins”), and the problem is to recover
from these measurements the 3D density of the tracer.
The mathematical model of the process, after appropriate discretization, is

y = P λ + ξ,

where
• λ ≥ 0 is the vector representing the (discretized) density of the tracer; the entries of λ are indexed
by voxels – small cubes into which we partition the field of view, and λj is the mean density of
the tracer in voxel j. Typically, the number n of voxels is in the range from 3 × 105 to 3 × 106 ,
depending on the resolution of the discretization grid;
• y are the measurements; the entries in y are indexed by bins – pairs of detectors, and yi is the
number of coincidences counted by i-th pair of detectors. Typically, the dimension m of y – the
total number of bins – is millions (at least 3 × 106 );
• P is the projection matrix; its entries pij are the probabilities for a LOR originating in voxel j to
be registered by bin i. These probabilities are readily given by the geometry of the scanner;
• ξ is the measurement noise coming mainly from the fact that all physical processes underlying PET
are random. The standard statistical model for PET implies that yi , i = 1, ..., m, are independent
Poisson random variables with the expectations (P λ)i .
The problem we are interested in is to recover tracer’s density λ given measurements y. As far as
the quality of the result is concerned, the most attractive reconstruction scheme is given by the standard
in Statistics Likelihood Ratio maximization: denoting p(·|λ) the density, taken w.r.t. an appropriate
dominating measure, of the probability distribution of the measurements coming from λ, the estimate of
the unknown true value λ∗ of λ is

    λ̂ = argmax_{λ≥0} p(y|λ),

where y is the vector of measurements.


For the aforementioned Poisson model of PET, building the Maximum Likelihood estimate is equiv-
alent to solving the optimization problem
    min_λ { Σ_{j=1}^n p_j λ_j − Σ_{i=1}^m y_i ln(Σ_{j=1}^n p_ij λ_j) : λ ≥ 0 },   p_j ≡ Σ_i p_ij.     (PET)

This is a nicely structured convex program (by the way, polynomially reducible to CQP and even LP).
The only difficulty – and a severe one – is in huge sizes of the problem: as it was already explained, the
number n of decision variables is at least 300, 000, while the number m of log-terms in the objective is in
the range from 3 × 106 to 25 × 106 .
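The Poisson log-likelihood structure of (PET) is easy to experiment with numerically. Below is a minimal sketch (the function names and the tiny synthetic instance are ours, not from the text) that evaluates the (PET) objective and its gradient; real instances have n ~ 3×10⁵–3×10⁶ voxels and m ≥ 3×10⁶ bins.

```python
import numpy as np

def pet_objective(lam, P, y):
    """(PET) objective: sum_j p_j*lam_j - sum_i y_i*ln((P lam)_i)."""
    p = P.sum(axis=0)                      # p_j = sum_i p_ij
    return p @ lam - y @ np.log(P @ lam)

def pet_gradient(lam, P, y):
    """Gradient of the (PET) objective at lam > 0."""
    return P.sum(axis=0) - P.T @ (y / (P @ lam))

# Tiny synthetic instance (hypothetical sizes; real n, m are in the millions).
rng = np.random.default_rng(0)
P = rng.uniform(0.1, 1.0, size=(8, 5))     # projection probabilities p_ij
y = rng.poisson(P @ rng.uniform(0.5, 2.0, size=5)).astype(float) + 1.0
lam0 = np.ones(5)
g = pet_gradient(lam0, P, y)
```

A finite-difference check of the gradient against the objective is a quick way to confirm the formulas above.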

At the present level of our knowledge, the design dimension n of order of tens and hundreds
of thousands rules out the possibility to solve a nonlinear convex program, even a well-structured
one, by polynomial time methods because of at least quadratic in n “blowing up” the arithmetic
cost of an iteration. When n is really large, all we can use are simple methods with linear in n
cost of an iteration. As a byproduct of this restriction, we cannot utilize anymore our knowledge

of the analytic structure of the problem, since all known for the time being ways of doing so are
too expensive, provided that n is large. As a result, we are enforced to restrict ourselves with
black-box-oriented optimization techniques – those which use solely the possibility to compute
the values and the (sub)gradients of the objective and the constraints at a point. In Convex
Optimization, two types of “cheap” black-box-oriented optimization techniques are known:

• techniques for unconstrained minimization of smooth convex functions (Gradient Descent,


Conjugate Gradients, quasi-Newton methods with restricted memory, etc.);

• subgradient-type techniques, called also First Order algorithms, for nonsmooth convex
programs, including constrained ones.

Since the majority of applications are constrained, we restrict our exposition to the techniques
of the second type. We start with investigating of what, in principle, can be expected of black-
box-oriented optimization techniques.

5.1.1 Black-box-oriented methods and Information-based complexity


Consider a Convex Programming program in the form

min {f (x) : x ∈ X} , (CP)


x

where X is a convex compact set in a Euclidean space (which we w.l.o.g. identify with Rn )
and the objective f is a continuous convex function on Rn . Let us fix a family P(X) of convex
programs (CP) with X common for all programs from the family, so that such a program can be
identified with the corresponding objective, and the family itself is nothing but certain family
of convex functions on Rn . We intend to explain what is the Information-based complexity of
P(X) – informally, complexity of the family w.r.t. “black-box-oriented” methods. We start with
defining such a method as a routine B as follows:

1. When starting to solve (CP), B is given an accuracy  > 0 to which the problem should
be solved and knows that the problem belongs to a given family P(X). However, B does
not know what is the particular problem it deals with.

2. In course of solving the problem, B has an access to the First Order oracle for f . This
oracle is capable, given on input a point x ∈ Rn , to report on output what is the value
f (x) and a subgradient f 0 (x) of f at x.
B generates somehow a sequence of search points x1 , x2 , ... and calls the First Order oracle
to get the values and the subgradients of f at these points. The rules for building xt can be
arbitrary, except for the fact that they should be non-anticipating (causal): xt can depend
only on the information f (x1 ), f 0 (x1 ), ..., f (xt−1 ), f 0 (xt−1 ) on f accumulated by B at the
first t − 1 steps.

3. After certain number T = TB (f, ) of calls to the oracle, B terminates and outputs the
result zB (f, ). This result again should depend solely on the information on f accumulated
by B at the T search steps, and must be an -solution to (CP), i.e.,

zB (f, ) ∈ X & f (zB (f, )) − min f ≤ .


X

We measure the complexity of P(X) w.r.t. a solution method B by the function

ComplB () = max TB (f, )


f ∈P(X)

– by the minimal number of steps in which B is capable to solve within accuracy  every instance
of P(X). Finally, the Information-based complexity of the family P(X) of problems is defined
as
Compl() = min ComplB (),
B
the minimum being taken over all solution methods. Thus, the relation Compl() = N means,
first, that there exists a solution method B capable to solve within accuracy  every instance of
P(X) in no more than N calls to the First Order oracle, and, second, that for every solution
method B there exists an instance of P(X) such that B solves the instance within the accuracy
 in at least N steps.
Note that as far as black-box-oriented optimization methods are concerned, the information-
based complexity Compl() of a family P(X) is a lower bound on “actual” computational effort,
whatever it means, sufficient to find -solution to every instance of the family.

5.1.2 Main results on Information-based complexity of Convex Programming


Main results on Information-based complexity of Convex Programming can be summarized as
follows. Let X be a solid in Rn (a convex compact set with a nonempty interior), and let P(X)
be the family of all convex functions on Rn normalized by the condition

max f − min f ≤ 1. (5.1.1)


X X

For this family,


I. Complexity of finding high-accuracy solutions in fixed dimension is independent of the
geometry of X. Specifically,
 
1
∀( ≤ (X)) : O(1)n ln 2 +  ≤ Compl();

1
 (5.1.2)
∀( > 0) : Compl() ≤ O(1)n ln 2 +  ,

where
• O(1)’s are appropriately chosen positive absolute constants,
1
• (X) depends on the geometry of X, but never is less than n2
, where n is the dimen-
sion of X.
II. Complexity of finding solutions of fixed accuracy in high dimensions does depend on the
geometry of X. Here are 3 typical results:
(a) Let X be an n-dimensional box: X = {x ∈ Rn : kxk∞ ≤ 1}. Then
1 1 1
≤ ⇒ O(1)n ln( ) ≤ Compl() ≤ O(1)n ln( ). (5.1.3)
2  
(b) Let X be an n-dimensional ball: X = {x ∈ Rn : kxk2 ≤ 1}. Then
1 O(1) O(1)
n≥ 2
⇒ 2 ≤ Compl() ≤ 2 . (5.1.4)
  

(c) Let X be an n-dimensional hyperoctahedron: X = {x ∈ Rn : kxk1 ≤ 1}. Then

1 O(1) O(ln n)
n≥ 2
⇒ 2 ≤ Compl() ≤ (5.1.5)
  2
1
(in fact, O(1) in the lower bound can be replaced with O(ln n), provided that n  2
).

Since we are interested in extremely large-scale problems, the moral which we can extract from
the outlined results is as follows:
• I is discouraging: it says that we have no hope to guarantee high accuracy, like  = 10−6 ,
when solving large-scale problems with black-box-oriented methods; indeed, with O(n) steps per
accuracy digit and at least O(n) operations per step (this many operations are required already
to input a search point to the oracle), the arithmetic cost per accuracy digit is at least O(n2 ),
which is prohibitively large for really large n.
• II is partly discouraging, partly encouraging. A bad news reported by II is that when X is
a box, which is the most typical situation in applications, we have no hope to solve extremely
large-scale problems, in a reasonable time, to guaranteed, even low, accuracy, since the required
number of steps should be at least of order of n. A good news reported by II is that there
exist situations where the complexity of minimizing a convex function within a fixed accuracy is
independent, or nearly independent, of the design dimension. Of course, the dependence of the
complexity bounds in (5.1.4) and (5.1.5) on  is very bad and has nothing in common with being
polynomial in ln(1/); however, this drawback is tolerable when we do not intend to get high
accuracy. Another drawback is that there are not that many applications where the feasible set
is a ball or a hyperoctahedron. Note, however, that in fact we can save the most important
for us upper complexity bounds in (5.1.4) and (5.1.5) when requiring from X to be a subset of
a ball, respectively, of a hyperoctahedron, rather than to be the entire ball/hyperoctahedron.
This extension is not costless: we should simultaneously strengthen the normalization condition
(5.1.1). Specifically, we shall see that

B. The upper complexity bound in (5.1.4) remains valid when X ⊂ {x : kxk2 ≤ 1} and

P(X) = {f : f is convex and |f (x) − f (y)| ≤ kx − yk2 ∀x, y ∈ X};

S. The upper complexity bound in (5.1.5) remains valid when X ⊂ {x : kxk1 ≤ 1} and

P(X) = {f : f is convex and |f (x) − f (y)| ≤ kx − yk1 ∀x, y ∈ X}.

Note that the “ball-like” case mentioned in B seems to be rather artificial: the Euclidean norm
associated with this case is a very natural mathematical entity, but this is all we can say in its
favour. For example, the normalization of the objective in B is that the Lipschitz constant of f
w.r.t. k · k2 is ≤ 1, or, which is the same, that the vector of the first order partial derivatives of f
should, at every point, be of k·k2 -norm not exceeding 1. In order words, “typical” magnitudes of
the partial derivatives of f should become smaller and smaller as the number of variables grows;
what could be the reasons for such a strange behaviour? In contrast to this, the normalization
condition imposed on f in S is that the Lipschitz constant of f w.r.t. k · k1 is ≤ 1, or, which is
the same, that the k · k∞ -norm of the vector of partial derivatives of f is ≤ 1. In other words,
the normalization is that the magnitudes of the first order partial derivatives of f should be ≤ 1,
and this normalization is “dimension-independent”. Of course, in B we deal with minimization
over subsets of the unit ball, while in S we deal with minimization over the subsets of the unit

hyperoctahedron, which is much smaller than the unit ball. However, there do exist problems
in reality where we should minimize over the standard simplex
    ∆_n = {x ∈ R^n : x ≥ 0, Σ_i x_i = 1},

which indeed is a subset of the unit hyperoctahedron. For example, it turns out that the PET
Image Reconstruction problem (PET) is in fact the problem of minimization over the standard
simplex. Indeed, the optimality condition for (PET) reads
 
    λ_j ( p_j − Σ_i y_i p_ij / Σ_ℓ p_iℓ λ_ℓ ) = 0,   j = 1, ..., n;

summing up these equalities, we get


    Σ_j p_j λ_j = B ≡ Σ_i y_i.

It follows that the optimal solution to (PET) remains unchanged when we add to the non-
negativity constraints λ_j ≥ 0 also the constraint Σ_j p_j λ_j = B. Passing to the new variables
x_j = B⁻¹ p_j λ_j, we further convert (PET) to the equivalent form

    min_x { f(x) ≡ −Σ_i y_i ln(Σ_j q_ij x_j) : x ∈ ∆_n },   q_ij = B p_ij / p_j,     (PET′)

which is a problem of minimizing a convex function over the standard simplex.
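A quick numerical sanity check of this change of variables (a sketch on a tiny synthetic instance of our own; recall that on the slice Σ_j p_j λ_j = B the first term of the (PET) objective is the constant B, which the simplex form drops):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 8, 5
P = rng.uniform(0.1, 1.0, size=(m, n))
y = rng.uniform(1.0, 5.0, size=m)      # stand-in for the counts
p = P.sum(axis=0)                      # p_j = sum_i p_ij
B = y.sum()

# A feasible lambda: nonnegative, scaled so that sum_j p_j*lam_j = B.
lam = rng.uniform(0.5, 2.0, size=n)
lam *= B / (p @ lam)

# Change of variables x_j = B^{-1} p_j lam_j lands on the standard simplex.
x = p * lam / B
Q = B * P / p                          # q_ij = B p_ij / p_j

f_pet = p @ lam - y @ np.log(P @ lam)  # (PET) objective
f_x = -y @ np.log(Q @ x)               # simplex-form objective
```

On this slice the two objectives differ exactly by the constant B, so minimizing one minimizes the other.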


Another example is ℓ₁ minimization arising in sparsity-oriented signal processing, in partic-
ular, in Compressed Sensing, see Section 1.3.1. The optimization problems arising here can be
reduced to small series of problems of the form

    min_{x: ‖x‖₁ ≤ 1} ‖Ax − b‖_p,     [A ∈ R^{m×n}]

where p = ∞ or p = 2. Representing x ∈ R^n as u − v with nonnegative u, v, we can rewrite this
problem equivalently as

    min_{u,v ≥ 0, Σ_i u_i + Σ_i v_i = 1} ‖A[u − v] − b‖_p;

the domain of this problem is the standard 2n-dimensional simplex ∆2n .
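The u − v splitting is easy to implement; here is a minimal sketch (the helper name is ours) that maps any x with ‖x‖₁ ≤ 1 to a point of ∆_2n by spreading the slack 1 − ‖x‖₁ evenly over u and v:

```python
import numpy as np

def split_to_simplex(x):
    """Map x with ||x||_1 <= 1 to (u, v) >= 0 with sum(u)+sum(v) = 1, u - v = x.

    The slack 1 - ||x||_1 is spread evenly over u and v; adding the same
    amount to both keeps the difference u - v equal to x.
    """
    u = np.maximum(x, 0.0)
    v = np.maximum(-x, 0.0)
    slack = 1.0 - np.abs(x).sum()
    u = u + slack / (2 * x.size)
    v = v + slack / (2 * x.size)
    return u, v

x = np.array([0.3, -0.2, 0.0, 0.1])    # ||x||_1 = 0.6 <= 1
u, v = split_to_simplex(x)
```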

Intermediate conclusion. The discussion above suggests that it is perhaps a good idea to
look for simple convex minimization techniques which, as applied to convex programs (CP)
with feasible sets of appropriate geometry, exhibit dimension-independent (or nearly dimension-
independent) and nearly optimal information-based complexity. We are about to present a
family of techniques of this type.

5.2 The Simplest: Subgradient Descent and Euclidean Bundle Level

5.2.1 Subgradient Descent

The algorithm. The "simplest of the simple" algorithm of nonsmooth convex optimization –
Subgradient Descent (SD), discovered by N. Shor in 1967 – is aimed at solving a convex program

    min_{x∈X} f(x),     (CP)

where

• X is a convex compact set in R^n, and

• f is a convex function which is Lipschitz continuous on X.

SD is the recurrence

    x_{t+1} = Π_X(x_t − γ_t f′(x_t))     [x_1 ∈ X]     (SD)

where

• γ_t > 0 are stepsizes;

• Π_X(x) = argmin_{y∈X} ‖x − y‖₂² is the standard metric projector on X, and

• f′(x) is a subgradient of f at x:

    f(y) ≥ f(x) + (y − x)ᵀ f′(x)   ∀y ∈ X.     (5.2.1)

Note: We always assume that int X ≠ ∅ and that the subgradients f′(x) reported by the
First Order oracle at points x ∈ X satisfy the requirement

    f′(x) ∈ cl {f′(y) : y ∈ int X}.

With this assumption, for every norm ‖·‖ on R^n and for every x ∈ X one has

    ‖f′(x)‖_* ≡ max_{ξ: ‖ξ‖≤1} ξᵀ f′(x) ≤ L_{‖·‖}(f) ≡ sup_{x≠y, x,y∈X} |f(x) − f(y)| / ‖x − y‖.     (5.2.2)

When, why and how SD converges? To analyse convergence of SD, we start with a simple
geometric fact:

(!) Let X ⊂ R^n be a closed convex set, and let x ∈ R^n. Then the vector
e = x − Π_X(x) forms an obtuse angle with every vector of the form y − Π_X(x), y ∈ X:

    (x − Π_X(x))ᵀ (y − Π_X(x)) ≤ 0   ∀y ∈ X.     (5.2.3)

In particular,

    y ∈ X ⇒ ‖y − Π_X(x)‖₂² ≤ ‖y − x‖₂² − ‖x − Π_X(x)‖₂².     (5.2.4)



Indeed, when y ∈ X and 0 ≤ t ≤ 1, the point y_t = Π_X(x) + t(y − Π_X(x)) belongs to X, so that

    φ(t) ≡ ‖[Π_X(x) + t(y − Π_X(x))] − x‖₂² ≥ ‖Π_X(x) − x‖₂² = φ(0),

whence

    0 ≤ φ′(0) = 2(Π_X(x) − x)ᵀ (y − Π_X(x)).

Consequently,

    ‖y − x‖₂² = ‖y − Π_X(x)‖₂² + ‖Π_X(x) − x‖₂² + 2(y − Π_X(x))ᵀ(Π_X(x) − x)
              ≥ ‖y − Π_X(x)‖₂² + ‖Π_X(x) − x‖₂².
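The inequalities (5.2.3)–(5.2.4) are easy to verify numerically; here is a small sketch of ours for X the unit Euclidean ball, where the projector has the closed form x/max(1, ‖x‖₂):

```python
import numpy as np

def proj_ball(x):
    """Metric projection of x onto the unit Euclidean ball."""
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

rng = np.random.default_rng(2)
x = rng.normal(size=6) * 3.0           # a point, typically outside the ball
px = proj_ball(x)

# Check (5.2.3) and (5.2.4) on random points y of the ball.
ys = [proj_ball(rng.normal(size=6)) for _ in range(100)]
ok_523 = all((x - px) @ (y - px) <= 1e-9 for y in ys)
ok_524 = all(np.linalg.norm(y - px) ** 2
             <= np.linalg.norm(y - x) ** 2 - np.linalg.norm(x - px) ** 2 + 1e-9
             for y in ys)
```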

Corollary 5.2.1 With x_t given by (SD), for every u ∈ X one has

    γ_t (x_t − u)ᵀ f′(x_t) ≤ d_t − d_{t+1} + ½ γ_t² ‖f′(x_t)‖₂²,   d_t ≡ ½ ‖x_t − u‖₂².     (5.2.5)

Indeed, by (!) we have

    d_{t+1} ≤ ½ ‖[x_t − u] − γ_t f′(x_t)‖₂² = d_t − γ_t (x_t − u)ᵀ f′(x_t) + ½ γ_t² ‖f′(x_t)‖₂².

Summing up inequalities (5.2.5) over t = T_0, T_0 + 1, ..., T, we get

    Σ_{t=T_0}^T γ_t (f(x_t) − f(u)) ≤ d_{T_0} − d_{T+1} + ½ Σ_{t=T_0}^T γ_t² ‖f′(x_t)‖₂²,   d_{T_0} ≤ Θ ≡ max_{x,y∈X} ½ ‖x − y‖₂².

Setting u = x_* ≡ argmin_X f, we arrive at the bound

    ∀(T, T_0, T ≥ T_0 ≥ 1):   ε_T ≡ min_{t≤T} f(x_t) − f_* ≤ [Θ + ½ Σ_{t=T_0}^T γ_t² ‖f′(x_t)‖₂²] / [Σ_{t=T_0}^T γ_t].     (5.2.6)

Note that ε_T is the non-optimality, in terms of f, of the best (with the smallest value of f)
solution x_bst^T found in course of the first T steps of SD.
Relation (5.2.6) allows us to arrive at various convergence results.

Example 1: "Divergent Series". Let γ_t → 0 as t → ∞, while Σ_t γ_t = ∞. Then

    lim_{T→∞} ε_T = 0.     (5.2.7)

Proof. Set T_0 = 1 and note that

    [Σ_{t=1}^T γ_t² ‖f′(x_t)‖₂²] / [Σ_{t=1}^T γ_t] ≤ L²_{‖·‖₂}(f) · [Σ_{t=1}^T γ_t²] / [Σ_{t=1}^T γ_t] → 0,  T → ∞.

Example 2: "Optimal stepsizes". When

    γ_t = √(2Θ) / (‖f′(x_t)‖₂ √t),     (5.2.8)

one has

    ε_T ≡ min_{t≤T} f(x_t) − f_* ≤ O(1) L_{‖·‖₂}(f) √Θ / √T,   T ≥ 1.     (5.2.9)

Proof. Setting T_0 = ⌊T/2⌋ and plugging (5.2.8) into (5.2.6), we get

    ε_T ≤ [Θ + Θ Σ_{t=T_0}^T 1/t] / [Σ_{t=T_0}^T √(2Θ) / (√t L_{‖·‖₂}(f))]
        ≤ [Θ (1 + O(1))] / [O(1) √Θ √T / L_{‖·‖₂}(f)]
        = O(1) L_{‖·‖₂}(f) √Θ / √T,

where we used Σ_{t=T_0}^T 1/t = O(1) and Σ_{t=T_0}^T 1/√t ≥ O(1) √T.
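For illustration, here is a minimal implementation of ours of (SD) with the stepsize policy (5.2.8), applied to a problem of the kind used in the experiments of this Lecture, min_{‖x‖₂≤1} ‖Ax − b‖₁; for the unit ball Θ = ½·2² = 2, so √(2Θ) = 2. The synthetic instance and its sizes are our own.

```python
import numpy as np

def sd_l1(A, b, T=2000):
    """Projected Subgradient Descent (SD) for min_{||x||_2<=1} ||Ax - b||_1
    with the "optimal" stepsizes (5.2.8); for the unit ball Theta = 2."""
    x = np.zeros(A.shape[1])
    f = lambda z: np.abs(A @ z - b).sum()
    f_best = f(x)
    for t in range(1, T + 1):
        g = A.T @ np.sign(A @ x - b)           # a subgradient of f at x
        gn = np.linalg.norm(g)
        if gn == 0:                            # x is already optimal
            break
        x = x - (2.0 / (gn * np.sqrt(t))) * g  # sqrt(2*Theta) = 2
        nrm = np.linalg.norm(x)
        if nrm > 1.0:                          # metric projection onto the ball
            x /= nrm
        f_best = min(f_best, f(x))
    return f_best

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 50))
b = A @ (rng.normal(size=50) / 8.0)            # target with a small-norm preimage
f_best = sd_l1(A, b)
```

Note that f_best tracks min_{t≤T} f(x_t), exactly the quantity ε_T + f_* bounded in (5.2.9).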

Good news: We have arrived at an efficiency estimate which is dimension-independent, provided
that the "‖·‖₂-variation" of the objective on the feasible domain

    Var_{‖·‖₂,X}(f) = L_{‖·‖₂}(f) max_{x,y∈X} ‖x − y‖₂

is fixed. Moreover, when X is a Euclidean ball in R^n, this efficiency estimate "is as good as
an efficiency estimate of a black-box-oriented method can be", provided that the dimension is
large:

    n ≥ (Var_{‖·‖₂,X}(f) / ε)².

Bad news: Our “dimension-independent” efficiency estimate

• is pretty slow

• is indeed dimension-independent only for problems with “Euclidean geometry” – those


with moderate k · k2 -variation. As a matter of fact, in applications problems of this type
are pretty rare.

A typical "convergence pattern" is exhibited on the plot below:

[Figure: SD as applied to min_{‖x‖₂≤1} ‖Ax − b‖₁, A: 50 × 50; red: efficiency estimate, blue: actual error]

We see that after rapid progress at the initial iterates, the method “gets stuck,” reflecting the
slow convergence rate.

5.2.2 Incorporating memory: Euclidean Bundle Level Algorithm


An evident drawback of SD is that all information on the objective accumulated so far is “sum-
marized” in the current iterate, and this “summary” is very incomplete. With better usage of
past information, one arrives at bundle methods which outperform SD significantly in practice,
while preserving the most attractive theoretical property of SD – dimension-independent and
optimal, in favourable circumstances, rate of convergence.

Bundle-Level algorithm. The Bundle Level algorithm works as follows.

• At the beginning of step t of BL, we have at our disposal

  – the first-order information {f(x_τ), f′(x_τ)}_{1≤τ<t} on f along the previous search points
    x_τ ∈ X, τ < t;

  – the current iterate x_t ∈ X.

• At step t we

  1. compute f(x_t), f′(x_t); this information, along with the past first-order information on f,
     provides us with the current model of the objective

         f_t(x) = max_{τ≤t} [f(x_τ) + (x − x_τ)ᵀ f′(x_τ)].

     This model underestimates the objective and is exact at the points x_1, ..., x_t;

  2. define the best found so far value of the objective f^t = min_{τ≤t} f(x_τ);

  3. define the current lower bound f_t on f_* by solving the auxiliary problem

         f_t = min_{x∈X} f_t(x).     (LP_t)

     Note that the current gap ∆_t = f^t − f_t is an upper bound on the inaccuracy of the best
     found so far approximate solution to the problem;

  4. compute the current level ℓ_t = f_t + λ∆_t (λ ∈ (0, 1) is a parameter);

  5. build a new search point by solving the auxiliary problem

         x_{t+1} = argmin_x {‖x − x_t‖₂² : x ∈ X, f_t(x) ≤ ℓ_t}     (QP_t)

     and loop to step t + 1.
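The bookkeeping in steps 1–3 is straightforward; below is a sketch of ours (a toy objective f(x) = ‖x‖₁ with a hand-picked subgradient selection, not an example from the text) of the bundle and the model f_t, checking that the model underestimates f and is exact at the bundle points:

```python
import numpy as np

# Toy objective with easy subgradients: f(x) = ||x||_1 on R^2.
f = lambda x: np.abs(x).sum()
fp = lambda x: np.sign(x) + (x == 0)   # a valid subgradient selection at 0

class Bundle:
    """Piecewise-linear model f_t(x) = max_tau [f(x_tau) + (x - x_tau)^T f'(x_tau)]."""
    def __init__(self):
        self.pts, self.vals, self.grads = [], [], []

    def add(self, x):
        self.pts.append(x)
        self.vals.append(f(x))
        self.grads.append(fp(x))

    def model(self, x):
        return max(v + g @ (x - p)
                   for p, v, g in zip(self.pts, self.vals, self.grads))

B = Bundle()
for p in [np.array([1.0, 0.5]), np.array([-0.3, 0.8]), np.array([0.2, -0.6])]:
    B.add(p)
```

Solving (LP_t) and (QP_t) over this model is then a small LP/QP whose size is the bundle size t.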

Why and how BL converges? Let us start with several observations.

• The models f_t(x) = max_{τ≤t} [f(x_τ) + (x − x_τ)ᵀ f′(x_τ)] grow with t and underestimate f, while
  the best found so far values f^t of the objective decrease with t and overestimate f_*. Thus,

    f_1 ≤ f_2 ≤ f_3 ≤ ... ≤ f_*,
    f^1 ≥ f^2 ≥ f^3 ≥ ... ≥ f_*,
    ∆_1 ≥ ∆_2 ≥ ... ≥ 0.

• Let us say that a group of subsequent iterations J = {s, s + 1, ..., r} forms a segment, if
  ∆_r ≥ (1 − λ)∆_s. We claim that

  If J = {s, s + 1, ..., r} is a segment, then

  (i) all the sets L_t = {x ∈ X : f_t(x) ≤ ℓ_t}, t ∈ J, have a point in common, specifically, (any)
      minimizer u of f_r(·) over X;

  (ii) for t ∈ J, one has

      ‖x_t − x_{t+1}‖₂ ≥ (1 − λ)∆_r / L_{‖·‖₂}(f).

  Indeed,

  (i): for t ∈ J we have

      f_t(u) ≤ f_r(u) = f_r = f^r − ∆_r ≤ f^t − ∆_r ≤ f^t − (1 − λ)∆_s ≤ f^t − (1 − λ)∆_t = ℓ_t.

  (ii): We have f_t(x_t) = f(x_t) ≥ f^t, and f_t(x_{t+1}) ≤ ℓ_t = f^t − (1 − λ)∆_t. Thus, when
  passing from x_t to x_{t+1}, the t-th model decreases by at least (1 − λ)∆_t ≥ (1 − λ)∆_r. It
  remains to note that f_t(·) is Lipschitz continuous w.r.t. ‖·‖₂ with constant L_{‖·‖₂}(f).

• Main observation:

  (!) The cardinality of a segment J = {s, s + 1, ..., r} of iterations can be bounded as follows:

      Card(J) ≤ Var²_{‖·‖₂,X}(f) / ((1 − λ)² ∆_r²).

  Indeed, when t ∈ J, the sets L_t = {x ∈ X : f_t(x) ≤ ℓ_t} have a point u in common, and x_{t+1} is
  the projection of x_t onto L_t. It follows that

      ‖x_{t+1} − u‖₂² ≤ ‖x_t − u‖₂² − ‖x_t − x_{t+1}‖₂²   ∀t ∈ J

      ⇒ Σ_{t∈J} ‖x_t − x_{t+1}‖₂² ≤ ‖x_s − u‖₂² ≤ max_{x,y∈X} ‖x − y‖₂²

      ⇒ Card(J) ≤ [max_{x,y∈X} ‖x − y‖₂²] / [min_{t∈J} ‖x_t − x_{t+1}‖₂²]

      ⇒ Card(J) ≤ L²_{‖·‖₂}(f) max_{x,y∈X} ‖x − y‖₂² / ((1 − λ)² ∆_r²)   [by (ii)]

Corollary 5.2.2 For every ε, 0 < ε < ∆_1, the number N of steps before a gap ≤ ε is obtained
(i.e., before an ε-solution is found) does not exceed the bound

    N(ε) = Var²_{‖·‖₂,X}(f) / (λ(1 − λ)²(2 − λ)ε²).     (5.2.10)
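The bound (5.2.10) suggests choosing λ to maximize λ(2 − λ)(1 − λ)²; elementary calculus gives the maximizer λ = 1 − √2/2 ≈ 0.2929, which a quick grid search confirms (a small check of ours, not from the text):

```python
import numpy as np

# The iteration bound (5.2.10) is Var^2 / (lam*(1-lam)^2*(2-lam)*eps^2), so
# minimizing it over lam means maximizing g(lam) = lam*(2-lam)*(1-lam)^2.
lam = np.linspace(1e-4, 1 - 1e-4, 100001)
g = lam * (2 - lam) * (1 - lam) ** 2
lam_best = lam[np.argmax(g)]
# Setting g'(lam) = 2(1-lam)(2*lam^2 - 4*lam + 1) = 0 gives lam* = 1 - sqrt(2)/2.
```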

Proof. Assume that N is such that ∆_N > ε, and let us bound N from above. Let us split the
set of iterations I = {1, ..., N} into segments J_1, ..., J_m as follows:

J_1 is the maximal segment which ends with iteration N:

    J_1 = {t : t ≤ N, (1 − λ)∆_t ≤ ∆_N}.

J_1 is a certain group of subsequent iterations {s_1, s_1 + 1, ..., N}. If J_1 differs from I, i.e.,
s_1 > 1, we define J_2 as the maximal segment which ends with iteration s_1 − 1:

    J_2 = {t : t ≤ s_1 − 1, (1 − λ)∆_t ≤ ∆_{s_1−1}} = {s_2, s_2 + 1, ..., s_1 − 1}.

If J_1 ∪ J_2 differs from I, i.e., s_2 > 1, we define J_3 as the maximal segment which ends
with iteration s_2 − 1:

    J_3 = {t : t ≤ s_2 − 1, (1 − λ)∆_t ≤ ∆_{s_2−1}} = {s_3, s_3 + 1, ..., s_2 − 1},

and so on.

As a result, I will be partitioned "from the end to the beginning" into segments of iterations
J_1, J_2, ..., J_m. Let d_ℓ be the gap corresponding to the last iteration from J_ℓ. By maximality of
the segments J_ℓ, we have

    d_1 ≥ ∆_N > ε,
    d_{ℓ+1} > (1 − λ)^{−1} d_ℓ,   ℓ = 1, 2, ..., m − 1,

whence

    d_ℓ > (1 − λ)^{−(ℓ−1)} ε.

We now have

    N = Σ_{ℓ=1}^m Card(J_ℓ) ≤ Σ_{ℓ=1}^m Var²_{‖·‖₂,X}(f) / ((1 − λ)² d_ℓ²)
      ≤ [Var²_{‖·‖₂,X}(f) / ((1 − λ)² ε²)] Σ_{ℓ=1}^m (1 − λ)^{2(ℓ−1)}
      ≤ [Var²_{‖·‖₂,X}(f) / ((1 − λ)² ε²)] Σ_{ℓ=1}^∞ (1 − λ)^{2(ℓ−1)}
      = Var²_{‖·‖₂,X}(f) / ((1 − λ)² [1 − (1 − λ)²] ε²) = N(ε).

Discussion. 1. We have seen that the Bundle-Level method shares the dimension-
independent (and optimal in the “favourable” large-scale case) theoretical complexity bound

For every  > 0, the number of steps before an -solution to convex program min f (x)
x∈X
 2
Vark·k2 ,X (f )
is found, does not exceed O(1)  .

There exists quite convincing experimental evidence that the Bundle-Level method obeys the
optimal in fixed dimension “polynomial time” complexity bound

For every ε ∈ (0, Var_X(f) ≡ max_X f − min_X f), the number of steps before an ε-solution
to convex program min_{x∈X⊂ℝⁿ} f(x) is found does not exceed n ln(Var_X(f)/ε) + 1.

Experimental rule: When solving a convex program with n variables by BL, every n steps add a
new accuracy digit.
5.2. THE SIMPLEST: SUBGRADIENT DESCENT AND EUCLIDEAN BUNDLE LEVEL 401

Illustration: Consider the problem
$$f_* = \min_{x:\|x\|_2\le 1} f(x) \equiv \|Ax - b\|_1,$$
with a somehow generated 50 × 50 matrix A; in the problem, f(0) = 2.61, f_* = 0. This is how SD
and BL work on this problem:
[Figure: Bundle-Level vs. Subgradient Descent on the above problem. SD, accuracy vs. iteration
count (blue: errors; red: efficiency estimate 3 Var_{‖·‖₂,X}(f)/√t): ε(10000) = 0.084. BL, accuracy
vs. iteration count (blue: errors; red: efficiency estimate e^{−t/n} Var_X(f)): ε(233) < 1.e−4.]
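A run of this kind is easy to reproduce in outline. Below is a minimal sketch (assuming NumPy; the instance A, b is generated randomly here rather than taken from the experiment above) of projected Subgradient Descent on the ℓ₁-fit problem over the unit Euclidean ball:

```python
import numpy as np

def sd_l1_fit(A, b, T=3000):
    """Projected Subgradient Descent for min ||Ax - b||_1 over the unit
    Euclidean ball, with stepsizes gamma_t = 1/(||g_t||_2 sqrt(t));
    returns the best objective value found within T steps."""
    x = np.zeros(A.shape[1])                # start at the center of the ball
    f = lambda x: np.abs(A @ x - b).sum()
    best = f(x)
    for t in range(1, T + 1):
        g = A.T @ np.sign(A @ x - b)        # a subgradient of ||Ax - b||_1 at x
        gn = np.linalg.norm(g)
        if gn == 0:
            break                           # x is already optimal
        x = x - g / (gn * np.sqrt(t))       # subgradient step
        nx = np.linalg.norm(x)
        if nx > 1:
            x = x / nx                      # metric projection onto {||x||_2 <= 1}
        best = min(best, f(x))
    return best

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
xs = rng.standard_normal(50); xs /= 2 * np.linalg.norm(xs)   # ||xs||_2 = 1/2
b = A @ xs                                  # so that f_* = 0 is attained inside the ball
print(sd_l1_fit(A, b))                      # slowly decaying residual, as is typical for SD
```

As in the plots, the decay is slow: the guarantee is only O(1/√t), which is exactly what motivates BL and the methods of the next section.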

2. In BL, the number of linear constraints in the auxiliary problems
$$f_t = \min_{x\in X} f_t(x) \qquad (LP_t)$$
$$x_{t+1} = \operatorname*{argmin}_x\big\{\|x_t - x\|_2^2 : x \in X,\ f_t(x) \le \ell_t\big\} \qquad (QP_t)$$
is equal to the size t of the current bundle – the collection of affine forms gτ(x) = f(xτ) +
(x − xτ)ᵀf′(xτ) participating in the model f_t(·). Thus, the complexity of an iteration in BL grows
with the iteration number. In order to suppress this phenomenon, one needs a mechanism for
shrinking the bundle (and thus – simplifying the models of f).
The simplest way of shrinking the bundle is to initialize d as ∆₁ and to run plain BL until an
iteration t with ∆t ≤ d/2 is met. At such an iteration, we
• shrink the current bundle, keeping in it the minimum number of the forms gτ sufficient to
ensure that
$$f_t \equiv \min_{x\in X}\,\max_{1\le\tau\le t} g_\tau(x) = \min_{x\in X}\,\max_{\text{selected }\tau} g_\tau(x)$$

(this number is at most n), and



• reset d as ∆t ,
and proceed with plain BL until the gap is again reduced by factor 2, etc.
Computational experience demonstrates that the outlined approach does not slow BL down,
while keeping the size of the bundle below the level of about 2n.

What is ahead. Subgradient Descent looks very geometric, which simplifies “digesting” the
construction. However, on a closer inspection this “geometricity” is completely misleading;
the “actual” reasons for convergence have nothing to do with projections, distances, etc. What
in fact is going on is essentially less transparent and will be explained in the next section.

5.3 Mirror Descent algorithm


5.3.1 Problem and assumptions
In the sequel, we focus on the problem
$$\min_{x\in X} f(x) \qquad (CP)$$
where X is a closed convex set in a Euclidean space E with inner product ⟨·,·⟩ (unless explicitly
stated otherwise, we assume w.l.o.g. E to be just ℝⁿ) and assume that

(A.1): The (convex) objective f is Lipschitz continuous on X.

To quantify this assumption, we fix once and for all a norm ‖·‖ on E and associate with f the
Lipschitz constant of f|_X w.r.t. the norm ‖·‖:
$$L_{\|\cdot\|}(f) = \min\left\{L : |f(x) - f(y)| \le L\|x - y\|\ \ \forall x, y \in X\right\}.$$

Note that from Convex Analysis it follows that f at every point x ∈ X admits a subgradient
f 0 (x) such that
kf 0 (x)k∗ ≤ Lk·k (f ),

where k · k∗ is the norm conjugate to k · k:

kξk∗ = max{hξ, xi : kxk ≤ 1}.


For example, the norm conjugate to ‖·‖_p is ‖·‖_q, where q = p/(p − 1).
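This conjugacy can be sanity-checked numerically. A small sketch (assuming NumPy; the dimension and exponent are illustrative choices) verifying both Hölder's inequality, i.e. the "≤" hidden in the definition of the conjugate norm, and a choice of ξ that attains it:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 1.5
q = p / (p - 1.0)                              # conjugate exponent: 1/p + 1/q = 1

for _ in range(100):
    x, xi = rng.standard_normal(10), rng.standard_normal(10)
    lhs = abs(x @ xi)                          # |<x, xi>|
    rhs = np.linalg.norm(x, p) * np.linalg.norm(xi, q)
    assert lhs <= rhs + 1e-9                   # Holder: |<x,xi>| <= ||x||_p ||xi||_q

# tightness: xi_i = sign(x_i)|x_i|^{p-1} attains <x, xi> = ||x||_p ||xi||_q
x = rng.standard_normal(10)
xi = np.sign(x) * np.abs(x) ** (p - 1)
assert np.isclose(x @ xi, np.linalg.norm(x, p) * np.linalg.norm(xi, q))
print("ok")
```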
We assume that this “small norm” subgradient f′(x) is exactly the one reported by the First
Order oracle as called with input x ∈ X:

kf 0 (x)k∗ ≤ Lk·k (f ) ∀x ∈ X; (5.3.1)

this is not a severe restriction, since at least in the interior of X all subgradients of f are “small”
in the outlined sense.
Note that by the definition of the conjugate norm, we have

|hx, ξi| ≤ kxkkξk∗ ∀x, ξ ∈ Rn . (5.3.2)



5.3.2 Proximal setup


The setup for the forthcoming generic algorithms for solving (CP) is given by

1. The domain X of the problem,

2. A norm k · k on the space Rn embedding X; this is the norm participating in (A.1);

3. A distance generating function (DGF for short) ω(x) : X → R – a convex continuously
differentiable function on X which is compatible with ‖·‖, compatibility meaning that ω
is strongly convex, with modulus of strong convexity 1 w.r.t. ‖·‖, on X:
$$\forall x, y \in X:\ \langle\omega'(x) - \omega'(y), x - y\rangle \ge \|x - y\|^2,$$

or, equivalently,
$$\omega(y) \ge \omega(x) + \langle\omega'(x), y - x\rangle + \tfrac12\|x - y\|^2 \quad \forall(x, y \in X). \qquad (5.3.3)$$
Note: When ω(·) is twice continuously differentiable on X, compatibility of ω(·) with ‖·‖
is equivalent to the relation

hh, ω 00 (x)hi ≥ khk2 ∀(x ∈ X, h ∈ E),

where ω 00 (x) is the Hessian of ω at x.
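For a twice differentiable DGF this criterion is easy to probe numerically. A sketch (assuming NumPy; the entropy example anticipates the Entropy setup of section 5.3.3.2): for ω(x) = Σᵢ xᵢ ln xᵢ on the relative interior of the simplex one has ω″(x) = Diag{1/xᵢ}, and the criterion w.r.t. ‖·‖₁ reduces, by Cauchy-Schwarz, to Σᵢ hᵢ²/xᵢ ≥ ‖h‖₁²:

```python
import numpy as np

rng = np.random.default_rng(6)
# For omega(x) = sum_i x_i ln x_i on the simplex, omega''(x) = Diag{1/x_i}, and
#   <h, omega''(x) h> = sum_i h_i^2 / x_i >= (sum_i |h_i|)^2 = ||h||_1^2
# by Cauchy-Schwarz (since sum_i x_i = 1): entropy is strongly convex,
# modulus 1, w.r.t. ||.||_1.
for _ in range(100):
    x = rng.random(6) + 1e-3
    x /= x.sum()                               # a point in the simplex interior
    h = rng.standard_normal(6)                 # an arbitrary direction
    assert np.sum(h ** 2 / x) >= np.linalg.norm(h, 1) ** 2 - 1e-9
print("ok")
```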

The outlined setup (which we refer to as Proximal setup) specifies several other important
entities, specifically

• The ω-center of X — the point

xω = argmin ω(x).
x∈X

This point does exist (since X is closed and convex, and ω is continuous and strongly
convex on X) and is unique due to the strong convexity of ω. Besides this, by optimality
conditions
hω 0 (xω ), x − xω i ≥ 0 ∀x ∈ X. (5.3.4)
Note that (5.3.4) admits an extension as follows:

Fact 5.3.1 Let X, k · k, ω(·) be as above and let U be a nonempty closed convex subset
of X. Then for every p, the minimizer x∗ of hp (x) = hp, xi + ω(x) over U exists and is
unique, and is fully characterized by the relations x∗ ∈ U and

hp + ω 0 (x∗ ), u − x∗ i ≥ 0 ∀u ∈ U. (5.3.5)

The proof is readily given by optimality conditions.

• The prox-term (or local distance, or Bregman distance from x to y), defined as

Vx (y) = ω(y) − hω 0 (x), y − xi − ω(x); [x, y ∈ X]

note that Vx(x) = 0 and Vx(y) ≥ ½‖y − x‖² by (5.3.3);
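As a concrete instance: for the plain entropy DGF ω(x) = Σᵢ xᵢ ln xᵢ on the simplex (the δ = 0 version of the Entropy setup of section 5.3.3.2), the Bregman distance is the Kullback-Leibler divergence, and the bound Vx(y) ≥ ½‖y − x‖₁² is Pinsker's inequality. A quick numerical check (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(2)

def bregman_entropy(x, y):
    # V_x(y) = omega(y) - omega(x) - <omega'(x), y - x> with omega(x) = sum x_i ln x_i;
    # on the simplex this simplifies to the KL divergence KL(y || x).
    return np.sum(y * np.log(y / x))

for _ in range(100):
    x = rng.random(8) + 1e-3; x /= x.sum()
    y = rng.random(8) + 1e-3; y /= y.sum()
    V = bregman_entropy(x, y)
    # Pinsker: KL(y||x) >= (1/2) ||y - x||_1^2, i.e. (5.3.3) with ||.|| = ||.||_1
    assert V >= 0.5 * np.linalg.norm(y - x, 1) ** 2 - 1e-12
print("ok")
```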



• The quantity
$$\Theta = \sup_{x,y\in X} V_x(y) = \sup_{x,y\in X}\big[\omega(y) - \omega(x) - \langle\omega'(x), y - x\rangle\big] \ge \sup_{x,y\in X}\tfrac12\|y - x\|^2,$$
where the concluding inequality is due to (5.3.3). Note that Θ is finite if and only if X is
bounded. For a bounded X, the quantity
$$\Omega = \sqrt{2\Theta}$$
will be called the ω-diameter of X. Observe that from Vx(y) ≥ ½‖y − x‖² it follows
that ‖y − x‖² ≤ 2Vx(y) ≤ Ω² whenever x, y ∈ X, that is, Ω upper-bounds the ‖·‖-diameter
max_{x,y∈X} ‖x − y‖ of X.

Remark 5.3.1 The importance of the ω-diameter of a domain X stems from the fact that,
as we shall see in a while, the efficiency estimate (the dependence of accuracy on
iteration count) of the Mirror Descent algorithm as applied to problem (CP) on a bounded
domain X depends on the underlying proximal setup solely via the value Ω of the ω-
diameter of X (the smaller this diameter, the better).

• Prox-mapping

$$\mathrm{Prox}_x(\xi) = \operatorname*{argmin}_{y\in X}\{V_x(y) + \langle\xi, y\rangle\} = \operatorname*{argmin}_{y\in X}\{\omega(y) + \langle\xi - \omega'(x), y\rangle\}:\ \mathbb{R}^n \to X.$$

Here the parameter of the mapping — prox-center x – is a point from X. Note that by
construction the prox-mapping takes its values in X.
Sometimes we shall use a modification of the prox-mapping as follows:
$$\mathrm{Prox}^U_x(\xi) = \operatorname*{argmin}_{y\in U}\{V_x(y) + \langle\xi, y\rangle\} = \operatorname*{argmin}_{y\in U}\{\omega(y) + \langle\xi - \omega'(x), y\rangle\}:\ \mathbb{R}^n \to U,$$

where U is a closed convex subset of X. Here the prox-center x is again a point from X.
By Fact 5.3.1, the latter mapping takes its values in U and satisfies the relation

hξ − ω 0 (x) + ω 0 (x∗ ), u − x∗ i ≥ 0 ∀u ∈ U. (5.3.6)

Note that the methods we are about to present require at every step computing one (or two)
values of the prox-mapping, and in order for the iterations to be simple, this task should be
relatively easy. In other words, from the viewpoint of implementation, we need X and ω to be
“simple” and to “match each other.”

5.3.3 Standard proximal setups


We are about to list the standard proximal setups we will be working with. Justification of
these setups (i.e., verifying that the DGFs ω listed below are compatible with the corresponding
norms, and that the ω-diameters of the domains to follow admit the bounds we are about to
announce) is relegated to section 5.9.

5.3.3.1 Ball setup

The Ball (called also the Euclidean) setup is as follows: E is a Euclidean space, ‖x‖ = ‖x‖₂ :=
√⟨x, x⟩, ω(x) = ½⟨x, x⟩.
Here Vx(y) = ½‖x − y‖₂², and xω is the ‖·‖₂-closest to the origin point of X. For a bounded X,
Ω = max_{x,y∈X} ‖x − y‖₂ is exactly the ‖·‖₂-diameter of X. The prox-mapping is

Proxx (ξ) = ΠX (x − ξ),

where
ΠX (h) = argmin kh − yk2
y∈X

is what is called the metric projector onto X. This projector is easy to compute when X is a
‖·‖_p-ball, or the “nonnegative part” of a ‖·‖_p-ball – the set {x ≥ a : ‖x − a‖_p ≤ r}, or the
standard simplex
$$\Delta_n = \{x \in \mathbb{R}^n : x \ge 0,\ \textstyle\sum_i x_i = 1\},$$
or the full-dimensional standard simplex
$$\Delta_n^+ = \{x \in \mathbb{R}^n : x \ge 0,\ \textstyle\sum_i x_i \le 1\}.$$

Finally, the norm conjugate to k · k2 is k · k2 itself.
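For instance, the metric projection onto the standard simplex ∆n admits the classical O(n log n) sort-based computation; a minimal sketch (assuming NumPy):

```python
import numpy as np

def project_simplex(h):
    """Euclidean (metric) projection of h onto the standard simplex
    {x >= 0, sum_i x_i = 1}, via the classical sort-based scheme:
    the projection is max(h - theta, 0) for a suitable threshold theta."""
    u = np.sort(h)[::-1]                       # coordinates in non-increasing order
    css = np.cumsum(u)
    # largest index rho (0-based) with u_rho * (rho+1) > css_rho - 1
    rho = np.nonzero(u * np.arange(1, len(h) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(h - theta, 0.0)

x = project_simplex(np.array([0.2, 1.4, -0.3]))
assert abs(x.sum() - 1) < 1e-12 and (x >= 0).all()
print(x)
```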

5.3.3.2 Entropy setup

For Entropy setup, E = ℝⁿ, ‖·‖ = ‖·‖₁, X is a closed convex subset of the full-dimensional
standard simplex ∆⁺n, and ω(x) is the regularized entropy
$$\omega(x) = (1+\delta)\sum_{i=1}^n (x_i + \delta/n)\ln(x_i + \delta/n):\ \Delta_n^+ \to \mathbb{R}, \qquad (5.3.7)$$

where the “regularizing parameter” δ > 0 (introduced to make ω continuously differentiable on
∆⁺n) is small; for the sake of definiteness, from now on we set δ = 10⁻¹⁶.
The ω-center can be easily pointed out when X is either the standard simplex ∆n, or the
full-dimensional standard simplex ∆⁺n: when X = ∆n, one has
$$x_\omega = [1/n; ...; 1/n],$$
and the same holds true when X = ∆⁺n and n ≥ 3.


An important fact is that for Entropy setup, the ω-diameter of X ⊂ ∆+
n is nearly dimension-
independent, specifically,
q
Ω ≤ O(1) ln(n + 1), (5.3.8)

where O(1) is a positive absolute constant. Finally, note that the norm conjugate to k · k1 is the
uniform norm k · k∞ .
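For the plain (δ = 0) entropy on X = ∆n, the prox-mapping of section 5.3.2 has a closed form: the familiar multiplicative-weights update, with Prox_x(ξ) proportional, coordinate-wise, to xᵢ e^{−ξᵢ}. A sketch (assuming NumPy; the δ = 0 simplification is an assumption, the text's regularized DGF (5.3.7) differs negligibly for δ = 10⁻¹⁶):

```python
import numpy as np

def entropy_prox(x, xi):
    """Prox-mapping on the simplex for the plain (delta = 0) entropy DGF:
    argmin_{y in Delta_n} { <xi, y> + V_x(y) }  has entries proportional
    to  x_i * exp(-xi_i)  (multiplicative-weights update)."""
    w = x * np.exp(-(xi - xi.min()))    # shift by min(xi) for numerical stability
    return w / w.sum()

x = np.full(4, 0.25)                    # the omega-center of Delta_4
y = entropy_prox(x, np.array([0.0, 1.0, 2.0, 3.0]))
assert abs(y.sum() - 1) < 1e-12 and (np.diff(y) < 0).all()   # mass moves to small xi_i
print(y)
```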

5.3.3.3 `1 /`2 and Simplex setups


Now let E = Rk1 +...+kn = Rk1 × ... × Rkn , so that x ∈ E can be represented as a block vector
with n blocks xi [x] ∈ Rki . The norm k · k is the so called `1 /`2 , or block-`1 , norm:
n
X
kxk = kxi [x]k2 .
i=1
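The conjugate of the block-ℓ₁ norm is the corresponding "block-ℓ∞" norm maxᵢ ‖ξⁱ‖₂ (a standard fact, not spelled out in the text above); this instance of (5.3.2) is easy to verify numerically. A sketch (assuming NumPy; block sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def block_l1(blocks):        # ||x|| = sum_i ||x^i||_2  (the l1/l2 norm)
    return sum(np.linalg.norm(b) for b in blocks)

def block_linf(blocks):      # its conjugate: max_i ||xi^i||_2
    return max(np.linalg.norm(b) for b in blocks)

for _ in range(100):
    x  = [rng.standard_normal(k) for k in (2, 3, 4)]
    xi = [rng.standard_normal(k) for k in (2, 3, 4)]
    inner = sum(a @ b for a, b in zip(x, xi))       # <x, xi> over E = R^{2+3+4}
    # per-block Cauchy-Schwarz plus Holder across blocks gives (5.3.2):
    assert abs(inner) <= block_l1(x) * block_linf(xi) + 1e-9
print("ok")
```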

Bounded case. When X is a closed convex subset of the unit ball Z = {x ∈ E : ‖x‖ ≤ 1},
which we refer to as bounded case¹, a good choice of ω is
$$\omega(x = [x^1;...;x^n]) = \frac{1}{p\gamma}\sum_{j=1}^n \|x^j\|_2^p,\quad p = \left\{\begin{array}{ll}2, & n \le 2\\ 1 + \frac{1}{\ln n}, & n \ge 3\end{array}\right.,\quad \gamma = \left\{\begin{array}{ll}1, & n = 1\\ 1/2, & n = 2\\ \frac{1}{e\ln n}, & n > 2\end{array}\right. \qquad (5.3.9)$$
which results in
$$\Omega \le O(1)\sqrt{\ln(n+1)}. \qquad (5.3.10)$$
General case. It turns out that the function
$$\hat\omega(x = [x^1;...;x^n]) = \frac{n^{(p-1)(2-p)/p}}{2\gamma}\Big[\sum_{j=1}^n \|x^j\|_2^p\Big]^{2/p}:\ E \to \mathbb{R} \qquad (5.3.11)$$
with p, γ given by (5.3.9) is a DGF for the entire E compatible with the norm ‖·‖, so that its
restriction to an arbitrary closed convex nonempty subset X of E is a DGF for X compatible
with ‖·‖. When, in addition, X is bounded, the ω̂|_X-radius of X can be bounded as
$$\Omega \le O(1)\sqrt{\ln(n+1)}\,R, \qquad (5.3.12)$$
where R < ∞ is such that X ⊂ {x ∈ E : ‖x‖ ≤ R}.


Computing the prox-mapping is easy when, e.g.,

A. X = {x : kxk ≤ 1} and the DGF (5.3.9) is used, or

B. X = E and the DGF (5.3.11) is used.

Indeed, in the case of A, computing the prox-mapping reduces to solving a convex program of the
form
$$\min_{y^1,...,y^n}\Big\{\sum_{j=1}^n \big[\langle y^j, b^j\rangle + \|y^j\|_2^p\big] :\ \sum_j \|y^j\|_2 \le 1\Big\}; \qquad (P)$$
clearly, at the optimum the y^j are nonpositive multiples of the b^j, which immediately reduces the
problem to
$$\min_{s_1,...,s_n}\Big\{\sum_j \big[s_j^p - \beta_j s_j\big] :\ s \ge 0,\ \sum_j s_j \le 1\Big\}, \qquad (P')$$

where βⱼ ≥ 0 are given and p > 1. The latter problem is just a problem of minimizing a
single separable convex function under a single separable constraint. The Lagrange dual of this
problem is just a univariate convex program with easy to compute (just O(n) a.o.) value and
derivative of the objective. As a result, (P′) can be solved within machine accuracy in O(n)
a.o., and therefore (P) can be solved within machine accuracy in O(dim E) a.o.

¹ Note that whenever X is bounded, we can make X a subset of B_{ℓ₁/ℓ₂} by shift and scaling.
In the case of B, a similar argument reduces computing the prox-mapping to solving the problem
$$\min_{s_1,...,s_n}\Big\{\|[s_1;...;s_n]\|_p^2 - \sum_j \beta_j s_j :\ s \ge 0\Big\},$$
where β ≥ 0. This problem admits a closed form solution (find it!), and here again computing the
prox-mapping takes just O(dim E) a.o.
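The reduction just described is easy to implement. A minimal sketch (assuming NumPy) of solving (P′) via its Lagrange dual, here by plain bisection in the multiplier μ of the budget constraint (the text's univariate scheme would do the same job faster):

```python
import numpy as np

def solve_P_prime(beta, p, tol=1e-12):
    """Solve (P'):  min_s { sum_j (s_j^p - beta_j s_j) : s >= 0, sum_j s_j <= 1 }
    for beta >= 0, p > 1.  The Lagrangian minimizer for multiplier mu >= 0 is
    s_j(mu) = ((beta_j - mu)_+ / p)^{1/(p-1)}, and sum_j s_j(mu) decreases in mu."""
    beta = np.asarray(beta, dtype=float)

    def s_of(mu):                          # component-wise Lagrangian minimizer
        return (np.maximum(beta - mu, 0.0) / p) ** (1.0 / (p - 1.0))

    if s_of(0.0).sum() <= 1.0:             # budget constraint inactive
        return s_of(0.0)
    lo, hi = 0.0, beta.max()               # s(max beta) = 0, so root is in [lo, hi]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if s_of(mid).sum() > 1.0:
            lo = mid
        else:
            hi = mid
    return s_of(hi)

s = solve_P_prime([3.0, 2.0, 0.1], p=1.5)
assert (s >= 0).all() and s.sum() <= 1 + 1e-9
print(s)
```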

Comments. When n = 1, the ℓ1/ℓ2 norm becomes nothing but the Euclidean norm ‖·‖₂
on E = ℝ^{k₁}, and the proximal setups we have just described become nothing but the Ball
setup. As another extreme, consider the case when k₁ = ... = kₙ = 1, that is, the ℓ1/ℓ2 norm
becomes the plain ‖·‖₁-norm on E = ℝⁿ. Now let X be a closed convex subset of the unit
‖·‖₁-ball of ℝⁿ; restricting the DGF (5.3.9) for the latter set onto X, we get what we shall call
Simplex setup. Note that when X is a part of the full-dimensional simplex ∆⁺n, Simplex setup
can be viewed as an alternative to Entropy setup. Note that in the case in question both Entropy
and Simplex setups yield basically identical bounds on the ω-diameter of X (and thus result in
algorithms with basically identical efficiency estimates, see Remark 5.3.1).

5.3.3.4 Nuclear norm and Spectahedron setups


The concluding pair of “standard” proximal setups we are about to consider deals with the case
when E is a space of matrices; here it makes sense to speak about block-diagonal matrices of a
given block-diagonal structure. Specifically,

• Given a collection µ, ν of n pairs of positive integers µi , νi , let Mµν be the space of all real
matrices x = Diag{x1 , ..., xn } with n diagonal blocks xi of sizes µi × νi , i = 1, ..., n. We
equip this space with natural linear operations and Frobenius inner product

hx, yi = Tr(xy T ).

The nuclear norm ‖·‖_nuc on M^{µν} is defined as
$$\|x = \mathrm{Diag}\{x^1,...,x^n\}\|_{\mathrm{nuc}} = \|\sigma(x)\|_1 = \sum_{i=1}^n \|\sigma(x^i)\|_1,$$

where σ(y) is the collection of singular values of a matrix y.


Note: We always assume that µᵢ ≤ νᵢ for all i ≤ n.² With this agreement, we can
think of σ(x), x ∈ M^{µν}, as of an m = Σ_{i=1}^n µᵢ-dimensional vector with entries arranged in the
non-ascending order.

• Given a collection ν of n positive integers νᵢ, i = 1, ..., n, let Sν be the space of symmetric
block-diagonal matrices with n diagonal blocks of sizes νᵢ × νᵢ, i = 1, ..., n. Sν = S^{ν₁} × ... ×
S^{νₙ} is just a subspace in M^{νν}; the restriction of the nuclear norm onto Sν is sometimes
called the trace norm on Sν; it is nothing but the ‖·‖₁-norm of the vector of eigenvalues
of a matrix x ∈ Sν.
² When this is not the case, we can replace “badly sized” diagonal blocks with their transposes, with no effect
on the results of linear operations, inner products, and nuclear norm – the only entities we are interested in.

Nuclear norm setup. Here E = M^{µν}, µᵢ ≤ νᵢ, 1 ≤ i ≤ n, and ‖·‖ is the nuclear norm ‖·‖_nuc
on E. We set
$$m = \sum_{i=1}^n \mu_i,$$
so that m is thus the smaller size of a matrix from E.
Note that the norm ‖·‖∗ conjugate to ‖·‖_nuc is the usual spectral norm (the largest singular value) of a matrix.
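This duality between the nuclear and the spectral norm, i.e., the matrix analogue of (5.3.2), can be verified numerically; a sketch (assuming NumPy; a single dense block is used for simplicity):

```python
import numpy as np

rng = np.random.default_rng(4)
for _ in range(50):
    x, xi = rng.standard_normal((5, 7)), rng.standard_normal((5, 7))
    inner = np.trace(x @ xi.T)                        # Frobenius inner product <x, xi>
    nuc = np.linalg.svd(x, compute_uv=False).sum()    # ||x||_nuc = ||sigma(x)||_1
    spec = np.linalg.svd(xi, compute_uv=False).max()  # ||xi||_* : spectral norm
    # von Neumann's trace inequality gives |<x,xi>| <= ||x||_nuc ||xi||_*
    assert abs(inner) <= nuc * spec + 1e-9
print("ok")
```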
Bounded case. When X is a closed convex subset of the unit ball Z = {x ∈ E : ‖x‖ ≤ 1},
which we refer to as bounded case, a good choice of ω is
$$\omega(x) = \frac{4\sqrt{e}\ln(2m)}{2(1+q)}\sum_{i=1}^m \sigma_i^{1+q}(x):\ X \to \mathbb{R},\quad q = \frac{1}{2\ln(2m)}, \qquad (5.3.13)$$
which results in
$$\Omega \le 2\sqrt{2\sqrt{e}\ln(2m)} \le 4\sqrt{\ln(2m)}. \qquad (5.3.14)$$
General case. It turns out that the function
$$\hat\omega(x) = 2e\ln(2m)\Big[\sum_{j=1}^m \sigma_j^{1+q}(x)\Big]^{\frac{2}{1+q}}:\ E \to \mathbb{R},\quad q = \frac{1}{2\ln(2m)} \qquad (5.3.15)$$
is a DGF for Z = E compatible with ‖·‖_nuc, so that the restriction of ω̂ on a nonempty closed
and convex subset X of E is a DGF for X compatible with ‖·‖_nuc. When X is bounded, we
have
$$\Omega \le O(1)\sqrt{\ln(2m)}\,R, \qquad (5.3.16)$$
where R < ∞ is such that X ⊂ {x ∈ E : ‖x‖_nuc ≤ R}.
It is clear how to compute the associated prox mappings in the cases when

A. X = {x : kxk ≤ 1} and the DGF (5.3.13) is used, or

B. X = E and the DGF (5.3.15) is used.

Indeed, let us start with the case of A, where computing the prox-mapping reduces to solving
the convex program
$$\min_{y\in \mathrm{M}^{\mu\nu}}\Big\{\mathrm{Tr}(y\eta^T) + \sum_{i=1}^m \sigma_i^p(y) :\ \sum_i \sigma_i(y) \le 1\Big\} \qquad (Q)$$

Let η = uhvᵀ be the singular value decomposition of η, so that u ∈ M^{µµ}, v ∈ M^{νν} are
block-diagonal orthogonal matrices, and h ∈ M^{µν} is a block-diagonal matrix with diagonal blocks
h^j = [Diag{g^j}, 0_{µⱼ×[νⱼ−µⱼ]}], g^j ∈ ℝ₊^{µⱼ}. Passing in (Q) from the variable y to the variable z = uᵀyv,
the problem becomes
$$\min_{z^j\in \mathrm{M}^{\mu_j\nu_j},\,1\le j\le n}\Big\{\sum_{j=1}^n \mathrm{Tr}(z^j[h^j]^T) + \sum_{j=1}^n\sum_{k=1}^{\mu_j}\sigma_k^p(z^j) :\ \sum_{j=1}^n\sum_{k=1}^{\mu_j}\sigma_k(z^j) \le 1\Big\}.$$

This (clearly solvable) problem has a rich group of symmetries: specifically, due to the diagonal
nature of h^j, replacing a block z^j in a feasible solution with Diag{ε}z^j Diag{ε′}, where ε is a
vector of ±1’s of dimension µⱼ and ε′ is obtained from ε by adding νⱼ − µⱼ entries ±1 to the
right of ε, keeps the solution feasible and does not change the value of the objective. Since the
problem is convex, it has an optimal solution which is preserved by the above symmetries, that
is, such that all z^j are diagonal matrices. Denoting by ζ^j the diagonals of these diagonal
matrices and setting g = [g^1; ...; g^n] ∈ ℝ^m, the problem becomes
$$\min_{\zeta=[\zeta^1;...;\zeta^n]\in\mathbb{R}^m}\Big\{\zeta^T g + \sum_{k=1}^m |\zeta_k|^p :\ \sum_k |\zeta_k| \le 1\Big\},$$

which, basically, is the problem we have met when computing prox–mapping in the case A of
`1 /`2 setup; as we remember, this problem can be solved within machine precision in O(m) a.o.
Completely similar reasoning shows that in the case of B, computing prox-mapping reduces,
at the price of a single singular value decomposition of a matrix from Mµν , to solving the same
problem as in the case B of `1 /`2 setup. The bottom line is that in the cases of A, B, the
computational effort of computing prox-mapping is dominated by the necessity to carry out
singular value decomposition of a matrix from Mµν .

Remark 5.3.2 A careful reader should have recognized at this point the deep similarity between
the ℓ1/ℓ2 and the nuclear norm setups. This similarity has a very simple explanation: the ℓ1/ℓ2
situation is a very special case of the nuclear norm one. Indeed, a block vector y = [y^1; ...; y^n],
y^j ∈ ℝ^{kⱼ}, can be thought of as the block-diagonal matrix y⁺ with n diagonal blocks [y^j]ᵀ of sizes
1 × kⱼ, j = 1, ..., n. With this identification of block-vectors y and block-diagonal matrices, the
singular values of y⁺ are exactly the norms ‖y^j‖₂, j = 1, ..., m ≡ n, so that the nuclear norm of
y⁺ is exactly the block-ℓ₁ norm of y. Up to minor differences in the coefficients in the formulas
for the DGFs, our proximal setups respect this reduction of the ℓ1/ℓ2 situation to the nuclear norm
one.
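This identification is easy to verify numerically: the rows of the block-diagonal matrix y⁺ built from the 1 × kⱼ blocks [y^j]ᵀ have disjoint supports, so its singular values are exactly the block norms ‖y^j‖₂. A sketch (assuming NumPy; block sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
blocks = [rng.standard_normal(k) for k in (2, 3, 4)]   # y = [y^1; y^2; y^3]

# y+ : block-diagonal matrix with 1 x k_j diagonal blocks [y^j]^T
rows, cols = len(blocks), sum(len(b) for b in blocks)
Y = np.zeros((rows, cols))
c = 0
for i, b in enumerate(blocks):
    Y[i, c:c + len(b)] = b
    c += len(b)

sing = np.sort(np.linalg.svd(Y, compute_uv=False))[::-1]
norms = np.sort([np.linalg.norm(b) for b in blocks])[::-1]
assert np.allclose(sing, norms)                        # sigma(y+) = {||y^j||_2}
# hence nuclear norm of y+ equals the block-l1 norm of y:
assert np.isclose(sing.sum(), sum(np.linalg.norm(b) for b in blocks))
print("ok")
```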

Spectahedron setup. Consider a space Sν of block-diagonal symmetric matrices of block-
diagonal structure ν, and let ∆ν and ∆⁺ν be, respectively, the standard and the full-dimensional
spectahedrons in Sν:
$$\Delta_\nu = \{x \in \mathrm{S}^\nu : x \succeq 0,\ \mathrm{Tr}(x) = 1\},\qquad \Delta^+_\nu = \{x \in \mathrm{S}^\nu : x \succeq 0,\ \mathrm{Tr}(x) \le 1\}.$$
Let X be a nonempty closed convex subset of ∆⁺ν. Restricting the DGF (5.3.13) associated
with M^{νν} onto Sν (the latter space is a linear subspace of M^{νν}) and further onto X, we get a
continuously differentiable DGF for X, and the ω-radius of X does not exceed O(1)√(ln(m + 1)),
where m = ν₁ + ... + νₙ. When X = ∆ν or X = ∆⁺ν, the computational effort for computing the
prox-mapping is dominated by the necessity to carry out eigenvalue decomposition of a matrix
from Sν.

5.3.4 Mirror Descent algorithm


5.3.4.1 Basic Fact
In the sequel, we shall use the following, crucial in our context,

Fact 5.3.2 Given a proximal setup ‖·‖, X, ω(·) along with a closed convex subset U of X, let
x ∈ X, ξ ∈ ℝⁿ and x⁺ = Prox^U_x(ξ). Then
$$\forall u \in U:\ \langle\xi, x^+ - u\rangle \le V_x(u) - V_{x^+}(u) - V_x(x^+). \qquad (5.3.17)$$



Proof. By Fact 5.3.1, we have
$$x^+ = \operatorname*{argmin}_{y\in U}\{\omega(y) + \langle\xi - \omega'(x), y\rangle\}$$
$$\begin{array}{rl}
\Rightarrow & \forall u \in U:\ \langle\omega'(x^+) - \omega'(x) + \xi, u - x^+\rangle \ge 0\\
\Rightarrow & \forall u \in U:\ \langle\xi, x^+ - u\rangle \le \langle\omega'(x^+) - \omega'(x), u - x^+\rangle\\
& = [\omega(u) - \omega(x) - \langle\omega'(x), u - x\rangle] - [\omega(u) - \omega(x^+) - \langle\omega'(x^+), u - x^+\rangle]\\
& \quad - [\omega(x^+) - \omega(x) - \langle\omega'(x), x^+ - x\rangle]\\
& = V_x(u) - V_{x^+}(u) - V_x(x^+). \qquad \Box
\end{array}$$

A byproduct of the above computation is a wonderful identity due to Marc Teboulle:

Magic Identity: Whenever x+ ∈ X, x ∈ X and u ∈ X, we have

hω 0 (x+ ) − ω 0 (x), u − x+ i = Vx (u) − Vx+ (u) − Vx (x+ ). (5.3.18)
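In the Ball setup, ω(x) = ½⟨x, x⟩, so that Vx(y) = ½‖x − y‖₂² and Prox^U_x(ξ) = Proj_U(x − ξ); with U a box, the projection is a coordinate-wise clip, and (5.3.17) becomes easy to probe numerically. A minimal sketch (assuming NumPy; the box U = [0, 1]⁵ and the random test points are illustrative choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(7)
V = lambda x, y: 0.5 * np.sum((x - y) ** 2)     # Bregman distance of omega = <x,x>/2
proj = lambda z: np.clip(z, 0.0, 1.0)           # metric projection onto U = [0,1]^5

for _ in range(200):
    x = rng.random(5)                           # prox-center
    xi = rng.standard_normal(5)
    u = rng.random(5)                           # an arbitrary point of U
    xp = proj(x - xi)                           # Prox^U_x(xi) in the Ball setup
    # Fact 5.3.2: <xi, x+ - u>  <=  V_x(u) - V_{x+}(u) - V_x(x+)
    assert xi @ (xp - u) <= V(x, u) - V(xp, u) - V(x, xp) + 1e-9
print("ok")
```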

5.3.4.2 Standing Assumption

From now on, if otherwise is not explicitly stated, we assume that the domain X of
problem (CP) in question is bounded.

5.3.4.3 MD: Description

The simplest version of the construction we are developing is the Mirror Descent (MD) algorithm
for solving problem
min f (x). (CP)
x∈X

MD is given by the recurrence

x1 ∈ X; xt+1 = Proxxt (γt f 0 (xt )), t = 1, 2, ... (5.3.19)

where γt > 0 are positive stepsizes.


The points xt in (5.3.19) are what is called search points – the points where we collect the
first order information on f. There is no reason to insist that these search points should also be
the approximate solutions x^t generated in the course of steps t = 1, 2, .... In MD as applied to (CP),
there are two good ways to define the approximate solutions x^t, specifically:
— the best found so far approximate solution x^t_bst – the point of the collection x₁, ..., x_t with
the smallest value of the objective f, and
— the aggregated solution
$$x^t = \Big[\sum_{\tau=1}^t \gamma_\tau\Big]^{-1}\sum_{\tau=1}^t \gamma_\tau x_\tau. \qquad (5.3.20)$$

Note that xt ∈ X (as a convex combination of the points xτ which by construction belong to
X).

Remark 5.3.3 Note that MD with Ball setup is nothing but SD (check it!).
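For illustration, here is a minimal sketch (assuming NumPy) of the recurrence (5.3.19) with the Entropy setup on ∆n and "rolling horizon" stepsizes γt = 1/(‖f′(xt)‖∞√t) (the Ω factor from the forthcoming policy (5.3.29) is absorbed into the constant), applied to a toy linear objective f(x) = ⟨c, x⟩, for which min_{∆n} f = minᵢ cᵢ:

```python
import numpy as np

def prox_entropy(x, xi):
    # prox-mapping of the plain (delta = 0) entropy DGF on the simplex:
    # Prox_x(xi)_i is proportional to x_i * exp(-xi_i)
    w = x * np.exp(-(xi - xi.min()))
    return w / w.sum()

def mirror_descent(subgrad, x1, T):
    """MD recurrence (5.3.19): x_{t+1} = Prox_{x_t}(gamma_t f'(x_t)) with
    gamma_t = 1/(||f'(x_t)||_inf sqrt(t)); returns the aggregated
    solution (5.3.20), the gamma-weighted average of the search points."""
    x, num, den = x1.copy(), np.zeros_like(x1), 0.0
    for t in range(1, T + 1):
        g = subgrad(x)
        gamma = 1.0 / (np.linalg.norm(g, np.inf) * np.sqrt(t))
        num, den = num + gamma * x, den + gamma
        x = prox_entropy(x, gamma * g)
    return num / den

c = np.array([0.5, 1.0, 2.0, 3.0])                 # f(x) = <c, x>, f_* = 0.5 on Delta_4
xT = mirror_descent(lambda x: c, np.full(4, 0.25), 5000)
print(c @ xT - 0.5)                                # small optimality gap
```

Note that swapping `prox_entropy` for a Euclidean projection turns this, per Remark 5.3.3, into plain SD.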

5.3.4.4 MD: Complexity analysis


Convergence properties of MD are summarized in the following simple
Theorem 5.3.1 For every τ ≥ 1 and every u ∈ X, one has
$$\gamma_\tau\langle f'(x_\tau), x_\tau - u\rangle \le V_{x_\tau}(u) - V_{x_{\tau+1}}(u) + \tfrac12\|\gamma_\tau f'(x_\tau)\|_*^2. \qquad (5.3.21)$$
As a result, for every t ≥ 1 one has
$$f(x^t) - \min_X f \le \frac{\frac{\Omega^2}{2} + \frac12\sum_{\tau=1}^t \gamma_\tau^2\|f'(x_\tau)\|_*^2}{\sum_{\tau=1}^t \gamma_\tau}, \qquad (5.3.22)$$
and similarly for x^t_bst in the role of x^t.
In particular, given a number T of iterations we intend to perform and setting
$$\gamma_t = \frac{\Omega}{\|f'(x_t)\|_*\sqrt{T}},\quad 1 \le t \le T, \qquad (5.3.23)$$
we ensure that
$$f(x^T) - \min_X f \le \frac{\Omega}{\sum_{t=1}^T \frac{1}{\|f'(x_t)\|_*\sqrt{T}}} \le \frac{\Omega\max_{t\le T}\|f'(x_t)\|_*}{\sqrt{T}} \le \frac{\Omega L_{\|\cdot\|}(f)}{\sqrt{T}}, \qquad (5.3.24)$$
(L_{‖·‖}(f) is the Lipschitz constant of f w.r.t. ‖·‖), and similarly for x^T_bst.
Proof. 1⁰. Let us set ξτ = γτ f′(xτ), so that
$$x_{\tau+1} = \mathrm{Prox}_{x_\tau}(\xi_\tau) = \operatorname*{argmin}_{y\in X}\left\{\langle\xi_\tau - \omega'(x_\tau), y\rangle + \omega(y)\right\}.$$
We have
$$\begin{array}{rll}
& \langle\xi_\tau - \omega'(x_\tau) + \omega'(x_{\tau+1}), u - x_{\tau+1}\rangle \ge 0 & \text{[by Fact 5.3.1 with } U = X]\\
\Rightarrow & \langle\xi_\tau, x_\tau - u\rangle \le \langle\xi_\tau, x_\tau - x_{\tau+1}\rangle + \langle\omega'(x_{\tau+1}) - \omega'(x_\tau), u - x_{\tau+1}\rangle &\\
& = \langle\xi_\tau, x_\tau - x_{\tau+1}\rangle + V_{x_\tau}(u) - V_{x_{\tau+1}}(u) - V_{x_\tau}(x_{\tau+1}) & \text{[Magic Identity (5.3.18)]}\\
& \le V_{x_\tau}(u) - V_{x_{\tau+1}}(u) + \big[\langle\xi_\tau, x_\tau - x_{\tau+1}\rangle - \tfrac12\|x_{\tau+1} - x_\tau\|^2\big] & [V_{x_\tau}(x_{\tau+1}) \ge \tfrac12\|x_{\tau+1} - x_\tau\|^2]\\
& \le V_{x_\tau}(u) - V_{x_{\tau+1}}(u) + \big[\|\xi_\tau\|_*\|x_\tau - x_{\tau+1}\| - \tfrac12\|x_{\tau+1} - x_\tau\|^2\big] & \text{[see (5.3.2)]}\\
& \le V_{x_\tau}(u) - V_{x_{\tau+1}}(u) + \tfrac12\|\xi_\tau\|_*^2 & [ab - \tfrac12 b^2 \le \tfrac12 a^2]
\end{array}$$
Recalling that ξτ = γτ f′(xτ), we arrive at (5.3.21).
2⁰. Denoting by x∗ a minimizer of f on X, substituting u = x∗ in (5.3.21) and taking into
account that f is convex, we get
$$\gamma_\tau[f(x_\tau) - f(x_*)] \le \gamma_\tau\langle f'(x_\tau), x_\tau - x_*\rangle \le V_{x_\tau}(x_*) - V_{x_{\tau+1}}(x_*) + \tfrac12\gamma_\tau^2\|f'(x_\tau)\|_*^2.$$
Summing up these inequalities over τ = 1, ..., t, dividing both sides by Γt ≡ Σ_{τ=1}^t γτ and setting
ντ = γτ/Γt, we get
$$\sum_{\tau=1}^t \nu_\tau f(x_\tau) - f(x_*) \le \frac{V_{x_1}(x_*) - V_{x_{t+1}}(x_*) + \frac12\sum_{\tau=1}^t \gamma_\tau^2\|f'(x_\tau)\|_*^2}{\Gamma_t}. \qquad (*)$$
Since the ντ are positive and sum up to 1, and f is convex, the left hand side in (∗) is ≥ f(x^t) − f(x∗)
(recall that x^t = Σ_{τ=1}^t ντxτ); it clearly is also ≥ f(x^t_bst) − f(x∗). Since V_{x₁}(x) ≤ Ω²/2 due to the
definition of Ω and to x₁ = xω, while V_{x_{t+1}}(·) ≥ 0, the right hand side in (∗) is ≤ the right hand
side in (5.3.22), and we arrive at (5.3.22).
3⁰. Relation (5.3.24) is readily given by (5.3.22), (5.3.23) and the fact that ‖f′(x_t)‖∗ ≤
L_{‖·‖}(f), see (5.3.1). □

5.3.4.5 Refinement
We can slightly modify the MD algorithm and refine its complexity analysis.

Theorem 5.3.2 Let X be bounded, so that the ω-diameter Ω of X is finite. Let λ₁, λ₂, ... be
positive reals such that
$$\lambda_1/\gamma_1 \le \lambda_2/\gamma_2 \le \lambda_3/\gamma_3 \le ..., \qquad (5.3.25)$$
and let
$$x^t = \frac{\sum_{\tau=1}^t \lambda_\tau x_\tau}{\sum_{\tau=1}^t \lambda_\tau}.$$
Then x^t ∈ X and
$$f(x^t) - \min_X f \le \frac{\sum_{\tau=1}^t \lambda_\tau[f(x_\tau) - \min_X f]}{\sum_{\tau=1}^t \lambda_\tau} \le \frac{\frac{\lambda_t}{\gamma_t}\Omega^2 + \sum_{\tau=1}^t \lambda_\tau\gamma_\tau\|f'(x_\tau)\|_*^2}{2\sum_{\tau=1}^t \lambda_\tau}, \qquad (5.3.26)$$
and similarly for x^t_bst in the role of x^t.

Note that (5.3.22) is nothing but (5.3.26) as applied with λτ = γτ , 1 ≤ τ ≤ t.


Proof of Theorem 5.3.2. We still have at our disposal (5.3.21); multiplying both sides of this
relation by λτ/γτ and summing up the resulting inequalities over τ = 1, ..., t, we get, for all u ∈ X,
$$\begin{array}{rl}
\sum_{\tau=1}^t \lambda_\tau\langle f'(x_\tau), x_\tau - u\rangle &\le \sum_{\tau=1}^t \frac{\lambda_\tau}{\gamma_\tau}\big[V_{x_\tau}(u) - V_{x_{\tau+1}}(u)\big] + \frac12\sum_{\tau=1}^t \lambda_\tau\gamma_\tau\|f'(x_\tau)\|_*^2\\[4pt]
&= \frac{\lambda_1}{\gamma_1}V_{x_1}(u) + \underbrace{\Big(\frac{\lambda_2}{\gamma_2}-\frac{\lambda_1}{\gamma_1}\Big)}_{\ge 0}V_{x_2}(u) + ... + \underbrace{\Big(\frac{\lambda_t}{\gamma_t}-\frac{\lambda_{t-1}}{\gamma_{t-1}}\Big)}_{\ge 0}V_{x_t}(u)\\[4pt]
&\quad - \frac{\lambda_t}{\gamma_t}V_{x_{t+1}}(u) + \frac12\sum_{\tau=1}^t \lambda_\tau\gamma_\tau\|f'(x_\tau)\|_*^2 \le \frac{\lambda_t}{\gamma_t}\frac{\Omega^2}{2} + \frac12\sum_{\tau=1}^t \lambda_\tau\gamma_\tau\|f'(x_\tau)\|_*^2,
\end{array}$$
where the concluding inequality is due to 0 ≤ V_{xτ}(u) ≤ Ω²/2 for all τ. As a result,
$$\max_{u\in X}\ \Lambda_t^{-1}\sum_{\tau=1}^t \lambda_\tau\langle f'(x_\tau), x_\tau - u\rangle \le \frac{\frac{\lambda_t}{\gamma_t}\Omega^2 + \sum_{\tau=1}^t \lambda_\tau\gamma_\tau\|f'(x_\tau)\|_*^2}{2\Lambda_t},\quad \Lambda_t = \sum_{\tau=1}^t \lambda_\tau. \qquad (5.3.27)$$
On the other hand, by convexity of f we have
$$f(x^t) - f(x_*) \le \Lambda_t^{-1}\sum_{\tau=1}^t \lambda_\tau[f(x_\tau) - f(x_*)] \le \Lambda_t^{-1}\sum_{\tau=1}^t \lambda_\tau\langle f'(x_\tau), x_\tau - x_*\rangle, \qquad (5.3.28)$$

which combines with (5.3.27) to yield (5.3.26). Since by evident reasons
$$f(x^t_{\mathrm{bst}}) \le \Lambda_t^{-1}\sum_{\tau=1}^t \lambda_\tau f(x_\tau),$$

the first inequality in (5.3.28) is preserved when replacing xt with xtbst , and the resulting in-
equality combines with (5.3.27) to imply the validity of (5.3.26) for xtbst in the role of xt . 2

Discussion. As we have already mentioned, Theorem 5.3.1 is covered by Theorem 5.3.2. The
latter, however, sometimes yields more information. For example, the “In particular” part of
Theorem 5.3.1 states that for every given “time horizon” T, specifying γt, t ≤ T, according to
(5.3.23), we can upper-bound the non-optimalities f(x^T_bst) − min_X f and f(x^T) − min_X f of the
approximate solutions generated in the course of T steps by ΩL_{‖·‖}(f)/√T (see (5.3.24)), which is the best
result we can extract from the Theorem; note, however, that to get this “best result,” we should
tune the stepsizes to the total number of steps T we intend to perform. What to do if we do not
want to fix the total number T of steps in advance? A natural way is to pass from the stepsize
policy (5.3.23) to its “rolling horizon” analogy, specifically, to use
$$\gamma_t = \frac{\Omega}{\|f'(x_t)\|_*\sqrt{t}},\quad t = 1, 2, ... \qquad (5.3.29)$$

With these stepsizes, the efficiency estimate (5.3.24) yields


$$\max[f(x^T), f(x^T_{\mathrm{bst}})] - \min_X f \le O(1)\frac{\Omega L_{\|\cdot\|}(f)\ln(T+1)}{\sqrt{T}},\ T = 1, 2, ...,\qquad x^T = \frac{\sum_{t=1}^T \gamma_t x_t}{\sum_{t=1}^T \gamma_t} \qquad (5.3.30)$$

(recall that O(1)’s are absolute constants); this estimate is worse than (5.3.24), although just by a
logarithmic in T factor. On the other hand, utilizing the stepsize policy (5.3.29) and setting
$$\lambda_t = \frac{t^p}{\|f'(x_t)\|_*},\quad t = 1, 2, ..., \qquad (5.3.31)$$
where p > −1/2, we ensure (5.3.25) and thus can apply (5.3.26), arriving at
$$\max[f(x^t), f(x^t_{\mathrm{bst}})] - \min_X f \le C_p\frac{\Omega L_{\|\cdot\|}(f)}{\sqrt{t}},\ t = 1, 2, ...,\qquad x^t = \frac{\sum_{\tau=1}^t \lambda_\tau x_\tau}{\sum_{\tau=1}^t \lambda_\tau} \qquad (5.3.32)$$

where C_p depends solely on p > −1/2 and is continuous in this range of values of p. Note that the
efficiency estimate (5.3.32) is, essentially, the same as (5.3.24), whatever be the time horizon t. Note
also that the weights with which we aggregate the search points xτ to get the approximate solutions
in (5.3.30) and in (5.3.32) are different: in (5.3.30), other things (specifically, ‖f′(xτ)‖∗) being
equal, the newer is a search point, the less is the weight with which the point participates
in the “aggregated” approximate solution. In contrast, when p = 0, the aggregation (5.3.32)
is “time invariant,” and when p > 0, it pays the more attention to a search point the newer
is the point, this feature being amplified as p > 0 grows. Intuitively, “more attention to the
new search points” seems to be desirable, provided the search trajectory xτ itself, not just the
sequence of averaged solutions x^τ or the best found so far solutions x^τ_bst, converges, as τ → ∞,
to the optimal set of the problem min_{x∈X} f(x); on a closer inspection, this convergence indeed
takes place, provided the stepsizes γt > 0 satisfy γt → 0, t → ∞, and Σ_{t=1}^∞ γt = ∞.

5.3.4.6 MD: Optimality


Now let us look what our complexity analysis says in the case of the standard setups.

Ball setup and optimization over the ball. As we remember, in the case of the ball setup
one has
$$\Omega = D_{\|\cdot\|_2}(X),$$
where D_{‖·‖₂}(X) = max_{x,y∈X} ‖x − y‖₂ is the ‖·‖₂-diameter of X. Consequently, (5.3.24) becomes
$$f(x^T) - \min_X f \le \frac{D_{\|\cdot\|_2}(X)L_{\|\cdot\|_2}(f)}{\sqrt{T}}, \qquad (5.3.33)$$
meaning that the number N(ε) of MD steps needed to solve (CP) within accuracy ε can be
bounded as
$$N(\epsilon) \le \frac{D^2_{\|\cdot\|_2}(X)L^2_{\|\cdot\|_2}(f)}{\epsilon^2} + 1. \qquad (5.3.34)$$
On the other hand, let L > 0, and let P_{‖·‖₂,L}(X) be the family of all convex problems (CP) with
convex Lipschitz continuous, with constant L w.r.t. ‖·‖₂, objectives. It is known that if X is an
n-dimensional Euclidean ball and n ≥ D²_{‖·‖₂}(X)L²/ε², then the information-based complexity of the
family P_{‖·‖₂,L}(X) is at least O(1)D²_{‖·‖₂}(X)L²/ε² (cf. (5.1.4)). Comparing this result with (5.3.34),
we conclude that
If X is an n-dimensional Euclidean ball, then the complexity of the family P_{‖·‖₂,L}(X)
w.r.t. the MD algorithm with the ball setup in the “large-scale case” n ≥ D²_{‖·‖₂}(X)L²/ε²
coincides (within an absolute constant factor) with the information-based complexity
of the family.

Entropy/Simplex setup and minimization over the simplex. In the case of Entropy
setup we have Ω ≤ O(1)√(ln(n + 1)), so that (5.3.24) becomes
$$f(x^T) - \min_X f \le \frac{O(1)\sqrt{\ln(n+1)}\,L_{\|\cdot\|_1}(f)}{\sqrt{T}}, \qquad (5.3.35)$$
meaning that in order to solve (CP) within accuracy ε it suffices to carry out
$$N(\epsilon) \le \frac{O(1)\ln(n)L^2_{\|\cdot\|_1}(f)}{\epsilon^2} + 1 \qquad (5.3.36)$$
MD steps.
On the other hand, let L > 0, and let P_{‖·‖₁,L}(X) be the family of all convex problems (CP)
with convex Lipschitz continuous, with constant L w.r.t. ‖·‖₁, objectives. It is known that if
X is the standard n-dimensional simplex ∆n (or the standard full-dimensional simplex ∆⁺n) and
n ≥ L²/ε², then the information-based complexity of the family P_{‖·‖₁,L}(X) is at least O(1)L²/ε² (cf.
(5.1.5)). Comparing this result with (5.3.36), we conclude that
If X is the n-dimensional simplex ∆n (or the full-dimensional simplex ∆⁺n), then the
complexity of the family P_{‖·‖₁,L}(X) w.r.t. the MD algorithm with the simplex setup
in the “large-scale case” n ≥ L²/ε² coincides, within a factor of order of ln n, with the
information-based complexity of the family.

Note that the just presented discussion can be word by word repeated when entropy setup is
replaced with Simplex setup.

Spectahedron setup and large-scale semidefinite optimization. All the conclusions we


have made when speaking about the case of Entropy/Simplex setup and X = ∆n (or X = ∆+ n)
remain valid in the case of the spectahedron setup and X defined as the set of all block-diagonal
matrices of a given block-diagonal structure contained in ∆n = {x ∈ Sn : x  0, Tr(x) = 1} (or
contained in ∆+ n
n = {x ∈ S : x  0, Tr(x) ≤ 1}).
We see that with every one of our standard setups, the MD algorithm under appropriate con-
ditions possesses dimension independent (or nearly dimension independent) complexity bound
and, moreover, is nearly optimal in the sense of Information-based complexity theory, provided
that the dimension is large.

Why the standard setups? “The contribution” of ω(·) to the performance estimate (5.3.24)
is in the factor Ω; the less it is, the better. In principle, given X and ‖·‖, we could play with
ω(·) to minimize Ω. The standard setups are given by a kind of such optimization for the cases
when X is the ball and ‖·‖ = ‖·‖₂ (“the ball case”), when X is the simplex and ‖·‖ = ‖·‖₁
(“the entropy case”), and when X is the ℓ1/ℓ2 ball, or the nuclear norm ball, and the norm
‖·‖ is the block-ℓ₁ norm, respectively, the nuclear norm. We did not try to solve the arising
variational problems exactly; however, it can be proved in all three cases that the value of Ω
we have reached (i.e., O(1) in the ball case, O(√ln n) in the simplex case and the ℓ1/ℓ2 case,
and O(√ln m) in the nuclear norm case) cannot be reduced by more than an absolute constant
factor.

Adjusting to problem’s geometry. A natural question is: When solving a particular prob-
lem (CP), which version of the Mirror Descent method to use? A “theory-based” answer could
be “look at the complexity estimates of various versions of MD and run the one with the
best complexity.” This “recommendation” usually is of no actual value, since the complexity
bounds involve Lipschitz constants of the objective w.r.t. appropriate norms, and these con-
stants usually are not known in advance. We are about to demonstrate that there are cases
where reasonable recommendations can be made based solely on the geometry of the domain
X of the problem. As an instructive example, consider the case where X is the unit k · kp -ball:
X = X p = {x ∈ Rn : kxkp ≤ 1}, where p ∈ {1, 2, ∞}.

Note that the cases of p = 1 and especially p = ∞ are “quite practical.” Indeed, the first of
them (p = 1) is, essentially, what goes on in problems of `1 minimization (section 1.3.1); the
second (p = ∞) is perhaps the most typical case (minimization under box constraints). The
case of p = 2 seems to be more of academic nature.

The question we address is: which setup — Ball or Entropy one — to choose?³
First of all, we should explain how to process the domains in question via the Entropy setup – the latter requires X to be a part of the full-dimensional standard simplex ∆_n^+, which is not the case for the domains X we are interested in now. This issue can be easily resolved, specifically, as follows: setting
$$x_\rho[u,v] = \rho(u-v), \qquad [u;v]\in\mathbf{R}^n\times\mathbf{R}^n,$$
³Of course, we could look for more options as well, but as a matter of fact, in the case in question no more attractive options are known.

we get a mapping from R^{2n} onto R^n such that the image of ∆_{2n}^+ under this mapping covers X^p, provided that ρ = ρ_p = n^{γ_p}, where γ₁ = 0, γ₂ = 1/2, γ_∞ = 1. It follows that in order to apply to (CP) the MD with Entropy setup, it suffices to rewrite the problem as
$$\min_{[u;v]\in Y^p}\; g_p(u,v) \equiv f(\rho_p(u-v)), \qquad Y^p = \left\{[u;v]\in\Delta_{2n}^+ : \rho_p(u-v)\in X^p\right\},$$
the domain Y^p of the problem being a part of ∆_{2n}^+.
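As a sanity check on this embedding, splitting x/ρ_p into its positive and negative parts gives an explicit preimage in ∆_{2n}^+ of every x ∈ X^p. A minimal numerical sketch (the specific splitting rule is our illustration, not prescribed by the text):

```python
import numpy as np

def embed(x, p):
    """Given x in the unit l_p ball, return (u, v, rho) with [u; v] in
    Delta^+_{2n} = {w >= 0, sum(w) <= 1} and rho_p * (u - v) = x."""
    n = x.size
    gamma = {1: 0.0, 2: 0.5, np.inf: 1.0}[p]
    rho = n ** gamma                      # rho_p = n^{gamma_p}
    u = np.maximum(x, 0.0) / rho          # positive part of x / rho_p
    v = np.maximum(-x, 0.0) / rho         # negative part of x / rho_p
    # sum(u) + sum(v) = ||x||_1 / rho_p <= n^{1-1/p} ||x||_p / n^{gamma_p} <= 1
    return u, v, rho

rng = np.random.default_rng(0)
n = 50
for p in (1, 2, np.inf):
    x = rng.standard_normal(n)
    x /= max(np.linalg.norm(x, p), 1.0)   # push x into the unit l_p ball
    u, v, rho = embed(x, p)
    assert np.all(u >= 0) and np.all(v >= 0)
    assert u.sum() + v.sum() <= 1 + 1e-12          # [u; v] lies in Delta^+_{2n}
    assert np.allclose(rho * (u - v), x)           # the image recovers x
```

The check confirms that the mapping of ∆_{2n}^+ indeed covers each of the three balls with the stated scaling factors ρ_p.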


The complexity bounds of the MD algorithm with both the Ball and the Entropy setup are of the form "N(ε) = Cε⁻²," where the factor C is independent of ε, although it depends on the problem at hand and on the setup we use. In order to understand which setup is preferable, it suffices to look at the factors C only. These factors, up to O(1) factors, are listed in the following table (where L₁(f) and L₂(f) stand for the Lipschitz constants of f w.r.t. the ‖·‖₁- and ‖·‖₂-norms):

Setup    | X = X¹         | X = X²           | X = X^∞
Ball     | L₂²(f)         | L₂²(f)           | n L₂²(f)
Entropy  | L₁²(f) ln(n)   | L₁²(f) n ln(n)   | L₁²(f) n² ln(n)

Now note that for 0 ≠ x ∈ R^n we have 1 ≤ ‖x‖₁/‖x‖₂ ≤ √n, whence
$$1 \;\ge\; \frac{L_1^2(f)}{L_2^2(f)} \;\ge\; \frac{1}{n}.$$

With this in mind, the data from the above table suggest that in the case of p = 1 (X is the ℓ₁ ball X¹), the Entropy setup is preferable; in all remaining cases under consideration, the preferable setup is the Ball one.
Indeed, when X = X¹, the Entropy setup can lose to the Ball one at most by the "small" factor ln(n) and can gain by a factor as large as n/ln(n); these two options, roughly speaking, correspond to the cases when the subgradients of f have just O(1) significant entries (loss), or, on the contrary, have all entries of the same order of magnitude (gain). Since the potential loss is small and the potential gain can be quite significant, the Entropy setup in the case in question seems preferable to the Ball one. In contrast to this, in the cases of X = X² and X = X^∞, even the maximal potential gain associated with the Entropy setup (attained when the ratio L₁²(f)/L₂²(f) is as small as 1/n) does not lead to an overall gain in complexity, which is why in these cases we would advise using the Ball setup.

5.3.5 Mirror Descent and Online Regret Minimization


5.3.5.1 Online regret minimization: what is it?
In the usual setting, the purpose of a black box oriented optimization algorithm as applied to a
minimization problem
Opt = min f (x)
x∈X

is to find an approximate solution of a desired accuracy  via calls to an oracle (say, First
Order one) representing f . What we are interested in, is the number N of steps (calls to the
oracle) needed to build an -solution, and the overall computational effort in executing these N
steps. What we ignore completely, is how “good,” in terms of f , are the search points xt ∈ X,
1 ≤ t ≤ N – all we want is “to learn” f to the extent allowing to point out an -minimizer,

and the faster is our learning process, the better, no matter how “expensive,” in terms of their
non-optimality, are the search points. This traditional approach corresponds to the situation
where our optimization is off line: when processing the problem numerically, calling the oracle
at a search point does not incur any actual losses. What indeed is used “in reality,” is the
computed near-optimal solution, be it an engineering design, a reconstructed image, or a near-
optimal portfolio selection. We are interested in losses incurred by this solution only, with no
care of how good “in reality” would be the search points generated in course of the optimization
process.
In online optimization, problem’s setting is different: we run an algorithm on the fly, in “real
time,” and when calling the oracle at t-th step, the query point being xt , incur “real loss,” in
the simplest setting equal to f(x_t). What we are interested in is making our "properly averaged" loss as close to Opt as possible – in the simplest case, the average loss
$$L_N = \frac{1}{N}\sum_{t=1}^N f(x_t).$$

The “horizon” N here can be fixed, in which case we want to make LN as small as possible, or
can vary, in which case we want to enforce LN to approach Opt as fast as possible as N grows.
In fact, in online regret minimization it is not even assumed that the objective f remains
constant as the time evolves; instead, it is assumed that we are dealing with a sequence of
objectives ft , t = 1, 2, ..., selected “by nature” from a known in advance family F of functions
on a known in advance set X ⊂ Rn . At a step t = 1, 2, ..., we select a point xt ∈ X, and “an
oracle” provides us with certain information on ft at xt and charges us with “toll” ft (xt ). It
should be stressed that in this setting, no assumptions on how the nature selects the sequence
{ft (·) ∈ F : t ≥ 1} are made. Now our “ideal goal” would be to find a non-anticipative policy
(rules for generating xt based solely on the information acquired from the oracle at the first t − 1
steps of the process) which enforces our average toll
$$L_N = \frac{1}{N}\sum_{t=1}^N f_t(x_t) \tag{5.3.37}$$

to approach, as N → ∞, the “ideal” average toll


$$L_N^* = \frac{1}{N}\sum_{t=1}^N \min_{x\in X} f_t(x), \tag{5.3.38}$$

whatever be the sequence {f_t(·) ∈ F : t ≥ 1} selected by the nature. It is absolutely clear that aside of trivial cases (like all functions from F sharing a common minimizer), this ideal goal cannot be achieved. Indeed, with L_N − L_N^* → 0 as N → ∞, the average, over time horizon N, of the (always nonnegative) non-optimalities f_t(x_t) − min_{x∈X} f_t(x) should go to 0 as N → ∞, implying that for large N, "near all" of these non-optimalities should be "near-zero." The latter, with the nature allowed to select f_t ∈ F in a completely unpredictable manner, clearly cannot be guaranteed, whatever be the non-anticipative policy used to generate x₁, x₂,...
To make the online setting with “abruptly varying” ft ’s meaningful, people compare LN not
with its “utopian” lower bound L∗N (which can be achieved only by a clairvoyant knowing in
advance the trajectory f1 , f2 , ...), but with a less utopian bound, most notably, with the quantity
$$\widehat{L}_N = \min_{x\in X}\frac{1}{N}\sum_{t=1}^N f_t(x).$$

Thus, we compare the average, on horizon N , toll of a “capable to move” (capable to select
xt ∈ X in a whatever non-anticipative fashion) “mortal” which cannot predict the future with
the best average toll of a clairvoyant who knows in advance the trajectory f1 , f2 , ... selected by
the nature, but is unable to move (must use the same action x during all “rounds” t = 1, ..., N ).
We arrive at the online regret minimization problem where we are interested in a non-anticipative
policy of selecting xt ∈ X which enforces the associated regret

$$\Delta_N = \sup_{f_t\in\mathcal F,\,1\le t\le N}\left[\frac{1}{N}\sum_{t=1}^N f_t(x_t) - \min_{x\in X}\frac{1}{N}\sum_{t=1}^N f_t(x)\right] \tag{5.3.39}$$

to go to 0, the faster the better, as N → ∞.

Remark. Whether it is "fair" to compare a "mortal capable to move" with "a handicapped clairvoyant" is a question beyond the scope of optimization theory, and the answer here clearly depends on the application we are interested in. It suffices to say that there are meaningful applications, e.g., in Machine Learning, where online regret minimization makes sense, and that even in the case where the nature is restricted to selecting only stationary sequences f_t(·) ≡ f(·) ∈ F, t = 1, 2, ... (no difficulty with "fairness" at all), making ∆_N small models the meaningful situation where we "learn" the objective f online.

5.3.5.2 Online regret minimization via Mirror Descent, deterministic case


Note that the problem of online regret minimization admits various settings, depending on what
are our assumptions on X and F, and what is the information on ft revealed by the oracle at
the search point xt . In this section, we focus on the simple case where

1. X is a convex compact set in Euclidean space E, and (X, E) is equipped with proximal
setup – a norm k · k on E and a compatible with this norm continuously differentiable on
X DGF ω(·);

2. F is the family of all convex and Lipschitz continuous, with a given constant L w.r.t. k · k,
functions on X;

3. the information we get in round t, our search point being x_t and "nature's selection" being f_t(·) ∈ F, is the first order information f_t(x_t), f_t′(x_t) on f_t at x_t, with the ‖·‖_*-norm of the subgradient f_t′(x_t) of f_t(·) taken at x_t not exceeding L.

Our goal is to demonstrate that Mirror Descent algorithm, in the form presented in section
5.3.4.5, is well-suited for online regret minimization. Specifically, consider algorithm as follows:

1. Initialization: Select stepsizes {γt > 0}t≥1 with γt nonincreasing in t. Select x1 ∈ X.

2. Step t = 1, 2, .... Given xt and a reported by the oracle subgradient ft0 (xt ), of k · k∗ -norm
not exceeding L, of ft at xt , set

$$x_{t+1} = \mathrm{Prox}_{x_t}\big(\gamma_t f_t'(x_t)\big) := \mathop{\mathrm{argmin}}_{x\in X}\left[\langle\gamma_t f_t'(x_t) - \omega'(x_t), x\rangle + \omega(x)\right]$$

and pass to step t + 1.
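For intuition, here is a minimal sketch of the algorithm above under assumed data: the Ball (Euclidean) setup ω(x) = ½‖x‖₂², for which the prox step reduces to a projected subgradient step, the box X = [−1, 1]ⁿ, and nature playing linear objectives f_t(x) = ⟨c_t, x⟩; the final assertion checks the O(ΩL/√N) regret bound of (5.3.41) with α = 0:

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 10, 2000
L = np.sqrt(n)                       # ||c_t||_2 <= sqrt(n) for c_t in [-1,1]^n
Omega = 2 * np.sqrt(n)               # Euclidean diameter of the box [-1, 1]^n

x = np.zeros(n)                      # x_1 in X
cs, losses = [], []
for t in range(1, N + 1):
    c = rng.uniform(-1, 1, n)        # nature picks f_t(x) = <c, x>
    losses.append(c @ x)             # toll f_t(x_t) charged at the query point
    gamma = Omega / (L * np.sqrt(t)) # nonincreasing stepsizes
    x = np.clip(x - gamma * c, -1.0, 1.0)  # prox step = projection onto the box
    cs.append(c)

c_sum = np.sum(cs, axis=0)
best_fixed = -np.abs(c_sum).sum()    # min over fixed u in X of sum_t <c_t, u>
regret = (np.sum(losses) - best_fixed) / N
assert regret <= 3 * Omega * L / np.sqrt(N)   # O(Omega*L/sqrt(N)), cf. (5.3.41)
```

The comparator here is the best fixed point in hindsight, exactly the "handicapped clairvoyant" of the discussion above.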



Convergence analysis. We have the following straightforward modification of Theorem 5.3.2:

Theorem 5.3.3 In the notation and under the assumptions of this section, whatever be a sequence {f_t ∈ F : t ≥ 1} selected by the nature, for the trajectory {x_t : t ≥ 1} of search points generated by the above algorithm and for whatever sequence of weights {λ_t > 0 : t ≥ 1} such that λ_t/γ_t is nondecreasing in t, it holds for all N = 1, 2, ...:
$$\frac{1}{\Lambda_N}\sum_{t=1}^N \lambda_t f_t(x_t) - \min_{u\in X}\frac{1}{\Lambda_N}\sum_{t=1}^N \lambda_t f_t(u) \;\le\; \frac{\frac{\lambda_N}{\gamma_N}\Omega^2 + \sum_{t=1}^N \lambda_t\gamma_t L^2}{2\Lambda_N} \tag{5.3.40}$$
where Ω is the ω-diameter of X, and
$$\Lambda_N = \sum_{t=1}^N \lambda_t.$$

Postponing for a moment the proof, let us look at the consequences.

Consequences. Let us choose
$$\gamma_t = \frac{\Omega}{L\sqrt t}, \qquad \lambda_t = t^\alpha,$$
with α ≥ 0. This choice meets the requirements from the description of our algorithm and the premise of Theorem 5.3.3, and results in $\Lambda_N \ge O(1)(1+\alpha)^{-1}N^{\alpha+1}$ and $\sum_{t=1}^N \lambda_t\gamma_t L^2 \le O(1)(1+\alpha)^{-1}N^{\alpha+1/2}\,\Omega L$, provided N ≥ 1 + α. As a result, (5.3.40) becomes
$$\forall(f_t\in\mathcal F,\ t=1,2,...,\ N\ge 1+\alpha):\quad \frac{1}{\Lambda_N}\sum_{t=1}^N \lambda_t f_t(x_t) - \min_{u\in X}\frac{1}{\Lambda_N}\sum_{t=1}^N \lambda_t f_t(u) \;\le\; O(1)\frac{(1+\alpha)\,\Omega L}{\sqrt N}. \tag{5.3.41}$$

We arrive at an O(1/√N) upper bound on a kind of regret ("kind of" stems from the fact that the averaging in the left hand side uses weights proportional to λ_t rather than equal to each other). Setting α = 0, we get a bound on the "true" N-step regret. Note that the policy of generating the points x₁, x₂, ... is completely independent of α, so that the bound (5.3.41) contains a "spectrum," parameterized by α ≥ 0, of regret-like bounds on a fixed decision-making policy.

Proof of Theorem 5.3.3. With our current algorithm, relation (5.3.21) becomes
$$\forall(t\ge1,\ u\in X):\quad \gamma_t\langle f_t'(x_t), x_t - u\rangle \le V_{x_t}(u) - V_{x_{t+1}}(u) + \frac12\|\gamma_t f_t'(x_t)\|_*^2 \tag{5.3.42}$$
(replace f with f_t in the derivation in item 1⁰ of the proof of Theorem 5.3.1). Now let us proceed exactly as in the proof of Theorem 5.3.2: multiply both sides in (5.3.42) by λ_t/γ_t and sum up the resulting inequalities over t = 1, ..., N, arriving at
$$\begin{array}{l}
\forall(u\in X):\\
\sum_{t=1}^N \lambda_t\langle f_t'(x_t), x_t - u\rangle \le \sum_{t=1}^N \frac{\lambda_t}{\gamma_t}\left[V_{x_t}(u) - V_{x_{t+1}}(u)\right] + \frac12\sum_{t=1}^N \lambda_t\gamma_t\|f_t'(x_t)\|_*^2\\
= \frac{\lambda_1}{\gamma_1}V_{x_1}(u) + \underbrace{\left(\frac{\lambda_2}{\gamma_2}-\frac{\lambda_1}{\gamma_1}\right)}_{\ge0}V_{x_2}(u) + \underbrace{\left(\frac{\lambda_3}{\gamma_3}-\frac{\lambda_2}{\gamma_2}\right)}_{\ge0}V_{x_3}(u) + ... + \underbrace{\left(\frac{\lambda_N}{\gamma_N}-\frac{\lambda_{N-1}}{\gamma_{N-1}}\right)}_{\ge0}V_{x_N}(u)\\
\qquad - \frac{\lambda_N}{\gamma_N}V_{x_{N+1}}(u) + \frac12\sum_{t=1}^N \lambda_t\gamma_t\|f_t'(x_t)\|_*^2 \;\le\; \frac{\lambda_N}{\gamma_N}\cdot\frac{\Omega^2}{2} + \frac12\sum_{t=1}^N \lambda_t\gamma_t\|f_t'(x_t)\|_*^2,
\end{array}$$

where the concluding inequality is due to 0 ≤ V_{x_t}(u) ≤ Ω²/2 for all t. As a result, taking into account that ‖f_t′(x_t)‖_* ≤ L, we get
$$\max_{u\in X}\Lambda_N^{-1}\sum_{t=1}^N \lambda_t\langle f_t'(x_t), x_t - u\rangle \le \frac{\frac{\lambda_N}{\gamma_N}\Omega^2 + \sum_{t=1}^N \lambda_t\gamma_t L^2}{2\Lambda_N}, \qquad \Lambda_N = \sum_{t=1}^N \lambda_t. \tag{5.3.43}$$

On the other hand, by convexity of f_t we have for every u ∈ X:
$$f_t(x_t) - f_t(u) \le \langle f_t'(x_t), x_t - u\rangle,$$
and thus (5.3.43) implies that
$$\frac{1}{\Lambda_N}\sum_{t=1}^N \lambda_t f_t(x_t) - \min_{u\in X}\frac{1}{\Lambda_N}\sum_{t=1}^N \lambda_t f_t(u) \;\le\; \frac{\frac{\lambda_N}{\gamma_N}\Omega^2 + \sum_{t=1}^N \lambda_t\gamma_t L^2}{2\Lambda_N}. \qquad\square$$

5.3.6 Mirror Descent for Saddle Point problems


5.3.6.1 Convex-Concave Saddle Point problem
An important generalization of a convex optimization problem (CP) is a convex-concave saddle
point problem
min max φ(x, y) (SP)
x∈X y∈Y

where

• X ⊂ Rnx , Y ⊂ Rny are nonempty convex compact subsets in the respective Euclidean
spaces (which at this point we w.l.o.g. identified with a pair of Rn ’s),

• φ(x, y) is a Lipschitz continuous real-valued cost function on Z = X × Y which is convex


in x ∈ X, y ∈ Y being fixed, and is concave in y ∈ Y , x ∈ X being fixed.

Basic facts on saddle points. (SP) gives rise to a pair of convex optimization problems

$$\mathrm{Opt}(P) = \min_{x\in X}\left[\overline\phi(x) := \max_{y\in Y}\phi(x,y)\right] \qquad (P)$$
$$\mathrm{Opt}(D) = \max_{y\in Y}\left[\underline\phi(y) := \min_{x\in X}\phi(x,y)\right] \qquad (D)$$

(the primal problem (P) requires to minimize a convex function over a convex compact set, the
dual problem (D) requires to maximize a concave function over another convex compact set).
The problems are dual to each other, meaning that Opt(P ) = Opt(D); this is called strong
duality4 . Note that strong duality is based upon convexity-concavity of φ and convexity of X, Y
(and to a slightly lesser extent – on the continuity of φ on Z = X × Y and compactness of the
latter domain), while the weak duality Opt(P ) ≥ Opt(D) is a completely general (and simple
to verify) phenomenon.
In our case of convex-concave Lipschitz continuous φ the induced objectives in the primal
and the dual problem are Lipschitz continuous as well. Since the domains of these problems are
compact, the problems are solvable. Denoting by X∗ , Y∗ the corresponding optimal sets, the
⁴For missing proofs of all claims in this section, see section D.4.

set Z_* = X_* × Y_* admits a simple characterization in terms of φ: Z_* is exactly the set of saddle points of φ on Z = X × Y, that is, the points (x_*, y_*) ∈ X × Y such that
$$\phi(x, y_*) \ge \phi(x_*, y_*) \ge \phi(x_*, y) \qquad \forall(x,y)\in Z = X\times Y$$

(in words: with y set to y∗ , φ attains its minimum over x ∈ X at x∗ , and with x set to x∗ , φ
attains its maximum over y ∈ Y at y = y∗ ). These points are considered as exact solutions to
the saddle point problem in question. The common optimal value of (P) and (D) is nothing but
the value of φ at a saddle point:

Opt(P ) = Opt(D) = φ(x∗ , y∗ ) ∀(x∗ , y∗ ) ∈ Z∗ .

The standard interpretation of a saddle point problem is the game where the first player chooses
his action x from the set X, his adversary – the second player – chooses his action y in Y , and
with a choice (x, y) of the players, the first pays the second the sum φ(x, y). Such a game is
called antagonistic (since the interests of the players are completely opposite to each other), or
a zero sum game (since the sum φ(x, y) + [−φ(x, y)] of losses of the players is identically zero).
In terms of the game, saddle points (x∗ , y∗ ) are exactly the equilibria: if the first player sticks
to the choice x∗ , the adversary has no incentive to move away from the choice y∗ , since such a
move cannot increase his profit, and similarly for the first player, provided that the second one
sticks to the choice y∗ .
It is convenient to characterize the accuracy of a candidate (x, y) ∈ X × Y to the role of an
approximate saddle point by the “saddle point residual”

$$\begin{array}{rcl}
\epsilon_{\mathrm{sad}}(x,y) &:=& \max\limits_{y'\in Y}\phi(x,y') - \min\limits_{x'\in X}\phi(x',y) \;=\; \overline\phi(x) - \underline\phi(y)\\
&=& \left[\overline\phi(x) - \mathrm{Opt}(P)\right] + \left[\mathrm{Opt}(D) - \underline\phi(y)\right],
\end{array}$$

where the concluding equality is due to strong duality. We see that the saddle point inaccuracy ε_sad(x, y) is just the sum of the non-optimalities, in terms of the respective objectives, of x as a solution to the primal and of y as a solution to the dual problem; in particular, the inaccuracy is nonnegative and is zero if and only if (x, y) is a saddle point of φ.

Vector field associated with (SP). From now on, we focus on problem (SP) with Lipschitz
continuous convex-concave cost function φ : X × Y → R and convex compact X, Y and assume
that φ is represented by a First Order oracle as follows: given on input a point z = (x, y) ∈
Z := X × Y , the oracle returns a subgradient Fx (z) of the convex function φ(·, y) : X → R at
the point x, and a subgradient F_y(x, y) of the convex function −φ(x, ·) : Y → R at the point y.
It is convenient to arrange Fx , Fy in a single vector

F (z) = [Fx (z); Fy (z)] ∈ Rnx × Rny . (5.3.44)

In addition, we assume that the vector field {F (z) : z ∈ Z} is bounded (cf. (5.3.1)).

5.3.6.2 Saddle point MD algorithm


The setup for the algorithm is a usual proximal setup, but with the set Z = X × Y in the role
of X and the embedding space Rnx × Rny of Z in the role of Rn , so that the setup is given by
a norm k · k on Rnx × Rny and a compatible with this norm continuously differentiable DGF
ω(·) : Z → R.

The MD algorithm for (SP) is given by the recurrence
$$\begin{array}{rcl}
z_1 &:=& (x_1, y_1)\in Z,\\
z_{t+1} &:=& (x_{t+1}, y_{t+1}) = \mathrm{Prox}_{z_t}(\gamma_t F(z_t)),\\
\mathrm{Prox}_z(\xi) &=& \mathop{\mathrm{argmin}}\limits_{w\in Z}\left[V_z(w) + \langle\xi, w\rangle\right],\\
V_z(w) &=& \omega(w) - \langle\omega'(z), w - z\rangle - \omega(z) \;\ge\; \frac12\|w - z\|^2,\\
z^t &=& \left[\sum_{\tau=1}^t\gamma_\tau\right]^{-1}\sum_{\tau=1}^t\gamma_\tau z_\tau,
\end{array} \tag{5.3.45}$$
where γ_t > 0 are stepsizes, and z^t are the approximate solutions built in the course of t steps.
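As a concrete sketch of this recurrence (with assumed data, not from the text: the bilinear game φ(x, y) = yᵀAx on ∆₃ × ∆₃ with the rock–paper–scissors matrix, and the Entropy setup on each simplex, under which the prox mapping becomes a pair of multiplicative updates):

```python
import numpy as np

A = np.array([[0., 1., -1.],
              [-1., 0., 1.],
              [1., -1., 0.]])          # rock-paper-scissors payoff matrix

def eps_sad(x, y):
    # eps_sad(x, y) = max_{y'} phi(x, y') - min_{x'} phi(x', y) for phi = y^T A x
    return (A @ x).max() - (A.T @ y).min()

x = np.array([0.8, 0.1, 0.1]); y = x.copy()   # z_1 = (x_1, y_1), off equilibrium
wx = np.zeros(3); wy = np.zeros(3); W = 0.0
for t in range(1, 2001):
    gamma = 1.0 / np.sqrt(t)
    wx += gamma * x; wy += gamma * y; W += gamma  # accumulate gamma-weighted z_tau
    Fx, Fy = A.T @ y, -(A @ x)                    # F(z) = [F_x(z); F_y(z)], cf. (5.3.44)
    x = x * np.exp(-gamma * Fx); x /= x.sum()     # entropy prox = multiplicative update
    y = y * np.exp(-gamma * Fy); y /= y.sum()

xbar, ybar = wx / W, wy / W                       # the averaged solution z^t of (5.3.45)
assert eps_sad(xbar, ybar) < 0.2                  # residual shrinks like O(1/sqrt(T))
```

Note that the raw iterates z_t of this game cycle around the (uniform) saddle point; it is the averaged point z^t whose saddle point residual goes to zero, in line with Theorem 5.3.4 below.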
The saddle point analog of Theorem 5.3.1 is as follows:

Theorem 5.3.4 For every τ ≥ 1 and every w = (u, v) ∈ Z, one has
$$\gamma_\tau\langle F(z_\tau), z_\tau - w\rangle \le V_{z_\tau}(w) - V_{z_{\tau+1}}(w) + \frac12\|\gamma_\tau F(z_\tau)\|_*^2. \tag{5.3.46}$$
As a result, for every t ≥ 1 one has
$$\epsilon_{\mathrm{sad}}(z^t) \le \frac{\frac{\Omega^2}{2} + \frac12\sum_{\tau=1}^t\gamma_\tau^2\|F(z_\tau)\|_*^2}{\sum_{\tau=1}^t\gamma_\tau}. \tag{5.3.47}$$
In particular, given a number T of iterations we intend to perform and setting
$$\gamma_t = \frac{\Omega}{\|F(z_t)\|_*\sqrt T}, \quad 1\le t\le T, \tag{5.3.48}$$
we ensure that
$$\epsilon_{\mathrm{sad}}(z^T) \le \frac{\Omega\sqrt T}{\sum_{t=1}^T\|F(z_t)\|_*^{-1}} \le \frac{\Omega\max_{t\le T}\|F(z_t)\|_*}{\sqrt T} \le \frac{\Omega L_{\|\cdot\|}(F)}{\sqrt T}, \qquad L_{\|\cdot\|}(F) = \sup_{z\in Z}\|F(z)\|_*. \tag{5.3.49}$$

Proof. 1⁰. (5.3.46) is proved exactly as its "minimization counterpart" (5.3.21), see item 1⁰ of the proof of Theorem 5.3.1; on closer inspection, the reasoning there is completely independent of where the vectors ξ_t in the recurrence z_{t+1} = Prox_{z_t}(ξ_t) come from.
2⁰. Summing up relations (5.3.46) over τ = 1, ..., t, plugging in F = [F_x; F_y], dividing both sides of the resulting inequality by $\Gamma_t = \sum_{\tau=1}^t\gamma_\tau$ and setting ν_τ = γ_τ/Γ_t, we get
$$\sum_{\tau=1}^t \nu_\tau\left[\langle F_x(z_\tau), x_\tau - u\rangle + \langle F_y(z_\tau), y_\tau - v\rangle\right] \le Q := \Gamma_t^{-1}\left[\frac{\Omega^2}{2} + \frac12\sum_{\tau=1}^t\gamma_\tau^2\|F(z_\tau)\|_*^2\right]; \tag{!}$$
this inequality holds true for all (u, v) ∈ X × Y. Now, recalling the origin of F_x and F_y, we have
$$\langle F_x(x_\tau,y_\tau), x_\tau - u\rangle \ge \phi(x_\tau,y_\tau) - \phi(u,y_\tau), \qquad \langle F_y(x_\tau,y_\tau), y_\tau - v\rangle \ge \phi(x_\tau,v) - \phi(x_\tau,y_\tau),$$
so that (!) implies that
$$\sum_{\tau=1}^t \nu_\tau\left[\phi(x_\tau,v) - \phi(u,y_\tau)\right] \le Q,$$
which, by convexity-concavity of φ, yields
$$\phi(x^t, v) - \phi(u, y^t) \le Q.$$
Now, the right hand side of the latter inequality is independent of (u, v) ∈ Z; taking the maximum over (u, v) ∈ Z in the left hand side of the inequality, we get (5.3.47). This inequality clearly implies (5.3.49). □

In our course, the role of Theorem 5.3.4 (which is quite useful in its own right) is to be a "draft version" of a much more powerful Theorem 5.5.1, and we postpone illustrations until the latter Theorem is in place.

5.3.6.3 Refinement
Similarly to the minimization case, Theorem 5.3.4 admits a refinement as follows:
Theorem 5.3.5 Let λ₁, λ₂, ... be positive reals such that (5.3.25) holds true, and let
$$z^t = \frac{\sum_{\tau=1}^t\lambda_\tau z_\tau}{\sum_{\tau=1}^t\lambda_\tau}.$$
Then z^t ∈ Z, and
$$\epsilon_{\mathrm{sad}}(z^t) \le \frac{\frac{\lambda_t}{\gamma_t}\Omega^2 + \sum_{\tau=1}^t\lambda_\tau\gamma_\tau\|F(z_\tau)\|_*^2}{2\sum_{\tau=1}^t\lambda_\tau}. \tag{5.3.50}$$
In particular, with the stepsizes
$$\gamma_t = \frac{\Omega}{\|F(z_t)\|_*\sqrt t}, \quad t = 1, 2, ... \tag{5.3.51}$$
and with
$$\lambda_t = \frac{t^p}{\|F(z_t)\|_*}, \quad t = 1, 2, ... \tag{5.3.52}$$
where p > −1/2, we ensure for t = 1, 2, ... that
$$\epsilon_{\mathrm{sad}}(z^t) \le C_p\,\frac{\Omega L_{\|\cdot\|}(F)}{\sqrt t}, \qquad L_{\|\cdot\|}(F) = \sup_{z\in Z}\|F(z)\|_* \tag{5.3.53}$$
with C_p depending solely and continuously on p > −1/2.


Proof. The inclusion z^t ∈ Z is evident. Processing (5.3.46) in exactly the same fashion as we processed (5.3.21) when proving Theorem 5.3.2, we get the relation
$$\sup_{u\in Z}\Lambda_t^{-1}\sum_{\tau=1}^t\lambda_\tau\langle F(z_\tau), z_\tau - u\rangle \le \frac{\frac{\lambda_t}{\gamma_t}\Omega^2 + \sum_{\tau=1}^t\lambda_\tau\gamma_\tau\|F(z_\tau)\|_*^2}{2\Lambda_t}, \qquad \Lambda_t = \sum_{\tau=1}^t\lambda_\tau.$$
Same as in the proof of Theorem 5.3.4, the left hand side of this inequality upper-bounds ε_sad(z^t), and we arrive at (5.3.50). The "In particular" part of the Theorem is straightforwardly implied by (5.3.50). □

5.3.7 Mirror Descent for Stochastic Minimization/Saddle Point problems


While what is on our agenda now is applicable to stochastic convex minimization and stochastic convex-concave saddle point problems, in the sequel, to save words, we focus on the saddle point case. This does no harm: convex minimization, stochastic and deterministic alike, is a special case of solving a convex-concave saddle point problem, the one where Y is a singleton.

5.3.7.1 Stochastic Minimization/Saddle Point problems


Suppose we need to solve a saddle point problem

min max φ(x, y) (SP)


x∈X y∈Y

under the same assumptions as above (i.e., X, Y are convex compact sets in Euclidean spaces R^{n_x}, R^{n_y}, and φ : Z = X × Y → R is convex-concave and Lipschitz continuous), but now, instead
of deterministic First Order oracle which reports the values of the associated with φ vector field
F , see (5.3.44), we have at our disposal a stochastic First Order oracle – a routine which reports
unbiased random estimates of F rather than the exact values of F . A convenient model of the
oracle is as follows: at t-th call to the oracle, the query point being z ∈ Z = X × Y , the oracle
returns a vector G = G(z, ξt ) which is a deterministic function of z and of t-th realization of
oracle’s noise ξt . We assume the noises ξ1 , ξ2 , ... to be independent of each other and identically
distributed: ξt ∼ P , t = 1, 2, .... We always assume that the oracle is unbiased:

Eξ∼P {G(z, ξ)} =: F (z) = [Fx (z); Fy (z)] ∈ ∂x φ(z) × ∂y [−φ(z)] (5.3.54)

(thus, Fx (x, y) is a subgradient of φ(·, y) taken at the point x, while Fy (x, y) is a subgradient of
−φ(x, ·) taken at the point y). Besides this, we assume that the second moment of the stochastic
subgradient is bounded uniformly in z ∈ Z:

$$\sup_{z\in Z} E_{\xi\sim P}\left\{\|G(z,\xi)\|_*^2\right\} < \infty. \tag{5.3.55}$$

Note that when Y is a singleton, our setting recovers the problem of minimizing a Lipschitz con-
tinuous convex function over a compact convex set in the case when instead of the subgradients
of the objective, unbiased random estimates of these subgradients are available.

5.3.7.2 Stochastic Saddle Point Mirror Descent algorithm


It turns out that the MD algorithm admits a stochastic version capable to solve (SP) in the
outlined “stochastic environment.” The setup for the algorithm is a usual proximal setup —
a norm k · k on the space Rnx × Rny embedding Z = X × Y and a compatible with this
norm continuously differentiable DGF ω(·) for Z. The Stochastic Saddle Point Mirror Descent
(SSPMD) algorithm is identical to its deterministic counterpart; it is the recurrence
" t #−1 t
X X
t
z1 = zω , zt+1 = Proxzt (γt G(zt , ξt )), z = γτ γτ z τ , (5.3.56)
τ =1 τ =1

where γt > 0 are deterministic stepsizes. This is nothing but the recurrence (5.3.45), with the
only difference that now this recurrence “is fed” by the estimates G(zt , ξt ) of the vectors F (zt )
participating in (5.3.45). Surprisingly, the performance of SSPMD is, essentially, the same as
the performance of its deterministic counterpart:

Theorem 5.3.6 One has
$$E_{[\xi_1;...;\xi_t]\sim P\times...\times P}\left\{\epsilon_{\mathrm{sad}}(z^t)\right\} \le \frac{7}{2}\cdot\frac{\Omega^2 + \sigma^2\sum_{\tau=1}^t\gamma_\tau^2}{\sum_{\tau=1}^t\gamma_\tau}, \qquad \sigma^2 = \sup_{z\in Z}E_{\xi\sim P}\{\|G(z,\xi)\|_*^2\}. \tag{5.3.57}$$
In particular, given a number T of iterations we intend to perform and setting
$$\gamma_t = \frac{\Omega}{\sigma\sqrt T}, \quad 1\le t\le T, \tag{5.3.58}$$
we ensure that
$$E\{\epsilon_{\mathrm{sad}}(z^T)\} \le \frac{7\,\Omega\sigma}{\sqrt T}. \tag{5.3.59}$$
Proof. 1⁰. By exactly the same reasons as in the deterministic case, we have for every u ∈ Z:
$$\sum_{\tau=1}^t\gamma_\tau\langle G(z_\tau,\xi_\tau), z_\tau - u\rangle \le \frac{\Omega^2}{2} + \frac12\sum_{\tau=1}^t\gamma_\tau^2\|G(z_\tau,\xi_\tau)\|_*^2.$$

Setting
$$\Delta_\tau = G(z_\tau,\xi_\tau) - F(z_\tau), \tag{5.3.60}$$
we can rewrite the above relation as
$$\sum_{\tau=1}^t\gamma_\tau\langle F(z_\tau), z_\tau - u\rangle \le \frac{\Omega^2}{2} + \frac12\sum_{\tau=1}^t\gamma_\tau^2\|G(z_\tau,\xi_\tau)\|_*^2 + \underbrace{\sum_{\tau=1}^t\gamma_\tau\langle\Delta_\tau, u - z_\tau\rangle}_{D(\xi^t,u)}, \tag{5.3.61}$$

where ξ^t = [ξ₁; ...; ξ_t]. It follows that
$$\max_{u\in Z}\sum_{\tau=1}^t\gamma_\tau\langle F(z_\tau), z_\tau - u\rangle \le \frac{\Omega^2}{2} + \frac12\sum_{\tau=1}^t\gamma_\tau^2\|G(z_\tau,\xi_\tau)\|_*^2 + \max_{u\in Z}D(\xi^t,u).$$

As we have seen when proving Theorem 5.3.4, the left hand side in the latter relation is $\ge \left[\sum_{\tau=1}^t\gamma_\tau\right]\epsilon_{\mathrm{sad}}(z^t)$, so that
$$\left[\sum_{\tau=1}^t\gamma_\tau\right]\epsilon_{\mathrm{sad}}(z^t) \le \frac{\Omega^2}{2} + \frac12\sum_{\tau=1}^t\gamma_\tau^2\|G(z_\tau,\xi_\tau)\|_*^2 + \max_{u\in Z}D(\xi^t,u). \tag{5.3.62}$$

Now observe that by construction of the method z_τ is a deterministic function of ξ^{τ−1}: z_τ = Z_τ(ξ^{τ−1}), where Z_τ takes its values in Z. Due to this observation and to the fact that the γ_τ are deterministic, taking expectations of both sides in (5.3.62) w.r.t. ξ^t we get
$$\left[\sum_{\tau=1}^t\gamma_\tau\right]E\{\epsilon_{\mathrm{sad}}(z^t)\} \le \frac{\Omega^2}{2} + \frac{\sigma^2}{2}\sum_{\tau=1}^t\gamma_\tau^2 + E\left\{\max_{u\in Z}D(\xi^t,u)\right\}; \tag{5.3.63}$$

note that in this computation we bounded E{‖G(z_τ, ξ_τ)‖_*²}, τ ≤ t, as follows:
$$E\left\{\|G(z_\tau,\xi_\tau)\|_*^2\right\} = E_{\xi^t}\left\{\|G(Z_\tau(\xi^{\tau-1}),\xi_\tau)\|_*^2\right\} = E_{\xi^{\tau-1}}\Big\{\underbrace{E_{\xi_\tau}\left\{\|G(Z_\tau(\xi^{\tau-1}),\xi_\tau)\|_*^2\right\}}_{\le\sigma^2}\Big\} \le \sigma^2.$$

2⁰. It remains to bound E{max_{u∈Z} D(ξ^t, u)}, which is done in the following two lemmas:


Lemma 5.3.1 For 1 ≤ τ ≤ t, let v_τ be a deterministic vector-valued function of ξ^{τ−1}: v_τ = V_τ(ξ^{τ−1}), taking values in Z. Then
$$E\left\{\Big|\sum_{\tau=1}^t\underbrace{\gamma_\tau\langle\Delta_\tau, v_\tau - z_1\rangle}_{\psi_\tau}\Big|\right\} \le 2\sigma\Omega\sqrt{\sum_{\tau=1}^t\gamma_\tau^2}. \tag{5.3.64}$$


Proof. Setting S₀ = 0 and $S_\tau = \sum_{s=1}^\tau\gamma_s\langle\Delta_s, v_s - z_1\rangle$, for τ ≥ 1 we have:
$$E\{S_\tau^2\} = E\{S_{\tau-1}^2 + 2S_{\tau-1}\psi_\tau + \psi_\tau^2\} \le E\{S_{\tau-1}^2\} + 4\sigma^2\Omega^2\gamma_\tau^2, \tag{5.3.65}$$
where the concluding ≤ comes from the following considerations:

a) By construction, S_{τ−1} depends solely on ξ^{τ−1}, while ψ_τ = γ_τ⟨G(Z_τ(ξ^{τ−1}), ξ_τ) − F(Z_τ(ξ^{τ−1})), V_τ(ξ^{τ−1}) − z₁⟩. Invoking (5.3.54), the expectation of ψ_τ w.r.t. ξ_τ is zero for all ξ^{τ−1}, whence the expectation of the product of S_{τ−1} (this quantity depends solely on ξ^{τ−1}) and ψ_τ is zero.

b) Since E_{ξ∼P}{G(z, ξ)} = F(z) for all z ∈ Z and due to the origin of σ, we have ‖F(z)‖_*² ≤ σ² for all z ∈ Z, whence E_{ξ∼P}{‖G(z,ξ) − F(z)‖_*²} ≤ 4σ², whence E{‖∆_τ‖_*²} ≤ 4σ² as well due to ∆_τ = G(Z_τ(ξ^{τ−1}), ξ_τ) − F(Z_τ(ξ^{τ−1})). Since v_τ takes its values in Z, we have ‖z₁ − v_τ‖² ≤ Ω², whence |ψ_τ| ≤ γ_τΩ‖∆_τ‖_*, and therefore E{ψ_τ²} ≤ 4σ²Ω²γ_τ².

From (5.3.65) it follows that $E\{S_t^2\} \le 4\sigma^2\Omega^2\sum_{\tau=1}^t\gamma_\tau^2$, whence
$$E\{|S_t|\} \le 2\sigma\Omega\sqrt{\sum_{\tau=1}^t\gamma_\tau^2}. \qquad\square$$

Lemma 5.3.2 We have
$$E\left\{\max_{u\in Z}D(\xi^t,u)\right\} \le 6\sigma\Omega\sqrt{\sum_{\tau=1}^t\gamma_\tau^2}. \tag{5.3.66}$$

Proof. Let ρ > 0. Consider an auxiliary recurrence as follows:
$$v_1 = z_1; \qquad v_{\tau+1} = \mathrm{Prox}_{v_\tau}(-\rho\gamma_\tau\Delta_\tau).$$
Recalling that ∆_τ is a deterministic function of ξ^τ, we see that v_τ is a deterministic function of ξ^{τ−1} taking values in Z. Same as at the first step of the proof of Theorem 5.3.4, the recurrence specifying v_τ implies that for every u ∈ Z it holds
$$\sum_{\tau=1}^t\rho\gamma_\tau\langle-\Delta_\tau, v_\tau - u\rangle \le \frac{\Omega^2}{2} + \frac{\rho^2}{2}\sum_{\tau=1}^t\gamma_\tau^2\|\Delta_\tau\|_*^2,$$
or, equivalently,
$$\sum_{\tau=1}^t\gamma_\tau\langle-\Delta_\tau, v_\tau - u\rangle \le \frac{\Omega^2}{2\rho} + \frac{\rho}{2}\sum_{\tau=1}^t\gamma_\tau^2\|\Delta_\tau\|_*^2,$$
whence
$$\begin{array}{rcl}
D(\xi^t, u) &=& \sum_{\tau=1}^t\gamma_\tau\langle-\Delta_\tau, z_\tau - u\rangle \;\le\; \sum_{\tau=1}^t\gamma_\tau\langle-\Delta_\tau, v_\tau - u\rangle + \sum_{\tau=1}^t\gamma_\tau\langle-\Delta_\tau, z_\tau - v_\tau\rangle\\
&\le& Q_\rho(\xi^t) := \frac{\Omega^2}{2\rho} + \frac{\rho}{2}\sum_{\tau=1}^t\gamma_\tau^2\|\Delta_\tau\|_*^2 + \sum_{\tau=1}^t\gamma_\tau\langle\Delta_\tau, v_\tau - z_1\rangle + \sum_{\tau=1}^t\gamma_\tau\langle\Delta_\tau, z_1 - z_\tau\rangle.
\end{array}$$
Note that Q_ρ(ξ^t) indeed depends solely on ξ^t, so that the resulting inequality says that
$$\max_{u\in Z}D(\xi^t,u) \le Q_\rho(\xi^t) \;\Rightarrow\; E\left\{\max_{u\in Z}D(\xi^t,u)\right\} \le E\{Q_\rho(\xi^t)\}.$$
Further, by construction both v_τ and z_τ are deterministic functions of ξ^{τ−1} taking values in Z; invoking Lemma 5.3.1 and taking into account that E{‖∆_τ‖_*²} ≤ 4σ², we get
$$E\{Q_\rho(\xi^t)\} \le \frac{\Omega^2}{2\rho} + 2\rho\sigma^2\sum_{\tau=1}^t\gamma_\tau^2 + 4\sigma\Omega\sqrt{\sum_{\tau=1}^t\gamma_\tau^2},$$
whence
$$E\left\{\max_{u\in Z}D(\xi^t,u)\right\} \le \frac{\Omega^2}{2\rho} + 2\rho\sigma^2\sum_{\tau=1}^t\gamma_\tau^2 + 4\sigma\Omega\sqrt{\sum_{\tau=1}^t\gamma_\tau^2}.$$
The resulting inequality holds true for every ρ > 0; minimizing its right hand side in ρ > 0, we arrive at (5.3.66). □

3⁰. Relation (5.3.66) implies that
$$E\left\{\max_{u\in Z}D(\xi^t,u)\right\} \le 3\Omega^2 + 3\sigma^2\sum_{\tau=1}^t\gamma_\tau^2,$$
which combines with (5.3.63) to yield (5.3.57). The latter relation, in turn, immediately implies the "in particular" part of the Theorem. □
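To make the stochastic oracle concrete, here is a sketch of SSPMD on an assumed toy instance (the bilinear game φ(x, y) = yᵀAx on a product of simplexes, anticipating the matrix-game illustration below): sampling a single row or column of A gives an unbiased estimate G of F, and the entropy prox steps are multiplicative updates.

```python
import numpy as np

rng = np.random.default_rng(7)
A = np.array([[0., 1., -1.],
              [-1., 0., 1.],
              [1., -1., 0.]])              # rock-paper-scissors, phi(x, y) = y^T A x
n, T = 3, 8000
gamma = 1.0 / np.sqrt(T)                   # constant stepsizes, cf. (5.3.58)

def eps_sad(x, y):
    return (A @ x).max() - (A.T @ y).min()

x = np.array([0.8, 0.1, 0.1]); y = x.copy()
xbar = np.zeros(n); ybar = np.zeros(n)
for t in range(T):
    xbar += x / T; ybar += y / T           # z^T: average of the query points, cf. (5.3.56)
    j = rng.choice(n, p=y)                 # E_j[A[j, :]] = A^T y, unbiased G_x
    i = rng.choice(n, p=x)                 # E_i[-A[:, i]] = -A x, unbiased G_y
    Gx, Gy = A[j, :], -A[:, i]
    x = x * np.exp(-gamma * Gx); x /= x.sum()   # stochastic entropy prox steps
    y = y * np.exp(-gamma * Gy); y /= y.sum()

assert eps_sad(xbar, ybar) < 0.4           # E{eps_sad(z^T)} = O(Omega*sigma/sqrt(T))
```

Each step touches only one row and one column of A, which is exactly the kind of complexity reduction via randomization discussed in the illustration that follows.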

5.3.7.3 Refinement
Same as its deterministic counterpart, Theorem 5.3.6 admits a refinement as follows:

Theorem 5.3.7 Let λ₁, λ₂, ... be deterministic reals satisfying (5.3.25), and let
$$z^t = \frac{\sum_{\tau=1}^t\lambda_\tau z_\tau}{\sum_{\tau=1}^t\lambda_\tau}.$$
Then
$$E_{[\xi_1;...;\xi_t]\sim P\times...\times P}\left\{\epsilon_{\mathrm{sad}}(z^t)\right\} \le \frac{7}{2}\cdot\frac{\frac{\lambda_t}{\gamma_t}\Omega^2 + \sigma^2\sum_{\tau=1}^t\lambda_\tau\gamma_\tau}{\Lambda_t}, \qquad \sigma^2 = \sup_{z\in Z}E_{\xi\sim P}\{\|G(z,\xi)\|_*^2\}, \quad \Lambda_t = \sum_{\tau=1}^t\lambda_\tau. \tag{5.3.67}$$
In particular, setting
$$\gamma_t = \frac{\Omega}{\sigma\sqrt t}, \quad t = 1, 2, ... \tag{5.3.68}$$
and
$$\lambda_t = t^p, \quad t = 1, 2, ... \tag{5.3.69}$$
where p > −1/2, we ensure for every t = 1, 2, ... the relation
$$E\{\epsilon_{\mathrm{sad}}(z^t)\} \le C_p\,\frac{\Omega\sigma}{\sqrt t}, \tag{5.3.70}$$
with C_p depending solely and continuously on p > −1/2.

Proof. 1⁰. Same as in the proof of Theorem 5.3.5, we have
$$\forall(u\in Z):\quad \sum_{\tau=1}^t\lambda_\tau\langle G(z_\tau,\xi_\tau), z_\tau - u\rangle \le \frac{\frac{\lambda_t}{\gamma_t}\Omega^2 + \sum_{\tau=1}^t\lambda_\tau\gamma_\tau\|G(z_\tau,\xi_\tau)\|_*^2}{2}.$$

Setting
$$\Delta_\tau = G(z_\tau,\xi_\tau) - F(z_\tau),$$
we can rewrite the above relation as
$$\sum_{\tau=1}^t\lambda_\tau\langle F(z_\tau), z_\tau - u\rangle \le \frac{\frac{\lambda_t}{\gamma_t}\Omega^2 + \sum_{\tau=1}^t\lambda_\tau\gamma_\tau\|G(z_\tau,\xi_\tau)\|_*^2}{2} + \underbrace{\sum_{\tau=1}^t\lambda_\tau\langle\Delta_\tau, u - z_\tau\rangle}_{D(\xi^t,u)}, \tag{5.3.71}$$

where ξ^t = [ξ₁; ...; ξ_t]. It follows that
$$\max_{u\in Z}\sum_{\tau=1}^t\lambda_\tau\langle F(z_\tau), z_\tau - u\rangle \le \frac{\frac{\lambda_t}{\gamma_t}\Omega^2 + \sum_{\tau=1}^t\lambda_\tau\gamma_\tau\|G(z_\tau,\xi_\tau)\|_*^2}{2} + \max_{u\in Z}D(\xi^t,u).$$
As we have seen when proving Theorem 5.3.4, the left hand side in the latter relation is ≥ Λ_t ε_sad(z^t), so that
$$\Lambda_t\,\epsilon_{\mathrm{sad}}(z^t) \le \frac{\frac{\lambda_t}{\gamma_t}\Omega^2 + \sum_{\tau=1}^t\lambda_\tau\gamma_\tau\|G(z_\tau,\xi_\tau)\|_*^2}{2} + \max_{u\in Z}D(\xi^t,u). \tag{5.3.72}$$

Same as in the proof of Theorem 5.3.6, the latter relation implies that
$$\Lambda_t\,E\{\epsilon_{\mathrm{sad}}(z^t)\} \le \frac{\frac{\lambda_t}{\gamma_t}\Omega^2 + \sigma^2\sum_{\tau=1}^t\lambda_\tau\gamma_\tau}{2} + E\left\{\max_{u\in Z}D(\xi^t,u)\right\}. \tag{5.3.73}$$

2⁰. Applying Lemma 5.3.2 with λ_τ in the role of γ_τ, we get the first inequality in the following chain:
$$E\left\{\max_{u\in Z}D(\xi^t,u)\right\} \le 6\sigma\Omega\sqrt{\sum_{\tau=1}^t\lambda_\tau^2} \le 3\left[\frac{\lambda_t}{\gamma_t}\Omega^2 + \frac{\gamma_t}{\lambda_t}\,\sigma^2\sum_{\tau=1}^t\lambda_\tau^2\right] \le 3\left[\frac{\lambda_t}{\gamma_t}\Omega^2 + \sigma^2\sum_{\tau=1}^t\lambda_\tau\gamma_\tau\right], \tag{5.3.74}$$
where the concluding inequality is due to (γ_t/λ_t)λ_τ² ≤ λ_τγ_τ for τ ≤ t (recall that (5.3.25) holds true). (5.3.74) combines with (5.3.73) to imply (5.3.67). The latter relation straightforwardly implies the "In particular" part of the Theorem. □
the “In particular” part of Theorem. 2

Illustration: Matrix Game and ℓ₁ minimization with uniform fit. "In the nature" there exist "genuine" stochastic minimization/saddle point problems, those where the cost function is given as the expectation
$$\phi(x,y) = \int \Phi(x,y,\xi)\,dP(\xi)$$
with efficiently computable integrand Φ and a distribution P of ξ we can sample from. This is what happens when the actual cost is affected by random factors (prices, demands, etc.). In such a situation, assuming Φ convex-concave and, say, continuously differentiable in x, y for every ξ, the vector G(x, y, ξ) = [∂Φ(x,y,ξ)/∂x; −∂Φ(x,y,ξ)/∂y] under minimal regularity assumptions
satisfies (5.3.54) and (5.3.55). Thus, we can mimic a stochastic first order oracle for the cost
function by generating a realization ξt ∼ P and returning the vector G(x, y, ξt ) (t is the serial
number of the call to the oracle, (x, y) is the corresponding query point). Consequently, we can
solve the saddle point problem by the SSPMD algorithm. Note that typically in the situation in question it is extremely difficult, if at all possible, to compute φ in a closed analytic form and thus to mimic a deterministic first order oracle for φ. We may say that in “genuine” stochastic saddle point problems, the necessity in stochastic oracle-oriented algorithms, like SSPMD, is genuine as well. This being said, we intend to illustrate the potential of stochastic oracles and associated algorithms by an example of a completely different flavour, where the saddle point problem to be solved is a simple-looking deterministic one, and the rationale for randomization is the desire to reduce its computational complexity.
The problem we are interested in is a bilinear saddle point problem on the direct product of two simplexes:

    min_{x∈∆n} max_{y∈∆m} φ(x, y) := aᵀx + yᵀAx + bᵀy.

Note that due to the structure of ∆n and ∆m , we can reduce the situation to the one where a = 0, b = 0; indeed, it suffices to replace the original A with the matrix

    A + 1m aᵀ + b 1nᵀ,

where 1k = [1; ...; 1] ∈ Rᵏ. By this reason, and in order to simplify notation, we pose the problem of interest as

    min_{x∈∆n} max_{y∈∆m} φ(x, y) := yᵀAx.    (5.3.75)

This problem arises in several applications, e.g., in matrix games and ℓ1 minimization under ℓ∞-loss.

Matrix game interpretation of (5.3.75) (this problem is usually even called “matrix game”) is as follows:

    There are two players, each selecting his action from a given finite set; w.l.o.g., we can assume that the actions of the first player form the set J = {1, ..., n}, and the actions of the second player form the set I = {1, ..., m}. If the first player chooses an action j ∈ J, and the second – an action i ∈ I, the first pays to the second aij , where the matrix A = [aij ]_{1≤i≤m, 1≤j≤n} is known in advance to both players. What should rational players (interested to reduce their losses and to increase their gains) do?

The answer is easy when the matrix has a saddle point – there exists (i∗ , j∗ ) such that the entry ai∗,j∗ is the minimal one in its row and the maximal one in its column. You can immediately verify that in this case the maximal entry in every column is at least ai∗,j∗ , and the minimal entry in every row is at most ai∗,j∗ , meaning that when the first player chooses some action j, the second one can take from him at least ai∗,j∗ , and if the second player chooses an action i, the first can pay to the second at most ai∗,j∗ . The bottom line is that rational players should stick to the choices j∗ and i∗ ; in this case, neither of them has an incentive to change his choice, provided that the adversary sticks to his.

We see that if A has a saddle point, the rational players should use its components as their choices. Now, what to do if, as is typically the case, A has no saddle point? In this case there is no well-defined solution to the game as it was posed – there is no pair (i∗ , j∗ ) such that the rational players will use, as their respective choices, i∗ and j∗ . However, there still is a meaningful definition of a solution to a slightly modified game – one in mixed strategies, due to von Neumann and Morgenstern. Specifically, assume that the players play the game not just once, but many times, and are interested in their average, over time, losses. In this case, they can stick to randomized choices, where at the t-th tour, the first player draws his choice from J according to some probability distribution on J (this distribution is just a vector x ∈ ∆n , called the mixed strategy of the first player), and the second player draws his choice according to some probability distribution on I (represented by a vector y ∈ ∆m , the mixed strategy of the second player). In this case, assuming the draws independent of each other, the expected per-tour loss of the first player (which is also the expected per-tour gain of the second player) is yᵀAx. Rational players would now look for a saddle point of φ(x, y) = yᵀAx on the domain ∆n × ∆m comprised of pairs of mixed strategies, and in this domain the saddle point does exist.

ℓ1 minimization with ℓ∞ fit is the problem min_{ξ:‖ξ‖1≤1} ‖Bξ − b‖∞ , or, in the saddle point form, min_{‖ξ‖1≤1} max_{‖η‖1≤1} ηᵀ[Bξ − b]. Representing ξ = p − q, η = r − s with nonnegative p, q, r, s, the problem can be rewritten in terms of x = [p; q], y = [r; s], specifically, as

    min_{x∈∆n} max_{y∈∆m} [ yᵀAx − yᵀa ],    A = [ B, −B; −B, B ],  a = [b; −b],

where n = 2 dim ξ, m = 2 dim η. As we remember, the resulting problem can be represented in the form of (5.3.75).
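As a quick sanity check of this reformulation (a sketch with made-up dimensions; the variable names are ours, not from the text), one can verify numerically that the substitution reproduces ηᵀ[Bξ − b]:

```python
import numpy as np

# Verify that with xi = p - q, eta = r - s (p, q, r, s >= 0),
# x = [p; q], y = [r; s], A = [B, -B; -B, B], a = [b; -b], one has
# eta^T (B xi - b) = y^T A x - y^T a.
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 4))        # dim(eta) = 3, dim(xi) = 4
b = rng.standard_normal(3)

p, q = rng.random(4), rng.random(4)
r, s = rng.random(3), rng.random(3)
xi, eta = p - q, r - s
x, y = np.concatenate([p, q]), np.concatenate([r, s])

A = np.block([[B, -B], [-B, B]])       # (2*3) x (2*4) game matrix
a = np.concatenate([b, -b])

lhs = eta @ (B @ xi - b)
rhs = y @ (A @ x) - y @ a
```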

Problem (5.3.75) is a nice bilinear saddle point problem. The best known so far algorithms for solving large-scale instances of this problem within a medium accuracy find an ε-solution (x_ε , y_ε ),

    εsad(x_ε , y_ε ) ≤ ε,

in O(1)√(ln(m) ln(n)) ‖A‖∞/ε steps, where ‖A‖∞ is the maximal magnitude of entries in A. A single step requires O(1) matrix-vector multiplications involving the matrices A and Aᵀ, plus O(1)(m + n) operations of computational overhead. Thus, the overall arithmetic cost of an ε-solution is

    Cdet = O(1)√(ln(m) ln(n)) M ‖A‖∞/ε

arithmetic operations (a.o.), where M is the maximum of m + n and the arithmetic cost of a pair of matrix-vector multiplications, one involving A and the other one involving Aᵀ. The outlined bound is achieved, e.g., by the Mirror Prox algorithm with Simplex setup, see section 5.5.2. For dense m × n matrices A with no specific structure, the quantity M is O(mn), so that the above complexity bound is quadratic in m, n and can become prohibitively large when m, n are really large. Could we do better? The answer is “yes” — we can replace the precise matrix-vector multiplication, time-consuming in the large-scale case, by its computationally cheap randomized version.

A stochastic oracle for matrix-vector multiplication can be built as follows.

    In order to estimate Bx, we can treat the vector (1/‖x‖1) abs[x], where abs[x] is the vector of magnitudes of entries in x, as a probability distribution on the set J of indexes of columns in B, draw from this distribution a random index ȷ and return the vector G = ‖x‖1 sign(xȷ) Colȷ[B]. We clearly have

        E{G} = Bx

    — our oracle indeed is unbiased.

If you think of how you could implement this procedure, you immediately realize that all you need is access to a generator producing a uniformly distributed on [0, 1] random number ξ; the output of the outlined stochastic oracle will be a deterministic function of x and ξ, as required in our model of stochastic oracle. Moreover, the number of operations needed to build G, modulo generating ξ, is dominated by the dim x operations needed to generate ȷ and the effort needed to extract from B a column given its number, which we from now on will assume to be O(1)m, m being the dimension of the column. Thus, the arithmetic cost of calling the oracle in the case of an m × n matrix B is O(1)(m + n) a.o. (note that for all practical purposes, generating ξ takes just O(1) a.o.). Note also that the output G of our stochastic oracle satisfies

    ‖G‖∞ ≤ ‖x‖1 ‖B‖∞ ,   ‖B‖∞ = max_{i,j} |Bij |.
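A minimal sketch of this oracle (function names are ours, not from the text); besides drawing one sample, the snippet checks unbiasedness exactly by averaging the oracle's outputs over all possible draws with their probabilities:

```python
import numpy as np

def column_estimate(B, x, j):
    # Oracle's output when column index j is drawn: ||x||_1 * sign(x_j) * Col_j[B]
    return np.abs(x).sum() * np.sign(x[j]) * B[:, j]

def sample_Bx(B, x, rng):
    # Draw j with probability |x_j| / ||x||_1, return the rescaled signed column.
    probs = np.abs(x) / np.abs(x).sum()
    j = rng.choice(len(x), p=probs)
    return column_estimate(B, x, j)

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 7))
x = rng.standard_normal(7)
G = sample_Bx(B, x, rng)            # one call: O(m) work, not O(mn)

# Exact expectation over the 7 possible outcomes must reproduce Bx.
probs = np.abs(x) / np.abs(x).sum()
mean_G = sum(probs[j] * column_estimate(B, x, j) for j in range(7))
```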

Now, the vector field F (x, y) associated with the bilinear saddle point problem (5.3.75) is

    F (x, y) = [Fx (y) = Aᵀy; Fy (x) = −Ax] = 𝒜[x; y],   𝒜 = [ 0, Aᵀ; −A, 0 ].

We see that computing F (x, y) at a point (x, y) ∈ ∆n × ∆m can be randomized — we can get an unbiased estimate G(x, y, ξ) of F (x, y) at the cost of O(1)(m + n) a.o., and the estimate satisfies the condition

    ‖G(x, y, ξ)‖∞ ≤ ‖A‖∞ .    (5.3.76)

Indeed, in order to get G given (x, y) ∈ ∆n × ∆m , we treat x as a probability distribution on {1, ..., n}, draw an index ȷ from this distribution and extract from A the corresponding column Colȷ[A]. Similarly, we treat y as a probability distribution on {1, ..., m}, draw from this distribution an index ı and extract from Aᵀ the corresponding column Colı[Aᵀ]. The formula for G is just

    G = [Colı[Aᵀ]; −Colȷ[A]].
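In code, the oracle for F may look as follows (a sketch; for clarity we verify the expectation by direct enumeration, which of course is only feasible for tiny m, n):

```python
import numpy as np

def sample_F(A, x, y, rng):
    # x in Delta_n and y in Delta_m are themselves probability distributions;
    # draw a column index from x, a row index from y, and assemble
    # G = [Col_i[A^T]; -Col_j[A]] -- an O(m+n) unbiased estimate of F(x, y).
    m, n = A.shape
    j = rng.choice(n, p=x)
    i = rng.choice(m, p=y)
    return np.concatenate([A[i, :], -A[:, j]])

rng = np.random.default_rng(2)
m, n = 4, 6
A = rng.random((m, n))
x = np.full(n, 1.0 / n)
y = np.full(m, 1.0 / m)

G = sample_F(A, x, y, rng)
F = np.concatenate([A.T @ y, -A @ x])    # exact field, for comparison
# Exact expectation over all (i, j) pairs, weighted by y_i * x_j:
mean_G = sum(y[i] * x[j] * np.concatenate([A[i, :], -A[:, j]])
             for i in range(m) for j in range(n))
```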

5.3.7.4 Solving (5.3.75) via Stochastic Saddle Point Mirror Descent.

A natural setup for SSPMD as applied to (5.3.75) is obtained by combining the Entropy setups for X = ∆n and Y = ∆m . Specifically, let us define the norm ‖ · ‖ on the embedding space Rᵐ⁺ⁿ of ∆n × ∆m as

    ‖[x; y]‖ = √( α‖x‖1² + β‖y‖1² ),

and the DGF ω(x, y) for Z = ∆n × ∆m as

    ω(x, y) = (1 + δ)α Σ_{j=1}^n (xj + δ/n) ln(xj + δ/n) + (1 + δ)β Σ_{i=1}^m (yi + δ/m) ln(yi + δ/m),   [δ = 10⁻¹⁶]

where α, β are positive weight coefficients. With this choice of the norm and the DGF, we clearly ensure that ω is strongly convex, modulus 1, with respect to ‖ · ‖, while the ω-diameter of Z = ∆n × ∆m is given by

    Ω² = O(1)[α ln(n) + β ln(m)],

provided that m, n ≥ 3, which we assume from now on. Finally, the norm conjugate to ‖ · ‖ is

    ‖[ξ; η]‖∗ = √( α⁻¹‖ξ‖∞² + β⁻¹‖η‖∞² )

(why?), so that the quantity σ defined in (5.3.57) is, in view of (5.3.76), bounded from above by

    σ̂ = ‖A‖∞ √( 1/α + 1/β ).

It makes sense to choose α, β in order to minimize the error bound (5.3.59), or, which is the same, to minimize Ωσ. One of the optimizers (they clearly are unique up to proportionality) is

    α = 1 / ( √ln(n) [√ln(n) + √ln(m)] ),   β = 1 / ( √ln(m) [√ln(n) + √ln(m)] ),

which results in

    Ω = O(1),   Ωσ = O(1) √ln(mn) ‖A‖∞ .

As a result, the efficiency estimate (5.3.59) takes the form

    E{εsad(xᵀ, yᵀ)} ≤ 6Ωσ/√T = O(1) √ln(mn) ‖A‖∞/√T .    (5.3.77)
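A small sketch of this weight choice (our function name); by construction α ln(n) + β ln(m) = 1, so Ω = O(1), and 1/α + 1/β = (√ln n + √ln m)², which is O(ln(mn)):

```python
import math

def entropy_weights(n, m):
    # Weights alpha, beta from the display above.
    sn, sm = math.sqrt(math.log(n)), math.sqrt(math.log(m))
    alpha = 1.0 / (sn * (sn + sm))
    beta = 1.0 / (sm * (sn + sm))
    return alpha, beta

n, m = 10000, 500
alpha, beta = entropy_weights(n, m)
omega_sq = alpha * math.log(n) + beta * math.log(m)   # identically 1
inv_sum = 1.0 / alpha + 1.0 / beta                    # = (sn + sm)^2
```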

Discussion. The number of steps T needed to make the right hand side in the bound (5.3.77) at most a desired ε ≤ ‖A‖∞ is

    N(ε) = O(1) ln(mn) ( ‖A‖∞/ε )²,

and the total arithmetic cost of these N(ε) steps is

    Crand = O(1) ln(mn) (m + n) ( ‖A‖∞/ε )²  a.o.

Recall that for a dense general-type matrix A, to solve (5.3.75) within accuracy ε deterministically costs

    Cdet(ε) = O(1) √(ln(m) ln(n)) mn ‖A‖∞/ε  a.o.

We see that as far as the dependence of the arithmetic cost of an ε-solution on the desired relative accuracy δ := ε/‖A‖∞ is concerned, the deterministic algorithm significantly outperforms the randomized one. At the same time, the dependence of the cost on m, n is much better for the randomized algorithm. As a result, when a moderate and fixed relative accuracy is sought and m, n are large, the randomized algorithm by far outperforms the deterministic one.

A remarkable property of the above result is as follows: in order to find a solution of a fixed relative accuracy δ, the randomized algorithm carries out O(ln(mn)δ⁻²) steps, and at every step inspects m + n entries of the matrix A (those in a randomly selected row and in a randomly selected column). Thus, the total number of data entries inspected by the algorithm in the course of building a solution of relative accuracy δ is O(ln(mn)(m + n)δ⁻²), which for large m, n is incomparably less than the total number mn of entries in A. We see that our algorithm demonstrates what is called sublinear time behaviour – it returns a good approximate solution by inspecting a negligible, for large m = O(n), and tending to zero as m = O(n) → ∞, fraction of the problem’s data⁵.

Assessing the quality. Strictly speaking, the outlined comparison of the deterministic and the randomized algorithms for (5.3.75) is not completely fair: with the deterministic algorithm, we know for sure that the solution we end up with indeed is an ε-solution to (5.3.75), while with the randomized algorithm we know only that the expected inaccuracy of our (random!) approximate solution is ≤ ε; nobody told us that the realization ẑ of this random solution that we actually get solves the problem within accuracy about ε. If we were able to quantify the actual accuracy of ẑ by computing εsad(ẑ), we could circumvent this difficulty as follows: let us choose the number T of steps in our randomized algorithm in such a way that the resulting expected inaccuracy is ≤ ε/2. After ẑ is built, we compute εsad(ẑ); if this quantity is ≤ ε, we terminate with an ε-solution to (5.3.75) at hand. Otherwise (this “otherwise” may happen with probability ≤ 1/2 only, since the inaccuracy is nonnegative, and the expected inaccuracy is ≤ ε/2) we rerun the algorithm, check the quality of the resulting solution, rerun the algorithm again when this quality still is worse than ε, and so on, until either an ε-solution is found, or a chosen in advance maximum number N of reruns is reached. With this approach, in order to ensure generating an ε-solution to the problem with probability at least 1 − β we need N = O(ln(1/β)); thus, even for pretty small β, N is just a moderate constant⁶, and the randomized algorithm still can outperform by far its 100%-reliable deterministic counterpart. The difficulty, however, is that in order to compute εsad(ẑ), we need to know F (z) exactly, since in our case

    εsad(x, y) = max_{i≤m} [Ax]i − min_{j≤n} [Aᵀy]j ,
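For reference, evaluating εsad exactly takes one pair of matrix-vector products (the helper name below is ours); this is cheap when done once, but, as noted above, doing it at every step would ruin the sublinear running time:

```python
import numpy as np

def eps_sad(A, x, y):
    # eps_sad(x, y) = max_i [Ax]_i - min_j [A^T y]_j for phi(x, y) = y^T A x
    return (A @ x).max() - (A.T @ y).min()

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])                  # a game whose value is 1/2
x_opt = np.array([0.5, 0.5])                # optimal mixed strategies
y_opt = np.array([0.5, 0.5])
x_bad = np.array([1.0, 0.0])                # a pure (suboptimal) strategy
y_bad = np.array([1.0, 0.0])
```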

⁵ The first sublinear time algorithm for matrix games was proposed by M. Grigoriadis and L. Khachiyan in 1995 [26]. In retrospect, their algorithm is pretty close, although not identical, to SSPMD; both algorithms share the same complexity bound. In [26] it is also proved that in the situation in question randomization is essential: no deterministic algorithm for solving matrix games can exhibit sublinear time behaviour.
⁶ E.g., N = 20 for β = 1.e-6 and N = 40 for β = 1.e-12; note that β = 1.e-12 is, for all practical purposes, the same as β = 0.

and even a single computation of F could be too expensive (and will definitely destroy the sublinear time behaviour). Can we circumvent somehow the difficulty? The answer is positive. Indeed, note that

A. For a general bilinear saddle point problem

       min_{x∈X} max_{y∈Y} [ aᵀx + bᵀy + yᵀAx ]

   the operator F is of a very special structure:

       F (x, y) = [Aᵀy + a; −Ax − b] = 𝒜[x; y] + c,

   where the matrix 𝒜 = [ 0, Aᵀ; −A, 0 ] is skew symmetric;

B. The randomized computation of F we have used in the case of (5.3.75) also is of a very specific structure, as follows: we associate with z ∈ Z = X × Y a probability distribution Pz which is supported on Z and is such that if ζ ∼ Pz , then E{ζ} = z, and our stochastic oracle works as follows: in order to produce an unbiased estimate of F (z), it draws a realization ζ from Pz and returns G = F (ζ).

       Specifically, in the case of (5.3.75), z = (x, y) ∈ ∆n × ∆m , and Px,y is the probability distribution on the mn pairs of vertices (ej , fi ) of the simplexes ∆n and ∆m (ej are the standard basic orths in Rⁿ, fi are the standard basic orths in Rᵐ) such that the probability of a pair (ej , fi ) is xj yi ; the expectation of this distribution is exactly z, and the expectation of F (ζ), ζ ∼ Px,y , is exactly F (z). The advantage of this particular distribution Pz is that it “sits” on pairs of highly sparse vectors (just one nonzero entry in every member of a pair), which makes it easy to compute F at a realization drawn from this distribution.

The point is that when A and B are met, we can modify the SSPMD algorithm, preserving its efficiency estimate and making F (z^t) readily available for every t. Specifically, at step τ of the algorithm, we have at our disposal a realization ζτ of the distribution Pzτ which we used to compute G(zτ , ξτ ) = F (ζτ ). Now let us replace the rule for generating approximate solutions, which originally was

    z^t = z̄^t := [ Σ_{τ=1}^t γτ ]⁻¹ Σ_{τ=1}^t γτ zτ ,

with the rule

    z^t = ẑ^t := [ Σ_{τ=1}^t γτ ]⁻¹ Σ_{τ=1}^t γτ ζτ .

Since ζτ ∼ Pzτ and the latter distribution is supported on Z, we still have z^t ∈ Z. Further, we have computed F (ζτ ) exactly, and since F is affine, we just have

    F (z^t) = [ Σ_{τ=1}^t γτ ]⁻¹ Σ_{τ=1}^t γτ F (ζτ ),

meaning that sequentially computing F (z^t) costs us just an additional O(m + n) a.o. per step and thus increases the overall complexity by just a close to 1 absolute constant factor. What is

not immediately clear, is why the new rule for generating approximate solutions preserves the efficiency estimate. The explanation is as follows (work it out in full detail by yourself): The proof of Theorem 5.3.6 started from the observation that for every u ∈ Z we have

    Σ_{τ=1}^t γτ ⟨G(zτ , ξτ ), zτ − u⟩ ≤ Bt (ξ^t)    (!)

with some particular Bt (ξ^t); we then verified that the supremum of the left hand side in this inequality in u ∈ Z is ≥ [ Σ_{τ=1}^t γτ ] εsad(z̄^t) − Ct (ξ^t) with certain explicit Ct (·). As a result, we got the inequality

    [ Σ_{τ=1}^t γτ ] εsad(z̄^t) ≤ Bt (ξ^t) + Ct (ξ^t),    (!!)

which, upon taking expectations, led us to the desired efficiency estimate. Now we should act as follows: it is immediately seen that (!) remains valid and reads

    Σ_{τ=1}^t γτ ⟨F (ζτ ), zτ − u⟩ ≤ Bt (ξ^t),

whence

    Σ_{τ=1}^t γτ ⟨F (ζτ ), ζτ − u⟩ ≤ Bt (ξ^t) + Dt (ξ^t),

where

    Dt (ξ^t) = Σ_{τ=1}^t γτ ⟨F (ζτ ), ζτ − zτ ⟩,

which, exactly as in the original proof, leads us to a slightly modified version of (!!), specifically, to

    [ Σ_{τ=1}^t γτ ] εsad(ẑ^t) ≤ Bt (ξ^t) + Ct (ξ^t) + Dt (ξ^t)

with the same as in the original proof upper bound on E{Bt (ξ^t) + Ct (ξ^t)}. When taking expectations of both sides of this inequality, we get exactly the same upper bound on E{εsad(·)} as in the original proof due to the following immediate observation:

    E{Dt (ξ^t)} = 0.

Indeed, by construction, ζτ is a deterministic function of ξ^τ, zτ is a deterministic function of ξ^{τ−1}, and the conditional, ξ^{τ−1} being fixed, distribution of ζτ is Pzτ . We now have

    ⟨F (ζτ ), ζτ − zτ ⟩ = ⟨𝒜ζτ + c, ζτ − zτ ⟩ = −⟨𝒜ζτ , zτ ⟩ + ⟨c, ζτ − zτ ⟩,

where we used the fact that 𝒜 is skew symmetric. Now, the quantity

    −⟨𝒜ζτ , zτ ⟩ + ⟨c, ζτ − zτ ⟩

is a deterministic function of ξ^τ, and among the terms participating in the latter formula the only one which indeed depends on ξτ is ζτ . Since the dependence of the quantity on ζτ is affine, the conditional, ξ^{τ−1} being fixed, expectation of it w.r.t. ξτ is

    −⟨𝒜 E_{ξτ}{ζτ}, zτ ⟩ + ⟨c, E_{ξτ}{ζτ} − zτ ⟩ = −⟨𝒜zτ , zτ ⟩ + ⟨c, zτ − zτ ⟩ = 0

(we again used the fact that 𝒜 is skew symmetric). Thus, E{⟨F (ζτ ), ζτ − zτ ⟩} = 0, whence E{Dt (ξ^t)} = 0 as well.
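The skew symmetry used twice above is easy to sanity-check numerically (a sketch; the block matrix below is the 𝒜 of item A):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 3, 5
A = rng.standard_normal((m, n))
# curly-A = [0, A^T; -A, 0] is skew symmetric, hence <curly-A z, z> = 0 for all z.
skew = np.block([[np.zeros((n, n)), A.T],
                 [-A, np.zeros((m, m))]])
z = rng.standard_normal(n + m)
inner = z @ (skew @ z)
```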

Operational Exercise 5.3.1 The “covering story” for your tasks is as follows:

    There are n houses in a city; the i-th house contains wealth wi . A burglar chooses a house, secretly arrives there and starts his business. A policeman also chooses his location alongside one of the houses; the probability for the policeman to catch the burglar is a known function of the distance d between their locations, say, exp{−cd} with some coefficient c > 0 (you may think, e.g., that when the burglary starts, an alarm sounds or 911 is called, and the chances of catching the burglar depend on how long it takes the policeman to arrive at the scene). This cat and mouse game takes place every night. The goal is to find numerically near-optimal mixed strategies of the burglar, who is interested in maximizing his expected “profit” [1 − exp{−cd(i, j)}]wi (i is the house chosen by the burglar, j is the house at which the policeman is located), and of the policeman, who has a completely opposite interest.

Remark: Note that the sizes of problems of the outlined type can be really huge. For example, let there be 1000 locations, and 3 burglars and 3 policemen instead of one burglar and one policeman. Now there are about 1000³/3! “locations” of the burglar side, and about 1000³/3! “locations” of the police side, which makes the size of A well above 10⁸.
The model of the situation is a matrix game where the n × n matrix A has the entries

    [1 − exp{−cd(i, j)}]wi ,

d(i, j) being the distance between locations i and j, 1 ≤ i, j ≤ n. In the matrix game associated with A, the policeman chooses a mixed strategy x ∈ ∆n , and the burglar chooses a mixed strategy y ∈ ∆n . Your task is to get an ε-approximate saddle point. We assume for the sake of definiteness that the houses are located at the nodes of a square k × k grid, and c = 2/D, where D is the maximal distance between a pair of houses. The distance d(i, j) is either the Euclidean distance ‖Pi − Pj ‖2 between the locations Pi , Pj of the i-th and j-th houses on the 2D plane, or d(i, j) = ‖Pi − Pj ‖1 (“Manhattan distance” corresponding to a rectangular road network).

Task: Solve the problem within accuracy ε around 1.e-3 for k = 100 (i.e., n = k² = 10000). You may play with

• “profile of wealth” w. For example, in my experiment reported on Fig. 5.1, I used wi = max[pi /(k − 1), qi /(k − 1)], where pi and qi are the coordinates of the i-th house (in my experiment the houses were located at the 2D points with integer coordinates p, q varying from 0 to k − 1). You are welcome to use other profiles as well.

• distance between houses – Euclidean or Manhattan (I used the Manhattan distance).

[Figure 5.1 shows five surface plots over the 100 × 100 grid: the wealth profile, the near-optimal policeman’s and burglar’s strategies found by SSPMD, and the optimal policeman’s and burglar’s strategies after refinement.]

Figure 5.1: My experiment: 100 × 100 square grid of houses, T = 50,000 steps of SSPMD, the stepsize policy (5.3.78) with α = 20. At the resulting solution (x̄, ȳ), max_{y∈∆n} [yᵀAx̄] = 0.5016 and min_{x∈∆n} [ȳᵀAx] = 0.4985, meaning that the saddle point value is in-between 0.4985 and 0.5016, while the sum of inaccuracies, in terms of the respective objectives, of the primal solution x̄ and the dual solution ȳ is at most εsad(x̄, ȳ) = 0.5016 − 0.4985 = 0.0031. It took 1857 sec to compute this solution on my laptop. It took just 3.7 sec more to find a refined solution (x̂, ŷ) such that εsad(x̂, ŷ) ≤ 9.e-9, and within 6 decimals after the dot one has max_{y∈∆n} [yᵀAx̂] = 0.499998 and min_{x∈∆n} [ŷᵀAx] = 0.499998. It follows that up to 6 accuracy digits, the saddle point value is 0.499998 (not exactly 0.5!).

• The number of steps T and the stepsize policy. For example, along with the constant stepsize policy (5.3.58), you can try other policies, e.g.,

      γt = αΩ/(σ√t),  1 ≤ t ≤ T,    (5.3.78)

  where α is an “acceleration coefficient” which you could tune experimentally (I used α = 20, without much tuning). What is the theoretical efficiency estimate for this policy?

• ∗ Looking at your approximate solution, how could you try to improve its quality at a low computational cost? Get an idea, implement it and see how it works.

You could also try to guess in advance what good mixed strategies of the players are, and then compare your guesses with the results of your computations.
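To get started, here is a minimal SSPMD sketch for a generic matrix game min_x max_y yᵀAx on ∆n × ∆m (all names and the toy 2 × 2 game are ours, not part of the exercise; the entropy prox step reduces to a multiplicative update, and the oracle samples one row and one column of A per step):

```python
import numpy as np

def sspmd(A, T, alpha=1.0, seed=0):
    m, n = A.shape
    rng = np.random.default_rng(seed)
    x, y = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    xbar, ybar, wsum = np.zeros(n), np.zeros(m), 0.0
    sigma = np.abs(A).max()
    for t in range(1, T + 1):
        gamma = alpha / (sigma * np.sqrt(t))   # stepsizes in the spirit of (5.3.78)
        xbar += gamma * x; ybar += gamma * y; wsum += gamma  # average query points
        j = rng.choice(n, p=x)                 # sampled column index ~ x
        i = rng.choice(m, p=y)                 # sampled row index ~ y
        gx, gy = A[i, :], -A[:, j]             # unbiased estimate of F(x, y)
        x = x * np.exp(-gamma * gx); x /= x.sum()  # entropy prox: multiplicative step
        y = y * np.exp(-gamma * gy); y /= y.sum()
    return xbar / wsum, ybar / wsum

A = np.array([[0.0, 1.0], [1.0, 0.0]])         # toy game, value 1/2
x_T, y_T = sspmd(A, T=5000)
eps = (A @ x_T).max() - (A.T @ y_T).min()      # exact eps_sad of the output
```

For the exercise itself, A would be the 10000 × 10000 catch-probability matrix, and one would sample rows/columns without ever forming A in full.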

5.3.8 Mirror Descent and Stochastic Online Regret Minimization


5.3.8.1 Stochastic online regret minimization: problem’s formulation
Consider a stochastic analog of the online regret minimization setup considered in section 5.3.5, specifically, as follows. We are given

1. a convex compact set X in Euclidean space E, with (X, E) equipped with proximal setup
– a norm k · k on E and a compatible with this norm continuously differentiable on X
DGF ω(·);

2. a family F of convex and Lipschitz continuous functions on X;

3. access to stochastic oracle as follows: at t-th call to the oracle, the input to the oracle
being a query point xt ∈ X and a function ft (·) ∈ F, the oracle returns vector Gt =
Gt [ft (·), xt , ξt ] and a real gt = gt [ft (·), xt , ξt ], where for every t Gt [f (·), x, ξ] and gt [f (·), x, ξ]
are deterministic functions of f ∈ F, x ∈ X and a realization ξ of “oracle’s noise.” It is
further assumed that

(a) {ξt ∼ P : t ≥ 0} is an i.i.d. sequence of random “oracle noises” (noise distribution P


is not known in advance);
(b) ft (·) ∈ F is selected “by nature” and depends solely on t and the collection ξ t−1 =
(ξ0 , ξ1 , ..., ξt−1 ) of oracle noises at steps preceding the step t:

ft (·) = Ft (ξ t−1 , ·),

where Ft (ξ^{t−1}, x) is a legitimate deterministic function of ξ^{t−1} and x ∈ X, with “legitimate” meaning that Ft (ξ^{t−1}, x) as a function of x belongs to F for all ξ^{t−1}. The sequence {Ft (·, ·) : t ≥ 1} is selected by nature prior to our actions. In the sequel, we refer to such a sequence as nature’s policy.
Finally, we assume that when the stochastic oracle is called at step t, it is we (the decision maker) who input to the stochastic oracle the query point xt , and it is the nature inputting ft (·) = Ft (ξ^{t−1}, ·); we do not see ft (·) directly; all we see is the stochastic information gt [ft (·), xt , ξt ], Gt [ft (·), xt , ξt ] on ft returned by the oracle.
(c) our actions x1 , x2 ,... are yielded by a non-anticipative decision making policy as
follows. At step t we have at our disposal the past search points xτ , τ < t, along with
the answers gτ , Gτ of the oracle queried at the inputs (xτ , Fτ (ξ τ −1 , ·)), 1 ≤ τ < t. xt
should be a deterministic function of t and of the tuple {xτ , gτ , Gτ : τ < t}; a non-
anticipative decision making policy is nothing but the collection of these deterministic
functions.
Note that with a non-anticipative decision making policy fixed, the query points xt
become deterministic functions of ξ t−1 , t = 1, 2, ...

Our crucial assumption (which is in force everywhere in this section) is that whatever be t, x and f (·) ∈ F, we have

    (a)   E_{ξ∼P} { gt [f (·), x, ξ] } = f (x),
    (b.1) E_{ξ∼P} { Gt [f (·), x, ξ] } ∈ ∂f (x),    (5.3.79)
    (b.2) E_{ξ∼P} { ‖Gt [f (·), x, ξ]‖∗² } ≤ σ²,

with some known in advance positive constant σ.


Finally, we assume that the toll we are paying in round t is exactly the quantity gt [ft (·), xt , ξt ], and that our goal is to devise a non-anticipative decision making policy which makes small, as N grows, the expected regret

    E_{ξ^N} { (1/N) Σ_{t=1}^N gt [Ft (ξ^{t−1}, ·), xt , ξt ] } − inf_{x∈X} E_{ξ^N} { (1/N) Σ_{t=1}^N gt [Ft (ξ^{t−1}, ·), x, ξt ] }
      = E_{ξ^N} { (1/N) Σ_{t=1}^N Ft (ξ^{t−1}, xt ) } − inf_{x∈X} E_{ξ^N} { (1/N) Σ_{t=1}^N Ft (ξ^{t−1}, x) }    (5.3.80)

(the equality here stems from (5.3.79.a) combined with the facts that xt is a deterministic function of ξ^{t−1} and that ξ0 , ξ1 , ξ2 , ... are i.i.d.). Note that we want to achieve the outlined goal whatever be nature’s policy, that is, whatever be a sequence {Ft (ξ^{t−1}, ·) : t ≥ 1} of legitimate deterministic functions.

5.3.8.2 Minimizing stochastic regret by MD


We have specified the problem of stochastic regret minimization; what is on our agenda now, is
to understand how this problem can be solved by Mirror Descent. We intend to use exactly the
same algorithm as in section 5.3.5:

1. Initialization: Select stepsizes {γt > 0}t≥1 which are nonincreasing in t and select x1 ∈ X.

2. Step t = 1, 2, .... Given xt , call the oracle, xt being the query point, pay the corresponding toll gt , observe Gt , set

       x_{t+1} = Prox_{xt}(γt Gt) := argmin_{x∈X} [ ⟨γt Gt − ω′(xt), x⟩ + ω(x) ],

   and pass to step t + 1.

Note that with the sequence {Ft (ξ^{t−1}, ·) : t ≥ 1} of legitimate deterministic functions selected by the nature fixed, the points xt are deterministic functions of ξ^{t−1}, t = 1, 2, .... Note also that the decision making policy yielded by the algorithm indeed is non-anticipative (and even does not require knowledge of the tolls gt ).
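For the Entropy setup on a simplex, the prox mapping above has a closed form: a multiplicative update followed by renormalization (a sketch with ω(x) = Σ_j xj ln xj ; the regularized DGF used earlier in the text differs by small perturbation terms):

```python
import numpy as np

def md_step(x, G, gamma):
    # Prox_x(gamma*G) for omega(x) = sum_j x_j ln x_j on the simplex:
    # the minimizer is proportional to x * exp(-gamma * G).
    w = x * np.exp(-gamma * G)
    return w / w.sum()

x = np.array([0.2, 0.3, 0.5])
x_next = md_step(x, np.array([1.0, 0.0, -1.0]), 0.1)
x_same = md_step(x, np.zeros(3), 0.1)   # zero (sub)gradient: the point stays put
```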

Convergence analysis. We are about to prove the following straightforward modification of


Theorem 5.3.3:

Theorem 5.3.8 In the notation and under assumptions of this section, whatever be nature’s policy, that is, whatever be a sequence {Ft (·, ·) : t ≥ 1} of legitimate deterministic functions, for the induced by this policy trajectory {xt = xt (ξ^{t−1}) : t ≥ 1} of the above algorithm, and for every deterministic sequence {λt > 0 : t ≥ 1} such that λt /γt is nondecreasing in t, it holds for N = 1, 2, ...:

    E_{ξ^N} { (1/ΛN) Σ_{t=1}^N λt gt [Ft (ξ^{t−1}, ·), xt , ξt ] } − min_{x∈X} E_{ξ^N} { (1/ΛN) Σ_{t=1}^N λt gt [Ft (ξ^{t−1}, ·), x, ξt ] }
      ≤ [ (λN/γN)Ω² + σ² Σ_{t=1}^N λt γt ] / (2ΛN).    (5.3.81)

Here, same as in Theorem 5.3.3,

    ΛN = Σ_{t=1}^N λt .

Proof. Repeating word by word the reasoning in the proof of Theorem 5.3.3, we arrive at the relation

    Σ_{t=1}^N λt ⟨Gt [Ft (ξ^{t−1}, ·), xt , ξt ], xt − x⟩ ≤ (1/2) [ (λN/γN)Ω² + Σ_{t=1}^N λt γt ‖Gt [Ft (ξ^{t−1}, ·), xt , ξt ]‖∗² ],

which holds true for every x ∈ X. Taking expectations of both sides and invoking the fact that the ξt are i.i.d. in combination with (5.3.79.b) and the fact that xt is a deterministic function of ξ^{t−1}, we arrive at

    E_{ξ^N} { Σ_{t=1}^N λt ⟨Ft′(ξ^{t−1}, xt ), xt − x⟩ } ≤ (1/2) [ (λN/γN)Ω² + σ² Σ_{t=1}^N λt γt ],    (5.3.82)

where Ft′(ξ^{t−1}, xt ) is a subgradient, taken w.r.t. x, of the function Ft (ξ^{t−1}, x) at x = xt . Since Ft (ξ^{t−1}, x) is convex in x, we have Ft (ξ^{t−1}, xt ) − Ft (ξ^{t−1}, x) ≤ ⟨Ft′(ξ^{t−1}, xt ), xt − x⟩, and therefore (5.3.82) implies that for all x ∈ X it holds

    E_{ξ^N} { Σ_{t=1}^N λt [Ft (ξ^{t−1}, xt ) − Ft (ξ^{t−1}, x)] } ≤ (1/2) [ (λN/γN)Ω² + σ² Σ_{t=1}^N λt γt ],

whence also

    E_{ξ^N} { (1/ΛN) Σ_{t=1}^N λt Ft (ξ^{t−1}, xt ) } − inf_{x∈X} E_{ξ^N} { (1/ΛN) Σ_{t=1}^N λt Ft (ξ^{t−1}, x) } ≤ [ (λN/γN)Ω² + σ² Σ_{t=1}^N λt γt ] / (2ΛN).

Invoking the second representation of the expected regret in (5.3.80), the resulting relation is exactly (5.3.81).    □

Consequences. Let us choose

    γt = Ω/(σ√t),   λt = t^α,

with α ≥ 0. This choice meets the requirements from the description of our algorithm and from the premise of Theorem 5.3.8, and, similarly to what happened in section 5.3.5, results in

    ∀(N ≥ α):
    E_{ξ^N} { (1/ΛN) Σ_{t=1}^N λt Ft (ξ^{t−1}, xt ) } − inf_{x∈X} E_{ξ^N} { (1/ΛN) Σ_{t=1}^N λt Ft (ξ^{t−1}, x) } ≤ O(1) (1 + α) Ωσ/√N,    (5.3.83)

with interpretation completely similar to the interpretation of the bound (5.3.41), see section 5.3.5.

5.3.8.3 Illustration: predicting sequences

To illustrate stochastic regret minimization, consider the Predicting Sequence problem as follows:

    “The nature” selects a sequence {zt : t ≥ 1} with terms zt from the d-element set Zd = {1, ..., d}. We observe these terms one by one, and at every time t want to predict zt via the past observations z1 , ..., zt−1 . Our loss at step t, our prediction being ẑt , is φ(ẑt , zt ), where φ is a given loss function such that φ(z, z) = 0, z = 1, ..., d, and φ(z′, z) > 0 when z′ ≠ z. What we are interested in is to make the average, over the time horizon t = 1, ..., N, loss small.

What was just said is a “high level” imprecise setting. The missing details are as follows. We assume that

1. the nature generates the sequence z1 , z2 , ... in a randomized fashion. Specifically, there exists an i.i.d. sequence of “primitive” random variables ζt , t = 0, 1, 2, ..., observed one by one by the nature, and zt is a deterministic function Zt (ζ^{t−1}) of ζ^{t−1}⁷. We refer to the collection Z∞ = {Zt (·) : t ≥ 1} as to the generation policy.

2. we are allowed for randomized prediction, where the predicted value ẑt of zt is generated at random according to a probability distribution xt – a vector from the probabilistic simplex X = {x ∈ Rᵈ : x ≥ 0, Σ_{z=1}^d xᶻ = 1}; an entry xᶻ_t in xt is the probability for our randomized prediction to take value z, z = 1, ..., d.


We impose on xt the restriction to be a deterministic function of z t−1 : xt = Xt (z t−1 ),
and refer to a collection X ∞ = {Xt (·) : t ≥ 1} of deterministic functions Xt (z t−1 ) taking
values in X as to prediction policy.

Introducing an auxiliary sequence η_0, η_1, ... of random variables, independent of each other and of the ζ_t's and uniformly distributed on [0, 1), we can make our randomized predictions ẑ_t ∼ x_t deterministic functions of x_t and η_t:

    ẑ_t = Z̆(x_t, η_t),

where, for a probabilistic vector x and a real η ∈ [0, 1), Z̆(x, η) is the integer z ∈ Z_d uniquely defined by the relation ∑_{ℓ=1}^{z−1} x^ℓ ≤ η < ∑_{ℓ=1}^{z} x^ℓ (think of how you would sample from a probability distribution x on {1, ..., d}). As a result, setting ξ_t = [ζ_t; η_t] and thinking of the i.i.d. sequence ξ_0, ξ_1, ... as the input “driving” generation and prediction, the respective policies being Z^∞ and X^∞, we see that

• the z_t's are deterministic functions of ξ^{t−1} = [ζ^{t−1}; η^{t−1}], specifically, z_t = Z_t(ζ^{t−1});

• the x_t's are deterministic functions of z^{t−1}, specifically, x_t = X_t(z^{t−1}), and thus are deterministic functions of ξ^{t−1}: x_t = X̃_t(ξ^{t−1});

• the predictions ẑ_t are deterministic functions of (ζ^{t−1}, η_t) and thus of (ξ^{t−1}, η_t): ẑ_t = Ẑ_t(ξ^{t−1}, η_t).
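The map Z̆(x, η) above is exactly inverse-CDF sampling from a discrete distribution. A minimal sketch (the function name is ours, not the text's):

```python
def Z_breve(x, eta):
    """Return the integer z in {1, ..., d} with
    sum_{l < z} x[l] <= eta < sum_{l <= z} x[l],
    i.e. sample from the distribution x using the uniform draw eta in [0, 1)."""
    c = 0.0
    for z, xz in enumerate(x, start=1):
        c += xz
        if eta < c:
            return z
    return len(x)  # guard against floating-point rounding when eta is close to 1
```

With x = (0.2, 0.5, 0.3), draws η below 0.2 yield 1, draws in [0.2, 0.7) yield 2, and the rest yield 3, so ẑ ∼ x indeed.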

Note that the functions Z_t are fully determined by the generation policy, while the functions X̃_t and Ẑ_t stem from the interaction between the policies for generation and for prediction and are fully determined by these two policies.
Given a generation policy Z ∞ = {Zt (·) : t ≥ 1}, a prediction policy X ∞ = {Xt (·) : t ≥ 1}
and a time horizon N , we

• quantify the performance of X^∞ by the quantity

    E_{ζ^N, η^N} { (1/N) ∑_{t=1}^N φ(Ẑ_t(ζ^{t−1}, η_t), Z_t(ζ^{t−1})) },

that is, by the expected prediction error, averaged over the time horizon, as quantified by our loss function φ, and
⁷ As always, given a sequence u_0, u_1, u_2, ..., we denote by u^t = (u_0, u_1, ..., u_t) the initial fragment, of length t + 1, of the sequence. Similarly, for a sequence u_1, u_2, ..., u^t stands for the initial fragment (u_1, ..., u_t), of length t, of the sequence.
442 LECTURE 5. SIMPLE METHODS FOR LARGE-SCALE PROBLEMS

• define the expected N-step regret associated with Z^∞, X^∞ as

    R_N[X^∞ | Z^∞] = E_{ζ^N, η^N} { (1/N) ∑_{t=1}^N φ(Ẑ_t(ζ^{t−1}, η_t), Z_t(ζ^{t−1})) }
                    − inf_{x∈X} E_{ζ^N, η^N} { (1/N) ∑_{t=1}^N φ(Ẑ_{t,x}(ζ^{t−1}, η_t), Z_t(ζ^{t−1})) },    (5.3.84)

where the only yet undefined quantity, Ẑ_{t,x}(ζ^{t−1}, η_t), is the “stationary counterpart” of Ẑ_t(ζ^{t−1}, η_t): Ẑ_{t,x}(ζ^{t−1}, η_t) is the t-th prediction yielded by the interaction between the generation policy Z^∞ and the stationary prediction policy X_x^∞ = {x_t ≡ x : t ≥ 1}.
Thus, the regret is the excess of the average (over time and over the “driving input” ξ_0, ξ_1, ...) prediction error yielded by the prediction policy X^∞, the generation policy being Z^∞, over the similar average error of the best possible, under the circumstances (that is, under the generation policy Z^∞), stationary (x_t ≡ x, t ≥ 1) prediction policy.
Put broadly and vaguely, our goal is to build a prediction policy which, whatever the generation policy, makes the regret small provided N is large.
To achieve our goal, we are about to “translate” our prediction problem to the language of
stochastic regret minimization as presented in section 5.3.8.1. A sketch of the translation is as
follows. Let F be the set of d linear functions
    f^ℓ(x) = ∑_{z=1}^d φ(z, ℓ) x^z : X → R,   ℓ = 1, ..., d;

f^ℓ(x) is just the expected error of the randomized prediction, the distribution of the prediction being x ∈ X, in the case when the true value to be predicted is ℓ. Note that f^ℓ(·) is fully determined by ℓ and “remembers” ℓ (since, by our assumption on the loss function, when x is the ℓ-th basic orth e_ℓ in R^d, we have f^ℓ(e_ℓ) = 0, and f^ℓ(x) > 0 when x ∈ X\{e_ℓ}). Consequently, we lose nothing when assuming that the nature, instead of generating a sequence {z_t : t ≥ 1} of points from Z_d, generates a sequence of functions f_t(·) = f^{z_t}(·) from F, t = 1, 2, ...; with this interpretation, a generation policy Z^∞ becomes what was called nature's policy in section 5.3.8.1: a policy which specifies the t-th selection f_t(·) of the nature as a convex (in fact, linear) function F_t(ξ^{t−1}, ·) : X → R depending on ξ^{t−1} as on a parameter.
Now, at a step t, after the nature shows us zt , we get full information on the function
ft (·) selected by the nature at this step, and thus know the gradient of this affine function;
this gradient can be taken as what in the decision making framework of section 5.3.8.1 was
called Gt . Besides this, after the nature reveals zt , we know which prediction error we have
made, and this error can be taken as what in section 5.3.8.1 was called the toll g_t. A straightforward inspection, which we leave to the reader, shows that the outlined translation converts the regret minimization form of our prediction problem into the problem of stochastic online regret minimization considered in section 5.3.8.1 and ensures the validity of all assumptions of Theorem 5.3.8. As a result, we conclude that
(!) The MD algorithm from section 5.3.8.2 can be straightforwardly applied to the Sequence Prediction problem; with the proper proximal setup (namely, the Simplex one) for the probabilistic simplex X and a properly selected stepsize policy, this algorithm yields a prediction policy X^∞_MD which guarantees the regret bound

    R_N[X^∞_MD | Z^∞] ≤ O(1) max_{z,ℓ} |φ(z, ℓ)| √(ln(d + 1)/N),    (5.3.85)

whatever be N = 1, 2, ... and the sequence generation policy Z^∞.

We see that MD allows us to push the regret to 0 as N → ∞ at an O(1/√N) rate, uniformly in the sequence generation policies of the nature. As a result, when the sequence generating policy used by the nature is “well suited” for stationary prediction, i.e., the nature's policy makes small the comparator

    inf_{x∈X} E_{ζ^N, η^N} { (1/N) ∑_{t=1}^N φ(Ẑ_{t,x}(ζ^{t−1}, η_t), Z_t(ζ^{t−1})) }    (5.3.86)

in (5.3.84), (!) says that the MD-based prediction policy X^∞_MD is good provided N is large. In the examples to follow, we use the simplest possible loss function: φ(z, ℓ) = 0 when z = ℓ and φ(z, ℓ) = 1 otherwise.
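For the 0-1 loss, the observed function f^{z_t} is linear with gradient (1 − [z = z_t])_z, independent of x, so MD with the Simplex (entropic) setup reduces to multiplicative updates of x_t. Here is a minimal sketch, with a constant stepsize in place of the text's stepsize/λ_t schedule and with helper names of our choosing:

```python
import math

def md_predictor(zs, d, gamma=0.1):
    """Run entropic MD (exponentiated gradient) on the simplex for
    sequence prediction under the 0-1 loss.

    At step t the revealed loss function is f(x) = sum_z phi(z, z_t) x^z
    = 1 - x^{z_t}; its gradient has entries g_z = 1 - [z == z_t], and the
    entropic MD step is x <- x * exp(-gamma * g), renormalized.
    Symbols are 1-indexed; returns the final distribution x."""
    x = [1.0 / d] * d
    for zt in zs:
        g = [1.0 - (z + 1 == zt) for z in range(d)]
        x = [xz * math.exp(-gamma * gz) for xz, gz in zip(x, g)]
        s = sum(x)
        x = [xz / s for xz in x]
    return x
```

On a sequence dominated by one symbol, the resulting x concentrates on that symbol's coordinate, mimicking the optimal stationary predictor discussed in Example 1 below.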

Example 1: i.i.d. generation policy. Assume that the nature generates zt by drawing it,
independently across t, from some distribution p (let us call this generation policy an i.i.d. one).
It is immediately seen that due to the i.i.d. nature of z1 , z2 , ..., there is no way to predict zt via
z1 , ..., zt−1 too well, even when we know p in advance; specifically, the smallest possible expected
prediction error is
    ε(p) := 1 − max_{1≤z≤d} p_z,

and the corresponding optimal prediction policy is to predict all the time that the value of z_t is the most probable value z_∗ (the index of the largest entry in p). Were p known to us in advance, we could use this trivial optimal predictor from the very beginning. On the other hand, the optimal predictor is stationary, meaning that with the generation policy in question the comparator is equal to ε(p). As a result, (!) implies that our MD-based prediction policy is “asymptotically optimal” in the i.i.d. case: whatever be the i.i.d. generation policy used by the nature, the average, over the time horizon N, expected prediction error of the MD-based prediction can be worse than the best one can get under the circumstances by at most O(1)√(ln(d + 1)/N). The good news is that when designing the MD-based policy, we did not assume that the generation policy is an i.i.d. one; in fact, we made no assumptions on this policy at all!

Example 2: predicting an individual sequence. Note that any individual sequence z^∞ = {z_t : t ≥ 1} is given by an appropriate generation policy, specifically, a policy where Z_t(ζ^{t−1}) ≡ z_t is independent of ζ^{t−1}. For this policy and with our simple loss function, the comparator (5.3.86), as is immediately seen, is

    ε_N[z^∞] = 1 − max_{1≤z≤d} Card({t ≤ N : z_t = z}) / N,

that is, it is ε(p_N), where p_N is the empirical distribution of z_1, ..., z_N. We see that our MD-based predictor behaves reasonably well on individual sequences which are “nearly stationary,” yielding results like: “if the fraction of entries different from 1 in any initial fragment, of length at least N_0, of z^∞ is at most 0.05, then the fraction of errors in the MD-based prediction of the first N ≥ N_0 entries of z^∞ will be at most 0.05 + O(1)√(ln(d + 1)/N).” Not that much of a claim, but better than nothing!
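The comparator of an individual sequence is just one minus its largest empirical frequency; a one-line check (function name ours):

```python
from collections import Counter

def comparator(zs):
    """epsilon_N[z] = 1 - max_z Card({t <= N : z_t = z}) / N,
    i.e. the error of the best stationary predictor on the sequence zs."""
    return 1.0 - max(Counter(zs).values()) / len(zs)
```

For the alternating sequence 1, 2, 1, 2, ... the comparator is 1/2 – consistent with the closing remark that stationary predictors (and hence the regret benchmark) are weak on such sequences.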


Of course, there exist “easy to predict” individual sequences (like the alternating sequence
1, 2, 1, 2, 1, 2, ...) where the performance of any stationary predictor is poor; in these cases, we
cannot expect much from our MD-based predictor.

5.4 Bundle Mirror and Truncated Bundle Mirror algorithms


In spite of being optimal in terms of Information-Based Complexity Theory in the large-scale case, the Mirror Descent algorithm has a really poor theoretical rate of convergence, and its typical practical behaviour is as follows: the objective rapidly improves at the first few iterations, and then the method “gets stuck,” perhaps still far from the optimal value; to get further improvement, one should wait hours, if not days. One way to improve, to some extent, the practical behaviour of the method is to better utilize the first order information accumulated so far. In this respect, MD itself is poorly organized (cf. comments on SD in the beginning of section 5.2.2): all information accumulated so far is “summarized” in a single entity – the current iterate; we can say that the method is “memoryless.” Computational experience shows that first order methods for nonsmooth convex optimization which do “have memory” – primarily, the so-called bundle algorithms – typically perform much better than memoryless methods like MD. In what follows, we present a bundle version of MD – the Bundle Mirror (BM) and the Truncated Bundle Mirror (TBM) algorithms – for solving the problem

    f_∗ = min_{x∈X} f(x)    (CP)

with a Lipschitz continuous convex function f : X → R. Note that BM is in the same relation to the Bundle Level method from section 5.2.2 as MD is to SD – Bundle Level is just the Euclidean (the one corresponding to the Ball proximal setup) version of BM.
The assumptions on the problem and the setups for these algorithms are exactly the same as
for MD; in particular, X is assumed to be bounded (see Standing Assumption, section 5.3.4.2).

5.4.1 Bundle Mirror algorithm


5.4.1.1 BM: Description
The setup for BM is the usual proximal setup – a norm ‖·‖ on the Euclidean space E = R^n and a continuously differentiable DGF ω(·) : X → R compatible with ‖·‖.

The algorithm is as follows:

1. At the beginning of step t = 1, 2, ..., we have at our disposal the previous iterates x_τ ∈ X, 1 ≤ τ ≤ t, along with subgradients f′(x_τ) of the objective at these iterates, and the current iterate x_t ∈ X (x_1 is an arbitrary point of X). As always, we assume that

       ‖f′(x_τ)‖_∗ ≤ L_{‖·‖}(f),

   where L_{‖·‖}(f) is the Lipschitz constant of f w.r.t. ‖·‖.

2. In course of step t, we

   (a) compute f(x_t), f′(x_t) and build the current model

           f_t(x) = max_{τ≤t} [f(x_τ) + ⟨f′(x_τ), x − x_τ⟩]

       of f, which underestimates the objective and is exact at the points x_1, ..., x_t;

   (b) define the best found so far value of the objective f^t = min_{τ≤t} f(x_τ);

   (c) define the current lower bound f_t on f_∗ by solving the auxiliary problem

           f_t = min_{x∈X} f_t(x).

       The current gap ∆_t = f^t − f_t is an upper bound on the inaccuracy of the best found so far approximate solution;

   (d) compute the current level ℓ_t = f_t + λ∆_t (λ ∈ (0, 1) is a parameter);

   (e) finally, we set

           L_t = {x ∈ X : f_t(x) ≤ ℓ_t},
           x_{t+1} = Prox^{L_t}_{x_t}(0) := argmin_{x∈L_t} [⟨−∇ω(x_t), x⟩ + ω(x)]

and loop to step t + 1.

Note: With the Ball setup, we get

    Prox^{L_t}_{x_t}(0) = argmin_{x∈L_t} [−x_t^T x + ½ x^T x] = argmin_{x∈L_t} ½‖x − x_t‖_2²,

i.e., the method becomes exactly the Euclidean Bundle Level algorithm from section 5.2.2.
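Steps 2(a)-(c) involve only the piecewise-linear model. For a univariate problem, the minimum of the (convex) model over an interval sits at an endpoint or at an intersection of two cuts, so f_t, f^t, ∆_t and ℓ_t can be computed exactly. A toy sketch under these assumptions (helper names ours; not the BM method itself, only its bookkeeping, on f(x) = |x| over X = [−1, 1]):

```python
def model_val(cuts, x):
    """Value at x of the bundle model f_t(x) = max_tau [f(x_tau) + f'(x_tau)(x - x_tau)].
    Each cut is stored as (point, value, subgradient)."""
    return max(f + g * (x - xi) for (xi, f, g) in cuts)

def model_min(cuts, lo, hi):
    """Exact minimum of the convex piecewise-linear model over [lo, hi]:
    the minimizer is an endpoint or a kink, i.e. an intersection of two cuts."""
    cand = [lo, hi]
    for i in range(len(cuts)):
        for j in range(i + 1, len(cuts)):
            xi, fi, gi = cuts[i]
            xj, fj, gj = cuts[j]
            if gi != gj:
                x = ((fj - gj * xj) - (fi - gi * xi)) / (gi - gj)
                if lo <= x <= hi:
                    cand.append(x)
    return min(model_val(cuts, x) for x in cand)

# two cuts of f(x) = |x| taken at x = 0.8 and x = -0.5
cuts = [(0.8, 0.8, 1.0), (-0.5, 0.5, -1.0)]
f_best = min(f for (_, f, _) in cuts)   # best found value f^t
f_low = model_min(cuts, -1.0, 1.0)      # lower bound f_t
gap = f_best - f_low                    # gap Delta_t
level = f_low + 0.5 * gap               # level ell_t with lambda = 0.5
```

Here the two cuts are the lines x and −x, so the model minimum is 0 at x = 0, the best found value is 0.5, and the level is 0.25.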

5.4.1.2 Convergence analysis

Convergence analysis of BM is completely similar to the one of Euclidean Bundle Level algorithm
and is based on the following

Observation 5.4.1 Let J = {s, s + 1, ..., r} be a segment of iterations of BM, that is,

    ∆_r ≥ (1 − λ)∆_s.

Then the cardinality of J can be upper-bounded as

    Card(J) ≤ Ω² L²_{‖·‖}(f) / ((1 − λ)² ∆_r²),    (5.4.1)

where Ω is the ω-diameter of X.

(cf. Main observation in section 5.2.2).


From Observation 5.4.1, exactly as in the case of Euclidean Bundle Level, one derives

Corollary 5.4.1 For every ε, 0 < ε < ∆_1, the number N of steps of the BM algorithm before a gap ≤ ε is obtained (i.e., before an ε-solution to (CP) is found) does not exceed the bound

    N(ε) = Ω² L²_{‖·‖}(f) / (λ(1 − λ)²(2 − λ)ε²).    (5.4.2)

Verification of Observation 5.4.1: By exactly the same reasons as in the case of the Euclidean Bundle Level, observe that

• for t running through the segment of iterations J, the level sets L_t = {x ∈ X : f_t(x) ≤ ℓ_t} have a point in common, namely, v ∈ Argmin_{x∈X} f_r(x);

• when t ∈ J, the distances γ_t = ‖x_t − x_{t+1}‖ are not too small:

    γ_t ≥ (1 − λ)∆_r / L_{‖·‖}(f).    (5.4.3)

Now observe that in our algorithm,

    V_{x_{t+1}}(v) ≤ V_{x_t}(v) − ½γ_t²,   t ∈ J.    (5.4.4)

Indeed, we have

    x_{t+1} = Prox^{L_t}_{x_t}(0),

which, by (5.3.17) as applied with x = x_t, U = L_t, ξ = 0 (which results in x_+ = x_{t+1}), implies that

    V_{x_t}(v) − V_{x_{t+1}}(v) ≥ V_{x_t}(x_{t+1})

(recall that v ∈ L_t due to t ∈ J). The right hand side in the latter inequality is ≥ ½γ_t², and we arrive at (5.4.4).
It remains to note that, by evident reasons, 2V_{x_s}(v) ≤ Ω² and V_{x_{r+1}}(v) ≥ 0, so that the conclusion in Observation 5.4.1 is readily given by (5.4.4) combined with (5.4.3).

5.4.2 Truncated Bundle Mirror


5.4.2.1 TBM: motivation

The main advantage of Bundle Mirror – the typical improvement in practical performance due to better utilization of the information on the problem acquired so far – comes at a price: from iteration to iteration, the piecewise linear model f_t(·) of the objective becomes more and more complex (has more and more linear pieces), implying a growing with t computational cost of solving the auxiliary problems arising at iteration t (those of computing f_t and of updating x_t into x_{t+1}). In the Truncated Bundle Mirror algorithm⁸, the “number of linear pieces” responsible for the computational complexity of the auxiliary problems is under our full control and can be kept below any desired level. As a result, we can, to some extent, trade off progress in accuracy and computational effort.

5.4.2.2 TBM: construction

The setup for the TBM algorithm is exactly the same as for the MD one. As applied to problem
(CP), the TBM algorithm works as follows.
⁸ a.k.a. NERML – Non-Euclidean Restricted Memory Level method

A. The parameters of TBM are

λ ∈ (0, 1), θ ∈ (0, 1).

The algorithm generates a sequence of search points, all belonging to X, where the First Order
oracle is called, and at every step builds the following entities:
1. the best found so far value of f along with the best found so far search point, which is
treated as the current approximate solution built by the method;

2. a (valid) lower bound on the optimal value f∗ of the problem.

B. The execution is split into subsequent phases. Phase s, s = 1, 2, ..., is associated with a prox-center c_s ∈ X and a level ℓ_s ∈ R such that

• when starting the phase, we already know f(c_s), f′(c_s);

• ℓ_s = f_s + λ(f^s − f_s), where

  – f^s is the best value of the objective known at the time when the phase starts;
  – f_s is the lower bound on f_∗ we have at our disposal when the phase starts.

In principle, the prox-center c_1 corresponding to the very first phase can be chosen in X in an arbitrary fashion; we shall discuss the details in the sequel. We start the entire process with computing f, f′ at this prox-center, which results in

    f^1 = f(c_1),

and set

    f_1 = min_{x∈X} [f(c_1) + ⟨f′(c_1), x − c_1⟩],

thus getting the initial lower bound on f_∗.

C. Now let us describe a particular phase s. Let

    ω_s(x) = ω(x) − ⟨ω′(c_s), x − c_s⟩;

note that (5.3.3) implies that

    ω_s(y) ≥ ω_s(x) + ⟨ω_s′(x), y − x⟩ + ½‖y − x‖²   ∀x, y ∈ X,    (5.4.5)
        [ω_s′(x) := ω′(x) − ω′(c_s)]

and that c_s is the minimizer of ω_s(·) on X.


The search points x_t = x_{t,s} of phase s, t = 1, 2, ..., belong to X and are generated according to the following rules:

1. When generating x_t, we already have at our disposal x_{t−1} and a convex compact set X_{t−1} ⊂ X such that

   (o_{t−1}): X_{t−1} ⊂ X is of the same affine dimension as X, meaning that X_{t−1} contains a neighbourhood (in X) of a point from the relative interior of X⁹

⁹ The reader will lose nothing by assuming that X is full-dimensional – possesses a nonempty interior. Then (o_{t−1}) simply means that X_{t−1} possesses a nonempty interior.

   and

   (a_{t−1}): x ∈ X\X_{t−1} ⇒ f(x) > ℓ_s;
   (b_{t−1}): x_{t−1} ∈ argmin_{X_{t−1}} ω_s.    (5.4.6)

   Here x_0 = c_s and, say, X_0 = X, which ensures (o_0) and (5.4.6.a_0-b_0).


2. To update (x_{t−1}, X_{t−1}) into (x_t, X_t), we solve the auxiliary problem

       min_x { g_{t−1}(x) ≡ f(x_{t−1}) + ⟨f′(x_{t−1}), x − x_{t−1}⟩ : x ∈ X_{t−1} }.    (L_{t−1})

   Our subsequent actions depend on the results of this optimization:

   (a) Let f̃ ≤ +∞ be the optimal value in (L_{t−1}) (as always, f̃ = +∞ means that the problem is infeasible). Observe that the quantity

           f̂ = min[f̃, ℓ_s]

       is a lower bound on f_∗: on X\X_{t−1} we have f(x) > ℓ_s by (5.4.6.a_{t−1}), while on X_{t−1} we have f(x) ≥ f̃ due to the inequality f(x) ≥ g_{t−1}(x) given by the convexity of f. Thus, f(x) ≥ min[ℓ_s, f̃] everywhere on X.
       In the case of

           f̂ ≥ ℓ_s − θ(ℓ_s − f_s)    (5.4.7)

       (“significant” progress in the lower bound), we terminate the phase and update the lower bound on f_∗ as

           f_s ↦ f_{s+1} = f̂.

       The prox-center c_{s+1} for the new phase can, in principle, be chosen in X in an arbitrary fashion (see section 5.4.2.4.2).
   (b) When no significant progress in the lower bound is observed, we solve the optimization problem

           min_x { ω_s(x) : x ∈ X_{t−1}, g_{t−1}(x) ≤ ℓ_s }.    (P_{t−1})

       This problem is feasible, and moreover, its feasible set is of the same affine dimension as X. Indeed, since X_{t−1} is of the same affine dimension as X, the only case when the set {x ∈ X_{t−1} : g_{t−1}(x) ≤ ℓ_s} could lack this property is the case of f̃ = min_{x∈X_{t−1}} g_{t−1}(x) ≥ ℓ_s, whence f̂ = ℓ_s, and therefore (5.4.7) takes place, which is not the case.
       When solving (P_{t−1}), we get the optimal solution x_t of this problem and compute f(x_t), f′(x_t). By Fact 5.3.1, for ω_s′(x) := ω′(x) − ω′(c_s) one has

           ⟨ω_s′(x_t), x − x_t⟩ ≥ 0   ∀(x ∈ X_{t−1} : g_{t−1}(x) ≤ ℓ_s).    (5.4.8)
       It is possible that we get “significant” progress in the objective, specifically, that

           f(x_t) − ℓ_s ≤ θ(f^s − ℓ_s).    (5.4.9)

       In this case, we again terminate the phase and set

           f^{s+1} = f(x_t);

       the lower bound f_{s+1} on the optimal value is chosen as the maximum of f_s and of all lower bounds f̂ generated during the phase. The prox-center c_{s+1} for the new phase, same as above, can in principle be chosen in X in an arbitrary fashion.

   (c) When (P_{t−1}) is feasible and (5.4.9) does not take place, we continue the phase s, choosing as X_t an arbitrary convex compact set such that

           X̲_t ≡ {x ∈ X_{t−1} : g_{t−1}(x) ≤ ℓ_s} ⊂ X_t ⊂ X̄_t ≡ {x ∈ X : ⟨x − x_t, ω_s′(x_t)⟩ ≥ 0}.    (5.4.10)

       Note that we are in the case when (P_{t−1}) is feasible and x_t is the optimal solution to the problem; as we have claimed, this implies (5.4.8), so that

           ∅ ≠ X̲_t ⊂ X̄_t,

       and thus (5.4.10) indeed allows us to choose X_t. Since, as we have seen, X̲_t is of the same affine dimension as X, this property is automatically inherited by X_t, as required by (o_t). Besides this, every choice of X_t compatible with (5.4.10) ensures (5.4.6.a_t) and (5.4.6.b_t); the first relation is clearly ensured by the left inclusion in (5.4.10) combined with (5.4.6.a_{t−1}) and the fact that f(x) ≥ g_{t−1}(x), while the second relation (5.4.6.b_t) follows from the right inclusion in (5.4.10) due to the convexity of ω_s(·).

5.4.2.3 Convergence Analysis


Let us define the s-th gap as the quantity

    ε_s = f^s − f_s.

By its origin, the gap is nonnegative and nonincreasing in s; besides this, it clearly is an upper bound on the inaccuracy, in terms of the objective, of the approximate solution z^s we have at our disposal at the beginning of phase s.
The convergence and the complexity properties of the TBM algorithm are given by the following statement.

Theorem 5.4.1 (i) The number N_s of oracle calls at phase s can be bounded from above as follows:

    N_s ≤ 5Ω_s² L²_{‖·‖}(f) / (θ²(1 − λ)²ε_s²),   Ω_s = √(2 max_{y∈X} [ω(y) − ω(c_s) − ⟨ω′(c_s), y − c_s⟩]) ≤ Ω,    (5.4.11)

where Ω is the ω-diameter of X.
(ii) For every ε > 0, the total number of oracle calls before the phase s with ε_s ≤ ε is started (i.e., before an ε-solution to the problem is built) does not exceed

    N(ε) = c(θ, λ) Ω² L²_{‖·‖}(f) / ε²    (5.4.12)

with an appropriate c(θ, λ) depending solely and continuously on θ, λ ∈ (0, 1).
Proof. (i): Assume that phase s did not terminate in the course of N steps. Observe that then

    ‖x_t − x_{t−1}‖ ≥ θ(1 − λ)ε_s / L_{‖·‖}(f),   1 ≤ t ≤ N.    (5.4.13)

Indeed, we have g_{t−1}(x_t) ≤ ℓ_s by construction of x_t, and g_{t−1}(x_{t−1}) = f(x_{t−1}) > ℓ_s + θ(f^s − ℓ_s), since otherwise the phase would be terminated at the step t − 1. It follows that g_{t−1}(x_{t−1}) − g_{t−1}(x_t) > θ(f^s − ℓ_s) = θ(1 − λ)ε_s. Taking into account that g_{t−1}(·), due to its origin, is Lipschitz continuous w.r.t. ‖·‖ with constant L_{‖·‖}(f), (5.4.13) follows.
Now observe that x_{t−1} is the minimizer of ω_s on X_{t−1} by (5.4.6.b_{t−1}), and the latter set, by construction, contains x_t, whence ⟨x_t − x_{t−1}, ω_s′(x_{t−1})⟩ ≥ 0 (see (5.4.8)). Applying (5.4.5), we get

    ω_s(x_t) ≥ ω_s(x_{t−1}) + ½ [θ(1 − λ)ε_s / L_{‖·‖}(f)]²,   t = 1, ..., N,

whence

    ω_s(x_N) − ω_s(x_0) ≥ (N/2) [θ(1 − λ)ε_s / L_{‖·‖}(f)]².

The latter relation, due to the evident inequality max_X ω_s − min_X ω_s ≤ Ω_s²/2 (readily given by the definitions of Ω_s and ω_s), implies that

    N ≤ Ω_s² L²_{‖·‖}(f) / (θ²(1 − λ)² ε_s²).

Recalling the origin of N, we conclude that

    N_s ≤ Ω_s² L²_{‖·‖}(f) / (θ²(1 − λ)² ε_s²) + 1.

All we need in order to get from this inequality the required relation (5.4.11) is to demonstrate that

    Ω_s² L²_{‖·‖}(f) / (θ²(1 − λ)² ε_s²) ≥ 1/4,    (5.4.14)

which is immediate: let R = max_{x∈X} ‖x − c_1‖. We have ε_s ≤ ε_1 = f(c_1) − min_{x∈X}[f(c_1) + (x − c_1)^T f′(c_1)] ≤ R L_{‖·‖}(f), where the concluding inequality is given by the fact that ‖f′(c_1)‖_∗ ≤ L_{‖·‖}(f). On the other hand, by the definition of Ω_s we have Ω_s²/2 ≥ max_{y∈X} [ω(y) − ⟨ω′(c_s), y − c_s⟩ − ω(c_s)] and ω(y) − ⟨ω′(c_s), y − c_s⟩ − ω(c_s) ≥ ½‖y − c_s‖², whence ‖y − c_s‖ ≤ Ω_s for all y ∈ X, and thus R ≤ 2Ω_s, whence ε_s ≤ L_{‖·‖}(f)R ≤ 2L_{‖·‖}(f)Ω_s, as required. (i) is proved.
(ii): Assume that ε_s > ε at the phases s = 1, 2, ..., S, and let us bound from above the total number of oracle calls at these S phases. Observe, first, that two subsequent gaps ε_s, ε_{s+1} are linked by the relation

    ε_{s+1} ≤ γ ε_s,   γ = γ(θ, λ) := max[1 − θλ, 1 − (1 − θ)(1 − λ)] < 1.    (5.4.15)

Indeed, it is possible that the phase s was terminated according to rule 2a; in this case

    ε_{s+1} = f^{s+1} − f_{s+1} ≤ f^s − [ℓ_s − θ(ℓ_s − f_s)] = (1 − θλ)ε_s,

as required in (5.4.15). The only other possibility is that the phase s was terminated when the relation (5.4.9) took place. In this case,

    ε_{s+1} = f^{s+1} − f_{s+1} ≤ f^{s+1} − f_s ≤ ℓ_s + θ(f^s − ℓ_s) − f_s = λε_s + θ(1 − λ)ε_s = (1 − (1 − θ)(1 − λ))ε_s,

and we again arrive at (5.4.15).
From (5.4.15) it follows that ε_s ≥ γ^{s−S} ε, s = 1, ..., S, since ε_S > ε by the origin of S. We now have

    ∑_{s=1}^S N_s ≤ ∑_{s=1}^S 5Ω_s² L²_{‖·‖}(f) / (θ²(1 − λ)² ε_s²) ≤ ∑_{s=1}^S 5Ω² L²_{‖·‖}(f) γ^{2(S−s)} / (θ²(1 − λ)² ε²)
                  ≤ [5Ω² L²_{‖·‖}(f) / (θ²(1 − λ)² ε²)] ∑_{t=0}^∞ γ^{2t} = c(θ, λ) Ω² L²_{‖·‖}(f) / ε²,   c(θ, λ) := 5 / [θ²(1 − λ)²(1 − γ²)],

and (5.4.12) follows. □
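The per-phase contraction factor γ(θ, λ) and the resulting constant c(θ, λ) are elementary to evaluate; a small sketch (function names ours):

```python
def gamma_factor(theta, lam):
    """Gap contraction factor gamma(theta, lambda) from (5.4.15)."""
    return max(1.0 - theta * lam, 1.0 - (1.0 - theta) * (1.0 - lam))

def c_const(theta, lam):
    """Constant c(theta, lambda) in the complexity bound (5.4.12)."""
    g = gamma_factor(theta, lam)
    return 5.0 / (theta**2 * (1.0 - lam)**2 * (1.0 - g**2))
```

For instance, θ = 0.5, λ = 0.95 gives γ = 0.975, so each phase shrinks the gap by at least 2.5%, at the price of a large constant c(θ, λ) of order 10⁵.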



5.4.2.4 Implementation issues


5.4.2.4.1. Solving the auxiliary problems (L_t), (P_t). As far as implementation of the TBM algorithm is concerned, the major issue is how to solve the auxiliary problems (L_t), (P_t) efficiently. Formally, these problems are of the same design dimension as the problem of interest; what do we gain when reducing the solution of a single large-scale problem (CP) to a long series of auxiliary problems of the same dimension? To answer this crucial question, observe first that we have control over the complexity of the domain X_t which, up to a single linear constraint, is the feasible domain of (L_t), (P_t). Indeed, assume that X_{t−1} is a part of X given by a finite list of linear inequalities. Then both sets X̲_t and X̄_t in (5.4.10) are also cut off X by finitely many linear inequalities, so that we may enforce X_t to be cut off X by finitely many linear inequalities as well. Moreover, we have full control of the number of inequalities in the list. For example,

A. Setting all the time X_t = X̄_t, we ensure that X_t is cut off X by a single linear inequality;

B. Setting all the time X_t = X̲_t, we ensure that X_t is cut off X by t linear inequalities (so that the larger t is, the “more complicated” the description of X_t is);

C. We can choose something in-between these extremes. For example, assume that we have chosen a certain m and are ready to work with X_t's cut off X by at most m linear inequalities. In this case, we could use policy B at the initial steps of a phase, until the number of linear inequalities in the description of X_{t−1} reaches the maximal allowed value m. At the step t, we are supposed to choose X_t in-between the two sets

    X̲_t = {x ∈ X : h_1(x) ≤ 0, ..., h_m(x) ≤ 0, h_{m+1}(x) ≤ 0},   X̄_t = {x ∈ X : h_{m+2}(x) ≤ 0},

where

  – the linear inequalities h_1(x) ≤ 0, ..., h_m(x) ≤ 0 cut X_{t−1} off X;
  – the linear inequality h_{m+1}(x) ≤ 0 is, in our old notation, the inequality g_{t−1}(x) ≤ ℓ_s;
  – the linear inequality h_{m+2}(x) ≤ 0 is, in our old notation, the inequality ⟨x − x_t, ω_s′(x_t)⟩ ≥ 0.

Now we can form a list of m linear inequalities as follows:

  • we build m − 1 linear inequalities e_i(x) ≤ 0 by aggregating the m + 1 inequalities h_j(x) ≤ 0, j = 1, ..., m + 1, so that every one of the e-inequalities is a convex combination of the h-ones (the coefficients of these convex combinations can be whatever we want);
  • we set X_t = {x ∈ X : e_i(x) ≤ 0, i = 1, ..., m − 1, h_{m+2}(x) ≤ 0}.

It is immediately seen that with this approach we ensure (5.4.10), on one hand, and that X_t is cut off X by at most m inequalities, on the other. And of course we can proceed in the same fashion.
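The aggregation in policy C is just taking convex combinations of the rows of the linear inequalities: any x satisfying all h_j(x) ≤ 0 automatically satisfies each aggregate. A tiny sketch (names ours; a constraint is stored as (a, b), meaning aᵀx ≤ b):

```python
def aggregate(constraints, weights):
    """Convex combination of linear constraints a_j^T x <= b_j.

    weights must be nonnegative and sum to 1; the aggregate is
    (sum_j w_j a_j)^T x <= sum_j w_j b_j, which is implied by the originals."""
    assert all(w >= 0 for w in weights) and abs(sum(weights) - 1.0) < 1e-12
    n = len(constraints[0][0])
    a = [sum(w * c[0][k] for w, c in zip(weights, constraints)) for k in range(n)]
    b = sum(w * c[1] for w, c in zip(weights, constraints))
    return a, b
```

For instance, averaging x ≤ 1 and −x ≤ 0 with equal weights gives 0·x ≤ 0.5, a (here vacuous) consequence of the originals.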

The bottom line is: we can always ensure that X_{t−1} is cut off X by at most m linear inequalities h_j(x) ≤ 0, j = 1, ..., m, where m ≥ 1 is (any) desirable bound. Consequently, we may assume that the feasible set of (P_{t−1}) is cut off X by m + 1 linear inequalities h_j(x) ≤ 0, j = 1, ..., m + 1. The crucial point is that with this approach we can reduce (L_{t−1}), (P_{t−1}) to convex programs with at most m + 1 decision variables. Indeed, replacing R^n ⊃ X with the affine span of X,

we may assume that X has a nonempty interior. By (o_{t−1}), this is also the case for the feasible domain X_{t−1} of the problem

    Opt = min_x { g_{t−1}(x) : x ∈ X_{t−1} = {x ∈ X : h_j(x) ≤ 0, 1 ≤ j ≤ m} },    (L_{t−1})

implying that the linear constraints h_j(x) ≤ 0 (which we w.l.o.g. assume to be nonconstant) satisfy the Slater condition on int X: there exists x̄ ∈ int X such that h_j(x̄) < 0, j ≤ m. Applying the Convex Programming Duality Theorem (Theorem D.2.3), we get

    Opt = max_{λ≥0} G(λ),   G(λ) = min_{x∈X} [g_{t−1}(x) + ∑_{j=1}^m λ_j h_j(x)].    (D)

Now, assuming that the prox-mapping associated with ω(·) is easy to compute, it is equally easy to minimize an affine function over X (why?), so that G(λ) is easy to compute:

    G(λ) = g_{t−1}(x_λ) + ∑_{j=1}^m λ_j h_j(x_λ),   x_λ ∈ Argmin_{x∈X} [g_{t−1}(x) + ∑_{j=1}^m λ_j h_j(x)].

By construction, G(λ) is a well defined concave function; its supergradient is given by

    G′(λ) = [h_1(x_λ); ...; h_m(x_λ)]

and thus is as readily available as the value G(λ) of G. It follows that we can compute Opt (which is all we need as far as the auxiliary problem (L_{t−1}) is concerned) by solving the m-dimensional convex program (D) by whatever rapidly converging first order method¹⁰.
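When X is the probabilistic simplex, the inner minimum in (D) is of an affine function and is attained at a vertex, so G costs O(mn) to evaluate. A toy sketch with n = 2, m = 1 (data and names ours): minimize g(x) = x₁ over the 2-simplex under h(x) = 0.5 − x₁ ≤ 0, whose optimal value is 0.5. Since the dual here is univariate and concave, a plain ternary search stands in for the first order method the text has in mind:

```python
def G(lam):
    """Dual function G(lam) = min_{x in simplex} [g(x) + lam * h(x)] for
    g(x) = x1, h(x) = 0.5 - x1 on the 2-simplex; the minimum of an affine
    function over the simplex is attained at a vertex."""
    at_e1 = 1.0 + lam * (0.5 - 1.0)   # x = (1, 0)
    at_e2 = 0.0 + lam * (0.5 - 0.0)   # x = (0, 1)
    return min(at_e1, at_e2)

def maximize_concave(f, lo, hi, iters=200):
    """Ternary search for the maximizer of a concave function on [lo, hi]."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2.0

lam_star = maximize_concave(G, 0.0, 10.0)
Opt = G(lam_star)   # recovers min {x1 : x1 >= 0.5, x in simplex} = 0.5
```

Note how duality turns an n-dimensional linearly constrained problem into an m-dimensional unconstrained-up-to-sign one, exactly the dimension reduction the text is after.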
The situation with the second auxiliary problem

    min_x { ω_s(x) : x ∈ X_{t−1}, g_{t−1}(x) ≤ ℓ_s }    (P_{t−1})

is similar. Here again we are interested in solving the problem only in the case when its feasible domain is full-dimensional (recall that we have w.l.o.g. assumed X to be full-dimensional), and the constraints cutting the feasible domain off X are linear, so that we can write (P_{t−1}) down as the problem

    min_x { ω_s(x) : h_j(x) ≤ 0, 1 ≤ j ≤ m + 1 }

with linear constraints h_j(x) ≤ 0, 1 ≤ j ≤ m + 1, satisfying the Slater condition on int X. As before, we can pass to the dual problem

    max_{λ≥0} H(λ),   H(λ) = min_{x∈X} [ω_s(x) + ∑_{j=1}^{m+1} λ_j h_j(x)].    (D_{t−1})

Here again, the concave and well defined dual objective H(λ) can be equipped with a cheap First Order oracle (provided, as above, that the prox-mapping associated with ω(·) is easy to compute), so that we can reduce the auxiliary problem (P_{t−1}) to solving a low-dimensional (provided m is not large) black-box-represented convex program by a first order method. Note that now we want to build a high-accuracy approximation to the optimal solution of the problem of actual interest (P_{t−1}), rather than to approximate well its optimal value, and a high-accuracy solution to the Lagrange dual of a convex program does not always allow one to recover a good solution to the program itself. Fortunately, in our case the objective of the problem of actual interest (P_{t−1}) is strongly convex, and in this case a high-accuracy solution λ to the dual problem (D_{t−1}), the accuracy being measured in terms of the dual objective H, does produce a high-accuracy solution

    x_λ = argmin_{x∈X} [ω_s(x) + ∑_{j=1}^{m+1} λ_j h_j(x)]

to (P_{t−1}), the accuracy being measured in terms of the distance to the precise solution x_t of the latter problem.
¹⁰ e.g., by the Ellipsoid algorithm, provided m is in the range of tens.
The bottom line is that when the “memory depth” m (which is fully controlled by us!) is
not too big and the prox-mapping associated with ω is easy to compute, the auxiliary problems
arising in the TBM algorithm can be solved, at a relatively low computational cost, via Lagrange
duality combined with good high-accuracy first order algorithm for solving black-box-represented
low dimensional convex programs.

5.4.2.4.2. Updating prox-centers. In principle, we can select cs ∈ X in any way we want.


Practical experience says that a reasonable policy is to use as cs the best, in terms of the values
of f , search point generated so far.

5.4.2.4.3. Accumulating information. The set X_t summarizes, in a sense, all the information on f we have accumulated so far and intend to use in the sequel. Relation (5.4.10) allows for a tradeoff between the quality (and the volume) of this information and the computational effort required to solve the auxiliary problems (L_{t−1}), (P_{t−1}). With no restrictions on this effort, the most promising policy for updating the X_t's would be to set X_t = X̲_t (“collecting information with no compression of it”). With this policy the TBM algorithm with the Ball setup is basically identical to the Bundle-Level Algorithm of Lemarechal, Nemirovski and Nesterov [34] presented in section 5.2.2; the “truncated memory” version of the latter method (that is, the generic TBM algorithm with Ball setup) was proposed by Kiwiel [32]. Aside from theoretical complexity bounds similar to (5.3.34), most bundle methods (in particular, the Prox-Level one) share the following experimentally observed property: the practical performance of the algorithm is in full accordance with the complexity bound (5.1.2): every n steps reduce the inaccuracy by at least an absolute constant factor (something like 3). This property is very attractive in moderate dimensions, where we indeed are capable of carrying out a number of steps several times the dimension.

5.4.2.5 Illustration: PET Image Reconstruction by MD and TBM


To get an impression of the practical performance of the TBM method, let us look at numerical
results related to the 2D version of the PET Image Reconstruction problem.

The model. We process simulated measurements as if they were registered by a ring of 360 detectors, the inner radius of the ring being 1 (Fig. 5.2). The field of view is a concentric circle of radius 0.9, covered by a 129×129 rectangular grid. The grid partitions the field of view into 10,471 pixels, and we act as if the tracer's density were constant within every pixel. Thus, the design dimension of the problem (PET0) we are interested in solving is "just" n = 10471.
454 LECTURE 5. SIMPLE METHODS FOR LARGE-SCALE PROBLEMS

Figure 5.2: Ring with 360 detectors, field of view and a line of response

The number of bins (i.e., number m of log-terms in the objective of (PET0 )) is 39784, while
the number of nonzeros among qij is 3,746,832.
The true image is “black and white” (the density in every pixel is either 1 or 0). The
measurement time (which is responsible for the level of noise in the measurements) is mimicked
as follows: we model the measurements according to the Poisson model as if during the period
of measurements the expected number of positrons emitted by a single pixel with unit density
was a given number M .
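In code, this measurement model is a one-liner on top of a projection matrix. The sketch below (numpy assumed; the sizes, the random matrix standing in for the coefficients q_ij, and the density threshold are hypothetical toy choices, not the actual PET geometry) simulates noisy bin counts for a "black and white" image:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in sizes (the experiment in the text has n = 10471 pixels
# and 39784 bins; the geometry-defined coefficients q_ij are sparse).
n_pixels, n_bins = 50, 200
M = 1000.0                       # expected emissions of a unit-density pixel

# Hypothetical nonnegative "projection" coefficients q_ij with ~10% nonzeros.
Q = rng.random((n_bins, n_pixels)) * (rng.random((n_bins, n_pixels)) < 0.1)

# "Black and white" true image: density 0 or 1 in every pixel.
lam_true = (rng.random(n_pixels) < 0.3).astype(float)

# Poisson measurements: y_i ~ Poisson(M * sum_j q_ij * lam_j).
y = rng.poisson(M * (Q @ lam_true))
```

The noiseless measurements of Experiment 1 below correspond to replacing the Poisson draw by its expectation.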

The algorithm we are using to solve (PET0 ) is the plain TBM method with the simplex setup
and the sets Xt cut off X = ∆n by just one linear inequality:

Xt = {x ∈ ∆n : (x − xt )T ∇ωs (xt ) ≥ 0}.

The parameters λ, θ of the algorithm were chosen as

λ = 0.95, θ = 0.5.

The approximate solution reported by the algorithm at a step is the best search point found so far (the one with the best value of the objective seen up to the moment).
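For the simplex setup used here, the prox-mapping associated with the entropy DGF ω(x) = Σ_i x_i ln x_i admits a closed-form multiplicative update over the full simplex ∆n. The sketch below (numpy assumed; it deliberately ignores the additional linear cut defining Xt, whose presence makes the true prox-mapping slightly harder) illustrates one such step:

```python
import numpy as np

def entropy_prox(x, xi):
    """One prox-step Prox_x(xi) = argmin_{w in Delta_n} [<xi, w> + V_x(w)]
    for the entropy DGF omega(w) = sum_i w_i ln w_i: a closed-form
    multiplicative update w_i proportional to x_i * exp(-xi_i)."""
    z = np.log(x) - xi
    z -= z.max()                 # guard the exponentials against overflow
    w = np.exp(z)
    return w / w.sum()

x = np.full(4, 0.25)                  # prox-center: barycenter of the simplex
g = np.array([1.0, 0.0, -1.0, 0.5])   # a (sub)gradient direction
x_new = entropy_prox(x, 0.5 * g)      # stepsize 0.5 is an arbitrary choice
```

Coordinates with smaller components of the step direction receive exponentially larger weight, which is exactly the multiplicative-update behaviour of the entropy setup.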

The results of two sample runs we are about to present are not that bad.

Experiment 1: Noiseless measurements. The evolution of the best, in terms of the objective, solutions x^t found in the course of the first t calls to the oracle is displayed on Fig. 5.3 (on the pictures, brighter areas correspond to higher density of the tracer). The numbers are as follows. With noiseless measurements, we know the optimal value in (PET0) in advance: it is easily seen that without noise, the true image (which in our simulated experiment we do know) is an optimal solution. In our problem, this optimal value equals 2.8167; the best value of the objective found in 111 oracle calls is 2.8171 (optimality gap 4.e-4). The progress in accuracy is plotted on Fig. 5.4. We built a total of 111 search points, and the entire computation took 18′51′′ on a 350 MHz Pentium II laptop with 96 MB RAM^11.

Panels of Fig. 5.3 (with the objective value f of each displayed iterate):
True image: 10 "hot spots", f = 2.817 | x1 = n^{-1}(1, ..., 1)^T, f = 3.247 | x2 – some traces of 8 spots, f = 3.185
x3 – traces of 8 spots, f = 3.126 | x5 – some trace of 9-th spot, f = 3.016 | x8 – 10-th spot still missing..., f = 2.869
x24 – trace of 10-th spot, f = 2.828 | x27 – all 10 spots in place, f = 2.823 | x31 – that is it..., f = 2.818

Figure 5.3: Reconstruction from noiseless measurements



[Semi-log plot, values from 10^0 down to 10^{-4}, vs. step number t = 0, ..., 120]
solid line: relative gap Gap(t)/Gap(1) vs. step number t; Gap(t) is the difference between the best found so far value f(x^t) of f and the current lower bound on f_*. In 111 steps, the gap was reduced by a factor > 1600.
dashed line: progress in accuracy (f(x^t) − f_*)/(f(x^1) − f_*) vs. step number t. In 111 steps, the accuracy was improved by a factor > 1080.

Figure 5.4: Progress in accuracy, noiseless measurements

Experiment 2: Noisy measurements (40 LOR's per pixel with unit density; in total 63,092 LOR's registered). The pictures are presented at Fig. 5.5. Here are the numbers. With noisy measurements, we have no a priori knowledge of the true optimal value in (PET0); in simulated experiments, a kind of orientation is given by the value of the objective at the true image (hopefully, an upper bound on f∗ which is close to f∗). In our experiment, this bound equals −0.8827. The best value of the objective found in 115 oracle calls is −0.8976 (which is less than the objective at the true image; in fact, the algorithm went below the value of f at the true image already after 35 oracle calls). The upper bound on the optimality gap at termination is 9.7e-4. The progress in accuracy is plotted on Fig. 5.6. We built a total of 115 search points; the entire computation took 20′41′′.

3D PET. As it happened, the TBM algorithm was not tested on actual clinical 3D PET data. However, in the late 1990s we participated in an EU project on 3D PET Image Reconstruction; in this project, among other things, we did apply to actual 3D clinical data an algorithm which, up to minor details, is pretty close to MD with the Entropy setup as applied to convex minimization over the standard simplex (details can be found in [11]). In the sequel, with slight abuse of terminology, we refer to this algorithm as MD, and present some experimental data on its practical performance in order to demonstrate that simple optimization techniques indeed have chances when applied to really huge convex programs. We restrict ourselves to a single numerical example – a real clinical brain study carried out on a very powerful PET scanner. In the corresponding problem (PET0), there are n = 2,763,635 design variables (this is the number of voxels in the field of view) and about 25,000,000 log-terms in the objective. The
^11 Do not be surprised by the obsolete hardware – the computations were carried out in 2002.

Panels of Fig. 5.5 (with the objective value f of each displayed iterate):
True image: 10 "hot spots", f = −0.883 | x1 = n^{-1}(1, ..., 1)^T, f = −0.452 | x2 – light traces of 5 spots, f = −0.520
x3 – traces of 8 spots, f = −0.585 | x5 – 8 spots in place, f = −0.707 | x8 – 10th spot still missing..., f = −0.865
x12 – all 10 spots in place, f = −0.872 | x35 – all 10 spots in place, f = −0.886 | x43 – ..., f = −0.896

Figure 5.5: Reconstruction from noisy measurements



[Semi-log plot, values from 10^0 down to 10^{-4}, vs. step number t = 0, ..., 120]
solid line: relative gap Gap(t)/Gap(1) vs. step number t. In 115 steps, the gap was reduced by a factor 1580.
dashed line: progress in accuracy (f(x^t) − f̄)/(f(x^1) − f̄) vs. step number t, where f̄ is the last lower bound on f∗ built in the run. In 115 steps, the accuracy was improved by a factor > 460.

Figure 5.6: Progress in accuracy, noisy measurements

step #    MD       OSMD      step #    MD       OSMD
  1     -1.463    -1.463       6     -1.987    -2.015
  2     -1.725    -1.848       7     -1.997    -2.016
  3     -1.867    -2.001       8     -2.008    -2.016
  4     -1.951    -2.015       9     -2.008    -2.016
  5     -1.987    -2.015      10     -2.009    -2.016

Table 5.1: Performance of MD and OSMD in the Brain study

reconstruction was carried out on the INTEL Marlinespike Windows NT Workstation (500 MHz 1Mb Cache INTEL Pentium III Xeon processor, 2GB RAM; this hardware, completely obsolete now, was state-of-the-art in the late 1990s). A single call to the First Order oracle (a single computation of the value and a subgradient of f) took about 90 min. Pictures of clinically acceptable quality were obtained after just four calls to the oracle (as was the case with other sets of PET data); for research purposes, we carried out 6 additional steps of the algorithm.
The pictures presented on Fig. 5.7 are slices – 2D cross-sections – of the reconstructed 3D image. The two series of pictures shown on Fig. 5.7 correspond to two different versions of the method, the plain and the "ordered subsets" one (MD and OSMD, respectively), see [11] for details. The relevant numbers are presented in Table 5.1. The best known lower bound on the optimal value in the problem is −2.050; MD and OSMD decrease in 10 oracle calls the objective from its initial value −1.436 to the values −2.009 and −2.016, respectively (optimality gaps 4.e-2 and 3.5e-2), and reduce the initial inaccuracy in terms of the objective by factors 15.3 (MD) and 17.5 (OSMD).

Figure 5.7: Brain, near-mid slice of the reconstructions. The top-left missing part is the area affected by Alzheimer's disease.

5.4.2.6 Alternative: PET via Krylov subspace minimization


The short story we are about to tell is not about First Order algorithms, and not even about large-scale nonsmooth convex optimization; it is about what can be gained from nontrivial modeling. Consider the PET Image Reconstruction problem (PET), and imagine for a moment that there is no observation noise, so that our observation is

y = P λ,

where P is a given m×n matrix; assume that m ≫ n (there are many more bins than voxels) and that P is of rank n (these "size and nondegeneracy" assumptions indeed hold true in PET practice).
In this ideal case, image reconstruction reduces to solving the system of linear equations

P^T P λ = P^T y (5.4.16)

with a nonsingular matrix, and in principle we could solve it by whatever algorithm of Linear Algebra. In particular, if we do not want to assemble and invert the large matrix P^T P, we could use Conjugate Gradients (CG) – an algorithm described in every Linear Algebra textbook. The
only thing which matters for us now is the "high level" description of the algorithm, which states that when started at the origin, the k-th approximate solution λk generated by CG is the minimizer of the convex quadratic form

f(λ) = (1/2) λ^T [P^T P] λ − λ^T P^T y

on the k-th Krylov subspace of the pair (P^T P, P^T y), that is, the linear span

E_k = E_k(P, y) := Lin{P^T y, [P^T P] P^T y, [P^T P]^2 P^T y, ..., [P^T P]^{k−1} P^T y}

of the first k vectors in the sequence {e_s = [P^T P]^{s−1} P^T y ∈ R^n, s = 1, 2, ...}. There are basically complete results on the non-asymptotic rate of convergence of CG, expressed in terms of the singular spectrum of P and the norm in which the inaccuracy is measured (usually a norm of the form ‖x‖ = ‖[P^T P]^α x‖_2), but this is not our point here. The point is that the Krylov subspaces make sense also in the case when y is a noisy observation of P λ rather than P λ exactly, and one can take, as an approximate solution to (PET), the minimizer over this subspace of whatever "meaningful" convex function, not only the above quadratic form. For example, in (PET) we minimize minus the log-likelihood of the observed y over all nonnegative λ; why not restrict this minimization to nonnegative λ from the Krylov subspace E_k(P, y), with somehow selected k? Theoretical justification of this approach, its convergence rates, etc., are completely beyond our scope here; all we want is to outline the approach and to illustrate it numerically.
Observe that when k is moderate, the outlined approach is extremely simple computationally — all we need is to build generators g1, ..., gk of E_k(P, y), which requires k + 1 multiplications of vectors by P^T and k multiplications of vectors by P; restricting the candidate solution to be a linear combination of the generators, we end up with a convex problem of small (provided k is so) design dimension. Thus, one thing is for sure – the outlined approach is extremely cheap computationally, provided k is not large. We are about to see how it works numerically. In the "proof of concept" experiments to be reported, we dealt with 2D PET Image Reconstruction with "just" n = 5,664 pixels and m = 18,336 bins; these sizes, toy ones for PET, were selected to allow (PET) "as is" to be solved in moderate time by a commercial Interior Point solver, in order to have a benchmark for our results. Along with these Maximum Likelihood recoveries, we built

• Krylov Least Squares recoveries yielded by k iterations of CG as applied to the linear system

[P^T P] λ = P^T y

with unknown λ;

• Krylov Maximum Likelihood recoveries obtained by solving with the Ellipsoid method (!) the k-dimensional convex problem

min_{µ∈R^k} { Σ_{j=1}^n [λ(µ)]_j p_j − Σ_{i=1}^m y_i ln(Σ_{j=1}^n p_{ij} [λ(µ)]_j) : λ(µ) ≥ 0, ‖µ‖∞ ≤ M }
[P = [p_{ij}]_{i≤m, j≤n}, p_j = Σ_i p_{ij}, λ(µ) = Σ_{ℓ=1}^k µ_ℓ g_ℓ]

(cf. (PET)); here M < ∞ makes the feasible set bounded (and thus the Ellipsoid algorithm applicable) and should be selected large enough not to affect the results of optimization.
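Computationally, the whole construction is a handful of matrix-vector products. The sketch below (numpy assumed; the sizes and the dense random P are toy stand-ins, and the generators are orthogonalized for numerical stability, which does not change their linear span) builds the subspace E_k(P, y) and the Krylov Least Squares recovery:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 300, 120, 10          # toy sizes; the text has m = 18336, n = 5664, k = 10

P = rng.standard_normal((m, n))
lam_true = np.abs(rng.standard_normal(n))
y = P @ lam_true + 0.01 * rng.standard_normal(m)   # noisy observation of P*lam

# Generators of E_k(P, y): g_s = (P^T P)^{s-1} P^T y, built with repeated
# multiplications by P and P^T (the n x n matrix P^T P is never formed).
g = P.T @ y
basis = [g / np.linalg.norm(g)]
for _ in range(k - 1):
    g = P.T @ (P @ basis[-1])
    for q in basis:             # re-orthogonalize: plain powers align fast
        g -= (q @ g) * q
    basis.append(g / np.linalg.norm(g))
G = np.stack(basis, axis=1)     # n x k matrix spanning E_k(P, y)

# Krylov Least Squares recovery: minimize ||P G mu - y||_2 over mu in R^k.
mu, *_ = np.linalg.lstsq(P @ G, y, rcond=None)
lam_hat = G @ mu
```

Restricting the (PET)-type Maximum Likelihood objective to {Gµ : µ ∈ R^k, Gµ ≥ 0} in the same way yields the Krylov Maximum Likelihood recovery of the second bullet.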

In our experiments, we used k as small as 10. Typical results of our experiments are illustrated on Fig. 5.8. In these experiments, the true tracer's density λ ≥ 0 was normalized to have Σ_j λ_j ≤ 1, and the matrix P was parameterized by the observation time T according to P = T P̄, where P̄ corresponds to the case when the expected total number of lines of response emitted by a unit amount of tracer in unit time equals 1; thus, observation time T corresponds to an expected total of T registered lines of response per unit amount of tracer.
On Fig. 5.8, we present the values of the recovery errors

ε1 = (1/n) ‖λ − λ̂‖1,  ε∞ = ‖λ − λ̂‖∞,

where λ is the true tracer's density and λ̂ is the recovered density. Surprisingly, the computationally cheap Krylov recoveries — even the approximate Least Squares one obtained via just 10 steps of CG! — are no worse, and even better, than the much more computationally expensive exact Maximum Likelihood recovery. We do not know whether the same phenomenon takes place in 3D PET recovery of practical, and not toy, sizes. However, the outlined approach seems to be worth a try in real life applications.
A reader could ask: how does it happen that the high accuracy Maximum Likelihood recovery is, in terms of its closeness to the true image as quantified by ε1 and ε∞, not much better, and except for one experiment ("MRIScan", T = 10^8) even worse, than its "heavily truncated" Krylov version? Seemingly, the reason is that the problem we are solving is rather ill-posed — the maximal singular value of P is two orders of magnitude (specifically, by a factor ≈ 88.2) larger than its smallest singular value. As a result, the "full" Least Squares and Maximum Likelihood recoveries are heavily affected by observation noise. While this phenomenon, typical for ill-posed problems, cannot be completely overcome^12, there are various ways to reduce it by appropriate regularization of the problem in question. One of these ways, well studied in the case of Least Squares recovery, is "regularization by early termination" of the optimization algorithm (say, CG) as applied to the minimization problem at hand. Considering this subject in detail is beyond our scope; we, however, do believe that the advantages, demonstrated by the above experiments, of the "truncated" Krylov recoveries as compared to the "full Maximum Likelihood" one stem from the implicit regularization of the ill-posed problem at hand induced by the truncation. To support this guess, we present on Fig. 5.9 the results of an additional experiment where the number n of pixels is larger, and the ratio m/n smaller, than in the experiments just reported, resulting in worse than before conditioning
^12 We strongly believe that it is the source of the "grainy structure" of the concluding images on Fig. 5.5.

Figure 5.8: PET recovery via Maximum Likelihood and Krylov subspaces, n = 5,664 pixels, m = 18,336 bins. For both images ("MRIScan" and "Lenna"), the top, middle, and bottom rows correspond to Krylov Least Squares, Maximum Likelihood, and Krylov Maximum Likelihood recoveries, respectively. CPU is in seconds. Panel data:
"MRIScan":
Krylov Least Squares: T=10^6: ε1=9.8e−5, ε∞=4.8e−4, CPU=0.6 | T=10^7: ε1=2.7e−5, ε∞=1.4e−4, CPU=0.6 | T=10^8: ε1=1.2e−5, ε∞=9.1e−5, CPU=0.6
Maximum Likelihood: T=10^6: ε1=9.6e−5, ε∞=5.8e−4, CPU=97.2 | T=10^7: ε1=3.4e−5, ε∞=2.2e−4, CPU=95.6 | T=10^8: ε1=1.2e−5, ε∞=6.7e−5, CPU=100.9
Krylov Maximum Likelihood: T=10^6: ε1=4.8e−5, ε∞=2.8e−5, CPU=3.4 | T=10^7: ε1=2.7e−5, ε∞=2.6e−4, CPU=3.8 | T=10^8: ε1=1.3e−5, ε∞=1.3e−4, CPU=3.8
"Lenna":
Krylov Least Squares: T=10^6: ε1=1.0e−4, ε∞=5.5e−4, CPU=0.95 | T=10^7: ε1=2.8e−5, ε∞=1.4e−4, CPU=0.7 | T=10^8: ε1=8.7e−6, ε∞=4.5e−5, CPU=0.6
Maximum Likelihood: T=10^6: ε1=1.0e−4, ε∞=5.7e−4, CPU=45.8 | T=10^7: ε1=3.9e−5, ε∞=1.9e−4, CPU=43.7 | T=10^8: ε1=1.3e−5, ε∞=6.6e−5, CPU=40.0
Krylov Maximum Likelihood: T=10^6: ε1=7.7e−5, ε∞=4.2e−4, CPU=4.0 | T=10^7: ε1=2.6e−5, ε∞=1.3e−4, CPU=4.1 | T=10^8: ε1=8.8e−6, ε∞=4.6e−5, CPU=4.3

Figure 5.9: PET recovery via Maximum Likelihood and Krylov subspaces, n = 10,000 pixels, m = 19,900 bins. Top, middle, and bottom rows correspond to Krylov Least Squares, Maximum Likelihood, and Krylov Maximum Likelihood recoveries, respectively. CPU is in seconds. Panel data ("MRIScan"):
Krylov Least Squares: T=10^6: ε1=7.3e−5, ε∞=4.7e−4, CPU=0.8 | T=10^7: ε1=1.9e−5, ε∞=1.1e−4, CPU=0.8 | T=10^8: ε1=8.8e−6, ε∞=7.9e−5, CPU=0.6
Maximum Likelihood: T=10^6: ε1=8.4e−5, ε∞=7.1e−4, CPU=454.0 | T=10^7: ε1=4.8e−5, ε∞=3.3e−4, CPU=429.0 | T=10^8: ε1=3.3e−5, ε∞=2.7e−4, CPU=478.3
Krylov Maximum Likelihood: T=10^6: ε1=2.5e−5, ε∞=2.2e−4, CPU=4.9 | T=10^7: ε1=1.9e−5, ε∞=1.4e−4, CPU=4.1 | T=10^8: ε1=9.5e−6, ε∞=1.1e−4, CPU=3.9

of P; the "truncation," same as above, is k = 10. In this experiment the phenomenon we are discussing is essentially more profound than in the preceding experiments: now the recovery errors of the Krylov recoveries are significantly smaller than those of the full Maximum Likelihood recovery (e.g., Krylov recoveries with T = 10^7 are more accurate than the full Maximum Likelihood recovery with T = 10^8), as is clearly seen from the pictures, and not from the numbers only. Note that the disadvantage of the full Maximum Likelihood recovery in terms of quality is compounded by its unpleasant computational cost, which now is two orders of magnitude larger than the cost of the Krylov recoveries.
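The "regularization by early termination" effect is easy to reproduce on synthetic data. The sketch below (numpy assumed; the singular spectrum, noise level, and smooth-signal model are hypothetical choices engineered to make the problem noticeably ill-posed) compares the full Least Squares recovery with 10 CG iterations on the normal equations:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 200, 100

# Ill-conditioned P: singular values spread over three orders of magnitude.
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -3, n)
P = U @ (s[:, None] * V.T)

# A "smooth" signal (energy concentrated on the leading singular directions)
# and a noisy observation of it.
lam = V @ (s * rng.random(n))
y = P @ lam + 5e-2 * rng.standard_normal(m)

# Full least squares: the noise is amplified by up to 1/s_min = 1000.
lam_full = np.linalg.lstsq(P, y, rcond=None)[0]

def cg_normal_eq(P, y, iters):
    """Plain CG on P^T P x = P^T y, stopped after `iters` iterations."""
    x = np.zeros(P.shape[1])
    r = P.T @ y                 # residual of the normal equations at x = 0
    p = r.copy()
    for _ in range(iters):
        Ap = P.T @ (P @ p)
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

lam_cg10 = cg_normal_eq(P, y, 10)
err_full = np.linalg.norm(lam_full - lam)
err_cg10 = np.linalg.norm(lam_cg10 - lam)
```

In runs of this kind the early-terminated iterate is typically much closer to the true signal than the full recovery — exactly the semiconvergence phenomenon behind the truncated Krylov recoveries above.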

5.5 Saddle Point representations and Mirror Prox algorithm


5.5.1 Motivation
Let us revisit the reasons for our interest in First Order methods. This interest stemmed from the fact that when solving large-scale convex programs, applying polynomial time algorithms becomes questionable: unless the matrices of the Newton-type linear systems to be solved at iterations of an IPM possess a "favourable sparsity pattern," a single IPM iteration in the large-scale case becomes too time-consuming to be practical. Thus, we were looking for methods with cheap iterations, and stated that in the case of constrained nonsmooth problems (this is what we deal

with in most applications), the only "cheap" methods we know are First Order algorithms. We then observed that, as a matter of fact, all these algorithms are black-box-oriented, and thus obey the limits of performance established by Information-based Complexity Theory, and focused on algorithms which achieve these limits of performance. The bad news here is that the "limits of performance" are quite poor; the best we can get is complexity like N(ε) = O(1/ε²). When one tries to understand why this indeed is the best we can get, it becomes clear that the reason is the "blindness" of the black-box-oriented algorithms we use: they utilize only local information on the problem (values and subgradients of the objective) and make no use of a priori knowledge of the problem's structure. Here it should be stressed that as far as Convex Programming is concerned, we nearly always do have detailed knowledge of the problem's structure – otherwise, how could we know that the problem is convex in the first place? The First Order methods as presented so far just do not know how to utilize the problem's structure, in sharp contrast with IPM's, which are fully "structure-oriented." Indeed, it is the structure of an optimization problem which underlies its setting as an LP/CQP/SDP program, and IPM's, same as the Simplex method, are quite far from operating with the values and subgradients of the objective and the constraints; instead, they operate directly on the problem's data. We can say that the LP/CQP/SDP framework is a "structure-revealing" one, and it allows for algorithms with fast convergence, algorithms capable of building a high accuracy solution in a moderate number of iterations. In contrast to this, the black-box-oriented framework is "structure-obscuring," and the associated algorithms, at least in the large-scale case, possess a really slow rate of convergence. The bad news here is that the LP/CQP/SDP framework requires iterations which in many applications are prohibitively time-consuming in the large-scale case.
Now, for a long time the LP/CQP/SDP framework was the only one which allowed for utilizing the problem's structure. The situation changed significantly circa 2003, when Yu. Nesterov discovered a "computationally cheap" way to do so. The starting point of his discovery was the well known fact that the disastrous complexity N(ε) ≥ O(1/ε²) in large-scale nonsmooth convex optimization comes from nonsmoothness: passing from nonsmooth problems (minimizing a convex Lipschitz continuous objective over a convex compact domain) to their smooth counterparts, where the objective, still a black-box-represented one, has a Lipschitz continuous gradient, reduces the complexity bound to O(1/√ε), and for smooth problems with favourable geometry this reduced complexity turns out to be nearly- or fully dimension-independent, see section 5.8.1. Now, the "smooth" complexity O(1/√ε), while still not a polynomial-time one, is a dramatic improvement as compared to its "nonsmooth" counterpart O(1/ε²). All this was known for a couple of decades and was of nearly no practical use, since problems of minimizing a smooth convex objective over a simple convex set (the latter should be simple in order to allow for computationally cheap algorithms) are a "rare commodity", almost never met in applications. The major part of Nesterov's 2003 breakthrough is the observation, surprisingly simple in hindsight^13, that
In convex optimization, the typical source of nonsmoothness is taking maxima, and usually a "heavily nonsmooth" convex objective f(x) can be represented as

f(x) = max_{y∈Y} φ(x, y)

^13 Let me note that the number of actual breakthroughs which are "trivial in retrospect" is much larger than one could expect. For example, George Dantzig, the founder of Mathematical Programming, considered as one of his major contributions (and I believe, rightly so) the very formulation of an LP problem, specifically, the idea of adding to the constraints an objective to be optimized, instead of making decisions via "ground rules," which was standard at the time.

with a pretty simple (quite often, just bilinear) convex-concave function φ and a simple convex compact set Y.

As an immediate example, look at the function f(x) = max_{1≤i≤m} [a_i^T x + b_i]^14; this function can be represented as f(x) = max_{y∈∆_m} φ(x, y), φ(x, y) = Σ_{i=1}^m y_i [a_i^T x + b_i], so that both φ and Y are as simple as they could be...

As a result, the nonsmooth problem

min_{x∈X} f(x) (CP)

can be reformulated as the saddle point problem

min_{x∈X} max_{y∈Y} φ(x, y) (SP)

with a pretty smooth (usually just bilinear) convex-concave cost function φ.

Now, the saddle point reformulation (SP) of the problem of interest (CP) can be used in different ways. Nesterov himself used it to smoothen the function f. Specifically, he assumes (see [47]) that φ is of the form

φ(x, y) = x^T (Ay + a) − ψ(y)

with convex ψ, and observes that then the function

f_δ(x) = max_{y∈Y} [φ(x, y) − δ d(y)],

where the "regularizer" d(·) is strongly convex, is smooth (possesses a Lipschitz continuous gradient) and is close to f when δ is small. He then minimizes the smooth function f_δ by his optimal method for smooth convex minimization – the one with complexity O(1/√ε). Since the Lipschitz constant of the gradient of f_δ deteriorates as δ → +0, on one hand, and we need to work with small δ, of the order of the accuracy ε to which we want to solve (CP), on the other hand, the overall complexity of the outlined smoothing method is O(1/ε), which still is much better than the "black-box nonsmooth" complexity O(1/ε²) and in fact is, in many cases, as good as it could be.
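For the simplest instance f(x) = max_i [a_i^T x + b_i] with Y = ∆_m, taking the entropy d(y) = ln m + Σ_i y_i ln y_i as the regularizer makes f_δ available in closed form (a "soft max"). The sketch below (numpy assumed; the data are random toy choices) checks the uniform approximation bound f − δ ln m ≤ f_δ ≤ f:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 20, 5
A = rng.standard_normal((m, n))        # rows are the a_i^T
b = rng.standard_normal(m)

def f(x):                              # nonsmooth: max_i [a_i^T x + b_i]
    return np.max(A @ x + b)

def f_delta(x, delta):
    """Nesterov smoothing with d(y) = ln m + sum_i y_i ln y_i on the simplex:
    f_delta(x) = delta * ln( (1/m) * sum_i exp((a_i^T x + b_i)/delta) )."""
    u = (A @ x + b) / delta
    u_max = u.max()                    # stabilized log-sum-exp
    return delta * (u_max + np.log(np.exp(u - u_max).sum()) - np.log(m))

x = rng.standard_normal(n)
gap = f(x) - f_delta(x, 0.01)          # lies in [0, 0.01 * ln m]
```

Shrinking δ tightens the approximation while the Lipschitz constant of ∇f_δ (of order 1/δ) deteriorates — the tradeoff behind the O(1/ε) overall complexity.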
For example, consider the Least Squares problem

Opt = min_{x∈R^n: ‖x‖_2≤R} ‖Bx − b‖_2, (LS)

where B is an n × n matrix of spectral norm ‖B‖ not exceeding L. This problem can be easily reformulated in the form of (SP), specifically, as

min_{‖x‖_2≤R} max_{‖y‖_2≤1} y^T [Bx − b],

and Nesterov's smoothing with d(y) = (1/2) y^T y allows to find an ε-solution to the problem in O(1) LR/ε steps, provided that LR/ε ≥ 1; each step here requires two matrix-vector multiplications, one involving B and one involving B^T. On the other hand, it is known that when n > LR/ε, every method for solving (LS) which is given b in advance and is allowed to "learn" B via matrix-vector multiplications involving B and B^T needs, in the worst (over b and B with ‖B‖ ≤ L) case, at least O(1) LR/ε matrix-vector multiplications in order to find an ε-solution to (LS). We can say that in the large-scale case the complexity of the pretty well structured Least Squares problems (LS) with respect to First Order methods is O(1/ε), and this is what Nesterov's smoothing achieves.

^14 these are exactly the functions responsible for the lower complexity bound O(1/ε²).
Nesterov’s smoothing is one way to utilize a “nice” (with “good” φ) saddle point representation
(SP) of the objective in (CP) to accelerate processing the latter problem. In the sequel, we
present another way to utilize (SP) – the Mirror Prox (MP) algorithm, which needs φ to be
smooth (with Lipschitz continuous gradient), works directly on the saddle point problem (SP)
and solves it with complexity O(1/), same as Nesterov’s smoothing. Let us stress that the
“fast gradient methods” (this seems to become a common name of the algorithms we have just
mentioned and their numerous “relatives” developed in the recent years) do exploit problem’s
structure: this is exactly what is used to build the saddle point reformulation of the problem of
interest; and since the reformulated problem is solved by black-box-oriented methods, we have
good chances to end up with computationally cheap algorithms.

5.5.1.1 Examples of saddle point representations


Here we present instructive examples of “nice” – just bilinear – saddle point representations of
important and “heavily nonsmooth” convex functions. Justification of the examples is left to
the reader. Note that the list can be easily extended.
1. ‖·‖_p-norm:

‖x‖_p = max_{y: ‖y‖_q ≤ 1} ⟨y, x⟩,  q = p/(p−1).
Matrix analogy: the Schatten p-norm of a rectangular matrix x ∈ R^{m×n} is defined as |x|_p = ‖σ(x)‖_p, where σ(x) = [σ1(x); ...; σ_{min[m,n]}(x)] is the vector of the largest min[m,n] singular values of x (i.e., the singular values which have chances to be nonzero) arranged in non-ascending order. We have

∀x ∈ R^{m×n}: |x|_p = max_{y∈R^{m×n}: |y|_q≤1} ⟨y, x⟩ ≡ Tr(y^T x),  q = p/(p−1).

Variation: the p-norm of the "positive part" x₊ = [max[x1, 0]; ...; max[xn, 0]] of a vector x ∈ R^n:

‖x₊‖_p = max_{y∈R^n: y≥0, ‖y‖_q≤1} ⟨y, x⟩ ≡ y^T x,  q = p/(p−1).

Matrix analogy: for a symmetric matrix x ∈ S^n, let x₊ be the matrix with the same eigenvectors and the eigenvalues replaced with their positive parts. Then

|x₊|_p = max_{y∈S^n: y⪰0, |y|_q≤1} ⟨y, x⟩ ≡ Tr(yx),  q = p/(p−1).
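The first representation is just Hölder's inequality together with its equality case; a quick numerical check (numpy assumed, toy data) constructs the maximizer y* explicitly:

```python
import numpy as np

rng = np.random.default_rng(4)
p = 3.0
q = p / (p - 1.0)                      # conjugate exponent
x = rng.standard_normal(6)
norm_p = np.sum(np.abs(x) ** p) ** (1.0 / p)

# Equality case of Hölder: y*_i = sign(x_i) |x_i|^{p-1} / ||x||_p^{p-1}
y_star = np.sign(x) * np.abs(x) ** (p - 1) / norm_p ** (p - 1)

inner = y_star @ x                     # should equal ||x||_p
norm_q = np.sum(np.abs(y_star) ** q) ** (1.0 / q)   # should equal 1

# Random feasible y (scaled into the q-ball) never beat y_star.
vals = []
for _ in range(200):
    y = rng.standard_normal(6)
    y /= max(1.0, np.sum(np.abs(y) ** q) ** (1.0 / q))
    vals.append(y @ x)
```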

2. Maximum entry s1(x) of a vector x and the sum s_k(x) of the k maximal entries of a vector x ∈ R^n:

∀(x ∈ R^n, k ≤ n):
s1(x) = max_{y∈∆_n} ⟨y, x⟩,  ∆_n = {y ∈ R^n : y ≥ 0, Σ_i y_i = 1};
s_k(x) = max_{y∈∆_{n,k}} ⟨y, x⟩,  ∆_{n,k} = {y ∈ R^n : 0 ≤ y_i ≤ 1 ∀i, Σ_i y_i = k}.

Matrix analogies: the maximal eigenvalue S1(x) and the sum S_k(x) of the k largest eigenvalues of a matrix x ∈ S^n:

∀(x ∈ S^n, k ≤ n):
S1(x) = max_{y∈D_n} ⟨y, x⟩ ≡ Tr(yx),  D_n = {y ∈ S^n : y ⪰ 0, Tr(y) = 1};
S_k(x) = max_{y∈D_{n,k}} ⟨y, x⟩ ≡ Tr(yx),  D_{n,k} = {y ∈ S^n : 0 ⪯ y ⪯ I, Tr(y) = k}.
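In the vector case the maximizers are explicit: the optimal y in the s_k-representation is the 0/1 indicator of the k largest entries, a vertex of ∆_{n,k}. A small numerical check (numpy assumed, toy data):

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 8, 3
x = rng.standard_normal(n)

s_k = np.sort(x)[-k:].sum()            # sum of the k largest entries

# Optimal y in  max_{y in Delta_{n,k}} <y, x>:  indicator of the top-k entries.
y_star = np.zeros(n)
y_star[np.argsort(x)[-k:]] = 1.0

# Other points of Delta_{n,k} (convex combinations of its 0/1 vertices)
# never give a larger inner product.
vals = []
for _ in range(200):
    verts = [np.bincount(rng.choice(n, size=k, replace=False), minlength=n)
             for _ in range(4)]
    w = rng.dirichlet(np.ones(4))
    y = sum(wi * v for wi, v in zip(w, verts))
    vals.append(y @ x)
```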

3. Maximal magnitude s1(abs[x]) of the entries of x and the sum s_k(abs[x]) of the k largest magnitudes of entries of x:

∀(x ∈ R^n, k ≤ n):
s1(abs[x]) = max_{y∈R^n: ‖y‖1≤1} ⟨y, x⟩;
s_k(abs[x]) = max_{y∈R^n: ‖y‖∞≤1, ‖y‖1≤k} ⟨y, x⟩ ≡ y^T x.

Matrix analogy: the sum Σ_k(x) of the k maximal singular values of a matrix x:

∀(x ∈ R^{m×n}, k ≤ min[m, n]):
Σ_k(x) = max_{y∈R^{m×n}: |y|∞≤1, |y|1≤k} ⟨y, x⟩ ≡ Tr(y^T x).
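Similarly, for s_k(abs[x]) the optimal y puts ±1 (the sign of x_i) on the k largest-magnitude coordinates and 0 elsewhere; it is feasible for {‖y‖∞ ≤ 1, ‖y‖1 ≤ k} and attains the sum of the k largest magnitudes. A quick check (numpy assumed, toy data):

```python
import numpy as np

rng = np.random.default_rng(6)
n, k = 9, 4
x = rng.standard_normal(n)

sk_abs = np.sort(np.abs(x))[-k:].sum()   # sum of the k largest magnitudes

# Claimed maximizer over {y : ||y||_inf <= 1, ||y||_1 <= k}:
# +-1 on the top-k magnitude coordinates, 0 elsewhere.
idx = np.argsort(np.abs(x))[-k:]
y_star = np.zeros(n)
y_star[idx] = np.sign(x[idx])
```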

We augment this sample of "raw materials" with calculus rules. We restrict ourselves to bilinear saddle point representations

f(x) = max_{y∈Y} [⟨y, Ax⟩ + ⟨a, x⟩ + ⟨b, y⟩ + c],

where x runs through a Euclidean space E_x, Y is a convex compact subset of a Euclidean space E_y, x ↦ Ax : E_x → E_y is a linear mapping, and ⟨·,·⟩ are the inner products in the respective spaces. In the rules to follow, all Y_i are nonempty convex compact sets.
1. [Taking conic combinations] Let

f_i(x) = max_{y_i∈Y_i} [⟨y_i, A_i x⟩ + ⟨a_i, x⟩ + ⟨b_i, y_i⟩ + c_i], 1 ≤ i ≤ m,

and let λ_i ≥ 0. Then

Σ_i λ_i f_i(x) = max_{y=[y1;...;ym]∈Y=Y1×...×Ym} [ Σ_i λ_i ⟨y_i, A_i x⟩ + ⟨Σ_i λ_i a_i, x⟩ + Σ_i λ_i ⟨b_i, y_i⟩ + Σ_i λ_i c_i ]

(the four groups of terms on the right are, respectively, the ⟨y, Ax⟩, ⟨a, x⟩, ⟨b, y⟩, and c of the resulting representation).
2. [Taking direct sums] Let

f_i(x_i) = max_{y_i∈Y_i} [⟨y_i, A_i x_i⟩ + ⟨a_i, x_i⟩ + ⟨b_i, y_i⟩ + c_i], 1 ≤ i ≤ m.

Then

f(x ≡ [x1; ...; xm]) = Σ_{i=1}^m f_i(x_i) = max_{y=[y1;...;ym]∈Y=Y1×...×Ym} [ Σ_i ⟨y_i, A_i x_i⟩ + Σ_i ⟨a_i, x_i⟩ + Σ_i ⟨b_i, y_i⟩ + Σ_i c_i ]

(with the same identification of ⟨y, Ax⟩, ⟨a, x⟩, ⟨b, y⟩, c as above).

3. [Affine substitution of argument] For x ∈ R^n, let

f(x) = max_{y∈Y} [⟨y, Ax⟩ + ⟨a, x⟩ + ⟨b, y⟩ + c],

and let ξ ↦ Pξ + p be an affine mapping from R^m to R^n. Then

g(ξ) := f(Pξ + p) = max_{y∈Y} [⟨y, APξ⟩ + ⟨P*a, ξ⟩ + ⟨b + Ap, y⟩ + [c + ⟨a, p⟩]].

4. [Taking maximum] Let

f_i(x) = max_{y_i∈Y_i} [⟨y_i, A_i x⟩ + ⟨a_i, x⟩ + ⟨b_i, y_i⟩ + c_i], 1 ≤ i ≤ m.

Then

f(x) := max_i f_i(x) = max_{y=[(η1,u1);...;(ηm,um)]∈Y} [ Σ_i ⟨u_i, A_i x⟩ + ⟨Σ_i η_i a_i, x⟩ + Σ_i ⟨b_i, u_i⟩ + Σ_i c_i η_i ],

Y = { y = [(η1, u1); ...; (ηm, um)] : (η_i, u_i) ∈ Ŷ_i, 1 ≤ i ≤ m, Σ_i η_i = 1 },

where Ŷ_i are the "conic hats" spanned by Y_i:

Ŷ_i = cl{(η, u) : 0 < η ≤ 1, η^{-1}u ∈ Y_i} = {[0, 0]} ∪ {(η, u) : 0 < η ≤ 1, η^{-1}u ∈ Y_i}

(the second "=" follows from the compactness of Y_i). Here is the derivation:

max_i f_i(x) = max_{η∈∆_m} Σ_i η_i f_i(x)
= max_{η∈∆_m, y_i∈Y_i, 1≤i≤m} [ Σ_i ⟨η_i y_i, A_i x⟩ + Σ_i ⟨η_i a_i, x⟩ + Σ_i ⟨b_i, η_i y_i⟩ + Σ_i η_i c_i ]  [set u_i := η_i y_i]
= max_{(η_i,u_i)∈Ŷ_i, 1≤i≤m, Σ_i η_i=1} [ Σ_i ⟨u_i, A_i x⟩ + ⟨Σ_i η_i a_i, x⟩ + Σ_i ⟨b_i, u_i⟩ + Σ_i c_i η_i ].
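The rule is easy to sanity-check in the scalar case m = 2 with Y_i = [−1, 1], where f_i(x) = |α_i x + β_i| + γ_i: the inner maximum over u_i with |u_i| ≤ η_i contributes η_i |α_i x + β_i|, and maximizing the remaining linear function of η over the segment η_1 + η_2 = 1 recovers max_i f_i(x). A sketch (numpy assumed; the coefficients α_i, β_i, γ_i are arbitrary toy choices):

```python
import numpy as np

# f_i(x) = max_{|y|<=1} [y*(alpha_i*x + beta_i)] + gamma_i
#        = |alpha_i*x + beta_i| + gamma_i
alpha = np.array([1.0, -2.0])
beta = np.array([0.3, 1.0])
gamma = np.array([0.5, -0.2])

def f_max(x):
    return np.max(np.abs(alpha * x + beta) + gamma)

def f_conic_hats(x, grid=201):
    # Maximize over eta1 in [0, 1] (eta2 = 1 - eta1) after the inner
    # maximization over |u_i| <= eta_i, which gives eta_i*|alpha_i*x + beta_i|.
    eta1 = np.linspace(0.0, 1.0, grid)
    eta = np.stack([eta1, 1.0 - eta1])            # shape (2, grid)
    vals = (eta * (np.abs(alpha * x + beta) + gamma)[:, None]).sum(axis=0)
    return vals.max()
```

Since the maximized expression is linear in η, the maximum sits at an endpoint η_1 ∈ {0, 1}, i.e., at one of the f_i — which is the content of the rule.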

5.5.2 The Mirror Prox algorithm

We are about to present the Mirror Prox algorithm [38] for solving the saddle point problem

min_{x∈X} max_{y∈Y} φ(x, y) (SP)

with a smooth convex-concave cost function φ. Smoothness means that φ is continuously differentiable on Z = X × Y, and that the associated vector field (cf. (5.3.44))

F(z = (x, y)) = [F_x(x, y) := ∇_x φ(x, y); F_y(x, y) := −∇_y φ(x, y)] : Z := X × Y → R^{n_x} × R^{n_y}

is Lipschitz continuous on Z.

The setup for the MP algorithm as applied to (SP) is the same as for the saddle point Mirror Descent, see section 5.3.6: we fix a norm ‖·‖ on the embedding space R^{n_x} × R^{n_y} of the domain Z of φ and a DGF ω for this domain. Since F is Lipschitz continuous, we have, for properly chosen L < ∞,

‖F(z) − F(z′)‖_* ≤ L‖z − z′‖ ∀z, z′ ∈ Z. (5.5.1)

The MP algorithm as applied to (SP) is the recurrence

z1 := (x1, y1) = z_ω := argmin_{z∈Z} ω(z),
w_t := (x̃_t, ỹ_t) = Prox_{z_t}(γ_t F(z_t)),
z_{t+1} := (x_{t+1}, y_{t+1}) = Prox_{z_t}(γ_t F(w_t)),
Prox_z(ξ) = argmin_{w∈Z} [V_z(w) + ⟨ξ, w⟩], (5.5.2)
V_z(w) = ω(w) − ⟨ω′(z), w − z⟩ − ω(z) ≥ (1/2)‖w − z‖²,
z^t = [Σ_{τ=1}^t γ_τ]^{-1} Σ_{τ=1}^t γ_τ w_τ,

where γ_t are positive stepsizes. This looks pretty much like the saddle point MD algorithm, see (5.3.45); the only difference is in what is called the extra-gradient step: what was the updating z_t ↦ z_{t+1} in MD now defines an intermediate point w_t, and the actual updating z_t ↦ z_{t+1} uses, instead of the value of F at z_t, the value of F at this intermediate point. Another difference is that now the approximate solutions z^t are weighted sums of the intermediate points w_τ, and not of the subsequent prox-centers z_τ, as in MD. These "minor changes" result in major consequences (provided that F is Lipschitz continuous). Here is the corresponding result:
Theorem 5.5.1 Let convex-concave saddle point problem (SP) be solved by the MP algorithm.
Then
(i) For every $\tau$, one has for all $u\in Z$:

$$
\begin{array}{rcl}
\gamma_\tau\langle F(w_\tau),w_\tau-u\rangle&\le&V_{z_\tau}(u)-V_{z_{\tau+1}}(u)+\underbrace{[\gamma_\tau\langle F(w_\tau),w_\tau-z_{\tau+1}\rangle-V_{z_\tau}(z_{\tau+1})]}_{=:\delta_\tau},\qquad(a)\\
\delta_\tau&\le&\gamma_\tau\langle F(w_\tau)-F(z_\tau),w_\tau-z_{\tau+1}\rangle-V_{z_\tau}(w_\tau)-V_{w_\tau}(z_{\tau+1})\qquad(b)\\
&\le&\frac{\gamma_\tau^2}{2}\|F(w_\tau)-F(z_\tau)\|_*^2-\frac12\|w_\tau-z_\tau\|^2\qquad(c)
\end{array}
\tag{5.5.3}
$$

In particular, in the case of (5.5.1) one has

$$
\gamma_\tau\le\frac1L\ \Rightarrow\ \delta_\tau\le 0,
$$

see (5.5.3.c).
(ii) Let $\Omega$ be the $\omega$-diameter of $Z=X\times Y$. For $t=1,2,\ldots$, one has

$$
\epsilon_{\mathrm{sad}}(z^t)\le\frac{\frac{\Omega^2}{2}+\sum_{\tau=1}^t\delta_\tau}{\sum_{\tau=1}^t\gamma_\tau}.
\tag{5.5.4}
$$

In particular, let the stepsizes $\gamma_\tau$, $1\le\tau\le t$, be such that $\gamma_\tau\ge\frac1L$ and $\delta_\tau\le 0$ for all $\tau\le t$ (by (i), this is definitely so for the stepsizes $\gamma_\tau\equiv\frac1L$, provided that (5.5.1) takes place). Then

$$
\epsilon_{\mathrm{sad}}(z^t)\le\frac{\Omega^2}{2\sum_{\tau=1}^t\gamma_\tau}\le\frac{\Omega^2L}{2t}.
\tag{5.5.5}
$$
Proof. $1^0$. To verify (i), let us fix $\tau$ and set $\delta=\delta_\tau$, $z=z_\tau$, $\xi=\gamma_\tau F(z_\tau)$, $w=w_\tau$, $\eta=\gamma_\tau F(w_\tau)$, $z_+=z_{\tau+1}$. Note that the relations between the entities in question are given by

$$
z\in Z,\quad w=\mathrm{Prox}_z(\xi),\quad z_+=\mathrm{Prox}_z(\eta).
\tag{5.5.6}
$$

As we remember, from $w=\mathrm{Prox}_z(\xi)$ and $z_+=\mathrm{Prox}_z(\eta)$ it follows that $w,z_+\in Z$ and

$$
\begin{array}{ll}
(a)&\langle\xi-\omega'(z)+\omega'(w),u-w\rangle\ge 0\quad\forall u\in Z,\\
(b)&\langle\eta-\omega'(z)+\omega'(z_+),u-z_+\rangle\ge 0\quad\forall u\in Z,
\end{array}
$$
470 LECTURE 5. SIMPLE METHODS FOR LARGE-SCALE PROBLEMS

whence for every $u\in Z$ we have

$$
\begin{array}{rcl}
\langle\eta,w-u\rangle&=&\langle\eta,z_+-u\rangle+\langle\eta,w-z_+\rangle\\
&\le&\langle\omega'(z)-\omega'(z_+),z_+-u\rangle+\langle\eta,w-z_+\rangle\quad\text{[by (b)]}\\
&=&[\omega(u)-\omega(z)-\langle\omega'(z),u-z\rangle]-[\omega(u)-\omega(z_+)-\langle\omega'(z_+),u-z_+\rangle]\\
&&+\,[\omega(z)-\omega(z_+)+\langle\omega'(z),z_+-z\rangle]+\langle\eta,w-z_+\rangle\\
&=&V_z(u)-V_{z_+}(u)+[\langle\eta,w-z_+\rangle-V_z(z_+)]\quad\text{[we have proved (5.5.3.a)]}
\end{array}
$$

$$
\begin{array}{rcl}
\delta&:=&\langle\eta,w-z_+\rangle-V_z(z_+)\ =\ \langle\eta-\xi,w-z_+\rangle-V_z(z_+)+\langle\xi,w-z_+\rangle\\
&\le&\langle\eta-\xi,w-z_+\rangle-V_z(z_+)+\langle\omega'(z)-\omega'(w),w-z_+\rangle\quad\text{[by (a) with }u=z_+\text{]}\\
&=&\langle\eta-\xi,w-z_+\rangle+[-\omega(w)+\omega(z)+\langle\omega'(z),w-z\rangle]\\
&&+\,[-\omega(z_+)+\omega(w)+\langle\omega'(w),z_+-w\rangle]\\
&=&\langle\eta-\xi,w-z_+\rangle-V_z(w)-V_w(z_+)\quad\text{[we have proved (5.5.3.b)]}\\
&\le&\langle\eta-\xi,w-z_+\rangle-\frac12\big[\|w-z\|^2+\|w-z_+\|^2\big]\quad\text{[by strong convexity of }\omega\text{]}\\
&\le&\|\eta-\xi\|_*\|w-z_+\|-\frac12\big[\|w-z\|^2+\|w-z_+\|^2\big]\\
&\le&\frac12\|\eta-\xi\|_*^2-\frac12\|w-z\|^2\quad\text{[since }\|\eta-\xi\|_*\|w-z_+\|-\frac12\|w-z_+\|^2\le\frac12\|\eta-\xi\|_*^2\text{]},
\end{array}
$$

and we have arrived at (5.5.3.c)${}^{15}$. Thus, (5.5.3) is proved.


Relation (5.5.4) follows from the first inequality in (5.5.3) in exactly the same fashion as (5.3.47) was derived from (5.3.46) when proving Theorem 5.3.4.
All remaining claims in (i) and (ii) are immediate consequences of (5.5.3) and (5.5.4). □

Note that (5.5.5) corresponds to an $O(1/\epsilon)$ upper complexity bound.
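To make the recurrence (5.5.2) concrete, here is a minimal numerical sketch in the Euclidean proximal setup ($\omega(z)=\frac12\|z\|_2^2$, so the prox-mapping reduces to a projection), applied to a small bilinear saddle point problem. The problem data, dimensions, step count, and the use of the "safe" stepsize $1/L$ are illustrative assumptions, not taken from the text.

```python
import numpy as np

def mirror_prox(F, proj, z0, gamma, steps):
    # Recurrence (5.5.2) in the Euclidean setup: Prox_z(xi) = proj(z - xi).
    z, w_avg, g_sum = z0.copy(), np.zeros_like(z0), 0.0
    for _ in range(steps):
        w = proj(z - gamma * F(z))   # intermediate (extra-gradient) point w_t
        z = proj(z - gamma * F(w))   # the actual update uses F(w_t), not F(z_t)
        w_avg += gamma * w           # approximate solution z^t: weighted sum of w_t
        g_sum += gamma
    return w_avg / g_sum

# Toy bilinear problem min_{||x||_2<=1} max_{||y||_2<=1} y^T (A x - b):
rng = np.random.default_rng(0)
n = 5
A, b = rng.standard_normal((n, n)), rng.standard_normal(n)

def F(z):  # the field [grad_x phi; -grad_y phi] associated with (SP)
    x, y = z[:n], z[n:]
    return np.concatenate([A.T @ y, b - A @ x])

def proj(z):  # blockwise projection onto the two unit Euclidean balls
    x, y = z[:n], z[n:]
    return np.concatenate([x / max(1.0, np.linalg.norm(x)),
                           y / max(1.0, np.linalg.norm(y))])

L = np.linalg.norm(A, 2)  # Lipschitz constant of F, cf. (5.5.1)
zbar = mirror_prox(F, proj, np.zeros(2 * n), 1.0 / L, 500)
xbar, ybar = zbar[:n], zbar[n:]
# duality gap eps_sad(zbar); both inner optimizations admit closed forms here
gap = np.linalg.norm(A @ xbar - b) + np.linalg.norm(A.T @ ybar) + ybar @ b
```

For this bilinear instance the gap obeys the $O(1/t)$ bound (5.5.5) with the safe stepsize $1/L$.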

5.5.2.1 Refinement
Same as for various versions of MD, the complexity analysis of MP can be refined. The corresponding version of Theorem 5.5.1 is as follows.

Theorem 5.5.2 Let positive reals λ1 , λ2 , ... satisfy (5.3.25). Let convex-concave saddle point
problem (SP) be solved by the MP algorithm, and let
$$
z^t=\frac{\sum_{\tau=1}^t\lambda_\tau w_\tau}{\sum_{\tau=1}^t\lambda_\tau}.
$$

Then $z^t\in Z$, and for $t=1,2,\ldots$ one has

$$
\epsilon_{\mathrm{sad}}(z^t)\le\frac{\frac{\lambda_t}{\gamma_t}\frac{\Omega^2}{2}+\sum_{\tau=1}^t\frac{\lambda_\tau}{\gamma_\tau}\delta_\tau}{\sum_{\tau=1}^t\lambda_\tau},
\tag{5.5.7}
$$

with $\delta_\tau$ defined by (5.5.3.a).


In particular, let $\gamma_\tau\ge\frac1L$ be such that $\delta_\tau\le 0$ for all $\tau$ (as is the case when $\gamma_\tau\equiv\frac1L$ and (5.5.1) takes place, see (5.5.3.c)). Then

$$
\epsilon_{\mathrm{sad}}(z^t)\le\frac{\frac{\lambda_t}{\gamma_t}\Omega^2}{2\sum_{\tau=1}^t\lambda_\tau}.
\tag{5.5.8}
$$
${}^{15}$Note that the derivation we have just carried out is based solely on (5.5.6) and is completely independent of what $\xi$ and $\eta$ are; this observation underlies numerous extensions and modifications of MP.

Proof. Invoking Theorem 5.5.1, we get (5.5.3). This relation implies for every $\tau$:

$$
\forall(u\in Z):\ \lambda_\tau\langle F(w_\tau),w_\tau-u\rangle\le\frac{\lambda_\tau}{\gamma_\tau}\big[V_{z_\tau}(u)-V_{z_{\tau+1}}(u)\big]+\frac{\lambda_\tau}{\gamma_\tau}\delta_\tau,
$$

whence, same as in the proof of Theorem 5.3.2, for $t=1,2,\ldots$ we get

$$
\max_{u\in Z}\ \Lambda_t^{-1}\sum_{\tau=1}^t\lambda_\tau\langle F(w_\tau),w_\tau-u\rangle\le\frac{\frac{\lambda_t}{\gamma_t}\frac{\Omega^2}{2}+\sum_{\tau=1}^t\frac{\lambda_\tau}{\gamma_\tau}\delta_\tau}{\Lambda_t},\qquad\Lambda_t=\sum_{\tau=1}^t\lambda_\tau.
$$

Same as in the proof of Theorem 5.3.4, the latter inequality implies (5.5.7). The latter relation, under the premise of the Theorem, immediately implies (5.5.8). □

Incorporating aggressive stepsize policy. The efficiency estimate (5.5.5) in Theorem 5.5.1 is the better the larger are the stepsizes $\gamma_\tau$; however, this bound is obtained under the assumption that we maintain the relation $\delta_\tau\le 0$, and the proof of the Theorem suggests that when the stepsizes are greater than the "theoretically safe" value $\gamma_{\mathrm{safe}}=\frac1L$, there are no guarantees for $\delta_\tau$ to stay nonpositive. This does not mean, however, that all we can do is to use the worst-case-oriented stepsize policy $\gamma_\tau\equiv\gamma_{\mathrm{safe}}$. A reasonable "aggressive" stepsize policy is as follows: we start iteration $\tau$ with a "proposed value" $\gamma_\tau^+\ge\gamma_{\mathrm{safe}}$ of $\gamma_\tau$ and use $\gamma_\tau=\gamma_\tau^+$ to compute candidate $w_\tau$, $z_{\tau+1}$ and the corresponding $\delta_\tau$. If $\delta_\tau\le 0$, we consider our trial successful, treat the candidate $w_\tau$ and $z_{\tau+1}$ as actual ones and pass to iteration $\tau+1$, increasing the value of $\gamma^+$ by an absolute constant factor: $\gamma_{\tau+1}^+=\theta_+\gamma_\tau^+$, where $\theta_+>1$ is a parameter of our strategy. On the other hand, if with $\gamma_\tau=\gamma_\tau^+$ we get $\delta_\tau>0$, we consider the trial unsuccessful, reset $\gamma_\tau$ to the smaller value $\max[\theta_-\gamma_\tau,\gamma_{\mathrm{safe}}]$ ($\theta_-\in(0,1)$ is another parameter of our stepsize policy) and recompute the candidate $w_\tau$ and $z_{\tau+1}$ with $\gamma_\tau$ set to the new, reduced value. If we still get $\delta_\tau>0$, we once again reduce the value of $\gamma_\tau$ and rerun the iteration, proceeding in this fashion until $\delta_\tau\le 0$ is obtained. When it happens, we pass to iteration $\tau+1$, now setting $\gamma_{\tau+1}^+$ to the value of $\gamma_\tau$ we have reached.
Computational experience shows that with a reasonable choice of $\theta_\pm$ (our usual choice is $\theta_+=1.2$, $\theta_-=0.8$) the aggressive policy outperforms the theoretically safe (i.e., with $\gamma_\tau\equiv\gamma_{\mathrm{safe}}$) one: for the former policy, the "step per oracle call" index, defined as the ratio of the quantity $\Gamma_T=\sum_{\tau=1}^T\gamma_\tau$ participating in the efficiency estimate (5.5.5) to the total number of computations of $F(\cdot)$ in the course of $T$ steps (including computations of $F$ at the finally discarded trial intermediate points), where $T$ is the step at which we terminate the solution process, usually is essentially larger (for some problems, larger by orders of magnitude) than the similar ratio (i.e., $\gamma_{\mathrm{safe}}/2$) for the safe stepsize policy.
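The trial-and-retry loop just described can be sketched as follows, again in the Euclidean proximal setup; the test problem, its size, the iteration count, and the extra numerical guard in the retry loop are illustrative assumptions, while $\theta_\pm$ take the values quoted above.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A, b = rng.standard_normal((n, n)), rng.standard_normal(n)

def F(z):  # field of the bilinear saddle point min_x max_y y^T(Ax - b)
    x, y = z[:n], z[n:]
    return np.concatenate([A.T @ y, b - A @ x])

def prox(z, xi):  # Euclidean prox-mapping: project z - xi onto the unit balls
    x, y = (z - xi)[:n], (z - xi)[n:]
    return np.concatenate([x / max(1.0, np.linalg.norm(x)),
                           y / max(1.0, np.linalg.norm(y))])

def V(z, w):  # Euclidean Bregman distance
    return 0.5 * np.dot(w - z, w - z)

def mp_aggressive(z, T, gamma_safe, th_up=1.2, th_dn=0.8):
    gamma_next, w_avg, g_sum = gamma_safe, np.zeros_like(z), 0.0
    for _ in range(T):
        gamma = gamma_next
        while True:
            w = prox(z, gamma * F(z))
            z_new = prox(z, gamma * F(w))
            delta = gamma * np.dot(F(w), w - z_new) - V(z, z_new)  # delta of (5.5.3.a)
            if delta <= 0 or gamma <= gamma_safe:
                break                                  # successful trial
            gamma = max(th_dn * gamma, gamma_safe)     # unsuccessful: shrink and redo
        z = z_new
        w_avg += gamma * w
        g_sum += gamma
        gamma_next = th_up * gamma                     # propose a larger stepsize
    return w_avg / g_sum, g_sum

L = np.linalg.norm(A, 2)
gamma_safe = 1.0 / L
zbar, Gamma = mp_aggressive(np.zeros(2 * n), 300, gamma_safe)
xbar, ybar = zbar[:n], zbar[n:]
gap = np.linalg.norm(A @ xbar - b) + np.linalg.norm(A.T @ ybar) + ybar @ b
```

Since every accepted stepsize is at least $\gamma_{\mathrm{safe}}$, the quantity $\Gamma_T$ returned here is never worse than for the safe policy, and typically larger.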

5.5.2.2 Typical implementation


In a typical implementation of the MP algorithm, the components X and Y of the domain of
the saddle problem of interest are convex compact subsets of direct products of finitely many
standard blocks, specifically, Euclidean, `1 /`2 and nuclear norm balls. For example, the usual
box is the direct product of several segments (and thus – of one-dimensional Euclidean balls), a
(standard) simplex is a subset of the `1 ball (that is, of a specific `1 /`2 ball)16 , a spectahedron
${}^{16}$We could treat the standard simplex as a standard block associated with the Entropy/Simplex proximal setup; to save words, we prefer to treat standard simplexes as parts of the unit $\ell_1$ balls.

is a subset of a specific nuclear norm ball, etc. In the sequel, we assume that $X$ and $Y$ are of the same dimensions as the direct products of standard blocks embedding these two sets. In the case in question, $Z$ is a subset of the direct product $\widehat{Z}=Z_1\times\ldots\times Z_N\subset E=E_1\times\ldots\times E_N$ ($E_j$ are the Euclidean spaces embedding $Z_j$) of standard blocks $Z_j$, and the dimension of $Z$ is equal to the one of $\widehat{Z}$. Recall that we have associated standard blocks and their embedding spaces with standard proximal setups, see section 5.3.3. The most natural way to get a proximal setup for the saddle point problem in question is to aggregate the proximal setups for separate blocks into a proximal setup $(\omega(\cdot),\|\cdot\|)$ for $\widehat{Z}\subset E$, and then to "reduce" this setup onto $Z$, meaning that we keep the norm $\|\cdot\|$ on the embedding space $E$ (this space is common for $Z\subset\widehat{Z}$ and $\widehat{Z}$) intact and restrict the DGF $\omega$ from $\widehat{Z}\supset Z$ onto $Z$. The latter is legitimate: since $Z$ and $\widehat{Z}$ are of the same dimension, $\omega|_Z$ is a DGF for $Z$ compatible with $\|\cdot\|$, and the $\omega|_Z$-radius of $Z$ clearly does not exceed the quantity

$$
\widehat{\Omega}=\sqrt{2\Theta},\qquad\Theta=\max_{\widehat{Z}}\omega(\cdot)-\min_{\widehat{Z}}\omega(\cdot).
$$

Now, the simplest way to aggregate the standard proximal setups for the standard blocks $Z_j$ into such a setup for $\widehat{Z}$ is as follows: denoting by $\|\cdot\|_{(j)}$ the norms, and by $\omega_j(z^j)$ the DGFs as given by the setups for the blocks, we equip the embedding space of $\widehat{Z}$ with the norm

$$
\|[z^1;\ldots;z^N]\|=\sqrt{\sum_{j=1}^N\alpha_j\|z^j\|_{(j)}^2},
$$

and equip $\widehat{Z}$ with the DGF

$$
\omega(z^1,\ldots,z^N)=\sum_{j=1}^N\beta_j\omega_j(z^j).
$$

Here αj and βj are positive aggregation weights, which we choose in order to end up with a
legitimate proximal setup for Z ⊂ E with the best possible efficiency estimate. The standard
recipe is as follows (for justification, see [38]):

Our initial data are the "partial proximal setups" $\|\cdot\|_{(j)}$, $\omega_j(\cdot)$ along with the upper bounds $\Theta_j$ on the variations

$$
\max_{Z_j}\omega_j(\cdot)-\min_{Z_j}\omega_j(\cdot).
$$

Now, the direct product structure of the embedding space of $Z$ allows to represent the vector field $F$ associated with the convex-concave function in question in the "block form"

$$
F(z^1,\ldots,z^N)=[F_1(z);\ldots;F_N(z)].
$$

Assuming this field Lipschitz continuous, we can find (upper bounds on) the "partial Lipschitz constants" $L_{ij}$ of this field, so that

$$
\forall(i,\ z=[z^1;\ldots;z^N]\in Z,\ w=[w^1;\ldots;w^N]\in Z):\quad\|F_i(z)-F_i(w)\|_{(i,*)}\le\sum_{j=1}^NL_{ij}\|z^j-w^j\|_{(j)},
$$

where $\|\cdot\|_{(i,*)}$ are the norms conjugate to $\|\cdot\|_{(i)}$. W.l.o.g. we assume the bounds $L_{ij}$ symmetric: $L_{ij}=L_{ji}$.

In terms of the parameters of the partial setups and the quantities $L_{ij}$, the aggregation is as follows: we set

$$
\begin{array}{c}
M_{ij}=\sqrt{L_{ij}\Theta_i\Theta_j},\qquad\sigma_k=\frac{\sum_jM_{kj}}{\sum_{i,j}M_{ij}},\qquad\alpha_j=\frac{\sigma_j}{\Theta_j},\\[2mm]
\|[z^1;\ldots;z^N]\|^2=\sum_{j=1}^N\alpha_j\|z^j\|_{(j)}^2,\qquad\omega(z^1,\ldots,z^N)=\sum_{j=1}^N\alpha_j\omega_j(z^j),
\end{array}
$$

which results in

$$
\Theta=\max_{\widehat{Z}}\omega(\cdot)-\min_{\widehat{Z}}\omega(\cdot)\le 1
$$

(that is, the $\omega$-radius of $Z$ does not exceed 2). The condition (5.5.1) is now satisfied with

$$
L=\mathcal{L}:=\sum_{i,j}\sqrt{L_{ij}\Theta_i\Theta_j},
$$

and the efficiency estimate (5.5.5) becomes

$$
\epsilon_{\mathrm{sad}}(z^t)\le\mathcal{L}\,t^{-1},\qquad t=1,2,\ldots
$$
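As a quick numerical companion to this recipe, the sketch below computes $M_{ij}$, $\sigma_j$, $\alpha_j$ and $\mathcal{L}$ from given partial data. The two-block data mimic the bilinear Illustration that follows, but the concrete values of $\Theta_j$ and of the off-diagonal Lipschitz bound are hypothetical.

```python
import numpy as np

def aggregate(Lmat, Theta):
    # The recipe above: M_ij = sqrt(L_ij * Theta_i * Theta_j),
    # sigma_k = sum_j M_kj / sum_ij M_ij, alpha_j = sigma_j / Theta_j,
    # and the resulting Lipschitz constant curly-L = sum_ij M_ij.
    Lmat, Theta = np.asarray(Lmat, float), np.asarray(Theta, float)
    M = np.sqrt(Lmat * np.outer(Theta, Theta))
    sigma = M.sum(axis=1) / M.sum()
    alpha = sigma / Theta
    return alpha, M.sum()

Theta = [np.log(1000.0), 1.0]   # e.g. an l1-ball block and a Euclidean block
normB = 3.0                     # hypothetical value of ||B||_{p->r}
alpha, Lagg = aggregate([[0.0, normB], [normB, 0.0]], Theta)

# Variation of the aggregated omega: sum_j alpha_j * Theta_j = sum_j sigma_j = 1,
# in accordance with Theta <= 1 above.
variation = float(np.dot(alpha, Theta))
```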

Illustration. Let us look at the problem

$$
\mathrm{Opt}_{r,p}=\min_{x\in X=\{x\in\mathbf{R}^n:\|x\|_p\le 1\}}\|Bx-b\|_r,
$$

where $B$ is an $m\times n$ matrix and $p\in\{1,2\}$, $r\in\{2,\infty\}$. The problem admits an immediate saddle point reformulation:

$$
\min_{x\in X=\{x\in\mathbf{R}^n:\|x\|_p\le 1\}}\ \max_{y\in Y=\{y\in\mathbf{R}^m:\|y\|_{r_*}\le 1\}}\ y^T(Bx-b),
$$

where, as always,

$$
r_*=\frac{r}{r-1}.
$$

We are in the situation of $Z_1=X$, $Z_2=Y$ with

$$
F(z^1\equiv x,z^2\equiv y)=\big[F_1(y)=B^Ty;\ F_2(x)=b-Bx\big].
$$

Besides this, the blocks $Z_1$, $Z_2$ are either unit $\|\cdot\|_2$-balls, or unit $\ell_1$ balls. The associated quantities $\Theta_i$, $L_{ij}$, $i,j=1,2$, within absolute constant factors, are as follows:

$$
\begin{array}{c|cc}
&p=1&p=2\\\hline
\Theta_1&\ln(n)&1
\end{array}
\qquad
\begin{array}{c|cc}
&r=2&r=\infty\\\hline
\Theta_2&1&\ln(m)
\end{array}
$$

$$
[L_{ij}]_{i,j=1}^2=\left[\begin{array}{cc}0&\|B\|_{p\to r}\\\|B\|_{p\to r}&0\end{array}\right],\qquad\|B\|_{p\to r}=\max_{\|x\|_p\le 1}\|Bx\|_r.
$$

The matrix norms $\|B\|_{p\to\max[r,2]}$, again within absolute constant factors, are given in the following table:

$$
\begin{array}{c|cc}
&p=1&p=2\\\hline
r=2&\max_{j\le n}\|\mathrm{Col}_j[B]\|_2&\sigma_{\max}(B)\\
r=\infty&\max_{i,j}|B_{ij}|&\max_{i\le m}\|\mathrm{Row}_i[B]\|_2
\end{array}
$$

where, as always, $\mathrm{Col}_j[B]$ is the $j$-th column of matrix $B$. For example, consider the situations of extreme interest for Compressed Sensing, namely, $\ell_1$-minimization with $\|\cdot\|_2$-fit and $\ell_1$-minimization with $\ell_\infty$-fit (that is, the cases $(p=1,r=2)$ and $(p=1,r=\infty)$, respectively). Here the efficiency estimates achievable for MP (the best known so far in the large scale case, although achievable not only with MP) are as follows:

• $\ell_1$-minimization with $\|\cdot\|_2$-fit:

$$
\|x^t\|_1\le 1\ \ \&\ \ \|Bx^t-b\|_2\le\min_{\|x\|_1\le 1}\|Bx-b\|_2+O(1)\frac{\sqrt{\ln(n)}\max_j\|\mathrm{Col}_j[B]\|_2}{t};
$$

• $\ell_1$-minimization with $\|\cdot\|_\infty$-fit:

$$
\|x^t\|_1\le 1\ \ \&\ \ \|Bx^t-b\|_\infty\le\min_{\|x\|_1\le 1}\|Bx-b\|_\infty+O(1)\frac{\sqrt{\ln(m)\ln(n)}\max_{i,j}|B_{ij}|}{t};
$$
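A quick numerical sanity check of the saddle point reformulation used above: for any residual $u=Bx-b$ one has $\|u\|_r=\max\{y^Tu:\|y\|_{r_*}\le 1\}$, and the inner maximizer is available in closed form. The data below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
u = rng.standard_normal(7)            # stands for a residual Bx - b

# r = 2 (so r* = 2): the maximizer is y = u / ||u||_2
y2 = u / np.linalg.norm(u)
val2 = y2 @ u                         # equals ||u||_2

# r = infinity (so r* = 1): the maximizer is a signed coordinate vector
# supported on a largest-magnitude entry of u
i = int(np.argmax(np.abs(u)))
y1 = np.zeros_like(u)
y1[i] = np.sign(u[i])
val_inf = y1 @ u                      # equals ||u||_inf
```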

5.6 Summary on Mirror Descent and Mirror Prox Algorithms


The goal of this section is to summarize the numerous preceding results on (non-bundle versions of) Mirror Descent and Mirror Prox algorithms as applied to various problems "with convex structure." We deliberately make the presentation that follows as self-contained as possible, even at the price of reproducing, more or less verbatim, some of the preceding constructions and proofs.

5.6.1 Situation
We fix once and for all a nonempty convex compact subset $Z$ of a Euclidean space $E$ (it will serve as the domain of the problems we want to solve). We equip $Z$ with a proximal setup $\|\cdot\|$, $\omega(\cdot)$, so that $\|\cdot\|$ is a norm on $E$ (its conjugate is denoted $\|\cdot\|_*$) and $\omega(z):Z\to\mathbf{R}$ is a continuously differentiable function which is strongly convex with modulus 1:

$$
\langle\omega'(z)-\omega'(z'),z-z'\rangle\ge\|z-z'\|^2\quad\forall z,z'\in Z.
\tag{5.6.1}
$$

As a consequence, we arrive at
• Bregman distance from $z'\in Z$ to $z\in Z$:

$$
V_z(z')=\omega(z')-\omega(z)-\langle z'-z,\omega'(z)\rangle\ge\frac12\|z'-z\|^2,\qquad z,z'\in Z;
\tag{5.6.2}
$$

• $\omega$-capacity of $Z$:

$$
\Theta=\max_{z,z'\in Z}V_z(z');
\tag{5.6.3}
$$

and $\omega$-diameter of $Z$

$$
\Omega=\sqrt{2\Theta};
$$

note that $\Omega\ge\max_{z,z'\in Z}\|z-z'\|$ due to $V_z(z')\ge\frac12\|z-z'\|^2$.

• Prox-mapping $\mathrm{Prox}_z^U(\xi):E\to U$ centered at $z\in Z$, where $U$ is a nonempty closed subset of $Z$. The mapping is defined by

$$
\mathrm{Prox}_z^U(\xi)=\mathop{\mathrm{argmin}}_{u\in U}\,[\langle\xi,u\rangle+V_z(u)];
\tag{5.6.4}
$$

we skip the superscript $U$ when $U=Z$: $\mathrm{Prox}_z(\cdot)=\mathrm{Prox}_z^Z(\cdot)$. For this mapping, the Magic relation holds true:

$$
\begin{array}{l}
z\in Z,\ \xi\in E,\ z_+=\mathrm{Prox}_z^U(\xi)\\
\Rightarrow\ \langle\xi,z_+-u\rangle\le\langle\omega'(z_+)-\omega'(z),u-z_+\rangle=V_z(u)-V_{z_+}(u)-V_z(z_+)\quad\forall u\in U.
\end{array}
\tag{5.6.5}
$$
5.6. SUMMARY ON MIRROR DESCENT AND MIRROR PROX ALGORITHMS 475

Indeed, the inequality in the declared conclusion is given by the optimality conditions stating that $\langle\omega'(z_+)+\xi-\omega'(z),u-z_+\rangle\ge 0$ for all $u\in U$. The subsequent equality is given by the following computation:

$$
\begin{array}{rcl}
\langle\omega'(z_+)-\omega'(z),u-z_+\rangle&=&[\omega(u)-\omega(z)-\langle\omega'(z),u-z\rangle]\\
&&-\,[\omega(u)-\omega(z_+)-\langle\omega'(z_+),u-z_+\rangle]\\
&&-\,[\omega(z_+)-\omega(z)-\langle\omega'(z),z_+-z\rangle]\\
&=&V_z(u)-V_{z_+}(u)-V_z(z_+).
\end{array}
$$
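Two standard instances of the prox-mapping (5.6.4) admit closed forms, illustrated in the sketch below: in the Euclidean setup ($\omega(z)=\frac12\|z\|_2^2$) the mapping is a projection of $z-\xi$ onto $U$, and in the Entropy/Simplex setup ($\omega(z)=\sum_iz_i\ln z_i$ on the standard simplex) it is a multiplicative update. The numerical data are arbitrary.

```python
import numpy as np

def prox_entropy_simplex(z, xi):
    # Entropy setup on the standard simplex: minimizing <xi,u> + V_z(u)
    # over the simplex gives u_i proportional to z_i * exp(-xi_i).
    g = np.log(z) - xi
    g -= g.max()              # subtract max for numerical stability
    u = np.exp(g)
    return u / u.sum()

z = np.array([0.2, 0.3, 0.5])
xi = np.array([0.1, -0.2, 0.3])
u = prox_entropy_simplex(z, xi)

# First-order optimality for (5.6.4): omega'(u) + xi - omega'(z) must be
# orthogonal to the simplex, i.e. ln u_i + xi_i - ln z_i is the same for all i.
opt_residual = np.log(u) + xi - np.log(z)
```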

5.6.2 Mirror Descent and Mirror Prox algorithms


5.6.2.1 First Order oracles and oracle-based algorithms
Algorithms we are interested in operate with oracles providing information on the problem being processed. A model of oracle general enough for our purposes is a "black box" procedure O; at the $t$-th call to this procedure, the input being a query point $z_t\in Z$, the procedure returns a vector $G_t(z_t,\xi_t)\in E$ which is a deterministic function, perhaps depending on $t$, of the query point and of the $t$-th "oracle noise" $\xi_t$, where $\{\xi_t,t\ge 1\}$ is a sequence of independent identically distributed realizations of a random vector $\xi$. We assume that for all $t\ge 1$ and $z\in Z$ the random vector $G_t(z,\xi)$ has finite expectation, and set

gt (z) = Eξ {Gt (z, ξ)} (5.6.6)

In the sequel, we denote by ξ t = (ξ1 , ..., ξt ) the sequence of the first t realizations of the oracle’s
noise.
We call oracle O stationary, if Gt (·, ·) ≡ G(·, ·) is independent of t, in which case we call
G(·, ·) the response of O.

An algorithm B associated with oracle O is a procedure which generates subsequent query (a.k.a. search) points $z_t\in Z$, $t=1,2,\ldots$ — those where oracle O is queried — based on the information accumulated so far, so that $z_t$ is a deterministic function (which one depends on B) of the answers $G_\tau(z_\tau,\xi_\tau)$, $1\le\tau<t$, returned by O in the first $t-1$ calls to the oracle. As a result, $z_t$ is a deterministic function of $\xi^{t-1}$:

$$
z_t=Z_t(\xi^{t-1}).
$$

5.6.2.2 Mirror Descent Algorithm


In the situation of Section 5.6.1 and given an oracle O as described in Section 5.6.2.1, the MD algorithm operating with O is specified by a starting point $z_1\in Z$ and a deterministic sequence of stepsizes $\{\gamma_t>0\}_{t\ge 1}$, and is given by the recurrence

zt+1 = Proxzt (γt Gt (zt , ξt )), t = 1, 2, ... (5.6.7)

As required from an algorithm, the search points zt are deterministic functions of ξ t−1 :

zt = Zt (ξ t−1 ).
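A minimal runnable instance of the recurrence (5.6.7): stochastic minimization of a linear function over the standard simplex with the Entropy proximal setup. The noisy oracle, the stepsize schedule, and the problem data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, T = 10, 2000
c = rng.standard_normal(n)               # minimize f(z) = <c, z> over the simplex

def oracle(z):
    # G(z, xi) = c + noise: unbiased, so g(z) = E{G(z, xi)} = c as in (5.6.6)
    return c + 0.1 * rng.standard_normal(n)

def prox(z, xi):
    # entropy prox-mapping on the simplex (multiplicative update)
    g = np.log(z) - xi
    g -= g.max()
    u = np.exp(g)
    return u / u.sum()

z = np.full(n, 1.0 / n)                  # z_1: the minimizer of the entropy DGF
z_avg, g_sum = np.zeros(n), 0.0
for t in range(1, T + 1):
    step = 0.5 / np.sqrt(t)              # classical O(1/sqrt(t)) stepsizes
    z_avg += step * z
    g_sum += step
    z = prox(z, step * oracle(z))        # the recurrence (5.6.7)
z_avg /= g_sum

residual = c @ z_avg - c.min()           # residual in terms of the objective
```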

The main result on MD algorithm is as follows:


Proposition 5.6.1 Let, in addition to z1 and {γt > 0}t≥1 , a deterministic sequence of weights
λt ≥ 0 be given. Assume that

1. for some real $M\ge 0$ and $\sigma\ge 0$ one has

$$
\forall(z\in Z,\ t\ge 1):\quad\mathbf{E}_\xi\big\{\|G_t(z,\xi)-g_t(z)\|_*^2\big\}\le\sigma^2
\tag{5.6.8}
$$

and

$$
\forall(z\in Z,\ t):\quad\|g_t(z)\|_*\le M.
\tag{5.6.9}
$$

2. the weights $\lambda_t\ge 0$ and the stepsizes $\gamma_t$ satisfy

$$
\lambda_1/\gamma_1\le\lambda_2/\gamma_2\le\lambda_3/\gamma_3\le\ldots
\tag{5.6.10}
$$

Then for every $t\ge 1$ one has

$$
\mathbf{E}_{\xi^t}\left\{\Big[\max_{z\in Z}\sum_{\tau=1}^t\lambda_\tau\langle g_\tau(z_\tau),z_\tau-z\rangle\Big]_+\right\}\le\frac{\lambda_t\Omega^2}{2\gamma_t}+[M^2+\sigma^2]\sum_{\tau=1}^t\lambda_\tau\gamma_\tau+2\sigma\Omega\sqrt{\sum_{\tau=1}^t\lambda_\tau^2},
\tag{5.6.11}
$$

(here, as usual, $[a]_+=\max[a,0]$ is the positive part of a real $a$).

We shall see in Section 5.6.4 that the "esoteric" Proposition 5.6.1 straightforwardly yields all efficiency estimates known to us for all the versions of MD considered so far for deterministic and stochastic convex minimization and convex-concave saddle point problems, and allows us to address much wider families of problems "with convex structure": convex Nash Equilibrium problems, variational inequalities with monotone operators, and convex equilibrium problems.
Proof of Proposition 5.6.1. Let $z\in Z$, $z_\tau=Z_\tau(\xi^{\tau-1})$, and let

$$
\Delta_t(\xi^t)=G_t(z_t,\xi_t)-g_t(z_t),
\tag{5.6.12}
$$

so that

$$
\forall\xi^{t-1}:\quad\mathbf{E}_{\xi_t}\big\{\Delta_t(\xi^t)\big\}=0\ \ \&\ \ \mathbf{E}_{\xi_t}\big\{\|\Delta_t(\xi^t)\|_*^2\big\}\le\sigma^2.
\tag{5.6.13}
$$

$1^0$. We have

$$
\begin{array}{l}
z_{\tau+1}=\mathrm{Prox}_{z_\tau}(\gamma_\tau G_\tau(z_\tau,\xi_\tau))\\
\Rightarrow\ \gamma_\tau\langle G_\tau(z_\tau,\xi_\tau),z_{\tau+1}-z\rangle\le V_{z_\tau}(z)-V_{z_{\tau+1}}(z)-V_{z_\tau}(z_{\tau+1})\quad\text{[by (5.6.5)]}\\
\Rightarrow\ \gamma_\tau\langle G_\tau(z_\tau,\xi_\tau),z_\tau-z\rangle\le V_{z_\tau}(z)-V_{z_{\tau+1}}(z)+\big[\gamma_\tau\|G_\tau(z_\tau,\xi_\tau)\|_*\|z_\tau-z_{\tau+1}\|-\underbrace{V_{z_\tau}(z_{\tau+1})}_{\ge\frac12\|z_\tau-z_{\tau+1}\|^2}\big]\\
\phantom{\Rightarrow\ \gamma_\tau\langle G_\tau(z_\tau,\xi_\tau),z_\tau-z\rangle}\le V_{z_\tau}(z)-V_{z_{\tau+1}}(z)+\frac12\gamma_\tau^2\|G_\tau(z_\tau,\xi_\tau)\|_*^2\\
\Rightarrow\ \lambda_\tau\langle G_\tau(z_\tau,\xi_\tau),z_\tau-z\rangle\le\frac{\lambda_\tau}{\gamma_\tau}\big[V_{z_\tau}(z)-V_{z_{\tau+1}}(z)\big]+\frac12\lambda_\tau\gamma_\tau\|G_\tau(z_\tau,\xi_\tau)\|_*^2\\
\Rightarrow\ \sum_{\tau=1}^t\lambda_\tau\langle g_\tau(z_\tau),z_\tau-z\rangle\\
\quad\le\sum_{\tau=1}^t\frac{\lambda_\tau}{\gamma_\tau}\big[V_{z_\tau}(z)-V_{z_{\tau+1}}(z)\big]+\sum_{\tau=1}^t\frac12\lambda_\tau\gamma_\tau\|G_\tau(z_\tau,\xi_\tau)\|_*^2+\sum_{\tau=1}^t\lambda_\tau\langle\Delta_\tau(\xi^\tau),z-z_\tau\rangle\\
\quad=\underbrace{\frac{\lambda_1}{\gamma_1}}_{\ge 0}V_{z_1}(z)+\underbrace{\Big[\frac{\lambda_2}{\gamma_2}-\frac{\lambda_1}{\gamma_1}\Big]}_{\ge 0}V_{z_2}(z)+\ldots+\Big[\frac{\lambda_t}{\gamma_t}-\frac{\lambda_{t-1}}{\gamma_{t-1}}\Big]V_{z_t}(z)-\frac{\lambda_t}{\gamma_t}V_{z_{t+1}}(z)\\
\qquad+\underbrace{\frac12\sum_{\tau=1}^t\lambda_\tau\gamma_\tau\|G_\tau(z_\tau,\xi_\tau)\|_*^2}_{=:A_t(\xi^t)}+\underbrace{\sum_{\tau=1}^t\lambda_\tau\langle\Delta_\tau(\xi^\tau),z-z_\tau\rangle}_{=:B_t(\xi^t,z)}\\
\quad\le\frac{\lambda_t}{2\gamma_t}\Omega^2+A_t(\xi^t)+B_t(\xi^t,z)\quad\text{[due to }0\le V_{z_\tau}(z)\le\frac12\Omega^2\text{]}
\end{array}
\tag{5.6.14}
$$

Taking into account that $A_t(\xi^t)\ge 0$, we arrive at

$$
\begin{array}{l}
\Big[\max_{z\in Z}\sum_{\tau=1}^t\lambda_\tau\langle g_\tau(z_\tau),z_\tau-z\rangle\Big]_+\le\frac{\lambda_t}{2\gamma_t}\Omega^2+A_t(\xi^t)+\Big[\max_{z\in Z}B_t(\xi^t,z)\Big]_+,\\[2mm]
A_t(\xi^t)=\frac12\sum_{\tau=1}^t\lambda_\tau\gamma_\tau\|G_\tau(z_\tau,\xi_\tau)\|_*^2,\\[2mm]
B_t(\xi^t,z)=\sum_{\tau=1}^t\lambda_\tau\langle\Delta_\tau(\xi^\tau),z-z_\tau\rangle=\underbrace{\sum_{\tau=1}^t\lambda_\tau\langle\Delta_\tau(\xi^\tau),z-z_1\rangle}_{=:C_t(\xi^t,z)}+\underbrace{\sum_{\tau=1}^t\lambda_\tau\langle\Delta_\tau(\xi^\tau),z_1-z_\tau\rangle}_{=:D_t(\xi^t)}
\end{array}
\tag{5.6.15}
$$

$2^0$. We have

$$
\mathbf{E}_{\xi^t}\big\{A_t(\xi^t)\big\}=\frac12\sum_{\tau=1}^t\lambda_\tau\gamma_\tau\,\mathbf{E}_{\xi^{\tau-1}}\Big\{\mathbf{E}_{\xi_\tau}\big\{\|G_\tau(Z_\tau(\xi^{\tau-1}),\xi_\tau)\|_*^2\big\}\Big\}\le[M^2+\sigma^2]\sum_{\tau=1}^t\lambda_\tau\gamma_\tau
\tag{5.6.16}
$$

(see (5.6.8), (5.6.9)).

$3^0$. Let us make the following observation (cf. Lemma 5.3.2).

Lemma 5.6.1 For $1\le\tau\le t$, let $\delta_\tau(\xi^\tau)$ be deterministic functions taking values in $E$ and such that

$$
\mathbf{E}_{\xi_\tau}\{\delta_\tau(\xi^\tau)\}\equiv 0\ \ \&\ \ \mathbf{E}_{\xi_\tau}\big\{\|\delta_\tau(\xi^\tau)\|_*^2\big\}\le\sigma^2,\qquad\forall\xi^{\tau-1}.
\tag{5.6.17}
$$

Then for every deterministic sequence of reals $\mu_\tau$, $\tau\le t$, and every $z_1\in Z$ we have

$$
\mathbf{E}\left\{\Big[\max_{z\in Z}\sum_{\tau=1}^t\mu_\tau\langle\delta_\tau(\xi^\tau),z-z_1\rangle\Big]_+\right\}\le\sigma\Omega\sqrt{\sum_{\tau=1}^t\mu_\tau^2}.
\tag{5.6.18}
$$

Proof. First, we can omit $[\cdot]_+$ in the left hand side, since the maximum over $z\in Z$ in the brackets clearly is nonnegative (look what happens when $z=z_1$). Now, given $\rho>0$, consider the auxiliary recurrence

$$
v_1=z_1;\qquad v_{\tau+1}=\mathrm{Prox}_{v_\tau}(-\rho\mu_\tau\delta_\tau(\xi^\tau)).
\tag{5.6.19}
$$

We see that $v_\tau$ is a deterministic function of $\xi^{\tau-1}$ taking values in $Z$. Similarly to (5.6.14), for $z\in Z$ we have

$$
\begin{array}{l}
v_{\tau+1}=\mathrm{Prox}_{v_\tau}(-\rho\mu_\tau\delta_\tau(\xi^\tau))\\
\Rightarrow\ \rho\mu_\tau\langle-\delta_\tau(\xi^\tau),v_{\tau+1}-z\rangle\le V_{v_\tau}(z)-V_{v_{\tau+1}}(z)-V_{v_\tau}(v_{\tau+1})\quad\text{[by (5.6.5)]}\\
\Rightarrow\ \rho\mu_\tau\langle-\delta_\tau(\xi^\tau),v_\tau-z\rangle\le V_{v_\tau}(z)-V_{v_{\tau+1}}(z)+\big[\rho\mu_\tau\|\delta_\tau(\xi^\tau)\|_*\|v_\tau-v_{\tau+1}\|-\underbrace{V_{v_\tau}(v_{\tau+1})}_{\ge\frac12\|v_\tau-v_{\tau+1}\|^2}\big]\\
\phantom{\Rightarrow\ \rho\mu_\tau\langle-\delta_\tau(\xi^\tau),v_\tau-z\rangle}\le V_{v_\tau}(z)-V_{v_{\tau+1}}(z)+\frac12\rho^2\mu_\tau^2\|\delta_\tau(\xi^\tau)\|_*^2\\
\Rightarrow\ \sum_{\tau=1}^t\rho\mu_\tau\langle-\delta_\tau(\xi^\tau),v_\tau-z\rangle\le\sum_{\tau=1}^t\big[V_{v_\tau}(z)-V_{v_{\tau+1}}(z)\big]+\sum_{\tau=1}^t\frac12\rho^2\mu_\tau^2\|\delta_\tau(\xi^\tau)\|_*^2\\
\Rightarrow\ \sum_{\tau=1}^t\mu_\tau\langle-\delta_\tau(\xi^\tau),v_\tau-z\rangle\le\frac{\Omega^2}{2\rho}+\frac12\rho\sum_{\tau=1}^t\mu_\tau^2\|\delta_\tau(\xi^\tau)\|_*^2\\
\Rightarrow\ \sum_{\tau=1}^t\mu_\tau\langle\delta_\tau(\xi^\tau),z-z_1\rangle\le\frac{\Omega^2}{2\rho}+\frac12\rho\sum_{\tau=1}^t\mu_\tau^2\|\delta_\tau(\xi^\tau)\|_*^2+\sum_{\tau=1}^t\mu_\tau\langle\delta_\tau(\xi^\tau),v_\tau-z_1\rangle\\
\Rightarrow\ \max_{z\in Z}\sum_{\tau=1}^t\mu_\tau\langle\delta_\tau(\xi^\tau),z-z_1\rangle\le\frac{\Omega^2}{2\rho}+\frac12\rho\sum_{\tau=1}^t\mu_\tau^2\|\delta_\tau(\xi^\tau)\|_*^2+\sum_{\tau=1}^t\mu_\tau\langle\delta_\tau(\xi^\tau),v_\tau-z_1\rangle
\end{array}
$$

Taking expectation over $\xi^t$ of both sides in the concluding inequality and recalling that the conditional, $\xi^{\tau-1}$ given, expectation of $\delta_\tau(\xi^\tau)$ is zero, and the similar expectation of $\|\delta_\tau(\xi^\tau)\|_*^2$ is $\le\sigma^2$, we arrive at

$$
\mathbf{E}_{\xi^t}\left\{\Big[\max_{z\in Z}\sum_{\tau=1}^t\mu_\tau\langle\delta_\tau(\xi^\tau),z-z_1\rangle\Big]_+\right\}=\mathbf{E}_{\xi^t}\left\{\max_{z\in Z}\sum_{\tau=1}^t\mu_\tau\langle\delta_\tau(\xi^\tau),z-z_1\rangle\right\}\le\frac{\Omega^2}{2\rho}+\frac12\rho\,\sigma^2\sum_{\tau=1}^t\mu_\tau^2.
$$

This inequality holds true for every $\rho>0$; optimizing in $\rho$, we arrive at (5.6.18). □
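For completeness, the optimization in $\rho$ here is elementary (assuming the nontrivial case $\sigma\sum_\tau\mu_\tau^2>0$):

```latex
\min_{\rho>0}\left[\frac{\Omega^2}{2\rho}+\frac{\rho}{2}\,\sigma^2\sum_{\tau=1}^t\mu_\tau^2\right]
=\sigma\Omega\sqrt{\sum_{\tau=1}^t\mu_\tau^2},
\qquad\text{attained at }\
\rho=\frac{\Omega}{\sigma\sqrt{\sum_{\tau=1}^t\mu_\tau^2}},
```

by the arithmetic-geometric mean inequality $\frac{a}{\rho}+b\rho\ge 2\sqrt{ab}$, which is exactly the right hand side of (5.6.18).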

$4^0$. By (5.6.15) we have

$$
\begin{array}{rcl}
\Big[\max_{z\in Z}\sum_{\tau=1}^t\lambda_\tau\langle g_\tau(z_\tau),z_\tau-z\rangle\Big]_+&\le&\frac{\lambda_t}{2\gamma_t}\Omega^2+A_t(\xi^t)+\Big[\max_{z\in Z}\big[C_t(\xi^t,z)+D_t(\xi^t)\big]\Big]_+\\
&\le&\frac{\lambda_t}{2\gamma_t}\Omega^2+A_t(\xi^t)+\Big[\max_{z\in Z}C_t(\xi^t,z)\Big]_++|D_t(\xi^t)|,
\end{array}
$$

with

$$
A_t(\xi^t)=\frac12\sum_{\tau=1}^t\lambda_\tau\gamma_\tau\|G_\tau(z_\tau,\xi_\tau)\|_*^2,\quad C_t(\xi^t,z)=\sum_{\tau=1}^t\lambda_\tau\langle\Delta_\tau(\xi^\tau),z-z_1\rangle,\quad D_t(\xi^t)=\sum_{\tau=1}^t\lambda_\tau\langle\Delta_\tau(\xi^\tau),z_1-z_\tau\rangle.
$$

Taking expectations of both sides of the resulting inequality, applying Lemma 5.6.1 with $\delta_\tau=\Delta_\tau$, $\mu_\tau=\lambda_\tau$, and taking into account (5.6.16), we get

$$
\mathbf{E}_{\xi^t}\left\{\Big[\max_{z\in Z}\sum_{\tau=1}^t\lambda_\tau\langle g_\tau(z_\tau),z_\tau-z\rangle\Big]_+\right\}\le\frac{\lambda_t\Omega^2}{2\gamma_t}+[M^2+\sigma^2]\sum_{\tau=1}^t\lambda_\tau\gamma_\tau+\sigma\Omega\sqrt{\sum_{\tau=1}^t\lambda_\tau^2}+R_t,
\tag{5.6.20}
$$

where

$$
R_t=\mathbf{E}_{\xi^t}\big\{|D_t(\xi^t)|\big\}\le\sqrt{\mathbf{E}_{\xi^t}\big\{D_t^2(\xi^t)\big\}}.
$$


We have

$$
D_t(\xi^t)=\sum_{\tau=1}^t\underbrace{\lambda_\tau\langle\Delta_\tau(\xi^\tau),z_1-z_\tau\rangle}_{r_\tau(\xi^\tau)}.
$$

Observing that $z_\tau$ depends on $\xi^{\tau-1}$ only and that the expectation of $\Delta_\tau(\xi^\tau)$ w.r.t. $\xi_\tau$ is zero identically in $\xi^{\tau-1}$, we have

$$
\mathbf{E}_{\xi^t}\big\{D_t^2(\xi^t)\big\}=\sum_{\tau=1}^t\lambda_\tau^2\,\mathbf{E}_{\xi^\tau}\big\{[\langle\Delta_\tau(\xi^\tau),z_1-z_\tau\rangle]^2\big\}\le\sum_{\tau=1}^t\lambda_\tau^2\,\mathbf{E}_{\xi^\tau}\big\{\|\Delta_\tau(\xi^\tau)\|_*^2\|z_1-z_\tau\|^2\big\}\le\sigma^2\Omega^2\sum_{\tau=1}^t\lambda_\tau^2
$$

(recall that the $\|\cdot\|$-diameter of $Z$ is $\le\Omega$ and $\mathbf{E}_{\xi_\tau}\big\{\|\Delta_\tau(\xi^\tau)\|_*^2\big\}\le\sigma^2$). Thus,

$$
R_t\le\sigma\Omega\sqrt{\sum_{\tau=1}^t\lambda_\tau^2},
\tag{5.6.21}
$$

and (5.6.20) implies that

$$
\mathbf{E}_{\xi^t}\left\{\Big[\max_{z\in Z}\sum_{\tau=1}^t\lambda_\tau\langle g_\tau(z_\tau),z_\tau-z\rangle\Big]_+\right\}\le\frac{\lambda_t\Omega^2}{2\gamma_t}+[M^2+\sigma^2]\sum_{\tau=1}^t\lambda_\tau\gamma_\tau+2\sigma\Omega\sqrt{\sum_{\tau=1}^t\lambda_\tau^2},
$$

which is nothing but (5.6.11). □



5.6.3 Mirror Prox algorithm


So far, we are acquainted with the Mirror Prox algorithm for deterministic convex-concave saddle point problems (Section 5.5.2). This algorithm, same as Mirror Descent, admits a version, which we are about to describe, applicable to all problems with convex structure to be defined in Section 5.6.4. In the situation of Section 5.6.1 and given an oracle O as described in Section 5.6.2.1, the MP algorithm operating with O is specified by a starting point $z_1\in Z$ and a deterministic sequence of stepsizes $\{\gamma_t>0\}_{t\ge 1}$, and is given by the recurrence${}^{17}$

$$
\begin{array}{l}
v_1:=z_1\in Z,\\
w_\tau=\mathrm{Prox}_{v_\tau}(\gamma_\tau G_{2\tau-1}(v_\tau,\xi_{2\tau-1})),\\
v_{\tau+1}=\mathrm{Prox}_{v_\tau}(\gamma_\tau G_{2\tau}(w_\tau,\xi_{2\tau})).
\end{array}
\tag{5.6.22}
$$

The main result on the algorithm is as follows:

Proposition 5.6.2 Let, in addition to z1 and {γt > 0}t≥1 , a deterministic sequence of weights
λt ≥ 0 be given. Assume that

1. for some nonnegative $M$, $L$, $\sigma$ one has

$$
\begin{array}{l}
\forall(\tau\ge 1,\ z,z'\in Z):\ \|g_{2\tau}(z)-g_{2\tau-1}(z')\|_*\le M+L\|z-z'\|,\\
\forall(t\ge 1,\ z\in Z):\ \mathbf{E}_\xi\big\{\|G_t(z,\xi)-g_t(z)\|_*^2\big\}\le\sigma^2
\end{array}
\tag{5.6.23}
$$

2. the stepsizes $\gamma_t$ satisfy

$$
\forall\tau\ge 1:\ 0<\gamma_\tau\le\frac{1}{2L},
\tag{5.6.24}
$$

3. weights $\lambda_t$ and stepsizes $\gamma_t$ satisfy

$$
\lambda_1/\gamma_1\le\lambda_2/\gamma_2\le\lambda_3/\gamma_3\le\ldots
\tag{5.6.25}
$$

Then for every $t=1,2,\ldots$ one has

$$
\mathbf{E}_{\xi^{2t}}\left\{\Big[\max_{z\in Z}\sum_{\tau=1}^t\lambda_\tau\langle g_{2\tau}(w_\tau),w_\tau-z\rangle\Big]_+\right\}\le\frac{\lambda_t}{2\gamma_t}\Omega^2+2[2\sigma^2+M^2]\sum_{\tau=1}^t\gamma_\tau\lambda_\tau+2\sigma\Omega\sqrt{\sum_{\tau=1}^t\lambda_\tau^2}
\tag{5.6.26}
$$

Proof. Note that $v_\tau$, $w_\tau$ are deterministic functions of $\xi^{2\tau-2}$, $\xi^{2\tau-1}$, respectively:

$$
v_\tau=V_\tau(\xi^{2\tau-2}),\qquad w_\tau=W_\tau(\xi^{2\tau-1}).
$$
${}^{17}$To rewrite (5.6.22) equivalently to fit our general description of an algorithm operating with oracle O, we should write

$$
z_t=\left\{\begin{array}{ll}v_\tau,&t=2\tau-1\text{ is odd}\\w_\tau,&t=2\tau\text{ is even}\end{array}\right.,\qquad
\begin{array}{l}z_{2\tau}=\mathrm{Prox}_{z_{2\tau-1}}(\gamma_\tau G_{2\tau-1}(z_{2\tau-1},\xi_{2\tau-1})),\\z_{2\tau+1}=\mathrm{Prox}_{z_{2\tau-1}}(\gamma_\tau G_{2\tau}(z_{2\tau},\xi_{2\tau})),\end{array}\qquad\tau=1,2,\ldots
$$

$1^0$. Let us make the following observation (cf. item (i) of Theorem 5.5.1):

Lemma 5.6.2 For every $\tau\ge 1$, every $z\in Z$, and all $\xi^{2\tau}$ one has

$$
\begin{array}{rcl}
\gamma_\tau\langle G_{2\tau}(w_\tau,\xi_{2\tau}),w_\tau-z\rangle&\le&V_{v_\tau}(z)-V_{v_{\tau+1}}(z)+\underbrace{[\gamma_\tau\langle G_{2\tau}(w_\tau,\xi_{2\tau}),w_\tau-v_{\tau+1}\rangle-V_{v_\tau}(v_{\tau+1})]}_{=:\delta_\tau},\qquad(a)\\
\delta_\tau&\le&\gamma_\tau\langle G_{2\tau}(w_\tau,\xi_{2\tau})-G_{2\tau-1}(v_\tau,\xi_{2\tau-1}),w_\tau-v_{\tau+1}\rangle-V_{v_\tau}(w_\tau)-V_{w_\tau}(v_{\tau+1})\qquad(b)\\
&\le&\frac{\gamma_\tau^2}{2}\|G_{2\tau}(w_\tau,\xi_{2\tau})-G_{2\tau-1}(v_\tau,\xi_{2\tau-1})\|_*^2-\frac12\|w_\tau-v_\tau\|^2\qquad(c)
\end{array}
\tag{5.6.27}
$$
Proof is identical to the one of item (i) of Theorem 5.5.1. Let us fix $\tau\ge 1$, $z\in Z$, $\xi^{2\tau}$ and set

$$
v=v_\tau=V_\tau(\xi^{2\tau-2}),\quad\eta=\gamma_\tau G_{2\tau-1}(v_\tau,\xi_{2\tau-1}),\quad w=w_\tau=W_\tau(\xi^{2\tau-1}),\quad\zeta=\gamma_\tau G_{2\tau}(w_\tau,\xi_{2\tau}),\quad v_+=v_{\tau+1}=V_{\tau+1}(\xi^{2\tau}),
$$

so that

$$
w=\mathrm{Prox}_v(\eta)\ \ \&\ \ v_+=\mathrm{Prox}_v(\zeta),
$$

implying, by the definition of the prox-mapping, that

$$
\begin{array}{ll}
(a)&\langle\eta-\omega'(v)+\omega'(w),z-w\rangle\ge 0\quad\forall z\in Z,\\
(b)&\langle\zeta-\omega'(v)+\omega'(v_+),z-v_+\rangle\ge 0\quad\forall z\in Z.
\end{array}
$$

We have

$$
\begin{array}{rcl}
\langle\zeta,w-z\rangle&=&\langle\zeta,v_+-z\rangle+\langle\zeta,w-v_+\rangle\\
&\le&\langle\omega'(v)-\omega'(v_+),v_+-z\rangle+\langle\zeta,w-v_+\rangle\quad\text{[by (b)]}\\
&=&[\omega(z)-\omega(v)-\langle\omega'(v),z-v\rangle]-[\omega(z)-\omega(v_+)-\langle\omega'(v_+),z-v_+\rangle]\\
&&+\,[\omega(v)-\omega(v_+)+\langle\omega'(v),v_+-v\rangle]+\langle\zeta,w-v_+\rangle\\
&=&V_v(z)-V_{v_+}(z)+[\langle\zeta,w-v_+\rangle-V_v(v_+)]\quad\text{[we have proved (5.6.27.a)]}
\end{array}
$$

$$
\begin{array}{rcl}
\delta&:=&\langle\zeta,w-v_+\rangle-V_v(v_+)\ =\ \langle\zeta-\eta,w-v_+\rangle-V_v(v_+)+\langle\eta,w-v_+\rangle\\
&\le&\langle\zeta-\eta,w-v_+\rangle-V_v(v_+)+\langle\omega'(v)-\omega'(w),w-v_+\rangle\quad\text{[by (a) with }z=v_+\text{]}\\
&=&\langle\zeta-\eta,w-v_+\rangle+[-\omega(w)+\omega(v)+\langle\omega'(v),w-v\rangle]+[-\omega(v_+)+\omega(w)+\langle\omega'(w),v_+-w\rangle]\\
&=&\langle\zeta-\eta,w-v_+\rangle-V_v(w)-V_w(v_+)\quad\text{[we have proved (5.6.27.b)]}\\
&\le&\langle\zeta-\eta,w-v_+\rangle-\frac12\big[\|w-v\|^2+\|w-v_+\|^2\big]\quad\text{[by strong convexity of }\omega\text{]}\\
&\le&\|\zeta-\eta\|_*\|w-v_+\|-\frac12\big[\|w-v\|^2+\|w-v_+\|^2\big]\\
&\le&\frac12\|\zeta-\eta\|_*^2-\frac12\|w-v\|^2\quad\text{[since }\|\zeta-\eta\|_*\|w-v_+\|-\frac12\|w-v_+\|^2\le\frac12\|\zeta-\eta\|_*^2\text{]},
\end{array}
$$

and we have arrived at (5.6.27.c). □

$2^0$. Let us use the shorthand notation

$$
\begin{array}{l}
v_\tau=V_\tau(\xi^{2\tau-2}),\quad w_\tau=W_\tau(\xi^{2\tau-1}),\\
g_{2\tau-1}=g_{2\tau-1}(v_\tau),\quad G_{2\tau-1}=G_{2\tau-1}(v_\tau,\xi_{2\tau-1}),\quad g_{2\tau}=g_{2\tau}(w_\tau),\quad G_{2\tau}=G_{2\tau}(w_\tau,\xi_{2\tau}),\\
\Delta_{2\tau-1}=\Delta_{2\tau-1}(\xi^{2\tau-1}):=G_{2\tau-1}-g_{2\tau-1},\quad\Delta_{2\tau}=\Delta_{2\tau}(\xi^{2\tau}):=G_{2\tau}-g_{2\tau},
\end{array}
\tag{5.6.28}
$$

so that

$$
\forall(t\ge 1,\ \xi^{t-1}):\quad\mathbf{E}_{\xi_t}\big\{\Delta_t(\xi^t)\big\}=0\ \ \&\ \ \mathbf{E}_{\xi_t}\big\{\|\Delta_t(\xi^t)\|_*^2\big\}\le\sigma^2.
\tag{5.6.29}
$$

Let $z\in Z$. We have

$$
\begin{array}{l}
\gamma_\tau\langle g_{2\tau},w_\tau-z\rangle\\
\quad\le V_{v_\tau}(z)-V_{v_{\tau+1}}(z)+\gamma_\tau\langle\Delta_{2\tau},z-w_\tau\rangle+\frac{\gamma_\tau^2}{2}\|G_{2\tau}-G_{2\tau-1}\|_*^2-\frac12\|w_\tau-v_\tau\|^2\quad\text{[by (5.6.27)]}\\
\quad\le V_{v_\tau}(z)-V_{v_{\tau+1}}(z)+\gamma_\tau\langle\Delta_{2\tau},z-w_\tau\rangle+\gamma_\tau^2\big[2\|\Delta_{2\tau-1}\|_*^2+2\|\Delta_{2\tau}\|_*^2+\|g_{2\tau-1}-g_{2\tau}\|_*^2\big]-\frac12\|w_\tau-v_\tau\|^2\\
\quad\le V_{v_\tau}(z)-V_{v_{\tau+1}}(z)+\gamma_\tau\langle\Delta_{2\tau},z-w_\tau\rangle+2\gamma_\tau^2\big[\|\Delta_{2\tau-1}\|_*^2+\|\Delta_{2\tau}\|_*^2\big]+\gamma_\tau^2[M+L\|w_\tau-v_\tau\|]^2-\frac12\|w_\tau-v_\tau\|^2\quad\text{[by (5.6.23)]}\\
\quad\le V_{v_\tau}(z)-V_{v_{\tau+1}}(z)+\gamma_\tau\langle\Delta_{2\tau},z-w_\tau\rangle+2\gamma_\tau^2\big[\|\Delta_{2\tau-1}\|_*^2+\|\Delta_{2\tau}\|_*^2\big]+2\gamma_\tau^2M^2+2\gamma_\tau^2L^2\|w_\tau-v_\tau\|^2-\frac12\|w_\tau-v_\tau\|^2\\
\quad\le V_{v_\tau}(z)-V_{v_{\tau+1}}(z)+\gamma_\tau\langle\Delta_{2\tau},z-w_\tau\rangle+2\gamma_\tau^2\big[\|\Delta_{2\tau-1}\|_*^2+\|\Delta_{2\tau}\|_*^2\big]+2\gamma_\tau^2M^2\quad\text{[by (5.6.24)]}\\
\Rightarrow\ \lambda_\tau\langle g_{2\tau},w_\tau-z\rangle\le\frac{\lambda_\tau}{\gamma_\tau}V_{v_\tau}(z)-\frac{\lambda_\tau}{\gamma_\tau}V_{v_{\tau+1}}(z)+\lambda_\tau\langle\Delta_{2\tau},z-w_\tau\rangle+2\gamma_\tau\lambda_\tau\big[\|\Delta_{2\tau-1}\|_*^2+\|\Delta_{2\tau}\|_*^2+M^2\big]\\
\Rightarrow\ \sum_{\tau=1}^t\lambda_\tau\langle g_{2\tau},w_\tau-z\rangle\le\frac{\lambda_1}{\gamma_1}V_{v_1}(z)+\Big[\frac{\lambda_2}{\gamma_2}-\frac{\lambda_1}{\gamma_1}\Big]V_{v_2}(z)+\ldots+\Big[\frac{\lambda_t}{\gamma_t}-\frac{\lambda_{t-1}}{\gamma_{t-1}}\Big]V_{v_t}(z)-\frac{\lambda_t}{\gamma_t}V_{v_{t+1}}(z)\\
\qquad\qquad+\sum_{\tau=1}^t\lambda_\tau\langle\Delta_{2\tau},z-w_\tau\rangle+2\sum_{\tau=1}^t\gamma_\tau\lambda_\tau\big[\|\Delta_{2\tau-1}\|_*^2+\|\Delta_{2\tau}\|_*^2+M^2\big]\\
\Rightarrow\ \sum_{\tau=1}^t\lambda_\tau\langle g_{2\tau},w_\tau-z\rangle\le\frac{\lambda_t}{2\gamma_t}\Omega^2+\underbrace{2\sum_{\tau=1}^t\gamma_\tau\lambda_\tau\big[\|\Delta_{2\tau-1}\|_*^2+\|\Delta_{2\tau}\|_*^2+M^2\big]}_{=:A_t(\xi^{2t})}+\sum_{\tau=1}^t\lambda_\tau\langle\Delta_{2\tau},z-w_\tau\rangle\\
\qquad\qquad\text{[due to (5.6.25) and }V_{v_\tau}(z)\le\frac12\Omega^2\text{]}
\end{array}
$$

We conclude that

$$
\begin{array}{l}
\Big[\max_{z\in Z}\sum_{\tau=1}^t\lambda_\tau\langle g_{2\tau},w_\tau-z\rangle\Big]_+\le\frac{\lambda_t}{2\gamma_t}\Omega^2+A_t(\xi^{2t})+\underbrace{\Big[\max_{z\in Z}\sum_{\tau=1}^t\lambda_\tau\langle\Delta_{2\tau},z-v_1\rangle\Big]_+}_{=:C_t(\xi^{2t})}+R_t,\\[2mm]
A_t(\xi^{2t})=2\sum_{\tau=1}^t\gamma_\tau\lambda_\tau\big[\|\Delta_{2\tau-1}\|_*^2+\|\Delta_{2\tau}\|_*^2+M^2\big],\qquad R_t=\Big|\sum_{\tau=1}^t\lambda_\tau\langle\Delta_{2\tau},v_1-w_\tau\rangle\Big|.
\end{array}
\tag{5.6.30}
$$

Similarly to items $2^0$–$4^0$ of the proof of Proposition 5.6.1,

$$
\begin{array}{ll}
\mathbf{E}_{\xi^{2t}}\big\{A_t(\xi^{2t})\big\}\le 2[2\sigma^2+M^2]\sum_{\tau=1}^t\gamma_\tau\lambda_\tau&\text{[by (5.6.29)]}\\
\mathbf{E}_{\xi^{2t}}\big\{C_t(\xi^{2t})\big\}\le\sigma\Omega\sqrt{\sum_{\tau=1}^t\lambda_\tau^2}&\text{[by (5.6.29) and Lemma 5.6.1]}\\
\mathbf{E}_{\xi^{2t}}\big\{R_t\big\}\le\sigma\Omega\sqrt{\sum_{\tau=1}^t\lambda_\tau^2}&\text{[by the same reasons as for (5.6.21)]}
\end{array}
$$

which combines with (5.6.30) to imply (5.6.26). □

5.6.4 Processing problems with convex structure by Mirror Descent and Mirror Prox algorithms
5.6.4.1 Problems with convex structure
We are about to list what we call "problems with convex structure" along with the associated (in)accuracy measures. To avoid complications, we always assume the domain $Z$ of the problem to be a nonempty convex compact subset of a Euclidean space $E$.

5.6.4.1.A. Convex Minimization. This is the most basic problem of minimizing over $Z$ a Lipschitz continuous convex function $f$. We identify an instance of this problem with the function $f:Z\to\mathbf{R}$ we need to minimize, and associate with this instance the following entities:

• the set Z∗ (f ) = ArgminZ f (z) of minimizers of f on Z,

• accuracy measure ("residual in terms of the objective")

$$
\epsilon_{\mathrm{Min}}[z|f]=f(z)-\min_{u\in Z}f(u)
$$

quantifying the quality of a candidate solution $z\in Z$,

• vector field – a bounded vector-valued function $g^f(z):Z\to E$ such that

$$
g^f(z)\in\partial f(z),\quad z\in Z,
$$

i.e., $g^f(z)$ is a subgradient of $f$ at $z$,

• stationary oracle O as defined in Section 5.6.2.1 which represents f , meaning that

gO (z) ≡ g f (z), z ∈ Z

(from now on, for a stationary oracle O, gO (z) = Eξ {G(z, ξ)} : Z → E, G(·, ·) being the
response of O).

5.6.4.1.B. Convex-concave Saddle Point problem. In this case, Z = X × Y is the direct


product of two convex compact sets, and the problem is to find a saddle point (min in $x\in X$, max in $y\in Y$) of a Lipschitz continuous convex-concave function on $Z$. We identify an instance with a Lipschitz continuous convex-concave function $f(x,y):X\times Y\to\mathbf{R}$ the saddle point of which is sought, and associate with this instance the following entities:
• the set Z∗ (f ) of the saddle points of f on X ×Y (it is nonempty by Sion-Kakutani Theorem
(Theorem D.4.2)),

• accuracy measure ("duality gap")

$$
\epsilon_{\mathrm{SP}}[(x,y)|f]=\overline{f}(x)-\underline{f}(y),\qquad\overline{f}(x)=\max_{y\in Y}f(x,y),\quad\underline{f}(y)=\min_{x\in X}f(x,y)
$$

quantifying the quality of a candidate solution $(x,y)\in Z$. Noting that the saddle points of $f$ are exactly the pairs $(x_*,y_*)$ of optimal solutions to the optimization problems

$$
\begin{array}{ll}
\mathrm{Opt}(P)=\min_{x\in X}\overline{f}(x)&(P)\\
\mathrm{Opt}(D)=\max_{y\in Y}\underline{f}(y)&(D)
\end{array}
$$

with equal optimal values: $\mathrm{Opt}(P)=\mathrm{Opt}(D)$, the duality gap at a candidate saddle point $(x,y)\in X\times Y$ is nothing but the sum of the residuals, in terms of the respective objectives, of $x$ as a candidate solution to $(P)$ and of $y$ as a candidate solution to $(D)$:

$$
\epsilon_{\mathrm{SP}}[(x,y)|f]=[\overline{f}(x)-\mathrm{Opt}(P)]+[\mathrm{Opt}(D)-\underline{f}(y)].
$$

• vector field – a bounded vector-valued function g f (x, y) : Z → E such that

g f (x, y) = [gxf (x, y); gyf (x, y)] ∈ ∂x f (x, y) × ∂y [−f (x, y)], (x, y) ∈ Z,

that is, the x-block in g f is a subgradient of convex function f (·, y) taken at x, and the
y-block in g f is a subgradient of the convex function −f (x, ·) taken at y,

• stationary oracle O as defined in Section 5.6.2.1 which represents f , meaning that

gO (x, y) ≡ g f (x, y), (x, y) ∈ Z.

Remark 5.6.1 Convex Minimization is reducible to the Convex-concave Saddle Point problem: setting $X=Z$ and specifying $Y$ as a singleton, a Lipschitz continuous convex function $f(z)$ on $Z$ becomes a Lipschitz continuous convex-concave function on $X\times Y$, and all entities associated with $f$ as an instance of Convex Minimization can be straightforwardly converted into the entities associated with $f$ as a Convex-concave Saddle Point problem. In particular, the residual in terms of the objective of a candidate solution $z\in Z$ to the instance $f$ of Convex Minimization is exactly the same as the duality gap of $z$ considered as a candidate solution to the instance $f$ of Convex-concave Saddle Point.

The above problems are well known to us, which is not the case for problems to follow.

5.6.4.1.C. Convex Nash Equilibrium. Consider the situation where K players are making
their choices, the choice of the j-th player being a point x_j in a convex compact subset Z_j of
a Euclidean space E_j. The block-vector z = [x_1; ...; x_K] of the players' choices specifies the
losses of the players, the loss of the j-th of them being a given function f_j(z). A Nash
Equilibrium is a vector z∗ = [x∗_1; x∗_2; ...; x∗_K] ∈ Z := Z_1 × ... × Z_K of choices of the
players such that no player can reduce her loss by changing her choice, provided that the other
players stick to their choices, that is,

 x∗_j ∈ Argmin_{x_j∈Z_j} f_j(x∗_1; ...; x∗_{j−1}; x_j; x∗_{j+1}; ...; x∗_K),  j = 1, 2, ..., K.

The Nash Equilibrium problem is to find a Nash Equilibrium, given the domain Z of the problem
(along with its representation Z = Z_1 × ... × Z_K as a direct product of nonempty convex compact
sets Z_j) and the loss functions f_j(z) : Z → R of the players. From now on we denote the j-th
block in a block-vector z ∈ E_1 × ... × E_K by [z]_j, so that [z]_j ∈ E_j and z = [[z]_1; ...; [z]_K],
denote by [z]^j the vector obtained from z by eliminating the j-th block (e.g., with
z = [[z]_1; [z]_2; [z]_3], [z]^2 = [[z]_1; [z]_3]), and refer to the j-th loss function both as
f_j(z) and as f_j([z]_j, [z]^j).
From now on, speaking about Nash Equilibrium, we always assume that for all j, the functions
f_j(z) = f_j([z]_j, [z]^j) : Z → R are continuous in z and uniformly Lipschitz continuous in [z]_j:

 |f_j([z]_j, [z]^j) − f_j([z′]_j, [z]^j)| ≤ L_j‖z − z′‖,  z, z′ ∈ Z  [L_j < ∞, j = 1, ..., K].

A Nash Equilibrium problem is called convex if
• for every j, the function f_j([z]_j, [z]^j) is convex in [z]_j ∈ Z_j and concave in
[z]^j ∈ Z^j = Z_1 × ... × Z_{j−1} × Z_{j+1} × ... × Z_K, and
• the sum Σ_{j=1}^K f_j(z) of the losses is convex on Z.
We shall see in Section 5.6.4.2 that the set of precise solutions to a convex Nash Equilibrium
problem, that is, the set of all corresponding Nash equilibria, is nonempty, convex, and closed.
When speaking about K-player Convex Nash Equilibrium, we assume that the convex com-
pact domain Z of the problem is given as the direct product of K convex compact sets Zj ,
and identify an instance of the problem with the collection f = (f1 , ..., fK ) of players’ loss
functions, with f satisfying the above continuity/Lipschitz continuity and convexity-concavity
assumptions. We associate with such an instance the following entities:

• the set Z∗(f) of the corresponding Nash equilibria:

 Z∗(f) = {z∗ ∈ Z : [z∗]_j ∈ Argmin_{z_j∈Z_j} f_j(z_j, [z∗]^j), j = 1, ..., K};

• accuracy measure ("incentive")

 εNash[z|f] = Σ_{j=1}^K [ f_j([z]_j, [z]^j) − min_{z′_j∈Z_j} f_j(z′_j, [z]^j) ].

The incentive clearly is well defined and nonnegative on Z and is zero at a point z ∈ Z if
and only if z is a Nash equilibrium: z ∈ Z∗(f). The origin of the name is clear: εNash[z|f] is
the total, over the players j, incentive for player j to deviate from her choice [z]_j, provided
that all other players j′ stick to their choices [z]_{j′};

• vector field – a bounded vector-valued function g^f(z) = [g^f_1(z); ...; g^f_K(z)] : Z → E =
E_1 × ... × E_K such that

 g^f_j(z) ∈ ∂_{[z]_j} f_j([z]_j, [z]^j),  j = 1, ..., K,

that is, the j-th block g^f_j(z) in g^f(z) is a subgradient of the convex function f_j(·, [z]^j)
taken at [z]_j;

• stationary oracle O as defined in Section 5.6.2.1 which represents f, meaning that

 g_O(z) ≡ g^f(z),  z ∈ Z.

Remark 5.6.2 Note that the convex-concave saddle point problem is nothing but the "zero sum"
two-player convex Nash Equilibrium problem. Indeed, given a convex-concave Lipschitz continuous
function h(z_1, z_2) : Z := Z_1 × Z_2 → R and setting f_1(z_1, z_2) = h(z_1, z_2), f_2(z_1, z_2) = −h(z_1, z_2),
we, as is immediately seen, get an instance f = (f_1, f_2) of the convex Nash Equilibrium problem;
the entities associated with h as an instance of the Convex-concave Saddle Point problem can be
straightforwardly converted into the entities associated with f as an instance of convex Nash
Equilibrium; in particular, the vector field remains the same, and εSP[[z_1; z_2]|h] = εNash[[z_1; z_2]|f].
Note that the 2-player convex Nash Equilibrium problems f which can be obtained from Convex-
concave Saddle Point ones are exactly those for which f_1 + f_2 ≡ 0.
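To make the incentive concrete, here is an illustrative numerical sketch (the game is made up, not from the text): the zero-sum two-player instance with losses f_1(z_1, z_2) = (z_1 − 1/2)² − (z_2 − 1/2)² and f_2 = −f_1 on [0,1]², whose unique Nash equilibrium is (1/2, 1/2); the per-player minima are approximated on a grid:

```python
# Incentive eps_Nash[z|f] = sum_j [ f_j([z]_j,[z]^j) - min_{z'_j} f_j(z'_j,[z]^j) ]
# for the zero-sum 2-player game f1 = h, f2 = -h, h(z1,z2) = (z1-.5)**2 - (z2-.5)**2.

def h(z1, z2):
    return (z1 - 0.5) ** 2 - (z2 - 0.5) ** 2

def incentive(z1, z2, grid=1001):
    pts = [k / (grid - 1) for k in range(grid)]
    best1 = min(h(u, z2) for u in pts)    # player 1 minimizes f1 = h over [0,1]
    best2 = min(-h(z1, u) for u in pts)   # player 2 minimizes f2 = -h over [0,1]
    return (h(z1, z2) - best1) + (-h(z1, z2) - best2)

print(incentive(0.5, 0.5))   # ~0: (1/2, 1/2) is the Nash equilibrium
print(incentive(0.9, 0.1))   # ~0.32, i.e. (z1-.5)**2 + (z2-.5)**2 in closed form
```

For this instance the incentive equals (z_1 − 1/2)² + (z_2 − 1/2)² exactly, which the grid computation reproduces.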

5.6.4.1.D. Monotone Variational Inequality. A vector field f(z) : Z → E is called monotone if

 ⟨f(z) − f(z′), z − z′⟩ ≥ 0  ∀z, z′ ∈ Z.

The Variational Inequality VI(f, Z) associated with Z and a monotone vector field f is^18

 find z∗ ∈ Z : ⟨f(z), z − z∗⟩ ≥ 0 ∀z ∈ Z  (VI(f, Z))

^18 In the literature, solutions z∗ to VI(f, Z) are called weak solutions, as opposed to strong solutions – points z∗ ∈ Z
such that ⟨f(z∗), z − z∗⟩ ≥ 0 for all z ∈ Z; from monotonicity it immediately follows that strong solutions
automatically are weak ones. The inverse is definitely true when f is continuous. In our context, we will not be
interested in strong solutions, and treat the weak ones as the precise solutions to VI(f, Z).

As we shall see in a while, the vector fields we have associated with already described prob-
lems with convex structure (Convex Minimization, Convex-Concave Saddle point, Convex Nash
Equilibrium) are monotone, and precise solutions to these problems are exactly the same as
the solutions to VI’s associated with these vector fields. This being said, VI’s with monotone
operators arise in situations different from those just listed, and therefore deserve the status
of a problem with convex structure on their own. An instance of a Monotone VI problem on
a (as always, nonempty convex compact) domain Z is a bounded and monotone vector field
f : Z → E, and we associate with such an instance the following entities:

• the set Z∗(f) of the solutions to VI(f, Z):

 Z∗(f) = {z∗ ∈ Z : ⟨f(z), z − z∗⟩ ≥ 0 ∀z ∈ Z};

we will see in a while that Z∗(f) is nonempty, convex, and closed;

• accuracy measure ("dual gap function")

 εVI[z|f] = sup_{y∈Z} ⟨f(y), z − y⟩.

This accuracy measure clearly is well defined and nonnegative on Z, and is zero at z if
and only if z ∈ Z∗(f);

• the vector field associated with f is f itself;

• stationary oracle O as defined in Section 5.6.2.1 which represents f, meaning that

 g_O(z) ≡ f(z),  z ∈ Z.
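The dual gap function is meaningful even for monotone fields that are not gradients. An illustrative sketch (the field and domain are made up for the illustration): the rotation field f(z) = (−z_2, z_1) on the box [−1, 1]² satisfies ⟨f(z) − f(z′), z − z′⟩ = 0, hence is monotone, and since ⟨f(y), y⟩ ≡ 0 its gap reduces to sup_y(−y_2 z_1 + y_1 z_2) = |z_1| + |z_2|, vanishing exactly at the weak solution z∗ = 0:

```python
# Dual gap eps_VI[z|f] = sup_{y in Z} <f(y), z - y> for the monotone rotation
# field f(z) = (-z2, z1) on the box Z = [-1,1]^2, with the sup taken over a grid.

def f(z):
    return (-z[1], z[0])

def gap(z, n=21):
    pts = [-1 + 2 * k / (n - 1) for k in range(n)]
    best = float("-inf")
    for y1 in pts:
        for y2 in pts:
            fy = f((y1, y2))
            best = max(best, fy[0] * (z[0] - y1) + fy[1] * (z[1] - y2))
    return best

print(gap((0.0, 0.0)))    # 0.0: the origin is the (weak) solution of VI(f, Z)
print(gap((0.5, -0.25)))  # 0.75 = |z1| + |z2|
```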

5.6.4.1.E. Convex Equilibrium.^19 An instance of Convex Equilibrium on a (nonempty convex
compact) domain Z in Euclidean space E is a function

 f(y, z) : Z × Z → R

with the following properties:

• f(y, z) is bounded on Z × Z, uniformly Lipschitz continuous in y ∈ Z:

 |f(y, z) − f(y′, z)| ≤ L_f‖y − y′‖ ∀y, y′, z ∈ Z  [L_f < ∞]

and is convex in y ∈ Z;

• one has
 f(y, y) + f(z, z) ≥ f(y, z) + f(z, y) ∀y, z ∈ Z. (5.6.31)

The problem associated with instance f is to find z∗ such that

 z∗ ∈ Z & f(z, z) − f(z∗, z) ≥ 0 ∀z ∈ Z. (5.6.32)


^19 What follows is inspired by the research of A.S. Antipin, see [1–5] and references therein.

Example: Let g be a bounded monotone vector field on Z. Setting

 f(y, z) = ⟨y, g(z)⟩ : Z × Z → R,

we, as is immediately seen, get an instance of the Convex Equilibrium problem; in
particular, for our f one has f(y, y) + f(z, z) − f(y, z) − f(z, y) = ⟨g(y) − g(z), y − z⟩,
so that the validity of (5.6.31) is exactly the monotonicity of g. Further,
for our f, relation (5.6.32) reads

 z∗ ∈ Z & ⟨z − z∗, g(z)⟩ ≥ 0 ∀z ∈ Z,

so that the solutions to our Convex Equilibrium instance are exactly the solutions
to VI(g, Z).
We associate with an instance f of the Convex Equilibrium problem the following entities:
• the set Z∗(f) of the precise solutions:

 Z∗(f) = {z∗ ∈ Z : f(z, z) − f(z∗, z) ≥ 0 ∀z ∈ Z};

we will see in a while that Z∗(f) is nonempty, convex, and closed;

• accuracy measure ("equilibrium gap")

 εEquil[z|f] = sup_{y∈Z} [f(z, y) − f(y, y)].

This accuracy measure clearly is well defined and nonnegative on Z, and is zero at z if
and only if z ∈ Z∗(f);

In Example, εEquil[z|f] = sup_{y∈Z} [⟨z, g(y)⟩ − ⟨y, g(y)⟩] = εVI[z|g].

• vector field g^f : Z → E associated with f – a bounded vector field satisfying the relation

 g^f(z) ∈ ∂_y|_{y=z} f(y, z),

that is, g^f(z) is a subgradient of the convex function f(y, z) of y ∈ Z taken at y = z;

In Example, we can take g^f(·) ≡ g(·).

• stationary oracle O as defined in Section 5.6.2.1 which represents f, meaning that

 g_O(z) ≡ g^f(z),  z ∈ Z.

Remark 5.6.3 The above Example shows that Monotone Variational Inequality is reducible to
Convex Equilibrium.

Remark 5.6.4 The name "Convex Equilibrium" stems from the following observation:
Consider an instance f(y, z) : Z × Z → R of the Convex Equilibrium problem. Then
(i) If x∗ ∈ Z is such that f(x, x∗) ≥ f(x∗, x∗) for all x ∈ Z, then x∗ ∈ Z∗(f),
and nearly vice versa:
(ii) If f is continuous on Z × Z and x∗ ∈ Z∗(f), then f(x, x∗) ≥ f(x∗, x∗) for all x ∈ Z.

Indeed, under the premise of (i), for all y ∈ Z we have f(x∗, y) + f(y, x∗) ≤ f(x∗, x∗) + f(y, y),
whence f(x∗, y) ≤ f(y, y) + [f(x∗, x∗) − f(y, x∗)] ≤ f(y, y) (the bracketed quantity is ≤ 0 by
the premise), so that x∗ ∈ Z∗(f); (i) is proved.
Now let x∗ ∈ Z∗(f) and let f be continuous on Z × Z. Assume, contrary to what
is claimed in (ii), that f(x̄, x∗) < f(x∗, x∗) for some x̄ ∈ Z; in particular, x̄ ≠ x∗. Now
let z_λ = (1 − λ)x∗ + λx̄, 0 ≤ λ ≤ 1; note that z_λ ∈ Z. By continuity of f, the function
φ(λ) = f(z_λ, z_λ) − f(x̄, z_λ) is continuous in λ; at λ = 0, this function is positive, so that we
can find λ̄ ∈ (0, 1) such that φ(λ̄) > 0. The segment ∆ with endpoints x∗ and x̄ is not a singleton
and contains the point z_λ̄ in its relative interior. The convex function f(u, z_λ̄) − f(z_λ̄, z_λ̄) of
u ∈ ∆ is negative when u = x̄ and is zero at the relative interior point u = z_λ̄ of ∆, implying
that it is positive at the endpoint x∗ of ∆: f(x∗, z_λ̄) − f(z_λ̄, z_λ̄) > 0. The latter is impossible
due to x∗ ∈ Z∗(f), and (ii) is proved. □

Note that continuity of f in Remark 5.6.4 is essential: the set Z = [0, 1] taken along with the
function

 f(x, y) = { x,  0 < y ≤ 1;  −x,  y = 0 }

is a legitimate instance of the Convex Equilibrium problem
where x∗ = 0 ∈ Z∗(f), but x∗ is not a minimizer of the function f(·, x∗) on Z.
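This counterexample is easy to check numerically on a grid (an illustrative verification, not part of the argument): x∗ = 0 has nonnegative residuals f(z, z) − f(0, z) for all z, yet f(·, 0) = −x is minimized at x = 1, not at 0.

```python
# The discontinuous instance f(x,y) = x for 0 < y <= 1, -x for y = 0 on Z = [0,1]:
# x* = 0 belongs to Z*(f), yet x* does not minimize f(., x*) = f(., 0) = -x on Z.

def f(x, y):
    return x if y > 0 else -x

pts = [k / 100 for k in range(101)]

# x* = 0 is a precise solution: f(z, z) - f(0, z) = z >= 0 for all z in Z
assert all(f(z, z) - f(0.0, z) >= 0 for z in pts)

# ...but the minimizer of f(., 0) over the grid is 1, not 0
xmin = min(pts, key=lambda x: f(x, 0.0))
print(xmin)  # 1.0
```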

5.6.4.2 Problems with convex structure: basic descriptive results


Basic descriptive results on problems with convex structure as defined above are stated in the
following
Theorem 5.6.1 Let us fix a nonempty convex compact domain Z in Euclidean space E — the
domain of all problems with convex structure to follow. Then
(i) Let f be an instance of Convex Equilibrium problem. The solution set Z∗ (f ) of f is
nonempty, convex and closed.
(ii) Let g be a bounded monotone vector field on Z. The solution set Z∗ (g) of VI(g, Z) is
nonempty, convex and closed.
(iii) The vector fields g^f associated with instances of problems with convex structure on Z
as defined in Sections 5.6.4.1.A–E are monotone, and the sets Z∗(f) of precise solutions of the
instances are exactly the sets of solutions to VI(g^f, Z) (and therefore are nonempty, convex and
closed by (i)).
(iv) For any candidate solution z ∈ Z to a Convex Minimization or Convex-concave Saddle
Point problem f, the inaccuracy of the solution as quantified by the associated accuracy measure
(residual in terms of the objective, resp. duality gap) is ≥ the inaccuracy, as quantified by the
dual gap function, of z as a candidate solution to VI(g^f, Z).
For any candidate solution z ∈ Z to a Convex Equilibrium problem f, the inaccuracy of z as
quantified by the equilibrium gap is ≥ the inaccuracy of z, as quantified by the dual gap function,
as a candidate solution to VI(g^f, Z).
Proof. (i): Consider the function

 f̲(x) = inf_{y∈Z} [f(y, y) − f(x, y)] = −εEquil[x|f];

since f(x, y) is bounded, uniformly Lipschitz continuous and convex in x ∈ Z, f̲ is well
defined, concave and continuous on Z; note that Z∗(f) = {x ∈ Z : f̲(x) ≥ 0}, whence Z∗(f)
is closed and convex. All that remains to prove is that this set is nonempty, that is, that the
maximum over x ∈ Z of the continuous function f̲(x) is ≥ 0. Assuming this is not the case, for
every x ∈ Z there exists y_x ∈ Z such that the continuous function f(y_x, y_x) − f(u, y_x) of u ∈ Z
is negative at u = x, and therefore is negative in a certain neighbourhood U_x of x in Z. Invoking
compactness of Z, we conclude that the resulting covering of Z admits a finite subcovering, that
is, there exist δ > 0 and y_1, ..., y_N ∈ Z such that

 min_i [f(y_i, y_i) − f(x, y_i)] ≤ −δ ∀x ∈ Z.

Consider the function

 F(λ, x) = Σ_i λ_i [f(y_i, y_i) − f(x, y_i)]

and let λ reside in the probabilistic simplex ∆. F is a convex-concave continuous function on
∆ × Z and therefore has a saddle point (λ∗, x∗) on this set, so that

 F(λ∗, x) ≤ F(λ∗, x∗) ≤ F(λ, x∗) ∀λ ∈ ∆, x ∈ Z.

In particular, F(λ∗, x∗) = min_{λ∈∆} Σ_i λ_i [f(y_i, y_i) − f(x∗, y_i)] = min_i [f(y_i, y_i) − f(x∗, y_i)] ≤ −δ. Thus,

 Σ_i λ∗_i [f(y_i, y_i) − f(x, y_i)] = F(λ∗, x) ≤ F(λ∗, x∗) ≤ −δ ∀x ∈ Z.

Now, by (5.6.31) we have f(y_i, y_i) − f(x, y_i) ≥ f(y_i, x) − f(x, x) for all x ∈ Z, whence

 sup_{x∈Z} Σ_i λ∗_i [f(y_i, x) − f(x, x)] ≤ sup_{x∈Z} Σ_i λ∗_i [f(y_i, y_i) − f(x, y_i)] ≤ −δ.

Setting ȳ = Σ_i λ∗_i y_i, by convexity of f(u, v) in u we have Σ_i λ∗_i [f(y_i, x) − f(x, x)] ≥ f(ȳ, x) −
f(x, x), and we get

 sup_{x∈Z} [f(ȳ, x) − f(x, x)] ≤ sup_{x∈Z} Σ_i λ∗_i [f(y_i, x) − f(x, x)] ≤ −δ,

which is impossible, since the function we maximize in x in the left hand side is zero when
x = ȳ. □

(ii): By Remark 5.6.3, for a bounded monotone g(·) : Z → E, the solution set of VI(g, Z)
is exactly the same as the solution set of the Convex Equilibrium problem given by f (y, z) =
hy, g(z)i, so that the set in question is nonempty, convex and closed by (i). 2

(iii): As far as the claims in (iii) are concerned, Remarks 5.6.1, 5.6.2 say that it suffices to
consider the cases of
(iii.a): the Convex Nash Equilibrium problem, and
(iii.b): the Convex Equilibrium problem.
(iii.a): Let f = (f_1, ..., f_K) be an instance of the Convex Nash Equilibrium problem. Let us start
with verifying the monotonicity of g^f. Let x, y ∈ Z, and let x̄ = ½[x + y], ∆ = ½[x − y]. To verify
that ⟨g^f(x) − g^f(y), x − y⟩ ≥ 0 is exactly the same as to verify that ⟨g^f(x̄ + ∆) − g^f(x̄ − ∆), ∆⟩ ≥ 0.
Setting F(z) = Σ_j f_j(z), we have

 ⟨g^f(x̄+∆) − g^f(x̄−∆), ∆⟩
  = Σ_j [⟨g^f_j(x̄+∆), [∆]_j⟩ + ⟨g^f_j(x̄−∆), −[∆]_j⟩]
  ≥ Σ_j [f_j(x̄+∆) − f_j([x̄]_j, [x̄]^j+[∆]^j) + f_j(x̄−∆) − f_j([x̄]_j, [x̄]^j−[∆]^j)]   (a)
  ≥ Σ_j [f_j(x̄+∆) + f_j(x̄−∆) − 2f_j(x̄)]   (b)
  = F(x̄+∆) + F(x̄−∆) − 2F(x̄) ≥ 0,

where (a) is due to convexity of f_j([x]_j, [x]^j) in [x]_j and to g^f_j(x) ∈ ∂_{[x]_j} f_j([x]_j, [x]^j),
(b) is due to concavity of f_j([x]_j, [x]^j) in [x]^j, and the concluding ≥ is due to convexity of F.
Thus, g^f indeed is monotone.
Now let us verify that the solutions of VI(g^f, Z) are exactly the Nash equilibria. In one
direction: let z∗ be a solution to VI(g^f, Z), and let us prove that it is a Nash equilibrium. For
every j and every z_j ∈ Z_j, specifying z ∈ Z by the relations [z]_j = z_j, [z]^j = [z∗]^j, we get

 ⟨g^f_j(z_j, [z∗]^j), z_j − [z∗]_j⟩ = ⟨g^f(z), z − z∗⟩ ≥ 0,

where the concluding ≥ 0 is due to the fact that z∗ solves VI(g^f, Z). Thus, the Lipschitz
continuous convex function φ(z_j) = f_j(z_j, [z∗]^j) of z_j ∈ Z_j at every point z_j ∈ Z_j admits a
subgradient φ′(z_j) = g^f_j(z_j, [z∗]^j) such that ⟨φ′(z_j), z_j − [z∗]_j⟩ ≥ 0, whence, of course, [z∗]_j is
a minimizer of φ over z_j ∈ Z_j.^20 Thus, [z∗]_j minimizes f_j(·, [z∗]^j) over Z_j; since j ≤ K is
arbitrary, we conclude that z∗ is a Nash equilibrium. Vice versa, let z∗ be a Nash equilibrium;
let us prove that then ⟨g^f(z), z − z∗⟩ ≥ 0 for every z ∈ Z. W.l.o.g. it suffices to consider the case
of z ≠ z∗. Note that the monotonicity of g^f was proved independently of how the subgradients
comprising the value of the field at a point are selected in the corresponding subdifferentials.
Since z∗ is a Nash equilibrium, zeros are legitimate subgradients of the functions f_j(·, [z∗]^j) at
the points [z∗]_j, so that replacing the value of g^f at z∗ with zero and keeping the vector field
intact at all other points, in particular at z, we preserve the monotonicity of the field, meaning,
in particular, that ⟨g^f(z), z − z∗⟩ ≥ 0. (iii.a) is proved.
(iii.b): Let f(y, z) be an instance of the Convex Equilibrium problem. Let us first verify that
the vector field g^f(z) is monotone. Indeed, given y, z ∈ Z, by convexity of f(u, v) in u and by
the definition of g^f we have

 f(z, y) ≥ f(y, y) + ⟨g^f(y), z − y⟩,  f(y, z) ≥ f(z, z) + ⟨g^f(z), y − z⟩,

whence f(y, y) + f(z, z) − ⟨g^f(z) − g^f(y), z − y⟩ ≤ f(z, y) + f(y, z) ≤ f(y, y) + f(z, z) by (5.6.31),
implying that ⟨g^f(z) − g^f(y), z − y⟩ ≥ 0, as required. Now let us prove that the solutions to the
Equilibrium problem are exactly the solutions to VI(g^f, Z). By convexity of f(u, v) in u and the
origin of g^f we have ⟨g^f(y), y − z⟩ ≥ f(y, y) − f(z, y); thus, if the latter quantity for some z = z∗ is
nonnegative for all y ∈ Z, that is, if z∗ solves the Equilibrium problem, the quantity ⟨g^f(y), y − z∗⟩
is nonnegative for all y, that is, z∗ solves VI(g^f, Z). Vice versa, let z∗ solve VI(g^f, Z); assume,
contrary to what should be proved, that z∗ does not solve the Equilibrium problem, so
that f(z∗, y) > f(y, y) for some y ∈ Z. Given λ ∈ (0, 1), set z_λ = z∗ + λ(y − z∗). Since f is
continuous in the first argument and f(z∗, y) = f(z_0, y) > f(y, y), we have f(z_λ, y) > f(y, y) for
some fixed λ ∈ (0, 1), whence f(y, z_λ) < f(z_λ, z_λ) due to f(y, y) + f(z_λ, z_λ) ≥ f(y, z_λ) + f(z_λ, y).
Since f(·, z_λ) is convex and f(y, z_λ) < f(z_λ, z_λ), we have ⟨g^f(z_λ), y − z_λ⟩ < 0, and since y − z_λ
is a positive multiple of z_λ − z∗, we get ⟨g^f(z_λ), z_λ − z∗⟩ < 0, which is the desired contradiction
with z∗ being a solution to VI(g^f, Z). The proof of (iii) is complete. □

^20 since for λ ∈ (0, 1) and z_j ∈ Z_j, setting z^λ = [z∗]_j + λ(z_j − [z∗]_j), we have

 φ(z_j) ≥ φ(z^λ) + ⟨φ′(z^λ), z_j − z^λ⟩ ≥ φ(z^λ),

with the concluding inequality given by the fact that ⟨φ′(z^λ), z_j − z^λ⟩ is a nonnegative multiple of ⟨φ′(z^λ), z^λ −
[z∗]_j⟩ ≥ 0, and it remains to look at what happens as λ → +0.

(iv): In view of Remark 5.6.1, it suffices to show that the inaccuracy, as quantified by the duality
gap, of a candidate solution z = (x, y) ∈ Z = X × Y to a convex-concave saddle point problem
f dominates the inaccuracy, as quantified by the dual gap function, of z as a candidate solution
to VI(g^f, Z). Indeed, the vector field associated with f is [g^f_x(x, y); g^f_y(x, y)] ∈ ∂_x f(x, y) ×
∂_y[−f(x, y)], whence

 εVI[z|f] = sup_{(u,v)∈X×Y} [⟨g^f_x(u, v), x − u⟩ + ⟨g^f_y(u, v), y − v⟩]
  ≤ sup_{(u,v)∈X×Y} [f(x, v) − f(u, v) + f(u, v) − f(u, y)] = f̄(x) − f̲(y) = εSP[z|f].

Finally, given a candidate solution z to an instance f of the Convex Equilibrium problem and
recalling what g^f is, we have

 εEquil[z|f] = sup_{y∈Z} [f(z, y) − f(y, y)] ≥ sup_{y∈Z} ⟨g^f(y), z − y⟩ = εVI[z|g^f]. □

5.6.4.3 Problems with convex structure: basic operational results

As stated by Theorem 5.6.1, solving an oracle-represented instance of a problem with convex
structure can be reduced, modulo quantification of inaccuracy, to solving either a Monotone Vari-
ational Inequality or a Convex Equilibrium problem. It turns out that this "unification" captures
well the basic proximal type First Order algorithms of (broadly understood) deterministic and
stochastic Convex Optimization and the algorithms' efficiency estimates. Of the two outlined
"unifying options" – Monotone VI and Convex Equilibrium – we select the second, the reason
being that while the constructions and efficiency estimates for VI and Equilibrium look completely
similar, Equilibrium operates with a "tougher" accuracy measure (Theorem 5.6.1.(iv)) and thus
provides stronger efficiency estimates.
The main (and in hindsight extremely simple) observation underlying the unified constructions
and results on First Order algorithms is given by item (i) of the following theorem:
Theorem 5.6.2 Consider an instance f of the Convex Equilibrium problem on a (nonempty convex
compact) domain Z in Euclidean space E, and let O be a stationary oracle representing the instance,
the response of the oracle being G(z, ξ), so that

 E_ξ{G(z, ξ)} = g^f(z),  z ∈ Z,

where g^f(·) is the vector field associated with instance f.
Let, next, B be an algorithm operating with O, as described in Section 5.6.2.1, and let
{z_t = Z_t(ξ^{t−1})}_{t≥1} be the search trajectory generated by B, so that z_t is the query point
of the t-th call to O. Finally, let a positive integer N be given.
(i) Given a deterministic collection μ = {μ_t ≥ 0, 1 ≤ t ≤ N} of weights with M_N := Σ_{t=1}^N μ_t > 0,
let us set

 Res_N(μ, ξ^N) = M_N^{-1} [max_{z∈Z} Σ_{t=1}^N μ_t ⟨g^f(z_t), z_t − z⟩]_+ ,  (5.6.33)

and specify the approximate solution generated by the algorithm in N calls to O as

 z^N = z^N(ξ^N) := M_N^{-1} Σ_{t=1}^N μ_t z_t.  (5.6.34)

Then

 ∀ξ^N : z^N(ξ^N) ∈ Z & εEquil[z^N(ξ^N)|f] ≤ Res_N(μ, ξ^N).  (5.6.35)

If, in addition, the Convex Equilibrium instance f in question represents equivalently a K-player
Convex Nash Equilibrium problem with loss functions h_j(z), j ≤ K (see Example in Section
5.6.4.1.E), meaning that Z = Z_1 × ... × Z_K and

 f(y, z) = ⟨y, g^f(z)⟩ : Z × Z → R,

where the bounded vector field g^f satisfies

 g^f(z) = [g^f_1(z); ...; g^f_K(z)],  g^f_j(z) ∈ ∂_{[z]_j} h_j([z]_j, [z]^j), j ≤ K,

then, along with (5.6.35), it holds also that

 ∀ξ^N : εNash[z^N(ξ^N)|h] ≤ Res_N(μ, ξ^N).  (5.6.36)

(ii) Let B be the Mirror Descent algorithm from Section 5.6.2.2 operating with the stationary
oracle O in question:

 z_{t+1} = Prox_{z_t}(γ_t G(z_t, ξ_t)),  t = 1, 2, ...

where z_1 ∈ Z and the sequence of stepsizes {γ_t > 0}_{t≥1} are deterministic. Assuming that for
some real M ≥ 0 and σ ≥ 0 and for all z ∈ Z one has

 ‖g^f(z)‖_* ≤ M & E_ξ{‖G(z, ξ) − g^f(z)‖²_*} ≤ σ²,  (5.6.37)

and that the deterministic weights μ = {μ_t ≥ 0, t ≤ N} with M_N = Σ_{t=1}^N μ_t > 0 are such that

 μ_1/γ_1 ≤ μ_2/γ_2 ≤ ... ≤ μ_N/γ_N,

one has

 E_{ξ^N}{Res_N(μ, ξ^N)} ≤ MD_N[γ, μ] := M_N^{-1} [ μ_N Ω²/(2γ_N) + [M² + σ²] Σ_{t=1}^N μ_t γ_t + 2σΩ √(Σ_{t=1}^N μ_t²) ],  (5.6.38)

implying, by (i), that for the approximate solution

 z^N = z^N(ξ^N) = M_N^{-1} Σ_{t=1}^N μ_t z_t

one has

 z^N ∈ Z & E_{ξ^N}{εEquil[z^N(ξ^N)|f]} ≤ MD_N[γ, μ].  (5.6.39)

(iii) Let N be even: N = 2S, and let B be the Mirror Prox algorithm from Section 5.6.3 operating
with the stationary oracle O in question:

 v_1 := z_1 ∈ Z,
 w_τ = z_{2τ} := Prox_{v_τ}(γ_τ G(v_τ, ξ_{2τ−1})),  1 ≤ τ ≤ S,
 v_{τ+1} = z_{2τ+1} := Prox_{v_τ}(γ_τ G(w_τ, ξ_{2τ})),

where z_1 ∈ Z and the sequence of stepsizes {γ_t > 0}_{t≥1} are deterministic. Assume that for some
nonnegative M, L, σ one has

 ∀(z, z′ ∈ Z) : ‖g^f(z) − g^f(z′)‖_* ≤ M + L‖z − z′‖,
 ∀(z ∈ Z) : E_ξ{‖G(z, ξ) − g^f(z)‖²_*} ≤ σ²,  (5.6.40)

and that the stepsizes γ_τ satisfy

 γ_τ ≤ 1/(2L) ∀τ.

Given a deterministic sequence λ = {λ_τ ≥ 0, 1 ≤ τ ≤ S} with L_S = Σ_{τ=1}^S λ_τ > 0 such that

 λ_1/γ_1 ≤ λ_2/γ_2 ≤ ... ≤ λ_S/γ_S,

let us define the weights μ = {μ_t, 1 ≤ t ≤ N} according to

 μ_{2τ−1} = 0, μ_{2τ} = λ_τ, 1 ≤ τ ≤ S,

implying that the approximate solution z^N = z^N(ξ^N) as defined by (5.6.34) is

 z^N = L_S^{-1} Σ_{τ=1}^S λ_τ w_τ.

Then

 E_{ξ^N}{Res_N(μ, ξ^N)} ≤ MP_N[γ, λ] := L_S^{-1} [ λ_S Ω²/(2γ_S) + [M² + 2σ²] Σ_{τ=1}^S γ_τ λ_τ + 2σΩ √(Σ_{τ=1}^S λ_τ²) ],  (5.6.41)

implying, by (i), that for the approximate solution z^N one has

 z^N ∈ Z & E_{ξ^N}{εEquil[z^N(ξ^N)|f]} ≤ MP_N[γ, λ].  (5.6.42)

Finally, when the Convex Equilibrium instance f in question represents equivalently a K-
player Convex Nash Equilibrium problem with loss functions h_j(z), j ≤ K, the equilibrium gap
εEquil[z^N|f] in (5.6.39), (5.6.42) can be replaced with the incentive εNash[z^N|h].

Proof is immediate.
(i): Since ν_t = μ_t/M_N are nonnegative and sum up to 1, and z_t ∈ Z, we have z^N =
Σ_{t=1}^N ν_t z_t ∈ Z by convexity of Z. Next, let us fix ξ^N and set Res_N = Res_N(μ, ξ^N). We have

 Res_N ≥ max_{z∈Z} Σ_{t=1}^N ν_t ⟨g^f(z_t), z_t − z⟩
 ⇒ −Res_N ≤ min_{z∈Z} Σ_{t=1}^N ν_t ⟨g^f(z_t), z − z_t⟩
  ≤ min_{z∈Z} Σ_{t=1}^N ν_t [f(z, z_t) − f(z_t, z_t)]  [since g^f(z) ∈ ∂_y|_{y=z} f(y, z)]
  ≤ inf_{z∈Z} Σ_{t=1}^N ν_t [f(z, z) − f(z_t, z)]  [by (5.6.31)]
  ≤ inf_{z∈Z} [f(z, z) − f(z^N, z)]  [since f(y, z) is convex in y and z^N = Σ_t ν_t z_t]
 ⇒ εEquil[z^N|f] = sup_{z∈Z} [f(z^N, z) − f(z, z)] ≤ Res_N,

as required in (5.6.35). Now, when f comes from a Convex Nash Equilibrium instance h, we have

 Res_N ≥ max_{z∈Z} Σ_{t=1}^N ν_t ⟨g^f(z_t), z_t − z⟩
 ⇒ −Res_N ≤ min_{z∈Z} Σ_{t=1}^N ν_t ⟨g^f(z_t), z − z_t⟩
  = min_{z∈Z} Σ_{t=1}^N ν_t Σ_j ⟨g^f_j(z_t), [z]_j − [z_t]_j⟩
  ≤ min_{z∈Z} Σ_{t=1}^N ν_t Σ_j [h_j([z]_j, [z_t]^j) − h_j(z_t)]  [since g^f_j(y) ∈ ∂_{[y]_j} h_j([y]_j, [y]^j)]
  ≤ min_{z∈Z} Σ_j [h_j([z]_j, [z^N]^j) − h_j(z^N)]  [since h_j([x]_j, [x]^j) is concave in [x]^j
    and the sum Σ_j h_j(u) is convex in u],

that is,

 εNash[z^N|h] := sup_{z∈Z} Σ_j [h_j(z^N) − h_j([z]_j, [z^N]^j)] ≤ Res_N.

(i) is proved.
(ii) and (iii) are readily given by Propositions 5.6.1, 5.6.2 as applied to the stationary oracle
O in question. □

Discussion. The summary of "Summary on MD and MP" suggested by our preceding consid-
erations is as follows (we restrict ourselves to the case of a stationary oracle, thus skipping Online
minimization):
• MD and MP "do not care and do not know" what the problem with convex structure
they are working on is. These algorithms operate with a bounded vector field g : Z → E
represented by a (in general, stochastic) oracle, and all the (N-step versions of the) algorithms
"do care about" is to generate a sequence z_t = Z_t(ξ^{t−1}), t ≤ N, of points where O
is queried, along with a deterministic sequence of weights μ = {μ_t ≥ 0, t ≤ N} with
M_N = Σ_{t=1}^N μ_t > 0, resulting in as small as possible expectation of the N-step residual

 R_N = E_{ξ^N}{ M_N^{-1} [max_{z∈Z} Σ_{t=1}^N μ_t ⟨g(z_t), z_t − z⟩]_+ } = E_{ξ^N}{Res(g, ξ^N)}.

• The importance of the residual in our intended applications stems from the fact that given
an instance f of a problem with convex structure, as defined above, one can associate with
it a bounded vector field g^f such that when specifying an approximate solution to f as

 z^N(ξ^N) = M_N^{-1} Σ_{t=1}^N μ_t z_t,

we get a point from Z such that

 ε[z^N(ξ^N)|f] ≤ Res(g^f, ξ^N),

where ε[z|f] is the (in)accuracy measure associated with f. As a result, when operating
with a stochastic oracle representing g^f, small residual means small expected inaccuracy of
the approximate solution we end up with.^21

^21 Even in the stochastic case, small expectation of the (definitely nonnegative) inaccuracy means that the inaccuracy
is "small with overwhelming probability." A more advanced analysis, which we skip, allows one to get exponential
bounds on the probabilities of "large" values of the inaccuracy, provided that instead of bounding the second moment of
‖G(z, ξ) − g^f(z)‖_*, we impose uniform in z ∈ Z upper bounds on exponential moments, like E{exp{‖G(z, ξ) −
g^f(z)‖²_*/σ²}} ≤ exp{1}.

• Mirror Descent is well suited for processing instances f associated with merely bounded
vector fields g^f. Specifically, in the case of (5.6.37), setting γ_t = Ω/([M + σ]√t) and μ_t ≡ 1, the
upper bound MD_N on the N-step residual (see (5.6.39)), and thus on the (expected)
inaccuracy of the approximate solution z^N, becomes

 E_{ξ^N}{ε[z^N|f]} ≤ O(1) [M + σ]Ω/√N,  (5.6.43)

in full accordance with what we have seen in the preceding parts of this Lecture when
speaking about various versions of MD as applied to deterministic and stochastic convex
minimization and convex-concave saddle points.
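In the Euclidean setup, Prox is just projection, and this stepsize/weight policy takes a few lines. An illustrative deterministic sketch (σ = 0; the test function and the values M = 1, Ω = 1 are made-up choices, not from the text) applying MD to the nonsmooth convex f(z) = |z − 0.3| on Z = [0, 1]:

```python
# Euclidean Mirror Descent (projected subgradient) with mu_t = 1 and
# gamma_t = Omega/(M*sqrt(t)), applied to f(z) = |z - 0.3| on Z = [0, 1].
# M = 1 bounds the subgradients; Omega = 1 is an (illustrative) radius bound.
import math

def proj(z):                      # Prox in the Euclidean setup = projection on Z
    return min(1.0, max(0.0, z))

def subgrad(z):                   # a subgradient of |z - 0.3|
    return 1.0 if z > 0.3 else (-1.0 if z < 0.3 else 0.0)

def mirror_descent(z1=1.0, N=2000, Omega=1.0, M=1.0):
    z, total = z1, 0.0
    for t in range(1, N + 1):
        total += z                # weights mu_t = 1: plain averaging of query points
        z = proj(z - Omega / (M * math.sqrt(t)) * subgrad(z))
    return total / N              # z^N = averaged search points

print(abs(mirror_descent() - 0.3))  # O(1/sqrt(N)) accuracy
```

The averaged iterate, not the last one, is what the residual bound controls.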
• Mirror Prox is well suited for processing instances f associated with vector fields g^f which
are not just bounded, but possess certain (perhaps partial) smoothness. Specifically, in
the case of (5.6.40), setting

 λ_τ = 1, τ ≤ S,  γ_τ = min[ Ω/((M + σ)√τ), 1/(2L) ],

and applying Theorem 5.6.2.(iii), we get

 E_{ξ^N}{ε[z^N|f]} ≤ O(1) [ LΩ²/N + [M + σ]Ω/√N ].  (5.6.44)
In particular, assume that for a given σ ≥ 0 the second relation in (5.6.40) holds true, and
let g^f be the sum of two vector fields, g^f(·) = g_b(·) + g_s(·), with the first – "nonsmooth"
– of variation on Z bounded by some M < ∞:

 ‖g_b(z) − g_b(z′)‖_* ≤ M ∀(z, z′ ∈ Z),

and the second – "smooth" – Lipschitz continuous with constant L:

 ‖g_s(z) − g_s(z′)‖_* ≤ L‖z − z′‖ ∀(z, z′ ∈ Z)  [L < ∞].

In this situation our M, L and σ ensure (5.6.40), and (5.6.44) says what the contributions
of the nonsmooth and the smooth components of g^f to the upper bound on the expected
inaccuracy are: the contribution of the nonsmooth part is the O(1/√N)-term O(1)MΩ/√N, and
the contribution of the smooth part is the O(1/N)-term O(1)LΩ²/N. On top of these
two contributions, there is a toll O(1)σΩ/√N for the random noise in the oracle. We cannot
say in advance which is the leading term in our inaccuracy bound – it depends on the interplay
between M, L, σ and N. Note that our preceding analysis of MP was by far less detailed.
Note also that our analysis shows that, modulo O(1) factors, MP "uniformly dominates"
MD, since the validity of (5.6.37) with some M and σ implies the validity of (5.6.40) with
the same value of σ, a twice larger value of M, and L = 0, in which case the right hand side
in (5.6.44) is within an absolute constant factor of the right hand side in (5.6.43).
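In the Euclidean setup, Mirror Prox is the classical extragradient method, and its advantage over plain gradient steps is already visible on the simplest smooth saddle. An illustrative sketch (the saddle and stepsize are made up, not from the text): for f(x, y) = xy on [−1, 1]², with field g(x, y) = (y, −x) (so M = 0, L = 1), the plain step v − γg(v) moves away from the saddle point (0, 0), while the extragradient iterate contracts to it:

```python
# Euclidean Mirror Prox (= extragradient) for the bilinear saddle f(x,y) = x*y
# on Z = [-1,1]^2, with field g(x, y) = (y, -x) (L = 1, M = 0) and stepsize
# gamma <= 1/(2L). Plain descent-ascent v - gamma*g(v) spirals outward instead.
def g(v):
    return (v[1], -v[0])

def proj(v):                          # projection onto the box Z
    return tuple(min(1.0, max(-1.0, c)) for c in v)

def step(v, gamma=0.5):
    gv = g(v)
    w = proj((v[0] - gamma * gv[0], v[1] - gamma * gv[1]))   # extrapolation w_tau
    gw = g(w)
    return proj((v[0] - gamma * gw[0], v[1] - gamma * gw[1])), w

v = (0.5, 0.5)
for _ in range(100):
    v, w = step(v)
print(v[0] ** 2 + v[1] ** 2)          # tends to 0: iterates contract to the saddle
```

The single extra oracle call per step is exactly what buys the O(1/N) term in (5.6.44) for smooth fields.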
Finally, we remark that what was said so far about algorithms processing vector fields in order
to achieve small residual, and the implications for solving problems with convex structure, is not
restricted to proximal sublinearly converging first order methods like Mirror Descent and Mirror
Prox. With minor modifications, the outlined explanation of "what indeed is going on" works
also for "fast" black-box oriented algorithms like the Ellipsoid Method, provided the vector field
in question is represented by a precise deterministic oracle; see [40].

5.7 Well-structured monotone vector fields


In the previous Section, we have introduced several types of "problems with convex structure" and have
demonstrated that all of them can be posed as Variational Inequalities with monotone vector fields. We
have demonstrated that when the domain X of VI(F, X) is convex compact and F is monotone and
bounded on X, the VI can be solved by first order algorithms, e.g., Mirror Descent/Mirror Prox. This
being said, we know that in the case of convex minimization, First Order algorithms are something which
makes sense to use only when the sizes of the minimization problem at hand, or the lack of its structure, make
the problem badly suited for processing by interior point polynomial time algorithms. Our current goal
is to find a "variational inequality analogy" of the notion of a K-representable function/set, see Section 2.3.7,
in order to be able to solve "well structured" VI's with monotone operators of moderate sizes to high
accuracy by existing polynomial time interior point algorithms.
What follows reproduces Section 4.2 of [31].
Same as in Section 2.3.7, from now on we fix a family K of regular cones in Euclidean spaces which
contains nonnegative rays, L3, and is closed w.r.t. taking finite direct products and passing from a cone
K to its dual K∗. Unless otherwise explicitly stated, all cones below belong to K.

5.7.1 Conic representability of monotone vector fields and monotone VI’s in


conic form
5.7.1.1 Conic representation of a monotone vector field
Let F(x) : X → E be a monotone and continuous vector field on a nonempty convex subset X of a Euclidean space E. Consider the set

F[F, X] = {[t; g; x] ∈ R × E × E : x ∈ X, t − ⟨g, y⟩ ≥ ⟨F(y), x − y⟩ ∀y ∈ X}   (5.7.1)

and let us make several straightforward observations.


5.7.1.1.A. F[F, X] is a convex set which contains all triples [⟨F(x), x⟩; F(x); x], x ∈ X; this set is closed provided X is so. Besides this, F[F, X] is t-monotone:

[t; g; x] ∈ F[F, X] and t′ ≥ t ⇒ [t′; g; x] ∈ F[F, X].

Indeed, F[F, X] is the intersection of the solution set of the system of nonstrict linear constraints

t − ⟨g, y⟩ − ⟨F(y), x⟩ ≥ −⟨F(y), y⟩,   y ∈ X

in variables g, t, x (this set is closed and convex) with the convex set {[t; g; x] : x ∈ X} (which is closed if so is X). By monotonicity, for every x ∈ X we have

⟨F(x), x − y⟩ ≥ ⟨F(y), x − y⟩   ∀y ∈ X,

so that [⟨F(x), x⟩; F(x); x] ∈ F[F, X]; and t-monotonicity is evident.


5.7.1.1.B. For ε ≥ 0, let

X∗(ε) = {[t; g] ∈ R × E : sup_{y∈X} [t − ⟨g, y⟩] ≤ ε},   (5.7.2)

so that X∗(ε) is a nonempty closed convex set. Then for every ε ≥ 0, the ε-solutions to VI(F, X), that is, the points x ∈ X such that

εVI[x|F] := sup_{y∈X} ⟨F(y), x − y⟩ ≤ ε,

are exactly the points

x : ∃(t, g) : [t; g; x] ∈ F[F, X] and [t; g] ∈ X∗(ε).   (5.7.3)
496 LECTURE 5. SIMPLE METHODS FOR LARGE-SCALE PROBLEMS

Indeed, let x, t, g be such that [t; g; x] ∈ F[F, X] and [t; g] ∈ X∗(ε). From the first inclusion it follows that x ∈ X and

t − ⟨g, y⟩ ≥ ⟨F(y), x − y⟩   ∀y ∈ X,

while from the second inclusion it follows that t − ⟨g, y⟩ ≤ ε for all y ∈ X. Taken together, these two relations imply that x ∈ X and ⟨F(y), x − y⟩ ≤ ε for all y ∈ X, i.e., εVI[x|F] ≤ ε. Vice versa, if x ∈ X, ε ≥ 0, and εVI[x|F] ≤ ε, then the triple t = ε, g = 0, x clearly belongs to F[F, X] and [t; g] ∈ X∗(ε).
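For a concrete feel for 5.7.1.1.B, here is a minimal numerical sketch (assuming numpy; F(x) = x on X = [0, 1] is our own toy instance, with a grid standing in for “for all y ∈ X”); it checks membership in F[F, X], t-monotonicity, and the certificate direction of the claim.

```python
import numpy as np

# Toy instance: F(x) = x on X = [0, 1] (monotone); a grid replaces "for all y in X".
Y = np.linspace(0.0, 1.0, 1001)

def in_F(t, g, x, tol=1e-9):
    # [t; g; x] in F[F, X]  <=>  x in X and t - g*y >= F(y)*(x - y) for all y in X
    return 0.0 <= x <= 1.0 and np.all(t - g * Y >= Y * (x - Y) - tol)

def sup_X(t, g):
    # sup_{y in X} [t - <g, y>], the quantity bounded by eps in X_*(eps)
    return np.max(t - g * Y)

def vi_accuracy(x):
    # epsVI[x|F] = sup_{y in X} F(y)*(x - y); here it equals x**2 / 4
    return np.max(Y * (x - Y))

# The canonical triples [<F(x), x>; F(x); x] lie in F[F, X], and t-monotonicity holds:
assert all(in_F(x * x, x, x) for x in np.linspace(0, 1, 11))
assert in_F(0.5 ** 2 + 1.0, 0.5, 0.5)
# Certificate direction: t = eps, g = 0 works whenever vi_accuracy(x) <= eps.
x, eps = 0.2, vi_accuracy(0.2)
assert in_F(eps, 0.0, x) and sup_X(eps, 0.0) <= eps + 1e-9
```

The grid test is of course only a sanity check, not a proof; it merely illustrates the bookkeeping behind (5.7.2)-(5.7.3).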

K-representation of (F, X). Given a continuous monotone vector field F : X → E on a nonempty convex subset X of a Euclidean space E, let us call a conic constraint²²

Xx + Gg + tT + Uu ≤K a   (5.7.4)

in variables t ∈ R, g ∈ E, x ∈ E, and u ∈ Rᵏ a K-representation of (F, X), if the set

T := {[t; g; x] : ∃u : Xx + Gg + tT + Uu ≤K a}   (5.7.5)
possesses the following two properties:

(i) T is contained in F[F, X] and, along with the latter set, is “t-monotone:”

[t; g; x] ∈ T and t′ ≥ t ⇒ [t′; g; x] ∈ T

(note that when T ≠ ∅, t-monotonicity is equivalent to T ≤K 0);

and

(ii) T contains the set

{[⟨F(x), x⟩; F(x); x] : x ∈ X}   [⊂ F[F, X]]

If the set (5.7.5) satisfies (i) and the relaxed version of (ii), specifically,

(ii′) T contains the set

{[t; F(x); x] : t > ⟨F(x), x⟩, x ∈ X}   [⊂ F[F, X]]

we say that (5.7.4) is an almost K-representation of (F, X).


Let us make an immediate observation:

Remark 5.7.1 In the situation described at the beginning of this section, let X be closed, and let Y be a convex set such that

Conv{[xᵀF(x); F(x); x] : x ∈ X} ⊂ Y ⊂ cl Conv{[xᵀF(x); F(x); x] : x ∈ X}.

Then every K-representation of Y represents (F, X).

5.7.1.2 Conic form of conic-representable monotone VI


Our main observation is as follows:
²² Recall that we have fixed a family K of regular cones in Euclidean spaces, and that, by our standing convention, all cones to be considered below belong to K.

Proposition 5.7.1 Let X ⊂ E be a nonempty convex compact set given by an essentially strictly feasible K-representation:

X = {x : ∃v : Ax + Bv ≤L b},   [L ∈ K],   (5.7.6)

so that X∗(ε), see (5.7.2), by conic duality admits the K-representation

X∗(ε) = {[t; g] : ∃λ : Aᵀλ + g = 0, Bᵀλ = 0, t + bᵀλ ≤ ε, λ ≥L∗ 0}.   (5.7.7)

Let, moreover, F : X → E be a continuous monotone vector field, and let (5.7.4) be an almost K-representation of (F, X). Then for every ε > 0 the system of conic constraints

Xx + Gg + tT + Uu ≤K a   (a)
Aᵀλ + g = 0              (b.1)
Bᵀλ = 0                  (b.2)
t + bᵀλ ≤ ε              (b.3)   (5.7.8)
λ ≥L∗ 0                  (b.4)

in variables x, g, t, u, and λ is feasible, and the x-component of any feasible solution belongs to X and is an ε-solution to VI(F, X):

εVI[x|F] := max_{y∈X} Fᵀ(y)[x − y] ≤ ε.

Therefore, finding an ε-solution to VI(F, X) reduces to finding a feasible solution to an explicit feasible K-conic constraint.
When (5.7.4) is a K-representation of (F, X), the above conclusion holds true for all ε ≥ 0.
Proof. Let us fix ε > 0, and let x̄ solve VI(F, X) (a solution exists since X is compact). Then ⟨F(x̄), x̄ − y⟩ ≤ 0 for all y ∈ X due to the continuity of F on X. Consequently, setting t̄ = ⟨F(x̄), x̄⟩, ḡ = F(x̄), we have [t̄ + ε; ḡ] ∈ X∗(ε), implying by (5.7.7) that there exists λ̄ such that t = t̄ + ε, g = ḡ, λ = λ̄ solve (5.7.8.b.1-4). Besides this, by (ii′) there exists ū such that x = x̄, g = ḡ, t = t̄ + ε, u = ū solve (5.7.8.a). Thus, (5.7.8) is feasible.
Next, if x, g, t, u, λ is a feasible solution to (5.7.8), then by (5.7.8.a) one has [t; g; x] ∈ T, with T given by (5.7.5), whence [t; g; x] ∈ F[F, X], implying that x ∈ X, see (5.7.1). Besides this, (5.7.8.b.1-4) say that [t; g] ∈ X∗(ε). Thus, [t; g; x] ∈ F[F, X] and [t; g] ∈ X∗(ε), implying by 5.7.1.1.B that εVI[x|F] ≤ ε.
Finally, when (5.7.4) is a K-representation of (F, X), the above reasoning works for ε = 0. □

5.7.2 Calculus of conic representations of monotone vector fields


K-representations of pairs (F, X) (X a nonempty convex subset of a Euclidean space E, F : X → E a continuous monotone vector field) admit a calculus; for verification of the claims to follow, see Section 5.7.4.

5.7.2.1 Raw materials


Raw materials for the calculus of K-representable monotone vector fields include:
1. [Affine monotone vector field] Let K contain all Lorentz cones. Then the affine monotone vector field F(x) = Ax + a on X = E = Rⁿ is K-represented by the conic constraint (cf. [42, Proposition 7.4.3])

t ≥ xᵀĀx + aᵀx,   g = Ax + a   (5.7.9)

in variables t, g, x, where Ā = (1/2)[A + Aᵀ].
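Before moving on, here is a small numerical sanity check of this item (a sketch assuming numpy; the matrices below are randomly generated toy data, not from the text): feasible triples of (5.7.9) indeed land in F[F, E].

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
R = rng.standard_normal((n, n))
S = rng.standard_normal((n, n)); S = S - S.T       # skew-symmetric part
A = R.T @ R + S                                    # symmetric part Abar = R^T R is PSD
a = rng.standard_normal(n)
Abar = 0.5 * (A + A.T)
assert np.all(np.linalg.eigvalsh(Abar) >= -1e-9)   # monotonicity of F(x) = Ax + a

def feasible_triple(x, slack=0.0):
    # a point of the set cut out by (5.7.9): t >= x^T Abar x + a^T x, g = Ax + a
    return x @ Abar @ x + a @ x + slack, A @ x + a, x

# Every feasible triple lies in F[F, E]: t - <g, y> >= <F(y), x - y> for all y.
for _ in range(200):
    x, y = rng.standard_normal(n), rng.standard_normal(n)
    t, g, _ = feasible_triple(x, slack=rng.uniform(0.0, 1.0))
    assert t - g @ y >= (x - y) @ (A @ y + a) - 1e-9
```

The check mirrors the two-step argument in Section 5.7.4.1: first t − yᵀg ≥ (x − y)ᵀ(Ax + a), then monotonicity replaces Ax by Ay.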
2. [Gradient field of continuously differentiable K-representable convex function] Let X be a nonempty convex compact set, and let f(x) : X → R be a continuously differentiable convex function with K-representable epigraph,

{(x, s) : s ≥ f(x), x ∈ X} = {(x, s) : ∃u : Ax + sp + Qu ≤K a},   (5.7.10)

the representation being essentially strictly feasible. Then the conic constraint

t ≥ s + r,  Ax + sp + Qu ≤K a,  r ≥ aᵀλ,  Aᵀλ = g,  pᵀλ = −1,  Qᵀλ = 0,  λ ≥K∗ 0   (5.7.11)

in variables t, g, x, s, r, λ, and u represents (F(·) := f′(·), X) (cf. [42, Proposition 7.4.4]).


3. [Monotone vector field associated with continuously differentiable K-representable convex-concave function ψ] Let K contain the 3D Lorentz cone, and let U ⊂ R^{n_u} and V ⊂ R^{n_v} be K-representable nonempty compact sets:

U = {u : ∃α : Au + Bα ≤K a},   (a)
V = {v : ∃β : Cv + Dβ ≤L b},   (b)   (5.7.12)

both representations being essentially strictly feasible. Assume that ψ(u, v) : U × V → R is a continuously differentiable convex-concave function which is K-representable on U × V with an essentially strictly feasible representation (see Section 2.3.7.1); that is, ψ admits the representation

∀(u ∈ U, v ∈ V) : ψ(u, v) = inf_{f,τ,ξ} { fᵀv + τ : Pf + τp + Qξ + Ru ≤M c }   (5.7.13)

such that the conic constraint Pf + τp + Qξ ≤M c − Ru in variables f, τ, ξ is essentially strictly feasible for every u ∈ U. Let

F(u, v) = [∇ᵤψ(u, v); −∇ᵥψ(u, v)] : X := U × V → E := R^{n_u} × R^{n_v}

be the vector field associated with U, V, ψ; it is well known that this field is monotone. Then
(i) The set

Z = {(t ∈ R, g = [h; e] ∈ E, x = [u; v] ∈ X) : ∃r, s :
      r > max_{ζ∈V} [ζᵀe + ψ(u, ζ)]   (a)
      s > max_{ω∈U} [ωᵀh − ψ(ω, v)]   (b)   (5.7.14)
      t ≥ r + s }                      (c)

satisfies the relations

Z ⊂ F[F, X]   (a)   (5.7.15)
x ∈ X, g = F(x), t > xᵀF(x) ⇒ [t; g; x] ∈ Z.   (b)

(ii) Besides this, Z is nothing but the projection, onto the plane of the (t, g = [h; e], x = [u; v])-variables, of the solution set of the conic constraint

t ≥ r + s,   r ≥ θ + γᵀb + τ,   s ≥ θ′ + cᵀδ + aᵀϵ,
Au + Bα ≤K a,   Cv + Dβ ≤L b,   Pf + τp + Qξ + Ru ≤M c,
Cᵀγ = f + e,  Dᵀγ = 0,  Pᵀδ + v = 0,  pᵀδ = −1,  Qᵀδ = 0,  Rᵀδ + Aᵀϵ = h,  Bᵀϵ = 0,
γ ≥L∗ 0,  δ ≥M∗ 0,  ϵ ≥K∗ 0,  η ≥ 0, θ ≥ 0, ηθ ≥ 1,  η′ ≥ 0, θ′ ≥ 0, η′θ′ ≥ 1
(5.7.16)

in variables t, g = [h; e], x = [u; v], r, s, f, τ, ξ, α, β, γ, η, θ, δ, ϵ, η′, and θ′.
Taken together, (i) and (ii) say that the conic constraint (5.7.16) almost represents (F, X).
4. [Univariate monotone rational vector field] Let X = [a, b] ⊂ R (−∞ < a < b < ∞), and let α(x) and β(x) be real algebraic polynomials such that β(x) > 0 on X. Suppose that the univariate vector field given on X by the function F(x) = α(x)/β(x) is nondecreasing on X, and that K contains all semidefinite cones. Then the set

Y := Conv{[xF(x); F(x); x] : x ∈ X}

admits an explicit K-representation which, by Remark 5.7.1, represents (F, X).



5.7.2.2 Calculus rules


Calculus rules are as follows.
1. [Restriction on a K-representable set] Let the conic constraint

Xx + Gg + tT + Uu ≤K a   (5.7.17)

in variables t, g, x, u represent (almost represent) (F, X), let the set Y ⊂ E be K-representable:

Y = {y ∈ E : ∃v : Ay + Bv ≤L b},   (5.7.18)

and let Y have a nonempty intersection Z = Y ∩ X with X. Denoting by F̄ the restriction of F to Z, the conic constraint

{Xx + Gg + tT + Uu ≤K a,  Ax + Bv ≤L b}   (5.7.19)

in variables t, g, x, u, v represents (resp., almost represents) (F̄, Z) (cf. [42, Proposition 7.4.5]).
2. [Direct summation] For k ≤ K, let Xₖ be nonempty convex subsets of Euclidean spaces Eₖ, let Fₖ : Xₖ → Eₖ be continuous monotone vector fields, and let

Xₖxₖ + Gₖgₖ + tₖTₖ + Uₖuₖ ≤Kₖ aₖ,   1 ≤ k ≤ K   (5.7.20)

be K-representations of (Fₖ, Xₖ). Denote

F([x₁; ...; x_K]) = [F₁(x₁); ...; F_K(x_K)]  and  X = X₁ × ... × X_K.

Then the conic constraint

Xₖxₖ + Gₖgₖ + tₖTₖ + Uₖuₖ ≤Kₖ aₖ,  k ≤ K   (a)
t = Σₖ tₖ                                   (b)   (5.7.21)

in variables t, g := [g₁; ...; g_K], x := [x₁; ...; x_K], t₁, ..., t_K, u₁, ..., u_K K-represents (F, X).
When the conic constraints (5.7.20) almost represent (Fₖ, Xₖ), (5.7.21) almost represents (F, X) (cf. [42, Proposition 7.4.6]).
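A minimal numerical illustration of the direct-summation rule (a sketch assuming numpy; the quadratic fields Q₁, Q₂ are our own toy data): the direct sum of monotone fields is monotone, and the t-components of the canonical triples add up as in (5.7.21.b).

```python
import numpy as np

rng = np.random.default_rng(1)

# Two monotone fields on R^2: gradients of convex quadratics x -> x^T Q_k x / 2.
Q1 = np.array([[2.0, 0.5], [0.5, 1.0]])
Q2 = np.array([[1.0, -0.3], [-0.3, 3.0]])
F1 = lambda x: Q1 @ x
F2 = lambda x: Q2 @ x
F = lambda z: np.concatenate([F1(z[:2]), F2(z[2:])])   # direct sum on R^4

# Monotonicity of the direct sum follows from monotonicity of the blocks:
for _ in range(100):
    z, w = rng.standard_normal(4), rng.standard_normal(4)
    assert (F(z) - F(w)) @ (z - w) >= -1e-9

# The t-components of canonical triples simply add up, matching (5.7.21.b):
z = rng.standard_normal(4)
t1, t2 = F1(z[:2]) @ z[:2], F2(z[2:]) @ z[2:]
assert abs(F(z) @ z - (t1 + t2)) < 1e-12
```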
3. [Taking conic combinations] Let X be a nonempty convex subset of a Euclidean space E, and let F₁, ..., F_K be continuous monotone vector fields on X with (Fₖ, X) admitting K-representations

Xₖx + Gₖg + tTₖ + Uₖuₖ ≤Kₖ aₖ,   1 ≤ k ≤ K.   (5.7.22)

Let, further, αₖ > 0 be given, and let

F(x) = Σₖ αₖFₖ(x) : X → E.

Then the conic constraint

Xₖx + Gₖgₖ + tₖTₖ + Uₖuₖ ≤Kₖ aₖ,  1 ≤ k ≤ K   (a)
g = Σₖ αₖgₖ                                    (b)   (5.7.23)
t = Σₖ αₖtₖ                                    (c)

in variables t, g, x, {tₖ, gₖ, uₖ, k ≤ K} is a K-representation of (F, X).
When the constraints (5.7.22) are almost representations of (Fₖ, X), (5.7.23) is an almost representation of (F, X).
4. [Affine substitution of variables] Let X be a nonempty convex subset of E, and let F be a continuous monotone vector field on X, with (F, X) given by the K-representation

Xx + Gg + tT + Uu ≤K a.   (5.7.24)

Assume that ξ ↦ Aξ + a is an affine mapping from a Euclidean space Λ to E, and let

Ξ = {ξ : Aξ + a ∈ X}.

Then the vector field Φ(ξ) = AᵀF(Aξ + a) : Ξ → Λ is continuous and monotone, and the conic constraint

τ = t − ⟨g, a⟩,   γ = Aᵀg,   X(Aξ + a) + Gg + tT + Uu ≤K a   (5.7.25)

in variables τ, γ, ξ, g, t, and u represents (Φ, Ξ).
When (5.7.24) almost represents (F, X), the conic constraint (5.7.25) almost represents (Φ, Ξ).
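The substitution rule is also easy to probe numerically; the sketch below (assuming numpy; Q, A, a are our own toy data) checks that Φ(ξ) = AᵀF(Aξ + a) inherits monotonicity and that the canonical triples transform via τ = t − ⟨g, a⟩, γ = Aᵀg as in (5.7.25).

```python
import numpy as np

rng = np.random.default_rng(2)
Q = np.array([[3.0, 1.0, 0.0], [1.0, 2.0, 0.5], [0.0, 0.5, 1.0]])  # PD -> F monotone
F = lambda x: Q @ x                       # monotone field on E = R^3
A = rng.standard_normal((3, 2))           # affine map xi -> A xi + a, Lambda = R^2
a = rng.standard_normal(3)
Phi = lambda xi: A.T @ F(A @ xi + a)      # the substituted field on Lambda

# Phi inherits monotonicity from F:
for _ in range(100):
    u, v = rng.standard_normal(2), rng.standard_normal(2)
    assert (Phi(u) - Phi(v)) @ (u - v) >= -1e-9

# Canonical triples transform as in (5.7.25): tau = t - <g, a>, gamma = A^T g.
xi = rng.standard_normal(2)
x = A @ xi + a
t, g = F(x) @ x, F(x)
tau, gamma = t - g @ a, A.T @ g
assert abs(tau - Phi(xi) @ xi) < 1e-9 and np.allclose(gamma, Phi(xi))
```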

5.7.3 Illustrations
5.7.3.1 “Academic” illustration
Let K contain Lorentz cones, let M ∈ Rⁿˣⁿ be such that M̄ := (1/2)[M + Mᵀ] ⪰ 0, and let X ⊂ Rⁿ₊ be nonempty and K-representable:

X = {x : ∃u₀ : A₀x + B₀u₀ ≤K₀ a₀}.   (5.7.26)

Suppose that the operator

F(x) = Mx − [f₁(x); ...; fₙ(x)]   (5.7.27)

is monotone on X, and that the functions fᵢ(x) and −Σᵢ xᵢfᵢ(x) are K-representable on X:

{(x, s) : x ∈ X, s ≥ fᵢ(x)} = {(x, s) : ∃u₁ᵢ : A₁ᵢx + sB₁ᵢ + C₁ᵢu₁ᵢ ≤K₁ᵢ a₁ᵢ}   (aᵢ)   (5.7.28)
{(x, s) : x ∈ X, s ≥ −Σᵢ xᵢfᵢ(x)} = {(x, s) : ∃u₂ : A₂x + sB₂ + C₂u₂ ≤K₂ a₂}   (b)

Observe that when M̄ ⪰ αIₙ with α > 0, F definitely is monotone, provided that f(x) := [f₁(x); ...; fₙ(x)] is Lipschitz continuous on X with Lipschitz constant w.r.t. ‖·‖₂ bounded by α.
Given x ∈ X and g ∈ Rⁿ, let us consider the function

f_{x,g}(y) = gᵀy + Fᵀ(y)(x − y) = −yᵀMy + Σᵢ [gᵢyᵢ + xᵢ[[My]ᵢ − fᵢ(y)] + yᵢfᵢ(y)] : X → R.

We are in the situation where xᵢ ≥ 0 for x ∈ X, so that

{(y, r) : y ∈ X, f_{x,g}(y) ≥ r}
 = {(y, r) : ∃u₀, s₀, s₁ᵢ, s₂ :
     A₀y + B₀u₀ ≤K₀ a₀,   yᵀMy − s₀ ≤ 0,
     s₁ᵢ + fᵢ(y) ≤ 0, 1 ≤ i ≤ n,   s₂ − Σᵢ yᵢfᵢ(y) ≤ 0,
     Σᵢ [gᵢ + [Mᵀx]ᵢ] yᵢ − s₀ + Σᵢ xᵢs₁ᵢ + s₂ ≥ r}
 = {(y, r) : ∃u₀, s₀, s₁ᵢ, s₂, u₁ᵢ, u₂ :
     A₀y + B₀u₀ ≤K₀ a₀,   yᵀMy − s₀ ≤ 0,   A₂y − s₂B₂ + C₂u₂ ≤K₂ a₂,
     A₁ᵢy − s₁ᵢB₁ᵢ + C₁ᵢu₁ᵢ ≤K₁ᵢ a₁ᵢ, 1 ≤ i ≤ n,
     Σᵢ [gᵢ + [Mᵀx]ᵢ] yᵢ − s₀ + Σᵢ xᵢs₁ᵢ + s₂ ≥ r}

Taking into account that M̄ ⪰ 0, the constraint yᵀMy − s₀ ≤ 0 can be represented by a strictly feasible conic constraint

A₃y + s₀B₃ ≤K₃ a₃

on a Lorentz cone K₃, and we get

max_{y∈X} f_{x,g}(y) = sup_{y,u₀,s₀,s₁ᵢ,s₂,u₁ᵢ,u₂} { Σᵢ [gᵢ + [Mᵀx]ᵢ] yᵢ − s₀ + Σᵢ xᵢs₁ᵢ + s₂ :
    A₀y + B₀u₀ ≤K₀ a₀,   A₃y + s₀B₃ ≤K₃ a₃,
    A₁ᵢy − s₁ᵢB₁ᵢ + C₁ᵢu₁ᵢ ≤K₁ᵢ a₁ᵢ, 1 ≤ i ≤ n,   A₂y − s₂B₂ + C₂u₂ ≤K₂ a₂ }.   (5.7.29)

Assume that the system of conic constraints in (5.7.29), in the variables y, s₀, u₀, s₁ᵢ, s₂, u₁ᵢ, and u₂, is essentially strictly feasible. Then, by conic duality,

max_{y∈X} f_{x,g}(y) = min_{μ,ν,ξᵢ,η} { a₀ᵀμ + a₃ᵀν + Σᵢ a₁ᵢᵀξᵢ + a₂ᵀη :
    Mᵀx + g = A₀ᵀμ + A₃ᵀν + Σᵢ A₁ᵢᵀξᵢ + A₂ᵀη,
    xᵢ + B₁ᵢᵀξᵢ = 0, 1 ≤ i ≤ n,   B₂ᵀη = −1,   B₃ᵀν = −1,
    B₀ᵀμ = 0,   C₁ᵢᵀξᵢ = 0, 1 ≤ i ≤ n,   C₂ᵀη = 0,
    μ ≥K₀∗ 0,  ν ≥K₃∗ 0,  ξᵢ ≥K₁ᵢ∗ 0, 1 ≤ i ≤ n,  η ≥K₂∗ 0 }.


Now, recalling what f_{x,g}(y) is, we end up with a K-representation of F[F, X]:

F[F, X] := {[t; g; x] : x ∈ X, t − gᵀy ≥ Fᵀ(y)[x − y] ∀y ∈ X}
 = {[t; g; x] : ∃u₀, μ, ν, ξᵢ, η :
    t ≥ a₀ᵀμ + a₃ᵀν + Σᵢ a₁ᵢᵀξᵢ + a₂ᵀη,   A₀x + B₀u₀ ≤K₀ a₀,
    Mᵀx + g = A₀ᵀμ + A₃ᵀν + Σᵢ A₁ᵢᵀξᵢ + A₂ᵀη,
    xᵢ + B₁ᵢᵀξᵢ = 0, 1 ≤ i ≤ n,   B₂ᵀη = −1,   B₃ᵀν = −1,
    B₀ᵀμ = 0,   C₁ᵢᵀξᵢ = 0, 1 ≤ i ≤ n,   C₂ᵀη = 0,
    μ ≥K₀∗ 0,  ν ≥K₃∗ 0,  ξᵢ ≥K₁ᵢ∗ 0, 1 ≤ i ≤ n,  η ≥K₂∗ 0},

so that the conic constraint

    t ≥ a₀ᵀμ + a₃ᵀν + Σᵢ a₁ᵢᵀξᵢ + a₂ᵀη,   A₀x + B₀u₀ ≤K₀ a₀,
    Mᵀx + g = A₀ᵀμ + A₃ᵀν + Σᵢ A₁ᵢᵀξᵢ + A₂ᵀη,
    xᵢ + B₁ᵢᵀξᵢ = 0, 1 ≤ i ≤ n,   B₂ᵀη = −1,   B₃ᵀν = −1,
    B₀ᵀμ = 0,   C₁ᵢᵀξᵢ = 0, 1 ≤ i ≤ n,   C₂ᵀη = 0,
    μ ≥K₀∗ 0,  ν ≥K₃∗ 0,  ξᵢ ≥K₁ᵢ∗ 0, 1 ≤ i ≤ n,  η ≥K₂∗ 0

in variables t, g, x, u₀, μ, ν, ξᵢ, and η is a K-representation of (F, X).

5.7.3.2 Nash Equilibrium


The “covering story” for this example is as follows:

n ≥ 2 retailers intend to enter a certain market, say, that of red herrings. To this end they should select their “selling capacities” (say, rented areas at malls) xᵢ, 1 ≤ i ≤ n, in given ranges Xᵢ = [0, Xᵢ] (0 < Xᵢ < ∞). With the selections x = [x₁; ...; xₙ] ∈ X = X₁ × ... × Xₙ ⊂ Rⁿ₊, the monthly loss of the i-th retailer is

φᵢ(x) = cᵢxᵢ − b xᵢ / (Σⱼ₌₁ⁿ xⱼ + a),

where cᵢxᵢ, cᵢ > 0, is the price of the capacity xᵢ, b > 0 is the dollar value of the demand for red herrings, and a > 0 is the total selling capacity of the already existing retailers; the term −b xᵢ/(Σⱼ xⱼ + a) is minus the revenue of the i-th retailer under the assumption that the total demand is split among the retailers proportionally to their selling capacities.
We want to find a Nash equilibrium, that is, a point x* = [x₁*; ...; xₙ*] ∈ X such that every one of the functions φᵢ(x₁*, ..., x*ᵢ₋₁, xᵢ, x*ᵢ₊₁, ..., xₙ*) attains its minimum over xᵢ ∈ Xᵢ at xᵢ = xᵢ*, so that the i-th retailer has no incentive to deviate from the capacity xᵢ* provided that all other retailers j stick to the capacities xⱼ*, and this is so for all i.
As is immediately seen, the Nash Equilibrium problem in question is convex, meaning that φᵢ(x) is convex in xᵢ and concave in xⁱ := (x₁, ..., xᵢ₋₁, xᵢ₊₁, ..., xₙ) and, on top of this, Σᵢ φᵢ(x) is convex on X. It is well known (for justification see, e.g., [40]) that for such a problem the vector field

F(x) = [∂φ₁(x)/∂x₁; ∂φ₂(x)/∂x₂; ...; ∂φₙ(x)/∂xₙ] : X → Rⁿ

is monotone, and that Nash Equilibria are exactly the weak=strong solutions to VI(F, X). Specifying K as the family of direct products of Lorentz cones, we are about to demonstrate that F admits an explicit K-representation on X, allowing us to reduce the problem of finding a Nash Equilibrium to an explicit Second Order conic problem. This is how the construction goes (to save notation, in what follows we set b = a = 1, which always can be achieved by an appropriate scaling of the capacities and loss functions):
1. Observe that all we need is a K-representation R(t, g, x, u) of (F, X), that is, a K-conic constraint in variables t ∈ R, g ∈ Rⁿ, x ∈ Rⁿ and additional variables u satisfying the requirements specified in Section 5.7.1.1. By Proposition 5.7.1, given R, for ε ≥ 0 we can write down an explicit K-conic constraint C_ε(t, g, x, u, v) in variables t, g, x and additional variables u, v, with the size of the constraint (the dimension of the associated cone and the total number of variables) independent of ε, such that the constraint is feasible and the x-component of any feasible solution to the constraint is an ε-solution to VI(F, X):

x ∈ X  &  εVI[x|F] = max_{y∈X} Fᵀ(y)[x − y] ≤ ε.

As a result, finding an ε-solution to VI(F, X) reduces to solving an explicit solvable Second Order feasibility conic problem of size independent of ε.

2. Thus, all we need is to find an explicit K-representation of (F, X ). This can be done as follows:

(a) Consider the convex-concave function

ψ(u, v) = −u/(u + v + 1) : R²₊ → R

along with the associated monotone vector field

Φ(u, v) = [∂ψ(u, v)/∂u; −∂ψ(u, v)/∂v] = [−(v + 1)/(u + v + 1)²; −u/(u + v + 1)²].

Let also Aᵢ be the 2×n matrix with i-th column [1; 0] and all remaining columns equal to [0; 1], and let G(x) = ∇ₓ(1/(Σⱼ xⱼ + 1)). As is immediately seen, we have

∀x ∈ Rⁿ₊ : Ψ(x) := 2c + Σᵢ₌₁ⁿ AᵢᵀΦ(Aᵢx) + G(x) = 2F(x),

so that a K-representation of F is readily given by K-representations of the constant monotone vector field ≡ c, of Φ, and of G via our calculus (the rules on affine substitution of the argument and on summation).
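The identity Ψ(x) = 2F(x) above is easy to verify numerically; the sketch below (assuming numpy; the costs c are our own random toy data, and a = b = 1 as agreed) assembles Ψ from c, Φ, and G exactly as in the display and compares it with 2F.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
c = rng.uniform(0.5, 1.5, n)

def F(x):                                   # F_i(x) = dphi_i/dx_i with a = b = 1
    s = x.sum() + 1.0
    return c - (s - x) / s ** 2             # s - x_i = sum_{j != i} x_j + 1

def Phi(u, v):                              # field of psi(u, v) = -u / (u + v + 1)
    s = u + v + 1.0
    return np.array([-(v + 1.0) / s ** 2, -u / s ** 2])

def G(x):                                   # gradient of f(x) = 1 / (sum_j x_j + 1)
    s = x.sum() + 1.0
    return -np.ones(n) / s ** 2

def Psi(x):
    out = 2 * c + G(x)
    for i in range(n):
        Ai_x = np.array([x[i], x.sum() - x[i]])       # A_i x = [x_i; sum_{j!=i} x_j]
        phi = Phi(*Ai_x)
        out[i] += phi[0]                               # i-th column of A_i is [1; 0]
        out += phi[1] * (np.arange(n) != i)            # remaining columns are [0; 1]
    return out

for _ in range(50):
    x = rng.uniform(0.0, 2.0, n)
    assert np.allclose(Psi(x), 2 * F(x))
```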
A K-representation of the constant vector field ≡ c is trivial; it is given by the system of linear equalities t = cᵀx, g = c in variables (t, g, x) ∈ R × Rⁿ × Rⁿ. A K-representation of the gradient vector field G(x) is given by item 2 in Section 5.7.2.1; we are in the case where (5.7.10) reads

{(x, s) : s ≥ f(x), x ∈ X} = {(x, s) : −x ≤ 0,  x ≤ X := [X₁; ...; Xₙ],  [2; Σᵢxᵢ + 1 − s; Σᵢxᵢ + 1 + s] ∈ L³}

(the L³-constraint says that Σᵢxᵢ + 1 + s ≥ 0 and (Σᵢxᵢ + 1 + s)² ≥ (Σᵢxᵢ + 1 − s)² + 4, i.e., that s(Σᵢxᵢ + 1) ≥ 1), so that (5.7.11), as seen from an immediate computation, reduces to the system of constraints

t ≥ r + s,   0 ≤ x ≤ X,   r ≥ Σᵢ max[Xᵢ(gᵢ + θ), 0] + θ − 2ν,
[2; Σᵢxᵢ + 1 − s; Σᵢxᵢ + 1 + s] ∈ L³   [⇔ s(Σⱼxⱼ + 1) ≥ 1],
[2ν; θ − 1; θ + 1] ∈ L³   [⇔ ν² ≤ θ]

in variables t, g, x, θ, ν.

(b) It remains to find a K-representation of the vector field Φ on a rectangle Ξ = U × V, U = [0, U], V = [0, V], with given U > 0, V > 0 (they should be large enough in order for Aᵢx, x ∈ X, to take values in the rectangle).
Let us use the construction described in item 3 of Section 5.7.2.1. By (i) of item 3, the set

Z = {(t ∈ R, g = [h; e] ∈ R², x = [u; v] ∈ Ξ) : ∃r, s :
      r ≥ max_{ζ∈V} [ζe + ψ(u, ζ)],   s ≥ max_{ω∈U} [ωh − ψ(ω, v)],   t ≥ r + s}

satisfies the relations

Z ⊂ F[Φ, Ξ],
x ∈ Ξ, g = Φ(x), t ≥ xᵀΦ(x) ⇒ [t; g; x] ∈ Z,

so that all we need to get a K-representation of Φ is to represent Z as the projection onto the plane of the (t, g, x)-variables of the solution set of a K-conic constraint in variables t, g, x and additional variables. To this end note that for all (u, v) ∈ Ξ = U × V one has

ψ(u, v) = min_{f,t,s,τ} { fv + t :  τ ≥ 0, 1 ≥ f ≥ 0, s ≥ 0, t ≤ 1, s + τ ≤ 1,
    [2s; u − f; u + f] ∈ L³,   [2(1 − s); t − f; t − f + 2] ∈ L³,   [2; u − τ + 1; u + τ + 1] ∈ L³ }   (5.7.30)

(we write (f, t, s, τ) ∈ Π₊ for the feasible set of (5.7.30)), and

−ψ(u, v) = min_{f,t,s} { fu + t :  1 ≥ f ≥ 0, 0 ≤ s ≤ 1, t ≤ 1,
    [2s; v + 1 − f; v + 1 + f] ∈ L³,   [2(1 − s); t − 1; t + 1] ∈ L³ }   (5.7.31)

(feasible set Π₋, so that (f, t, s) ∈ Π₋);

see Section 5.7.4.3 for justification. Since Π± are nonempty convex compact sets, by the Sion-Kakutani Theorem (Theorem D.4.2) one has from (5.7.30):

max_{ζ∈V} [ζe + ψ(u, ζ)] = max_{0≤ζ≤V} min_{f,t,s,τ} {[f + e]ζ + t : (f, t, s, τ) ∈ Π₊}
  = min_{f,t,s,τ} max_{0≤ζ≤V} {[f + e]ζ + t : (f, t, s, τ) ∈ Π₊}
  = min_{f,t,s,τ} {max[0, V(f + e)] + t : (f, t, s, τ) ∈ Π₊},

and from (5.7.31):

max_{ω∈U} [ωh − ψ(ω, v)] = max_{0≤ω≤U} min_{f,t,s} {[f + h]ω + t : (f, t, s) ∈ Π₋}
  = min_{f,t,s} max_{0≤ω≤U} {[f + h]ω + t : (f, t, s) ∈ Π₋}
  = min_{f,t,s} {max[0, U(f + h)] + t : (f, t, s) ∈ Π₋}.

As a result,

Z = {(t ∈ R, g = [h; e] ∈ R², 0 ≤ x := [u; v] ≤ [U; V]) : ∃r, s, f±, t±, s±, τ :
      t ≥ r + s,   r ≥ max[0, V(f₊ + e)] + t₊,   s ≥ max[0, U(f₋ + h)] + t₋,
      (f₊, t₊, s₊, τ) ∈ Π₊,   (f₋, t₋, s₋) ∈ Π₋}.

Recalling what Π± are, we can straightforwardly represent Z as the projection onto the space of the (t, g, x)-variables of a set given by a K-conic inequality, ending up with an explicit K-representation of Φ.

Remark 5.7.2 The just outlined construction can be used in the case of a general convex Nash Equilibrium problem where, given n convex compact sets Xᵢ ⊂ R^{kᵢ} and n continuously differentiable functions φᵢ(x₁, ..., xₙ) : X := X₁ × ... × Xₙ → R with φᵢ convex in xᵢ, concave in (x₁, ..., xᵢ₋₁, xᵢ₊₁, ..., xₙ), and such that Σᵢ φᵢ is convex, one is looking for Nash Equilibria, that is, points x* = [x₁*; ...; xₙ*] ∈ X such that for every i, the function φᵢ(x₁*, ..., x*ᵢ₋₁, xᵢ, x*ᵢ₊₁, ..., xₙ*) attains its minimum over xᵢ ∈ Xᵢ at xᵢ = xᵢ*. As was already mentioned, these equilibria are exactly the weak=strong solutions to VI(F, X), where the monotone vector field F is given by F(x₁, ..., xₙ) = [∂φ₁(x)/∂x₁; ...; ∂φₙ(x)/∂xₙ]. Observing that the monotone vector fields

Fⁱ(x) = [−∂φᵢ(x)/∂x₁; ...; −∂φᵢ(x)/∂xᵢ₋₁; ∂φᵢ(x)/∂xᵢ; −∂φᵢ(x)/∂xᵢ₊₁; ...; −∂φᵢ(x)/∂xₙ]

associated with the convex-concave functions φᵢ, and the monotone vector field F⁰(x) = ∇(Σᵢ φᵢ(x)), are linked to F by the relation 2F = Σᵢ₌₀ⁿ Fⁱ, we see that in order to get a K-representation of F it suffices to have at our disposal K-representations of the Fⁱ, 0 ≤ i ≤ n. These latter representations, in good cases, can be built according to the recipes presented in items 2 and 3 of Section 5.7.2.1.

The latter remark puts in proper perspective the “red herring” illustration, which by itself is of no actual interest: setting s = Σᵢ xᵢ + 1, x ∈ X is a Nash Equilibrium if and only if for every i

∂φᵢ(x)/∂xᵢ = cᵢ − 1/s + xᵢ/s²   { ≥ 0, if xᵢ = 0;   = 0, if 0 < xᵢ < Xᵢ;   ≤ 0, if xᵢ = Xᵢ }.

As a result, finding the equilibrium reduces to solving, on the segment s ∈ [1, Σᵢ Xᵢ + 1], the univariate equation

Σᵢ xᵢ(s) + 1 = s,   xᵢ(s) = { 0, if s(1 − scᵢ) < 0;   s(1 − scᵢ), if 0 ≤ s(1 − scᵢ) ≤ Xᵢ;   Xᵢ, if s(1 − scᵢ) > Xᵢ }.
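The univariate equation above is easy to solve by bisection, since Σᵢ xᵢ(s) + 1 − s is nonnegative at s = 1 and nonpositive at s = Σᵢ Xᵢ + 1. A minimal sketch, assuming numpy and a = b = 1; the instance data are our own, and the final loop checks the first-order equilibrium conditions displayed above.

```python
import numpy as np

def nash_equilibrium(c, X):
    # Solve sum_i x_i(s) + 1 = s on [1, sum_i X_i + 1] by bisection (a = b = 1);
    # x_i(s) = clip(s(1 - s c_i), 0, X_i) comes from the first-order conditions.
    c, X = np.asarray(c, float), np.asarray(X, float)
    x_of = lambda s: np.clip(s * (1.0 - s * c), 0.0, X)
    phi = lambda s: x_of(s).sum() + 1.0 - s      # phi(1) >= 0 >= phi(sum X + 1)
    lo, hi = 1.0, X.sum() + 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) >= 0 else (lo, mid)
    return x_of(0.5 * (lo + hi))

c, X = [0.3, 0.4, 0.6], [1.0, 1.0, 1.0]
x = nash_equilibrium(c, X)
s = x.sum() + 1.0
grad = np.asarray(c) - (s - x) / s ** 2          # dphi_i/dx_i at the candidate point
for i in range(len(x)):                          # first-order equilibrium conditions
    if x[i] <= 1e-6:
        assert grad[i] >= -1e-6                  # at the left endpoint: gradient >= 0
    elif x[i] >= X[i] - 1e-6:
        assert grad[i] <= 1e-6                   # at the right endpoint: gradient <= 0
    else:
        assert abs(grad[i]) <= 1e-6              # interior: gradient vanishes
```

Note that any root of the equation satisfies the displayed sign conditions, so bisection to machine precision delivers the equilibrium.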

5.7.4 Derivations for Section 5.7.2


5.7.4.1 Verification of “raw materials”
1. [Affine monotone vector field] Note first that the symmetric part Ā of A is ⪰ 0 due to the monotonicity of F on E, so that (5.7.9) can be rewritten as a conic constraint on a direct product of a properly selected Lorentz cone and a nonnegative orthant, and this direct product belongs to K. Second, when t, g, x solve (5.7.9), for every y ∈ E one has xᵀAx = xᵀĀx, whence

t − yᵀg = t − yᵀ[Ax + a] = [t − xᵀ[Ax + a]] + [x − y]ᵀ[Ax + a] ≥ [x − y]ᵀ[Ax + a] ≥ [x − y]ᵀ[Ay + a]

(the first inequality holds since t − xᵀ[Ax + a] = t − xᵀĀx − aᵀx ≥ 0, the second by monotonicity), that is, [t; g; x] ∈ F[F, E], as required in item (i) of a representation of (F, E). Furthermore, when x ∈ E, setting g = F(x) = Ax + a and t = xᵀF(x) = xᵀ[Ax + a] = xᵀ[Āx + a], we see that [t; g; x] satisfies (5.7.9), as required in item (ii) of a representation of (F, E).
2. [Gradient field of continuously differentiable K-representable convex function] Let

f∗(y) = sup_{x∈X} [xᵀy − f(x)]

be the Fenchel transform of f:

f∗(y) = sup_{x∈X} [xᵀy − f(x)] = sup_{x∈X, s≥f(x)} [xᵀy − s]
  = sup_{x,s,u} {xᵀy − s : Ax + sp + Qu ≤K a}
  = min_λ {aᵀλ : Aᵀλ = y, pᵀλ = −1, Qᵀλ = 0, λ ≥K∗ 0}   [by conic duality],

that is,

r ≥ f∗(g) ⇔ ∃λ : r ≥ aᵀλ, Aᵀλ = g, pᵀλ = −1, Qᵀλ = 0, λ ≥K∗ 0.   (5.7.32)
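As a quick sanity check of this description of the Fenchel transform, the sketch below (assuming numpy; f(x) = x² on X = [−1, 1] is our own toy example) compares a grid-based supremum with the closed form of f∗ and verifies the inequality chain t − gy ≥ f(x) − f(y) ≥ (x − y)f′(y) used in the argument that follows.

```python
import numpy as np

# f(x) = x^2 on X = [-1, 1]; Fenchel transform f_*(y) = sup_{x in X} [x y - f(x)].
Xgrid = np.linspace(-1.0, 1.0, 20001)

def fstar_grid(y):
    return np.max(Xgrid * y - Xgrid ** 2)

def fstar_closed(y):
    # maximizer y/2 if |y| <= 2, else the boundary of X
    return y * y / 4.0 if abs(y) <= 2.0 else abs(y) - 1.0

for y in np.linspace(-4.0, 4.0, 33):
    assert abs(fstar_grid(y) - fstar_closed(y)) < 1e-6

# With s >= f(x), r >= f_*(g) and t = s + r, one gets
# t - g*y >= f(x) - f(y) >= (x - y) f'(y) for all y in X:
x, g = 0.5, 1.0
t = x ** 2 + fstar_closed(g)
assert all(t - g * y >= (x - y) * (2 * y) - 1e-9 for y in Xgrid)
```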
Next, let

Z = {(t, g, x) : ∃s, r : s ≥ f(x), r ≥ f∗(g), s + r ≤ t}
  = {(t, g, x) : ∃s, r, u, λ :  t ≥ s + r,  Ax + sp + Qu ≤K a,   (a)
       r ≥ aᵀλ,  Aᵀλ = g,  pᵀλ = −1,  Qᵀλ = 0,  λ ≥K∗ 0}   (b)   (5.7.33)

where (5.7.33.b) is due to (5.7.10) and (5.7.32). Thus, Z is the projection of the solution set of (5.7.11) onto the space of the (t, g, x)-variables, and we should check that
(i) Z ⊂ F[f′(·), X], and
(ii) when x ∈ X, g = f′(x), and t = xᵀf′(x), we have [t; g; x] ∈ Z.
(i): Let (t, g, x) ∈ Z. We have t ≥ s + r for properly selected s ≥ f(x) and r ≥ f∗(g), whence for y ∈ X it holds that

t − gᵀy ≥ f(x) + f∗(g) − gᵀy ≥ f(x) − f(y) ≥ (x − y)ᵀf′(y),

that is, [t; g; x] ∈ F[f′(·), X]. (i) is proved.
(ii): Given x ∈ X, let us set g = f′(x), t = xᵀg, s = f(x), r = f∗(g). We have

r = f∗(g) = sup_{x′∈X} {gᵀx′ − f(x′)} = gᵀx − f(x)

(the concluding equality is due to g = f′(x)); thus, t = r + s. Invoking (5.7.33.a), we conclude that (t, g, x) ∈ Z. (ii) is proved.
3. [Monotone vector field associated with K-representable convex-concave function ψ]
Verifying (i):
1°. Let us show (5.7.15.a). Let (t, g = [h; e], x = [u; v]) ∈ Z, and let r, s be reals which, taken together with t, g, x, form a feasible solution to the system of constraints in (5.7.14). For y = [w; z] ∈ X we have

t − yᵀg ≥ r + s − wᵀh − zᵀe   [by (5.7.14.c)]
  ≥ zᵀe + ψ(u, z) + wᵀh − ψ(w, v) − wᵀh − zᵀe   [by (5.7.14.a,b)]
  = ψ(u, z) − ψ(w, v) = [ψ(u, z) − ψ(w, z)] + [ψ(w, z) − ψ(w, v)]
  ≥ [u − w]ᵀ∇ᵤψ(w, z) − [v − z]ᵀ∇ᵥψ(w, z)   [because ψ is convex-concave]
  = [x − y]ᵀF(y),

implying that Z ⊂ F[F, X].


2°. Let us now verify (5.7.15.b). Given x = [u; v] ∈ X, let us set

g = F(x) = [h; e],   h = ∇ᵤψ(u, v),   e = −∇ᵥψ(u, v);   t̄ = xᵀF(x) = hᵀu + eᵀv,

and let t > t̄. The function ζᵀe + ψ(u, ζ) of ζ ∈ V is concave, and its gradient w.r.t. ζ taken at the point ζ = v vanishes, implying that

r̄ := vᵀe + ψ(u, v) ≥ ζᵀe + ψ(u, ζ)   ∀ζ ∈ V.

Similarly, the function ωᵀh − ψ(ω, v) of ω ∈ U is concave, and its gradient w.r.t. ω taken at the point ω = u vanishes, implying that

s̄ := uᵀh − ψ(u, v) ≥ ωᵀh − ψ(ω, v)   ∀ω ∈ U.

Observing that r̄ + s̄ = t̄ < t, we can find r > r̄ and s > s̄ in such a way that r + s ≤ t. Looking at the constraints (5.7.14.a-c), we conclude that whenever x = [u; v] ∈ X and t > xᵀF(x), the triple (t, g = F(x), x) can be augmented by r, s to satisfy all the constraints (5.7.14.a-c), that is, (t, g, x) ∈ Z, as claimed in (5.7.15.b).

Verifying (ii): Clearly, all we need in order to justify the claim in (ii) is to show that the set

Z⁺ = {(g = [h; e], x = [u; v], r, s) : u ∈ U, v ∈ V, r > max_{z∈V} [zᵀe + ψ(u, z)], s > max_{w∈U} [wᵀh − ψ(w, v)]}

is the projection of the solution set of the conic constraint

r ≥ θ + γᵀb + τ,   η ≥ 0, θ ≥ 0, ηθ ≥ 1,
s ≥ θ′ + cᵀδ + aᵀϵ,   η′ ≥ 0, θ′ ≥ 0, η′θ′ ≥ 1,
Au + Bα ≤K a,   Cv + Dβ ≤L b,   Pf + τp + Qξ + Ru ≤M c,   (5.7.34)
Cᵀγ = f + e,  Dᵀγ = 0,  γ ≥L∗ 0,  δ ≥M∗ 0,  ϵ ≥K∗ 0,
Pᵀδ + v = 0,  pᵀδ = −1,  Qᵀδ = 0,  Rᵀδ + Aᵀϵ = h,  Bᵀϵ = 0

in variables x = [u; v], g = [h; e], r, s, α, β, γ, f, τ, ξ, δ, ϵ, η, θ, η′, and θ′ onto the plane of the (g = [h; e], x = [u; v], r, s)-variables. Here is the proof.
1°. Recall that V is convex and compact. Thus, for all (x = [u; v] ∈ X, g = [h; e]) we have

max_{z∈V} [zᵀe + ψ(u, z)] = max_{z∈V} [zᵀe + inf_{f,τ,ξ} {fᵀz + τ : Pf + τp + Qξ + Ru ≤M c}]   [by (5.7.13)]
  = max_{z∈V} inf_{f,τ,ξ} {(f + e)ᵀz + τ : Pf + τp + Qξ + Ru ≤M c}
  = inf_{f,τ,ξ} {max_{z∈V} (f + e)ᵀz + τ : Pf + τp + Qξ + Ru ≤M c}   [by the Sion-Kakutani Theorem (Theorem D.4.3)]
  = inf_{f,τ,ξ} {max_{z,β} {(f + e)ᵀz : Cz + Dβ ≤L b} + τ : Pf + τp + Qξ + Ru ≤M c}   [by (5.7.12.b)]
  = inf_{f,τ,ξ,γ} {γᵀb + τ : Cᵀγ = f + e, Dᵀγ = 0, γ ≥L∗ 0, Pf + τp + Qξ + Ru ≤M c}
    [by conic duality; recall that (5.7.12.b) is essentially strictly feasible].

Together with (5.7.12), the latter relation results in

{(x = [u; v], g = [h; e], r) : x ∈ X, r > max_{z∈V} [zᵀe + ψ(u, z)]}
 = {(x = [u; v], g = [h; e], r) : ∃f, τ, ξ, α, β, γ :
    r > γᵀb + τ,   Au + Bα ≤K a,   Cv + Dβ ≤L b,   γ ≥L∗ 0,
    Cᵀγ = f + e,   Dᵀγ = 0,   Pf + τp + Qξ + Ru ≤M c}   (5.7.35)
 = {(x = [u; v], g = [h; e], r) : ∃f, τ, ξ, α, β, γ, η, θ :
    r ≥ γᵀb + τ + θ,   η ≥ 0, θ ≥ 0, ηθ ≥ 1,
    Au + Bα ≤K a,   Cv + Dβ ≤L b,   γ ≥L∗ 0,
    Cᵀγ = f + e,   Dᵀγ = 0,   Pf + τp + Qξ + Ru ≤M c}.

2°. When v ∈ V and h ∈ R^{n_u} we have

max_{w∈U} [wᵀh − ψ(w, v)] = max_{w∈U} [wᵀh + sup_{f,τ,ξ} {−fᵀv − τ : Pf + τp + Qξ + Rw ≤M c}]   [by (5.7.13)]
  = sup_{w,f,τ,ξ,α} {wᵀh − fᵀv − τ : Pf + τp + Qξ + Rw ≤M c, Aw + Bα ≤K a}   [by (5.7.12.a)]
  = min_{δ,ϵ} {cᵀδ + aᵀϵ : Pᵀδ + v = 0, pᵀδ = −1, Qᵀδ = 0, Rᵀδ + Aᵀϵ = h, Bᵀϵ = 0, δ ≥M∗ 0, ϵ ≥K∗ 0}
    [by conic duality; recall that the K-representations (5.7.12.a) of U and (5.7.13) of ψ are essentially strictly feasible].

Taken together with (5.7.12), the latter relation results in

{(x = [u; v], g = [h; e], s) : x ∈ X, s > max_{w∈U} [wᵀh − ψ(w, v)]}
 = {(x = [u; v], g = [h; e], s) : ∃α, β, δ, ϵ :
    s > cᵀδ + aᵀϵ,   Au + Bα ≤K a,   Cv + Dβ ≤L b,   δ ≥M∗ 0,   ϵ ≥K∗ 0,
    Pᵀδ + v = 0,   pᵀδ = −1,   Qᵀδ = 0,   Rᵀδ + Aᵀϵ = h,   Bᵀϵ = 0}
 = {(x = [u; v], g = [h; e], s) : ∃α, β, δ, ϵ, η′, θ′ :
    s ≥ θ′ + cᵀδ + aᵀϵ,   η′ ≥ 0, θ′ ≥ 0, η′θ′ ≥ 1,
    Au + Bα ≤K a,   Cv + Dβ ≤L b,   δ ≥M∗ 0,   ϵ ≥K∗ 0,
    Pᵀδ + v = 0,   pᵀδ = −1,   Qᵀδ = 0,   Rᵀδ + Aᵀϵ = h,   Bᵀϵ = 0}.   (5.7.36)

Finally, (5.7.35) and (5.7.36) together imply that

Z⁺ := {(g = [h; e], x = [u; v], r, s) : u ∈ U, v ∈ V, r > max_{z∈V} [zᵀe + ψ(u, z)], s > max_{w∈U} [wᵀh − ψ(w, v)]}
 = {(g = [h; e], x = [u; v], r, s) : ∃f, τ, ξ, α, β, γ, η, θ, δ, ϵ, η′, θ′ :
    r ≥ θ + γᵀb + τ,   η ≥ 0, θ ≥ 0, ηθ ≥ 1,
    s ≥ θ′ + cᵀδ + aᵀϵ,   η′ ≥ 0, θ′ ≥ 0, η′θ′ ≥ 1,
    Au + Bα ≤K a,   Cv + Dβ ≤L b,   Pf + τp + Qξ + Ru ≤M c,   γ ≥L∗ 0,   δ ≥M∗ 0,   ϵ ≥K∗ 0,
    Cᵀγ = f + e,   Dᵀγ = 0,   Pᵀδ + v = 0,   pᵀδ = −1,   Qᵀδ = 0,   Rᵀδ + Aᵀϵ = h,   Bᵀϵ = 0},

as claimed in (5.7.34).
4. [Univariate monotone rational vector field] For evident reasons, it suffices to consider the case of X = [0, 1]; recall that β(t) > 0 on X. Let the degrees of α and β be μ and ν, respectively, and let

κ = max[μ, ν] + 1.

1°. Consider the curves

δ(t) = [tF(t); F(t); t] = (1/β(t)) [tα(t); α(t); tβ(t)] : [0, 1] → R³,   γ(t) = (1/β(t)) [1; t; t²; ...; t^κ] : [0, 1] → R^{κ+1}.

For a properly selected matrix A we have

δ(t) = Aγ(t),   0 ≤ t ≤ 1,

whence

Y := Conv{δ(t), 0 ≤ t ≤ 1} = AZ,   Z = Conv{γ(t), 0 ≤ t ≤ 1}.
We intend to build a semidefinite representation (SDR) of Y (i.e., a K-representation with K comprised of finite direct products of semidefinite cones). Semidefinite representability (and K-representability in general) of a set is preserved when taking linear images: an SDR

Z = {z : ∃u : A(z, u) ⪰ 0}   [A(z, u) a symmetric matrix affine in [z; u]]

of Z implies the representation

Y := AZ = {y : ∃[z; u] : y = Az, A(z, u) ⪰ 0},

and the system of constraints in the right hand side can be written down as a Linear Matrix Inequality in the variable y and additional variables z, u. Thus, all we need is to build an SDR of Z.

2°. We shall get an SDR of Z from an SDR of the “support cone”

P = {[p; q] ∈ R^{κ+1} × R : min_{z∈Z} pᵀz − q ≥ 0} = {[p; q] : min_{0≤t≤1} pᵀγ(t) ≥ q}

of Z.

Given p = [p₀; p₁; ...; p_κ] ∈ R^{κ+1}, let, with a slight abuse of notation, p(t) = Σᵢ₌₀^κ pᵢtⁱ be the polynomial with coefficients pᵢ, 0 ≤ i ≤ κ. We have

[p; q] ∈ P ⇔ p(t)/β(t) ≥ q   ∀t ∈ [0, 1]
  ⇔ [(1 + τ²)^κ p(τ²/(1 + τ²))] / [(1 + τ²)^κ β(τ²/(1 + τ²))] ≥ q   ∀τ ∈ R
  ⇔ π_{p,q}(τ) := (1 + τ²)^κ p(τ²/(1 + τ²)) − q (1 + τ²)^κ β(τ²/(1 + τ²)) ≥ 0   ∀τ ∈ R.

Note that π_{p,q}(τ) is a polynomial in τ of degree ≤ 2κ, and the vector π_{p,q} of coefficients of this polynomial is linear in [p; q]: π_{p,q} = P[p; q]. We see that P is the inverse image of the cone P₂κ of coefficient vectors of polynomials of degree ≤ 2κ which are nonnegative on the entire axis:

P = {[p; q] : P[p; q] ∈ P₂κ}.
508 LECTURE 5. SIMPLE METHODS FOR LARGE-SCALE PROBLEMS

As we remember from Lecture 3, the cone P_{2κ} is the linear image of the semidefinite cone S^{κ+1}_+:

P_{2κ} = {π ∈ R^{2κ+1} : ∃x = [x_{ij}]_{0≤i,j≤κ} ∈ S^{κ+1}_+ : [Qx]_ℓ := Σ_{0≤i,j≤κ, i+j=ℓ} x_{ij} = π_ℓ, 0 ≤ ℓ ≤ 2κ},

and we arrive at a semidefinite representation of P:

P = {[p; q] : ∃x ∈ S^{κ+1} : x ⪰ 0 & Qx = P[p; q]}.   (5.7.37)
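The map Q merely regroups entries: Σ_ℓ [Qx]_ℓ τ^ℓ = [1; τ; ...; τ^κ]^T x [1; τ; ...; τ^κ], so a PSD x indeed produces a polynomial nonnegative on the whole axis. A small numerical illustration (Python sketch; κ = 3 is an arbitrary choice):

```python
import numpy as np

kappa = 3
rng = np.random.default_rng(0)

def Q(x):
    """The linear map S^{kappa+1} -> R^{2 kappa + 1}: [Qx]_l = sum_{i+j=l} x_ij."""
    k = x.shape[0] - 1
    pi = np.zeros(2 * k + 1)
    for i in range(k + 1):
        for j in range(k + 1):
            pi[i + j] += x[i, j]
    return pi

B = rng.standard_normal((kappa + 1, kappa + 1))
x = B @ B.T                                # a random PSD matrix
pi = Q(x)

for tau in rng.standard_normal(100):
    m = tau ** np.arange(kappa + 1)        # the "moment" vector [1; tau; ...; tau^kappa]
    val = np.polyval(pi[::-1], tau)        # sum_l pi_l tau^l (polyval wants high degree first)
    assert np.isclose(val, m @ x @ m)      # the regrouping identity
    assert val >= -1e-9                    # nonnegativity on the axis
```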


We claim that this representation is essentially strictly feasible. Indeed, let p̄ = [1; 1; ...; 1] ∈ R^{κ+1} and q̄ = 0. Then

π_{p̄,q̄}(τ) = (1+τ²)^κ [1 + τ²/(1+τ²) + (τ²/(1+τ²))² + ... + (τ²/(1+τ²))^κ] = Σ_{i=0}^{κ} c_i τ^{2i},  c_i > 0 ∀i,

implying that with x̄ = Diag{c_0, c_1, ..., c_κ} ∈ S^{κ+1} one has

P[p̄; q̄] = Qx̄ & x̄ ≻ 0.

That is, p̄, q̄, and x̄ satisfy all constraints in (5.7.37) and strictly satisfy the non-polyhedral constraint x ⪰ 0, as required by essentially strict feasibility.

3⁰. Now we are done: by its origin, Z is a convex compact set, and as such it is closed, implying by duality that

Z = {z ∈ R^{κ+1} : p^T z − q ≥ 0 ∀[p; q] ∈ P}
  = {z ∈ R^{κ+1} : inf_{p,q,x} {p^T z − q : P[p; q] − Qx = 0, x ⪰ 0} ≥ 0}   [by (5.7.37)]
  = {z ∈ R^{κ+1} : ∃λ ∈ R^{2κ+1} : P^T λ = [z; −1], Q^*λ ⪰ 0}
    [by semidefinite duality; λ ↦ Q^*λ : R^{2κ+1} → S^{κ+1} is the conjugate of x ↦ Qx : S^{κ+1} → R^{2κ+1}],

and we arrive at the desired SDR of Z.

5.7.4.2 Verification of calculus rules


1. [Restriction on a K-representable set] Let (5.7.17) represent (F, X). Suppose that [t; g; x] can be augmented by u, v to solve (5.7.19). Then (t, g, x, u) solves (5.7.17), implying that [t; g; x] ∈ F[F, X] by the definition of K-representability of (F, X), and (x, v) solves (5.7.18), implying that x ∈ Y. Taken together, these inclusions clearly imply that [t; g; x] ∈ F[F̄, Z], as required in item (i) of the definition of a K-representation of (F̄, Z).
Next, when x ∈ Z, the triple [t := ⟨F(x), x⟩; g := F(x); x] can be augmented by u to solve (5.7.17) (by item (ii) of the definition of a K-representation of (F, X)), and because x ∈ Y, x can be augmented by v to solve (5.7.18). Thus, t := ⟨F(x), x⟩, g := F(x), x can be augmented by (u, v) to solve (5.7.19), as required in item (ii) of K-representability of (F̄, Z). Thus, (5.7.19) indeed represents (F̄, Z). A completely similar reasoning shows that if (5.7.17) almost represents (F, X), then (5.7.19) almost represents (F̄, Z).
2. [Direct summation] When x = [x_1; ...; x_K] ∈ X and g = F(x) = [g_1; ...; g_K], g_k = F_k(x_k), t_k = ⟨x_k, g_k⟩ and t = Σ_k t_k = ⟨g, x⟩, there exist u_k such that relations (5.7.21.a) take place (by item (ii) of the definition of representation as applied to the representations in (5.7.20)), and (5.7.21.b) takes place as well, as required in item (ii) of the definition of a representation of (F, X). On the other hand, when x = [x_1; ...; x_K], g = [g_1; ...; g_K], and t can be augmented by u_k and t_k to solve (5.7.21.a) and (5.7.21.b), we have [t_k; g_k; x_k] ∈ F[F_k, X_k], k ≤ K, whence for every y = [y_1; ...; y_K] ∈ X it holds

t_k − ⟨g_k, y_k⟩ ≥ ⟨F_k(y_k), x_k − y_k⟩, k ≤ K.

Summing these inequalities over k, we get

t − ⟨g, y⟩ ≥ ⟨F(y), x − y⟩ ∀y ∈ X,

that is, [t; g; x] ∈ F[F, X], as required in item (i) of the definition of a representation of (F, X).
The above reasoning, with evident modifications, shows the claim in the case of almost representations.

3. [Taking conic combinations] Suppose that t, g, x can be augmented by u_k, t_k, g_k, k ≤ K, to solve (5.7.23). Then [t_k; g_k; x] ∈ F[F_k, X], implying that

t_k − ⟨g_k, y⟩ ≥ ⟨F_k(y), x − y⟩ ∀y ∈ X;

multiplying both sides by α_k and summing up over k, we get

t − ⟨g, y⟩ ≥ ⟨F(y), x − y⟩ ∀y ∈ X,

that is, [t; g; x] ∈ F[F, X]. On the other hand, given x ∈ X, let us set g_k = F_k(x) and t_k = ⟨F_k(x), x⟩.
Since the k-th conic constraint in (5.7.23) represents (F_k, X), there exist u_k, k ≤ K, such that all relations (5.7.23.a) take place. Setting t = Σ_k α_k t_k, g = Σ_k α_k g_k, we, on one hand, satisfy (5.7.23.b-c), and, on the other hand, obtain

t = Σ_k α_k ⟨F_k(x), x⟩ = ⟨F(x), x⟩,  g = Σ_k α_k g_k = F(x),

the bottom line being that [⟨F(x), x⟩; F(x); x] can be augmented by u_k, t_k, g_k to solve (5.7.23). Thus, (5.7.23) indeed is a K-representation of (F, X).
The above reasoning, with evident modifications, works in the case of almost representations.
4. [Affine substitution of variables] Assume that τ, γ, ξ, g, t, u solve (5.7.25). Then t, g, x := Aξ + a, u satisfy

Xx + Gg + tT + Uu ≤_K a,

implying that x ∈ X and

t − ⟨g, y⟩ ≥ ⟨F(y), x − y⟩ ∀y ∈ X.
Recall that x = Aξ + a, x ∈ X, implies that ξ ∈ Ξ. Now, when η ∈ Ξ, setting y = Aη + a, we have y ∈ X and

τ − ⟨γ, η⟩ = t − ⟨g, a⟩ − ⟨A^T g, η⟩ = t − ⟨g, y⟩ ≥ ⟨F(y), x − y⟩ = ⟨F(Aη + a), A(ξ − η)⟩ = ⟨Φ(η), ξ − η⟩,

and since η ∈ Ξ is arbitrary, we get [τ; γ; ξ] ∈ F[Φ, Ξ], as required in item (i) of the definition of a representation of (Φ, Ξ). On the other hand, when ξ ∈ Ξ, γ = Φ(ξ), and τ = ⟨Φ(ξ), ξ⟩, set x = Aξ + a, so that x ∈ X. Next, setting g = F(x), t = ⟨F(x), x⟩, we obtain γ = A^T g and

τ = ⟨F(x), x − a⟩ = t − ⟨g, a⟩.

Besides this, by the origin of t, g, x and item (ii) of the definition of a K-representation, as applied to (5.7.24), there exists u such that t, g, x, u satisfy (5.7.24). The bottom line is that ξ, γ, τ can be augmented by t, g, u to solve (5.7.25), that is, (5.7.25) meets item (ii) of the definition of a K-representation of (Φ, Ξ).
The above reasoning, with evident modifications, works in the case of almost representations.

5.7.4.3 Verifying (5.7.30) and (5.7.31)


A. Let us prove that for all u ∈ U = [0, U], v ∈ V = [0, V] one has

ψ(u, v) = −u/(u+v+1)
= min_{f,t,s} { fv + t : 1 ≥ f ≥ 0, 0 ≤ s ≤ √(uf), s + 1/(u+1) ≤ 1, t − f ≥ (1−s)² − 1, t ≤ 1 }
= min_{f,t,s,τ} { fv + t :
    1 ≥ f ≥ 0, s ≥ 0, t ≤ 1, τ ≥ 0, s + τ ≤ 1,
    [2s; u − f; u + f] ∈ L³              [⇔ s² ≤ uf when u + f ≥ 0]
    [2(1−s); t − f; t − f + 2] ∈ L³      [⇔ (1−s)² ≤ t − f + 1 when t − f + 2 ≥ 0]
    [2; u − τ + 1; u + τ + 1] ∈ L³       [⇔ (u+1)τ ≥ 1 when u + τ + 1 ≥ 0] }.

Indeed, for u ∈ U and v ∈ V we have

min_{f,t,s} { fv + t : 1 ≥ f ≥ 0, 0 ≤ s ≤ √(uf), s + 1/(u+1) ≤ 1, t − f ≥ (1−s)² − 1, t ≤ 1 }
= min_f { fv + f − 2s̄(f) + s̄(f)² : 1 ≥ f ≥ 0, s̄(f) = min[u/(u+1), √(uf)] }
= min[ min_{0≤f≤u/(u+1)²} {fv + f − 2√(uf) + uf}, min_{1≥f≥u/(u+1)²} {fv + f − 2u/(u+1) + u²/(u+1)²} ]
= min[ −u/(u+v+1), u(v−(u+1))/(u+1)² ]
= u · min[ −1/(v+u+1), (v−(u+1))/(u+1)² ]
= u/(v+u+1) · min[ −1, (v²−(u+1)²)/(u+1)² ] = ψ(u, v),

as claimed.
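The elimination of t and s used in the chain above (t = (1 − s)² − 1 + f at the optimum, s = s̄(f)) also makes the identity easy to sanity-check by brute force over f (a Python sketch; the values of u and v are arbitrary):

```python
import numpy as np

def rhs_min(u, v, grid=20001):
    """Brute-force the minimum over f in [0,1], with t and s eliminated analytically:
    s = s_bar(f) = min(u/(u+1), sqrt(u f)), t = (1-s)^2 - 1 + f."""
    f = np.linspace(0.0, 1.0, grid)
    s = np.minimum(u / (u + 1.0), np.sqrt(u * f))
    return float(np.min(f * v + f - 2.0 * s + s**2))

u, v = 0.7, 1.3
assert abs(rhs_min(u, v) - (-u / (u + v + 1.0))) < 1e-4   # matches psi(u, v)
```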
B. Now let us show that for all u ∈ U = [0, U], v ∈ V = [0, V]

−ψ(u, v) = u/(u+v+1)
= min_{f,t,s} { fu + t : 1 ≥ f ≥ 0, 0 ≤ s ≤ √((v+1)f), s ≤ 1, 1 ≥ t ≥ (1−s)² }
= min_{f,t,s} { fu + t :
    1 ≥ f ≥ 0, 0 ≤ s ≤ 1, t ≤ 1,
    [2s; v + 1 − f; v + 1 + f] ∈ L³      [⇔ s² ≤ (v+1)f when v + 1 + f ≥ 0]
    [2(1−s); t − 1; t + 1] ∈ L³          [⇔ (1−s)² ≤ t] }.

Indeed, for u ∈ U and v ∈ V we have

min_{f,t,s} { fu + t : 1 ≥ f ≥ 0, 0 ≤ s ≤ √((v+1)f), s ≤ 1, 1 ≥ t ≥ (1−s)² }
= min_{f,s} { fu + (1−s)² : 1 ≥ f ≥ 0, 0 ≤ s ≤ √((v+1)f), s ≤ 1 }
= min_f { fu + (1−s̄(f))² : 1 ≥ f ≥ 0, s̄(f) = min[√((v+1)f), 1] }
= min[ min_{0≤f≤1/(v+1)} {fu + 1 − 2√((v+1)f) + (v+1)f}, min_{1≥f≥1/(v+1)} {fu} ]
= min[ u/(u+v+1), u/(v+1) ] = u/(u+v+1) = −ψ(u, v),

as claimed. □

5.8 Fast First Order algorithms for Smooth Convex Minimization
5.8.1 Fast Gradient Methods for Smooth Composite minimization
The fact that problem (CP) with smooth convex objective can be solved by a gradient-type algorithm at the rate O(1/T²) was discovered by Yu. Nesterov in 1983 (the celebrated Nesterov optimal algorithm for large-scale smooth convex optimization [43]). A significantly more recent development here, again due to Yu. Nesterov [48], is the extension of the O(1/T²)-converging algorithm from problems (CP) with smooth objective (smoothness is a rare commodity!) to the so-called composite case, where the objective in (CP) is the sum of a smooth convex component and a nonsmooth "easy to handle" convex component. We are about to present the related results of Yu. Nesterov; our exposition follows [49] (modulo minor modifications aimed at handling inexact proximal mappings).

5.8.1.1 Problem formulation


The general problem of composite minimization is

φ_* = min_{x∈X} { φ(x) := f(x) + Ψ(x) },   (5.8.1)
FAST FIRST ORDER MINIMIZATION AND SADDLE POINTS 511

where X is a closed convex set in a Euclidean space E equipped with norm k · k (not necessarily the
Euclidean one), Ψ(x) : X → R ∪ {+∞} is a lower semicontinuous convex function which is finite on the
relative interior of X, and f : X → R is a convex function. In this section, we focus on the smooth
composite case, where f has Lipschitz continuous gradient:

‖∇f(x) − ∇f(y)‖_* ≤ L_f ‖x − y‖,  x, y ∈ X;   (5.8.2)

here, as always, ‖·‖_* is the norm conjugate to ‖·‖. Condition (5.8.2) ensures that

f(y) − f(x) − ⟨∇f(x), y − x⟩ ≤ (L_f/2) ‖x − y‖² ∀x, y ∈ X.   (5.8.3)

Note that we do not assume here that X is bounded; however, we do assume from now on that (5.8.1) is solvable, and denote by x_* an optimal solution to the problem.

5.8.1.2 Composite prox-mapping


We assume that X is equipped with a DGF ω(x) compatible with ‖·‖. As always, we set

V_x(y) = ω(y) − ω(x) − ⟨ω′(x), y − x⟩   [x, y ∈ X]

and denote by x_ω the minimizer of ω(·) on X.
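For instance, for the entropy DGF ω(x) = Σ_i x_i ln x_i on the probability simplex, V_x(y) is the Kullback–Leibler divergence KL(y‖x), and compatibility of ω with ‖·‖₁ (1-strong convexity) is precisely Pinsker's inequality V_x(y) ≥ ½‖x − y‖₁², which is easy to probe numerically (a Python sketch illustrating this particular setup):

```python
import numpy as np

rng = np.random.default_rng(1)

def V(x, y):
    """Bregman distance of the entropy DGF on the simplex: V_x(y) = KL(y || x)."""
    return float(np.sum(y * np.log(y / x)))

for _ in range(100):
    x = rng.random(5) + 1e-3; x /= x.sum()
    y = rng.random(5) + 1e-3; y /= y.sum()
    # 1-strong convexity of omega w.r.t. ||.||_1 (Pinsker's inequality)
    assert V(x, y) >= 0.5 * np.sum(np.abs(x - y))**2 - 1e-12
```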


Given a convex lower semicontinuous function h(x) : X → R ∪ {+∞} which is finite on the relative interior of X, we know from Lemma 5.9.1 that the optimization problem

Opt = min_{x∈X} { h(x) + ω(x) }

is solvable with the unique solution z = argmin_{x∈X} [h(x) + ω(x)] fully characterized by the property

z ∈ X & { ∃h′(z) ∈ ∂_X h(z) : ⟨h′(z) + ω′(z), w − z⟩ ≥ 0 ∀w ∈ X }.

Given ε ≥ 0 and h as above, let us set

argmin^ε_{x∈X} { h(x) + ω(x) } = { z ∈ X : ∃h′(z) ∈ ∂_X h(z) : ⟨h′(z) + ω′(z), w − z⟩ ≥ −ε ∀w ∈ X }.

Note that for h as above, the set argmin^ε_{x∈X} { h(x) + ω(x) } is nonempty, since it clearly contains the point argmin_{x∈X} { h(x) + ω(x) }.
The algorithm we are about to develop requires the ability to find, given ξ ∈ E and α ≥ 0, a point from the set

argmin^ε_{x∈X} { ⟨ξ, x⟩ + αΨ(x) + ω(x) },

where ε ≥ 0 is the construction's parameter²³; we refer to such a computation as computing, within accuracy ε, a composite prox-mapping, ξ, α being the arguments of the mapping. The "ideal" situation here is when the composite prox-mapping

(ξ, α) ↦ argmin_{x∈X} { ⟨ξ, x⟩ + αΨ(x) + ω(x) }

is easy to compute, either by an explicit formula or by a cheap computational procedure, as is the case in the following instructive example.

²³ In the usual Composite FGM, ε = 0; we allow for ε > 0 in order to eventually incorporate the case where X is given by a Linear Minimization Oracle, see Corollary 5.8.2.

Example: LASSO. An instructive example of a composite minimization problem where the composite prox-mapping indeed is easy to compute is the LASSO problem

min_{x∈E} { λ‖x‖_E + ‖A(x) − b‖²₂ },   (5.8.4)

where E is a Euclidean space, ‖·‖_E is a norm on E, x ↦ A(x) : E → R^m is a linear mapping, b ∈ R^m, and λ > 0 is a penalty coefficient. To represent the LASSO problem in the form of (5.8.1), it suffices to set

f(x) = ‖A(x) − b‖²₂,  Ψ(x) = λ‖x‖_E,  X = E.
The most important cases of the LASSO problem are:
A. Sparse recovery: E = R^n, ‖·‖_E = ‖·‖₁ or, more generally, E = R^{k₁} × ... × R^{k_n} and ‖·‖_E is the associated block ℓ₁ norm. In this case, thinking of x as the unknown input to a linear system, and of b as the observed (perhaps corrupted by noise) output, (5.8.4) is used to carry out a trade-off between the "model fitting term" ‖A(x) − b‖²₂ and the (block) sparsity of x: usually, the larger λ is, the fewer nonzero entries (in the ‖·‖₁ case) or nonzero blocks (in the block ℓ₁ case) the optimal solution to (5.8.4) has, and the larger the model fitting term is.
Note that in the case in question, computing the composite prox-mapping is really easy, provided that the proximal setup in use is either
(a) the Euclidean setup, or
(b) the ℓ₁/ℓ₂ setup with DGF (5.3.11).
Indeed, in both cases computing the composite prox-mapping reduces to solving an optimization problem of the form

min_{x^j ∈ R^{k_j}, 1≤j≤n} { Σ_j [ α‖x^j‖₂ + [b^j]^T x^j ] + ( Σ_j ‖x^j‖₂^p )^{2/p} },   (5.8.5)

where α > 0 and 1 < p ≤ 2. This problem has a closed form solution (find it!) which can be computed in O(dim E) a.o.
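For instance, in the Euclidean case p = 2 the last term of (5.8.5) is ‖x‖₂² and the problem separates across blocks; each block reduces to minimizing α‖x^j‖₂ + ⟨b^j, x^j⟩ + ‖x^j‖₂², whose solution is a block soft-thresholding of −b^j (a Python sketch of this particular case; the general 1 < p < 2 case also admits a closed form, not reproduced here):

```python
import numpy as np

def block_prox_p2(blocks_b, alpha):
    """Closed-form solution of min sum_j [alpha*||x^j||_2 + <b^j, x^j>] + ||x||_2^2
    (the p = 2 case of the prox problem; it is separable across blocks)."""
    out = []
    for b in blocks_b:
        nb = np.linalg.norm(b)
        t = max(nb - alpha, 0.0) / 2.0      # optimal radius of x^j along the direction -b
        out.append(-t * b / nb if nb > 0 else np.zeros_like(b))
    return out
```

Per block, writing x^j = −r b^j/‖b^j‖₂ with r ≥ 0 reduces the problem to minimizing αr − r‖b^j‖₂ + r², whence r = max(‖b^j‖₂ − α, 0)/2; the total cost is O(dim E) a.o., in line with the claim above.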
B. Low rank recovery: E = M^{µν}, ‖·‖_E = ‖·‖_nuc is the nuclear norm. The rationale behind this setting is similar to the one for sparse recovery, but now the trade-off is between the model fitting and the rank of the optimal solution (which is a block-diagonal matrix) to (5.8.4).
In the case in question, both the Euclidean and the Nuclear norm proximal setups can be used (in the latter case, one should take X = E and use the DGF (5.3.15)), and computing the composite prox-mapping reduces to solving the optimization problem

min_{y=Diag{y¹,...,yⁿ}∈M^{µν}} { Σ_{j=1}^n Tr(y^j [b^j]^T) + α‖σ(y)‖₁ + ( Σ_{i=1}^m σ_i(y)^p )^{2/p} },  m = Σ_j µ_j,   (5.8.6)

where b = Diag{b¹, ..., bⁿ} ∈ M^{µν}, α > 0 and p ∈ (1, 2]. Same as in the case of the usual prox-mapping associated with the Nuclear norm setup (see section 5.3.3), after computing the svd of b the problem reduces to the similar one with diagonal b^j's: b^j = [Diag{β₁^j, ..., β_{µ_j}^j}, 0_{µ_j×(ν_j−µ_j)}], where β_k^j ≥ 0. By the same symmetry argument as in section 5.3.3, in the case of diagonal b^j, (5.8.6) has an optimal solution with diagonal y^j's. Denoting by s_k^j, 1 ≤ k ≤ µ_j, the diagonal entries in the blocks y^j of this solution, (5.8.6) reduces to the problem

min_{ {s_k^j : 1≤j≤n, 1≤k≤µ_j} } { Σ_{1≤j≤n, 1≤k≤µ_j} [ s_k^j β_k^j + α|s_k^j| ] + ( Σ_{1≤j≤n, 1≤k≤µ_j} |s_k^j|^p )^{2/p} },

which is of exactly the same structure as (5.8.5) and thus admits a closed form solution which can be computed at the cost of O(m) a.o. We see that in the case in question computing the composite prox-mapping is no more involved than computing the usual prox-mapping and reduces, essentially, to finding the svd of a matrix from M^{µν}.
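In the single-block Euclidean case p = 2, the reduction just described amounts to soft-thresholding the singular values of b (a Python sketch of this particular case; the function name is ours):

```python
import numpy as np

def matrix_prox_p2(b, alpha):
    """min_Y <b, Y> + alpha*||sigma(Y)||_1 + ||sigma(Y)||_2^2 via the svd of b:
    each singular value solves the scalar problem min_s s*beta_k + alpha*|s| + s^2,
    whose solution is s_k = -max(beta_k - alpha, 0)/2."""
    U, beta, Vt = np.linalg.svd(b, full_matrices=False)
    s = -np.maximum(beta - alpha, 0.0) / 2.0
    return U @ np.diag(s) @ Vt
```

Note that all the work is in the svd of b, in line with the concluding remark above.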

5.8.1.3 Fast Composite Gradient minimization: Algorithm and Main Result


In order to simplify our presentation, we assume that the constant L_f > 0 appearing in (5.8.2) is known in advance.
The algorithm we are about to develop depends on a sequence {L_t}_{t=0}^∞ of reals such that

0 < L_t ≤ L_f,  t = 0, 1, ...   (5.8.7)

In order to ensure O(1/t²) convergence, the sequence should satisfy certain conditions which can be verified online; to meet these conditions, the sequence can be adjusted online. We shall discuss this issue in depth later.
Let us fix two tolerances ε ≥ 0, ε̄ ≥ 0. The algorithm is as follows:
• Initialization: Set y₀ = x_ω := argmin_{x∈X} ω(x), ψ₀(w) = V_{x_ω}(w), and select y₀⁺ ∈ X such that φ(y₀⁺) ≤ φ(y₀). Set A₀ = 0.
φ(y0+ ) ≤ φ(y0 ). Set A0 = 0.
• Step t = 0, 1, ...: Given ψ_t(·), y_t⁺ and A_t ≥ 0,

1. Compute
   z_t ∈ argmin^ε_{w∈X} ψ_t(w).   (5.8.8)

2. Find the positive root a_{t+1} of the quadratic equation

   L_t a²_{t+1} = A_t + a_{t+1}  ⇔  a_{t+1} = 1/(2L_t) + √(1/(4L_t²) + A_t/L_t)   (5.8.9)

   and set
   A_{t+1} = A_t + a_{t+1},  τ_t = a_{t+1}/A_{t+1}.   (5.8.10)
   Note that τ_t ∈ (0, 1].

3. Set
   x_{t+1} = τ_t z_t + (1 − τ_t) y_t⁺   (5.8.11)
   and compute f(x_{t+1}), ∇f(x_{t+1}).

4. Compute
   x̂_{t+1} ∈ argmin^{ε̄}_{w∈X} [ a_{t+1} [⟨∇f(x_{t+1}), w⟩ + Ψ(w)] + V_{z_t}(w) ].   (5.8.12)

5. Set
   (a) y_{t+1} = τ_t x̂_{t+1} + (1 − τ_t) y_t⁺,
   (b) ψ_{t+1}(w) = ψ_t(w) + a_{t+1} [f(x_{t+1}) + ⟨∇f(x_{t+1}), w − x_{t+1}⟩ + Ψ(w)],   (5.8.13)

   compute f(y_{t+1}), set

   δ_t = (1/A_{t+1}) V_{z_t}(x̂_{t+1}) + ⟨∇f(x_{t+1}), y_{t+1} − x_{t+1}⟩ + f(x_{t+1}) − f(y_{t+1})   (5.8.14)

   and select somehow y⁺_{t+1} ∈ X in such a way that φ(y⁺_{t+1}) ≤ φ(y_{t+1}).
   Step t is completed, and we pass to step t + 1.

The approximate solution generated by the method in the course of steps 0, 1, ..., t is y⁺_{t+1}.
In the sequel, we refer to the just described algorithm as the Fast Composite Gradient Method, nicknamed FGM.
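For concreteness, here is a minimal rendering of FGM for the Euclidean proximal setup on X = Rⁿ (ω(x) = ½‖x‖₂², so V_x(y) = ½‖y − x‖₂² and x_ω = 0) with Ψ(x) = λ‖x‖₁, exact prox-mappings (ε = ε̄ = 0) and the safe choice L_t ≡ L_f; in this setup both prox computations (5.8.8) and (5.8.12) are closed-form soft-thresholdings (a Python sketch; all names are ours):

```python
import numpy as np

def soft_threshold(v, tau):
    """argmin_w 0.5*||w||^2 + <-v, w> + tau*||w||_1, i.e. the l1-prox
    for the Euclidean DGF."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fgm_composite(f, grad_f, lam, Lf, n, T):
    phi = lambda u: f(u) + lam * np.sum(np.abs(u))
    y_plus = np.zeros(n)              # y_0^+ = x_omega = argmin 0.5||x||^2
    A = 0.0
    g_acc = np.zeros(n)               # accumulates sum_i a_i * grad f(x_i)
    for _ in range(T):
        # (5.8.8): z_t = argmin_w 0.5||w||^2 + <g_acc, w> + A*lam*||w||_1
        z = soft_threshold(-g_acc, A * lam)
        a = 1.0/(2*Lf) + np.sqrt(1.0/(4*Lf**2) + A/Lf)      # (5.8.9) with L_t = L_f
        A_next = A + a
        tau = a / A_next                                    # (5.8.10)
        x = tau * z + (1.0 - tau) * y_plus                  # (5.8.11)
        g = grad_f(x)
        # (5.8.12): argmin_w a[<g, w> + lam||w||_1] + 0.5||w - z||^2
        x_hat = soft_threshold(z - a * g, a * lam)
        y = tau * x_hat + (1.0 - tau) * y_plus              # (5.8.13.a)
        g_acc += a * g                                      # updates psi_t, (5.8.13.b)
        A = A_next
        y_plus = y if phi(y) <= phi(y_plus) else y_plus     # monotone choice of y_t^+
    return y_plus
```

With L_t ≡ L_f, δ_t ≥ 0 automatically (by (5.8.3)), so the O(1/T²) bound (5.8.15) applies as is.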

Theorem 5.8.1 FGM is well defined and ensures that z_t ∈ X, x_{t+1}, x̂_{t+1}, y_t, y_t⁺ ∈ X for all t ≥ 0. Assuming that (5.8.7) holds true and that δ_t ≥ 0 for all t (the latter definitely is the case when L_t = L_f for all t), one has

y_t, y_t⁺ ∈ X,  φ(y_t⁺) − φ_* ≤ φ(y_t) − φ_* ≤ A_t^{-1} [V_{x_ω}(x_*) + 2εt] ≤ (4L_f/t²) [V_{x_ω}(x_*) + 2εt]   (5.8.15)

for t = 1, 2, ...
A step of the algorithm, modulo the updates y_t ↦ y_t⁺, reduces to computing the value and the gradient of f at a point, computing the value of f at another point, and computing within accuracy ε two prox-mappings.

Comments. Let us look at what happens when ε = 0. The O(1/t²) efficiency estimate (5.8.15) is conditional, the condition being that the quantities δ_t defined by (5.8.14) are nonnegative for all t and that 0 < L_t ≤ L_f for all t. Invoking (5.8.3), we see that at every step t, setting L_t = L_f ensures that δ_t ≥ 0. In other words, we can safely use L_t ≡ L_f for all t. However, from the description of the A_t's it follows that the smaller the L_t's, the larger the A_t's, and thus the better the efficiency estimate (5.8.15). This observation suggests more "aggressive" policies aimed at operating with L_t's much smaller than L_f. A typical policy of this type is as follows: at the beginning of step t, we have at our disposal a trial value L_t⁰ ≤ L_f of L_t. We start with carrying out step t with L_t⁰ in the role of L_t; if we are lucky and this choice of L_t results in δ_t ≥ 0, we pass to step t + 1 and reduce the trial value of L, say, set L⁰_{t+1} = L_t⁰/2. If with L_t = L_t⁰ we get δ_t < 0, we rerun step t with an increased value of L_t, say, with L_t = min[2L_t⁰, L_f] or even with L_t = L_f. If this new value of L_t still does not result in δ_t ≥ 0, we rerun step t with a larger value of L_t, say, with L_t = min[4L_t⁰, L_f], and proceed in this fashion until arriving at L_t ≤ L_f resulting in δ_t ≥ 0. We can maintain whatever upper bound k ≥ 1 we wish on the number of trials at a step, just by setting L_t = L_f at the k-th trial. As far as the evolution of the initial trial values L_t⁰ in time is concerned, a reasonable "aggressive" policy could be: "set L⁰_{t+1} to a fixed fraction (say, 1/2) of the last value of L_t (the one resulting in δ_t ≥ 0) tested at step t."
In spite of the fact that every trial requires some computational effort, the outlined online adjustment of the L_t's usually significantly outperforms the "safe" policy L_t ≡ L_f.

5.8.1.4 Proof of Theorem 5.8.1

1⁰. Observe that by construction

ψ_t(·) = ω(·) + ℓ_t(·) + α_t Ψ(·)   (5.8.16)

with nonnegative α_t and affine ℓ_t(·), whence, taking into account that τ_t ∈ (0, 1], the algorithm is well defined and maintains the inclusions z_t ∈ X, x_t, x̂_{t+1}, y_t, y_t⁺ ∈ X.
Besides this, (5.8.16) and (5.8.12) show that computing z_t, same as computing x̂_{t+1}, reduces to computing, within accuracy ε and ε̄ respectively, composite prox-mappings, and the remaining computational effort at step t is dominated by the necessity to compute f(x_{t+1}), ∇f(x_{t+1}), and f(y_{t+1}), as stated in the Theorem.

2⁰. Observe that by construction

A_t = Σ_{i=1}^t a_i   (5.8.17)

(as always, the empty sum is 0), and

1/(2τ_t² A_{t+1}) = A_{t+1}/(2a²_{t+1}) = (A_t + a_{t+1})/(2a²_{t+1}) = L_t/2,  t ≥ 0   (5.8.18)

(see (5.8.9) and (5.8.10)). Observe also that the initialization rules, (5.8.17) and (5.8.13.b), imply that whenever t ≥ 1, setting λ_i^t = a_i/A_t we have

ψ_t(w) = Σ_{i=1}^t a_i [⟨∇f(x_i), w − x_i⟩ + f(x_i) + Ψ(w)] + V_{x_ω}(w)
       = A_t Σ_{i=1}^t λ_i^t [⟨∇f(x_i), w − x_i⟩ + f(x_i) + Ψ(w)] + V_{x_ω}(w),   (5.8.19)
Σ_{i=1}^t λ_i^t = Σ_{i=1}^t a_i/A_t = 1.

3⁰. We need the following simple

Lemma 5.8.1 Let h : X → R ∪ {+∞} be a lower semicontinuous convex function which is finite on the relative interior of X, and let

w̄ ∈ argmin^ε_{x∈X} { h(x) + ω(x) }.

Then for all w ∈ X we have

h(w) + ω(w) ≥ h(w̄) + ω(w̄) + V_{w̄}(w) − ε.   (5.8.20)

Proof. Invoking the definition of argmin^ε, for some h′(w̄) ∈ ∂_X h(w̄) we have

⟨h′(w̄) + ω′(w̄), w − w̄⟩ ≥ −ε ∀w ∈ X,

implying by convexity that for all w ∈ X:

h(w) + ω(w) ≥ [h(w̄) + ⟨h′(w̄), w − w̄⟩] + ω(w)
            = [h(w̄) + ω(w̄) + ⟨h′(w̄) + ω′(w̄), w − w̄⟩] + V_{w̄}(w)
            ≥ h(w̄) + ω(w̄) + V_{w̄}(w) − ε,

as required. □
4⁰. Denote ψ_t^* = ψ_t(z_t). Let us prove by induction that

B_t + ψ_t^* ≥ A_t φ(y_t⁺), t ≥ 0,  B_t = (ε + ε̄)t.   (*_t)

For t = 0 this inequality is valid, since by the initialization rules ψ₀(·) ≥ 0, B₀ = 0, and A₀ = 0. Suppose that (*_t) holds true, and let us prove that so is (*_{t+1}).
Taking into account (5.8.8) and the structure of ψ_t(·) as presented in (5.8.16), Lemma 5.8.1 as applied with w̄ = z_t and h(x) = ψ_t(x) − ω(x) = ℓ_t(x) + α_t Ψ(x) yields, for all w ∈ X, the first inequality in the following chain:

ψ_t(w) ≥ ψ_t^* + V_{z_t}(w) − ε
       ≥ A_t φ(y_t⁺) + V_{z_t}(w) − ε − B_t   [by (*_t)]
       ≥ A_t [f(x_{t+1}) + ⟨∇f(x_{t+1}), y_t⁺ − x_{t+1}⟩ + Ψ(y_t⁺)] + V_{z_t}(w) − ε − B_t   [since f is convex].

Invoking (5.8.13.b), the resulting inequality implies that for all w ∈ X it holds

ψ_{t+1}(w) ≥ V_{z_t}(w) + A_t [f(x_{t+1}) + ⟨∇f(x_{t+1}), y_t⁺ − x_{t+1}⟩ + Ψ(y_t⁺)]
           + a_{t+1} [f(x_{t+1}) + ⟨∇f(x_{t+1}), w − x_{t+1}⟩ + Ψ(w)] − ε − B_t.   (5.8.21)

By (5.8.11) and (5.8.10) we have A_t(y_t⁺ − x_{t+1}) − a_{t+1} x_{t+1} = −a_{t+1} z_t, whence (5.8.21) implies that

ψ_{t+1}(w) ≥ V_{z_t}(w) + A_{t+1} f(x_{t+1}) + A_t Ψ(y_t⁺) + a_{t+1} [⟨∇f(x_{t+1}), w − z_t⟩ + Ψ(w)] − ε − B_t   (5.8.22)

for all w ∈ X. Thus,

ψ*_{t+1} = min_{w∈X} ψ_{t+1}(w)
≥ min_{w∈X} { V_{z_t}(w) + A_{t+1} f(x_{t+1}) + A_t Ψ(y_t⁺) + a_{t+1} [⟨∇f(x_{t+1}), w − z_t⟩ + Ψ(w)] } − ε − B_t
  [by (5.8.22); the function under the min is, up to a constant, of the form h(w) + ω(w) with convex lower semicontinuous h]
≥ V_{z_t}(x̂_{t+1}) + A_{t+1} f(x_{t+1}) + A_t Ψ(y_t⁺) + a_{t+1} [⟨∇f(x_{t+1}), x̂_{t+1} − z_t⟩ + Ψ(x̂_{t+1})] − ε − ε̄ − B_t
  [by (5.8.12) and Lemma 5.8.1; note that ε + ε̄ + B_t = B_{t+1}]
= V_{z_t}(x̂_{t+1}) + A_{t+1} [ (A_t/A_{t+1}) Ψ(y_t⁺) + (a_{t+1}/A_{t+1}) Ψ(x̂_{t+1}) ]
  + A_{t+1} f(x_{t+1}) + a_{t+1} ⟨∇f(x_{t+1}), x̂_{t+1} − z_t⟩ − B_{t+1}
  [A_t + a_{t+1} = A_{t+1} by (5.8.10)]
≥ V_{z_t}(x̂_{t+1}) + A_{t+1} f(x_{t+1}) + a_{t+1} ⟨∇f(x_{t+1}), x̂_{t+1} − z_t⟩ + A_{t+1} Ψ(y_{t+1}) − B_{t+1}
  [(A_t/A_{t+1}) Ψ(y_t⁺) + (a_{t+1}/A_{t+1}) Ψ(x̂_{t+1}) ≥ Ψ(y_{t+1}) by (5.8.13.a), (5.8.10) and convexity of Ψ],

that is,

ψ*_{t+1} ≥ V_{z_t}(x̂_{t+1}) + A_{t+1} f(x_{t+1}) + a_{t+1} ⟨∇f(x_{t+1}), x̂_{t+1} − z_t⟩ + A_{t+1} Ψ(y_{t+1}) − B_{t+1}.   (5.8.23)

Now note that x̂_{t+1} − z_t = (y_{t+1} − x_{t+1})/τ_t by (5.8.11) and (5.8.13.a), whence (5.8.23) implies the first inequality in the following chain:

ψ*_{t+1} ≥ V_{z_t}(x̂_{t+1}) + A_{t+1} f(x_{t+1}) + (a_{t+1}/τ_t) ⟨∇f(x_{t+1}), y_{t+1} − x_{t+1}⟩ + A_{t+1} Ψ(y_{t+1}) − B_{t+1}
= V_{z_t}(x̂_{t+1}) + A_{t+1} f(x_{t+1}) + A_{t+1} ⟨∇f(x_{t+1}), y_{t+1} − x_{t+1}⟩ + A_{t+1} Ψ(y_{t+1}) − B_{t+1}   [by (5.8.10)]
= A_{t+1} [ (1/A_{t+1}) V_{z_t}(x̂_{t+1}) + f(x_{t+1}) + ⟨∇f(x_{t+1}), y_{t+1} − x_{t+1}⟩ + Ψ(y_{t+1}) ] − B_{t+1}
= A_{t+1} [φ(y_{t+1}) + δ_t] − B_{t+1}   [by (5.8.14)]
≥ A_{t+1} φ(y_{t+1}) − B_{t+1}   [since we are in the case of δ_t ≥ 0]
≥ A_{t+1} φ(y⁺_{t+1}) − B_{t+1}   [since φ(y⁺_{t+1}) ≤ φ(y_{t+1})].

The inductive step is completed.


5⁰. Observe that V_{z_t}(x̂_{t+1}) ≥ ½ ‖x̂_{t+1} − z_t‖² and that x̂_{t+1} − z_t = (y_{t+1} − x_{t+1})/τ_t, implying that V_{z_t}(x̂_{t+1}) ≥ (1/(2τ_t²)) ‖y_{t+1} − x_{t+1}‖², whence

δ_t = (1/A_{t+1}) V_{z_t}(x̂_{t+1}) + ⟨∇f(x_{t+1}), y_{t+1} − x_{t+1}⟩ + f(x_{t+1}) − f(y_{t+1})   [by (5.8.14)]
    ≥ (1/(2A_{t+1}τ_t²)) ‖y_{t+1} − x_{t+1}‖² + ⟨∇f(x_{t+1}), y_{t+1} − x_{t+1}⟩ + f(x_{t+1}) − f(y_{t+1})
    = (L_t/2) ‖y_{t+1} − x_{t+1}‖² + ⟨∇f(x_{t+1}), y_{t+1} − x_{t+1}⟩ + f(x_{t+1}) − f(y_{t+1})   [by (5.8.18)],

whence δ_t ≥ 0 whenever L_t = L_f by (5.8.3), as claimed.


6⁰. We have proved inequality (*_t) for all t ≥ 0. On the other hand,

ψ_t^* = min_{w∈X} ψ_t(w) ≤ ψ_t(x_*) ≤ A_t φ_* + V_{x_ω}(x_*),

where the concluding inequality is due to (5.8.19). The resulting inequality taken together with (*_t) yields

φ(y_t⁺) ≤ φ_* + A_t^{-1} [V_{x_ω}(x_*) + B_t],  t = 1, 2, ...   (5.8.24)

Now, from A₀ = 0, (5.8.9), and (5.8.10) it immediately follows that if L_t ≤ L for some L and all t, and if Ā_t are given by the recurrence

Ā₀ = 0;  Ā_{t+1} = Ā_t + 1/(2L) + √(1/(4L²) + Ā_t/L),   (5.8.25)

then A_t ≥ Ā_t for all t. It is immediately seen that (5.8.25) implies that Ā_t ≥ t²/(4L) for all t²⁴. Taking into account that L_t ≤ L_f for all t by the Theorem's premise, the bottom line is that A_t ≥ t²/(4L_f) for all t, which combines with (5.8.24) to imply (5.8.15). □

5.8.2 "Universal" Fast Gradient Methods

A conceptual shortcoming of the Fast Composite Gradient Method FGM presented in section 5.8.1 is that it is "tuned" to composite minimization problems (5.8.1) with smooth (i.e., with Lipschitz continuous gradient) functions f. We also know what to do when f is convex and just Lipschitz continuous: here we can use Mirror Descent. Note that Lipschitz continuity of the gradient of a convex function f and
²⁴ The simplest way to see the inequality is to set 2LĀ_t = C_t² and to note that (5.8.25) implies that C₀ = 0 and C²_{t+1} = C_t² + 1 + √(1 + 2C_t²) > C_t² + √2·C_t + 1/2 = (C_t + 1/√2)², so that C_t ≥ t/√2 and thus Ā_t ≥ t²/(4L) for all t.
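A quick numerical sanity check of the footnote's bound Ā_t ≥ t²/(4L) (Python sketch; L = 1 is an arbitrary choice):

```python
import math

L = 1.0
A = 0.0
for t in range(1, 101):
    # recurrence (5.8.25): A_{t+1} = A_t + 1/(2L) + sqrt(1/(4L^2) + A_t/L)
    A = A + 1.0/(2*L) + math.sqrt(1.0/(4*L*L) + A/L)
    assert A >= t*t/(4*L)     # the claimed lower bound A_t >= t^2/(4L)
```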

Lipschitz continuity of f itself are "two extremes" in characterizing a function's smoothness; they can be linked together by the "in-between" requirement that the gradient f′ of f be Hölder continuous with a given Hölder exponent κ ∈ [1, 2]:

‖f′(x) − f′(y)‖_* ≤ L_κ ‖x − y‖^{κ−1} ∀x, y ∈ X.   (5.8.26)

Inspired by Yu. Nesterov's recent paper [50], we are about to modify the algorithm from section 5.8.1 to make it applicable to problems of composite minimization (5.8.1) with functions f of smoothness varying between the two outlined extremes. It should be stressed that the resulting algorithm does not require a priori knowledge of the smoothness parameters of f and adjusts itself to the actual level of smoothness as characterized by the Hölder scale.
While inspired by [50], the algorithm to be developed is not completely identical to the one proposed in this reference; in particular, we are still able to work with inexact prox-mappings.

5.8.2.1 Problem formulation

We continue to consider the optimization problem

φ_* = min_{x∈X} { φ(x) := f(x) + Ψ(x) },   (5.8.1)

where
1. X is a closed convex set in a Euclidean space E equipped with a norm ‖·‖ (not necessarily the Euclidean one). Same as in section 5.8.1, we assume that X is equipped with a distance-generating function ω(x) compatible with ‖·‖;
2. Ψ(x) : X → R ∪ {+∞} is a lower semicontinuous convex function which is finite on the relative interior of X;
3. f : X → R is a convex Lipschitz function satisfying, for some κ ∈ [1, 2], L ∈ (0, ∞), and a selection f′(·) : X → E of subgradients, the relation

   ∀(x, y ∈ X) : f(y) ≤ f(x) + ⟨f′(x), y − x⟩ + (L/κ) ‖y − x‖^κ.   (5.8.27)

Observe that in section 5.8.1 we dealt with the case κ = 2 of the outlined situation. Note also that a sufficient (and, when κ = 2, necessary and sufficient) condition for (5.8.27) is Hölder continuity of the gradient of f, i.e., relation (5.8.26).
It should be stressed that the algorithm to be built does not require a priori knowledge of κ and L.
From now on we assume that (5.8.1) is solvable and denote by x∗ an optimal solution to this problem.
In the sequel, we continue to use our standard "prox-related" notation x_ω, V_x(y), same as the notation

argmin^ν_{x∈X} { h(x) + ω(x) } = { z ∈ X : ∃h′(z) ∈ ∂_X h(z) : ⟨h′(z) + ω′(z), w − z⟩ ≥ −ν ∀w ∈ X },

see p. 511.
The algorithm we are about to develop requires the ability to find, given ξ ∈ E and α ≥ 0, a point from the set

argmin^ν_{x∈X} { ⟨ξ, x⟩ + αΨ(x) + ω(x) },

where ν ≥ 0 is the construction's parameter.

5.8.2.2 Algorithm and Main Result

The Fast Universal Composite Gradient Method, nicknamed FUGM, which we are about to consider is aimed at solving (5.8.1) within a given accuracy ε > 0; we will refer to ε as the target accuracy of FUGM. On top of its target accuracy ε, FUGM is specified by three other parameters

ν ≥ 0, ν̄ ≥ 0, L > 0.

The method works in stages. The stages differ from each other only in their numbers of steps; for stage s, s = 1, 2, ..., this number is N_s = 2^s.

Single Stage. The N-step stage of FUGM is as follows.

• Initialization: Set y₀ = x_ω := argmin_{x∈X} ω(x), ψ₀(w) = V_{x_ω}(w), and select y₀⁺ ∈ X such that φ(y₀⁺) ≤ φ(y₀). Set A₀ = 0.
• Outer step t = 0, 1, ..., N − 1: Given ψ_t(·), y_t⁺ ∈ X and A_t ≥ 0,

1. Compute
   z_t ∈ argmin^ν_{w∈X} ψ_t(w).   (5.8.28)

2. Inner Loop: Set L_{t,0} = L. For µ = 0, 1, ..., carry out the following computations.
   (a) Set L_{t,µ} = 2^µ L_{t,0};
   (b) Find the positive root a_{t+1,µ} of the quadratic equation

       L_{t,µ} a²_{t+1,µ} = A_t + a_{t+1,µ}  ⇔  a_{t+1,µ} = 1/(2L_{t,µ}) + √(1/(4L²_{t,µ}) + A_t/L_{t,µ})   (5.8.29)

       and set
       A_{t+1,µ} = A_t + a_{t+1,µ},  τ_{t,µ} = a_{t+1,µ}/A_{t+1,µ}.   (5.8.30)
       Note that τ_{t,µ} ∈ (0, 1].
   (c) Set
       x_{t+1,µ} = τ_{t,µ} z_t + (1 − τ_{t,µ}) y_t⁺   (5.8.31)
       and compute f(x_{t+1,µ}), f′(x_{t+1,µ}).
   (d) Compute
       x̂_{t+1,µ} ∈ argmin^{ν̄}_{w∈X} [ a_{t+1,µ} [⟨f′(x_{t+1,µ}), w⟩ + Ψ(w)] + V_{z_t}(w) ].   (5.8.32)
   (e) Set
       y_{t+1,µ} = τ_{t,µ} x̂_{t+1,µ} + (1 − τ_{t,µ}) y_t⁺,   (5.8.33)
       compute f(y_{t+1,µ}) and then compute

       δ_{t,µ} = [ (L_{t,µ}/2) ‖y_{t+1,µ} − x_{t+1,µ}‖² + ⟨f′(x_{t+1,µ}), y_{t+1,µ} − x_{t+1,µ}⟩ + f(x_{t+1,µ}) ] − f(y_{t+1,µ}).   (5.8.34)

   (f) If δ_{t,µ} < −ε/N, pass to step µ + 1 of the Inner Loop; otherwise terminate the Inner Loop and set

       L_t = L_{t,µ}, a_{t+1} = a_{t+1,µ}, A_{t+1} = A_{t+1,µ}, τ_t = τ_{t,µ},
       x_{t+1} = x_{t+1,µ}, y_{t+1} = y_{t+1,µ}, δ_t = δ_{t,µ},
       ψ_{t+1}(w) = ψ_t(w) + a_{t+1} [f(x_{t+1}) + ⟨f′(x_{t+1}), w − x_{t+1}⟩ + Ψ(w)].   (5.8.35)

3. Select somehow y⁺_{t+1} ∈ X in such a way that φ(y⁺_{t+1}) ≤ φ(y_{t+1}).
   Step t is completed. If t < N − 1, pass to outer step t + 1; otherwise terminate the stage and output y_N⁺.
Main result on the presented algorithm is as follows:


Theorem 5.8.2 Let function f in (5.8.1) satisfy (5.8.27) with some L ∈ (0, ∞) and κ ∈ [1, 2]. As a
+
result of an N -step stage SN of FUGM we get a point yN ∈ X such that
+
φ(yN ) − φ∗ ≤  + A−1 −1
N Vxω (x∗ ) + [ν + ν̄]N AN . (5.8.36)

Furthermore,

(a) L ≤ L_t ≤ max[ L, 2 L^{2/κ} [N/ε]^{(2−κ)/κ} ],  0 ≤ t ≤ N − 1,
(b) A_N ≥ [ Σ_{τ=0}^{N−1} 1/(2√(L_τ)) ]².   (5.8.37)

Besides this, the number of steps of the Inner Loop at every outer step of S_N does not exceed

M(N) := 1 + log₂( max[ 1, 2 L^{-1} L^{2/κ} [N/ε]^{(2−κ)/κ} ] ).   (5.8.38)

For the proof, see the concluding subsection of this section.

Remark 5.8.1 Along with the outlined "aggressive" stepsize policy, where the Inner Loop of every outer step of a stage is initialized by setting L_{t,0} = L, one can consider a "safe" stepsize policy where L_{0,0} = L and L_{t,0} = L_{t−1} for t > 0. Inspecting the proof of Theorem 5.8.2, it is immediately seen that with the safe stepsize policy, (5.8.36) and (5.8.37) remain true, and the guaranteed bound on the total number of Inner Loop steps at a stage (which is N M(N) for the aggressive stepsize policy) becomes N + M(N).

Assume from now on that

we have at our disposal a valid upper bound Θ_* on V_{x_ω}(x_*).

Corollary 5.8.1 Let function f in (5.8.1) satisfy (5.8.27) with some L ∈ (0, ∞) and κ ∈ [1, 2], and let

N_*(ε) = max[ 2 √(L Θ_*/ε), ( 8√2 L Θ_*^{κ/2} / ε )^{2/(3κ−2)} ].   (5.8.39)

Whenever N ≥ N_*(ε), the N-step stage S_N of the algorithm ensures that

A_N^{-1} V_{x_ω}(x_*) ≤ A_N^{-1} Θ_* ≤ ε.   (5.8.40)

In turn, the online verifiable condition

A_N^{-1} Θ_* ≤ ε   (5.8.41)

implies, by (5.8.36), the relation

φ(y_N⁺) − φ_* ≤ 2ε + [ν + ν̄] N A_N^{-1}.   (5.8.42)

In particular, with exact prox-mappings (i.e., with ν = ν̄ = 0) it takes in total at most Ceil(N_*(ε)) outer steps to ensure (5.8.41) and thus to ensure the error bound

φ(y_N⁺) − φ_* ≤ 2ε.

Postponing for a moment Corollary’s verification, let us discuss its consequences, for the time being – in
the case ν = ν̄ = 0 of exact prox-mappings (the case of inexact mappings will be discussed in section
5.8.3).

Discussion. Corollary 5.8.1 suggests the following policy for solving problem (5.8.1) within a given accuracy ε > 0: we run FUGM with somehow selected L and ν = ν̄ = 0 stage by stage, starting with a single-step stage and increasing the number of steps N by a factor of 2 when passing from a stage to the next one. This process is continued until, at the end of the current stage, the on-line verifiable condition (5.8.41) takes place. When it happens, we terminate; by Corollary 5.8.1, at this moment we have at our disposal a feasible 2ε-optimal solution to the problem of interest. On the other hand, the total, over all stages, number of Outer steps before termination clearly does not exceed twice the number of Outer steps at the concluding stage, that is, invoking Corollary 5.8.1, it does not exceed

    N^*(ε) := 4 Ceil(N∗(ε)).
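In code, the stage-doubling policy is a short driver loop. The sketch below is a hedged illustration only: `run_stage` is a hypothetical stand-in that returns the Case-II worst-case guarantee A_N = N²/(4L) instead of running an actual N-step FUGM stage, so that the on-line test (5.8.41) can be exercised.

```python
def run_stage(N, L):
    # Hypothetical stand-in for an N-step FUGM stage: returns the
    # Case-II worst-case lower bound A_N >= N^2/(4L) instead of the
    # value of A_N accumulated by the actual method.
    return N * N / (4.0 * L)

def solve_to_accuracy(eps, L=1.0, theta=1.0):
    """Run stages of doubling length until the on-line verifiable
    condition Theta_* / A_N <= eps, i.e. (5.8.41), holds."""
    N, total_outer = 1, 0
    while True:
        A_N = run_stage(N, L)
        total_outer += N
        if theta / A_N <= eps:    # condition (5.8.41)
            return N, total_outer
        N *= 2                    # next stage is twice as long

N_final, total = solve_to_accuracy(1e-3)
```

Since the stage lengths double, the total number of Outer steps is at most twice the length of the concluding stage, in line with the bound N^*(ε) = 4 Ceil(N∗(ε)).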


520 LECTURE 5. SIMPLE METHODS FOR LARGE-SCALE PROBLEMS

As for the total, over all Outer steps of all stages, number M of Inner loop steps (which is the actual complexity measure of FUGM): by Theorem 5.8.2, it can be larger than N^*(ε) by at most a factor logarithmic in L, L, 1/ε. Moreover, when the "safe" stepsize policy is used, see Remark 5.8.1, we have by this Remark

    M ≤ Σ_{s≥0: 2^{s−1}≤N∗(ε)} [2^s + M(2^s)] ≤ N^*(ε) + M̄,  M̄ := Σ_{s≥0: 2^{s−1}≤N∗(ε)} M(2^s),

with M(N) given by (5.8.38). It is easily seen that for small ε the "logarithmic complexity term" M̄ is negligible compared to the "leading complexity term" N^*(ε).
Further, setting L̄ = max[L, L] and assuming ε ≤ L̄ Θ∗^{κ/2}, we clearly have

    N^*(ε) ≤ O(1) (L̄ Θ∗^{κ/2}/ε)^{2/(3κ−2)}.    (5.8.43)

Thus, our principal complexity term is expressed in terms of the true smoothness parameters (L, κ), unknown in advance, and improves as the smoothness improves (the larger κ and the smaller L). In the "most smooth" case κ = 2, we arrive at N^*(ε) = O(1)√(L̄Θ∗/ε), which, basically, is the complexity bound for the Fast Composite Gradient Method FGM from section 5.8.1. In the "least smooth" case κ = 1, we get N^*(ε) = O(1) L̄²Θ∗/ε², which, basically, is the complexity bound of Mirror Descent.25 Note that while in the nonsmooth case the complexity bounds of FUGM and MD are "nearly the same," the methods are essentially different, and in fact FUGM exhibits some theoretical advantage over MD: the complexity bound of FUGM depends on the maximal deviation L of the gradients of f from each other, the deviation being measured in ‖·‖∗, while what matters for MD is the Lipschitz constant of f taken w.r.t. ‖·‖∗, that is, the maximal deviation of the gradients of f from zero; the latter deviation can be much larger than the former one. Equivalently: the complexity bounds of MD are sensitive to adding to the objective a linear form, while the complexity bounds of FUGM are insensitive to such a perturbation.
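To get a feel for the dependence of the bound on κ, one can evaluate (5.8.39) numerically. The sketch below assumes the reconstructed form N∗(ε) = max[2√(LΘ∗/ε), (8√2 L Θ∗^{κ/2}/ε)^{2/(3κ−2)}] (with L standing for both smoothness constants) and contrasts the smooth (κ = 2, N ~ ε^{−1/2}) and nonsmooth (κ = 1, N ~ ε^{−2}) regimes:

```python
import math

def N_star(eps, L=1.0, kappa=2.0, theta=1.0):
    """Outer-step bound N_*(eps) per (5.8.39) as reconstructed here:
    the max of the Case-II and Case-I requirements on N."""
    term_II = 2.0 * math.sqrt(L * theta / eps)
    term_I = (8.0 * math.sqrt(2.0) * L * theta ** (kappa / 2.0) / eps) \
        ** (2.0 / (3.0 * kappa - 2.0))
    return max(term_II, term_I)

# Smooth case kappa = 2: N scales like eps^{-1/2};
# nonsmooth case kappa = 1: N scales like eps^{-2}.
for kappa in (2.0, 1.0):
    n1, n2 = N_star(1e-2, kappa=kappa), N_star(1e-4, kappa=kappa)
    print(kappa, n2 / n1)
```

Shrinking ε by a factor of 100 multiplies the bound by 10 when κ = 2 and by 10 000 when κ = 1, matching the ε^{−1/2} vs. ε^{−2} regimes above.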

Proof of Corollary 5.8.1. The first inequality in (5.8.40) is evident. Now let N ≥ N∗(ε). Consider two possible cases, I and II, as defined below.

Case I: 2L^{2/κ}[N/ε]^{(2−κ)/κ} ≥ L. In this case, (5.8.37.a) ensures the first relation in the following chain:

    Lt ≤ 2L^{2/κ}[N/ε]^{(2−κ)/κ}, 0 ≤ t ≤ N−1
    ⇒ A_N ≥ [2^{−3/2} L^{−1/κ} N^{1−(2−κ)/(2κ)} ε^{(2−κ)/(2κ)}]²    [by (5.8.37.b)]
           = (1/(8L^{2/κ})) N^{(3κ−2)/κ} ε^{(2−κ)/κ}
           ≥ (1/(8L^{2/κ})) [(8√2 L Θ∗^{κ/2}/ε)^{2/(3κ−2)}]^{(3κ−2)/κ} ε^{(2−κ)/κ}    [by (5.8.39) and due to N ≥ N∗(ε)]
           = (1/(8L^{2/κ})) (8√2 L Θ∗^{κ/2})^{2/κ} ε^{−2/κ} ε^{(2−κ)/κ} ≥ Θ∗/ε,

and the second inequality in (5.8.40) follows.

Case II: 2L^{2/κ}[N/ε]^{(2−κ)/κ} < L. In this case, (5.8.37.a) ensures the first relation in the following chain:

    Lt ≤ L, 0 ≤ t ≤ N−1
    ⇒ A_N ≥ N²/(4L)    [by (5.8.37.b)]
           ≥ Θ∗/ε    [by (5.8.39) and due to N ≥ N∗(ε)]

and the second inequality in (5.8.40) follows. □

25 While we did not consider MD in the composite minimization setting, such a consideration is quite straightforward, and the complexity bounds for "composite MD" are identical to those of the "plain MD."
FAST FIRST ORDER MINIMIZATION AND SADDLE POINTS 521

5.8.2.3 Proof of Theorem 5.8.2


0°. By (5.8.27) we have

    δt,µ ≥ (Lt,µ/2)‖yt+1,µ − xt+1,µ‖² − (L/κ)‖yt+1,µ − xt+1,µ‖^κ ≥ min_{r≥0} [(Lt,µ/2) r² − (L/κ) r^κ].    (5.8.44)

Assuming κ = 2, we see that δt,µ is nonnegative whenever Lt,µ ≥ L, so that in the case in question the Inner loop is finite, and

    Lt ≤ max[L, 2L].    (5.8.45)

When κ ∈ [1, 2), (5.8.44) after a straightforward calculation implies that whenever

    Lt,µ ≥ L^{2/κ}[N/ε]^{(2−κ)/κ},

we have δt,µ ≥ −ε/N. Consequently, in the case in question the Inner loop is finite as well, and

    Lt ≤ max[L, 2L^{2/κ}[N/ε]^{(2−κ)/κ}].    (5.8.46)

Note that by (5.8.45) the resulting inequality holds true for κ = 2 as well.
The bottom line is that FUGM is well defined and maintains the following relations:

    (a.1) L ≤ Lt ≤ max[L, 2L^{2/κ}[N/ε]^{(2−κ)/κ}],
    (a.2) at+1 = 1/(2Lt) + √(1/(4Lt²) + At/Lt)  [⇔ Lt at+1² = At + at+1],
    (a.3) A0 = 0, At+1 = Σ_{τ=1}^{t+1} aτ,
    (a.4) τt = at+1/At+1 ∈ (0, 1],
    (a.5) 1/(τt² At+1) = Lt  [by (a.2-4)];
    (b.1) ψ0(w) = Vxω(w),
    (b.2) ψt+1(w) = ψt(w) + at+1[f(xt+1) + ⟨f′(xt+1), w − xt+1⟩ + Ψ(w)];    (5.8.47)
    (c.1) zt ∈ argmin^ν_{w∈X} ψt(w);
    (c.2) xt+1 = τt zt + (1 − τt)yt+,
    (c.3) x̂t+1 ∈ argmin^ν̄_{w∈X} [at+1[⟨f′(xt+1), w⟩ + Ψ(w)] + Vzt(w)],
    (c.4) yt+1 = τt x̂t+1 + (1 − τt)yt+;
    (d) (Lt/2)‖yt+1 − xt+1‖² + ⟨f′(xt+1), yt+1 − xt+1⟩ + f(xt+1) − f(yt+1) ≥ −ε/N
        [by the termination rule for the Inner loop];
    (e) yt+ ∈ X & φ(yt+) ≤ φ(yt);

and, on top of it, the number M of steps of the Inner loop does not exceed

    M(N) := 1 + log₂ max[1, 2L^{−1} L^{2/κ}[N/ε]^{(2−κ)/κ}].    (5.8.48)

1°. Observe that by (5.8.47.b) we have

    ψt(·) = ω(·) + ℓt(·) + αtΨ(·)    (5.8.49)

with nonnegative αt and affine ℓt(·), whence, taking into account that τt ∈ (0, 1], the algorithm maintains the inclusions zt ∈ X, xt, x̂t+1, yt, yt+ ∈ X.

2°. Observe that by (5.8.47.a.2-4) we have

    At = Σ_{i=1}^t ai    (5.8.50)

(as always, the empty sum is 0) and

    1/(2τt² At+1) = At+1/(2at+1²) = (At + at+1)/(2at+1²) = Lt/2, t ≥ 0.    (5.8.51)

Observe also that (5.8.47.b.1-2) and (5.8.50) imply that whenever t ≥ 1, setting λit = ai/At, we have

    ψt(w) = Σ_{i=1}^t ai[⟨f′(xi), w − xi⟩ + f(xi) + Ψ(w)] + Vxω(w)
          = At Σ_{i=1}^t λit[⟨f′(xi), w − xi⟩ + f(xi) + Ψ(w)] + Vxω(w),    (5.8.52)
    Σ_{i=1}^t λit = Σ_{i=1}^t ai/At = 1.

3°. Denote ψt∗ = ψt(zt). Let us prove by induction that

    Bt + ψt∗ ≥ At φ(yt+), t ≥ 0,  Bt = (ν + ν̄)t + (ε/N) Σ_{τ=1}^t Aτ.    (∗t)

For t = 0 this inequality is valid, since by the initialization rules ψ0(·) ≥ 0, B0 = 0, and A0 = 0. Suppose that (∗t) holds true for some t ≥ 0; let us prove that (∗t+1) holds true as well.
Taking into account (5.8.47.c.1) and the structure of ψt(·) as presented in (5.8.49), Lemma 5.8.1 as applied with w̄ = zt and h(x) = ψt(x) − ω(x) = ℓt(x) + αtΨ(x) yields, for all w ∈ X, the first inequality in the following chain:

    ψt(w) ≥ ψt∗ + Vzt(w) − ν
          ≥ At φ(yt+) + Vzt(w) − ν − Bt    [by (∗t)]
          ≥ At [f(xt+1) + ⟨f′(xt+1), yt+ − xt+1⟩ + Ψ(yt+)] + Vzt(w) − ν − Bt    [since f is convex].

Invoking (5.8.47.b.2), the resulting inequality implies that for all w ∈ X it holds

    ψt+1(w) ≥ Vzt(w) + At [f(xt+1) + ⟨f′(xt+1), yt+ − xt+1⟩ + Ψ(yt+)]
              + at+1[f(xt+1) + ⟨f′(xt+1), w − xt+1⟩ + Ψ(w)] − ν − Bt.    (5.8.53)

By (5.8.47.c.2) and (5.8.47.a.4-5) we have At(yt+ − xt+1) − at+1 xt+1 = −at+1 zt, whence (5.8.53) implies that

    ψt+1(w) ≥ Vzt(w) + At+1 f(xt+1) + At Ψ(yt+) + at+1[⟨f′(xt+1), w − zt⟩ + Ψ(w)] − ν − Bt    (5.8.54)

for all w ∈ X. Thus,

    ψt+1∗ = min_{w∈X} ψt+1(w)
          ≥ min_{w∈X} [Vzt(w) + At+1 f(xt+1) + At Ψ(yt+) + at+1[⟨f′(xt+1), w − zt⟩ + Ψ(w)] − ν − Bt]
            [by (5.8.54); the function being minimized is of the form h(w) + ω(w) with convex lower semicontinuous h]
          ≥ Vzt(x̂t+1) + At+1 f(xt+1) + At Ψ(yt+) + at+1[⟨f′(xt+1), x̂t+1 − zt⟩ + Ψ(x̂t+1)] − ν − ν̄ − Bt
            [by (5.8.47.c.3) and Lemma 5.8.1]
          ≥ Vzt(x̂t+1) + At+1 f(xt+1) + at+1⟨f′(xt+1), x̂t+1 − zt⟩ + At+1 Ψ(yt+1) − ν − ν̄ − Bt,

where the concluding inequality uses At + at+1 = At+1 (by (5.8.47.a.3)) together with the convexity of Ψ and (5.8.47.c.4):

    At Ψ(yt+) + at+1 Ψ(x̂t+1) = At+1 [(1 − τt)Ψ(yt+) + τt Ψ(x̂t+1)] ≥ At+1 Ψ(τt x̂t+1 + (1 − τt)yt+) = At+1 Ψ(yt+1).

That is,

    ψt+1∗ ≥ Vzt(x̂t+1) + At+1 f(xt+1) + at+1⟨f′(xt+1), x̂t+1 − zt⟩ + At+1 Ψ(yt+1) − ν − ν̄ − Bt.    (5.8.55)
FAST FIRST ORDER MINIMIZATION AND SADDLE POINTS 523

Now note that

    x̂t+1 − zt = (yt+1 − xt+1)/τt    (5.8.56)

by (5.8.47.c), whence (5.8.55) implies the first inequality in the following chain:

    ψt+1∗ ≥ Vzt(x̂t+1) + At+1 f(xt+1) + (at+1/τt)⟨f′(xt+1), yt+1 − xt+1⟩ + At+1 Ψ(yt+1) − ν − ν̄ − Bt
          = Vzt(x̂t+1) + At+1 f(xt+1) + At+1⟨f′(xt+1), yt+1 − xt+1⟩ + At+1 Ψ(yt+1) − ν − ν̄ − Bt    [by (5.8.47.a.4)]
          = At+1 [(1/At+1)Vzt(x̂t+1) + f(xt+1) + ⟨f′(xt+1), yt+1 − xt+1⟩ + Ψ(yt+1)] − ν − ν̄ − Bt
          = At+1 φ(yt+1) + At+1 [(1/At+1)Vzt(x̂t+1) + f(xt+1) + ⟨f′(xt+1), yt+1 − xt+1⟩ − f(yt+1)] − ν − ν̄ − Bt
          ≥ At+1 φ(yt+1) + At+1 [(1/(2At+1))‖x̂t+1 − zt‖² + f(xt+1) + ⟨f′(xt+1), yt+1 − xt+1⟩ − f(yt+1)] − ν − ν̄ − Bt
          = At+1 φ(yt+1) + At+1 [(1/(2τt²At+1))‖xt+1 − yt+1‖² + f(xt+1) + ⟨f′(xt+1), yt+1 − xt+1⟩ − f(yt+1)] − ν − ν̄ − Bt    [by (5.8.56)]
          = At+1 φ(yt+1) + At+1 [(Lt/2)‖xt+1 − yt+1‖² + f(xt+1) + ⟨f′(xt+1), yt+1 − xt+1⟩ − f(yt+1)] − ν − ν̄ − Bt    [by (5.8.47.a.5)]
          ≥ At+1 φ(yt+1) − At+1 ε/N − ν − ν̄ − Bt    [by (5.8.47.d)]
          = At+1 φ(yt+1) − Bt+1    [see (∗t)]
          ≥ At+1 φ(yt+1+) − Bt+1    [by (5.8.47.e)].

The inductive step is completed.

4°. We have proved inequality (∗t) for all t ≥ 0. On the other hand,

    ψt∗ = min_{w∈X} ψt(w) ≤ ψt(x∗) ≤ At φ∗ + Vxω(x∗),

where the concluding inequality is due to (5.8.52). The resulting inequality, taken together with (∗t), yields

    φ(yt+) − φ∗ ≤ At^{−1}[Vxω(x∗) + Bt]
               = At^{−1} Vxω(x∗) + At^{−1}[Σ_{τ=1}^t Aτ](ε/N) + At^{−1}[ν + ν̄]t    (5.8.57)
               ≤ At^{−1} Vxω(x∗) + εt/N + At^{−1}[ν + ν̄]t, t = 1, 2, ..., N,

where the concluding ≤ is due to Aτ ≤ At for τ ≤ t.

5°. From A0 = 0 and (5.8.47.a.2), by induction in t it follows that

    At ≥ [Σ_{τ=0}^{t−1} 1/(2√Lτ)]² =: Āt.    (5.8.58)

Indeed, (5.8.58) clearly holds true for t = 0. Assuming that the relation holds true for some t and invoking (5.8.47.a.2), we get

    At+1 = At + at+1 ≥ Āt + √(Āt/Lt) + 1/(2Lt)
         ≥ [Σ_{τ=0}^{t−1} 1/(2√Lτ)]² + 2[Σ_{τ=0}^{t−1} 1/(2√Lτ)]·(1/(2√Lt)) + 1/(4Lt)
         = [Σ_{τ=0}^{t} 1/(2√Lτ)]² = Āt+1,

completing the induction.
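The stepsize recursion (5.8.47.a.2) and the lower bound (5.8.58) just proved are easy to sanity-check numerically; the sketch below runs the recursion on an arbitrary (assumed) sequence of Lt's and verifies both the identity Lt·at+1² = At + at+1 and the bound At ≥ Āt:

```python
import math

# Arbitrary positive "stepsizes" L_0, L_1, ... (assumed values for the demo).
Ls = [1.0, 2.0, 0.5, 4.0, 1.5, 3.0]

A, Abar_sqrt = 0.0, 0.0   # A_t and the running sum sqrt(A-bar_t) from (5.8.58)
for L in Ls:
    # a_{t+1} is the positive root of L_t a^2 = A_t + a, cf. (5.8.47.a.2)
    a = 1.0 / (2.0 * L) + math.sqrt(1.0 / (4.0 * L * L) + A / L)
    assert abs(L * a * a - (A + a)) < 1e-9       # the identity in (a.2)
    A += a                                       # (a.3): A_{t+1} = A_t + a_{t+1}
    Abar_sqrt += 1.0 / (2.0 * math.sqrt(L))
    assert A >= Abar_sqrt ** 2 - 1e-12           # the bound (5.8.58)
```

The asserts pass for any positive choice of the Lt's, mirroring the induction above.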

Bottom line. Combining (5.8.47.a.1), (5.8.57), and (5.8.58), and taking into account (5.8.48), we arrive at Theorem 5.8.2. □

5.8.3 From Fast Gradient Minimization to Conditional Gradient


5.8.3.1 Proximal and Linear Minimization Oracle based First Order algorithms
First Order algorithms considered so far heavily utilize proximal setups; to be implementable, these algorithms need the convex compact domain X of the convex problem

    Opt = min_{x∈X} f(x)    (CP)

to be proximal friendly – to admit a continuously differentiable strongly convex (and thus nonlinear!) DGF ω(·) such that linear perturbations ω(x) + ⟨ξ, x⟩ are easy to minimize over X.26 Whenever this is the case, it is easy to minimize over X linear forms ⟨ξ, x⟩ (indeed, minimizing such a form over X within whatever high accuracy reduces to minimizing over X the sum ω(x) + α⟨ξ, x⟩ with large α). The inverse conclusion is, in general, not true: it may happen that X admits a computationally cheap Linear Minimization Oracle (LMO) – a procedure capable of minimizing linear forms over X – while no DGFs leading to equally computationally cheap proximal mappings are known. Important examples for applications include:

1. the nuclear norm ball X ⊂ Rn×n, which plays a crucial role in low rank matrix recovery: with all known proximal setups, computing prox-mappings requires computing the full singular value decomposition of an n × n matrix. At the same time, minimizing a linear form over X requires finding just the pair of leading singular vectors of an n × n matrix x (why?). The latter task can easily be solved by the Power method.27 For n in the range of a few thousands and more, computing the leading singular vectors is by orders of magnitude cheaper, progressively so as n grows, than the full singular value decomposition.

2. the spectahedron X = {x ∈ Sn : x ⪰ 0, Tr(x) = 1}, "responsible" for Semidefinite Programming. The situation here is similar to that of the nuclear norm ball: with all known proximal setups, computing a proximal mapping requires the full eigenvalue decomposition of a symmetric n × n matrix, while minimizing a linear form over X reduces to approximating the leading eigenvector of such a matrix; the latter task can be achieved by the Power method and is, progressively in n, much cheaper computationally than full eigenvalue decomposition.

3. the Total Variation ball X in the space of n × n images (n × n arrays with zero mean). The Total Variation of an n × n image x is

    TV(x) = Σ_{i=1}^{n−1} Σ_{j=1}^{n} |x_{i+1,j} − x_{i,j}| + Σ_{i=1}^{n} Σ_{j=1}^{n−1} |x_{i,j+1} − x_{i,j}|

and is a norm on the space of images (the discrete analogue of the L1 norm of the gradient of a function on a 2D square); convex minimization over TV balls and related problems (like the LASSO problem (5.8.4) with the space of images in the role of E and TV(·) in the role of ‖·‖E) plays an important role in image reconstruction. Computing the simplest – Euclidean – prox mapping on the TV ball in the space of n × n images requires computing the Euclidean projection of a point onto a large-scale polytope, cut off an ℓ1 ball of dimension O(1)n² by O(1)n² homogeneous linear equality constraints; for n of practical interest (a few hundreds), this problem is quite time consuming. In contrast, it turns out that minimizing a linear form over the TV ball reduces to solving a special Maximum Flow problem [28], and this turns out to be by orders of magnitude cheaper than metric projection onto the TV ball.
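In code, TV(·) of an image is just the sum of absolute differences of vertically and horizontally adjacent entries; a minimal sketch (the 2 × 2 demo array is an assumption of the demo and is not zero-mean — it only exercises the formula):

```python
def total_variation(x):
    """TV(x): sum of |x[i+1][j] - x[i][j]| over vertical neighbors plus
    |x[i][j+1] - x[i][j]| over horizontal neighbors (anisotropic
    discrete TV, as in the definition above)."""
    n, m = len(x), len(x[0])
    vert = sum(abs(x[i + 1][j] - x[i][j]) for i in range(n - 1) for j in range(m))
    horz = sum(abs(x[i][j + 1] - x[i][j]) for i in range(n) for j in range(m - 1))
    return vert + horz

img = [[0.0, 1.0], [1.0, 0.0]]   # hypothetical demo image
# vertical differences contribute |1-0| + |0-1| = 2, horizontal ones another 2
```

Note that TV is positively homogeneous and vanishes exactly on constant images, which is what makes it a norm on the zero-mean subspace.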
26 On top of proximal friendliness, we would like to ensure a moderate, "nearly dimension-independent" value of the ω-radius of X, in order to avoid rapid growth of the iteration count with the problem's dimension. Note, however, that in the absence of proximal friendliness, we cannot even start running a proximal algorithm...
27 In its most primitive implementation, the Power method iterates the matrix-vector multiplications et 7→ et+1 = xᵀxet, with a starting vector e0 selected at random, usually from the standard Gaussian distribution; after a few tens of iterations, the vectors et and xet become good approximations of the right and the left leading singular vectors of x, respectively.
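The Power method of footnote 27 can be sketched in a few lines of plain Python; the iteration e ← xᵀxe with normalization is as in the footnote, while the demo matrix, iteration count, and seed below are assumptions of the demo:

```python
import math
import random

def leading_singular_pair(x, iters=200, seed=0):
    """Power method: iterate e <- x^T x e with normalization.
    e approximates the leading right singular vector, and x e the
    (scaled) leading left singular vector."""
    random.seed(seed)
    m, n = len(x), len(x[0])
    e = [random.gauss(0.0, 1.0) for _ in range(n)]          # random start e_0
    for _ in range(iters):
        xe = [sum(x[i][j] * e[j] for j in range(n)) for i in range(m)]   # x e
        e = [sum(x[i][j] * xe[i] for i in range(m)) for j in range(n)]   # x^T (x e)
        nrm = math.sqrt(sum(v * v for v in e))
        e = [v / nrm for v in e]
    xe = [sum(x[i][j] * e[j] for j in range(n)) for i in range(m)]
    sigma = math.sqrt(sum(v * v for v in xe))   # leading singular value estimate
    return sigma, e

# Demo on a diagonal matrix with singular values 3 and 1:
sigma, v = leading_singular_pair([[3.0, 0.0], [0.0, 1.0]])
```

For this matrix the iteration multiplies the second component by 1/9 relative to the first at every step, so a few tens of iterations already isolate the leading pair, as the footnote claims.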

Here is some numerical data allowing one to get an impression of the "magnitude" of the phenomena just outlined (the plots of the CPU time ratios against n are omitted; the underlying data are reproduced below):

CPU ratio "full svd" / "finding leading singular vectors" for an n × n matrix:
    n       1024   2048   4096   8192
    ratio    0.5    2.6    4.5    7.5
(full svd for n = 8192 takes 475.6 sec)

CPU ratio "full evd" / "finding leading eigenvector" for an n × n symmetric matrix:
    n       1024   2048   4096   8192
    ratio    2.0    4.1    7.9   13.0
(full evd for n = 8192 takes 142.1 sec)

CPU ratio "metric projection" / "LMO computation" for the TV ball in Rn×n:
    n        129    256    512   1024
    ratio   10.8    8.8   11.3   20.6
(metric projection onto the TV ball for n = 1024 takes 1062.1 sec)

Platform: 2 × 3.40 GHz CPU, 16.0 GB RAM, 64-bit Windows 7.

The generic large-scale problems of convex minimization just outlined, and some others, inspire investigating the ability to solve problems (CP) on LMO-represented domains X – convex compact domains equipped with Linear Minimization Oracles. As for the objective f, we still assume that it is given by a First Order Oracle.

5.8.3.2 Conditional Gradient algorithm


The only classical optimization algorithm capable of handling convex minimization over an LMO-represented domain is the Conditional Gradient Algorithm (CGA), invented as early as 1956 by Frank and Wolfe [23]. CGA does not work when the function f in (CP) possesses no smoothness beyond Lipschitz continuity, so in the sequel we impose some "nontrivial smoothness" assumption. More specifically, we consider the same composite minimization problem

    φ∗ = min_{x∈X} {φ(x) := f(x) + Ψ(x)},    (5.8.1)

as in the previous sections, and assume that


• X is a convex and compact subset of a Euclidean space E equipped with a norm ‖·‖,
• Ψ is a convex lower semicontinuous function on X which is finite on the relative interior of X, and
• f is a convex and continuously differentiable function on X satisfying, for some L < ∞ and κ ∈ (1, 2] (note that κ > 1!), relation (5.8.27):

    ∀(x, y ∈ X): f(y) ≤ f(x) + ⟨f′(x), y − x⟩ + (L/κ)‖y − x‖^κ.

Composite LMO and Composite Conditional Gradient algorithm. We assume that we have at our disposal a Composite Linear Minimization Oracle (CLMO) – a procedure which, given on input a linear form ⟨ξ, x⟩ on the Euclidean space E embedding X and a nonnegative α, returns a point

    CLMO(ξ, α) ∈ Argmin_{x∈X} {⟨ξ, x⟩ + αΨ(x)}.    (5.8.59)

The associated Composite Conditional Gradient Algorithm (nicknamed CGA in the sequel) as applied to (5.8.1) generates iterates xt ∈ X, t = 1, 2, ..., as follows:
1. Initialization: x1 is an arbitrary point of X.
2. Step t = 1, 2, ...: Given xt, we
• compute f′(xt) and call the CLMO to get the point

    xt+ = CLMO(f′(xt), 1)  [∈ Argmin_{x∈X} [ℓt(x) := f(xt) + ⟨f′(xt), x − xt⟩ + Ψ(x)]],    (5.8.60)

• compute the point

    x̂t+1 = xt + γt(xt+ − xt),  γt = 2/(t + 1),    (5.8.61)

• take as xt+1 a (whatever) point in X such that φ(xt+1) ≤ φ(x̂t+1).
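The recipe above admits a minimal numerical illustration with Ψ ≡ 0 and X the ℓ1 ball, whose LMO returns a signed coordinate vertex; the quadratic objective and the problem data below are assumptions of the demo, not an instance from the text:

```python
def cga_l1_ball(grad_f, n, steps):
    """Composite CGA with Psi = 0 on X = {x : ||x||_1 <= 1}:
    the LMO returns the vertex -sign(g_i) e_i with i = argmax |g_i|,
    and x_{t+1} = x_t + gamma_t (x^+_t - x_t) with gamma_t = 2/(t+1)."""
    x = [0.0] * n                       # x_1: an arbitrary point of X
    for t in range(1, steps + 1):
        g = grad_f(x)
        i = max(range(n), key=lambda j: abs(g[j]))
        x_plus = [0.0] * n
        x_plus[i] = -1.0 if g[i] > 0 else 1.0
        gamma = 2.0 / (t + 1)
        x = [(1 - gamma) * xj + gamma * pj for xj, pj in zip(x, x_plus)]
    return x

# f(x) = 0.5 * ||x - b||^2 with b = (0.8, 0.6, 0); the minimizer over the
# l1 ball is (0.6, 0.4, 0), with optimal value 0.04.
b = [0.8, 0.6, 0.0]
grad = lambda x: [xj - bj for xj, bj in zip(x, b)]
x = cga_l1_ball(grad, 3, steps=2000)
f_val = 0.5 * sum((xj - bj) ** 2 for xj, bj in zip(x, b))
```

Here L = 1 and the Euclidean diameter of X is D = 2, so the O(LD²/t) rate of Theorem 5.8.3 (with κ = 2) guarantees an objective gap of roughly 8/(t + 1) after t steps.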
The convergence properties of the algorithm are summarized in the following simple statement, going back to [23]:

Theorem 5.8.3 Let Composite CGA be applied to problem (5.8.1) with convex f satisfying (5.8.27) with some κ ∈ (1, 2], and let D be the ‖·‖-diameter of X. Denoting by φ∗ the optimal value in (5.8.1), for all t ≥ 2 one has

    εt := φ(xt) − φ∗ ≤ (2LD^κ/(κ(3 − κ))) γt^{κ−1} = 2^κ LD^κ/(κ(3 − κ)(t + 1)^{κ−1}).    (5.8.62)
Proof. Let

    ℓt(x) = f(xt) + ⟨f′(xt), x − xt⟩ + Ψ(x).

We have

    εt+1 := φ(xt+1) − φ∗ ≤ φ(x̂t+1) − φ∗ = φ(xt + γt(xt+ − xt)) − φ∗
          = ℓt(xt + γt(xt+ − xt)) + [f(xt + γt(xt+ − xt)) − f(xt) − ⟨f′(xt), γt(xt+ − xt)⟩] − φ∗
          ≤ (1 − γt)ℓt(xt) + γt ℓt(xt+) − φ∗ + (L/κ)[γt D]^κ
            [since ℓt(·) is convex and due to (5.8.27); note that ‖γt(xt+ − xt)‖ ≤ γt D]
          = (1 − γt)[φ(xt) − φ∗] + γt[ℓt(xt+) − φ∗] + (L/κ)[γt D]^κ    [since ℓt(xt) = φ(xt)]
          ≤ (1 − γt)εt + (L/κ)[γt D]^κ
            [since ℓt(x) ≤ φ(x) for all x ∈ X and ℓt(xt+) = min_{x∈X} ℓt(x) ≤ φ∗].

We have arrived at the recurrent inequality for the residuals εt = φ(xt) − φ∗:

    εt+1 ≤ (1 − γt)εt + (L/κ)[γt D]^κ,  γt = 2/(t + 1),  t = 1, 2, ...    (5.8.63)
Let us prove by induction that for t = 2, 3, ... it holds

    εt ≤ (2LD^κ/(κ(3 − κ))) γt^{κ−1}.    (∗t)

The base t = 2 is evident: from (5.8.63) with t = 1 (where γ1 = 1) it follows that ε2 ≤ LD^κ/κ, and the latter quantity is ≤ (2LD^κ/(κ(3 − κ))) γ2^{κ−1} = 2^κ LD^κ/(3^{κ−1} κ(3 − κ)).28 Assuming that (∗t) holds true for some t ≥ 2, we get from (5.8.63) the first inequality in the following chain:

    εt+1 ≤ (2LD^κ/(κ(3 − κ)))[γt^{κ−1} − γt^κ] + (LD^κ/κ) γt^κ
         = (2LD^κ/((3 − κ)κ))[γt^{κ−1} − (1 − (3 − κ)/2) γt^κ]
         = (2LD^κ/((3 − κ)κ)) 2^{κ−1}[(t + 1)^{−(κ−1)} + (1 − κ)(t + 1)^{−κ}]
         ≤ (2LD^κ/((3 − κ)κ)) 2^{κ−1}(t + 2)^{−(κ−1)} = (2LD^κ/((3 − κ)κ)) γt+1^{κ−1},

where the concluding inequality in the chain is the gradient inequality for the convex function u^{1−κ} of u > 0. We see that the validity of (∗t) for some t implies the validity of (∗t+1); the induction, and thus the proof of Theorem 5.8.3, is complete. □

28 Indeed, the required inequality boils down to the inequality 3^κ − κ·3^{κ−1} ≤ 2^κ, which indeed is true when κ ≥ 1, due to the convexity of the function u^κ on the ray u ≥ 0.

5.8.3.3 Bridging Fast and Conditional Gradient algorithms


Now assume that we are interested in problem (5.8.1) in the situation considered in section 5.8.2, so that X is a convex subset of a Euclidean space E with a norm ‖·‖, and X is equipped with a DGF ω compatible with this norm. From now on, we make two additional assumptions:
A. X is a compact set, and for some known Υ < ∞ one has

    ∀(z ∈ X, w ∈ X): |⟨ω′(z), w − z⟩| ≤ Υ.    (5.8.64)

Remark: For all standard proximal setups described in section 5.3.3, one can take as Υ the quantity O(1)Θ, with Θ defined in section 5.3.2.
B. We have at our disposal a Composite Linear Minimization Oracle (5.8.59).

Lemma 5.8.2 Under assumptions A, B, let

    ε = Υ, ε̄ = 3Υ.

Then for every ξ ∈ E, α ≥ 0 and every z ∈ X one has

    (a) CLMO(ξ, α) ∈ argmin^ε_{x∈X} {⟨ξ, x⟩ + αΨ(x) + ω(x)},
    (b) CLMO(ξ, α) ∈ argmin^ε̄_{x∈X} {⟨ξ, x⟩ + αΨ(x) + Vz(x)}.

Proof. Given ξ ∈ E, α > 0, z ∈ X, and setting w = CLMO(ξ, α), by Lemma 5.9.1 we have w ∈ X, and for some Ψ′(w) ∈ ∂XΨ(w) one has

    ⟨ξ + αΨ′(w), x − w⟩ ≥ 0 ∀x ∈ X.

Consequently,

    ∀x ∈ X: ⟨ξ + αΨ′(w) + ω′(w), x − w⟩ ≥ ⟨ω′(w), x − w⟩ ≥ −Υ,

whence w ∈ argmin^ε_{x∈X} {⟨ξ, x⟩ + αΨ(x) + ω(x)}. Similarly,

    ∀x ∈ X: ⟨ξ + αΨ′(w) + ω′(w) − ω′(z), x − w⟩ ≥ ⟨ω′(w) − ω′(z), x − w⟩
            ≥ −Υ + ⟨ω′(z), w − x⟩ = −Υ + ⟨ω′(z), w − z⟩ + ⟨ω′(z), z − x⟩ ≥ −3Υ,

whence w ∈ argmin^ε̄_{x∈X} {⟨ξ, x⟩ + αΨ(x) + Vz(x)}. The case of α = 0 is completely similar (redefine Ψ as identically zero and α as 1). □
Combining Lemma 5.8.2 and Theorem 5.8.1, we arrive at the following

Corollary 5.8.2 In the situation described in section 5.8.1.1 and under assumptions A, B, consider the implementation of the Fast Composite Gradient Method (5.8.8) – (5.8.14) where (5.8.8) is implemented as

    zt = CLMO(∇ℓt(·), αt)    (5.8.65)

(see (5.8.16)), and (5.8.12) is implemented as

    x̂t+1 = CLMO(at+1∇f(xt+1), at+1)    (5.8.66)

(which, by Lemma 5.8.2, ensures (5.8.8), (5.8.12) with ε = Υ, ε̄ = 3Υ). Assuming that (5.8.7) holds true and that δt ≥ 0 for all t (the latter definitely is the case when Lt = Lf for all t), one has

    yt, yt+ ∈ X,  φ(yt+) − φ∗ ≤ φ(yt) − φ∗ ≤ At^{−1}[Vxω(x∗) + 4Υt] ≤ (4Lf/t²)[Vxω(x∗) + 4Υt]    (5.8.67)

for t = 1, 2, ..., where, as always, x∗ is an optimal solution to (5.8.1) and xω is the minimizer of ω(·) on X.
A step of the algorithm, modulo the updates yt 7→ yt+, reduces to computing the value and the gradient of f at one point, computing the value of f at another point, and computing two values of the Composite Linear Minimization Oracle, with no computations of the values and derivatives of ω(·) at all.

Discussion. Observe that (5.8.64) clearly implies that Vxω(x∗) ≤ Υ, which combines with (5.8.67) to yield the relation

    φ(yt+) − φ∗ ≤ O(1) Lf Υ/t,  t = 1, 2, ...    (5.8.68)
On the other hand, under the premise of Corollary 5.8.2, we have at our disposal a Composite LMO for X, and the function f in (5.8.1) is convex and satisfies (5.8.27) with L = Lf and κ = 2, see (5.8.2). As a result, we can solve the problem of interest (5.8.1) by CGA; the resulting efficiency estimate, as stated by Theorem 5.8.3 in the case of L = Lf, κ = 2, is

    φ(xt) − φ∗ ≤ O(1) Lf D²/t,    (5.8.69)
where D is the ‖·‖-diameter of X. Assuming that

    Υ ≤ α(X)D²    (5.8.70)

for properly selected α(X) ≥ 1, the efficiency estimate (5.8.68) is at most by a factor O(1)α(X) worse than the CGA efficiency estimate (5.8.69), so that for moderate α(X) the estimates are "nearly the same." This is not too surprising, since FGM with imprecise proximal mappings given by (5.8.65), (5.8.66) is, basically, an LMO-based rather than a proximal algorithm. Indeed, with on-line tuning of the Lt's, the implementation of FGM described in Corollary 5.8.2 requires operating with ω(zt), ω′(zt), and ω(x̂t+1) solely in order to compute δt when checking whether this quantity is nonnegative. We could avoid the necessity to work with ω by computing ‖x̂t+1 − zt‖ in order to build the lower bound

    δ̄t = (1/(2At+1))‖x̂t+1 − zt‖² + ⟨∇f(xt+1), yt+1 − xt+1⟩ + f(xt+1) − f(yt+1)

on δt, and to use this bound in the role of δt when updating the Lt's (note that from item 5° of the proof of Theorem 5.8.1 it follows that δ̄t ≥ 0 whenever Lt = Lf). Finally, when both the norm ‖·‖ and ω(·) are difficult to compute, we can use the off-line policy Lt = Lf for all t, thus avoiding completely the necessity to check whether δt ≥ 0 and ending up with an algorithm, let it be called FGM-LMO, based solely on the Composite Linear Minimization Oracle and never "touching" the DGF ω (which is used solely in the algorithm's analysis). The efficiency estimate of the resulting algorithm is within an O(1)α(X) factor of that of CGA.
Note that in many important situations (5.8.70) indeed is satisfied with "quite moderate" α(X). For example, for the Euclidean setup one can take α(X) = 1. For the other standard proximal setups presented in section 5.3.3, except for the Entropy one, α(X) grows only logarithmically with the dimension of X.
An immediate question is: what, if anything, do we gain when passing from the simple and transparent CGA to the much more sophisticated FGM-LMO? The answer is: as far as theoretical efficiency estimates are concerned, nothing is gained. Nevertheless, "bridging" the proximal Fast Composite Gradient Method and LMO-based Conditional Gradient possesses some "methodological value." For example, FGM-LMO replaces the precise composite prox mapping

    (ξ, z) 7→ argmin_{x∈X} [⟨ξ, x⟩ + αΨ(x) + Vz(x)]

with its "most crude" approximation

    (ξ, z) 7→ CLMO(ξ, α) ∈ Argmin_{x∈X} [⟨ξ, x⟩ + αΨ(x)],

which ignores the Bregman distance Vz(x) altogether. In principle, one could utilize a less crude approximation of the composite prox mapping, "shifting" the CGA efficiency estimate O(LD²/t) towards the O(LD²/t²) efficiency estimate of FGM with precise proximal mappings – an option we here just mention rather than explore in depth.

5.8.3.4 LMO-based implementation of Fast Universal Gradient Method


The above "bridging" of FGM and CGA as applied to the Composite Minimization problem (5.8.1) with convex and "fully smooth" f (one with Lipschitz continuous gradient) can be extended to the case of less smooth f's, with FUGM in the role of FGM. Specifically, with assumptions A, B in force, let us look at the Composite Minimization problem (5.8.1) with convex f satisfying the smoothness requirement (5.8.27) with some L > 0 and κ ∈ (1, 2] (pay attention to κ > 1!).
By Lemma 5.8.2, setting

    ν = Υ, ν̄ = 3Υ,

we ensure that for every ξ ∈ E, α ≥ 0 and every z ∈ X one has

    (a) CLMO(ξ, α) ∈ argmin^ν_{x∈X} {⟨ξ, x⟩ + αΨ(x) + ω(x)},
    (b) CLMO(ξ, α) ∈ argmin^ν̄_{x∈X} {⟨ξ, x⟩ + αΨ(x) + Vz(x)}.    (5.8.71)

Therefore we can consider the implementation of FUGM (in the sequel referred to as FUGM-LMO) where the target accuracy is a given ε, the parameters ν and ν̄ are specified as ν = Υ, ν̄ = 3Υ, and
• zt is defined according to

    zt = CLMO(∇ℓt(·), αt),

with ℓt(·), αt defined in (5.8.49);
• x̂t+1,µ is defined according to

    x̂t+1,µ = CLMO(at+1,µ f′(xt+1,µ), at+1,µ).

By (5.8.71), this implementation ensures (5.8.28), (5.8.32).


Let us upper-bound the quantity [ν + ν̄]N A_N^{−1} appearing in (5.8.36) and (5.8.42).

Lemma 5.8.3 Let assumption A take place, and let FUGM-LMO be applied to Composite Minimization problem (5.8.1) with convex function f satisfying the smoothness condition (5.8.27) with some L ∈ (0, ∞) and κ ∈ (1, 2]. When the number of steps N at a stage of the method satisfies the relation

    N ≥ N^#(ε) := max[ 16LΥ/ε, 2^{5κ/(2(κ−1))} (LΥ^{κ/2}/ε)^{1/(κ−1)} ],    (5.8.72)

one has

    [ν + ν̄]N A_N^{−1} ≤ ε.

Proof. We have A_N^{−1} N[ν + ν̄] = 4Υ A_N^{−1} N. Assuming N ≥ N^#(ε), consider the same two cases as in the proof of Corollary 5.8.1. Similarly to that proof,
— in Case I we have

    A_N ≥ (1/(8L^{2/κ})) N^{(3κ−2)/κ} ε^{(2−κ)/κ}
    ⇒ 4A_N^{−1} N Υ ≤ 32ΥL^{2/κ} N^{2(1−κ)/κ} ε^{(κ−2)/κ} ≤ ε    [due to N ≥ 2^{5κ/(2(κ−1))}(LΥ^{κ/2}/ε)^{1/(κ−1)}];

— in Case II we have

    A_N ≥ N²/(4L)
    ⇒ 4A_N^{−1} N Υ ≤ 16LΥ/N ≤ ε    [due to N ≥ 16LΥ/ε].    □

Combining Corollary 5.8.1 and Lemma 5.8.3, and taking into account that the quantity Υ participating in assumption A can be taken as the Θ∗ participating in Corollary 5.8.1 (see the Discussion after Corollary 5.8.2), we arrive at the following result:

Corollary 5.8.3 Let assumption A take place, and let FUGM-LMO be applied to Composite Minimization problem (5.8.1) with convex function f satisfying, for some L ∈ (0, ∞) and κ ∈ (1, 2], the smoothness condition (5.8.27). The on-line verifiable condition

    4A_N^{−1} N Υ ≤ ε    (5.8.73)

ensures that the result yN+ of the N-step stage SN of FUGM-LMO satisfies the relation

    φ(yN+) − φ∗ ≤ 3ε.

Relation (5.8.73) definitely is satisfied when

    N ≥ N̄(ε) := max[N#(ε), N^#(ε)],

where

    N#(ε) = max[ 2√(LΥ/ε), (8√2 L Υ^{κ/2}/ε)^{2/(3κ−2)} ]    (5.8.74)

(cf. (5.8.39)) and N^#(ε) is given by (5.8.72).

Remarks. I. Note that FUGM-LMO does not require computing ω(·) and ω′(·); it operates solely with CLMO(·, ·), the First Order oracle for f, and the Zero Order oracle for ‖·‖ (the latter is used to compute δt,µ, see (5.8.34)). Assuming κ and L known in advance, we can further avoid the necessity to compute the norm ‖·‖ and can skip, essentially, the Inner loop. To this end it suffices to carry out a single step µ = 0 of the Inner loop with Lt,0 set to L^{2/κ}[N/ε]^{(2−κ)/κ} and to skip computing δt,0 – the latter quantity automatically is ≥ −ε/N, see item 0° in the proof of Theorem 5.8.2, p. 521.
II. It is immediately seen that for small enough values of ε > 0 we have N^#(ε) ≥ N#(ε), so that the total, over all stages, number of Outer steps before an ε-solution is built is upper-bounded by O(1)N^#(ε); with the "safe" stepsize policy, see Remark 5.8.1, and small ε the total, over all stages, number of Inner steps is also upper-bounded by O(1)N^#(ε). Note that in the case of (5.8.70) and for small enough ε > 0, we have

    N^#(ε) ≤ [O(1)α(X)]^{κ/(2(κ−1))} (LD^κ/ε)^{1/(κ−1)},

implying that the complexity bound of FUGM-LMO for small ε is at most by a factor [O(1)α(X)]^{κ/(2(κ−1))} worse than the complexity bound for CGA as given by (5.8.62). The conclusions one can make from these observations are completely similar to those related to FGM-LMO, see p. 528.

5.9 Appendix: Some proofs


5.9.1 A useful technical lemma
Lemma 5.9.1 Let X be a closed convex domain in Euclidean space E, let (‖·‖, ω(·)) be a proximal setup for X, let Ψ : X → R ∪ {+∞} be a lower semicontinuous convex function which is finite on the relative interior of X, and let φ(·) : X → R be a convex continuously differentiable function. The minimizer z+ of the function ω(w) + φ(w) + Ψ(w) on X exists and is unique, and there exists Ψ′ ∈ ∂Ψ(z+) such that

    ⟨ω′(z+) + φ′(z+) + Ψ′, w − z+⟩ ≥ 0 ∀w ∈ X.    (5.9.1)

If Ψ(·) is differentiable at z+, one can take Ψ′ = ∇Ψ(z+).

Proof. Let us set ω̂(w) = ω(w) + φ(w). Since φ is continuously differentiable and convex on X, ω̂ possesses the same convexity and smoothness properties as ω; specifically, ω̂ is continuously differentiable and strongly convex, with modulus 1 w.r.t. ‖·‖, on X.
5.9. APPENDIX: SOME PROOFS 531

To prove existence and uniqueness of z+, let us set

    X+ = {(x, t) ∈ E × R : x ∈ X, t ≥ Ψ(x)},  f(x, t) = ω̂(x) + t : X+ → R.

X+ is a closed convex set (since X is closed and convex, and Ψ is convex, lower semicontinuous, and finite on the relative interior of X), and f is a continuously differentiable convex function on X+. Besides this, the level sets Xa+ = {(x, t) ∈ X+ : f(x, t) ≤ a} are bounded for every a ∈ R.
Indeed, selecting x̄ ∈ rint X and selecting somehow g ∈ ∂XΨ(x̄), we have

    ∀((x, t) ∈ X+): f(x, t) ≥ ω̂(x) + Ψ(x) ≥ f̄(x) := ω̂(x̄) + Ψ(x̄) + ⟨ω̂′(x̄) + g, x − x̄⟩ + ½‖x − x̄‖².

It follows that when (x, t) ∈ Xa+, we have x ∈ Xa := {x ∈ X : f̄(x) ≤ a}. The set Xa clearly is bounded, and therefore Ψ(x) ≥ Ψ(x̄) + ⟨g, x − x̄⟩ ≥ a0 for some a0 ∈ R and all x ∈ Xa. Assuming, contrary to what should be proved, that Xa+ is unbounded, there exists an unbounded sequence (xi, ti) ∈ Xa+; since xi ∈ Xa, the sequence {xi} is bounded, whence the sequence {ti} is unbounded; the latter sequence is bounded below due to ti ≥ Ψ(xi) ≥ a0, implying that {ti} is not bounded from above. Passing to a subsequence, we can assume that ti → ∞, whence a ≥ f(xi, ti) = ω̂(xi) + ti, which is the desired contradiction: the right hand side in the latter inequality goes to ∞ as i → ∞ due to the fact that ti → ∞, while the sequence {ω̂(xi)} is bounded along with the sequence {xi}.
Since X+ is closed and the level sets of f are bounded, f achieves its minimum at some point (z+, t+) ∈ X+; by evident reasons, t+ = Ψ(z+), and z+ is a minimizer of F(x) := ω̂(x) + Ψ(x) on X, so that such a minimizer does exist; its uniqueness is readily given by the strong convexity of F (implied by the strong convexity of ω̂ and the convexity of Ψ). By the optimality condition we have

    ∀(x, t) ∈ X+: ⟨ω̂′(z+), x − z+⟩ + t − t+ ≥ 0,

whence

    ∀x ∈ X: Ψ(x) − Ψ(z+) + ⟨ω̂′(z+), x − z+⟩ ≥ 0,

implying that Ψ′ := −ω̂′(z+) ∈ ∂XΨ(z+), so that Ψ′ + ω′(z+) + φ′(z+) = Ψ′ + ω̂′(z+) = 0, implying that (5.9.1) holds true. Finally, assuming that Ψ has a derivative at z+, the convex function Ψ(x) + ω(x) + φ(x), which achieves its minimum on X at z+, is differentiable at z+, the derivative being ∇Ψ(z+) + ω′(z+) + φ′(z+), and the validity of (5.9.1) with Ψ′ = ∇Ψ(z+) is readily given by optimality conditions. □

5.9.2 Justifying Ball setup


Justification of Ball setup is trivial.

5.9.3 Justifying Entropy setup


For x ∈ Δ_n^+ and h ∈ R^n, setting z_i = (x_i + δ/n)/(1 + δ), so that 0 < z ∈ Δ_n, we have

Σ_i |h_i| = Σ_i [|h_i|/√z_i] √z_i ≤ √(Σ_i h_i^2/z_i) √(Σ_i z_i) ≤ √(Σ_i h_i^2/z_i) = √(h^T ω″(x)h),

that is,

h^T ω″(x)h ≥ ‖h‖_1^2,

which immediately implies that ω is strongly convex, with modulus 1, w.r.t. ‖·‖_1.
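The displayed Cauchy–Schwarz argument is easy to sanity-check numerically. A minimal sketch, not part of the original argument, with arbitrary choices n = 8 and δ = 0.1, and with ω″(x) taken to be the diagonal matrix Diag{1/z_i} implied by the computation above:

```python
import numpy as np

rng = np.random.default_rng(0)
n, delta = 8, 0.1          # arbitrary dimension and regularization level

for _ in range(100):
    x = rng.random(n)
    x /= x.sum()                          # a point of the simplex
    h = rng.standard_normal(n)            # an arbitrary direction
    z = (x + delta / n) / (1 + delta)     # 0 < z_i, sum(z) = 1
    quad = np.sum(h**2 / z)               # h^T omega''(x) h
    assert quad >= np.sum(np.abs(h))**2 - 1e-9   # >= ||h||_1^2
```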
532 LECTURE 5. SIMPLE METHODS FOR LARGE-SCALE PROBLEMS

Now let us upper-bound Θ. Given x, x′ ∈ Δ_n^+ and setting y_i = x_i + δ/n, y′_i = x′_i + δ/n, we have

V_x(x′) = (1 + δ) [Σ_i y′_i ln(y′_i) − Σ_i y_i ln(y_i) − Σ_i [y′_i − y_i][1 + ln(y_i)]]
        = (1 + δ) [Σ_i [y_i − y′_i] + Σ_i y′_i ln(y′_i/y_i)] ≤ (1 + δ) [1 + Σ_i y′_i ln((1 + δ/n)n/δ)]
        ≤ (1 + δ) [1 + (1 + δ) ln(2n/δ)],
and (5.3.8) follows. When justifying the Entropy setup, the only not completely trivial task is to check that ω(x) = Σ_{i=1}^n x_i ln(x_i) is strongly convex, modulus 1, w.r.t. ‖·‖_1, on the set Δ^o = {x ∈ Δ_n^+ : x > 0}. An evident sufficient condition for this is the relation

D²ω(x)[h, h] := (d²/dt²)|_{t=0} ω(x + th) ≥ ‖h‖_1^2   ∀(x, h : x > 0, Σ_i x_i ≤ 1)   (5.9.2)

(why?) To verify (5.9.2), observe that for x ∈ Δ^o we have

‖h‖_1^2 = (Σ_i |h_i|)^2 = (Σ_i [|h_i|/√x_i] √x_i)^2 ≤ (Σ_i h_i^2/x_i)(Σ_i x_i) ≤ Σ_i h_i^2/x_i = h^T ω″(x)h.

5.9.4 Justifying `1 /`2 setup


All we need is given by the following two claims:
Theorem 5.9.1 Let E = R^{k_1} × ... × R^{k_n}, let ‖·‖ be the ℓ1/ℓ2 norm on E, and let Z be the unit ball of the norm ‖·‖. Then the function

ω(x = [x^1; ...; x^n]) = (1/(pγ)) Σ_{j=1}^n ‖x^j‖_2^p : Z → R,
p = { 2, n ≤ 2;  1 + 1/ln(n), n ≥ 3 },   γ = { 1, n = 1;  1/2, n = 2;  1/(e ln(n)), n > 2 }   (5.9.3)

is a continuously differentiable DGF for Z compatible with ‖·‖. As a result, whenever X ⊂ Z is a nonempty closed convex subset of Z, the restriction ω^X of ω to X is a DGF for X compatible with ‖·‖, and the ω^X-radius of X can be bounded as

Ω ≤ √(2e ln(n + 1)).   (5.9.4)

Corollary 5.9.1 Let E and ‖·‖ be as in Theorem 5.9.1. Then the function

ω̂(x = [x^1; ...; x^n]) = (n^{(p−1)(2−p)/p}/(2γ)) [Σ_{j=1}^n ‖x^j‖_2^p]^{2/p} : E → R   (5.9.5)

with p, γ given by (5.9.3) is a continuously differentiable DGF for E compatible with ‖·‖. As a result, whenever X is a nonempty closed subset of E, the restriction ω̂^X of ω̂ onto X is a DGF for X compatible with ‖·‖. When, in addition, X ⊂ {x ∈ E : ‖x‖ ≤ R} with R < ∞, the ω̂^X-radius of X can be bounded as

Ω ≤ O(1) √(ln(n + 1)) R.   (5.9.6)

5.9.4.1 Proof of Theorem 5.9.1


We have for x = [x^1; ...; x^n] ∈ Z′ = {x ∈ Z : x^j ≠ 0 ∀j} and h = [h^1; ...; h^n] ∈ E:

γ Dω(x)[h] = Σ_{j=1}^n ‖x^j‖_2^{p−2} ⟨x^j, h^j⟩,

γ D²ω(x)[h, h] = −(2 − p) Σ_{j=1}^n ‖x^j‖_2^{p−4} [⟨x^j, h^j⟩]^2 + Σ_{j=1}^n ‖x^j‖_2^{p−2} ‖h^j‖_2^2
              ≥ Σ_{j=1}^n ‖x^j‖_2^{p−2} ‖h^j‖_2^2 − (2 − p) Σ_{j=1}^n ‖x^j‖_2^{p−4} ‖x^j‖_2^2 ‖h^j‖_2^2
              = (p − 1) Σ_{j=1}^n ‖x^j‖_2^{p−2} ‖h^j‖_2^2,

whence

[Σ_j ‖h^j‖_2]^2 = [Σ_{j=1}^n [‖h^j‖_2 ‖x^j‖_2^{(p−2)/2}] ‖x^j‖_2^{(2−p)/2}]^2 ≤ [Σ_{j=1}^n ‖h^j‖_2^2 ‖x^j‖_2^{p−2}] [Σ_{j=1}^n ‖x^j‖_2^{2−p}],

so that

[Σ_j ‖h^j‖_2]^2 ≤ [Σ_{j=1}^n ‖x^j‖_2^{2−p}] (γ/(p − 1)) D²ω(x)[h, h].
5.9. APPENDIX: SOME PROOFS 533

Setting t_j = ‖x^j‖_2 ≥ 0, we have Σ_j t_j ≤ 1, whence, due to 0 ≤ 2 − p ≤ 1, it holds Σ_j t_j^{2−p} ≤ n^{p−1}. Thus,

[Σ_j ‖h^j‖_2]^2 ≤ n^{p−1} (γ/(p − 1)) D²ω(x)[h, h],   (5.9.7)

while

max_{x∈Z} ω(x) − min_{x∈Z} ω(x) ≤ 1/(γp).   (5.9.8)

With p, γ as stated in Theorem 5.9.1, when n ≥ 3 we get (γ/(p − 1)) n^{p−1} = 1, and similarly for n = 1, 2.
Consequently,

∀(x ∈ Z′, h = [h^1; ...; h^n]) : [Σ_{j=1}^n ‖h^j‖_2]^2 ≤ D²ω(x)[h, h].   (5.9.9)

Since ω(·) is continuously differentiable and the complement of Z′ in Z is the union of finitely many proper linear subspaces of E, (5.9.9) implies that ω is strongly convex on Z, modulus 1, w.r.t. the ℓ1/ℓ2 norm ‖·‖. Besides this, we have
 
1/(γp) = 1/2 for n = 1,  1/(γp) = 1 for n = 2,  1/(γp) ≤ e ln(n) for n ≥ 3;  in all cases 1/(γp) ≤ O(1) ln(n + 1),

which combines with (5.9.8) to imply (5.9.4). 2
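Inequality (5.9.9) can be sanity-checked numerically via the explicit expression for γD²ω(x)[h, h] from the beginning of the proof. A sketch under arbitrary choices (n = 5 blocks of dimension 3; sample points are scaled into the unit ℓ1/ℓ2 ball, so blocks are nonzero):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 5, 3                               # n >= 3 blocks of dimension k
p = 1 + 1 / np.log(n)
gamma = 1 / (np.e * np.log(n))

for _ in range(100):
    x = rng.standard_normal((n, k))
    x /= 2 * np.sum(np.linalg.norm(x, axis=1))   # ||x||_{1,2} = 1/2 < 1
    h = rng.standard_normal((n, k))
    nx = np.linalg.norm(x, axis=1)
    dots = np.sum(x * h, axis=1)
    # D^2 omega(x)[h,h], from the formula at the start of the proof
    d2 = np.sum((p - 2) * nx**(p - 4) * dots**2
                + nx**(p - 2) * np.sum(h * h, axis=1)) / gamma
    lhs = np.sum(np.linalg.norm(h, axis=1))**2   # [sum_j ||h^j||_2]^2
    assert lhs <= d2 + 1e-9
```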

5.9.4.2 Proof of Corollary 5.9.1


Let

w(x = [x^1; ...; x^n]) = (1/(γp)) Σ_{j=1}^n ‖x^j‖_2^p : E := R^{k_1} × ... × R^{k_n} → R   (5.9.10)

with γ, p given by (5.9.3); note that p ∈ (1, 2]. As we know from Theorem 5.9.1, the restriction of w(·) onto the unit ball of the ℓ1/ℓ2 norm ‖·‖ is continuously differentiable and strongly convex, modulus 1, w.r.t. ‖·‖. Besides this, w(·) clearly is nonnegative and absolutely homogeneous of degree p: w(λx) ≡ |λ|^p w(x). Note that the outlined properties clearly imply that w(·) is convex and continuously differentiable on the entire E. We need the following simple

Lemma 5.9.2 Let H be a Euclidean space, ‖·‖_H be a norm (not necessarily the Euclidean one) on H, and let Y_H = {y ∈ H : ‖y‖_H ≤ 1}. Let, further, ζ : H → R be a nonnegative continuously differentiable function, absolutely homogeneous of a degree p ∈ (1, 2], and strongly convex, modulus 1, w.r.t. ‖·‖_H, on Y_H. Then the function

ζ̂(y) = ζ^{2/p}(y) : H → R

is continuously differentiable, homogeneous of degree 2, and strongly convex, modulus

β = (2/p) [min_{y:‖y‖_H=1} ζ^{2/p−1}(y)],

w.r.t. ‖·‖_H, on the entire H.

Proof. The facts that ζ̂(·) is a continuously differentiable, convex, and homogeneous of degree 2 function on H are evident, and all we need is to verify that the function is strongly convex, modulus β, w.r.t. ‖·‖_H:

⟨ζ̂′(y) − ζ̂′(y′), y − y′⟩ ≥ β ‖y − y′‖_H^2   ∀y, y′.   (5.9.11)

It is immediately seen that for a continuously differentiable function ζ̂, the target property is local: to establish (5.9.11), it suffices to verify that every ȳ ∈ H\{0} admits a neighbourhood U such that the inequality in (5.9.11) holds true for all y, y′ ∈ U. By evident homogeneity reasons, in the case of a homogeneous of degree 2 function ζ̂, to establish the latter property for every ȳ ∈ H\{0} is the same as to prove it in the case of ‖ȳ‖_H = 1 − ε, for any fixed ε ∈ (0, 1/3). Thus, let us fix an ε ∈ (0, 1/3) and ȳ ∈ H with ‖ȳ‖_H = 1 − ε, and let U = {y′ : ‖y′ − ȳ‖_H ≤ ε/2}, so that U is a compact set contained in the interior of Y_H. Now let φ(·) be a nonnegative C^∞ function on H which vanishes outside of Y_H and has unit integral w.r.t. the Lebesgue measure on H. For 0 < r ≤ 1, let us set

φ_r(y) = r^{−dim H} φ(y/r),   ζ_r(·) = (ζ ∗ φ_r)(·),   ζ̂_r(·) = ζ_r^{2/p}(·),

where ∗ stands for convolution. As r → +0, the nonnegative C^∞ function ζ_r(·) converges with the first order derivative, uniformly on U, to ζ(·), and ζ̂_r(·) converges with the first order derivative, uniformly on U, to ζ̂(·). Besides this, when r < ε/2, the function ζ_r(·) is, along with ζ(·), strongly convex, modulus 1, w.r.t. ‖·‖_H, on U (since ζ_r′(·) is the convolution of ζ′(·) and φ_r(·)). Since ζ_r is C^∞, we conclude that when r < ε/2, we have

D²ζ_r(y)[h, h] ≥ ‖h‖_H^2   ∀(y ∈ U, h ∈ H),

whence also

D²ζ̂_r(y)[h, h] = (2/p)(2/p − 1) ζ_r^{2/p−2}(y) [Dζ_r(y)[h]]^2 + (2/p) ζ_r^{2/p−1}(y) D²ζ_r(y)[h, h]
               ≥ (2/p) ζ_r^{2/p−1}(y) ‖h‖_H^2   ∀(h ∈ H, y ∈ U).

It follows that when r < ε/2, we have

⟨ζ̂_r′(y) − ζ̂_r′(y′), y − y′⟩ ≥ β_r[U] ‖y − y′‖_H^2   ∀y, y′ ∈ U,

β_r[U] = (2/p) min_{u∈U} ζ_r^{2/p−1}(u).

As r → +0, ζ̂_r′(·) uniformly on U converges to ζ̂′(·), and β_r[U] converges to β[U] = (2/p) min_{u∈U} ζ^{2/p−1}(u) ≥ (1 − 3ε/2)^{2−p} β (recall the origin of β and U and note that ζ is absolutely homogeneous of degree p), and we arrive at the relation

⟨ζ̂′(y) − ζ̂′(y′), y − y′⟩ ≥ (1 − 3ε/2)^{2−p} β ‖y − y′‖_H^2   ∀y, y′ ∈ U.

As we have already explained, the just established validity of the latter relation for every U = {y : ‖y − ȳ‖_H ≤ ε/2} and every ȳ with ‖ȳ‖_H = 1 − ε implies that ζ̂ is strongly convex, modulus (1 − 3ε/2)^{2−p} β, on the entire H. Since ε > 0 is arbitrary, Lemma 5.9.2 is proved. 2

B. Applying Lemma 5.9.2 to E in the role of H, the ℓ1/ℓ2 norm ‖·‖ in the role of ‖·‖_H, and the function w(·) given by (5.9.10) in the role of ζ, we conclude that the function ŵ(·) ≡ w^{2/p}(·) is strongly convex, modulus β = (2/p) [min_y {w(y) : y ∈ E, ‖y‖ = 1}]^{2/p−1}, w.r.t. ‖·‖, on the entire space E, whence the function β^{−1}ŵ(z) is continuously differentiable and strongly convex, modulus 1, w.r.t. ‖·‖, on the entire E. This observation, in view of the evident relation β = (2/p)(n^{1−p} γ^{−1} p^{−1})^{2/p−1} and the origin of γ, p, immediately implies all claims in Corollary 5.9.1. 2
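The conclusion of Corollary 5.9.1, strong convexity of (5.9.5), modulus 1, w.r.t. the ℓ1/ℓ2 norm on all of E, can be probed through the equivalent gradient-monotonicity inequality ⟨ω̂′(x) − ω̂′(y), x − y⟩ ≥ ‖x − y‖². A sketch under arbitrary block choices; the gradient formula below is obtained by hand from (5.9.5) via the chain rule and is an assumption of this sketch, not a formula from the text:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 5, 3
p = 1 + 1 / np.log(n)
gamma = 1 / (np.e * np.log(n))
c = n**((p - 1) * (2 - p) / p) / (2 * gamma)   # coefficient in (5.9.5)

def grad(x):
    # gradient of c * (sum_j ||x^j||_2^p)^(2/p), derived by the chain rule
    nx = np.linalg.norm(x, axis=1)
    S = np.sum(nx**p)
    return 2 * c * S**(2 / p - 1) * nx[:, None]**(p - 2) * x

def norm12(x):                                 # the l1/l2 norm
    return np.sum(np.linalg.norm(x, axis=1))

for _ in range(100):
    x, y = rng.standard_normal((2, n, k))
    lhs = np.sum((grad(x) - grad(y)) * (x - y))
    assert lhs >= (1 - 1e-9) * norm12(x - y)**2
```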

5.9.5 Justifying Nuclear norm setup


All we need is given by the following two claims:

Theorem 5.9.2 Let µ = (µ₁, ..., µₙ) and ν = (ν₁, ..., νₙ) be collections of positive integers such that µ_j ≤ ν_j for all j, and let

m = Σ_{j=1}^n µ_j.

With E = M^{µν} and ‖·‖ = ‖·‖_nuc, let Z = {x ∈ E : ‖x‖_nuc ≤ 1}. Then the function

ω(x) = (4√e ln(2m)/(2^q (1 + q))) Σ_{i=1}^m σ_i^{1+q}(x) : Z → R,   q = 1/(2 ln(2m)),   (5.9.12)

is a continuously differentiable DGF for Z compatible with ‖·‖_nuc, so that for every nonempty closed convex subset X of Z, the function ω^X is a DGF for X compatible with ‖·‖_nuc. Besides this, the ω^X-radius of X can be bounded as

Ω ≤ 2√(2√e ln(2m)) ≤ 4√(ln(2m)).   (5.9.13)

Corollary 5.9.2 Let µ, ν, m, E, ‖·‖ be as in Theorem 5.9.2. Then the function

ω̂(x) = 2e ln(2m) [Σ_{j=1}^m σ_j^{1+q}(x)]^{2/(1+q)} : E → R,   q = 1/(2 ln(2m))   (5.9.14)

is a continuously differentiable DGF for E compatible with ‖·‖_nuc, so that for every nonempty closed convex set X ⊂ E, the function ω̂^X is a DGF for X compatible with ‖·‖_nuc. Besides this, if X is bounded, the ω̂^X-radius of X can be bounded as

Ω ≤ O(1) √(ln(2m)) R,   (5.9.15)

where R < ∞ is such that X ⊂ {x ∈ E : ‖x‖_nuc ≤ R}.

5.9.5.1 Proof of Theorem 5.9.2


A. As usual, let Sn be the space of n × n symmetric matrices equipped with the Frobenius inner product;
for y ∈ Sn , let λ(y) be the vector of eigenvalues of y (taken with their multiplicities in the non-ascending
order), and let |y|1 = kλ(y)k1 be the trace norm.

Lemma 5.9.3 Let N ≥ M ≥ 2, and let F be a linear subspace in SN such that every matrix y ∈ F has
at most M nonzero eigenvalues. Let q = 1/(2 ln(M)), so that 0 < q < 1, and let

ω̂(y) = (4√e ln(M)/(1 + q)) Σ_{j=1}^N |λ_j(y)|^{1+q} : S^N → R.

The function ω̂(·) is continuously differentiable and convex, and its restriction to the set Y_F = {y ∈ F : |y|_1 ≤ 1} is strongly convex, modulus 1, w.r.t. |·|_1.

Proof. 1⁰. Let 0 < q < 1. Consider the following function of y ∈ S^N:

χ(y) = (1/(1 + q)) Σ_{j=1}^N |λ_j(y)|^{1+q} = Tr(f(y)),   f(s) = |s|^{1+q}/(1 + q).   (5.9.16)

2⁰. The function f(s) is continuously differentiable on the axis and twice continuously differentiable outside of the origin; consequently, we can find a sequence of polynomials f_k(s) converging, as k → ∞, to f along with their first derivatives uniformly on every compact subset of R and, besides this, converging to f uniformly along with the first and the second derivative on every compact subset of R\{0}. Now let y, h ∈ S^N, let y = u Diag{λ} u^T be the eigenvalue decomposition of y, and let ĥ = u^T h u, so that h = u ĥ u^T. For a polynomial

p(s) = Σ_{k=0}^K p_k s^k, setting P(w) = Tr(p(w)) : S^N → R, and denoting by γ a closed contour in C encircling the spectrum of y, we have

(a) P(y) = Tr(p(y)) = Σ_{j=1}^N p(λ_j(y)),

(b) DP(y)[h] = Σ_{k=1}^K k p_k Tr(y^{k−1} h) = Tr(p′(y)h) = Σ_{j=1}^N p′(λ_j(y)) ĥ_{jj},

(c) D²P(y)[h, h] = (d/dt)|_{t=0} DP(y + th)[h] = (d/dt)|_{t=0} Tr(p′(y + th)h)
    = (d/dt)|_{t=0} (1/(2πı)) ∮_γ Tr(h(zI − (y + th))^{−1}) p′(z) dz = (1/(2πı)) ∮_γ Tr(h(zI − y)^{−1} h(zI − y)^{−1}) p′(z) dz
    = (1/(2πı)) ∮_γ Σ_{i,j=1}^N ĥ_{ij}^2 p′(z)/((z − λ_i(y))(z − λ_j(y))) dz = Σ_{i,j=1}^N ĥ_{ij}^2 Γ_{ij},

Γ_{ij} = [p′(λ_i(y)) − p′(λ_j(y))]/(λ_i(y) − λ_j(y)) when λ_i(y) ≠ λ_j(y),   Γ_{ij} = p″(λ_i(y)) when λ_i(y) = λ_j(y).

We conclude from (a, b) that as k → ∞, the real-valued polynomials F_k(·) = Tr(f_k(·)) on S^N converge, along with their first order derivatives, uniformly on every bounded subset of S^N, and the limit of the sequence, by (a), is exactly χ(·). Thus, χ(·) is continuously differentiable, and (b) says that

Dχ(y)[h] = Σ_{j=1}^N f′(λ_j(y)) ĥ_{jj}.   (5.9.17)

Besides this, (a–c) say that if U is a closed convex set in S^N which does not contain singular matrices, then the F_k(·), as k → ∞, converge along with the first and the second derivative uniformly on every compact subset of U, so that χ(·) is twice continuously differentiable on U, and at every point y ∈ U we have

D²χ(y)[h, h] = Σ_{i,j=1}^N ĥ_{ij}^2 Γ_{ij},   Γ_{ij} = [f′(λ_i(y)) − f′(λ_j(y))]/(λ_i(y) − λ_j(y)) when λ_i(y) ≠ λ_j(y),  Γ_{ij} = f″(λ_i(y)) when λ_i(y) = λ_j(y),   (5.9.18)

and in particular χ(·) is convex on U.


3⁰. We intend to prove that (i) χ(·) is convex, and (ii) its restriction to the set Y_F is strongly convex, with certain modulus α > 0, w.r.t. the trace norm |·|_1. Since χ is continuously differentiable, all we need to prove (i) is to verify that

⟨χ′(y′) − χ′(y″), y′ − y″⟩ ≥ 0   (∗)

for a dense, in S^N × S^N, set of pairs (y′, y″), e.g., those with nonsingular y′ − y″. For a pair of the latter type, the polynomial q(t) = Det(y′ + t(y″ − y′)) of t ∈ R is not identically zero and thus has finitely many roots on [0, 1]. In other words, we can find finitely many points t₀ = 0 < t₁ < ... < tₙ = 1 such that all "matrix intervals" ∆_i = (y_i, y_{i+1}), y_k = y′ + t_k(y″ − y′), 0 ≤ i ≤ n − 1, are comprised of nonsingular matrices. Therefore χ is convex on every closed segment contained in one of the ∆_i's, and since χ is continuously differentiable, (∗) follows.
4⁰. Now let us prove that with properly defined α > 0 one has

⟨χ′(y′) − χ′(y″), y′ − y″⟩ ≥ α |y′ − y″|_1^2   ∀y′, y″ ∈ Y_F.   (5.9.19)

Let ε > 0, and let Y^ε be a convex neighbourhood of Y_F, open in Y = {y : |y|_1 ≤ 1}, such that for all y ∈ Y^ε at most M eigenvalues of y are of magnitude > ε. We intend to prove that for some α_ε > 0 one has

⟨χ′(y′) − χ′(y″), y′ − y″⟩ ≥ α_ε |y′ − y″|_1^2   ∀y′, y″ ∈ Y^ε.   (5.9.20)
Same as above, it suffices to verify this relation for a dense, in Y^ε × Y^ε, set of pairs y′, y″ ∈ Y^ε, e.g., for those pairs y′, y″ ∈ Y^ε for which y′ − y″ is nonsingular. Defining matrix intervals ∆_i as above and taking into account continuous differentiability of χ, it suffices to verify that if y ∈ ∆_i and h = y′ − y″, then D²χ(y)[h, h] ≥ α_ε |h|_1^2. To this end observe that by (5.9.18) all we have to prove is that

D²χ(y)[h, h] = Σ_{i,j=1}^N ĥ_{ij}^2 Γ_{ij} ≥ α_ε |h|_1^2.   (#)

5⁰. Setting λ_j = λ_j(y), observe that λ_i ≠ 0 for all i due to the origin of y. We claim that if |λ_i| ≥ |λ_j|, then Γ_{ij} ≥ q|λ_i|^{q−1}. Indeed, the latter relation definitely holds true when λ_i = λ_j. Now, if λ_i and λ_j are of the same sign, then Γ_{ij} = (|λ_i|^q − |λ_j|^q)/(|λ_i| − |λ_j|) ≥ q|λ_i|^{q−1}, since the derivative of the concave (recall that 0 < q ≤ 1) function t^q of t > 0 is positive and nonincreasing. If λ_i and λ_j are of different signs, then Γ_{ij} = (|λ_i|^q + |λ_j|^q)/(|λ_i| + |λ_j|) ≥ |λ_i|^{q−1} due to |λ_j|^q ≥ |λ_j||λ_i|^{q−1}, and therefore Γ_{ij} ≥ q|λ_i|^{q−1}. Thus, our claim is justified.
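The claim of 5⁰ is a one-dimensional statement about divided differences of f′(s) = sign(s)|s|^q and is easy to test numerically; in the sketch below q = 0.3 is an arbitrary value in (0, 1), and the two magnitudes are kept well separated to avoid cancellation in the difference quotient:

```python
import numpy as np

rng = np.random.default_rng(4)
q = 0.3                                        # arbitrary q in (0, 1)

def fprime(s):                                 # derivative of |s|^(1+q)/(1+q)
    return np.sign(s) * np.abs(s)**q

for _ in range(1000):
    a = rng.uniform(0.5, 1.0)                  # |lambda_i|
    b = rng.uniform(0.01, 0.4)                 # |lambda_j| < |lambda_i|
    for s in (1.0, -1.0):                      # same and opposite signs
        li, lj = a, s * b
        Gamma = (fprime(li) - fprime(lj)) / (li - lj)
        assert Gamma >= q * li**(q - 1) - 1e-12
```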
W.l.o.g. we can assume that the positive reals µ_i = |λ_i|, i = 1, ..., N, form a nondecreasing sequence, so that, by the above, Γ_{ij} ≥ qµ_j^{q−1} when i ≤ j. Besides this, at most M of the µ_j are ≥ ε, since y′, y″ ∈ Y^ε and therefore y ∈ Y^ε by convexity of Y^ε. By the above,

D²χ(y)[h, h] ≥ 2q Σ_{i<j≤N} ĥ_{ij}^2 µ_j^{q−1} + q Σ_{j=1}^N ĥ_{jj}^2 µ_j^{q−1},

or, equivalently, by symmetry of ĥ: if h^j denotes the N × N symmetric matrix whose entries in the cells (i, j) and (j, i), i ≤ j, coincide with the corresponding entries ĥ_{ij} of ĥ, while all remaining entries vanish, and H_j = ‖h^j‖_Fro is the Frobenius norm of h^j, then

D²χ(y)[h, h] ≥ q Σ_{j=1}^N H_j^2 µ_j^{q−1}.   (5.9.21)
Observe that the h^j are of rank ≤ 2, whence |h^j|_1 ≤ √2 ‖h^j‖_Fro = √2 H_j, so that

|h|_1^2 = |ĥ|_1^2 ≤ [Σ_{j=1}^N |h^j|_1]^2 ≤ 2 [Σ_{j=1}^N H_j]^2 = 2 [Σ_j [H_j µ_j^{(q−1)/2}] µ_j^{(1−q)/2}]^2
       ≤ 2 [Σ_{j=1}^N H_j^2 µ_j^{q−1}] [Σ_{j=1}^N µ_j^{1−q}]
       ≤ (2/q) D²χ(y)[h, h] [Σ_{j=1}^N µ_j^{1−q}]   [by (5.9.21)]
       ≤ (2/q) D²χ(y)[h, h] [(N − M)ε^{1−q} + [M^{−1} Σ_{j=N−M+1}^N µ_j]^{1−q} M]   [due to 0 < q < 1 and µ_j ≤ ε, j ≤ N − M]
       ≤ (2/q) D²χ(y)[h, h] [(N − M)ε^{1−q} + M^q]   [since Σ_j µ_j ≤ 1 due to y ∈ Y^ε ⊂ {y : |y|_1 ≤ 1}],
and we see that (#) holds true with α_ε = q/(2[(N − M)ε^{1−q} + M^q]). As a result, with the just defined α_ε, relation (5.9.20) holds true, whence (5.9.19) is satisfied with α = lim_{ε→+0} α_ε = (q/2) M^{−q}; that is, χ is a continuously differentiable convex function which is strongly convex, modulus α, w.r.t. |·|_1, on Y_F. Recalling the definition of q, we see that ω̂(·) = α^{−1} χ(·), so that ω̂ satisfies the conclusion of Lemma 5.9.3. 2

B. We are ready to prove Theorem 5.9.2. Under the premise and in the notation of the Theorem, let

N = Σ_j µ_j + Σ_j ν_j,   M = 2 Σ_j µ_j = 2m,   Ax = (1/2) [0, x; x^T, 0] ∈ S^N   [x ∈ E = M^{µν}].   (5.9.22)

Observe that the image space F of A is a linear subspace of S^N, and that the eigenvalues of y = Ax are the 2m reals ±σ_i(x)/2, 1 ≤ i ≤ m, together with N − M zeros, so that ‖x‖_nuc ≡ |Ax|_1 and M, F satisfy the premise of Lemma 5.9.3. Setting

ω(x) = ω̂(Ax) = (4√e ln(2m)/(2^q (1 + q))) Σ_{i=1}^m σ_i^{1+q}(x),   q = 1/(2 ln(2m)),

and invoking Lemma 5.9.3, we see that ω is a convex continuously differentiable function on E which, due to the identity ‖x‖_nuc ≡ |Ax|_1, is strongly convex, modulus 1, w.r.t. ‖·‖_nuc, on the ‖·‖_nuc-unit ball Z of E. This function clearly satisfies (5.9.13). 2
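The spectral facts used here, that the eigenvalues of Ax are ±σ_i(x)/2 together with N − M zeros, whence |Ax|_1 = ‖x‖_nuc, are easy to confirm numerically. A sketch for a single rectangular block (µ = 4, ν = 6 are arbitrary sizes; the general E = M^{µν} is block-diagonal in such blocks):

```python
import numpy as np

rng = np.random.default_rng(5)
mu, nu = 4, 6                                  # one mu x nu block, so m = mu
x = rng.standard_normal((mu, nu))

# Ax = (1/2) [[0, x], [x^T, 0]] in S^{mu+nu}
Ax = 0.5 * np.block([[np.zeros((mu, mu)), x],
                     [x.T, np.zeros((nu, nu))]])
eigs = np.sort(np.linalg.eigvalsh(Ax))
sigma = np.linalg.svd(x, compute_uv=False)     # singular values of x

expected = np.sort(np.concatenate([sigma / 2, -sigma / 2,
                                   np.zeros(nu - mu)]))
assert np.allclose(eigs, expected, atol=1e-10)
assert np.isclose(np.abs(eigs).sum(), sigma.sum())   # |Ax|_1 = ||x||_nuc
```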

5.9.5.2 Proof of Corollary 5.9.2


Corollary 5.9.2 is obtained from Theorem 5.9.2 in exactly the same fashion as Corollary 5.9.1 was derived
from Theorem 5.9.1, see section 5.9.4.
Solutions to Selected Exercises

6.1 Exercises for Lecture 1


6.1.1 Around Theorem on Alternative
Exercise 1.4, 1) and 3) Derive the following statements from the General Theorem on Alternative:
1. [Gordan’s Theorem on Alternative] One of the inequality systems
(I) Ax < 0, x ∈ R^n,
(II) A^T y = 0, 0 ≠ y ≥ 0, y ∈ R^m,
(A being an m × n matrix, x being the variables in (I), y the variables in (II)) has a solution if and only if the other one has no solutions.

Solution: By GTA, (I) has no solutions if and only if a weighted sum [A^T y]^T x Ω 0 of the inequalities of (I), with y ≥ 0 and Ω = "<" when y ≠ 0, Ω = "≤" when y = 0, is a contradictory inequality, which happens iff Ω = "<" and A^T y = 0. Thus, Ax < 0 has no solutions iff there exists a nonzero y ≥ 0 such that A^T y = 0. 2
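Gordan's alternative can be illustrated on two concrete matrices, one for each branch (these matrices are ad-hoc examples, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(6)

# A1: (I) is solvable, so (II) must not be
A1 = np.eye(2)
x = np.array([-1.0, -1.0])
assert np.all(A1 @ x < 0)                      # (I) holds
# (the only y >= 0 with A1^T y = 0 is y = 0, so (II) indeed fails)

# A2: rows come in opposite pairs, so (II) holds and (I) fails
A2 = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
y = np.ones(4)                                 # nonzero, y >= 0
assert np.allclose(A2.T @ y, 0)                # (II) holds
for _ in range(1000):
    x = rng.standard_normal(2)
    assert np.max(A2 @ x) >= 0                 # A2 @ x < 0 is impossible
```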

3. [Motzkin’s Theorem on Alternative] The system


Sx < 0, N x ≤ 0
in variables x has no solutions if and only if the system
S T σ + N T ν = 0, σ ≥ 0, ν ≥ 0, σ 6= 0
in variables σ, ν has a solution.
Solution: Same as above, non-existence of solutions to the first system is equivalent to
existence of nonnegative weights σ, ν such that the inequality [S T σ + N T ν]T xΩ0 with
Ω = ” < ” when σ 6= 0 and Ω = ” ≤ ” when σ = 0 is contradictory. The only one of
these two options which can take place is that S T σ + N T ν = 0 and σ 6= 0. 2

6.1.2 Around cones


Exercise 1.8.2-3) 2) Let K be a cone in R^n and u ↦ Au be a linear mapping from certain R^k to R^n with trivial null space and such that Im A ∩ int K ≠ ∅. Prove that the inverse image of K under the mapping, i.e., the set

A^{-1}(K) = {u | Au ∈ K},

is a cone in R^k. Prove that the cone dual to A^{-1}(K) is the image of K_* under the mapping λ ↦ A^T λ:

(A^{-1}(K))_* = {A^T λ | λ ∈ K_*}.


Solution: The fact that A^{-1}(K) is a cone is evident. Let us justify the announced description of the cone dual to A^{-1}(K). If λ ∈ K_*, then A^T λ clearly belongs to (A^{-1}(K))_*:

u ∈ A^{-1}(K) ⇒ Au ∈ K ⇒ λ^T(Au) = (A^T λ)^T u ≥ 0.

Now let us prove the inverse implication: if c ∈ (A^{-1}(K))_*, then c = A^T λ for some λ ∈ K_*. To this end consider the conic problem

min_x { c^T x | Ax ≥_K 0 }.

The problem is strictly feasible and below bounded (why?), so that by the Conic Duality Theorem the dual problem

max_λ { 0^T λ | A^T λ = c, λ ≥_{K_*} 0 }

is solvable, Q.E.D.
3) Let K be a cone in R^n and y = Ax be a linear mapping from R^n onto R^N (i.e., the image of A is the entire R^N). Assume that Null(A) ∩ K = {0}.
Prove that then the image of K under the mapping A, i.e., the set

AK = {Ax | x ∈ K},

is a cone in R^N.
Prove that the cone dual to AK is

(AK)_* = {λ ∈ R^N | A^T λ ∈ K_*}.

Demonstrate by example that if in the above statement the assumption Null(A) ∩ K = {0} is weakened to Null(A) ∩ int K = ∅, then the image of K under the mapping A may happen to be non-closed.
Solution: Let us temporarily set B = A^T (note that B has trivial null space, since A is an onto mapping) and

L = {λ ∈ R^N | Bλ ∈ K_*}.

Let us prove that the image of B intersects the interior of K_*. Indeed, otherwise we could separate the convex set int K_* from the linear subspace Im B: there would exist x ≠ 0 such that

inf_{µ ∈ int K_*} x^T µ ≥ sup_{µ ∈ Im B} x^T µ,

whence x ∈ (K_*)_* = K and x ∈ (Im B)^⊥ = Null(A), which is impossible.
It remains to apply to the cone K_* and the mapping B the rule on inverse images (rule 2)): according to this rule, the set L is a cone, and its dual cone is the image of (K_*)_* = K under the mapping B^T = A. Thus, A(K) indeed is a cone, namely, the cone dual to L, whence the cone dual to A(K) is L.
"Demonstrate by example...": When the 3D ice-cream cone is projected onto its tangent plane, the projection is an open half-plane plus a single point on the boundary of this half-plane, which is not a closed set.
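The example can be made fully concrete: take the Lorentz cone L³ = {(x, y, z) : z ≥ √(x² + y²)} and the mapping A(x, y, z) = (x, z − y), whose null space is the boundary ray {(0, t, t)} of the cone. The sketch below (this parametrization is mine, not from the text) verifies that every point (u, v) with v > 0 lies in the image, while the boundary point (1, 0) is only a limit of image points:

```python
import numpy as np

def in_cone(x, y, z, tol=1e-9):
    # membership in the Lorentz cone, up to a numerical tolerance
    return z + tol >= np.hypot(x, y)

def preimage(u, v):
    # an explicit preimage of (u, v), v > 0, under A(x, y, z) = (x, z - y)
    y = max(0.0, (u * u - v * v) / (2 * v))
    return u, y, y + v

for u, v in [(1.0, 0.5), (-2.0, 0.01), (0.0, 3.0)]:
    x, y, z = preimage(u, v)
    assert in_cone(x, y, z) and np.isclose(z - y, v)

# (1, 0) is not in the image: with x = 1, the infimum over the cone of z - y
# is inf_y sqrt(1 + y^2) - y = 0, and it is not attained
ys = 10.0 ** np.arange(0, 8)
gaps = np.sqrt(1 + ys**2) - ys
assert np.all(gaps > 0) and gaps[-1] < 1e-7
```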

Exercise 1.9 Let A be an m × n matrix of full column rank and K be a cone in R^m.

1. Prove that at least one of the following facts always takes place:
(i) There exists a nonzero x ∈ Im A which is ≥_K 0;
(ii) There exists a nonzero λ ∈ Null(A^T) which is ≥_{K_*} 0.
Geometrically: given a primal-dual pair of cones K, K_* and a pair L, L^⊥ of linear subspaces which are orthogonal complements of each other, we either can find a nontrivial ray in the intersection L ∩ K, or in the intersection L^⊥ ∩ K_*, or both.

2. Prove that the "strict" version of (ii) takes place (i.e., there exists λ ∈ Null(A^T) which is >_{K_*} 0) if and only if (i) does not take place, and vice versa: the strict version of (i) takes place if and only if (ii) does not take place.
Geometrically: if K, K_* is a primal-dual pair of cones and L, L^⊥ are linear subspaces which are orthogonal complements of each other, then the intersection L ∩ K is trivial (is the singleton {0}) if and only if the intersection L^⊥ ∩ int K_* is nonempty.

Solution: 1): Assuming that (ii) does not take place, note that AT K∗ is a cone in Rn
(Exercise 1.8), and the dual of this latter cone is the inverse image of K under the mapping
A. Since the dual must contain nonzero vectors, (i) takes place.
2): Let e >K∗ 0. Consider the conic problem

max_{λ,t} { t | AT λ = 0, λ − te ≥K∗ 0 }.

Note that this problem is strictly feasible, and the strict version of (ii) is equivalent to the
fact that the optimal value in the problem is > 0. Thus, (ii) is not valid if and only if the
optimal value in our strictly feasible maximization conic problem is ≤ 0. By Conic Duality
Theorem this is the case if and only if the dual problem

min_{z,µ} { 0 | Az + µ = 0, µT e = 1, µ ≥K 0 }

is solvable with optimal value equal to 0, which clearly is the case if and only if the intersection
of ImA and K is not the singleton {0}.

6.1.3 Feasible and level sets of conic problems


Exercise 1.15 Let the problem

min_x { cT x | Ax − b ≥K 0 }    (CP)

be feasible (A is of full column rank). Then the following properties are equivalent to each other:
(i) the feasible set of the problem is bounded;
(ii) the set of primal slacks Y = {y | y ≥K 0, y = Ax − b} is bounded;
(iii) Im A ∩ K = {0};
(iv) the system of vector inequalities

AT λ = 0, λ >K∗ 0

is solvable.
Corollary. The property of (CP) to have bounded feasible set is independent of the particular value
of b such that (CP) is feasible!

Solution: (i) ⇔ (ii): this is an immediate consequence of A being of full column rank.
(ii) ⇒ (iii): If there exists 0 6= y = Ax ≥K 0, then the set of primal slacks contains, along
with any of its points ȳ, the entire ray {ȳ + ty | t > 0}, so that the set of primal slacks is
unbounded (recall that it is nonempty – (CP) is feasible!). Thus, (iii) follows from (ii) (by
contradiction).
(iii) ⇒ (iv): see Exercise 1.9.2).
(iv) ⇒ (ii): let λ be given by (iv). For all primal slacks y = Ax − b ∈ K one has λT y =
λT [Ax − b] = (AT λ)T x − λT b = −λT b, and it remains to use the result of Exercise 1.7.2).

Exercise 1.16 Let the problem

min_x { cT x | Ax − b ≥K 0 }    (CP)

be feasible (A is of full column rank). Prove that the following two conditions are equivalent to each
other:
(i) (CP) has bounded level sets;
(ii) The dual problem

max_λ { bT λ | AT λ = c, λ ≥K∗ 0 }

is strictly feasible.
Corollary. The property of (CP) to have bounded level sets is independent of the particular value of
b such that (CP) is feasible!
Solution: (i) ⇒ (ii): Consider the linear vector inequality

Āx ≡ [Ax; −cT x] ≥K̄ 0,    K̄ = {(y, t) | y ≥K 0, t ≥ 0}.

If x̄ is a solution of this inequality and x is a feasible solution to (CP), then the entire ray
{x + tx̄ | t ≥ 0} is contained in the same level set of (CP) as x. Consequently, in the
case of (i) the only solution to the inequality is the trivial solution x̄ = 0. In other words,
the intersection of Im Ā with K̄ is trivial – {0}. Applying the result of Exercise 1.9.2), we
conclude that the system
   
ĀT [λ; µ] ≡ AT λ − µc = 0,    [λ; µ] >K̄∗ 0

is solvable; if (λ, µ) solves the latter system, then λ >K∗ 0 and µ > 0, so that µ−1 λ is a
strictly feasible solution to the dual problem.
(ii) ⇒ (i): If λ >K∗ 0 is feasible for the dual problem and x is feasible for (CP), then

λT (Ax − b) = (AT λ)T x − λT b = cT x − λT b.

We conclude that if x runs through a given level set L of (CP), the corresponding slacks
y = Ax − b belong to a set of the form {y ≥K 0, λT y ≤ const}. The sets of the latter type
are bounded in view of the result of Exercise 1.7.2) (recall that λ >K∗ 0). It remains to
note that in view of A being of full column rank boundedness of the image of L under the
mapping x 7→ Ax − b implies boundedness of L.

6.2 Exercises for Lecture 2


6.2.1 Optimal control in discrete time linear dynamic system
Consider a discrete time linear dynamic system
x(t) = A(t)x(t − 1) + B(t)u(t), t = 1, 2, ..., T;    x(0) = x0.    (S)
Here:
• t is the (discrete) time;
• x(t) ∈ Rl is the state vector: its value at instant t identifies the state of the controlled plant;
• u(t) ∈ Rk is the exogenous input at time instant t; {u(t)}Tt=1 is the control;
• For every t = 1, ..., T, A(t) is a given l × l matrix, and B(t) a given l × k matrix.
A typical problem of optimal control associated with (S) is to minimize a given functional of the trajectory
x(·) under given restrictions on the control. As a simple problem of this type, consider the optimization
model

min_{u(·)} { cT x(T) | (1/2) Σ_{t=1}^T uT(t)Q(t)u(t) ≤ w },    (OC)

where Q(t) are given positive definite symmetric matrices.

Exercise 2.1.1-3) 1) Use (S) to express x(T) via the control and convert (OC) into a quadratically
constrained problem with linear objective w.r.t. the u-variables.
Solution: From (S) it follows that

x(1) = A(1)x0 + B(1)u(1);
x(2) = A(2)x(1) + B(2)u(2) = A(2)A(1)x0 + B(2)u(2) + A(2)B(1)u(1);
...
x(T) = A(T)A(T − 1)···A(1)x0 + Σ_{t=1}^T A(T)A(T − 1)···A(t + 1)B(t)u(t)
     ≡ A(T)A(T − 1)···A(1)x0 + Σ_{t=1}^T C(t)u(t),    C(t) = A(T)A(T − 1)···A(t + 1)B(t).

Consequently, (OC) is equivalent to the problem

min_{u(·)} { Σ_{t=1}^T dtT u(t) | (1/2) Σ_{t=1}^T uT(t)Q(t)u(t) ≤ w }    [dt = CT(t)c]    (∗)

2) Convert the resulting problem to a conic quadratic program


Solution:

minimize   Σ_{t=1}^T dtT u(t)
s.t.       [2^{1/2} Q^{1/2}(t)u(t); 1 − s(t); 1 + s(t)] ≥_{L^{k+2}} 0,  t = 1, ..., T
           Σ_{t=1}^T s(t) ≤ w;

the design variables are {u(t) ∈ Rk, s(t) ∈ R}, t = 1, ..., T.
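As a sanity check on this reformulation (a sketch with randomly generated data): membership of [2^{1/2}Q^{1/2}u; 1 − s; 1 + s] in the Lorentz cone, i.e. 1 + s ≥ ‖[2^{1/2}Q^{1/2}u; 1 − s]‖2, is algebraically equivalent to uTQu ≤ 2s:

```python
import numpy as np

rng = np.random.default_rng(0)

def lorentz_member(u, s, Qh):
    """Is [sqrt(2) Q^{1/2} u; 1 - s; 1 + s] in the Lorentz cone? (Qh = Q^{1/2})"""
    head = np.concatenate([np.sqrt(2.0) * Qh @ u, [1.0 - s]])
    return 1.0 + s >= np.linalg.norm(head)

for _ in range(200):
    k = 4
    M = rng.standard_normal((k, k))
    Q = M @ M.T + np.eye(k)                 # random positive definite Q
    w, V = np.linalg.eigh(Q)
    Qh = V @ np.diag(np.sqrt(w)) @ V.T      # symmetric square root Q^{1/2}
    u = rng.standard_normal(k)
    s = rng.uniform(-1.0, 5.0)
    # conic membership <=> the quadratic constraint u^T Q u <= 2 s
    assert lorentz_member(u, s, Qh) == (u @ Q @ u <= 2.0 * s)
```

Squaring both sides of 1 + s ≥ ‖[2^{1/2}Q^{1/2}u; 1 − s]‖2 gives (1 + s)² − (1 − s)² = 4s ≥ 2uTQu, which is the constraint uT(t)Q(t)u(t) ≤ 2s(t) used above.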



3) Pass from the resulting problem to its dual and find the optimal solution to the latter problem.
Solution: The conic dual is

maximize   −wω − Σ_{t=1}^T [µ(t) + ν(t)]
s.t.       2^{1/2} Q^{1/2}(t)ξ(t) = dt,  t = 1, ..., T        [⇔ ξ(t) = 2^{−1/2} Q^{−1/2}(t)dt]
           −ω − µ(t) + ν(t) = 0,  t = 1, ..., T               [⇔ µ(t) = ν(t) − ω]
           (ξT(t)ξ(t) + µ²(t))^{1/2} ≤ ν(t),  t = 1, ..., T,

the variables being {ξ(t) ∈ Rk, µ(t), ν(t) ∈ R}, t = 1, ..., T, and ω ∈ R. An equivalent
reformulation of the dual problem is

minimize   wω + Σ_{t=1}^T [2ν(t) − ω]
s.t.       (a_t² + (ν(t) − ω)²)^{1/2} ≤ ν(t),  t = 1, ..., T   [a_t² = 2^{−1} dtT Q^{−1}(t)dt]

or, which is the same,

minimize   wω + Σ_{t=1}^T [2ν(t) − ω]
s.t.       ω(2ν(t) − ω) ≥ a_t²,  t = 1, ..., T,

or, which is the same,

min_{ω>0} { wω + ω^{−1} Σ_{t=1}^T a_t² }.

It follows that the optimal solution to the dual problem is

ω∗ = ( w^{−1} Σ_{t=1}^T a_t² )^{1/2};
ν∗(t) = (1/2)[ a_t² ω∗^{−1} + ω∗ ],  t = 1, ..., T;
µ∗(t) = ν∗(t) − ω∗ = (1/2)[ a_t² ω∗^{−1} − ω∗ ],  t = 1, ..., T.
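The closed form for ω∗ can be checked against a brute-force scan of the reduced one-dimensional problem (a sketch; the values of w and of the a_t² below are made up for the test):

```python
import numpy as np

w = 2.5
a2 = np.array([0.3, 1.1, 0.7])            # the quantities a_t^2 (made-up data)

def obj(omega):
    # reduced dual objective: w*omega + omega^{-1} * sum_t a_t^2
    return w * omega + a2.sum() / omega

omega_star = np.sqrt(a2.sum() / w)        # claimed minimizer

grid = np.linspace(0.05, 5.0, 100000)
assert obj(omega_star) <= obj(grid).min() + 1e-9
# optimal value is 2*sqrt(w * sum_t a_t^2)
assert abs(obj(omega_star) - 2.0 * np.sqrt(w * a2.sum())) < 1e-12
```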


6.2.2 Around stable grasp


Recall that the Stable Grasp Analysis problem is to check whether the system of constraints

kF i k2 ≤ µ(f i )T v i , i = 1, ..., N
(v i )T F i = 0, i = 1, ..., N
N
(f i + F i ) + F ext
P
= 0 (SG)
i=1
N
pi × (f i + F i ) + T ext
P
= 0
i=1

in the 3D vector variables F i is or is not solvable. Here the data are given by a number of 3D vectors,
namely,
• vectors v i – unit inward normals to the surface of the body at the contact points;

• contact points pi ;
• vectors f i – contact forces;
• vectors F ext and T ext of the external force and torque, respectively.
µ > 0 is a given friction coefficient; we assume that (f i)T v i > 0 for all i.

Exercise 2.2 Regarding (SG) as the system of constraints of a maximization program with trivial
objective, build the dual problem.
Solution:

minimize   Σ_{i=1}^N µ[(f i)T v i]φi − (Σ_{i=1}^N f i + F ext)T Φ − (Σ_{i=1}^N p i × f i + T ext)T Ψ
s.t.       Φi + σi v i + Φ − p i × Ψ = 0,  i = 1, ..., N
           ‖Φi‖2 ≤ φi,  i = 1, ..., N,
           [Φ, Φi, Ψ ∈ R3, σi, φi ∈ R]

which is equivalent to

min_{σi∈R, Ψ,Φ∈R3} { Σ_{i=1}^N µ[(f i)T v i] ‖p i × Ψ − Φ − σi v i‖2 − FT Φ − TT Ψ },

F = Σ_{i=1}^N f i + F ext,    T = Σ_{i=1}^N p i × f i + T ext.

6.3 Exercises for Lecture 3

6.3.1 Around positive semidefiniteness, eigenvalues and -ordering

6.3.1.1 Criteria for positive semidefiniteness

Exercise 3.2 [Diagonal-dominant matrices] Let A = [aij]_{i,j=1}^m be a symmetric matrix satisfying the
relation

aii ≥ Σ_{j≠i} |aij|,  i = 1, ..., m.

Prove that A is positive semidefinite.

Solution: Let e be an eigenvector of A and λ be the corresponding eigenvalue. We may


assume that the largest, in absolute value, of coordinates of e is equal to 1. Let i be the
index of this coordinate; then

λ = aii + Σ_{j≠i} aij ej ≥ aii − Σ_{j≠i} |aij| ≥ 0.

Thus, all eigenvalues of A are nonnegative, so that A is positive semidefinite.
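The diagonal-dominance criterion is easy to probe numerically (a sketch):

```python
import numpy as np

rng = np.random.default_rng(1)

for _ in range(100):
    m = 6
    A = rng.standard_normal((m, m))
    A = (A + A.T) / 2.0                       # symmetric off-diagonal part
    np.fill_diagonal(A, 0.0)
    # make each diagonal entry dominate its row: a_ii >= sum_{j != i} |a_ij|
    d = np.abs(A).sum(axis=1) + rng.uniform(0.0, 1.0, m)
    A = A + np.diag(d)
    # the exercise: such a matrix is positive semidefinite
    assert np.linalg.eigvalsh(A).min() >= -1e-10
```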

6.3.1.2 Variational description of eigenvalues

Exercise 3.7 Let f∗ be a closed convex function with the domain Dom f∗ ⊂ R+ , and let f be the
Legendre transformation of f∗ . Then for every pair of symmetric matrices X, Y of the same size with the
spectrum of X belonging to Dom f and the spectrum of Y belonging to Dom f∗ one has

 
λ(f (X)) ≥ λ( Y 1/2 X Y 1/2 − f∗ (Y ) ).    (∗)

Solution: By continuity reasons, it suffices to prove (∗) in the case of Y ≻ 0 (why?). Let
m be the size of X, let k ∈ {1, ..., m}, let Ek be the family of linear subspaces of Rm of
codimension k − 1, and let E ∈ Ek be such that

e ∈ E ⇒ eT Xe ≤ λk (X)eT e

(such an E exists by the Variational characterization of eigenvalues as applied to X). Let


also F = Y −1/2 E; the codimension of F , same as the one of E, is k − 1. Finally, let g1 , .., gm
6.3. EXERCISES FOR LECTURE 3 551

be an orthonormal system of eigenvectors of Y , so that Y gj = λj (Y )gj . We have

h ∈ F, hT h = 1 ⇒

hT [Y 1/2 X Y 1/2 − f∗(Y)]h
  = (Y 1/2 h)T X (Y 1/2 h) − Σ_{j=1}^m f∗(λj(Y))(gjT h)²
  ≤ λk(X)(Y 1/2 h)T(Y 1/2 h) − Σ_{j=1}^m f∗(λj(Y))(gjT h)²      [since Y 1/2 h ∈ E]
  = λk(X)(hT Y h) − Σ_{j=1}^m f∗(λj(Y))(gjT h)²
  = λk(X) Σ_{j=1}^m λj(Y)(gjT h)² − Σ_{j=1}^m f∗(λj(Y))(gjT h)²
  = Σ_{j=1}^m [λk(X)λj(Y) − f∗(λj(Y))](gjT h)²
  ≤ Σ_{j=1}^m f(λk(X))(gjT h)²                                   [since f = (f∗)∗]
  = f(λk(X))                                                     [since Σ_j (gjT h)² = hT h = 1]
  = λk(f(X))                                                     [since f(·) is nonincreasing due to Dom f∗ ⊂ R+]

We see that there exists F ∈ Ek such that

max_{h∈F: hT h=1} hT [Y 1/2 X Y 1/2 − f∗ (Y )]h ≤ λk (f (X)).

From Variational characterization of eigenvalues it follows that

λk (Y 1/2 XY 1/2 − f∗ (Y )) ≤ λk (f (X)).

Exercise 3.9,(iv) [Trace inequality] Prove that whenever A, B ∈ Sm , one has

λT (A)λ(B) ≥ Tr(AB).

Solution: Denote λ = λ(A), and let A = V T Diag(λ)V be the spectral decomposition of A.


Setting B̂ = V BV T , note that λ(B̂) = λ(B) and Tr(AB) = Tr(Diag(λ)B̂). Thus, it suffices
to prove the Trace inequality in the particular case when A is a diagonal matrix with the
diagonal λ = λ(A). Denoting by µ the diagonal of B and setting

σ^0 = 0;   σ^k = Σ_{i=1}^k µi,  k = 1, ..., m,

we have

Tr(AB) = Σ_{i=1}^m λi µi
       = Σ_{i=1}^m λi (σ^i − σ^{i−1})
       = −λ1 σ^0 + Σ_{i=1}^{m−1} (λi − λi+1)σ^i + λm σ^m
       = Σ_{i=1}^{m−1} (λi − λi+1)σ^i + λm Tr(B)
       ≤ Σ_{i=1}^{m−1} (λi − λi+1) Σ_{j=1}^i λj(B) + λm Σ_{j=1}^m λj(B)
             [since λi ≥ λi+1 and in view of Exercise 3.9.(iii)]
       = Σ_{i=1}^m λi λi(B)
       = λT(A)λ(B).
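A randomized check of the trace inequality (a sketch):

```python
import numpy as np

rng = np.random.default_rng(2)

for _ in range(200):
    m = 5
    A = rng.standard_normal((m, m)); A = (A + A.T) / 2.0
    B = rng.standard_normal((m, m)); B = (B + B.T) / 2.0
    lamA = np.sort(np.linalg.eigvalsh(A))[::-1]   # eigenvalues, non-ascending
    lamB = np.sort(np.linalg.eigvalsh(B))[::-1]
    # lambda(A)^T lambda(B) >= Tr(AB)
    assert lamA @ lamB >= np.trace(A @ B) - 1e-9
```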

Exercise 3.11.3) Let X be a symmetric n × n matrix partitioned into blocks in a symmetric, w.r.t.
the diagonal, fashion:
X = [ X11    X12    ...  X1m
      X12T   X22    ...  X2m
      ...
      X1mT   X2mT   ···  Xmm ],

so that the blocks Xii are square. Let also g : R → R ∪ {+∞} be convex function on the real line which
is finite on the set of eigenvalues of X, and let Fn ⊂ Sn be the set of all n × n symmetric matrices with
all eigenvalues belonging to the domain of g. Assume that the mapping

Y 7→ g(Y ) : Fn → Sn

is ⪯-convex:

g(λ′Y′ + λ″Y″) ⪯ λ′g(Y′) + λ″g(Y″)   ∀(Y′, Y″ ∈ Fn, λ′, λ″ ≥ 0, λ′ + λ″ = 1).

Prove that
(g(X))ii ⪰ g(Xii),  i = 1, ..., m,

where the partition of g(X) into the blocks (g(X))ij is identical to the partition of X into the blocks Xij .
Solution: Let ε = (ε1, ..., εm) with εj = ±1, and let Uε = Diag{ε1 In1, ε2 In2, ..., εm Inm},
where ni is the row size of Xii. Then the Uε are orthogonal matrices, and one clearly has

D(X) ≡ Diag{X11, X22, ..., Xmm} = 2^{−m} Σ_{ε: εi=±1, i=1,...,m} UεT X Uε.

We have

D(X) = 2^{−m} Σ_{ε: εi=±1} UεT X Uε
⇓   [since g is ⪯-convex]
g(D(X)) ⪯ 2^{−m} Σ_{ε: εi=±1} g(UεT X Uε) = 2^{−m} Σ_{ε: εi=±1} UεT g(X) Uε
⇓
g(D(X)) ⪯ D(g(X)).
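For a concrete ⪯-convex mapping, say g(Y) = Y² (its ⪯-convexity is among the facts of Exercise 3.20, assumed here), the conclusion can be observed directly: for a 2-block partition, (X²)₁₁ − X₁₁² = X₁₂X₁₂T ⪰ 0 (a sketch):

```python
import numpy as np

rng = np.random.default_rng(3)

for _ in range(100):
    n1, n2 = 3, 4
    X = rng.standard_normal((n1 + n2, n1 + n2))
    X = (X + X.T) / 2.0
    X11 = X[:n1, :n1]
    # (1,1) block of g(X) = X^2 versus g of the (1,1) block
    block = (X @ X)[:n1, :n1]
    # (X^2)_11 - X11^2 equals X12 @ X12.T, which is positive semidefinite
    assert np.linalg.eigvalsh(block - X11 @ X11).min() >= -1e-10
```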

6.3.1.3 Cauchy’s inequality for matrices


The standard Cauchy's inequality says that

| Σ_i xi yi | ≤ ( Σ_i xi² )^{1/2} ( Σ_i yi² )^{1/2}    (3.8.1)

for reals xi, yi, i = 1, ..., n; this inequality is exact in the sense that for every collection x1, ..., xn there
exists a collection y1, ..., yn with Σ_i yi² = 1 which makes (3.8.1) an equality.

Exercise 3.18 (i) Prove that whenever Xi, Yi ∈ Mp,q, one has

σ( Σ_i XiT Yi ) ≤ λ( [ Σ_i XiT Xi ]^{1/2} ) ‖λ( Σ_i YiT Yi )‖∞^{1/2},    (∗)

where σ(A) = λ([AAT]^{1/2}) is the vector of singular values of a matrix A arranged in the non-ascending
order. Prove that for every collection X1, ..., Xn ∈ Mp,q there exists a collection Y1, ..., Yn ∈ Mp,q with
Σ_i YiT Yi = Iq which makes (∗) an equality.
(ii) Prove the following “matrix version” of the Cauchy inequality: whenever Xi, Yi ∈ Mp,q, one has

| Tr( Σ_i XiT Yi ) | ≤ Tr( [ Σ_i XiT Xi ]^{1/2} ) ‖λ( Σ_i YiT Yi )‖∞^{1/2},    (∗∗)

and for every collection X1, ..., Xn ∈ Mp,q there exists a collection Y1, ..., Yn ∈ Mp,q with Σ_i YiT Yi = Iq
which makes (∗∗) an equality.
Solution: (i): Denote P = [ Σ_i XiT Xi ]^{1/2}, Q = Σ_i YiT Yi (Xi, Yi ∈ Mp,q). We should prove
that

σ( Σ_i XiT Yi ) ≤ λ(P) ‖λ(Q)‖∞^{1/2},

or, which is the same,

σ( Σ_i YiT Xi ) ≤ λ(P) ‖λ(Q)‖∞^{1/2}.

By the Variational description of singular values, it suffices to prove that for every k =
1, 2, ..., p there exists a subspace Lk ⊂ Rq of codimension k − 1 such that

∀ξ ∈ Lk : ‖ Σ_i YiT Xi ξ ‖2 ≤ ‖ξ‖2 λk(P) ‖λ(Q)‖∞^{1/2}.    (∗)

Let e1, ..., eq be the orthonormal eigenbasis of P: P ei = λi(P)ei, and let Lk be the linear
span of ek, ek+1, ..., eq. For ξ ∈ Lk one has

Σ_i |ηT YiT Xi ξ| ≤ Σ_i ‖Yi η‖2 ‖Xi ξ‖2 ≤ ( Σ_i ‖Yi η‖2² )^{1/2} ( Σ_i ‖Xi ξ‖2² )^{1/2}
  = ( ηT ( Σ_i YiT Yi ) η )^{1/2} ( ξT ( Σ_i XiT Xi ) ξ )^{1/2}
  ≤ ‖λ(Q)‖∞^{1/2} ‖η‖2 · λk(P) ‖ξ‖2,

whence

‖ Σ_i YiT Xi ξ ‖2 = max_{η: ‖η‖2=1} ηT ( Σ_i YiT Xi ξ ) ≤ λk(P) ‖λ(Q)‖∞^{1/2} ‖ξ‖2,

as required in (∗).
To make (∗) an equality, assume that P ≻ 0 (the case of singular P is left to the reader), and
let Yi = Xi P^{−1}. Then

Σ_i YiT Yi = P^{−1} ( Σ_i XiT Xi ) P^{−1} = I

and

Σ_i XiT Yi = ( Σ_i XiT Xi ) P^{−1} = P,

so that (∗) becomes an equality.


(i)⇒(ii): it suffices to prove that if A ∈ Mp,p, then

|Tr(A)| ≤ ‖σ(A)‖1.    (∗∗)

Indeed, we have A = U Λ V, where U, V are orthogonal matrices and Λ is a diagonal matrix
with the diagonal σ(A). Denoting by ei the standard basic orths in Rp, we have

|Tr(A)| = |Tr(UT A U)| = |Tr(Λ(V U))| = | Σ_i eiT Λ(V U)ei | ≤ Σ_i |σi(A) eiT (V U)ei| ≤ Σ_i σi(A),

as required in (∗∗).
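The auxiliary fact (∗∗) — the trace is dominated by the sum of the singular values — admits a quick randomized check (a sketch):

```python
import numpy as np

rng = np.random.default_rng(4)

for _ in range(200):
    A = rng.standard_normal((6, 6))
    sigma = np.linalg.svd(A, compute_uv=False)   # singular values of A
    # |Tr(A)| <= ||sigma(A)||_1
    assert abs(np.trace(A)) <= sigma.sum() + 1e-9
```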
Here is another exercise of the same flavour:

Exercise 3.19 For nonnegative reals a1, ..., am and a real α > 1 one has

( Σ_{i=1}^m ai^α )^{1/α} ≤ Σ_{i=1}^m ai.

Both sides of this inequality make sense when the nonnegative reals ai are replaced with positive
semidefinite n × n matrices Ai. What happens with the inequality in this case?
Consider the following four statements (where α > 1 is a real and m, n > 1):
1)
∀(Ai ∈ Sn+) : ( Σ_{i=1}^m Ai^α )^{1/α} ⪯ Σ_{i=1}^m Ai.
2)
∀(Ai ∈ Sn+) : λmax( ( Σ_{i=1}^m Ai^α )^{1/α} ) ≤ λmax( Σ_{i=1}^m Ai ).
3)
∀(Ai ∈ Sn+) : Tr( ( Σ_{i=1}^m Ai^α )^{1/α} ) ≤ Tr( Σ_{i=1}^m Ai ).
4)
∀(Ai ∈ Sn+) : Det( ( Σ_{i=1}^m Ai^α )^{1/α} ) ≤ Det( Σ_{i=1}^m Ai ).

Among these 4 statements, exactly 2 are true. Identify and prove the true statements.
Solution: The true statements are 2) and 3).
2) is an immediate consequence of the following

Lemma 6.3.1 Let Ai ∈ Sn+, i = 1, ..., m, and let α > 1. Then

λj( ( Σ_{i=1}^m Ai^α )^{1/α} ) ≤ λj^{1/α}( Σ_{i=1}^m Ai ) · λ1^{1−1/α}( Σ_{i=1}^m Ai ).

(here, as always, λj (B) are eigenvalues of a symmetric matrix B arranged in the non-
ascending order).
Proof. Let B = Σ_{i=1}^m Ai^α, A = Σ_{i=1}^m Ai. Since λj(B^{1/α}) = (λj(B))^{1/α}, we should prove that

λj(B) ≤ λj(A) λ1^{α−1}(A).    (6.3.1)

By the Variational description of eigenvalues, it suffices to verify that for every j ≤ n there
exists a linear subspace Lj in Rn of codimension j − 1 such that

ξT Bξ ≤ λj(A) λ1^{α−1}(A)   ∀(ξ ∈ Lj, ‖ξ‖2 = 1).    (6.3.2)

Let e1, ..., en be an orthonormal eigenbasis of A (Aej = λj(A)ej), and let Lj be the linear
span of the vectors ej, ej+1, ..., en. Let ξ ∈ Lj be a unit vector. We have

Σ_i ξT Ai^α ξ = Σ_i ξT Ai ηi,   ηi := Ai^{α−1} ξ
  ≤ Σ_i (ξT Ai ξ)^{1/2} (ηiT Ai ηi)^{1/2}                          [since Ai ⪰ 0]
  ≤ ( Σ_i ξT Ai ξ )^{1/2} ( Σ_i ηiT Ai ηi )^{1/2}                  [Cauchy's inequality]
  = ( ξT A ξ )^{1/2} ( Σ_i ξT Ai^{2α−1} ξ )^{1/2}
  ≤ ( ξT A ξ )^{1/2} ( Σ_i λ1^{2α−2}(Ai) ξT Ai ξ )^{1/2}
  ≤ ( max_i λ1(Ai) )^{α−1} ξT A ξ
  ≤ λ1^{α−1}(A) λj(A)                                              [since ‖ξ‖2 = 1 and ξ ∈ Lj]

as required in (6.3.2). □

3) is less trivial than 2). Let us look at the (nm) × n matrix

Q = [ A1^{α/2}; A2^{α/2}; ...; Am^{α/2} ]

(the blocks Ai^{α/2} stacked on top of each other). Then

QT Q = Σ_i Ai^α ≡ B,

so that Tr([QT Q]^{1/α}) = Tr(B^{1/α}). Since the nonzero eigenvalues of QT Q are exactly the same as
the nonzero eigenvalues of X = QQT, we conclude that

Tr(B^{1/α}) = Tr([QQT]^{1/α}) = Tr(X^{1/α}),

X = [ A1^α               A1^{α/2} A2^{α/2}   ···  A1^{α/2} Am^{α/2}
      A2^{α/2} A1^{α/2}  A2^α                ···  A2^{α/2} Am^{α/2}
      ...
      Am^{α/2} A1^{α/2}  Am^{α/2} A2^{α/2}   ···  Am^α              ].

Applying the result of Exercise 3.11.1) with F(Y) = −Tr(Y^{1/α}), we get

[Tr(B^{1/α}) =] Tr(X^{1/α}) = −F(X) ≤ −F( Diag{A1^α, ..., Am^α} ) = Σ_{i=1}^m Tr(Ai),

as required.
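Statement 3) can be probed numerically, here with α = 2 (a sketch; the fractional power of a PSD matrix is computed via an eigendecomposition):

```python
import numpy as np

rng = np.random.default_rng(5)

def psd_power(S, p):
    """S^p for a symmetric PSD matrix S, via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.clip(w, 0.0, None) ** p) @ V.T

alpha = 2.0
for _ in range(100):
    As = []
    for _i in range(3):                    # m = 3 random PSD matrices in S^4_+
        M = rng.standard_normal((4, 4))
        As.append(M @ M.T)
    # Tr((sum A_i^alpha)^{1/alpha}) <= Tr(sum A_i)
    lhs = np.trace(psd_power(sum(psd_power(A, alpha) for A in As), 1.0 / alpha))
    rhs = np.trace(sum(As))
    assert lhs <= rhs + 1e-8
```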

6.3.2 ⪯-convexity of some matrix-valued functions

Exercise 3.21.4-6) 4): Prove that the function

F(x) = x^{1/2} : Sm+ → Sm+

is ⪰-concave and ⪰-monotone.

Solution: Since the function is continuous on its domain, it suffices to verify that it is
⪰-monotone and ⪰-concave on int Sm+, where the function is smooth.
Differentiating the identity
F(x)F(x) = x    (∗)
in a direction h and setting F(x) = y, DF(x)[h] = dy, we get

y dy + dy y = h.

Since y ≻ 0, this Lyapunov equation admits an explicit solution:

dy = ∫_0^∞ exp{−ty} h exp{−ty} dt,

and we see that dy ⪰ 0 whenever h ⪰ 0; applying Exercise 3.20.4), we conclude that F is
⪰-monotone.
Differentiating (∗) twice in a direction h and denoting d²y = D²F(x)[h, h], we get

y d²y + d²y y + 2(dy)² = 0,

whence, same as above,

d²y = −2 ∫_0^∞ exp{−ty} (dy)² exp{−ty} dt ⪯ 0;

applying Exercise 3.20.3), we conclude that F is ⪰-concave.
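⪰-monotonicity of the matrix square root admits a quick randomized check (a sketch; the square root is computed via an eigendecomposition):

```python
import numpy as np

rng = np.random.default_rng(6)

def sqrtm_psd(S):
    """Symmetric PSD square root via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

for _ in range(100):
    m = 5
    M = rng.standard_normal((m, m)); X = M @ M.T            # X >= 0
    H = rng.standard_normal((m, m)); Xp = X + H @ H.T       # X' = X + (psd) >= X
    D = sqrtm_psd(Xp) - sqrtm_psd(X)
    # x <= x'  ==>  x^{1/2} <= x'^{1/2}
    assert np.linalg.eigvalsh(D).min() >= -1e-8 * max(1.0, np.linalg.norm(D))
```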

5): Prove that the function

F(x) = ln x : int Sm+ → Sm

is ⪰-monotone and ⪰-concave.

Solution: The function x^{1/2} : Sm+ → Sm+ is ⪰-monotone and ⪰-concave by Exercise 3.21.4).
Applying Exercise 3.20.6), we conclude that so are the functions x^{1/2^k} for all positive integer
k. It remains to note that

ln x = lim_{k→∞} 2^k [ x^{1/2^k} − I ]

and to use Exercise 3.20.7).

6): Prove that the function

F(x) = (A x^{−1} AT)^{−1} : int Sn+ → Sm,

with matrix A of rank m, is ⪰-concave and ⪰-monotone.

Solution: Since A is of rank m, the function F(x) clearly is well-defined and ≻ 0 when x ≻ 0.
To prove that F is ⪰-concave, it suffices to verify that the set

{(x, Y) | x, Y ≻ 0, Y ⪯ (A x^{−1} AT)^{−1}}

is convex, which is nearly evident (using the Schur Complement Lemma twice):

{(x, Y) | x, Y ≻ 0, Y ⪯ (A x^{−1} AT)^{−1}}
  = {(x, Y) | x, Y ≻ 0, Y^{−1} ⪰ A x^{−1} AT}
  = {(x, Y) | x, Y ≻ 0, [ Y^{−1}  A
                          AT      x ] ⪰ 0}
  = {(x, Y) | x, Y ≻ 0, x ⪰ AT Y A}.

To prove that F is ⪰-monotone, note that if 0 ≺ x ⪯ x′, then 0 ≺ (x′)^{−1} ⪯ x^{−1}, whence 0 ≺
A(x′)^{−1}AT ⪯ A x^{−1} AT, whence, in turn, F(x) = (A x^{−1} AT)^{−1} ⪯ (A(x′)^{−1}AT)^{−1} = F(x′).
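Both conclusions admit randomized spot-checks; here is ⪰-monotonicity of F (a sketch):

```python
import numpy as np

rng = np.random.default_rng(7)

def F(x, A):
    # F(x) = (A x^{-1} A^T)^{-1}
    return np.linalg.inv(A @ np.linalg.inv(x) @ A.T)

for _ in range(100):
    n, m = 5, 3
    A = rng.standard_normal((m, n))                          # rank m with probability 1
    M = rng.standard_normal((n, n)); x = M @ M.T + np.eye(n) # x > 0
    H = rng.standard_normal((n, n)); xp = x + H @ H.T        # x' >= x
    D = F(xp, A) - F(x, A)
    # 0 < x <= x'  ==>  F(x) <= F(x')
    assert np.linalg.eigvalsh(D).min() >= -1e-8 * max(1.0, np.linalg.norm(D))
```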

6.3.3 Around Lovasz capacity number


Exercise 3.28 Let Γ be an n-node graph, and σ(Γ) be the optimal value in the problem

min_{λ,µ,ν} { λ : [ λ                −(1/2)(e + µ)T
                    −(1/2)(e + µ)    A(µ, ν)        ] ⪰ 0 },    (Sh)

where e = (1, ..., 1)T ∈ Rn, A(µ, ν) = Diag(µ) + Z(ν), and Z(ν) is the matrix as follows:

• the dimension of ν is equal to the number of arcs in Γ, and the coordinates of ν are indexed by
these arcs;
• the diagonal entries of Z, same as the off-diagonal entries of Z corresponding to “empty” cells ij
(i.e., with i and j non-adjacent) are zeros;
• the off-diagonal entries of Z in a pair of symmetric “non-empty” cells ij, ji are equal to the
coordinate of ν indexed by the corresponding arc.
Prove that σ(Γ) is nothing but the Lovasz capacity Θ(Γ) of the graph.
Solution. In view of (3.8.2) all we need is to prove that σ(Γ) ≥ Θ(Γ).
Let (λ, µ, ν) be a feasible solution to (Sh); we should prove that there exists x such that
(λ, x) is a feasible solution to the Lovasz problem
min_{λ,x} { λ : λIn − L(x) ⪰ 0 }    (L)

Setting y = (1/2)(e + µ), we see that

[ λ              −(1/2)(e + µ)T  ]   [ λ     −yT                   ]
[ −(1/2)(e + µ)  Z(ν) + Diag(µ)  ] = [ −y    Z(ν) + 2Diag(y) − In ] ⪰ 0.

The diagonal entries of Z = Z(ν) are zero, while the diagonal entries of Z + 2Diag(y) − In
must be nonnegative; we conclude that y > 0. Setting Y = Diag(y), we have

[ λ    −yT          ]            [ 1         ] [ λ    −yT          ] [ 1         ]
[ −y   Z + 2Y − In  ] ⪰ 0   ⇒   [    Y^{−1} ] [ −y   Z + 2Y − In  ] [    Y^{−1} ] ⪰ 0,

i.e.,

[ λ    −eT                              ]
[ −e   Y^{−1}ZY^{−1} + 2Y^{−1} − Y^{−2} ] ⪰ 0,

whence by the Schur Complement Lemma

λ[ Y^{−1}ZY^{−1} + 2Y^{−1} − Y^{−2} ] − eeT ⪰ 0,

or, which is the same,

λIn − eeT − λY^{−1}ZY^{−1} ⪰ λ( In − 2Y^{−1} + Y^{−2} ) = λ( In − Y^{−1} )².

We see that λIn − [ eeT + λY^{−1}ZY^{−1} ] ⪰ 0; it remains to note that the matrix in the brackets
clearly is L(x) for certain x.

6.3.4 Around operator norms


Exercise 3.35.
1) Let T ⊂ Rn be a convex compact set with a nonempty interior, and

T = cl{[x; t] ∈ Rn × R : t > 0, t^{−1}x ∈ T}

be the closed conic hull of T. Prove that T is a regular cone such that

T = {x : [x; 1] ∈ T},

and the cone dual to T is

T∗ = {[g; s] : s ≥ φT(−g)},

where

φT(y) = max_{x∈T} xT y

is the support function of T.



Solution: The only not completely evident claim is the one about the dual cone. We have

[y; s] ∈ T∗ ⇔ yT x + st ≥ 0 ∀[x; t] ∈ T ⇔ yT x + s ≥ 0 ∀x ∈ T ⇔ −φT(−y) + s = min_{x∈T} yT x + s ≥ 0.   □

2) Let T be a convex compact subset of Rm+ with int T ≠ ∅, and let

X[T] = {X ∈ Sm+ : Dg(X) ∈ T},

where, as always, Dg(X) is the vector of diagonal entries of matrix X. Prove that X[T] is a convex
compact subset of Sm+, the closed conic hull of X[T] is the regular cone

X[T] = {(X, t) ∈ Sm+ × R : [Dg(X); t] ∈ T},

where T is the closed conic hull of T, and the cone dual to X[T] is

X∗[T] = {(Y, r) ∈ Sm × R : ∃h ∈ Rm : Diag{h} + Y ⪰ 0, r ≥ φT(h)},

with the same φT(·) as in 1).


Solution: all claims, except for the description of X∗[T], are evident. Now, the cone dual
to X[T] lives in the space Sm × R, and (Y, r) belongs to this dual cone if and only if
Tr(Y X) + r ≥ 0 whenever X ⪰ 0 and Dg(X) ∈ T, that is, if and only if

Opt(Y) := min_X {Tr(Y X) : X ⪰ 0, Dg(X) ∈ T} ≥ −r.

Now, by 1), Opt(Y) is the optimal value in the conic problem

min_X {Tr(Y X) : X ⪰ 0, [Dg(X); 1] ∈ T}    (∗)

and under the assumptions of 2), the feasible set of this problem is compact and with a nonempty
interior, implying that the optimal value in question is equal to the one of the dual problem.
By 1), the dual problem is as follows: we equip the constraint X ⪰ 0 with a Lagrange multiplier
Z ⪰ 0, and the constraint [Dg(X); 1] ∈ T with a Lagrange multiplier [g; s] ∈ T∗, that
is, with φT(−g) ≤ s. Aggregating the constraints of (∗) with these Lagrange multipliers, we
get the inequality

Tr(ZX) + Σ_i Xii gi ≥ −s;

to get the dual problem, we impose on Z and [g; s], aside of the above conic constraints,
the requirement for the left hand side in the aggregated inequality to be equal to Tr(Y X)
identically in X ∈ Sm, and maximize under these restrictions the right hand side −s of the
aggregated inequality. Thus,

Opt(Y) = max_{Z,[g;s]} {−s : Y = Z + Diag{g}, Z ⪰ 0, s ≥ φT(−g)} = − min_g {φT(−g) : Y ⪰ Diag{g}},

and

X∗[T] = {(Y, r) ∈ Sm × R : Opt(Y) + r ≥ 0} = {(Y, r) : ∃h : Diag{h} + Y ⪰ 0, r ≥ φT(h)},

as claimed.   □

3) Under the premise and in the notation of Theorem 3.4.3, let T possess a nonempty interior. Prove
that

s_∗(A) = − min_u {φT(−u) : A ⪰ Diag{u}}    (!.a)
s^∗(A) = min_w {φT(w) : Diag{w} ⪰ A}       (!.b)

Solution: We have by results of 1), 2):

s_∗(A) := min_X {Tr(AX) : X ⪰ 0, Dg(X) ∈ T} = min_X {Tr(AX) : [X; 1] ∈ X[T]}
        = max_r {−r : (A, r) ∈ X∗[T]}                   [conic duality]
        = − min_u {φT(−u) : A ⪰ Diag{u}}                [by 2)]

and

s^∗(A) := max_X {Tr(AX) : X ⪰ 0, Dg(X) ∈ T} = max_X {Tr(AX) : [X; 1] ∈ X[T]}
        = min_r {r : (−A, −r) ∈ X∗[T]}                  [conic duality]
        = min_w {φT(w) : Diag{w} ⪰ A}                   [by 2)]   □

 
4) In the situation of 3), let the matrix A from 3) be of the special structure

A = [ 0    B
      BT   0 ]

with a p × q matrix B. Prove that in this case
a. One has s^∗(A) = −s_∗(A) and m^∗(A) = −m_∗(A);
b. One has m^∗(A) = max_{x∈T} |xT Ax| ≤ s^∗(A) = min_h {φT(h) : Diag{h} ⪰ A} ≤ (π/(4−π)) m^∗(A);
c. Derive from Theorem 3.4.3 the following “partial refinement” of Theorem 3.4.6:
Theorem 3.8.1 Consider a special case of the situation considered in Theorem 3.4.6, specifically,
the one where in (3.4.29) K = dim z, zT Rk z = zk², k ≤ K, and similarly L = dim w
and wT S` w = w`², ` ≤ L. Then the efficiently computable upper bound

Φ̄π→θ(C) = min_{λ≥0, µ≥0} { φR(λ) + φS(µ) : [ Diag{µ}          (1/2) QT C P
                                              (1/2) PT CT Q    Diag{λ}     ] ⪰ 0 }    (3.8.8)

(cf. (3.4.30)) on the operator norm Φπ→θ(C) = max_{x: π(x)≤1} θ(Cx) of a matrix C is tight within
an absolute constant factor:

Φπ→θ(C) ≤ Φ̄π→θ(C) ≤ (π/(4−π)) Φπ→θ(C),    (3.8.9)

cf. (3.4.31).
Solution: In the situation of Theorem 3.4.3, alternating in whatever fashion the signs of entries
of x ∈ T keeps x in T, so that when x = [x1; ...; xn] ∈ T, the vector x′ obtained from x by
multiplying the first p entries by −1 belongs to T as well. Since xT Ax = −[x′]T Ax′, we
conclude that the set of values of xT Ax at points of T is symmetric w.r.t. the origin, so
that m_∗(A) = −m^∗(A). Next, with our A, the matrices Diag{z} + A and Diag{z} − A
simultaneously are or are not positive semidefinite. It follows that w is feasible for (!.a) if
and only if u = −w is feasible for (!.b), implying that s_∗(A) = −s^∗(A), as claimed in 4.a.
In 4.b, the first equality is readily given by the definition of m^∗ combined with the second
equality in 4.a, and the second equality is (!.b), while the inequalities are exactly what is
stated by Theorem 3.4.3 in the situation of 4.a.
To justify 4.c, note that under the premise of Theorem 3.8.1 the operator norm Φπ→θ(C) of
C is nothing but the quantity

max_{[w;z]} { [w; z]T [ 0    B
                        BT   0 ] [w; z] : ∃r ∈ R, s ∈ S : w`² ≤ s`, ` ≤ L, zk² ≤ rk, k ≤ K },   B = (1/2) QT C P.

Setting T = S × R, we find ourselves under the premise of Theorem 3.4.3 with A = [ 0    B
                                                                                   BT   0 ],
and Φπ→θ(C) is nothing but what in this Theorem was called m^∗(A). From comparing (3.8.8.a)
with (!.a), we conclude that Φ̄π→θ(C) is nothing but s^∗(A). With these observations, (3.8.9)
becomes nothing but the already proved 4.b.   □

6.3.5 Around S-Lemma


For notation in what follows, see section 3.8.7 in Lecture 3.

6.3.5.1 A straightforward proof of the standard S-Lemma


Exercise 3.49.3) Given data A, B satisfying the premise of (SL.B), define the sets

Qx = {λ ≥ 0 : xT Bx ≥ λxT Ax}.

Prove that every two sets Qx0 , Qx00 have a point in common.
Solution: The case when x0 , x00 are collinear is trivial. Assuming that x0 , x00 are linearly
independent, consider the quadratic forms on the 2D plane:

α(z) = (sx0 + tx00 )T A(sx0 + tx00 ), β(z) = (sx0 + tx00 )T B(sx0 + tx00 ), z = (s, t)T .

By their origin, we have


α(z) ≥ 0, z 6= 0 ⇒ β(z) > 0. (!)
All we need is to prove that there exists λ ≥ 0 such that β(z) ≥ λα(z) for all z ∈ R2 ; such
a λ clearly is a common point of Qx0 and Qx00 .
As it is well-known from Linear Algebra, we can choose a coordinate system in R2 in such
a way that the matrix α of the form α(·) in these coordinates, let them be called u, v, is
diagonal:

α = [ a   0
      0   b ];

let also

β = [ p   r
      r   q ]
be the matrix of the form β(·) in the coordinates u, v. Let us consider all possible combina-
tions of signs of a, b:

• a ≥ 0, b ≥ 0. In this case, α(·) is nonnegative everywhere, whence by (!) β(·) ≥ 0.


Consequently, β(·) ≥ λα(·) with λ = 0.
• a < 0, b < 0. In this case the matrix of the quadratic form β(·) − λα(·) is

  [ p + λ|a|    r
    r           q + λ|b| ].

This matrix clearly is positive definite for all large enough positive λ, so that here again
β(·) ≥ λα(·) for properly chosen nonnegative λ.
• a = 0, b < 0. In this case α(1, 0) = 0 (the coordinates in question are u, v), so that by
(!) p > 0. The matrix of the form β(·) − λα(·) is

  [ p    r
    r    q + λ|b| ],

and since p > 0 and |b| > 0, this matrix is positive definite for all large enough positive
λ. Thus, here again β(·) ≥ λα(·) for properly chosen λ ≥ 0.
• a < 0, b = 0. This case is completely similar to the previous one.

3) is proved.

6.3.5.2 S-Lemma with a multi-inequality premise


Exercise 3.50 Demonstrate by example that if xT Ax, xT Bx, xT Cx are three quadratic forms with
symmetric matrices such that

∃x̄ : x̄T Ax̄ > 0, x̄T Bx̄ > 0;
xT Ax ≥ 0, xT Bx ≥ 0 ⇒ xT Cx ≥ 0,    (6.3.3)

then not necessarily there exist λ, µ ≥ 0 such that C ⪰ λA + µB.

A solution:

A = [ λ²   0       B = [ µν           0.5(µ − ν)       C = [ λµ           0.5(µ − λ)
      0    −1 ],         0.5(µ − ν)   −1          ],         0.5(µ − λ)   −1         ]

With a proper setup, e.g.,

λ = 1.100, µ = 0.818, ν = 1.344,

the above matrices satisfy both (3.8.18) and (3.8.19).

Exercise 3.55 Let n ≥ 3,

f(x) = θ1 x1² − θ2 x2² + Σ_{i=3}^n θi xi² : Rn → R,
θ1 ≥ θ2 ≥ 0, θ1 + θ2 > 0, −θ2 ≤ θi ≤ θ1 ∀i ≥ 3;
Y = {x : ‖x‖2 = 1, f(x) = 0}.
1) Let x ∈ Y . Prove that x can be linked in Y by a continuous curve with a point x0 such that the
coordinates of x0 with indices 3, 4, ..., n vanish.
Proof: It suffices to build a continuous curve γ(t) ∈ Y, 0 ≤ t ≤ 1, of the form

γ(t) = (x1(t), x2(t), tx3, tx4, ..., txn)T,   0 ≤ t ≤ 1,

which passes through x as t = 1. Setting s = Σ_{i=3}^n θi xi² = θ2 x2² − θ1 x1² and
g² = 1 − x1² − x2², we should verify that one can define continuous functions x1(t), x2(t) of
t ∈ [0, 1] satisfying the system of equations

θ1 x1²(t) − θ2 x2²(t) + t²s = 0
x1²(t) + x2²(t) + t²g² = 1

along with the “boundary conditions”

x1(1) = x1;   x2(1) = x2.

Substituting v1(t) = x1²(t), v2(t) = x2²(t) and taking into account that θ1, θ2 ≥ 0, θ1 + θ2 > 0,
we get

(θ1 + θ2)v1(t) = θ2(1 − t²g²) − t²s
               = θ2(1 − t²g²) − t²(θ2 x2² − θ1 x1²)
               = θ2(1 − t²[g² + x2²]) + t²θ1 x1²
               = θ2(1 − t²[1 − x1²]) + t²θ1 x1²;
(θ1 + θ2)v2(t) = θ1(1 − t²g²) + t²s
               = θ1(1 − t²g²) + t²(θ2 x2² − θ1 x1²)
               = θ1(1 − t²[g² + x1²]) + t²θ2 x2²
               = θ1(1 − t²[1 − x2²]) + t²θ2 x2².

We see that v1(t), v2(t) are continuous, nonnegative and equal x1², x2², respectively, as t = 1.
Taking x1(t) = κ1 v1^{1/2}(t), x2(t) = κ2 v2^{1/2}(t) with properly chosen κi = ±1, i = 1, 2, we get the
required curve γ(·).
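The algebra above can be spot-checked: pick a point of Y (built here from a hypothetical choice of θ and of small tail coordinates, both my own test data), form v1(t), v2(t) by the displayed formulas, and verify that γ(t) stays on the unit sphere and in the zero set of f (a sketch):

```python
import numpy as np

rng = np.random.default_rng(8)

n = 5
theta = np.array([1.3, 0.9, 0.7, -0.5, 1.1])   # hypothetical: theta1 >= theta2 >= 0, -theta2 <= theta_i <= theta1

def f(y):
    return theta[0] * y[0]**2 - theta[1] * y[1]**2 + theta[2:] @ y[2:]**2

# build a point of Y = {||x||_2 = 1, f(x) = 0}: fix small tail coordinates, solve for x1, x2
x_tail = rng.standard_normal(n - 2)
x_tail *= 0.3 / np.linalg.norm(x_tail)          # g^2 = ||x_tail||^2 = 0.09, small enough
s = theta[2:] @ x_tail**2
g2 = x_tail @ x_tail
v1 = (theta[1] * (1.0 - g2) - s) / (theta[0] + theta[1])
v2 = (theta[0] * (1.0 - g2) + s) / (theta[0] + theta[1])
assert v1 >= 0 and v2 >= 0
x = np.concatenate([[np.sqrt(v1), np.sqrt(v2)], x_tail])
assert abs(np.linalg.norm(x) - 1.0) < 1e-9 and abs(f(x)) < 1e-9

# the curve gamma(t) of the proof stays in Y for all t in [0, 1]
for t in np.linspace(0.0, 1.0, 21):
    w1 = (theta[1] * (1.0 - t**2 * (1.0 - x[0]**2)) + t**2 * theta[0] * x[0]**2) / (theta[0] + theta[1])
    w2 = (theta[0] * (1.0 - t**2 * (1.0 - x[1]**2)) + t**2 * theta[1] * x[1]**2) / (theta[0] + theta[1])
    g = np.concatenate([[np.sqrt(w1), np.sqrt(w2)], t * x_tail])
    assert abs(np.linalg.norm(g) - 1.0) < 1e-9 and abs(f(g)) < 1e-9
```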

2) Prove that there exists a point z + = (z1 , z2 , z3 , 0, 0, ..., 0)T ∈ Y such that
(i) z1 z2 = 0;
(ii) given a point u = (u1 , u2 , 0, 0, ..., 0)T ∈ Y , you can either (ii.1) link u by continuous curves in Y
both to z + and to z̄ + = (z1 , z2 , −z3 , 0, 0, ..., 0)T ∈ Y , or (ii.2) link u both to z − = (−z1 , −z2 , z3 , 0, 0, ..., 0)T
and z̄ − = (−z1 , −z2 , −z3 , 0, 0, ..., 0)T (note that z + = −z̄ − , z̄ + = −z − ).
Proof: Recall that θ1 ≥ θ2 ≥ 0 and θ1 + θ2 > 0. Consider two possible cases: θ2 = 0 and
θ2 > 0.
Case of θ2 = 0: In this case it suffices to set z + = (0, 1, 0, 0, ..., 0)T . Indeed, the point clearly
belongs to Y and satisfies (i). Further, if u ∈ Y is such that u3 = ... = un = 0, then from
θ2 = 0 and the definition of Y it immediately follows that u1 = 0, u2 = ±1. Thus, either u
coincides with z + = z̄ + , or u coincides with z − = z̄ − ; in both cases, (ii) takes place.
Case of θ2 > 0: Let us set

τ = min[ θ1/(θ1 − θ3) ; θ2/(θ2 + θ3) ]
    [note that τ > 0 due to θ1, θ2 > 0 and −θ2 ≤ θ3 ≤ θ1];
z1 = ( (θ1 + θ2)^{−1}(θ2 − τ[θ2 + θ3]) )^{1/2};
z2 = ( (θ1 + θ2)^{−1}(θ1 − τ[θ1 − θ3]) )^{1/2};
z3 = τ^{1/2};
z+ = (z1, z2, z3, 0, 0, ..., 0)T.
It is immediately seen that z+ is well-defined and satisfies (i). Now let us verify that z+
satisfies (ii) as well. Let u = (u1, u2, 0, 0, ..., 0)T ∈ Y, and let the vector-function z(t),
0 ≤ t ≤ τ^{1/2}, be defined by the relations

z1(t) = ( (θ1 + θ2)^{−1}(θ2 − t²[θ2 + θ3]) )^{1/2};
z2(t) = ( (θ1 + θ2)^{−1}(θ1 − t²[θ1 − θ3]) )^{1/2};
z3(t) = t;
zi(t) = 0, i = 4, ..., n.

It is immediately seen that z(·) is well-defined, continuous, takes its values in Y and that
z(τ^{1/2}) = z+. Now, z(0) is the vector

ū ≡ ( (θ2/(θ1 + θ2))^{1/2}, (θ1/(θ1 + θ2))^{1/2}, 0, 0, ..., 0 )T;
from u ∈ Y , u3 = ... = un = 0 it immediately follows that |ui | = |ūi |, i = 1, 2. Now consider
four possible cases:
(++) u1 = ū1 , u2 = ū2
(−−) u1 = −ū1 , u2 = −ū2
(+−) u1 = ū1 , u2 = −ū2
(−+) u1 = −ū1 , u2 = ū2
• In the case of (++) the continuous curve γ(t) ≡ z(t) ∈ Y, 0 ≤ t ≤ √τ, links u with
  z+, while the continuous curve γ̄(t) = (z1(t), z2(t), −z3(t), 0, 0, ..., 0)^T ∈ Y links u with
  z̄+, so that (ii.1) takes place;

• In the case of (−−) the continuous curve γ(t) ≡ −z(t) ∈ Y, 0 ≤ t ≤ √τ, links u with
  z̄− = −z+, while the continuous curve γ̄(t) = (−z1(t), −z2(t), z3(t), 0, 0, ..., 0)^T ∈ Y
  links u with z− = −z̄+, so that (ii.2) takes place;

• In the case of (+−) the continuous curves γ(t) ≡ (z1(t), −z2(t), z3(t), 0, 0, ..., 0)^T ∈ Y
  and γ̄(t) = (z1(t), −z2(t), −z3(t), 0, 0, ..., 0)^T, 0 ≤ t ≤ √τ, link u either with both points
  of the pair (z+, z̄+), or with both points of the pair (z−, z̄−), depending on whether
  z1 = 0 or z1 ≠ 0, z2 = 0 (note that at least one of these possibilities does take place
  due to z1z2 = 0, see (i)). Thus, in the case in question at least (ii.1) or (ii.2) does hold.
564 SOLUTIONS TO SELECTED EXERCISES
• In the case of (−+) the continuous curves γ(t) ≡ (−z1(t), z2(t), z3(t), 0, 0, ..., 0)^T ∈ Y
  and γ̄(t) = (−z1(t), z2(t), −z3(t), 0, 0, ..., 0)^T, 0 ≤ t ≤ √τ, link u either with both points
  of the pair (z−, z̄−), or with both points of the pair (z+, z̄+), depending on whether
  z1 = 0 or z1 ≠ 0, z2 = 0, so that here again (ii.1) or (ii.2) does hold.
3) Conclude from 1) and 2) that Y satisfies the premise of Proposition 3.8.1 and thus complete the proof
of Proposition 3.8.3.
Proof: Let z + be given by 2), and let x, x0 ∈ Y . By 1), we can link in Y the point x with a
point v = (v1 , v2 , 0, 0, ..., 0)T ∈ Y , and the point x0 with a point v 0 = (v10 , v20 , 0, 0, ..., 0)T ∈ Y .
If for both u = v and u = v 0 (ii.1) holds, then we can link both v, v 0 by continuous curves
with z + ; thus, both x, x0 can be linked in Y with z + as well. We see that both x, x0 can
be linked in Y with the set {z + ; −z + }, as required in the premise of Proposition 3.8.1. The
same conclusion is valid if for both u = v, u = v 0 (ii.2) holds; here both x and x0 can be
linked in Y with z − , and the premise of Proposition 3.8.1 holds true.
Now consider the case when for one of the points u = v, u = v 0 , say, for u = v, (ii.1) holds,
while for the other one (ii.2) takes place. Here we can link in Y the point v (and thus – the
point x) with the point z + , and we can link in Y the point v 0 (and thus – the point x0 ) with
the point z̄ − = −z + . Thus, we can link in Y both x and x0 with the set {z + ; −z + }, and the
premise of Proposition 3.8.1 holds true. Thus, Y satisfies the premise of Proposition 3.8.1.
Exercise 3.56.2) Let Ai, i = 1, 2, 3, satisfy the premise of (SL.D). Assuming A1 = I, prove that the
set

    H1 = {(v1, v2)^T ∈ R² | ∃x ∈ S^{n−1} : v1 = f2(x), v2 = f3(x)}

is convex.
Proof: Let ℓ = {(v1, v2)^T ∈ R² | pv1 + qv2 + r = 0} be a line in the plane; we should prove
that W = H1 ∩ ℓ is a convex, or, which is the same, connected set (Exercise 3.52). There is
nothing to prove when W = ∅. Assuming W ≠ ∅, let us set

    f(x) = rf1(x) + pf2(x) + qf3(x).

It is immediately seen that f is a homogeneous quadratic form on R^n, and that

    W = F(Y),

where

    F(x) = (f2(x), f3(x))^T,
    Y ≡ {x ∈ S^{n−1} : f(x) = 0} = −Y.

By Proposition 3.8.3, the image Z of the set Y under the canonical projection S^{n−1} → P^{n−1}
is connected; since F is even, W = F(Y) is the same as G(Z) for a certain continuous mapping
G : Z → R² (Proposition 3.8.2). Thus, W is connected by (C.2).
We have proved that H1 is convex; the compactness of H1 is evident.
Exercise 3.57 Demonstrate by example that (SL.C) does not necessarily remain valid when skipping the
assumption "n ≥ 3" in the premise.
A solution: The linear combination

    A + 0.005B − 1.15C

of the matrices built in the solution to Exercise 3.50 is positive definite.
Exercise 3.58.1) Let A, B, C be three 2 × 2 symmetric matrices such that the system of inequalities
x^TAx ≥ 0, x^TBx ≥ 0 is strictly feasible and the inequality x^TCx ≥ 0 is a consequence of the system.
1) Assume that there exists a nonsingular matrix Q such that both the matrices QAQ^T and QBQ^T
are diagonal. Prove that then there exist λ, µ ≥ 0 such that C ⪰ λA + µB.
Solution: W.l.o.g. we may assume that the matrices A, B themselves are diagonal. The
case when at least one of the matrices A, B is positive semidefinite is trivially reducible to
the usual S-Lemma, so that we can assume that both matrices A and B are not positive
semidefinite. Since the system of inequalities xT Ax > 0, xT Bx > 0 is feasible, we conclude
that the determinants of the matrices A, B are negative. Applying appropriate dilatations
of the coordinate axes, swapping, if necessary, the coordinates and multiplying A, B by
appropriate positive constants, we may reduce the situation to the one where x^TAx = x1² − x2²
and either
(a) x^TBx = θ²x1² − x2²,
or
(b) x^TBx = −θ²x1² + x2², with certain θ > 0.
Case of (a): Here the situation is immediately reducible to the one considered in S-Lemma.
Indeed, in this case one of the inequalities in the system xT Ax ≥ 0, xT Bx ≥ 0 is a conse-
quence of the other inequality of the system; thus, a consequence xT Cx of the system is in
fact a consequence of a properly chosen single inequality from the system; thus, by S-Lemma
either C ⪰ λA, or C ⪰ λB, with certain λ ≥ 0.
Case of (b): Observe, first, that θ < 1, since otherwise the system xT Ax ≥ 0, xT Bx ≥ 0 is
not strictly feasible. When θ < 1, the solution set of our system is the union of the following
four angles D++ , D+− , D−− , D−+ :
    D++ = {x | x1 ≥ 0, θx1 ≤ x2 ≤ x1};
    D+− = reflection of D++ w.r.t. the x1-axis;
    D−− = −D++;
    D−+ = reflection of D++ w.r.t. the x2-axis.
Now, the case when C is positive semidefinite is trivial – here C ⪰ 0·A + 0·B. Thus, we may
assume that one eigenvalue of C is negative; the other should be nonnegative, since otherwise
x^TCx < 0 whenever x ≠ 0, while we know that x^TCx ≥ 0 on the (nonempty!) solution set of
the system x^TAx > 0, x^TBx > 0. Since one eigenvalue of C is negative, and the other one
is nonnegative, the set X_C = {x | x^TCx ≥ 0} is the union of a certain angle D (which can
reduce to a ray) and the angle −D. Since the inequality x^TCx ≥ 0 is a consequence of the
system x^TAx ≥ 0, x^TBx ≥ 0, we have D ∪ (−D) ⊃ D++ ∪ D+− ∪ D−− ∪ D−+. Geometry
says that the latter inclusion can be valid only when D contains two "neighbouring", w.r.t.
the cyclic order, of the angles D++, D+−, D−−, D−+; but in this case the inequality x^TCx ≥ 0
is a consequence of an appropriate single inequality from the pair x^TAx ≥ 0, x^TBx ≥ 0.
Namely, when D ⊃ D++ ∪ D+− or D ⊃ D−− ∪ D−+, X_C contains the solution set of the
inequality x^TBx ≥ 0, while in the cases of D ⊃ D+− ∪ D−− and of D ⊃ D−+ ∪ D++, X_C
contains the solution set of the inequality x^TAx ≥ 0. Applying the usual S-Lemma, we
conclude that in the case of (b) there exist λ, µ ≥ 0 (with one of these coefficients equal to
0) such that C ⪰ λA + µB.
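The "union of four angles" description used in case (b) is easy to confirm numerically for a concrete θ; the sketch below (our own illustration, with θ = 0.5 and the normalized forms x^TAx = x1² − x2², x^TBx = −θ²x1² + x2² from case (b)) checks that the solution set of the system coincides with {θ|x1| ≤ |x2| ≤ |x1|}:

```python
import random

def in_solution_set(x1, x2, th):
    # x'Ax = x1^2 - x2^2 >= 0  and  x'Bx = -th^2*x1^2 + x2^2 >= 0
    return (x1 * x1 - x2 * x2 >= 0.0) and (-th * th * x1 * x1 + x2 * x2 >= 0.0)

def in_four_angles(x1, x2, th):
    # union of D++, D+-, D--, D-+ for 0 < th < 1
    return th * abs(x1) <= abs(x2) <= abs(x1)

th = 0.5
random.seed(1)
for _ in range(10000):
    x1, x2 = random.uniform(-1, 1), random.uniform(-1, 1)
    assert in_solution_set(x1, x2, th) == in_four_angles(x1, x2, th)
```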
6.4 Exercises for Lecture 4

6.4.1 Around canonical barriers

Exercise 4.2 Prove Proposition 4.3.2.
Solution: As explained in the Hint, it suffices to consider the case of K = L^k. Let x =
(u, t) ∈ int L^k, and let s = (v, τ) = −∇L_k(x). We should prove that s ∈ int L^k and that
∇L_k(s) = −x. By (4.3.1), one has

    v = −(2/(t² − u^Tu))·u,   τ = (2/(t² − u^Tu))·t
      ⇓
    τ ≥ 0 and τ² − v^Tv = 4/(t² − u^Tu) > 0
      ⇓
    s ∈ int L^k;

and

    −∇L_k(s) = (2/(τ² − v^Tv))·(−v, τ) = ((t² − u^Tu)/2)·((2/(t² − u^Tu))·u, (2/(t² − u^Tu))·t) = (u, t) = x,

i.e., ∇L_k(s) = −x.
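The computation can also be spot-checked numerically: for the barrier L_k(u, t) = −ln(t² − u^Tu), the map x ↦ −∇L_k(x) should send int L^k into itself and be an involution. A sketch (numpy and the sample dimensions are our own choices, not part of the book's text):

```python
import numpy as np

def neg_grad_lorentz(u, t):
    """-(gradient) of L_k(u, t) = -ln(t^2 - u'u) at an interior point of L^k."""
    w = t * t - u @ u
    return -2.0 * u / w, 2.0 * t / w

rng = np.random.default_rng(0)
for _ in range(100):
    u = rng.standard_normal(4)
    t = np.linalg.norm(u) + rng.random() + 0.1        # interior point: t > ||u||_2
    v, tau = neg_grad_lorentz(u, t)
    assert tau > np.linalg.norm(v)                    # s = -grad L_k(x) in int L^k
    u2, t2 = neg_grad_lorentz(v, tau)                 # applying the map twice...
    assert np.allclose(u2, u) and np.isclose(t2, t)   # ...returns x, i.e. grad L_k(s) = -x
```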
6.4.2 Scalings of canonical cones

Exercise 4.4.1) Prove that
1) Whenever e ∈ R^{k−1} is a unit vector and µ ∈ R, the linear transformation

    L_{µ,e} : (u, t)^T ↦ ( u − [µt − (√(1+µ²) − 1)e^Tu]e,  √(1+µ²)t − µe^Tu )^T     (∗)

maps the cone L^k onto itself. Besides this, transformation (∗) preserves the "space-time interval"
x^TJ_kx ≡ −x1² − ... − x_{k−1}² + x_k²:

    [L_{µ,e}x]^T J_k [L_{µ,e}x] = x^T J_k x  ∀x ∈ R^k   [⇔ L_{µ,e}^T J_k L_{µ,e} = J_k]

and L_{µ,e}^{−1} = L_{µ,−e}.
Solution: Let x = (u, t)^T ∈ L^k. Denoting s ≡ (v, τ)^T = L_{µ,e}x, we have

    v = u − [µt − (√(1+µ²) − 1)e^Tu]e,   τ = √(1+µ²)t − µe^Tu
      ⇓
    τ = √(1+µ²)t − µe^Tu ≥ √(1+µ²)t − |µ|·‖u‖₂ ≥ 0   [since t ≥ ‖u‖₂, ‖e‖₂ = 1]
    τ² − v^Tv = (1+µ²)t² − 2µ√(1+µ²)te^Tu + µ²(e^Tu)²
                − u^Tu − [µt − (√(1+µ²) − 1)e^Tu]²
                + 2(e^Tu)[µt − (√(1+µ²) − 1)e^Tu]
              = t² − u^Tu   [just arithmetic!]

Thus, x ∈ L^k ⇒ L_{µ,e}x ∈ L^k. To replace here ⇒ with ⇔, it suffices to verify (a straightforward
computation!) that

    L_{µ,e}^{−1} = L_{µ,−e},

so that both L_{µ,e} and its inverse map L^k onto itself.
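A quick numeric sanity check of both claims — preservation of x^TJ_kx and L_{µ,−e} being the inverse of L_{µ,e} — under our own choice of k, µ and e (a sketch, not a proof):

```python
import numpy as np

def L_mu_e(x, mu, e):
    """The transformation (*): (u, t) -> (u - [mu*t - (sqrt(1+mu^2)-1)e'u]e,
    sqrt(1+mu^2)*t - mu*e'u), for x = (u, t) with t the last coordinate."""
    u, t = x[:-1], x[-1]
    c = np.sqrt(1.0 + mu * mu)
    alpha = mu * t - (c - 1.0) * (e @ u)
    return np.concatenate([u - alpha * e, [c * t - mu * (e @ u)]])

def interval(x):
    """The "space-time interval" x'J_k x = x_k^2 - x_1^2 - ... - x_{k-1}^2."""
    return x[-1] ** 2 - x[:-1] @ x[:-1]

rng = np.random.default_rng(0)
e = rng.standard_normal(3)
e /= np.linalg.norm(e)                               # e must be a unit vector
mu = 0.7
for _ in range(100):
    x = rng.standard_normal(4)
    y = L_mu_e(x, mu, e)
    assert np.isclose(interval(y), interval(x))      # the interval is preserved
    assert np.allclose(L_mu_e(y, mu, -e), x)         # L_{mu,-e} inverts L_{mu,e}
```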
6.4.3 Dikin ellipsoid

Exercise 4.8 Prove that if K is a canonical cone, K is the corresponding canonical barrier and
X ∈ int K, then the Dikin ellipsoid

    W_X = {Y | ‖Y − X‖_X ≤ 1}    [‖H‖_X = √(⟨[∇²K(X)]H, H⟩_E)]

is contained in K.
Hint: Note that the property to be proved is “stable w.r.t. taking direct products”, so that
it suffices to verify it in the cases of K = Lk and K = Sk+ . Further, use the result of Exercise
4.7 to verify that the Dikin ellipsoid “conforms with scalings”: the image of WX under a
scaling Q is exactly WQX . Use this observation to reduce the general case to the one where
X = e(K) is the central point of the cone in question, and verify straightforwardly that
We(K) ⊂ K.
Solution: According to the Hint, it suffices to verify the inclusion W_X ⊂ K in the following two
particular cases:
A: K = S^k_+, X = I_k;
B: K = L^k, X = (0_{k−1}, √2)^T.
Note that in both cases ∇²K(X) is the unit matrix, so that all we need to prove is that the
unit ball, centered at our particular X, is contained in our particular K.
A: We should prove that if ‖H‖_F ≤ 1, then I + H ⪰ 0, which is evident: the moduli of
eigenvalues of H are ≤ ‖H‖_F ≤ 1, so that all these eigenvalues are ≥ −1.
B: We should prove that if du ∈ R^{k−1}, dt ∈ R satisfy dt² + du^Tdu ≤ 1, then the point
(0_{k−1}, √2)^T + (du, dt)^T = (du, √2 + dt)^T belongs to L^k. In other words, we should verify that
√2 + dt ≥ 0 (which is evident) and that (√2 + dt)² − du^Tdu ≥ 0. Here is the verification of
the latter statement:

    (√2 + dt)² − du^Tdu = 2 + 2√2·dt + dt² − du^Tdu
                        = 1 + 2√2·dt + 2dt² + (1 − dt² − du^Tdu)
                        ≥ 1 + 2√2·dt + 2dt²   [since dt² + du^Tdu ≤ 1]
                        = (1 + √2·dt)² ≥ 0.
Exercise 4.9.2,4) Let K be a canonical cone:

    K = S^{k1}_+ × ... × S^{kp}_+ × L^{k_{p+1}} × ... × L^{k_m} ⊂ E = S^{k1} × ... × S^{kp} × R^{k_{p+1}} × ... × R^{k_m},   (Cone)

and let X ∈ int K. Prove that
2) Whenever Y ∈ K, one has

    ⟨∇K(X), Y − X⟩_E ≤ θ(K).

4) The conic cap K_X is contained in the ‖·‖_X-ball, centered at X, of the radius θ(K):

    Y ∈ K_X ⇒ ‖Y − X‖_X ≤ θ(K).

Solution to 2), 4): According to the Hint, it suffices to verify the statement in the case when
X = e(K) is the central point of K; in this case the Hessian ∇²K(X) is just the unit matrix,
whence, by Proposition 4.3.1, ∇K(X) = −X.
2): We should prove that if Y ∈ K, then ⟨∇K(X), Y − X⟩_E ≤ θ(K). The statement clearly
is "stable w.r.t. taking direct products", so that it suffices to prove it in the cases of K = S^k_+
and K = L^k.
In the case of K = S^k_+ what should be proved is

    ∀H ∈ S^k_+ : Tr(I_k − H) ≤ k,

which is evident.
In the case of K = L^k what should be proved is

    ∀(u, t)^T with ‖u‖₂ ≤ t : √2(√2 − t) ≤ 2,

which again is evident.
4): We should prove that if X = e(K), ⟨−∇K(X), X − Y⟩_E ≥ 0 and Y ∈ K, then ‖Y − X‖_E ≤ θ(K).
Let Y ∈ K be such that ⟨−∇K(X), X − Y⟩_E ≥ 0, i.e., such that ⟨X, X − Y⟩_E ≥ 0. We may
think of Y as of a collection of a block-diagonal symmetric positive semidefinite matrix H
with diagonal blocks of the sizes k1, ..., kp and m − p vectors (u_i, t_i)^T ∈ L^{k_i}, i = p + 1, ..., m
(see (Cone)); setting D = I − H, the condition ⟨X, X − Y⟩_E ≥ 0 now becomes

    Tr(D) + Σ_{i=p+1}^m √2(√2 − t_i) ≥ 0.   (∗)

We now have, denoting by D_j the eigenvalues of D and by n = Σ_{i=1}^p k_i the row size of D:

    ‖X − Y‖²_X = ‖X − Y‖²_E = Tr((I − H)²) + Σ_{i=p+1}^m [(√2 − t_i)² + u_i^Tu_i]
               = Σ_{j=1}^n D_j² + Σ_{i=p+1}^m [2 − 2√2·t_i + t_i² + u_i^Tu_i]
               ≤ Σ_{j=1}^n D_j² + Σ_{i=p+1}^m [2 − 2√2·t_i + 2t_i²]   [since u_i^Tu_i ≤ t_i²]
               = Σ_{j=1}^n D_j² + Σ_{i=p+1}^m [1 + (1 − √2·t_i)²].

Denoting q = m − p and D_{n+i} = 1 − √2·t_{p+i}, i = 1, ..., q, we come to the relation

    ‖X − Y‖²_X ≤ q + Σ_{j=1}^{n+q} D_j²,   (6.4.1)

while (∗) and the relations H ⪰ 0, t_i ≥ 0 imply that

    D_j ≤ 1, j = 1, ..., n + q;   Σ_{j=1}^{n+q} D_j ≥ −q.   (6.4.2)
Let A be the maximum of the right hand side in (6.4.1) over D_j's satisfying (6.4.2), and let
D* = (D*₁, ..., D*_{n+q})^T be the corresponding maximizer (which clearly exists – (6.4.2) defines
a compact set!). In the case of n + q = 1 we clearly have A = 1 + q. Now let n + q > 1. We
claim that among the n + q coordinates of D*, n + q − 1 are equal to 1, and the remaining
coordinate equals 1 − n − 2q = −(n + 2q − 1). Indeed, if there were at least two entries in D*
less than 1, then subtracting a small enough in absolute value δ ≠ 0 from one of them and
adding the same δ to the other, we would preserve the feasibility of the perturbed point w.r.t.
(6.4.2) and, with a properly chosen sign of δ, increase Σ_j D_j², which is impossible. Thus, at
least n + q − 1 coordinates of D* are equal to 1; among the points with this property which
satisfy (6.4.2), the one with the largest Σ_j D_j² clearly has the remaining coordinate equal to
1 − n − 2q, as claimed.
From our analysis it follows that

    A = q + 1                                                    if n + q = 1,
    A = q + (n + q − 1) + (n + 2q − 1)² = (n + 2q − 1)(n + 2q)   if n + q > 1;

recalling that θ(K) = n + 2q and taking into account (6.4.1) and the origin of A, we get
‖X − Y‖_X ≤ θ(K), as claimed.
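The extremal claim — that for n + q > 1 the maximum A of q + Σ_j D_j² over (6.4.2) equals (n + 2q − 1)(n + 2q) — can be probed by random sampling; the sketch below (our own illustration; sampling only gives evidence, not a proof) also checks that the maximizer described above is feasible and attains this value:

```python
import random

n, q = 3, 2
theta = n + 2 * q                        # theta(K) = n + 2q
A = (theta - 1) * theta                  # claimed maximum for n + q > 1
D_star = [1.0] * (n + q - 1) + [1.0 - theta]
assert sum(D_star) >= -q and max(D_star) <= 1.0          # D* satisfies (6.4.2)
assert abs(q + sum(d * d for d in D_star) - A) < 1e-9    # and attains A
random.seed(0)
for _ in range(20000):
    # any feasible point has D_j >= -(n+2q-1), so sampling this box is enough
    D = [random.uniform(1.0 - theta, 1.0) for _ in range(n + q)]
    if sum(D) < -q:
        continue                         # skip points violating (6.4.2)
    assert q + sum(d * d for d in D) <= A + 1e-9
```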
6.4.4 More on canonical barriers
Exercise 4.11, second statement Prove that if K is a canonical cone, K is the associated canonical
barrier, X ∈ int K and H ∈ K, H ≠ 0, then

    inf_{t≥0} K(X + tH) = −∞.   (∗)
Derive from this fact that

(!!) Whenever N is an affine plane which intersects the interior of K, K is below bounded
on the intersection N ∩ K if and only if the intersection is bounded.

Solution: As explained in the Hint, it suffices to verify (∗) in the particular case when X is the
central point of K. It is also clear that (∗) is stable w.r.t. taking direct products, so that we
can restrict ourselves to the cases of K = S^k_+ and K = L^k.
Case of K = S^k_+, X = e(S^k_+) = I_k: Denoting by H_i ≥ 0 the eigenvalues of H and noting that
at least one of H_i is > 0 due to H ≠ 0, we have for t > 0:

    K(X + tH) = −ln Det(I_k + tH) = −Σ_{i=1}^k ln(1 + tH_i) → −∞, t → ∞.

Case of K = L^k, X = e(L^k) = (0_{k−1}, √2)^T: Setting H = (u, s)^T, we have s ≥ ‖u‖₂ and s > 0
due to H ≠ 0. For t > 0 we have

    K(X + tH) = −ln((√2 + ts)² − t²u^Tu) = −ln(2 + 2√2·ts + t²(s² − u^Tu))
              ≤ −ln(2 + 2√2·ts) → −∞, t → ∞.
To derive (!!), note that if U = N ∩ K is bounded, then K is below bounded on U just in
view of convexity of K (moreover, from the fact that K is a barrier for K it follows that
K attains its minimum on U). It remains to prove that if U is unbounded, then K is not
below bounded on U. If U is unbounded, there exists a nonzero direction H ∈ K which is
parallel to N (take as H a limiting point of the sequence ‖Y_i − X‖₂⁻¹(Y_i − X), where Y_i ∈ U,
‖Y_i‖₂ → ∞ as i → ∞, and X is a fixed point from U). By (∗), K is not below bounded on
the ray {X + tH | t ≥ 0}, and this ray clearly belongs to U. Thus, K is not below bounded
on U.
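For the Lorentz case the divergence to −∞ is easy to observe numerically (a sketch with our own sample direction H ∈ L²):

```python
import math

# H = (h, s) with s >= |h| > 0 lies in L^2;  X = e(L^2) = (0, sqrt(2))
h, s = 0.6, 1.0

def barrier_along_ray(t):
    """K(X + t*H) = -ln((sqrt(2) + t*s)^2 - (t*h)^2) for the Lorentz barrier."""
    u = t * h
    tt = math.sqrt(2.0) + t * s
    return -math.log(tt * tt - u * u)

vals = [barrier_along_ray(10.0 ** j) for j in range(7)]
assert all(b < a for a, b in zip(vals, vals[1:]))  # strictly decreasing along the ray
assert vals[-1] < -10.0                            # already far below any fixed bound
```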
6.4.5 Around the primal path-following method
Exercise 4.15 Looking at the data in the table at the end of Section 4.5.3, do you believe that the
corresponding method is exactly the short-step primal path-following method from Theorem 4.5.1 with
the stepsize policy (4.5.12)?
Solution: The table cannot correspond to the indicated method. Indeed, we see from the
table that the duality gap along the 12-iteration trajectory is reduced by a factor of about 10⁶;
since the duality gap in a short-step method is nearly inversely proportional to the value of
the penalty, the latter in the course of our 12 iterations should be increased by a factor of order
10⁵–10⁶. In our case θ(K) = 3, and the policy (4.5.12) increases the penalty at an iteration
by the factor (1 + 0.1/√3) ≈ 1.0577; with this policy, in 12 iterations the penalty would be
increased by 1.0577¹² < 2, which is very far from 10⁵!
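The arithmetic is a one-liner to check (a sketch; the numbers are those in the solution above):

```python
# penalty growth of the short-step policy (4.5.12) with theta(K) = 3
growth = 1 + 0.1 / 3 ** 0.5
factor_12 = growth ** 12
assert 1.057 < growth < 1.058      # the per-iteration factor, ~1.0577
assert factor_12 < 2               # 12 iterations: total factor below 2...
assert factor_12 < 1e5             # ...nowhere near the needed 1e5-1e6
```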
6.4.6 An infeasible start path-following method
Exercise 4.18 Consider the problem

    max_Y { ⟨C̃, Y⟩_Ẽ | Y ∈ (M + R) ∩ K̃ },   (Aux)

where

    K̃ = K × K × S¹₊ × S¹₊   [S¹₊ = R₊]

(K is a canonical cone),

    M = { (U, V, s, r) :  U + rB ∈ L,  V − rC ∈ L^⊥,  ⟨C, U⟩_E − ⟨B, V⟩_E + s = 0 }

is a linear subspace in the space Ẽ where the cone K̃ lives, and

    C ∈ L,  B ∈ L^⊥.

It is given that the problem (Aux) is feasible. Prove that the feasible set of (Aux) is unbounded.
Solution: According to the Hint, we should prove that the linear space M^⊥ does not intersect
int K̃.
Let us compute M^⊥. A collection (ξ, η, s, r), ξ, η ∈ E, s, r ∈ S¹ = R, is in M^⊥ if and only if
the linear equation in variables X, S, σ, τ

    ⟨X, ξ⟩_E + ⟨S, η⟩_E + σs + τr = 0

is a corollary of the system of linear equations

    X + τB ∈ L,  S − τC ∈ L^⊥,  ⟨X, C⟩_E − ⟨S, B⟩_E + σ = 0;

by Linear Algebra, this is the case if and only if there exist U ∈ L^⊥, V ∈ L and a real λ such
that

    (a) ξ = U + λC,
    (b) η = V − λB,
    (c) s = λ,
    (d) r = ⟨U, B⟩_E − ⟨V, C⟩_E.   (Pr)
We have obtained a parameterization of M^⊥ via the "parameters" U, V, λ running through,
respectively, L^⊥, L and R. Now assume, contrary to what should be proved, that the
intersection of M^⊥ and int K̃ is nonempty. In other words, assume that there exist U ∈ L^⊥,
V ∈ L and λ ∈ R which, being substituted in (Pr), result in a collection (ξ, η, s, r) such
that ξ ∈ int K, η ∈ int K, s > 0, r > 0. From (Pr.c) it follows that λ > 0; since (Pr) is
homogeneous, we may normalize our U, V, λ to make λ = 1, still keeping ξ ∈ int K, η ∈ int K,
s > 0, r > 0. Assuming λ = 1 and taking into account that U, B ∈ L^⊥ and V, C ∈ L, we get
from (Pr.a–b):

    ⟨ξ, η⟩_E = ⟨C, V⟩_E − ⟨B, U⟩_E;

adding this equality to (Pr.d), we get

    ⟨ξ, η⟩_E + r = 0,

which is impossible, since both r > 0 and ⟨ξ, η⟩_E > 0 (recall that the cone K is self-dual and
ξ, η ∈ int K). We have come to the desired contradiction.
Exercise 4.19 Let X̄, S̄ be a strictly feasible pair of primal-dual solutions to the primal-dual pair of
problems

    min_X { ⟨C, X⟩_E | X ∈ (L − B) ∩ K }    (P)
    max_S { ⟨B, S⟩_E | S ∈ (L^⊥ + C) ∩ K }  (D)

so that there exists γ ∈ (0, 1] such that

    γ‖X‖_E ≤ ⟨S̄, X⟩_E  ∀X ∈ K,
    γ‖S‖_E ≤ ⟨X̄, S⟩_E  ∀S ∈ K.

Prove that if Y = (X, S, σ, τ) is feasible for (Aux), then

    ‖Y‖_Ẽ ≤ ατ + β,
    α = γ⁻¹[⟨X̄, C⟩_E − ⟨S̄, B⟩_E] + 1,
    β = γ⁻¹[⟨X̄ + B, D⟩_E + ⟨S̄ − C, P⟩_E + d].   (6.4.3)
Solution: We have X̄ = Ū − B, Ū ∈ L, and S̄ = V̄ + C, V̄ ∈ L^⊥. Taking into account the
constraints of (Aux), we get

    ⟨Ū, S − τC − D⟩_E = 0 ⇒ ⟨Ū, S⟩_E = ⟨Ū, τC + D⟩_E ⇒ ⟨X̄, S⟩_E = −⟨B, S⟩_E + ⟨Ū, τC + D⟩_E,
    ⟨V̄, X + τB − P⟩_E = 0 ⇒ ⟨V̄, X⟩_E = ⟨V̄, −τB + P⟩_E ⇒ ⟨S̄, X⟩_E = ⟨C, X⟩_E + ⟨V̄, −τB + P⟩_E,
      ⇒
    ⟨X̄, S⟩_E + ⟨S̄, X⟩_E = [⟨C, X⟩_E − ⟨B, S⟩_E] + τ[⟨Ū, C⟩_E − ⟨V̄, B⟩_E] + ⟨Ū, D⟩_E + ⟨V̄, P⟩_E
                         = d − σ + τ[⟨Ū, C⟩_E − ⟨V̄, B⟩_E] + ⟨Ū, D⟩_E + ⟨V̄, P⟩_E,

whence

    γ[‖X‖_E + ‖S‖_E] + σ ≤ τ[⟨Ū, C⟩_E − ⟨V̄, B⟩_E] + ⟨Ū, D⟩_E + ⟨V̄, P⟩_E + d,

and (6.4.3) follows (recall that ⟨C, B⟩_E = 0).
Exercise 4.23 Let K be a canonical cone, let the primal-dual pair of problems

    min_X { ⟨C, X⟩_E | X ∈ (L − B) ∩ K }    (P)
    max_S { ⟨B, S⟩_E | S ∈ (L^⊥ + C) ∩ K }  (D)

be strictly primal-dual feasible and be normalized by ⟨C, B⟩_E = 0, let (X*, S*) be a primal-dual optimal
solution to the pair, and let X, S "ε-satisfy" the feasibility and optimality conditions for (P), (D), i.e.,

    (a) X ∈ K ∩ (L − B + ∆X), ‖∆X‖_E ≤ ε,
    (b) S ∈ K ∩ (L^⊥ + C + ∆S), ‖∆S‖_E ≤ ε,
    (c) ⟨C, X⟩_E − ⟨B, S⟩_E ≤ ε.

Prove that

    ⟨C, X⟩_E − Opt(P) ≤ ε(1 + ‖X* + B‖_E),
    Opt(D) − ⟨B, S⟩_E ≤ ε(1 + ‖S* − C‖_E).
Solution: We have S − C − ∆S ∈ L^⊥ and X* + B ∈ L, whence

    0 = ⟨S − C − ∆S, X* + B⟩_E
      = ⟨S, X*⟩_E − Opt(P) + ⟨S, B⟩_E + ⟨−∆S, X* + B⟩_E,

and, since ⟨S, X*⟩_E ≥ 0 (X*, S ∈ K = K*),

    −Opt(P) ≤ −⟨S, B⟩_E + ⟨∆S, X* + B⟩_E ≤ −⟨S, B⟩_E + ε‖X* + B‖_E.

Combining the resulting inequality and (c), we get the first of the inequalities to be proved;
the second of them is given by a "symmetric" reasoning.
Appendix A

Prerequisites from Linear Algebra and Analysis

Regarded as mathematical entities, the objective and the constraints in a Mathematical Programming
problem are functions of several real variables; therefore, before entering Optimization Theory and
Methods, we need to recall several basic notions and facts about the spaces R^n where these functions
live, as well as about the functions themselves. The reader is supposed to know most of the facts to follow,
so he/she should not be surprised by the "cooking book" style which we use below.
A.1 Space R^n: algebraic structure
Basically all events and constructions to be considered will take place in the space Rn of n-dimensional
real vectors. This space can be described as follows.

A.1.1 A point in Rn
A point in Rn (called also an n-dimensional vector) is an ordered collection x = (x1 , ..., xn ) of n reals,
called the coordinates, or components, or entries of vector x; the space Rn itself is the set of all collections
of this type.

A.1.2 Linear operations

R^n is equipped with two basic operations:
• Addition of vectors. This operation takes on input two vectors x = (x1 , ..., xn ) and y = (y1 , ..., yn )
and produces from them a new vector

x + y = (x1 + y1 , ..., xn + yn )

with entries which are sums of the corresponding entries in x and in y.


• Multiplication of vectors by reals. This operation takes on input a real λ and an n-dimensional
vector x = (x1 , ..., xn ) and produces from them a new vector

λx = (λx1 , ..., λxn )

with entries which are λ times the entries of x.


As far as addition and multiplication by reals are concerned, the arithmetic of R^n inherits most of
the common rules of Real Arithmetic, like x + y = y + x, (x + y) + z = x + (y + z), (λ + µ)(x + y) =
λx + µx + λy + µy, λ(µx) = (λµ)x, etc.

A.1.3 Linear subspaces

Linear subspaces in R^n are, by definition, nonempty subsets of R^n which are closed with respect to
addition of vectors and multiplication of vectors by reals:

    L ⊂ R^n is a linear subspace ⇔ { L ≠ ∅;  x, y ∈ L ⇒ x + y ∈ L;  x ∈ L, λ ∈ R ⇒ λx ∈ L }.

A.1.3.A. Examples of linear subspaces:

1. The entire R^n;

2. The trivial subspace containing the single zero vector 0 = (0, ..., 0)^{1)} (this vector/point is also
called the origin);

3. The set {x ∈ R^n : x1 = 0} of all vectors x with the first coordinate equal to zero.

The latter example admits a natural extension:
4. The set of all solutions to a homogeneous (i.e., with zero right hand side) system of linear equations

    { x ∈ R^n :  a11·x1 + ... + a1n·xn = 0,
                 a21·x1 + ... + a2n·xn = 0,
                 ...
                 am1·x1 + ... + amn·xn = 0 }   (A.1.1)
always is a linear subspace in Rn . This example is “generic”, that is, every linear subspace in Rn is
the solution set of a (finite) system of homogeneous linear equations, see Proposition A.3.6 below.
5. Linear span of a set of vectors. Given a nonempty set X of vectors, one can form a linear subspace
Lin(X), called the linear span of X; this subspace consists of all vectors x which can be represented
as linear combinations Σ_{i=1}^N λi·xi of vectors from X (in Σ_{i=1}^N λi·xi, N is an arbitrary positive
integer, λi are reals and xi belong to X). Note that

Lin(X) is the smallest linear subspace which contains X: if L is a linear subspace such
that L ⊃ X, then L ⊃ Lin(X) (why?).

The “linear span” example also is generic:

Every linear subspace in Rn is the linear span of an appropriately chosen finite set of
vectors from Rn .

(see Theorem A.1.2.(i) below).

A.1.3.B. Sums and intersections of linear subspaces. Let {Lα}α∈I be a family (finite or
infinite) of linear subspaces of R^n. From this family, one can build two sets:

1. The sum Σ_α Lα of the subspaces Lα, which consists of all vectors which can be represented as finite
sums of vectors taken each from its own subspace of the family;

2. The intersection ∩_α Lα of the subspaces from the family.
1)
Pay attention to the notation: we use the same symbol 0 to denote the real zero and the n-dimensional
vector with all coordinates equal to zero; these two zeros are not the same, and one should understand from the
context (it always is very easy) which zero is meant.
Theorem A.1.1 Let {Lα}α∈I be a family of linear subspaces of R^n. Then
(i) The sum Σ_α Lα of the subspaces from the family is itself a linear subspace of R^n; it is the smallest
of those subspaces of R^n which contain every subspace from the family;
(ii) The intersection ∩_α Lα of the subspaces from the family is itself a linear subspace of R^n; it is the
largest of those subspaces of R^n which are contained in every subspace from the family.

A.1.4 Linear independence, bases, dimensions

A collection X = {x1, ..., xN} of vectors from R^n is called linearly independent, if no nontrivial (i.e., with
at least one nonzero coefficient) linear combination of vectors from X is zero.
Example of linearly independent set: the collection of n standard basic orths e1 = (1, 0, ..., 0),
e2 = (0, 1, 0, ..., 0), ..., en = (0, ..., 0, 1).
Examples of linearly dependent sets: (1) X = {0}; (2) X = {e1 , e1 }; (3) X = {e1 , e2 , e1 +e2 }.
A collection of vectors f 1 , ..., f m is called a basis in Rn , if
1. The collection is linearly independent;
2. Every vector from Rn is a linear combination of vectors from the collection (i.e., Lin{f 1 , ..., f m } =
Rn ).
Example of a basis: The collection of standard basic orths e1 , ..., en is a basis in Rn .
Examples of non-bases: (1) The collection {e2 , ..., en }. This collection is linearly indepen-
dent, but not every vector is a linear combination of the vectors from the collection; (2)
The collection {e1 , e1 , e2 , ..., en }. Every vector is a linear combination of vectors form the
collection, but the collection is not linearly independent.
Besides the bases of the entire Rn , one can speak about the bases of linear subspaces:
A collection {f 1 , ..., f m } of vectors is called a basis of a linear subspace L, if
1. The collection is linearly independent,
2. L = Lin{f 1 , ..., f m }, i.e., all vectors f i belong to L, and every vector from L is a linear combination
of the vectors f 1 , ..., f m .
In order to avoid trivial remarks, it makes sense to agree once and for all that

An empty set of vectors is linearly independent, and an empty linear combination of vectors
Σ_{i∈∅} λi·xi equals zero.

With this convention, the trivial linear subspace L = {0} also has a basis, specifically, an empty set of
vectors.
Theorem A.1.2 (i) Let L be a linear subspace of Rn . Then L admits a (finite) basis, and all bases of
L are comprised of the same number of vectors; this number is called the dimension of L and is denoted
by dim (L).
We have seen that Rn admits a basis comprised of n elements (the standard basic orths).
From (i) it follows that every basis of Rn contains exactly n vectors, and the dimension of
Rn is n.
(ii) The larger a linear subspace of R^n, the larger its dimension: if L ⊂ L′ are linear subspaces
of R^n, then dim(L) ≤ dim(L′), and the equality takes place if and only if L = L′.
We have seen that the dimension of Rn is n; according to the above convention, the trivial
linear subspace {0} of Rn admits an empty basis, so that its dimension is 0. Since {0} ⊂
L ⊂ Rn for every linear subspace L of Rn , it follows from (ii) that the dimension of a linear
subspace in Rn is an integer between 0 and n.
(iii) Let L be a linear subspace in R^n. Then
(iii.1) Every linearly independent subset of vectors from L can be extended to a basis of L;
(iii.2) From every spanning subset X for L – i.e., a set X such that Lin(X) = L – one can extract
a basis of L.
It follows from (iii) that
– every linearly independent subset of L contains at most dim (L) vectors, and if it contains
exactly dim (L) vectors, it is a basis of L;
– every spanning set for L contains at least dim (L) vectors, and if it contains exactly dim (L)
vectors, it is a basis of L.
(iv) Let L be a linear subspace in R^n, and f¹, ..., f^m be a basis in L. Then every vector x ∈ L admits
exactly one representation

    x = Σ_{i=1}^m λi(x)·f^i

as a linear combination of vectors from the basis, and the mapping

    x ↦ (λ1(x), ..., λm(x)) : L → R^m

is a one-to-one mapping of L onto R^m which is linear, i.e., for every i = 1, ..., m one has

    λi(x + y) = λi(x) + λi(y)  ∀(x, y ∈ L);
    λi(νx) = νλi(x)  ∀(x ∈ L, ν ∈ R).   (A.1.2)

The reals λi (x), i = 1, ..., m, are called the coordinates of x ∈ L in the basis f 1 , ..., f m .
E.g., the coordinates of a vector x ∈ Rn in the standard basis e1 , ..., en of Rn – the one
comprised of the standard basic orths – are exactly the entries of x.
(v) [Dimension formula] Let L1, L2 be linear subspaces of R^n. Then

    dim(L1 ∩ L2) + dim(L1 + L2) = dim(L1) + dim(L2).
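The dimension formula can be illustrated numerically: for generic matrices B1, B2 (our own ad hoc names) whose columns span L1 and L2, dim(L1 + L2) is the rank of [B1, B2], and — when B1, B2 have full column rank — dim(L1 ∩ L2) is the nullity of [B1, −B2]. A sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 8, 4, 5
B1 = rng.standard_normal((n, p))        # columns span L1 (generically dim(L1) = p)
B2 = rng.standard_normal((n, q))        # columns span L2 (generically dim(L2) = q)
rank = np.linalg.matrix_rank
assert rank(B1) == p and rank(B2) == q  # full column rank
dim_sum = rank(np.hstack([B1, B2]))     # dim(L1 + L2)
# a vector lies in L1 ∩ L2 iff it equals B1 a = B2 b for some a, b; with full
# column rank, dim(L1 ∩ L2) is the nullity of [B1, -B2]
dim_cap = (p + q) - rank(np.hstack([B1, -B2]))
assert dim_cap + dim_sum == p + q       # the dimension formula
```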

A.1.5 Linear mappings and matrices

A function A(x) (another name – mapping) defined on R^n and taking values in R^m is called linear, if it
preserves linear operations:

    A(x + y) = A(x) + A(y)  ∀(x, y ∈ R^n);   A(λx) = λA(x)  ∀(x ∈ R^n, λ ∈ R).

It is immediately seen that a linear mapping from R^n to R^m can be represented as multiplication by an
m × n matrix:

    A(x) = Ax,

and this matrix is uniquely defined by the mapping: the columns Aj of A are just the images of the
standard basic orths ej under the mapping A:

    Aj = A(ej).

Linear mappings from R^n into R^m can be added to each other:

    (A + B)(x) = A(x) + B(x)

and multiplied by reals:

    (λA)(x) = λA(x),
and the results of these operations again are linear mappings from Rn to Rm . The addition of linear
mappings and multiplication of these mappings by reals correspond to the same operations with the
matrices representing the mappings: adding/multiplying by reals mappings, we add, respectively, multiply
by reals the corresponding matrices.
Given two linear mappings A(x) : Rn → Rm and B(y) : Rm → Rk , we can build their superposition

C(x) ≡ B(A(x)) : Rn → Rk ,

which is again a linear mapping, now from Rn to Rk . In the language of matrices representing the
mappings, the superposition corresponds to matrix multiplication: the k × n matrix C representing the
mapping C is the product of the matrices representing A and B:

A(x) = Ax, B(y) = By ⇒ C(x) ≡ B(A(x)) = B · (Ax) = (BA)x.
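A minimal numeric illustration of both facts above — the columns of A are the images of the standard basic orths, and superposition corresponds to the matrix product BA (the sizes are our own choice):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 4, 3, 2
A = rng.standard_normal((m, n))          # A(x) = A @ x : R^n -> R^m
B = rng.standard_normal((k, m))          # B(y) = B @ y : R^m -> R^k
x = rng.standard_normal(n)
assert np.allclose(B @ (A @ x), (B @ A) @ x)   # superposition = matrix product
for j in range(n):                             # columns = images of basic orths
    e_j = np.zeros(n)
    e_j[j] = 1.0
    assert np.allclose(A @ e_j, A[:, j])
```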

Important convention. When speaking about adding n-dimensional vectors and multiplying them
by reals, it is absolutely unimportant whether we treat the vectors as the column ones, or the row ones,
or write down the entries in rectangular tables, or something else. However, when matrix operations
(matrix-vector multiplication, transposition, etc.) become involved, it is important whether we treat
our vectors as columns, as rows, or as something else. For the sake of definiteness, from now on we
treat all vectors as column ones, independently of how we refer to them in the text. For example, when
saying for the first time what a vector is, we wrote x = (x1, ..., xn), which might suggest that we were
speaking about row vectors. We stress that it is not the case, and the only reason for using the notation
x = (x1, ..., xn) instead of the "correct" one, with x1, ..., xn written in a column, is to save space and
to avoid ugly formulas when speaking about functions with vector arguments. After we have agreed that
there is no such thing as a row vector in this course, we can use (and do use) without any harm whatever
notation we want.

Exercise A.1 1. Mark in the list below those subsets of Rn which are linear subspaces, find out their
dimensions and point out their bases:
(a) Rn
(b) {0}
(c) ∅
(d) {x ∈ Rn : Σ_{i=1}^n i·xi = 0}
(e) {x ∈ Rn : Σ_{i=1}^n i·xi^2 = 0}
(f ) {x ∈ Rn : Σ_{i=1}^n i·xi = 1}
(g) {x ∈ Rn : Σ_{i=1}^n i·xi^2 = 1}

2. It is known that L is a subspace of Rn with exactly one basis. What is L?


3. Consider the space Rm×n of m × n matrices with real entries. As far as linear operations –
addition of matrices and multiplication of matrices by reals – are concerned, this space can be
treated as a certain RN .
(a) Find the dimension of Rm×n and point out a basis in this space

(b) In the space Rn×n of square n × n matrices, there are two interesting subsets: the set Sn
of symmetric matrices {A = [Aij ] : Aij = Aji } and the set Jn of skew-symmetric matrices
{A = [Aij ] : Aij = −Aji }.
i. Verify that both Sn and Jn are linear subspaces of Rn×n
ii. Find the dimension and point out a basis in Sn
iii. Find the dimension and point out a basis in Jn
iv. What is the sum of Sn and Jn ? What is the intersection of Sn and Jn ?

A.2 Space Rn : Euclidean structure


So far, we were interested solely in the algebraic structure of Rn , or, which is the same, in the properties
of the linear operations (addition of vectors and multiplication of vectors by scalars) the space is endowed
with. Now let us consider another structure on Rn – the standard Euclidean structure – which allows us to
speak about distances, angles, convergence, etc., and thus makes the space Rn a much richer mathematical
entity.

A.2.1 Euclidean structure


The standard Euclidean structure on Rn is given by the standard inner product – an operation which
takes on input two vectors x, y and produces from them a real, specifically, the real
hx, yi ≡ xT y = Σ_{i=1}^n xi yi

The basic properties of the inner product are as follows:

1. [bi-linearity]: The real-valued function hx, yi of two vector arguments x, y ∈ Rn is linear with
respect to every one of the arguments, the other argument being fixed:

hλu + µv, yi = λhu, yi + µhv, yi ∀(u, v, y ∈ Rn , λ, µ ∈ R)


hx, λu + µvi = λhx, ui + µhx, vi ∀(x, u, v ∈ Rn , λ, µ ∈ R)

2. [symmetry]: The function hx, yi is symmetric:

hx, yi = hy, xi ∀(x, y ∈ Rn ).

3. [positive definiteness]: The quantity hx, xi always is nonnegative, and it is zero if and only if x is
zero.

Remark A.2.1 The outlined three properties – bi-linearity, symmetry and positive definiteness – form the
definition of a Euclidean inner product, and there are infinitely many ways to satisfy these properties; in
other words, there are infinitely many different Euclidean inner products on Rn . The standard inner
product hx, yi = xT y is just a particular case of this general notion. Although in the sequel we normally
work with the standard inner product, the reader should remember that the facts we are about to recall
are valid for all Euclidean inner products, and not only for the standard one.
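To see that non-standard Euclidean inner products exist, here is a small NumPy sketch (an illustration of our own; the matrix Q is an arbitrary choice): any symmetric positive definite matrix Q defines an inner product hx, yi_Q = xT Qy satisfying the three properties above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# A non-standard Euclidean inner product <x, y>_Q = x^T Q y, where Q is
# symmetric positive definite (Q is our illustrative choice, not from the text).
M = rng.standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)            # symmetric positive definite

def ip(x, y):
    return x @ Q @ y

x, y, u, v = rng.standard_normal((4, n))
lam, mu = 2.5, -0.7

# bi-linearity in the first argument
assert np.isclose(ip(lam * u + mu * v, y), lam * ip(u, y) + mu * ip(v, y))
# symmetry
assert np.isclose(ip(x, y), ip(y, x))
# positive definiteness
assert ip(x, x) > 0 and np.isclose(ip(np.zeros(n), np.zeros(n)), 0)
```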

The notion of an inner product underlies a number of purely algebraic constructions, in particular, those
of inner product representation of linear forms and of orthogonal complement.

A.2.2 Inner product representation of linear forms on Rn


A linear form on Rn is a real-valued function f (x) on Rn which is additive (f (x + y) = f (x) + f (y)) and
homogeneous (f (λx) = λf (x)).
Example of a linear form: f (x) = Σ_{i=1}^n i·xi .
Examples of non-linear functions: (1) f (x) = x1 + 1; (2) f (x) = x1^2 − x2^2 ; (3) f (x) = sin(x1 ).
Sums of linear forms, and products of linear forms with reals, are again linear forms (scientifically speaking:
“linear forms on Rn form a linear space”). The Euclidean structure allows us to identify linear forms on
Rn with vectors from Rn :

Theorem A.2.1 Let h·, ·i be a Euclidean inner product on Rn .


(i) Let f (x) be a linear form on Rn . Then there exists a uniquely defined vector f ∈ Rn such that
the form is just the inner product with f :

f (x) = hf, xi ∀x

(ii) Vice versa, every vector f ∈ Rn defines, via the formula

f (x) ≡ hf, xi,

a linear form on Rn ;
(iii) The above one-to-one correspondence between the linear forms and vectors on Rn is linear:
adding linear forms (or multiplying a linear form by a real), we add (respectively, multiply by the real)
the vector(s) representing the form(s).

A.2.3 Orthogonal complement


A Euclidean structure allows us to associate with a linear subspace L ⊂ Rn another linear subspace L⊥
– the orthogonal complement (or the annulator) of L; by definition, L⊥ consists of all vectors which are
orthogonal to every vector from L:

L⊥ = {f : hf, xi = 0 ∀x ∈ L}.

Theorem A.2.2 (i) Twice taken, orthogonal complement recovers the original subspace: whenever L is
a linear subspace of Rn , one has
(L⊥ )⊥ = L;
(ii) The larger a linear subspace L is, the smaller its orthogonal complement is: if L1 ⊂ L2 are linear
subspaces of Rn , then L1⊥ ⊃ L2⊥

(iii) The intersection of a subspace and its orthogonal complement is trivial, and the sum of these
subspaces is the entire Rn :
L ∩ L⊥ = {0}, L + L⊥ = Rn .
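A hands-on way to compute L⊥ (a NumPy sketch of our own; the subspace below is an arbitrary illustrative choice) is to take the null space of the transposed basis matrix, which also lets us verify Theorem A.2.2 numerically:

```python
import numpy as np

# L = column span of F (an illustrative 2-dim subspace of R^5).
F = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 1.0],
              [0.0, 0.0],
              [1.0, 1.0]])            # 5 x 2, rank 2

# L^perp is the set of x with F^T x = 0, i.e. the null space of F^T;
# extract an orthonormal basis of it from the SVD.
def null_space(A, tol=1e-10):
    u, s, vT = np.linalg.svd(A)
    rank = int((s > tol).sum())
    return vT[rank:].T                 # orthonormal basis of the null space

G = null_space(F.T)                    # basis of L^perp, here 5 - 2 = 3 columns
assert G.shape[1] == F.shape[0] - np.linalg.matrix_rank(F)   # dim L + dim L^perp = n

# every basis vector of L^perp is orthogonal to L
assert np.allclose(F.T @ G, 0)

# (L^perp)^perp recovers L: its basis spans the same columns as F
H = null_space(G.T)
assert np.linalg.matrix_rank(np.hstack([F, H])) == np.linalg.matrix_rank(F)
```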

Remark A.2.2 From Theorem A.2.2.(iii) and the Dimension formula (Theorem A.1.2.(v)) it follows,
first, that for every subspace L in Rn one has

dim (L) + dim (L⊥ ) = n.

Second, every vector x ∈ Rn admits a unique decomposition

x = xL + xL⊥

into a sum of two vectors: the first of them, xL , belongs to L, and the second, xL⊥ , belongs to L⊥ .
This decomposition is called the orthogonal decomposition of x taken with respect to L, L⊥ ; xL is called
the orthogonal projection of x onto L, and xL⊥ – the orthogonal projection of x onto the orthogonal
complement of L. Both projections depend on x linearly, for example,

(x + y)L = xL + yL , (λx)L = λxL .

The mapping x 7→ xL is called the orthogonal projector onto L.
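The orthogonal projector is easy to realize numerically: if the columns of a matrix F form an orthonormal basis of L, then (w.r.t. the standard inner product) the projector is the matrix F F^T. A NumPy sketch, with an illustrative subspace of our own choosing:

```python
import numpy as np

# L = span of the orthonormal columns of F; projector P = F F^T.
F, _ = np.linalg.qr(np.array([[1.0, 0.0],
                              [2.0, 1.0],
                              [0.0, 1.0],
                              [1.0, 1.0]]))   # orthonormal basis of a 2-dim L in R^4

P = F @ F.T                                   # orthogonal projector onto L
x = np.array([1.0, 2.0, 3.0, 4.0])

x_L = P @ x                # orthogonal projection of x onto L
x_Lperp = x - x_L          # orthogonal projection onto L^perp

assert np.allclose(x, x_L + x_Lperp)          # x = x_L + x_{L^perp}
assert np.allclose(F.T @ x_Lperp, 0)          # x_Lperp is orthogonal to L
assert np.allclose(P @ x_L, x_L)              # projecting twice changes nothing

# linearity of the projector: (x + y)_L = x_L + y_L
y = np.array([0.0, -1.0, 1.0, 2.0])
assert np.allclose(P @ (x + y), P @ x + P @ y)
```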

A.2.4 Orthonormal bases


A collection of vectors f 1 , ..., f m is called orthonormal w.r.t. the Euclidean inner product h·, ·i, if distinct
vectors from the collection are orthogonal to each other:

i ≠ j ⇒ hf i , f j i = 0

and the inner product of every vector f i with itself is unit:

hf i , f i i = 1, i = 1, ..., m.

Theorem A.2.3 (i) An orthonormal collection f 1 , ..., f m always is linearly independent and is therefore
a basis of its linear span L = Lin(f 1 , ..., f m ) (such a basis in a linear subspace is called orthonormal). The
coordinates of a vector x ∈ L w.r.t. an orthonormal basis f 1 , ..., f m of L are given by explicit formulas:
x = Σ_{i=1}^m λi (x)f i ⇔ λi (x) = hx, f i i.

Example of an orthonormal basis in Rn : The standard basis {e1 , ..., en } is orthonormal with
respect to the standard inner product hx, yi = xT y on Rn (but is not orthonormal w.r.t.
other Euclidean inner products on Rn ).
Proof of (i): Taking the inner product of both sides in the equality

x = Σ_j λj (x)f j

with f i , we get

hx, f i i = hΣ_j λj (x)f j , f i i
         = Σ_j λj (x)hf j , f i i   [bilinearity of inner product]
         = λi (x)                   [orthonormality of {f i }]

Similar computation demonstrates that if 0 is represented as a linear combination of f i with
certain coefficients λi , then λi = h0, f i i = 0, i.e., all the coefficients are zero; this means that
an orthonormal system is linearly independent.
(ii) If f 1 , ..., f m is an orthonormal basis in a linear subspace L, then the inner product of two vectors
x, y ∈ L in the coordinates λi (·) w.r.t. this basis is given by the standard formula
hx, yi = Σ_{i=1}^m λi (x)λi (y).

Proof:

x = Σ_i λi (x)f i ,  y = Σ_i λi (y)f i
⇒ hx, yi = hΣ_i λi (x)f i , Σ_i λi (y)f i i
         = Σ_{i,j} λi (x)λj (y)hf i , f j i   [bilinearity of inner product]
         = Σ_i λi (x)λi (y)                   [orthonormality of {f i }]

(iii) Every linear subspace L of Rn admits an orthonormal basis; moreover, every orthonormal system
f 1 , ..., f m of vectors from L can be extended to an orthonormal basis in L.

Important corollary: All Euclidean spaces of the same dimension are “the same”. Specifi-
cally, if L is an m-dimensional space in a space Rn equipped with an Euclidean inner product
h·, ·i, then there exists a one-to-one mapping x 7→ A(x) of L onto Rm such that
• The mapping preserves linear operations:
A(x + y) = A(x) + A(y) ∀(x, y ∈ L); A(λx) = λA(x) ∀(x ∈ L, λ ∈ R);

• The mapping converts the h·, ·i inner product on L into the standard inner product on
Rm :
hx, yi = (A(x))T A(y) ∀x, y ∈ L.
Indeed, by (iii) L admits an orthonormal basis f 1 , ..., f m ; using (ii), one can immediately
check that the mapping
x 7→ A(x) = (λ1 (x), ..., λm (x))
which maps x ∈ L into the m-dimensional vector comprised of the coordinates of x in the
basis f 1 , ..., f m , meets all the requirements.
Proof of (iii) is given by the Gram-Schmidt orthogonalization process, important in its own
right, which goes as follows. We start with an arbitrary basis h1 , ..., hm in L and step by step convert it into
an orthonormal basis f 1 , ..., f m . At the beginning of a step t of the construction, we already
have an orthonormal collection f 1 , ..., f t−1 such that Lin{f 1 , ..., f t−1 } = Lin{h1 , ..., ht−1 }.
At a step t we
1. Build the vector

g t = ht − Σ_{j=1}^{t−1} hht , f j i f j .

It is easily seen (check it!) that
(a) One has
Lin{f 1 , ..., f t−1 , g t } = Lin{h1 , ..., ht };   (A.2.1)
(b) g t ≠ 0 (derive this fact from (A.2.1) and the linear independence of the collection
h1 , ..., hm );
(c) g t is orthogonal to f 1 , ..., f t−1
2. Since g t ≠ 0, the quantity hg t , g t i is positive (positive definiteness of the inner product),
so that the vector

f t = g t /√(hg t , g t i)
is well defined. It is immediately seen (check it!) that the collection f 1 , ..., f t is or-
thonormal and
Lin{f 1 , ..., f t } = Lin{f 1 , ..., f t−1 , g t } = Lin{h1 , ..., ht }.
Step t of the orthogonalization process is completed.

After m steps of the orthogonalization process, we end up with an orthonormal system f 1 , ..., f m
of vectors from L such that

Lin{f 1 , ..., f m } = Lin{h1 , ..., hm } = L,

so that f 1 , ..., f m is an orthonormal basis in L.


The construction can be easily modified (do it!) to extend a given orthonormal system of
vectors from L to an orthonormal basis of L.
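The Gram-Schmidt process translates directly into code. The following NumPy sketch (an illustration of our own, with an arbitrary starting basis) follows the two steps above verbatim, and also checks the coordinate formula λi (x) = hx, f i i of Theorem A.2.3.(i):

```python
import numpy as np

def gram_schmidt(H):
    """Orthonormalize the linearly independent columns of H (standard inner
    product): g^t = h^t - sum_j <h^t, f^j> f^j, then f^t = g^t / sqrt(<g^t, g^t>)."""
    fs = []
    for t in range(H.shape[1]):
        g = H[:, t] - sum((H[:, t] @ f) * f for f in fs)
        assert np.linalg.norm(g) > 1e-12     # g^t != 0 by linear independence
        fs.append(g / np.sqrt(g @ g))
    return np.column_stack(fs)

H = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 1.0]])              # a basis of a 3-dim subspace L of R^4

F = gram_schmidt(H)
assert np.allclose(F.T @ F, np.eye(3))       # columns are orthonormal
# same span: stacking H and F does not increase the rank
assert np.linalg.matrix_rank(np.hstack([H, F])) == 3

# coordinates w.r.t. an orthonormal basis: lambda_i(x) = <x, f^i>
x = H @ np.array([2.0, -1.0, 0.5])           # some x in L
assert np.allclose(F @ (F.T @ x), x)
```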

Exercise A.2 1. What is the orthogonal complement (w.r.t. the standard inner product) of the subspace
{x ∈ Rn : Σ_{i=1}^n xi = 0} in Rn ?

2. Find an orthonormal basis (w.r.t. the standard inner product) in the linear subspace {x ∈ Rn :
x1 = 0} of Rn

3. Let L be a linear subspace of Rn , and f 1 , ..., f m be an orthonormal basis in L. Prove that for every
x ∈ Rn , the orthogonal projection xL of x onto L is given by the formula

xL = Σ_{i=1}^m (xT f i )f i .

4. Let L1 , L2 be linear subspaces in Rn . Verify the formulas

(L1 + L2 )⊥ = L1⊥ ∩ L2⊥ ;   (L1 ∩ L2 )⊥ = L1⊥ + L2⊥ .

5. Consider the space of m × n matrices Rm×n , and let us equip it with the “standard inner product”
(called the Frobenius inner product)
hA, Bi = Σ_{i,j} Aij Bij

(as if we were treating m×n matrices as mn-dimensional vectors, writing the entries of the matrices
column by column, and then taking the standard inner product of the resulting long vectors).

(a) Verify that in terms of matrix multiplication the Frobenius inner product can be written as

hA, Bi = Tr(AB T ),

where Tr(C) is the trace (the sum of diagonal elements) of a square matrix C.
(b) Build an orthonormal basis in the linear subspace Sn of symmetric n × n matrices
(c) What is the orthogonal complement of the subspace Sn of symmetric n × n matrices in the
space Rn×n of square n × n matrices?
 
(d) Find the orthogonal decomposition, w.r.t. S2 , of the matrix
    [ 1 2 ]
    [ 3 4 ]
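For item 5, the identity hA, Bi = Tr(AB^T) and the Frobenius-orthogonality of Sn and the skew-symmetric matrices are easy to check numerically (a NumPy sketch with random illustrative matrices of our own):

```python
import numpy as np

rng = np.random.default_rng(2)
A, B = rng.standard_normal((2, 3, 4))       # two 3 x 4 matrices

# Frobenius inner product: entrywise, and via the trace formula
ip_entrywise = (A * B).sum()
ip_trace = np.trace(A @ B.T)
assert np.isclose(ip_entrywise, ip_trace)

# Every square C splits into a symmetric and a skew-symmetric part,
# and the two parts are orthogonal w.r.t. the Frobenius inner product.
C = rng.standard_normal((4, 4))
S = (C + C.T) / 2                           # symmetric part (lies in S^n)
J = (C - C.T) / 2                           # skew-symmetric part (lies in J^n)
assert np.allclose(S + J, C)                # orthogonal decomposition of C
assert np.isclose(np.trace(S @ J.T), 0.0)   # S and J are Frobenius-orthogonal
```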

A.3 Affine subspaces in Rn


Many of the events to come will take place not in the entire Rn , but in its affine subspaces which, geometri-
cally, are planes of different dimensions in Rn . Let us become acquainted with these subspaces.

A.3.1 Affine subspaces and affine hulls


Definition of an affine subspace. Geometrically, a linear subspace L of Rn is a special plane –
the one passing through the origin of the space (i.e., containing the zero vector). To get an arbitrary
plane M , it suffices to subject an appropriate special plane L to a translation – to add to all points from
L a fixed shifting vector a. This geometric intuition leads to the following
Definition A.3.1 [Affine subspace] An affine subspace (a plane) in Rn is a set of the form

M = a + L = {y = a + x | x ∈ L}, (A.3.1)

where L is a linear subspace in Rn and a is a vector from Rn 2).
E.g., shifting the linear subspace L comprised of vectors with zero first entry by a vector a = (a1 , ..., an ),
we get the set M = a + L of all vectors x with x1 = a1 ; according to our terminology, this is an affine
subspace.
An immediate question about the notion of an affine subspace is: what are the “degrees of freedom” in
decomposition (A.3.1) – how strictly does M determine a and L? The answer is as follows:
Proposition A.3.1 The linear subspace L in decomposition (A.3.1) is uniquely defined by M and is the
set of all differences of the vectors from M :

L = M − M = {x − y | x, y ∈ M }. (A.3.2)

The shifting vector a is not uniquely defined by M and can be chosen as an arbitrary vector from M .

A.3.2 Intersections of affine subspaces, affine combinations and affine hulls


An immediate conclusion of Proposition A.3.1 is as follows:
Corollary A.3.1 Let {Mα } be an arbitrary family of affine subspaces in Rn , and assume that the set
M = ∩α Mα is nonempty. Then M is an affine subspace.
From Corollary A.3.1 it immediately follows that for every nonempty subset Y of Rn there exists the
smallest affine subspace containing Y – the intersection of all affine subspaces containing Y . This smallest
affine subspace containing Y is called the affine hull of Y (notation: Aff(Y )).
All this resembles a lot the story about linear spans. Can we further extend this analogy and get
a description of the affine hull Aff(Y ) in terms of elements of Y similar to the one of the linear span
(“the linear span of X is the set of all linear combinations of vectors from X”)? Sure we can!
Let us choose somehow a point y0 ∈ Y , and consider the set

X = Y − y0 .

All affine subspaces containing Y should contain also y0 and therefore, by Proposition A.3.1, can be
represented as M = y0 + L, L being a linear subspace. It is absolutely evident that an affine subspace
M = y0 + L contains Y if and only if the subspace L contains X, and that the larger is L, the larger is
M:
L ⊂ L0 ⇒ M = y0 + L ⊂ M 0 = y0 + L0 .
Thus, to find the smallest among affine subspaces containing Y , it suffices to find the smallest among the
linear subspaces containing X and to translate the latter space by y0 :

Aff(Y ) = y0 + Lin(X) = y0 + Lin(Y − y0 ). (A.3.3)


2)
according to our convention on arithmetic of sets, I was supposed to write in (A.3.1) {a} + L instead of
a + L – we did not define arithmetic sum of a vector and a set. Usually people ignore this difference and omit
the brackets when writing down singleton sets in similar expressions: we shall write a + L instead of {a} + L, Rd
instead of R{d}, etc.

Now, we know what Lin(Y − y0 ) is – this is the set of all linear combinations of vectors from Y − y0 , so
that a generic element of Lin(Y − y0 ) is

x = Σ_{i=1}^k µi (yi − y0 )   [k may depend on x]

with yi ∈ Y and real coefficients µi . It follows that the generic element of Aff(Y ) is

y = y0 + Σ_{i=1}^k µi (yi − y0 ) = Σ_{i=0}^k λi yi ,

where
λ0 = 1 − Σ_i µi ,   λi = µi , i ≥ 1.

We see that a generic element of Aff(Y ) is a linear combination of vectors from Y . Note, however, that
the coefficients λi in this combination are not completely arbitrary: their sum is equal to 1. Linear
combinations of this type – with the unit sum of coefficients – have a special name – they are called
affine combinations.
We have seen that every vector from Aff(Y ) is an affine combination of vectors of Y . Is the converse
true, i.e., does Aff(Y ) contain all affine combinations of vectors from Y ? The answer is positive. Indeed, if

y = Σ_{i=1}^k λi yi

is an affine combination of vectors from Y , then, using the equality Σ_i λi = 1, we can write it also as

y = y0 + Σ_{i=1}^k λi (yi − y0 ),

y0 being the “marked” vector we used in our previous reasoning, and the vector of this form, as we
already know, belongs to Aff(Y ). Thus, we come to the following

Proposition A.3.2 [Structure of affine hull]

Aff(Y ) = {the set of all affine combinations of vectors from Y }.
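Numerically, formula (A.3.3) makes the affine hull easy to work with: its dimension is the rank of the differences Y − y0 , and membership of an affine combination can be checked by a rank test. A NumPy sketch with illustrative points of our own:

```python
import numpy as np

# Aff(Y) = y0 + Lin(Y - y0): its dimension is the rank of the differences.
Y = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [2.0, 3.0, 1.0]])    # four points on the plane x3 = 1 in R^3

y0 = Y[0]
X = Y[1:] - y0                     # Y - y0 (differences with the marked point)
dim_aff = np.linalg.matrix_rank(X)
assert dim_aff == 2                # the affine hull is a 2-dim plane

# an affine combination (coefficients summing to 1) stays in Aff(Y)
lam = np.array([0.3, -0.5, 0.7, 0.5])
assert np.isclose(lam.sum(), 1.0)
y = lam @ Y
# membership check: y - y0 must lie in Lin(Y - y0), i.e. not increase the rank
assert np.linalg.matrix_rank(np.vstack([X, y - y0])) == dim_aff
```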

When Y itself is an affine subspace, it, of course, coincides with its affine hull, and the above Proposition
leads to the following

Corollary A.3.2 An affine subspace M is closed with respect to taking affine combinations of its mem-
bers – every combination of this type is a vector from M . Vice versa, a nonempty set which is closed with
respect to taking affine combinations of its members is an affine subspace.

A.3.3 Affinely spanning sets, affinely independent sets, affine dimension


Affine subspaces are closely related to linear subspaces, and the basic notions associated with linear
subspaces have natural and useful affine analogies. Here we introduce these notions and discuss their
basic properties.

Affinely spanning sets. Let M = a + L be an affine subspace. We say that a subset Y of M is
affinely spanning for M (we say also that Y spans M affinely, or that M is affinely spanned by Y ), if
M = Aff(Y ), or, which is the same due to Proposition A.3.2, if every point of M is an affine combination
of points from Y . An immediate consequence of the reasoning of the previous Section is as follows:

Proposition A.3.3 Let M = a + L be an affine subspace and Y be a subset of M , and let y0 ∈ Y . The
set Y affinely spans M – M = Aff(Y ) – if and only if the set
X = Y − y0
spans the linear subspace L: L = Lin(X).

Affinely independent sets. A linearly independent set x1 , ..., xk is a set such that no nontrivial
linear combination of x1 , ..., xk equals to zero. An equivalent definition is given by Theorem A.1.2.(iv):
x1 , ..., xk are linearly independent, if the coefficients in a linear combination
x = Σ_{i=1}^k λi xi

are uniquely defined by the value x of the combination. This equivalent form reflects the essence of
the matter – what we indeed need, is the uniqueness of the coefficients in expansions. Accordingly, this
equivalent form is the prototype for the notion of an affinely independent set: we want to introduce this
notion in such a way that the coefficients λi in an affine combination
y = Σ_{i=0}^k λi yi

of “affinely independent” set of vectors y0 , ..., yk would be uniquely defined by y. Non-uniqueness would
mean that
Σ_{i=0}^k λi yi = Σ_{i=0}^k λ′i yi
for two different collections of coefficients λi and λ′i with unit sums of coefficients; if this is the case, then

Σ_{i=0}^k (λi − λ′i )yi = 0,

so that the yi ’s are linearly dependent and, moreover, there exists a nontrivial zero combination of them with
zero sum of coefficients (since Σ_i (λi − λ′i ) = Σ_i λi − Σ_i λ′i = 1 − 1 = 0). Our reasoning can be inverted
– if there exists a nontrivial linear combination of yi ’s with zero sum of coefficients which is zero, then
the coefficients in the representation of a vector as an affine combination of yi ’s are not uniquely defined.
Thus, in order to get uniqueness we should for sure forbid relations
Σ_{i=0}^k µi yi = 0

with nontrivial collections of coefficients µi with zero sum. Thus, we have motivated the following
Definition A.3.2 [Affinely independent set] A collection y0 , ..., yk of n-dimensional vectors is called
affinely independent, if no nontrivial linear combination of the vectors with zero sum of coefficients is
zero:
Σ_{i=0}^k λi yi = 0,  Σ_{i=0}^k λi = 0  ⇒  λ0 = λ1 = ... = λk = 0.

With this definition, we get the result completely similar to the one of Theorem A.1.2.(iv):
Corollary A.3.3 Let y0 , ..., yk be affinely independent. Then the coefficients λi in an affine combination
y = Σ_{i=0}^k λi yi   [Σ_i λi = 1]
of the vectors y0 , ..., yk are uniquely defined by the value y of the combination.

Verification of affine independence of a collection can be immediately reduced to verification of linear
independence of a closely related collection:
Proposition A.3.4 k + 1 vectors y0 , ..., yk are affinely independent if and only if the k vectors (y1 −
y0 ), (y2 − y0 ), ..., (yk − y0 ) are linearly independent.
From the latter Proposition it follows, e.g., that the collection 0, e1 , ..., en comprised of the origin and
the standard basic orths is affinely independent. Note that this collection is linearly dependent (as
every collection containing zero). You should definitely know the difference between the two notions of
independence we deal with: linear independence means that no nontrivial linear combination of the vectors
can be zero, while affine independence means that no nontrivial linear combination from certain restricted
class of them (with zero sum of coefficients) can be zero. Therefore, there are more affinely independent
sets than the linearly independent ones: a linearly independent set is for sure affinely independent, but
not vice versa.
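Proposition A.3.4 gives an immediate computational test for affine independence; the NumPy sketch below (an illustration of our own) also exhibits the collection 0, e1 , ..., en as affinely independent but linearly dependent:

```python
import numpy as np

def affinely_independent(ys):
    """y_0,...,y_k are affinely independent iff the differences
    y_1 - y_0, ..., y_k - y_0 are linearly independent (Proposition A.3.4)."""
    ys = np.asarray(ys, dtype=float)
    diffs = ys[1:] - ys[0]
    return np.linalg.matrix_rank(diffs) == len(ys) - 1

n = 3
orths = np.eye(n)
# {0, e_1, ..., e_n}: affinely independent, though linearly dependent (contains 0)
pts = np.vstack([np.zeros(n), orths])
assert affinely_independent(pts)
assert np.linalg.matrix_rank(pts) < len(pts)   # linearly dependent as a set

# three collinear points are affinely dependent
assert not affinely_independent([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
```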

Affine bases and affine dimension. Propositions A.3.2 and A.3.3 reduce the notions of affine
spanning/affine independent sets to the notions of spanning/linearly independent ones. Combined with
Theorem A.1.2, they result in the following analogies of the latter two statements:
Proposition A.3.5 [Affine dimension] Let M = a + L be an affine subspace in Rn . Then the following
two quantities are finite integers which are equal to each other:
(i) the minimal # of elements in the subsets of M which affinely span M ;
(ii) the maximal # of elements in affinely independent subsets of M .
The common value of these two integers is larger by 1 than the dimension dim L of L.
By definition, the affine dimension of an affine subspace M = a + L is the dimension dim L of L. Thus, if
M is of affine dimension k, then the minimal cardinality of sets affinely spanning M , same as the maximal
cardinality of affinely independent subsets of M , is k + 1.
Theorem A.3.1 [Affine bases] Let M = a + L be an affine subspace in Rn .
A. Let Y ⊂ M . The following three properties of Y are equivalent:
(i) Y is an affinely independent set which affinely spans M ;
(ii) Y is affinely independent and contains 1 + dim L elements;
(iii) Y affinely spans M and contains 1 + dim L elements.
A subset Y of M possessing the indicated equivalent to each other properties is called an affine basis of
M . Affine bases in M are exactly the collections y0 , ..., ydim L such that y0 ∈ M and (y1 −y0 ), ..., (ydim L −
y0 ) is a basis in L.
B. Every affinely independent collection of vectors of M either itself is an affine basis of M , or can
be extended to such a basis by adding new vectors. In particular, there exists an affine basis of M .
C. Given a set Y which affinely spans M , you can always extract from this set an affine basis of M .
We already know that the standard basic orths e1 , ..., en form a basis of the entire space Rn . And what
about affine bases in Rn ? According to Theorem A.3.1.A, you can choose as such a basis a collection
e0 , e0 + e1 , ..., e0 + en , e0 being an arbitrary vector.

Barycentric coordinates. Let M be an affine subspace, and let y0 , ..., yk be an affine basis of M .
Since the basis, by definition, affinely spans M , every vector y from M is an affine combination of the
vectors of the basis:
y = Σ_{i=0}^k λi yi   [Σ_{i=0}^k λi = 1],
and since the vectors of the affine basis are affinely independent, the coefficients of this combination are
uniquely defined by y (Corollary A.3.3). These coefficients are called barycentric coordinates of y with
respect to the affine basis in question. In contrast to the usual coordinates with respect to a (linear)
basis, the barycentric coordinates cannot be quite arbitrary: their sum must be equal to 1.
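Barycentric coordinates can be computed by solving the linear system Σ_i λi yi = y, Σ_i λi = 1 for the λi . A NumPy sketch (the affine basis below is our own illustrative choice):

```python
import numpy as np

# Affine basis of the hyperplane M = {x in R^3 : x1 + x2 + x3 = 1}:
# the three standard orths (an illustrative choice).
basis = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])    # rows y_0, y_1, y_2

y = np.array([0.2, 0.3, 0.5])          # a point of M

# Solve  sum_i lam_i y_i = y,  sum_i lam_i = 1  as one linear system.
A = np.vstack([basis.T, np.ones(3)])   # 4 x 3 system (consistent since y is in M)
b = np.concatenate([y, [1.0]])
lam, *_ = np.linalg.lstsq(A, b, rcond=None)

assert np.allclose(lam @ basis, y) and np.isclose(lam.sum(), 1.0)
assert np.allclose(lam, [0.2, 0.3, 0.5])   # here the coordinates are just the entries of y
```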

A.3.4 Dual description of linear subspaces and affine subspaces


To the moment we have introduced the notions of linear subspace and affine subspace and have pre-
sented a scheme of generating these entities: to get, e.g., a linear subspace, you start from an arbitrary
nonempty set X ⊂ Rn and add to it all linear combinations of the vectors from X. When replacing
linear combinations with the affine ones, you get a way to generate affine subspaces.
The just indicated way of generating linear subspaces/affine subspaces resembles the approach of a
worker building a house: he starts with the base and then adds to it new elements until the house is
ready. There exists, anyhow, an approach of an artist creating a sculpture: he takes something large and
then deletes extra parts of it. Is there something like “artist’s way” to represent linear subspaces and
affine subspaces? The answer is positive and very instructive.

A.3.4.1 Affine subspaces and systems of linear equations


Let L be a linear subspace. According to Theorem A.2.2.(i), it is an orthogonal complement – namely,
the orthogonal complement to the linear subspace L⊥ . Now let a1 , ..., am be a finite spanning set in L⊥ .
A vector x which is orthogonal to a1 , ..., am is orthogonal to the entire L⊥ (since every vector from L⊥
is a linear combination of a1 , ..., am and the inner product is bilinear); and of course vice versa, a vector
orthogonal to the entire L⊥ is orthogonal to a1 , ..., am . We see that

L = (L⊥ )⊥ = {x | aTi x = 0, i = 1, ..., m}. (A.3.4)

Thus, we get a very important, although simple,


Proposition A.3.6 [“Outer” description of a linear subspace] Every linear subspace L in Rn is a set of
solutions to a homogeneous linear system of equations

aTi x = 0, i = 1, ..., m, (A.3.5)

given by properly chosen m and vectors a1 , ..., am .


Proposition A.3.6 is an “if and only if” statement: as we remember from Example A.1.3.A.4, solution set
to a homogeneous system of linear equations with n variables always is a linear subspace in Rn .
From Proposition A.3.6 and the facts we know about the dimension we can easily derive several
important consequences:
• Systems (A.3.5) which define a given linear subspace L are exactly the systems given by the vectors
a1 , ..., am which span L⊥ 3)
• The smallest possible number m of equations in (A.3.5) is the dimension of L⊥ , i.e., by Remark
A.2.2, is codim L ≡ n − dim L 4)
Now, an affine subspace M is, by definition, a translation of a linear subspace: M = a + L. As we know,
vectors x from L are exactly the solutions of certain homogeneous system of linear equations

aTi x = 0, i = 1, ..., m.

It is absolutely clear that adding to these vectors a fixed vector a, we get exactly the set of solutions to
the inhomogeneous solvable linear system

aTi x = bi ≡ aTi a, i = 1, ..., m.

Vice versa, the set of solutions to a solvable system of linear equations

aTi x = bi , i = 1, ..., m,
3) the reasoning which led us to Proposition A.3.6 says that [a1 , ..., am span L⊥ ] ⇒ [(A.3.5) defines L]; now we
claim that the converse is also true
4) to make this statement true also in the extreme case when L = Rn (i.e., when codim L = 0), we from now
on make the convention that an empty set of equations or inequalities defines, as the solution set, the entire space

with n variables is the sum of a particular solution to the system and the solution set to the corresponding
homogeneous system (the latter set, as we already know, is a linear subspace in Rn ), i.e., is an affine
subspace. Thus, we get the following
Proposition A.3.7 [“Outer” description of an affine subspace]
Every affine subspace M = a + L in Rn is a set of solutions to a solvable linear system of equations

aTi x = bi , i = 1, ..., m, (A.3.6)

given by properly chosen m and vectors a1 , ..., am .


Vice versa, the set of all solutions to a solvable system of linear equations with n variables is an affine
subspace in Rn .
The linear subspace L associated with M is exactly the set of solutions of the homogeneous (with the
right hand side set to 0) version of system (A.3.6).
We see, in particular, that an affine subspace always is closed.
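Proposition A.3.7 is constructive: a particular solution plus the null space of the matrix of the system describes the whole affine subspace. A NumPy sketch with an illustrative system of our own:

```python
import numpy as np

# Solvable system a_i^T x = b_i: its solution set is x_part + null(A).
A = np.array([[1.0, 1.0, 1.0],
              [1.0, -1.0, 0.0]])       # 2 equations in R^3
b = np.array([3.0, 1.0])

x_part, *_ = np.linalg.lstsq(A, b, rcond=None)   # a particular solution
assert np.allclose(A @ x_part, b)                # the system is solvable

# L = solution set of the homogeneous version, extracted from the SVD
_, s, vT = np.linalg.svd(A)
L = vT[(s > 1e-10).sum():].T                     # here a single direction in R^3
assert np.allclose(A @ L, 0)

# every x_part + t*d with d in L solves the system; dim M = n - m = 1
for t in (-2.0, 0.5, 3.0):
    assert np.allclose(A @ (x_part + t * L[:, 0]), b)
```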

Comment. The “outer” description of a linear subspace/affine subspace – the “artist’s” one – is in
many cases much more useful than the “inner” description via linear/affine combinations (the “worker’s”
one). E.g., with the outer description it is very easy to check whether a given vector belongs or does not
belong to a given linear subspace/affine subspace, which is not that easy with the inner one5) . In fact
both descriptions are “complementary” to each other and perfectly well work in parallel: what is difficult
to see with one of them, is clear with another. The idea of using “inner” and “outer” descriptions of
the entities we meet with – linear subspaces, affine subspaces, convex sets, optimization problems – the
general idea of duality – is, I would say, the main driving force of Convex Analysis and Optimization,
and in the sequel we would all the time meet with different implementations of this fundamental idea.

A.3.5 Structure of the simplest affine subspaces


This small subsection deals mainly with terminology. According to their dimension, affine subspaces in
Rn are named as follows:
• Subspaces of dimension 0 are translations of the only 0-dimensional linear subspace {0}, i.e., are
singleton sets – vectors from Rn . These subspaces are called points; a point is a solution to a
square system of linear equations with nonsingular matrix.
• Subspaces of dimension 1 (lines). These subspaces are translations of one-dimensional linear sub-
spaces of Rn . A one-dimensional linear subspace has a single-element basis given by a nonzero
vector d and is comprised of all multiples of this vector. Consequently, a line is a set of the form

{y = a + td | t ∈ R}

given by a pair of vectors a (the origin of the line) and d (the direction of the line), d ≠ 0. The
origin of the line and its direction are not uniquely defined by the line; you can choose as origin
any point on the line and multiply a particular direction by nonzero reals.
In the barycentric coordinates a line is described as follows:

l = {λ0 y0 + λ1 y1 | λ0 + λ1 = 1} = {λy0 + (1 − λ)y1 | λ ∈ R},

where y0 , y1 is an affine basis of l; you can choose as such a basis any pair of distinct points on the
line.
The “outer” description of a line is as follows: it is the set of solutions to a linear system with n
variables and n − 1 linearly independent equations.
5)
in principle it is not difficult to certify that a given point belongs to, say, a linear subspace given as the linear
span of some set – it suffices to point out a representation of the point as a linear combination of vectors from
the set. But how could you certify that the point does not belong to the subspace?

• Subspaces of dimension > 1 and < n − 1 have no special names; sometimes they are called affine
planes of such and such dimension.
• Affine subspaces of dimension n − 1, due to the important role they play in Convex Analysis, have
a special name – they are called hyperplanes. The outer description of a hyperplane is that a
hyperplane is the solution set of a single linear equation

aT x = b

with nontrivial left hand side (a 6= 0). In other words, a hyperplane is the level set a(x) = const of
a nonconstant linear form a(x) = aT x.
• The “largest possible” affine subspace – the one of dimension n – is unique and is the entire Rn .
This subspace is given by an empty system of linear equations.

A.4 Space Rn : metric structure and topology


Euclidean structure on the space Rn gives rise to a number of extremely important metric notions –
distances, convergence, etc. For the sake of definiteness, we associate these notions with the standard
inner product xT y.

A.4.1 Euclidean norm and distances


By positive definiteness, the quantity xT x always is nonnegative, so that the quantity
|x| ≡ ‖x‖2 = √(xT x) = √(x1² + x2² + ... + xn²)

is well-defined; this quantity is called the (standard) Euclidean norm of vector x (or simply the norm
of x) and is treated as the distance from the origin to x. The distance between two arbitrary points
x, y ∈ Rn is, by definition, the norm |x − y| of the difference x − y. The notions we have defined satisfy
all basic requirements on the general notions of a norm and distance, specifically:
1. Positivity of norm: The norm of a vector always is nonnegative; it is zero if and only if the vector
is zero:
|x| ≥ 0 ∀x; |x| = 0 ⇔ x = 0.
2. Homogeneity of norm: When a vector is multiplied by a real, its norm is multiplied by the absolute
value of the real:
|λx| = |λ| · |x| ∀(x ∈ Rn , λ ∈ R).
3. Triangle inequality: Norm of the sum of two vectors is ≤ the sum of their norms:

|x + y| ≤ |x| + |y| ∀(x, y ∈ Rn ).

In contrast to the properties of positivity and homogeneity, which are absolutely evident,
the Triangle inequality is not trivial and definitely requires a proof. The proof goes
through a fact which is extremely important by its own right – the Cauchy Inequality,
which perhaps is the most frequently used inequality in Mathematics:
Theorem A.4.1 [Cauchy’s Inequality] The absolute value of the inner product of two
vectors does not exceed the product of their norms:

|xT y| ≤ |x||y| ∀(x, y ∈ Rn )

and is equal to the product of the norms if and only if one of the vectors is proportional
to the other one:

|xT y| = |x||y| ⇔ {∃α : x = αy or ∃β : y = βx}


590 APPENDIX A. PREREQUISITES FROM LINEAR ALGEBRA AND ANALYSIS

Proof is immediate: we may assume that both x and y are nonzero (otherwise the
Cauchy inequality clearly is an equality, and one of the vectors is a constant times (specifically, zero times) the other one, as announced in the Theorem). Assuming x, y ≠ 0, consider
the function
the function
f (λ) = (x − λy)T (x − λy) = xT x − 2λxT y + λ2 y T y.
By positive definiteness of the inner product, this function – which is a second order
polynomial – is nonnegative on the entire axis, whence the discriminant of the polyno-
mial
(xT y)2 − (xT x)(y T y)
is nonpositive:
(xT y)2 ≤ (xT x)(y T y).
Taking square roots of both sides, we arrive at the Cauchy Inequality. We also see that
the inequality is equality if and only if the discriminant of the second order polynomial
f (λ) is zero, i.e., if and only if the polynomial has a (multiple) real root; but due to
positive definiteness of the inner product, f (·) has a root λ if and only if x = λy, which
proves the second part of the Theorem. □

From Cauchy’s Inequality to the Triangle Inequality: Let x, y ∈ Rn . Then


|x + y|² = (x + y)T (x + y) [definition of norm]
= xT x + y T y + 2xT y [opening parentheses]
≤ |x|² + |y|² + 2|x||y| [Cauchy’s Inequality]
= (|x| + |y|)²
⇒ |x + y| ≤ |x| + |y| □
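These two inequalities are easy to sanity-check numerically. Below is a small Python sketch (the helper names dot and norm are ours, introduced for the check) that tests Cauchy's Inequality and the Triangle inequality on random vectors, together with the equality case y = βx:

```python
import math
import random

def dot(x, y):
    return sum(a*b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(dot(x, x))

random.seed(0)
for _ in range(1000):
    n = random.randint(1, 6)
    x = [random.uniform(-5, 5) for _ in range(n)]
    y = [random.uniform(-5, 5) for _ in range(n)]
    # Cauchy's Inequality: |x^T y| <= |x| |y|
    assert abs(dot(x, y)) <= norm(x)*norm(y) + 1e-10
    # Triangle inequality: |x + y| <= |x| + |y|
    s = [a + b for a, b in zip(x, y)]
    assert norm(s) <= norm(x) + norm(y) + 1e-10

# equality case of Cauchy's Inequality: y proportional to x
x = [1.0, -2.0, 3.0]
y = [-0.5*a for a in x]
assert abs(abs(dot(x, y)) - norm(x)*norm(y)) < 1e-10
```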

The properties of norm (i.e., of the distance to the origin) we have established induce properties of the
distances between pairs of arbitrary points in Rn , specifically:
1. Positivity of distances: The distance |x−y| between two points is positive, except for the case when
the points coincide (x = y), in which case the distance between x and y is zero;
2. Symmetry of distances: The distance from x to y is the same as the distance from y to x:
|x − y| = |y − x|;

3. Triangle inequality for distances: For every three points x, y, z, the distance from x to z does not
exceed the sum of distances between x and y and between y and z:
|z − x| ≤ |y − x| + |z − y| ∀(x, y, z ∈ Rn )

A.4.2 Convergence
Equipped with distances, we can define the fundamental notion of convergence of a sequence of vectors.
Specifically, we say that a sequence x1 , x2 , ... of vectors from Rn converges to a vector x̄, or, equivalently,
that x̄ is the limit of the sequence {xi } (notation: x̄ = lim_{i→∞} xi ), if the distances from x̄ to xi go to 0 as
i → ∞:

x̄ = lim_{i→∞} xi ⇔ |x̄ − xi | → 0, i → ∞,

or, which is the same, for every ε > 0 there exists i = i(ε) such that the distance between every point xi ,
i ≥ i(ε), and x̄ does not exceed ε:

|x̄ − xi | → 0, i → ∞ ⇔ {∀ε > 0 ∃i(ε) : i ≥ i(ε) ⇒ |x̄ − xi | ≤ ε}.
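For a concrete sequence the definition can be checked directly; in the Python sketch below the sequence, its limit, and the choice i(ε) = ⌈2/ε⌉ are our illustrative assumptions:

```python
import math

def dist(x, y):
    return math.sqrt(sum((a - b)**2 for a, b in zip(x, y)))

# the sequence x^i = (1/i, 2 + 1/i^2), i = 1, 2, ..., converges to xbar = (0, 2)
xbar = (0.0, 2.0)
seq = [(1.0/i, 2.0 + 1.0/i**2) for i in range(1, 10001)]

# the distances to the limit go to 0 ...
assert dist(seq[-1], xbar) < 1e-3
# ... and for a given eps > 0 one can exhibit i(eps): here i(eps) = ceil(2/eps)
# works, since dist(x^i, xbar) <= sqrt(2)/i <= 2/i
eps = 1e-2
i_eps = math.ceil(2/eps)
assert all(dist(x, xbar) <= eps for x in seq[i_eps:])
```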
 

Exercise A.3 Verify the following facts:


1. x̄ = lim_{i→∞} xi if and only if for every j = 1, ..., n the j-th coordinates of the vectors xi converge, as
i → ∞, to the j-th coordinate of the vector x̄;
2. If a sequence converges, its limit is uniquely defined;
3. Convergence is compatible with linear operations:
— if xi → x and y i → y as i → ∞, then xi + y i → x + y as i → ∞;
— if xi → x and λi → λ as i → ∞, then λi xi → λx as i → ∞.

A.4.3 Closed and open sets


After we have at our disposal distance and convergence, we can speak about closed and open sets:
• A set X ⊂ Rn is called closed, if it contains limits of all converging sequences of elements of X:
{xi ∈ X, x = lim_{i→∞} xi } ⇒ x ∈ X.

• A set X ⊂ Rn is called open, if whenever x belongs to X, all points close enough to x also belong
to X:
∀(x ∈ X) ∃(δ > 0) : |x′ − x| < δ ⇒ x′ ∈ X.
An open set containing a point x is called a neighbourhood of x.

Examples of closed sets: (1) Rn ; (2) ∅; (3) the sequence xi = (i, 0, ..., 0), i = 1, 2, 3, ...;
(4) {x ∈ Rn : Σ_{j=1}^n aij xj = 0, i = 1, ..., m} (in other words: a linear subspace in Rn always is
closed, see Proposition A.3.6); (5) {x ∈ Rn : Σ_{j=1}^n aij xj = bi , i = 1, ..., m} (in other words: an
affine subset of Rn always is closed, see Proposition A.3.7); (6) any finite subset of Rn .
Examples of non-closed sets: (1) Rn \{0}; (2) the sequence xi = (1/i, 0, ..., 0), i = 1, 2, 3, ...;
(3) {x ∈ Rn : xj > 0, j = 1, ..., n}; (4) {x ∈ Rn : Σ_{j=1}^n xj > 5}.
Examples of open sets: (1) Rn ; (2) ∅; (3) {x ∈ Rn : Σ_{j=1}^n aij xj > bi , i = 1, ..., m}; (4) the
complement of a finite set.
Examples of non-open sets: (1) a nonempty finite set; (2) the sequences xi = (1/i, 0, ..., 0),
i = 1, 2, 3, ..., and xi = (i, 0, ..., 0), i = 1, 2, 3, ...; (3) {x ∈ Rn : xj ≥ 0, j = 1, ..., n};
(4) {x ∈ Rn : Σ_{j=1}^n xj ≥ 5}.

Exercise A.4 Mark in the list below those sets which are closed and those which are open:
1. All vectors with integer coordinates
2. All vectors with rational coordinates
3. All vectors with positive coordinates
4. All vectors with nonnegative coordinates
5. {x : |x| < 1};
6. {x : |x| = 1};
7. {x : |x| ≤ 1};

8. {x : |x| ≥ 1}:
9. {x : |x| > 1};
10. {x : 1 < |x| ≤ 2}.
Verify the following facts
1. A set X ⊂ Rn is closed if and only if its complement X̄ = Rn \X is open;
2. Intersection of every family (finite or infinite) of closed sets is closed. Union of every family (finite
or infinite) of open sets is open.
3. Union of finitely many closed sets is closed. Intersection of finitely many open sets is open.

A.4.4 Local compactness of Rn


A fundamental fact about convergence in Rn , which in a certain sense is characteristic of these
spaces, is the following
Theorem A.4.2 From every bounded sequence {xi }∞_{i=1} of points from Rn one can extract a converging subsequence {x^{i_j}}∞_{j=1}. Equivalently: a closed and bounded subset X of Rn is compact, i.e., a set
possessing the following two equivalent to each other properties:
(i) from every sequence of elements of X one can extract a subsequence which converges to a certain
point of X;
(ii) from every open covering of X (i.e., a family {Uα }α∈A of open sets such that X ⊂ ∪_{α∈A} Uα ) one
can extract a finite sub-covering, i.e., a finite subset of indices α1 , ..., αN such that X ⊂ ∪_{i=1}^{N} U_{αi}.
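Theorem A.4.2 is an existence statement, but for a concrete bounded sequence a converging subsequence can be exhibited explicitly. In this Python sketch (the greedy selection scheme and the sample sequence are our illustration, not the theorem's proof) we pick terms ever closer to a known cluster point:

```python
# a bounded, non-convergent sequence with cluster points -1 and 1
x = [(-1)**i * (1 + 1/(i + 1)) for i in range(100000)]

def extract_subsequence(x, cluster, terms=10):
    """Greedily pick indices i_1 < i_2 < ... with |x[i_k] - cluster| < 1/k;
    this succeeds whenever cluster is a limit point of the sequence."""
    indices, start = [], 0
    for k in range(1, terms + 1):
        i = next(j for j in range(start, len(x)) if abs(x[j] - cluster) < 1.0/k)
        indices.append(i)
        start = i + 1
    return indices

idx = extract_subsequence(x, cluster=1.0)
assert all(i < j for i, j in zip(idx, idx[1:]))   # indices strictly increase
assert abs(x[idx[-1]] - 1.0) < 0.1                # terms approach the cluster point
```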

A.5 Continuous functions on Rn


A.5.1 Continuity of a function
Let X ⊂ Rn and f (x) : X → Rm be a function (another name – mapping) defined on X and taking
values in Rm .
1. f is called continuous at a point x̄ ∈ X, if for every sequence xi of points of X converging to x̄ the
sequence f (xi ) converges to f (x̄). Equivalent definition:
f : X → Rm is continuous at x̄ ∈ X, if for every ε > 0 there exists δ > 0 such that

x ∈ X, |x − x̄| < δ ⇒ |f (x) − f (x̄)| < ε.

2. f is called continuous on X, if f is continuous at every point from X. Equivalent definition: f
preserves convergence: whenever a sequence of points xi ∈ X converges to a point x ∈ X, the
sequence f (xi ) converges to f (x).
Examples of continuous mappings:
1. An affine mapping

f (x) = Ax + b : Rn → Rm , with coordinates fi (x) = Σ_{j=1}^n Aij xj + bi , i = 1, ..., m,

is continuous on the entire Rn (and thus – on every subset of Rn ) (check it!).



2. The norm |x| is a continuous real-valued function on Rn (and thus – on every subset of Rn ) (check it!).

Exercise A.5 • Consider the function f : R2 → R given by

f (x1 , x2 ) = (x1² − x2²)/(x1² + x2²) for (x1 , x2 ) ≠ 0, and f (0, 0) = 0.

Check whether this function is continuous on the following sets:

1. R2 ;
2. R2 \{0};
3. {x ∈ R2 : x1 = 0};
4. {x ∈ R2 : x2 = 0};
5. {x ∈ R2 : x1 + x2 = 0};
6. {x ∈ R2 : x1 − x2 = 0};
7. {x ∈ R2 : |x1 − x2 | ≤ x41 + x42 };

• Let f : Rn → Rm be a continuous mapping. Mark those of the following statements which always
are true:

1. If U is an open set in Rm , then so is the set f −1 (U ) = {x : f (x) ∈ U };


2. If U is an open set in Rn , then so is the set f (U ) = {f (x) : x ∈ U };
3. If F is a closed set in Rm , then so is the set f −1 (F ) = {x : f (x) ∈ F };
4. If F is a closed set in Rn , then so is the set f (F ) = {f (x) : x ∈ F }.

A.5.2 Elementary continuity-preserving operations


All “elementary” operations with mappings preserve continuity. Specifically,

Theorem A.5.1 Let X be a subset in Rn .


(i) [stability of continuity w.r.t. linear operations] If f1 (x), f2 (x) are continuous functions on X
taking values in Rm and λ1 (x), λ2 (x) are continuous real-valued functions on X, then the function

f (x) = λ1 (x)f1 (x) + λ2 (x)f2 (x) : X → Rm

is continuous on X;
(ii) [stability of continuity w.r.t. superposition] Let

• X ⊂ Rn , Y ⊂ Rm ;

• f : X → Rm be a continuous mapping such that f (x) ∈ Y for every x ∈ X;

• g : Y → Rk be a continuous mapping.

Then the composite mapping


h(x) = g(f (x)) : X → Rk

is continuous on X.

A.5.3 Basic properties of continuous functions on Rn


The basic properties of continuous functions on Rn can be summarized as follows:
Theorem A.5.2 Let X be a nonempty closed and bounded subset of Rn .
(i) If a mapping f : X → Rm is continuous on X, it is bounded on X: there exists C < ∞ such that
|f (x)| ≤ C for all x ∈ X.
Proof. Assume, contrary to what should be proved, that f is unbounded, so that
for every i there exists a point xi ∈ X such that |f (xi )| > i. By Theorem A.4.2, we can
extract from the sequence {xi } a subsequence {x^{i_j}}∞_{j=1} which converges to a point x̄ ∈ X.
The real-valued function g(x) = |f (x)| is continuous (as the superposition of two continuous
mappings, see Theorem A.5.1.(ii)), and therefore its values at the points x^{i_j} should converge,
as j → ∞, to its value at x̄; on the other hand, g(x^{i_j}) ≥ i_j → ∞ as j → ∞, and we get the
desired contradiction.
(ii) If a mapping f : X → Rm is continuous on X, it is uniformly continuous: for every ε > 0 there
exists δ > 0 such that
x, y ∈ X, |x − y| < δ ⇒ |f (x) − f (y)| < ε.
Proof. Assume, contrary to what should be proved, that there exists ε > 0 such
that for every δ > 0 one can find a pair of points x, y in X such that |x − y| < δ and
|f (x) − f (y)| ≥ ε. In particular, for every i = 1, 2, ... we can find two points xi , y i in X
such that |xi − y i | ≤ 1/i and |f (xi ) − f (y i )| ≥ ε. By Theorem A.4.2, we can extract from
the sequence {xi } a subsequence {x^{i_j}}∞_{j=1} which converges to a certain point x̄ ∈ X. Since
|y^{i_j} − x^{i_j}| ≤ 1/i_j → 0 as j → ∞, the sequence {y^{i_j}}∞_{j=1} converges to the same point x̄ as the
sequence {x^{i_j}}∞_{j=1} (why?). Since f is continuous, we have

lim_{j→∞} f (y^{i_j}) = f (x̄) = lim_{j→∞} f (x^{i_j}),

whence lim_{j→∞} (f (x^{i_j}) − f (y^{i_j})) = 0, which contradicts the fact that |f (x^{i_j}) − f (y^{i_j})| ≥ ε > 0
for all j.
(iii) Let f be a real-valued continuous function on X. Then f attains its minimum on X:

Argmin_X f ≡ {x ∈ X : f (x) = inf_{y∈X} f (y)} ≠ ∅,

and likewise f attains its maximum at certain points of X:

Argmax_X f ≡ {x ∈ X : f (x) = sup_{y∈X} f (y)} ≠ ∅.

Proof: Let us prove that f attains its maximum on X (the proof for the minimum is completely
similar). Since f is bounded on X by (i), the quantity

f* = sup_{x∈X} f (x)

is finite; of course, we can find a sequence {xi } of points from X such that f* = lim_{i→∞} f (xi ).
By Theorem A.4.2, we can extract from the sequence {xi } a subsequence {x^{i_j}}∞_{j=1} which
converges to a certain point x̄ ∈ X. Since f is continuous on X, we have

f (x̄) = lim_{j→∞} f (x^{i_j}) = lim_{i→∞} f (xi ) = f*,

so that the maximum of f on X indeed is achieved (e.g., at the point x̄).
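Statement (iii) — the Weierstrass theorem — can be illustrated numerically: on a closed and bounded box a fine grid search locates both extrema of a continuous function. The function and the box in this Python sketch are our illustrative choices:

```python
import math

# f(x1, x2) = sin(x1) + cos(2 x2) on the closed bounded square [0, pi] x [0, pi];
# the maximum 2 is attained at (pi/2, 0) and (pi/2, pi),
# the minimum -1 at (0, pi/2) and (pi, pi/2)
def f(x1, x2):
    return math.sin(x1) + math.cos(2*x2)

N = 400  # grid resolution
vals = [f(math.pi*i/N, math.pi*j/N) for i in range(N + 1) for j in range(N + 1)]
assert abs(max(vals) - 2.0) < 1e-4   # the supremum is attained ...
assert abs(min(vals) + 1.0) < 1e-4   # ... and so is the infimum
```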

Exercise A.6 Prove that in general none of the three statements in Theorem A.5.2 remains valid when
X is closed but not bounded, or when X is bounded but not closed.

A.6 Differentiable functions on Rn


A.6.1 The derivative
The reader definitely is familiar with the notion of the derivative of a real-valued function f (x) of a real variable
x:

f ′(x) = lim_{∆x→0} (f (x + ∆x) − f (x))/∆x.

This definition does not work when we pass from functions of a single real variable to functions of several
real variables, or, which is the same, to functions with vector arguments. Indeed, in this case the shift in
the argument ∆x should be a vector, and we do not know what it means to divide by a vector...
A proper way to extend the notion of the derivative to real- and vector-valued functions of vector
argument is to realize what in fact is the meaning of the derivative in the univariate case. What f 0 (x)
says to us is how to approximate f in a neighbourhood of x by a linear function. Specifically, if f 0 (x)
exists, then the linear function f 0 (x)∆x of ∆x approximates the change f (x + ∆x) − f (x) in f up to a
remainder which is of highest order as compared with ∆x as ∆x → 0:

|f (x + ∆x) − f (x) − f 0 (x)∆x| ≤ ō(|∆x|) as ∆x → 0.

In the above formula, we meet with the notation ō(|∆x|), and here is the explanation of this notation:
ō(|∆x|) is a common name of all functions φ(∆x) of ∆x which are well-defined in a neigh-
bourhood of the point ∆x = 0 on the axis, vanish at the point ∆x = 0 and are such that

φ(∆x)/|∆x| → 0 as ∆x → 0.

For example,
1. (∆x)² = ō(|∆x|), ∆x → 0,
2. |∆x|^{1.01} = ō(|∆x|), ∆x → 0,
3. sin²(∆x) = ō(|∆x|), ∆x → 0,
4. ∆x ≠ ō(|∆x|), ∆x → 0.
Later on we shall meet with the notation “ō(|∆x|k ) as ∆x → 0”, where k is a positive integer.
The definition is completely similar to the one for the case of k = 1:
ō(|∆x|k ) is a common name of all functions φ(∆x) of ∆x which are well-defined in a neigh-
bourhood of the point ∆x = 0 on the axis, vanish at the point ∆x = 0 and are such that

φ(∆x)/|∆x|^k → 0 as ∆x → 0.

Note that if f (·) is a function defined in a neighbourhood of a point x on the axis, then there perhaps
are many linear functions a∆x of ∆x which well approximate f (x + ∆x) − f (x), in the sense that the
remainder in the approximation
f (x + ∆x) − f (x) − a∆x
tends to 0 as ∆x → 0; among these approximations, however, there exists at most one which approximates
f (x + ∆x) − f (x) “very well” – so that the remainder is ō(|∆x|), and not merely tends to 0 as ∆x → 0.
Indeed, if
f (x + ∆x) − f (x) − a∆x = ō(|∆x|),
then, dividing both sides by ∆x, we get

(f (x + ∆x) − f (x))/∆x − a = ō(|∆x|)/∆x;

by definition of ō(·), the right hand side in this equality tends to 0 as ∆x → 0, whence

a = lim_{∆x→0} (f (x + ∆x) − f (x))/∆x = f ′(x).
Thus, if a linear function a∆x of ∆x approximates the change f (x + ∆x) − f (x) in f up to a remainder
which is ō(|∆x|) as ∆x → 0, then a is the derivative of f at x. You can easily verify that the converse statement also is true: if the derivative of f at x exists, then the linear function f ′(x)∆x of ∆x approximates
the change f (x + ∆x) − f (x) in f up to a remainder which is ō(|∆x|) as ∆x → 0.
The advantage of the “ō(|∆x|)”-definition of the derivative is that it can be naturally extended to
vector-valued functions of vector arguments (you should just replace “axis” with Rn in the definition of
ō) and clarifies the essence of the notion of derivative: when it exists, it is exactly the linear function
of ∆x which approximates the change f (x + ∆x) − f (x) in f up to a remainder which is ō(|∆x|). The
precise definition is as follows:
Definition A.6.1 [Frechet differentiability] Let f be a function which is well-defined in a neighbourhood
of a point x ∈ Rn and takes values in Rm . We say that f is differentiable at x, if there exists a linear
function Df (x)[∆x] of ∆x ∈ Rn taking values in Rm which approximates the change f (x + ∆x) − f (x)
in f up to a remainder which is ō(|∆x|):
|f (x + ∆x) − f (x) − Df (x)[∆x]| ≤ ō(|∆x|). (A.6.1)
Equivalently: a function f which is well-defined in a neighbourhood of a point x ∈ Rn and takes values
in Rm is called differentiable at x, if there exists a linear function Df (x)[∆x] of ∆x ∈ Rn taking values
in Rm such that for every ε > 0 there exists δ > 0 satisfying the relation
|∆x| ≤ δ ⇒ |f (x + ∆x) − f (x) − Df (x)[∆x]| ≤ ε|∆x|.

A.6.2 Derivative and directional derivatives


We have defined what it means for a function f : Rn → Rm to be differentiable at a point x, but
have not yet said what the derivative is. The reader could guess that the derivative is exactly the “linear
function Df (x)[∆x] of ∆x ∈ Rn taking values in Rm which approximates the change f (x + ∆x) − f (x)
in f up to a remainder which is ō(|∆x|)” participating in the definition of differentiability. The guess
is correct, but we cannot merely call the entity participating in the definition the derivative – how do we
know that this entity is unique? Perhaps there are many different linear functions of ∆x approximating
the change in f up to a remainder which is ō(|∆x|). In fact there is no more than a single linear function
with this property due to the following observation:
Let f be differentiable at x, and Df (x)[∆x] be a linear function participating in the definition
of differentiability. Then
∀∆x ∈ Rn : Df (x)[∆x] = lim_{t→+0} (f (x + t∆x) − f (x))/t. (A.6.2)
In particular, the derivative Df (x)[·] is uniquely defined by f and x.
Proof. We have

|f (x + t∆x) − f (x) − Df (x)[t∆x]| ≤ ō(|t∆x|)
⇕
|(f (x + t∆x) − f (x))/t − Df (x)[t∆x]/t| ≤ ō(|t∆x|)/t
⇕ [since Df (x)[·] is linear]
|(f (x + t∆x) − f (x))/t − Df (x)[∆x]| ≤ ō(|t∆x|)/t
⇓ [passing to limit as t → +0; note that ō(|t∆x|)/t → 0 as t → +0]
Df (x)[∆x] = lim_{t→+0} (f (x + t∆x) − f (x))/t

Pay attention to the following important remarks:


1. The right hand side limit in (A.6.2) is an important entity called the directional derivative of f
taken at x along (a direction) ∆x; note that this quantity is defined in the “purely univariate”
fashion – by dividing the change in f by the magnitude of a shift in a direction ∆x and passing
to limit as the magnitude of the shift approaches 0. Relation (A.6.2) says that the derivative, if
it exists, is, at every ∆x, nothing but the directional derivative of f taken at x along ∆x. Note,
however, that differentiability is much more than the existence of directional derivatives along all
directions ∆x; differentiability requires also the directional derivatives to be “well-organized” – to
depend linearly on the direction ∆x. It is easily seen that just existence of directional derivatives
does not imply their “good organization”: for example, the Euclidean norm

f (x) = |x|

at x = 0 possesses directional derivatives along all directions:

lim_{t→+0} (f (0 + t∆x) − f (0))/t = |∆x|;

these derivatives, however, depend non-linearly on ∆x, so that the Euclidean norm is not differentiable at the origin (although it is differentiable everywhere outside the origin, but this is another
story).
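The non-linearity is easy to observe numerically. In the Python sketch below (the helper ddir is ours), one-sided difference quotients at a small t > 0 approximate the directional derivatives of the norm at the origin:

```python
import math

def norm(x):
    return math.sqrt(sum(v*v for v in x))

def ddir(f, x, dx, t=1e-8):
    """approximate the one-sided directional derivative
    lim_{t->+0} (f(x + t dx) - f(x))/t at a small fixed t"""
    return (f([a + t*b for a, b in zip(x, dx)]) - f(x)) / t

origin = [0.0, 0.0]
u, v = [1.0, 0.0], [0.0, 1.0]
w = [a + b for a, b in zip(u, v)]
# directional derivatives of |x| at 0 exist and equal |dx| ...
assert abs(ddir(norm, origin, u) - 1.0) < 1e-6
assert abs(ddir(norm, origin, w) - math.sqrt(2)) < 1e-6
# ... but they are not additive in the direction, so no linear Df(0)[.] can exist
assert abs(ddir(norm, origin, w)
           - (ddir(norm, origin, u) + ddir(norm, origin, v))) > 0.5
```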
2. It should be stressed that the derivative, if it exists, is what it is: a linear function of ∆x ∈ Rn taking
values in Rm . As we shall see in a while, we can represent this function by something “tractable”,
like a vector or a matrix, and can understand how to compute such a representation; however,
an intelligent reader should bear in mind that a representation is not exactly the same as the
represented entity. Sometimes the difference between derivatives and the entities which represent
them is reflected in the terminology: what we call the derivative, is also called the differential,
while the word “derivative” is reserved for the vector/matrix representing the differential.

A.6.3 Representations of the derivative


By definition, the derivative of a mapping f : Rn → Rm at a point x
is a linear function Df (x)[∆x] taking values in Rm . How could we represent such a function?

Case of m = 1 – the gradient. Let us start with real-valued functions (i.e., with the case of
m = 1); in this case the derivative is a linear real-valued function on Rn . As we remember, the standard
Euclidean structure on Rn allows us to represent every linear function on Rn as the inner product of the
argument with a certain fixed vector. In particular, the derivative Df (x)[∆x] of a scalar function can be
represented as
Df (x)[∆x] = [vector]T ∆x;
what is denoted ”vector” in this relation, is called the gradient of f at x and is denoted by ∇f (x):

Df (x)[∆x] = (∇f (x))T ∆x. (A.6.3)

How to compute the gradient? The answer is given by (A.6.2). Indeed, let us look at what (A.6.3) and
(A.6.2) say when ∆x is the i-th standard basic orth ei . According to (A.6.3), Df (x)[ei ] is the i-th coordinate
of the vector ∇f (x); according to (A.6.2),

Df (x)[ei ] = lim_{t→+0} (f (x + tei ) − f (x))/t,
Df (x)[ei ] = −Df (x)[−ei ] = − lim_{t→+0} (f (x − tei ) − f (x))/t = lim_{t→−0} (f (x + tei ) − f (x))/t,

whence Df (x)[ei ] = ∂f (x)/∂xi .

Thus,

If a real-valued function f is differentiable at x, then the first order partial derivatives of f
at x exist, and the gradient of f at x is just the vector whose coordinates are the
first order partial derivatives of f taken at x:

∇f (x) = (∂f (x)/∂x1 , ..., ∂f (x)/∂xn )T .

The derivative of f , taken at x, is the linear function of ∆x given by

Df (x)[∆x] = (∇f (x))T ∆x = Σ_{i=1}^n (∂f (x)/∂xi ) (∆x)i .
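This recipe translates directly into a finite-difference check. In the Python sketch below (the test function, point, and step sizes are our illustrative choices), the gradient is assembled from central-difference partial derivatives, and (∇f (x))T ∆x is compared with the actual change in f :

```python
import math

def grad(f, x, h=1e-6):
    """gradient of f at x, assembled from central-difference partial derivatives"""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2*h))
    return g

def f(x):
    return math.exp(x[0]) * math.sin(x[1]) + x[0]*x[2]

x = [0.3, 1.1, -0.7]
dx = [2e-4, -1e-4, 0.5e-4]
g = grad(f, x)
# Df(x)[dx] = (grad f(x))^T dx approximates f(x + dx) - f(x) up to o(|dx|)
lin = sum(gi*di for gi, di in zip(g, dx))
act = f([a + b for a, b in zip(x, dx)]) - f(x)
assert abs(act - lin) < 5e-7
```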

General case – the Jacobian. Now let f : Rn → Rm with m ≥ 1. In this case, Df (x)[∆x],
regarded as a function of ∆x, is a linear mapping from Rn to Rm ; as we remember, the standard way to
represent a linear mapping from Rn to Rm is to represent it as multiplication by an m × n matrix:
Df (x)[∆x] = [m × n matrix] · ∆x. (A.6.4)
What is denoted by “matrix” in (A.6.4), is called the Jacobian of f at x and is denoted by f 0 (x). How
to compute the entries of the Jacobian? Here again the answer is readily given by (A.6.2). Indeed, on
one hand, we have
Df (x)[∆x] = f 0 (x)∆x, (A.6.5)
whence
[Df (x)[ej ]]i = (f 0 (x))ij , i = 1, ..., m, j = 1, ..., n.
On the other hand, denoting f (x) = (f1 (x), ..., fm (x))T , the same computation as in the case of the
gradient demonstrates that

[Df (x)[ej ]]i = ∂fi (x)/∂xj ,

and we arrive at the following conclusion:
If a vector-valued function f (x) = (f1 (x), ..., fm (x)) is differentiable at x, then the first order
partial derivatives of all fi at x exist, and the Jacobian of f at x is just the m × n matrix
with the entries [∂fi (x)/∂xj ]i,j (so that the rows in the Jacobian are [∇f1 (x)]T , ..., [∇fm (x)]T ).
The derivative of f , taken at x, is the linear vector-valued function of ∆x given by

Df (x)[∆x] = f ′(x)∆x = ([∇f1 (x)]T ∆x ; ... ; [∇fm (x)]T ∆x).
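The same conclusion can be checked numerically: in the Python sketch below (the map and the point are our illustrative choices), the Jacobian is filled entry by entry with central-difference partial derivatives ∂fi /∂xj and compared with the analytically computed rows [∇fi (x)]T :

```python
def jacobian(f, x, h=1e-6):
    """m x n Jacobian of f at x, entries df_i/dx_j by central differences"""
    m = len(f(x))
    J = []
    for i in range(m):
        row = []
        for j in range(len(x)):
            xp, xm = list(x), list(x)
            xp[j] += h
            xm[j] -= h
            row.append((f(xp)[i] - f(xm)[i]) / (2*h))
        J.append(row)
    return J

# f : R^2 -> R^3
def f(x):
    return [x[0]*x[1], x[0] + x[1], x[0]**2]

x = [2.0, 3.0]
J = jacobian(f, x)
# the rows of the Jacobian are the transposed gradients of the components
expected = [[3.0, 2.0],   # gradient of x1*x2 at (2, 3)
            [1.0, 1.0],   # gradient of x1 + x2
            [4.0, 0.0]]   # gradient of x1^2
assert all(abs(J[i][j] - expected[i][j]) < 1e-6
           for i in range(3) for j in range(2))
```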

Remark A.6.1 Note that for a real-valued function f we have defined both the gradient ∇f (x) and the
Jacobian f ′(x). These two entities are “nearly the same”, but not exactly the same: the Jacobian is a
row vector, and the gradient is a column vector, linked by the relation
Of course, both these representations of the derivative of f yield the same linear approximation of the
change in f :
Df (x)[∆x] = (∇f (x))T ∆x = f 0 (x)∆x.

A.6.4 Existence of the derivative


We have seen that the existence of the derivative of f at a point implies the existence of the first order
partial derivatives of the components f1 , ..., fm of f . The converse statement is not exactly true – the
existence of all first order partial derivatives ∂fi (x)/∂xj does not necessarily imply the existence of the derivative;
we need a bit more:

Theorem A.6.1 [Sufficient condition for differentiability] Assume that


1. The mapping f = (f1 , ..., fm ) : Rn → Rm is well-defined in a neighbourhood U of a point x0 ∈ Rn ,

2. The first order partial derivatives of the components fi of f exist everywhere in U ,


and

3. The first order partial derivatives of the components fi of f are continuous at the point x0 .
Then f is differentiable at the point x0 .

A.6.5 Calculus of derivatives


The calculus of derivatives is given by the following result:

Theorem A.6.2 (i) [Differentiability and linear operations] Let f1 (x), f2 (x) be mappings defined in a
neighbourhood of x0 ∈ Rn and taking values in Rm , and λ1 (x), λ2 (x) be real-valued functions defined
in a neighbourhood of x0 . Assume that f1 , f2 , λ1 , λ2 are differentiable at x0 . Then so is the function
f (x) = λ1 (x)f1 (x) + λ2 (x)f2 (x), with the derivative at x0 given by

Df (x0 )[∆x] = [Dλ1 (x0 )[∆x]]f1 (x0 ) + λ1 (x0 )Df1 (x0 )[∆x]
+[Dλ2 (x0 )[∆x]]f2 (x0 ) + λ2 (x0 )Df2 (x0 )[∆x]

f 0 (x0 ) = f1 (x0 )[∇λ1 (x0 )]T + λ1 (x0 )f10 (x0 )
+f2 (x0 )[∇λ2 (x0 )]T + λ2 (x0 )f20 (x0 ).

(ii) [chain rule] Let a mapping f : Rn → Rm be differentiable at x0 , and a mapping g : Rm → Rk
be differentiable at y0 = f (x0 ). Then the superposition h(x) = g(f (x)) is differentiable at x0 , with the
derivative at x0 given by
Dh(x0 )[∆x] = Dg(y0 )[Df (x0 )[∆x]]

h0 (x0 ) = g 0 (y0 )f 0 (x0 )
If the outer function g is real-valued, then the latter formula implies that

∇h(x0 ) = [f 0 (x0 )]T ∇g(y0 )

(recall that for a real-valued function φ, φ′ = (∇φ)T ).
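The chain-rule formula ∇h(x0 ) = [f ′(x0 )]T ∇g(y0 ) admits a direct numerical check. In this Python sketch the maps f and g, the point x0 , and the finite-difference helper are our illustrative choices:

```python
import math

def grad(phi, x, h=1e-6):
    """gradient of a real-valued phi at x via central differences"""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((phi(xp) - phi(xm)) / (2*h))
    return g

# f : R^2 -> R^2, g : R^2 -> R, h = g o f
def f(x):
    return [x[0]**2 + x[1], x[0]*x[1]]

def g(y):
    return math.sin(y[0]) + y[1]**2

def comp(x):
    return g(f(x))

x0 = [0.4, -1.2]
y0 = f(x0)
# Jacobian of f at x0, written analytically row by row
J = [[2*x0[0], 1.0],
     [x0[1], x0[0]]]
gg = grad(g, y0)
# chain rule: grad h(x0) = [f'(x0)]^T grad g(y0)
pred = [J[0][j]*gg[0] + J[1][j]*gg[1] for j in range(2)]
num = grad(comp, x0)
assert all(abs(p - q) < 1e-5 for p, q in zip(pred, num))
```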

A.6.6 Computing the derivative


Representations of the derivative via first order partial derivatives normally allow us to compute it by the
standard Calculus rules, in a completely mechanical fashion, without thinking at all of what we are computing.
The examples to follow (especially the third of them) demonstrate that it often makes sense to bear in
mind what the derivative is; this sometimes yields the result much faster than blindly implementing the Calculus
rules.

Example 1: The gradient of an affine function. An affine function

f (x) = a + Σ_{i=1}^n gi xi ≡ a + g T x : Rn → R

is differentiable at every point (Theorem A.6.1) and its gradient, of course, equals g:

(∇f (x))T ∆x = lim_{t→+0} t⁻¹ [f (x + t∆x) − f (x)]  [(A.6.2)]
= lim_{t→+0} t⁻¹ [tg T ∆x] = g T ∆x  [arithmetics]

and we arrive at

∇(a + g T x) = g.

Example 2: The gradient of a quadratic form. For the time being, let us define a
homogeneous quadratic form on Rn as a function

f (x) = Σ_{i,j} Aij xi xj = xT Ax,

where A is an n × n matrix. Note that the matrices A and AT define the same quadratic form, and
therefore the symmetric matrix B = ½(A + AT ) also produces the same quadratic form as A and AT .
It follows that we always may assume (and do assume from now on) that the matrix A producing the
quadratic form in question is symmetric.
A quadratic form is a simple polynomial and as such is differentiable at every point (Theorem A.6.1).
What is the gradient of f at a point x? Here is the computation:

(∇f (x))T ∆x = Df (x)[∆x]
= lim_{t→+0} t⁻¹ [(x + t∆x)T A(x + t∆x) − xT Ax]  [(A.6.2)]
= lim_{t→+0} t⁻¹ [xT Ax + t(∆x)T Ax + txT A∆x + t²(∆x)T A∆x − xT Ax]  [opening parentheses]
= lim_{t→+0} t⁻¹ [2t(Ax)T ∆x + t²(∆x)T A∆x]  [arithmetics + symmetry of A]
= 2(Ax)T ∆x

We conclude that

∇(xT Ax) = 2Ax

(recall that A = AT ).
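The identity ∇(xT Ax) = 2Ax is easy to verify by finite differences; the symmetric 3 × 3 matrix and the point in this Python sketch are our illustrative choices:

```python
# check grad(x^T A x) = 2 A x for a symmetric A (3 x 3 example)
A = [[2.0, 1.0, 0.0],
     [1.0, 3.0, -1.0],
     [0.0, -1.0, 1.0]]

def quad(x):
    return sum(A[i][j]*x[i]*x[j] for i in range(3) for j in range(3))

def grad(phi, x, h=1e-6):
    """gradient via central differences (exact for quadratics, up to rounding)"""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((phi(xp) - phi(xm)) / (2*h))
    return g

x = [1.0, -2.0, 0.5]
expected = [2*sum(A[i][j]*x[j] for j in range(3)) for i in range(3)]  # 2 A x
numeric = grad(quad, x)
assert all(abs(a - b) < 1e-6 for a, b in zip(expected, numeric))
```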

Example 3: The derivative of the log-det barrier. Let us compute the derivative of the
log-det barrier (playing an extremely important role in modern optimization)

F (X) = ln Det(X);

here X is an n × n matrix (or, if you prefer, an n²-dimensional vector). Note that F (X) is well-defined
and differentiable in a neighbourhood of every point X̄ with positive determinant (indeed, Det(X) is a
polynomial of the entries of X and thus is everywhere continuous and differentiable with continuous
partial derivatives, while the function ln(t) is continuous and differentiable on the positive ray; by Theorems A.5.1.(ii), A.6.2.(ii), F is differentiable at every X such that Det(X) > 0). The reader is kindly
asked to try to find the derivative of F by the standard techniques; if the result is not obtained in,

say, 30 minutes, please look at the 8-line computation to follow (in this computation, Det(X̄) > 0, and
G(X) = Det(X)):

DF (X̄)[∆X]
= D ln(G(X̄))[DG(X̄)[∆X]]  [chain rule]
= G⁻¹(X̄)DG(X̄)[∆X]  [ln′(t) = t⁻¹]
= Det⁻¹(X̄) lim_{t→+0} t⁻¹ [Det(X̄ + t∆X) − Det(X̄)]  [definition of G and (A.6.2)]
= Det⁻¹(X̄) lim_{t→+0} t⁻¹ [Det(X̄(I + tX̄⁻¹∆X)) − Det(X̄)]
= Det⁻¹(X̄) lim_{t→+0} t⁻¹ [Det(X̄)(Det(I + tX̄⁻¹∆X) − 1)]  [Det(AB) = Det(A)Det(B)]
= lim_{t→+0} t⁻¹ [Det(I + tX̄⁻¹∆X) − 1]
= Tr(X̄⁻¹∆X) = Σ_{i,j} [X̄⁻¹]ji (∆X)ij ,

where the concluding equality

lim_{t→+0} t⁻¹ [Det(I + tA) − 1] = Tr(A) ≡ Σ_i Aii  (A.6.6)

is immediately given by recalling what the determinant of I + tA is: this is a polynomial of t which is the
sum of products, taken along all diagonals of an n × n matrix and assigned certain signs, of the entries of
I + tA. On every one of these diagonals, except for the main one, there are at least two cells with entries
proportional to t, so that the corresponding products do not contribute to the constant and the linear in t
terms in Det(I + tA) and thus do not affect the limit in (A.6.6). The only product which does contribute
to the linear and the constant terms in Det(I + tA) is the product (1 + tA11 )(1 + tA22 )...(1 + tAnn ) coming
from the main diagonal; it is clear that in this product the constant term is 1, and the linear in t term is
t(A11 + ... + Ann ), and (A.6.6) follows.
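The resulting formula DF (X̄)[∆X] = Tr(X̄⁻¹∆X) can be checked numerically. The Python sketch below works with 2 × 2 matrices so that determinant and inverse are available in closed form (the concrete X̄ and ∆X are our choices):

```python
import math

# 2 x 2 check of DF(X)[dX] = Tr(X^{-1} dX) for F(X) = ln Det(X), Det(X) > 0
def det2(X):
    return X[0][0]*X[1][1] - X[0][1]*X[1][0]

def inv2(X):
    d = det2(X)
    return [[ X[1][1]/d, -X[0][1]/d],
            [-X[1][0]/d,  X[0][0]/d]]

X  = [[3.0, 1.0], [1.0, 2.0]]          # Det(X) = 5 > 0
dX = [[0.2, -0.1], [0.3, 0.4]]
t  = 1e-7

# forward difference quotient of F along dX
Xt = [[X[i][j] + t*dX[i][j] for j in range(2)] for i in range(2)]
numeric = (math.log(det2(Xt)) - math.log(det2(X))) / t

# Tr(X^{-1} dX) = sum_{i,j} [X^{-1}]_{ji} (dX)_{ij}
Xinv = inv2(X)
exact = sum(Xinv[j][i]*dX[i][j] for i in range(2) for j in range(2))
assert abs(numeric - exact) < 1e-5
```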

A.6.7 Higher order derivatives


Let f : Rn → Rm be a mapping which is well-defined and differentiable at every point x from an open
set U . The Jacobian of this mapping J(x) is a mapping from Rn to the space Rm×n of m × n matrices, i.e., is a
mapping taking values in a certain RM (M = mn). The derivative of this mapping, if it exists, is called the
second derivative of f ; it again is a mapping from Rn to a certain RM and as such can be differentiable,
and so on, so that we can speak about the second, the third, ... derivatives of a vector-valued function of
a vector argument. A sufficient condition for the existence of k derivatives of f in U is that f is Ck in U ,
i.e., that all partial derivatives of f of orders ≤ k exist and are continuous everywhere in U (cf. Theorem
A.6.1).
We have explained what it means for f to have k derivatives in U ; note, however, that according to
the definition, highest order derivatives at a point x are just long vectors; say, the second order derivative
of a scalar function f of 2 variables is the Jacobian of the mapping x 7→ f ′(x) : R2 → R2 , i.e., a mapping
from R2 to R2×2 = R4 ; the third order derivative of f is therefore the Jacobian of a mapping from R2
to R4 , i.e., a mapping from R2 to R4×2 = R8 , and so on. The question which should be addressed now
is: what is a natural and transparent way to represent the highest order derivatives?
The answer is as follows:
(∗) Let f : Rn → Rm be Ck on an open set U ⊂ Rn . The derivative of order ` ≤ k of f ,
taken at a point x ∈ U , can be naturally identified with a function

D` f (x)[∆x1 , ∆x2 , ..., ∆x` ]

of ` vector arguments ∆xi ∈ Rn , i = 1, ..., `, and taking values in Rm . This function is linear
in every one of the arguments ∆xi , the other arguments being fixed, and is symmetric with
respect to permutation of arguments ∆x1 , ..., ∆x` .
602 APPENDIX A. PREREQUISITES FROM LINEAR ALGEBRA AND ANALYSIS

In terms of f, the quantity D^ℓ f(x)[Δx^1, Δx^2, ..., Δx^ℓ] (full name: “the ℓ-th derivative (or differential) of f taken at a point x along the directions Δx^1, ..., Δx^ℓ”) is given by

$$D^\ell f(x)[\Delta x^1,\Delta x^2,\ldots,\Delta x^\ell]=\left.\frac{\partial^\ell}{\partial t_\ell\,\partial t_{\ell-1}\cdots\partial t_1}\right|_{t_1=\cdots=t_\ell=0} f\bigl(x+t_1\Delta x^1+t_2\Delta x^2+\cdots+t_\ell\Delta x^\ell\bigr). \tag{A.6.7}$$
The explanation to our claims is as follows. Let f : Rn → Rm be Ck on an open set U ⊂ Rn .
1. When ` = 1, (∗) says to us that the first order derivative of f , taken at x, is a linear function
Df (x)[∆x1 ] of ∆x1 ∈ Rn , taking values in Rm , and that the value of this function at every ∆x1
is given by the relation

$$Df(x)[\Delta x^1]=\left.\frac{\partial}{\partial t_1}\right|_{t_1=0}f(x+t_1\Delta x^1) \tag{A.6.8}$$
(cf. (A.6.2)), which is in complete accordance with what we already know about the derivative.

2. To understand what is the second derivative, let us take the first derivative Df (x)[∆x1 ], let us
temporarily fix somehow the argument ∆x1 and treat the derivative as a function of x. As a
function of x, ∆x1 being fixed, the quantity Df (x)[∆x1 ] is again a mapping which maps U into
R^m and is differentiable by Theorem A.6.1 (provided, of course, that k ≥ 2). The derivative of this mapping is a certain linear function of Δx ≡ Δx^2 ∈ R^n, depending on x as on a parameter; and of course it depends on Δx^1 as on a parameter as well. Thus, the derivative of Df(x)[Δx^1] in x is a certain function
D2 f (x)[∆x1 , ∆x2 ]
of x ∈ U and ∆x1 , ∆x2 ∈ Rn and taking values in Rm . What we know about this function is
that it is linear in ∆x2 . In fact, it is also linear in ∆x1 , since it is the derivative in x of certain
function (namely, of Df (x)[∆x1 ]) linearly depending on the parameter ∆x1 , so that the derivative
of the function in x is linear in the parameter ∆x1 as well (differentiation is a linear operation
with respect to a function we are differentiating: summing up functions and multiplying them by
real constants, we sum up, respectively, multiply by the same constants, the derivatives). Thus,
D2 f (x)[∆x1 , ∆x2 ] is linear in ∆x1 when x and ∆x2 are fixed, and is linear in ∆x2 when x and
∆x1 are fixed. Moreover, we have

$$\begin{array}{rcll}
D^2f(x)[\Delta x^1,\Delta x^2]&=&\left.\dfrac{\partial}{\partial t_2}\right|_{t_2=0}Df(x+t_2\Delta x^2)[\Delta x^1]&[\text{cf. (A.6.8)}]\\[2mm]
&=&\left.\dfrac{\partial}{\partial t_2}\right|_{t_2=0}\left.\dfrac{\partial}{\partial t_1}\right|_{t_1=0}f(x+t_2\Delta x^2+t_1\Delta x^1)&[\text{by (A.6.8)}]\\[2mm]
&=&\left.\dfrac{\partial^2}{\partial t_2\,\partial t_1}\right|_{t_1=t_2=0}f(x+t_1\Delta x^1+t_2\Delta x^2)&
\end{array} \tag{A.6.9}$$

as claimed in (A.6.7) for ` = 2. The only piece of information about the second derivative which
is contained in (∗) and is not justified yet is that D2 f (x)[∆x1 , ∆x2 ] is symmetric in ∆x1 , ∆x2 ;
but this fact is readily given by the representation (A.6.7), since, as they prove in Calculus, if a
function φ possesses continuous partial derivatives of orders ≤ ` in a neighbourhood of a point,
then these derivatives in this neighbourhood are independent of the order in which they are taken;
it follows that

$$\begin{array}{rcll}
D^2f(x)[\Delta x^1,\Delta x^2]&=&\left.\dfrac{\partial^2}{\partial t_2\,\partial t_1}\right|_{t_1=t_2=0}\underbrace{f(x+t_1\Delta x^1+t_2\Delta x^2)}_{\phi(t_1,t_2)}&[\text{(A.6.9)}]\\[2mm]
&=&\left.\dfrac{\partial^2}{\partial t_1\,\partial t_2}\right|_{t_1=t_2=0}\phi(t_1,t_2)&\\[2mm]
&=&\left.\dfrac{\partial^2}{\partial t_1\,\partial t_2}\right|_{t_1=t_2=0}f(x+t_2\Delta x^2+t_1\Delta x^1)&\\[2mm]
&=&D^2f(x)[\Delta x^2,\Delta x^1]&[\text{the same (A.6.9)}]
\end{array}$$

3. Now it is clear how to proceed: to define D3 f (x)[∆x1 , ∆x2 , ∆x3 ], we fix in the second order
derivative D2 f (x)[∆x1 , ∆x2 ] the arguments ∆x1 , ∆x2 and treat it as a function of x only, thus
arriving at a mapping which maps U into Rm and depends on ∆x1 , ∆x2 as on parameters (lin-
early in every one of them). Differentiating the resulting mapping in x, we arrive at a function
D3 f (x)[∆x1 , ∆x2 , ∆x3 ] which by construction is linear in every one of the arguments ∆x1 , ∆x2 ,
∆x3 and satisfies (A.6.7); the latter relation, due to the Calculus result on the symmetry of partial
derivatives, implies that D3 f (x)[∆x1 , ∆x2 , ∆x3 ] is symmetric in ∆x1 , ∆x2 , ∆x3 . After we have at
our disposal the third derivative D3 f , we can build from it in the already explained fashion the
fourth derivative, and so on, until k-th derivative is defined.

Remark A.6.2 Since D^ℓ f(x)[Δx^1, ..., Δx^ℓ] is linear in every one of the Δx^i, we can expand the derivative in a multiple sum: writing

$$\Delta x^i=\sum_{j=1}^n \Delta x^i_j e_j,$$

we get

$$D^\ell f(x)[\Delta x^1,\ldots,\Delta x^\ell]=D^\ell f(x)\Bigl[\sum_{j_1=1}^n \Delta x^1_{j_1}e_{j_1},\ldots,\sum_{j_\ell=1}^n \Delta x^\ell_{j_\ell}e_{j_\ell}\Bigr]=\sum_{1\le j_1,\ldots,j_\ell\le n} D^\ell f(x)[e_{j_1},\ldots,e_{j_\ell}]\,\Delta x^1_{j_1}\cdots\Delta x^\ell_{j_\ell} \tag{A.6.10}$$

What is the origin of the coefficients D^ℓ f(x)[e_{j_1}, ..., e_{j_ℓ}]? According to (A.6.7), one has

$$D^\ell f(x)[e_{j_1},\ldots,e_{j_\ell}]=\left.\frac{\partial^\ell}{\partial t_\ell\,\partial t_{\ell-1}\cdots\partial t_1}\right|_{t_1=\cdots=t_\ell=0}f(x+t_1e_{j_1}+t_2e_{j_2}+\cdots+t_\ell e_{j_\ell})=\frac{\partial^\ell}{\partial x_{j_\ell}\,\partial x_{j_{\ell-1}}\cdots\partial x_{j_1}}f(x),$$

so that the coefficients in (A.6.10) are nothing but the partial derivatives, of order ℓ, of f.

Remark A.6.3 An important particular case of relation (A.6.7) is the one when Δx^1 = Δx^2 = ... = Δx^ℓ; let us call the common value of these ℓ vectors d. According to (A.6.7), we have

$$D^\ell f(x)[d,d,\ldots,d]=\left.\frac{\partial^\ell}{\partial t_\ell\,\partial t_{\ell-1}\cdots\partial t_1}\right|_{t_1=\cdots=t_\ell=0}f(x+t_1d+t_2d+\cdots+t_\ell d).$$

This relation can be interpreted as follows: consider the function

$$\phi(t)=f(x+td)$$

of a real variable t. Then (check it!)

$$\phi^{(\ell)}(0)=\left.\frac{\partial^\ell}{\partial t_\ell\,\partial t_{\ell-1}\cdots\partial t_1}\right|_{t_1=\cdots=t_\ell=0}f(x+t_1d+t_2d+\cdots+t_\ell d)=D^\ell f(x)[d,\ldots,d].$$

In other words, D^ℓ f(x)[d, ..., d] is what is called the ℓ-th directional derivative of f taken at x along the direction d; to define this quantity, we pass from the function f of several variables to the univariate function φ(t) = f(x + td) – restrict f onto the line passing through x and directed by d – and then take the “usual” derivative of order ℓ of the resulting function of the single real variable t at the point t = 0 (which corresponds to the point x of our line).
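The interpretation via φ(t) = f(x + td) is easy to test numerically. The sketch below (the function, point, and direction are arbitrary choices made for the check) compares the exact third directional derivative of f(x) = (a^T x)^3, which equals 6(a^T d)^3, with a central finite difference of φ at t = 0:

```python
# 3rd directional derivative of f(x) = (a^T x)^3 along d vs. phi'''(0),
# where phi(t) = f(x + t d). For this f, D^3 f(x)[d,d,d] = 6 (a^T d)^3.
a = [1.0, -2.0, 0.5]          # arbitrary data for the check
x = [0.3, 0.1, -0.7]
d = [0.2, 0.4, 1.0]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def f(y):
    return dot(a, y) ** 3

def phi(t):                   # restriction of f to the line x + t d
    return f([xi + t * di for xi, di in zip(x, d)])

exact = 6.0 * dot(a, d) ** 3  # D^3 f(x)[d, d, d]
h = 1e-2                      # central 3rd-order difference (exact for cubics)
numeric = (phi(2*h) - 2*phi(h) + 2*phi(-h) - phi(-2*h)) / (2 * h**3)
assert abs(numeric - exact) < 1e-6
```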

Representation of higher order derivatives. The k-th order derivative D^k f(x)[·, ..., ·] of a C^k function f : R^n → R^m is what it is – a symmetric k-linear mapping on R^n taking values in R^m and depending on x as on a parameter. Choosing somehow coordinates in R^n, we can represent such a mapping in the form

$$D^k f(x)[\Delta x^1,\ldots,\Delta x^k]=\sum_{1\le i_1,\ldots,i_k\le n}\frac{\partial^k f(x)}{\partial x_{i_k}\,\partial x_{i_{k-1}}\cdots\partial x_{i_1}}(\Delta x^1)_{i_1}\cdots(\Delta x^k)_{i_k}.$$

We may say that the derivative can be represented by the k-index collection of m-dimensional vectors \frac{\partial^k f(x)}{\partial x_{i_k}\partial x_{i_{k-1}}\cdots\partial x_{i_1}}. This collection, however, is a difficult-to-handle entity, so that such a representation does not help. There is, however, a case when the collection becomes an entity we know how to handle; this is the case of the second-order derivative of a scalar function (k = 2, m = 1). In this case, the collection in question is just a symmetric matrix

$$H(x)=\left[\frac{\partial^2 f(x)}{\partial x_i\,\partial x_j}\right]_{1\le i,j\le n}.$$

This matrix is called the Hessian of f at x. Note that

$$D^2 f(x)[\Delta x^1,\Delta x^2]=(\Delta x^1)^T H(x)\,\Delta x^2.$$
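The identity D²f(x)[Δx¹, Δx²] = (Δx¹)^T H(x) Δx² can be checked against a mixed finite difference of φ(s, t) = f(x + sΔx¹ + tΔx²); the cubic polynomial below is an arbitrary test case, so the difference formula is exact up to rounding:

```python
# f(x1, x2) = x1^2 x2 + x1 x2^2, whose Hessian is
#   H(x) = [[2 x2, 2 x1 + 2 x2], [2 x1 + 2 x2, 2 x1]].
def f(x1, x2):
    return x1**2 * x2 + x1 * x2**2

def hessian(x1, x2):
    return [[2*x2, 2*x1 + 2*x2],
            [2*x1 + 2*x2, 2*x1]]

x = (0.4, -0.9)
u = (1.0, 2.0)          # plays the role of Delta x^1
v = (-0.5, 1.5)         # plays the role of Delta x^2

H = hessian(*x)
exact = sum(u[i] * H[i][j] * v[j] for i in range(2) for j in range(2))

# Mixed central difference of phi(s, t) = f(x + s u + t v) at s = t = 0.
def phi(s, t):
    return f(x[0] + s*u[0] + t*v[0], x[1] + s*u[1] + t*v[1])

h = 1e-4
numeric = (phi(h, h) - phi(h, -h) - phi(-h, h) + phi(-h, -h)) / (4 * h**2)
assert abs(numeric - exact) < 1e-6
```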

A.6.8 Calculus of Ck mappings


The calculus of Ck mappings can be summarized as follows:

Theorem A.6.3 (i) Let U be an open set in Rn , f1 (·), f2 (·) : Rn → Rm be Ck in U , and let real-valued
functions λ1 (·), λ2 (·) be Ck in U . Then the function

f (x) = λ1 (x)f1 (x) + λ2 (x)f2 (x)

is Ck in U .
(ii) Let U be an open set in Rn , V be an open set in Rm , let a mapping f : Rn → Rm be Ck in
U and such that f (x) ∈ V for x ∈ U , and, finally, let a mapping g : Rm → Rp be Ck in V . Then the
superposition
h(x) = g(f (x))

is Ck in U .

Remark A.6.4 For higher order derivatives, in contrast to first order ones, there is no simple “chain rule” for computing the derivative of a superposition. For example, the second-order derivative of the superposition h(x) = g(f(x)) of two C²-mappings is given by the formula

$$D^2h(x)[\Delta x^1,\Delta x^2]=Dg(f(x))\bigl[D^2f(x)[\Delta x^1,\Delta x^2]\bigr]+D^2g(f(x))\bigl[Df(x)[\Delta x^1],\,Df(x)[\Delta x^2]\bigr]$$

(check it!). We see that both the first- and second-order derivatives of f and g contribute to the second-order derivative of the superposition h.
The only case when there does exist a simple formula for higher order derivatives of a superposition is the case when the inner function is affine: if f(x) = Ax + b and h(x) = g(f(x)) = g(Ax + b) with a C^ℓ mapping g, then

$$D^\ell h(x)[\Delta x^1,\ldots,\Delta x^\ell]=D^\ell g(Ax+b)[A\Delta x^1,\ldots,A\Delta x^\ell]. \tag{A.6.11}$$
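The second-order chain rule can be verified on a small concrete example (the particular f and g below are arbitrary choices): with f(x) = (x₁², x₁x₂) and g(y) = y₁y₂, the superposition is h(x) = x₁³x₂, and both sides of the formula are available in closed form:

```python
# Verify D^2 h(x)[u,v] = Dg(f(x))[D^2 f(x)[u,v]] + D^2 g(f(x))[Df(x)u, Df(x)v]
# for f(x) = (x1^2, x1 x2), g(y) = y1 y2, so that h(x) = g(f(x)) = x1^3 x2.
x1, x2 = 0.7, -1.3
u = (0.2, 1.1)
v = (-0.4, 0.9)

# Left hand side: u^T (Hess h) v, with Hess h = [[6 x1 x2, 3 x1^2], [3 x1^2, 0]].
Hh = [[6*x1*x2, 3*x1**2], [3*x1**2, 0.0]]
lhs = sum(u[i] * Hh[i][j] * v[j] for i in range(2) for j in range(2))

# Right hand side, assembled term by term.
Dg = (x1*x2, x1**2)                              # gradient of g at y = f(x)
D2f_uv = (2*u[0]*v[0], u[0]*v[1] + u[1]*v[0])    # componentwise Hessians of f
term1 = Dg[0]*D2f_uv[0] + Dg[1]*D2f_uv[1]

Df = [[2*x1, 0.0], [x2, x1]]                     # Jacobian of f
p = (Df[0][0]*u[0] + Df[0][1]*u[1], Df[1][0]*u[0] + Df[1][1]*u[1])
q = (Df[0][0]*v[0] + Df[0][1]*v[1], Df[1][0]*v[0] + Df[1][1]*v[1])
term2 = p[0]*q[1] + p[1]*q[0]                    # Hessian of g is [[0,1],[1,0]]
rhs = term1 + term2
assert abs(lhs - rhs) < 1e-12
```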

A.6.9 Examples of higher-order derivatives


Example 1: Second-order derivative of an affine function f (x) = a + bT x is, of course,
identically zero. Indeed, as we have seen,

Df (x)[∆x1 ] = bT ∆x1

is independent of x, and therefore the derivative of Df (x)[∆x1 ] in x, which should give us the second
derivative D2 f (x)[∆x1 , ∆x2 ], is zero. Clearly, the third, the fourth, etc., derivatives of an affine function
are zero as well.
A.6. DIFFERENTIABLE FUNCTIONS ON RN 605

Example 2: Second-order derivative of a homogeneous quadratic form f(x) = x^TAx (A is a symmetric n × n matrix). As we have seen,

$$Df(x)[\Delta x^1]=2x^TA\Delta x^1.$$

Differentiating in x, we get

$$D^2f(x)[\Delta x^1,\Delta x^2]=\lim_{t\to+0}t^{-1}\bigl[2(x+t\Delta x^2)^TA\Delta x^1-2x^TA\Delta x^1\bigr]=2(\Delta x^2)^TA\Delta x^1,$$

so that

$$D^2f(x)[\Delta x^1,\Delta x^2]=2(\Delta x^2)^TA\Delta x^1.$$
Note that the second derivative of a quadratic form is independent of x; consequently, the third, the
fourth, etc., derivatives of a quadratic form are identically zero.

Example 3: Second-order derivative of the log-det barrier F(X) = ln Det(X). As we have seen, this function of an n × n matrix is well-defined and differentiable on the set U of matrices with positive determinant (which is an open set in the space R^{n×n} of n × n matrices). In fact, this function is C^∞ in U. Let us compute its second-order derivative. As we remember,

$$DF(X)[\Delta X^1]=\mathrm{Tr}(X^{-1}\Delta X^1). \tag{A.6.12}$$

To differentiate the right hand side in X, let us first find the derivative of the mapping G(X) = X^{-1} which is defined on the open set of non-degenerate n × n matrices. We have

$$\begin{array}{rcl}
DG(X)[\Delta X]&=&\lim\limits_{t\to+0}t^{-1}\bigl[(X+t\Delta X)^{-1}-X^{-1}\bigr]\\[1mm]
&=&\lim\limits_{t\to+0}t^{-1}\bigl[(X(I+tX^{-1}\Delta X))^{-1}-X^{-1}\bigr]\\[1mm]
&=&\lim\limits_{t\to+0}t^{-1}\bigl[(I+t\underbrace{X^{-1}\Delta X}_{Y})^{-1}X^{-1}-X^{-1}\bigr]\\[1mm]
&=&\lim\limits_{t\to+0}t^{-1}\bigl[(I+tY)^{-1}-I\bigr]X^{-1}\\[1mm]
&=&\lim\limits_{t\to+0}t^{-1}\bigl[I-(I+tY)\bigr](I+tY)^{-1}X^{-1}\\[1mm]
&=&\lim\limits_{t\to+0}\bigl[-Y(I+tY)^{-1}\bigr]X^{-1}\\[1mm]
&=&-YX^{-1}\;=\;-X^{-1}\Delta X\,X^{-1},
\end{array}$$

and we arrive at the relation, important in its own right,

$$D(X^{-1})[\Delta X]=-X^{-1}\Delta X\,X^{-1},\qquad [X\in\mathbf{R}^{n\times n},\ \mathrm{Det}(X)\neq 0]$$

which is the “matrix extension” of the standard relation (x^{-1})' = -x^{-2}, x ∈ R.
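This matrix derivative is easy to confirm numerically; the sketch below (random data, with a shifted diagonal only to keep X safely nonsingular) compares a finite difference of the inversion map with −X⁻¹ΔX X⁻¹:

```python
import numpy as np

# Finite-difference check of D(X^{-1})[dX] = -X^{-1} dX X^{-1} on random data.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4)) + 4 * np.eye(4)   # safely nonsingular
dX = rng.standard_normal((4, 4))

t = 1e-6
numeric = (np.linalg.inv(X + t * dX) - np.linalg.inv(X)) / t
exact = -np.linalg.inv(X) @ dX @ np.linalg.inv(X)
err = np.abs(numeric - exact).max()
assert err < 1e-4
```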


Now we are ready to compute the second derivative of the log-det barrier: since

$$F(X)=\ln\mathrm{Det}(X),\qquad DF(X)[\Delta X^1]=\mathrm{Tr}(X^{-1}\Delta X^1),$$

we get

$$\begin{array}{rcl}
D^2F(X)[\Delta X^1,\Delta X^2]&=&\lim\limits_{t\to+0}t^{-1}\bigl[\mathrm{Tr}((X+t\Delta X^2)^{-1}\Delta X^1)-\mathrm{Tr}(X^{-1}\Delta X^1)\bigr]\\[1mm]
&=&\lim\limits_{t\to+0}\mathrm{Tr}\bigl(t^{-1}\bigl[(X+t\Delta X^2)^{-1}\Delta X^1-X^{-1}\Delta X^1\bigr]\bigr)\\[1mm]
&=&\lim\limits_{t\to+0}\mathrm{Tr}\bigl(t^{-1}\bigl[(X+t\Delta X^2)^{-1}-X^{-1}\bigr]\Delta X^1\bigr)\\[1mm]
&=&\mathrm{Tr}\bigl([-X^{-1}\Delta X^2X^{-1}]\Delta X^1\bigr),
\end{array}$$

and we arrive at the formula

$$D^2F(X)[\Delta X^1,\Delta X^2]=-\mathrm{Tr}(X^{-1}\Delta X^2X^{-1}\Delta X^1)\qquad [X\in\mathbf{R}^{n\times n},\ \mathrm{Det}(X)>0].$$

Since Tr(AB) = Tr(BA) (check it!) for all matrices A, B such that the product AB makes sense and is square, the right hand side in the above formula is symmetric in ΔX^1, ΔX^2, as it should be for the second derivative of a C² function.
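Both the formula and its symmetry can be checked on random data; the instance below uses a positive definite X so that Det(X) > 0, and a mixed finite difference of φ(s, t) = ln Det(X + sΔX¹ + tΔX²):

```python
import numpy as np

# Check D^2 F(X)[dX1, dX2] = -Tr(X^{-1} dX2 X^{-1} dX1) for F(X) = ln det X,
# and the symmetry of this bilinear form, on random data.
rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
X = B @ B.T + 5 * np.eye(5)                     # positive definite: det X > 0
dX1 = rng.standard_normal((5, 5)); dX1 = dX1 + dX1.T
dX2 = rng.standard_normal((5, 5)); dX2 = dX2 + dX2.T

Xinv = np.linalg.inv(X)
exact = -np.trace(Xinv @ dX2 @ Xinv @ dX1)
sym = -np.trace(Xinv @ dX1 @ Xinv @ dX2)        # arguments swapped

# Mixed finite difference of phi(s, t) = ln det(X + s dX1 + t dX2).
def F(M):
    return np.log(np.linalg.det(M))

h = 1e-4
numeric = (F(X + h*dX1 + h*dX2) - F(X + h*dX1 - h*dX2)
           - F(X - h*dX1 + h*dX2) + F(X - h*dX1 - h*dX2)) / (4 * h**2)
assert abs(numeric - exact) < 1e-4 * max(1.0, abs(exact))
assert abs(exact - sym) < 1e-10                 # symmetry via Tr(AB) = Tr(BA)
```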

A.6.10 Taylor expansion


Assume that f : R^n → R^m is C^k in a neighbourhood U of a point x̄. The Taylor expansion of order k of f, built at the point x̄, is the function

$$F_k(x)=f(\bar x)+\frac{1}{1!}Df(\bar x)[x-\bar x]+\frac{1}{2!}D^2f(\bar x)[x-\bar x,x-\bar x]+\frac{1}{3!}D^3f(\bar x)[x-\bar x,x-\bar x,x-\bar x]+\cdots+\frac{1}{k!}D^kf(\bar x)[\underbrace{x-\bar x,\ldots,x-\bar x}_{k\ \text{times}}] \tag{A.6.13}$$

We are already acquainted with the Taylor expansion of order 1

F1 (x) = f (x̄) + Df (x̄)[x − x̄]

– this is the affine function of x which approximates f(x) “very well” in a neighbourhood of x̄, namely, within approximation error ō(|x − x̄|). A similar fact is true for Taylor expansions of higher order:
Theorem A.6.4 Let f : Rn → Rm be Ck in a neighbourhood of x̄, and let Fk (x) be the Taylor expansion
of f at x̄ of degree k. Then
(i) Fk (x) is a vector-valued polynomial of full degree ≤ k (i.e., every one of the coordinates of the
vector Fk (x) is a polynomial of x1 , ..., xn , and the sum of powers of xi ’s in every term of this polynomial
does not exceed k);
(ii) F_k(x) approximates f(x) in a neighbourhood of x̄ up to a remainder which is ō(|x − x̄|^k) as x → x̄: for every ε > 0, there exists δ > 0 such that

$$|x-\bar x|\le\delta\ \Rightarrow\ |F_k(x)-f(x)|\le\varepsilon|x-\bar x|^k.$$

Fk (·) is the unique polynomial with components of full degree ≤ k which approximates f up to a remainder
which is ō(|x − x̄|k ).
(iii) The value and the derivatives of Fk of orders 1, 2, ..., k, taken at x̄, are the same as the value and
the corresponding derivatives of f taken at the same point.
As stated in the Theorem, F_k(x) approximates f(x) for x close to x̄ up to a remainder which is ō(|x − x̄|^k). In many cases, it is not enough to know that the remainder is “ō(|x − x̄|^k)” – we need an explicit bound on this remainder. The standard bound of this type is as follows:
Theorem A.6.5 Let k be a positive integer, and let f : Rn → Rm be Ck+1 in a ball Br = Br (x̄) = {x ∈
Rn : |x − x̄| < r} of a radius r > 0 centered at a point x̄. Assume that the directional derivatives of order
k + 1, taken at every point of Br along every unit direction, do not exceed certain L < ∞:

|Dk+1 f (x)[d, ..., d]| ≤ L ∀(x ∈ Br )∀(d, |d| = 1).

Then for the Taylor expansion F_k of order k of f taken at x̄ one has

$$|f(x)-F_k(x)|\le\frac{L|x-\bar x|^{k+1}}{(k+1)!}\qquad\forall(x\in B_r).$$
Thus, in a neighbourhood of x̄ the remainder of the k-th order Taylor expansion, taken at x̄, is of order
of L|x − x̄|k+1 , where L is the maximal (over all unit directions and all points from the neighbourhood)
magnitude of the directional derivatives of order k + 1 of f .
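The bound of Theorem A.6.5 is easy to test in the univariate case; the instance below takes f = exp, x̄ = 0 and r = 1, where every derivative of order k + 1 on the ball is bounded by L = e^r:

```python
import math

# Taylor remainder bound |f(x) - F_k(x)| <= L |x - xbar|^{k+1} / (k+1)!
# checked for f = exp, xbar = 0, on |x| < r, where |f^{(k+1)}| <= L = e^r.
k, r = 3, 1.0
L = math.exp(r)

def taylor(x):                      # F_k for exp at xbar = 0
    return sum(x**j / math.factorial(j) for j in range(k + 1))

for x in [-0.9, -0.5, -0.1, 0.2, 0.6, 0.95]:
    remainder = abs(math.exp(x) - taylor(x))
    bound = L * abs(x) ** (k + 1) / math.factorial(k + 1)
    assert remainder <= bound
```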

A.7 Symmetric matrices


A.7.1 Spaces of matrices
Let S^m be the space of symmetric m × m matrices, and M^{m,n} be the space of rectangular m × n matrices with real entries. From the viewpoint of their linear structure (i.e., the operations of addition and multiplication by reals) S^m is just the arithmetic linear space R^{m(m+1)/2} of dimension m(m+1)/2: by arranging the elements of a symmetric m × m matrix X in a single column, say, in the row-by-row order, you get a usual m²-dimensional column vector; multiplication of a matrix by a real and addition of matrices correspond to the same operations with the “representing vector(s)”. When X runs through S^m, the vector representing X runs through the m(m+1)/2-dimensional subspace of R^{m²} consisting of vectors satisfying the “symmetry condition” – the coordinates coming from symmetric to each other pairs of entries in X are equal to each other. Similarly, M^{m,n} as a linear space is just R^{mn}, and it is natural to equip M^{m,n} with the inner product defined as the usual inner product of the vectors representing the matrices:

$$\langle X,Y\rangle=\sum_{i=1}^m\sum_{j=1}^n X_{ij}Y_{ij}=\mathrm{Tr}(X^TY).$$

Here Tr stands for the trace – the sum of diagonal elements of a (square) matrix. With this inner product
(called the Frobenius inner product), Mm,n becomes a legitimate Euclidean space, and we may use in
connection with this space all notions based upon the Euclidean structure, e.g., the (Frobenius) norm of a matrix

$$\|X\|_2=\sqrt{\langle X,X\rangle}=\sqrt{\sum_{i=1}^m\sum_{j=1}^n X_{ij}^2}=\sqrt{\mathrm{Tr}(X^TX)}$$

and likewise the notions of orthogonality, orthogonal complement of a linear subspace, etc. The same
applies to the space Sm equipped with the Frobenius inner product; of course, the Frobenius inner product
of symmetric matrices can be written without the transposition sign:

$$\langle X,Y\rangle=\mathrm{Tr}(XY),\qquad X,Y\in\mathbf{S}^m.$$
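These trace identities for the Frobenius inner product can be spot-checked on random matrices:

```python
import numpy as np

# Frobenius inner product: <X, Y> = sum_ij X_ij Y_ij = Tr(X^T Y),
# and, for symmetric X, Y, simply Tr(XY).
rng = np.random.default_rng(2)
X = rng.standard_normal((3, 4))
Y = rng.standard_normal((3, 4))
ip = (X * Y).sum()                               # entrywise inner product
assert abs(ip - np.trace(X.T @ Y)) < 1e-12

S = rng.standard_normal((4, 4)); S = S + S.T     # symmetric
T = rng.standard_normal((4, 4)); T = T + T.T
assert abs((S * T).sum() - np.trace(S @ T)) < 1e-12
```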

A.7.2 Main facts on symmetric matrices


Let us focus on the space Sm of symmetric matrices. The most important property of these matrices is
as follows:

Theorem A.7.1 [Eigenvalue decomposition] An n × n matrix A is symmetric if and only if it admits an orthonormal system of eigenvectors: there exists an orthonormal basis {e_1, ..., e_n} such that

$$Ae_i=\lambda_ie_i,\qquad i=1,\ldots,n, \tag{A.7.1}$$

for certain reals λ_i.

In connection with Theorem A.7.1, it is worthy to recall the following notions and facts:

A.7.2.A. Eigenvectors and eigenvalues. An eigenvector of an n×n matrix A is a nonzero vector


e (real or complex) such that Ae = λe for (real or complex) scalar λ; this scalar is called the eigenvalue
of A corresponding to the eigenvector e.
Eigenvalues of A are exactly the roots of the characteristic polynomial

π(z) = Det(zI − A) = z n + b1 z n−1 + b2 z n−2 + ... + bn

of A.
Theorem A.7.1 states, in particular, that for a symmetric matrix A, all eigenvalues are real, and the
corresponding eigenvectors can be chosen to be real and to form an orthonormal basis in Rn .

A.7.2.B. Eigenvalue decomposition of a symmetric matrix. Theorem A.7.1 admits equiv-


alent reformulation as follows (check the equivalence!):

Theorem A.7.2 An n × n matrix A is symmetric if and only if it can be represented in the form

A = U ΛU T , (A.7.2)

where
• U is an orthogonal matrix: U^{-1} = U^T (or, which is the same, U^TU = I, or, which is the same, UU^T = I, or, which is the same, the columns of U form an orthonormal basis in R^n, or, which is the same, the rows of U form an orthonormal basis in R^n).

• Λ is the diagonal matrix with the diagonal entries λ1 , ..., λn .

Representation (A.7.2) with orthogonal U and diagonal Λ is called the eigenvalue decomposition of A.
In such a representation,
• The columns of U form an orthonormal system of eigenvectors of A;

• The diagonal entries in Λ are the eigenvalues of A corresponding to these eigenvectors.
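Numerically, the eigenvalue decomposition of a symmetric matrix is produced by a symmetric eigensolver; note that numpy's `eigh` returns eigenvalues in ascending order, while the convention of this text is non-ascending, so the output is flipped below:

```python
import numpy as np

# Eigenvalue decomposition A = U Lambda U^T of a random symmetric matrix.
rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5)); A = (A + A.T) / 2

lam, U = np.linalg.eigh(A)
lam, U = lam[::-1], U[:, ::-1]                  # non-ascending order
assert np.allclose(U @ np.diag(lam) @ U.T, A)   # A = U Lambda U^T
assert np.allclose(U.T @ U, np.eye(5))          # U is orthogonal
assert np.all(np.diff(lam) <= 1e-12)            # lambda_1 >= ... >= lambda_n
```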

A.7.2.C. Vector of eigenvalues. When speaking about eigenvalues λi (A) of a symmetric n × n


matrix A, we always arrange them in the non-ascending order:

λ1 (A) ≥ λ2 (A) ≥ ... ≥ λn (A);

λ(A) ∈ Rn denotes the vector of eigenvalues of A taken in the above order.

A.7.2.D. Freedom in eigenvalue decomposition. Part of the data Λ, U in the eigenvalue de-
composition (A.7.2) is uniquely defined by A, while the other data admit certain “freedom”. Specifically,
the sequence λ1 , ..., λn of eigenvalues of A (i.e., diagonal entries of Λ) is exactly the sequence of roots
of the characteristic polynomial of A (every root is repeated according to its multiplicity) and thus is
uniquely defined by A (provided that we arrange the entries of the sequence in the non-ascending order).
The columns of U are not uniquely defined by A. What is uniquely defined, are the linear spans E(λ) of
the columns of U corresponding to all eigenvalues equal to certain λ; such a linear span is nothing but
the spectral subspace {x : Ax = λx} of A corresponding to the eigenvalue λ. There are as many spectral subspaces as there are different eigenvalues; spectral subspaces corresponding to different eigenvalues of a symmetric matrix are orthogonal to each other, and their sum is the entire space. When building an orthogonal matrix U in the spectral decomposition, one chooses an orthonormal eigenbasis in the spectral subspace corresponding to the largest eigenvalue and makes the vectors of this basis the first columns in U, then chooses an orthonormal basis in the spectral subspace corresponding to the second largest eigenvalue and makes the vectors from this basis the next columns of U, and so on.

A.7.2.E. “Simultaneous” decomposition of commuting symmetric matrices. Let


A1 , ..., Ak be n × n symmetric matrices. It turns out that the matrices commute with each other
(Ai Aj = Aj Ai for all i, j) if and only if they can be “simultaneously diagonalized”, i.e., there exist
a single orthogonal matrix U and diagonal matrices Λ1 ,...,Λk such that

Ai = U Λi U T , i = 1, ..., k.

You are welcome to prove this statement by yourself; to simplify your task, here are two simple and
important by their own right statements which help to reach your target:

A.7.2.E.1: Let λ be a real and A, B be two commuting n × n matrices. Then the spectral
subspace E = {x : Ax = λx} of A corresponding to λ is invariant for B (i.e., Be ∈ E for
every e ∈ E).
A.7.2.E.2: If A is an n × n matrix and L is an invariant subspace of A (i.e., L is a linear
subspace such that Ae ∈ L whenever e ∈ L), then the orthogonal complement L⊥ of L is
invariant for the matrix AT . In particular, if A is symmetric and L is invariant subspace of
A, then L⊥ is invariant subspace of A as well.

A.7.3 Variational characterization of eigenvalues


Theorem A.7.3 [VCE – Variational Characterization of Eigenvalues] Let A be a symmetric matrix. Then

$$\lambda_\ell(A)=\min_{E\in\mathcal{E}_\ell}\ \max_{x\in E,\,x^Tx=1}x^TAx,\qquad\ell=1,\ldots,n, \tag{A.7.3}$$

where E_ℓ is the family of all linear subspaces in R^n of the dimension n − ℓ + 1.

VCE says that to get the largest eigenvalue λ1 (A), you should maximize the quadratic form xT Ax over
the unit sphere S = {x ∈ Rn : xT x = 1}; the maximum is exactly λ1 (A). To get the second largest
eigenvalue λ2 (A), you should act as follows: you choose a linear subspace E of dimension n − 1 and
maximize the quadratic form xT Ax over the cross-section of S by this subspace; the maximum value
of the form depends on E, and you minimize this maximum over linear subspaces E of the dimension
n − 1; the result is exactly λ2 (A). To get λ3 (A), you replace in the latter construction subspaces of the
dimension n − 1 by those of the dimension n − 2, and so on. In particular, the smallest eigenvalue λn (A)
is just the minimum, over all linear subspaces E of the dimension n − n + 1 = 1, i.e., over all lines passing
through the origin, of the quantities xT Ax, where x ∈ E is unit (xT x = 1); in other words, λn (A) is just
the minimum of the quadratic form xT Ax over the unit sphere S.
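The subspaces which attain the outer minimum in (A.7.3) are spanned by trailing eigenvectors, and this is easy to verify numerically: restricting A to G_ℓ = Lin{e_ℓ, ..., e_n} via an orthonormal basis Q of G_ℓ, the maximum of x^TAx over unit x ∈ G_ℓ is the top eigenvalue of Q^TAQ, which should equal λ_ℓ(A):

```python
import numpy as np

# For G_l = span(e_l, ..., e_n), max of x^T A x over unit x in G_l is lambda_l(A).
rng = np.random.default_rng(4)
n = 6
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
lam, U = np.linalg.eigh(A)
lam, U = lam[::-1], U[:, ::-1]                  # non-ascending eigenvalues

for l in range(1, n + 1):
    Q = U[:, l - 1:]                            # orthonormal basis of G_l
    top = np.linalg.eigvalsh(Q.T @ A @ Q).max() # max of x^T A x over unit x in G_l
    assert abs(top - lam[l - 1]) < 1e-10
```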
Proof of the VCE is pretty easy. Let e1 , ..., en be an orthonormal eigenbasis of A: Ae` =
λ` (A)e` . For 1 ≤ ` ≤ n, let F` = Lin{e1 , ..., e` }, G` = Lin{e` , e`+1 , ..., en }. Finally, for
x ∈ Rn let ξ(x) be the vector of coordinates of x in the orthonormal basis e1 , ..., en . Note
that
xT x = ξ T (x)ξ(x),
since {e_1, ..., e_n} is an orthonormal basis, and that

$$x^TAx=x^TA\Bigl(\sum_i\xi_i(x)e_i\Bigr)=x^T\sum_i\lambda_i(A)\xi_i(x)e_i=\sum_i\lambda_i(A)\xi_i(x)\underbrace{(x^Te_i)}_{\xi_i(x)}=\sum_i\lambda_i(A)\xi_i^2(x). \tag{A.7.4}$$

Now, given `, 1 ≤ ` ≤ n, let us set E = G` ; note that E is a linear subspace of the dimension
n − ` + 1. In view of (A.7.4), the maximum of the quadratic form xT Ax over the intersection
of our E with the unit sphere is

$$\max\Bigl\{\sum_{i=\ell}^n\lambda_i(A)\xi_i^2:\ \sum_{i=\ell}^n\xi_i^2=1\Bigr\},$$

and the latter quantity clearly equals max_{ℓ≤i≤n} λ_i(A) = λ_ℓ(A). Thus, for appropriately chosen E ∈ E_ℓ, the inner maximum in the right hand side of (A.7.3) equals λ_ℓ(A), whence the right hand side of (A.7.3) is ≤ λ_ℓ(A). It remains to prove the opposite inequality. To this
end, consider a linear subspace E of the dimension n−`+1 and observe that it has nontrivial
intersection with the linear subspace F_ℓ of the dimension ℓ (indeed, dim E + dim F_ℓ = (n − ℓ + 1) + ℓ > n, so that dim(E ∩ F_ℓ) > 0 by the Dimension formula). It follows that there
exists a unit vector y belonging to both E and F_ℓ. Since y is a unit vector from F_ℓ, we have

$$y=\sum_{i=1}^\ell\eta_ie_i\quad\text{with}\quad\sum_{i=1}^\ell\eta_i^2=1,$$

whence, by (A.7.4),

$$y^TAy=\sum_{i=1}^\ell\lambda_i(A)\eta_i^2\ge\min_{1\le i\le\ell}\lambda_i(A)=\lambda_\ell(A).$$

Since y is in E, we conclude that

max xT Ax ≥ y T Ay ≥ λ` (A).
x∈E:xT x=1

Since E is an arbitrary subspace from E_ℓ, we conclude that the right hand side in (A.7.3) is ≥ λ_ℓ(A). □

A simple and useful byproduct of our reasoning is the relation (A.7.4):

Corollary A.7.1 For a symmetric matrix A, the quadratic form x^TAx is a weighted sum of squares of the coordinates ξ_i(x) of x taken with respect to an orthonormal eigenbasis of A; the weights in this sum are exactly the eigenvalues of A:

$$x^TAx=\sum_i\lambda_i(A)\xi_i^2(x).$$

A.7.3.1 Corollaries of the VCE


VCE admits a number of extremely important corollaries as follows:

A.7.3.A. Eigenvalue characterization of positive (semi)definite matrices. Recall that a matrix A is called positive definite (notation: A ≻ 0), if it is symmetric and the quadratic form x^TAx is positive outside the origin; A is called positive semidefinite (notation: A ⪰ 0), if A is symmetric and the quadratic form x^TAx is nonnegative everywhere. VCE provides us with the following eigenvalue characterization of positive (semi)definite matrices:

Proposition A.7.1 A symmetric matrix A is positive semidefinite if and only if its eigenvalues are nonnegative; A is positive definite if and only if all eigenvalues of A are positive.

Indeed, A is positive definite, if and only if the minimum value of x^TAx over the unit sphere is positive, and is positive semidefinite, if and only if this minimum value is nonnegative; it remains to note that by VCE, the minimum value of x^TAx over the unit sphere is exactly the minimum eigenvalue of A.

A.7.3.B. ⪰-Monotonicity of the vector of eigenvalues. Let us write A ⪰ B (A ≻ B) to express that A, B are symmetric matrices of the same size such that A − B is positive semidefinite (respectively, positive definite).

Proposition A.7.2 If A ⪰ B, then λ(A) ≥ λ(B), and if A ≻ B, then λ(A) > λ(B).

Indeed, when A ⪰ B, then, of course,

$$\max_{x\in E:x^Tx=1}x^TAx\ge\max_{x\in E:x^Tx=1}x^TBx$$

for every linear subspace E, whence

$$\lambda_\ell(A)=\min_{E\in\mathcal{E}_\ell}\max_{x\in E:x^Tx=1}x^TAx\ge\min_{E\in\mathcal{E}_\ell}\max_{x\in E:x^Tx=1}x^TBx=\lambda_\ell(B),\qquad\ell=1,\ldots,n,$$

i.e., λ(A) ≥ λ(B). The case of A ≻ B can be considered similarly.



A.7.3.C. Eigenvalue Interlacement Theorem. We shall formulate this extremely important theorem as follows:

Theorem A.7.4 [Eigenvalue Interlacement Theorem] Let A be a symmetric n × n matrix and Ā be the angular (n − k) × (n − k) submatrix of A. Then, for every ℓ ≤ n − k, the ℓ-th eigenvalue of Ā separates the ℓ-th and the (ℓ + k)-th eigenvalues of A:

$$\lambda_\ell(A)\ge\lambda_\ell(\bar A)\ge\lambda_{\ell+k}(A). \tag{A.7.5}$$

Indeed, by VCE, λ_ℓ(Ā) = min_{E∈Ē_ℓ} max_{x∈E:x^Tx=1} x^TAx, where Ē_ℓ is the family of all linear subspaces of the dimension n − k − ℓ + 1 contained in the linear subspace {x ∈ R^n : x_{n−k+1} = x_{n−k+2} = ... = x_n = 0}. Since Ē_ℓ ⊂ E_{ℓ+k}, we have

$$\lambda_\ell(\bar A)=\min_{E\in\bar{\mathcal{E}}_\ell}\max_{x\in E:x^Tx=1}x^TAx\ge\min_{E\in\mathcal{E}_{\ell+k}}\max_{x\in E:x^Tx=1}x^TAx=\lambda_{\ell+k}(A).$$

We have proved the right inequality in (A.7.5). Applying this inequality to the matrix −A, we get

$$-\lambda_\ell(\bar A)=\lambda_{n-k-\ell+1}(-\bar A)\ge\lambda_{n-\ell+1}(-A)=-\lambda_\ell(A),$$

or, which is the same, λ_ℓ(Ā) ≤ λ_ℓ(A), which is the left inequality in (A.7.5).
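Interlacement is convenient to spot-check on random instances; the sketch below verifies both inequalities of (A.7.5) for every admissible ℓ (eigenvalues sorted in non-ascending order, matching the text's convention):

```python
import numpy as np

# Interlacement: for the angular (n-k) x (n-k) submatrix Abar of symmetric A,
# lambda_l(A) >= lambda_l(Abar) >= lambda_{l+k}(A) for all l <= n - k.
rng = np.random.default_rng(5)
n, k = 7, 2
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
Abar = A[:n - k, :n - k]

lam = np.sort(np.linalg.eigvalsh(A))[::-1]          # non-ascending
lam_bar = np.sort(np.linalg.eigvalsh(Abar))[::-1]
for l in range(n - k):                              # l = 0 is lambda_1
    assert lam[l] + 1e-10 >= lam_bar[l] >= lam[l + k] - 1e-10
```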

A.7.4 Positive semidefinite matrices and the semidefinite cone

A.7.4.A. Positive semidefinite matrices. Recall that an n × n matrix A is called positive semidefinite (notation: A ⪰ 0), if A is symmetric and produces a nonnegative quadratic form:

$$A\succeq 0\ \Leftrightarrow\ \{A=A^T\ \text{and}\ x^TAx\ge 0\ \forall x\}.$$

A is called positive definite (notation: A ≻ 0), if it is positive semidefinite and the corresponding quadratic form is positive outside the origin:

$$A\succ 0\ \Leftrightarrow\ \{A=A^T\ \text{and}\ x^TAx>0\ \forall x\neq 0\}.$$

It makes sense to list a number of equivalent definitions of a positive semidefinite matrix:

Theorem A.7.5 Let A be a symmetric n × n matrix. Then the following properties of A are equivalent to each other:
(i) A ⪰ 0
(ii) λ(A) ≥ 0
(iii) A = D^TD for a certain rectangular matrix D
(iv) A = Δ^TΔ for a certain upper triangular n × n matrix Δ
(v) A = B² for a certain symmetric matrix B;
(vi) A = B² for a certain B ⪰ 0.
The following properties of a symmetric matrix A also are equivalent to each other:
(i′) A ≻ 0
(ii′) λ(A) > 0
(iii′) A = D^TD for a certain rectangular matrix D of rank n
(iv′) A = Δ^TΔ for a certain nondegenerate upper triangular n × n matrix Δ
(v′) A = B² for a certain nondegenerate symmetric matrix B;
(vi′) A = B² for a certain B ≻ 0.

Proof. (i)⇔(ii): this equivalence is stated by Proposition A.7.1.
(ii)⇔(vi): Let A = UΛU^T be the eigenvalue decomposition of A, so that U is orthogonal and Λ is diagonal with nonnegative diagonal entries λ_i(A) (we are in the situation of (ii)!). Let Λ^{1/2} be the diagonal matrix with the diagonal entries λ_i^{1/2}(A); note that (Λ^{1/2})² = Λ. The matrix B = UΛ^{1/2}U^T is symmetric with nonnegative eigenvalues λ_i^{1/2}(A), so that B ⪰ 0 by Proposition A.7.1, and

$$B^2=U\Lambda^{1/2}\underbrace{U^TU}_{I}\Lambda^{1/2}U^T=U(\Lambda^{1/2})^2U^T=U\Lambda U^T=A,$$

as required in (vi).
(vi)⇒(v): evident.
(v)⇒(iv): Let A = B² with a certain symmetric B, and let b_i be the i-th column of B. Applying the Gram-Schmidt orthogonalization process (see the proof of Theorem A.2.3.(iii)), we can find an orthonormal system of vectors u_1, ..., u_n and a lower triangular matrix L such that b_i = Σ_{j=1}^{i} L_{ij}u_j, or, which is the same, B^T = LU, where U is the orthogonal matrix with the rows u_1^T, ..., u_n^T. We now have A = B² = B^T(B^T)^T = LUU^TL^T = LL^T. We see that A = Δ^TΔ, where the matrix Δ = L^T is upper triangular.
(iv)⇒(iii): evident.
(iii)⇒(i): If A = D^TD, then x^TAx = (Dx)^T(Dx) ≥ 0 for all x.
We have proved the equivalence of the properties (i) – (vi). Slightly modifying the reasoning (do it yourself!), one can prove the equivalence of the properties (i′) – (vi′). □
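The factorizations appearing in Theorem A.7.5 can all be produced numerically; the sketch below builds A = D^TD (hence A ⪰ 0), recovers its square root B = UΛ^{1/2}U^T, and obtains an upper triangular Δ for a positive definite shift of A via Cholesky factorization:

```python
import numpy as np

# Factorizations from Theorem A.7.5 for a randomly generated A >= 0.
rng = np.random.default_rng(6)
D = rng.standard_normal((6, 4))
A = D.T @ D                                     # (iii): A = D^T D, hence A >= 0

lam, U = np.linalg.eigh(A)
assert np.all(lam >= -1e-10)                    # (ii): nonnegative eigenvalues

B = U @ np.diag(np.sqrt(np.clip(lam, 0, None))) @ U.T
assert np.allclose(B @ B, A)                    # (vi): A = B^2 with B >= 0

Apd = A + np.eye(4)                             # positive definite
Lch = np.linalg.cholesky(Apd)                   # Apd = Lch Lch^T, Lch lower tri.
Delta = Lch.T                                   # upper triangular
assert np.allclose(Delta.T @ Delta, Apd)        # (iv')
```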

Remark A.7.1 (i) [Checking positive semidefiniteness] Given an n × n symmetric matrix A, one can check whether it is positive semidefinite by a purely algebraic finite algorithm (the so-called Lagrange diagonalization of a quadratic form) which requires at most O(n³) arithmetic operations. Positive definiteness of a matrix can be checked also by the Choleski factorization algorithm which finds the decomposition in (iv′), if it exists, in approximately \frac{1}{6}n³ arithmetic operations.
There exists another useful algebraic criterion (Sylvester's criterion) for positive semidefiniteness of a matrix; according to this criterion, a symmetric matrix A is positive definite if and only if its angular minors are positive, and A is positive semidefinite if and only if all its principal minors are nonnegative. For example, a symmetric 2 × 2 matrix

$$A=\begin{pmatrix}a&b\\b&c\end{pmatrix}$$

is positive semidefinite if and only if a ≥ 0, c ≥ 0 and Det(A) ≡ ac − b² ≥ 0.
(ii) [Square root of a positive semidefinite matrix] By the first chain of equivalences in Theorem A.7.5, a symmetric matrix A is ⪰ 0 if and only if A is the square of a positive semidefinite matrix B. The latter matrix is uniquely defined by A ⪰ 0 and is called the square root of A (notation: A^{1/2}).
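For the 2 × 2 case the minor criterion is easy to compare against the eigenvalue test directly, since both are available in closed form; a few hand-picked instances:

```python
import math

# Symmetric 2x2 matrix A = [[a, b], [b, c]]:
# A >= 0 iff a >= 0, c >= 0 and a*c - b^2 >= 0 (principal minors test),
# compared with the sign of the smallest eigenvalue.
def psd_by_minors(a, b, c):
    return a >= 0 and c >= 0 and a * c - b * b >= 0

def min_eig(a, b, c):
    # eigenvalues of [[a, b], [b, c]]: ((a+c) +- sqrt((a-c)^2 + 4 b^2)) / 2
    return ((a + c) - math.sqrt((a - c) ** 2 + 4 * b * b)) / 2

for (a, b, c) in [(2, 1, 3), (1, 2, 1), (0, 0, 5), (4, -2, 1), (1, 1, 1)]:
    assert psd_by_minors(a, b, c) == (min_eig(a, b, c) >= -1e-12)
```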

A.7.4.B. The semidefinite cone. When adding symmetric matrices and multiplying them by reals, we add, respectively multiply by reals, the corresponding quadratic forms. It follows that

A.7.4.B.1: The sum of positive semidefinite matrices and a product of a positive semidefinite matrix and a nonnegative real is positive semidefinite,

or, which is the same (see Section B.1.4),

A.7.4.B.2: n × n positive semidefinite matrices form a cone S^n_+ in the Euclidean space S^n of symmetric n × n matrices, the Euclidean structure being given by the Frobenius inner product ⟨A, B⟩ = Tr(AB) = Σ_{i,j} A_{ij}B_{ij}.

The cone S^n_+ is called the semidefinite cone of size n. It is immediately seen that the semidefinite cone S^n_+ is “good” (regular, see Lecture 1); specifically,

• S^n_+ is closed: the limit of a converging sequence of positive semidefinite matrices is positive semidefinite;

• S^n_+ is pointed: the only n × n matrix A such that both A and −A are positive semidefinite is the zero n × n matrix;

• Sn+ possesses a nonempty interior which is comprised of positive definite matrices.


Note that the relation A  B means exactly that A−B ∈ Sn+ , while A  B is equivalent to A−B ∈ int Sn+ .
The “matrix inequalities” A  B (A  B) match the standard properties of the usual scalar inequalities,
e.g.:
A ⪰ A [reflexivity]
A ⪰ B, B ⪰ A ⇒ A = B [antisymmetry]
A ⪰ B, B ⪰ C ⇒ A ⪰ C [transitivity]
A ⪰ B, C ⪰ D ⇒ A + C ⪰ B + D [compatibility with linear operations, I]
A ⪰ B, λ ≥ 0 ⇒ λA ⪰ λB [compatibility with linear operations, II]
Ai ⪰ Bi, Ai → A, Bi → B as i → ∞ ⇒ A ⪰ B [closedness]
with evident modifications when ⪰ is replaced with ≻, or
A ⪰ B, C ≻ D ⇒ A + C ≻ B + D,
etc. Along with these standard properties of inequalities, the inequality ⪰ possesses a nice additional
property:

A.7.4.B.3: In a valid ⪰-inequality

A ⪰ B

one can multiply both sides from the left and from the right by a (rectangular) matrix and its
transpose:

A, B ∈ Sn, A ⪰ B, V ∈ Mn,m ⇒ V^T A V ⪰ V^T B V.

Indeed, we should prove that if A − B ⪰ 0, then also V^T (A − B)V ⪰ 0, which is immediate
– the quadratic form y^T [V^T (A − B)V] y = (V y)^T (A − B)(V y) of y is nonnegative along with
the quadratic form x^T (A − B)x of x.
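As a quick numerical sanity check of A.7.4.B.3 (an illustrative NumPy sketch added here; the sizes and random data are arbitrary assumptions, not part of the text), one can verify on random matrices that congruence preserves positive semidefiniteness:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3                              # arbitrary sizes for the sketch

C = rng.standard_normal((n, n))
D = C @ C.T                              # D = A - B, PSD by construction
V = rng.standard_normal((n, m))          # rectangular "congruence" matrix

# V^T D V should again be positive semidefinite (rule A.7.4.B.3).
eigvals = np.linalg.eigvalsh(V.T @ D @ V)
print(eigvals.min() >= -1e-10)           # True up to round-off
```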
An important additional property of the semidefinite cone is its self-duality:
Theorem A.7.6 A symmetric matrix Y has nonnegative Frobenius inner products with all positive
semidefinite matrices if and only if Y itself is positive semidefinite.
Proof. “if” part: Assume that Y ⪰ 0, and let us prove that then Tr(Y X) ≥ 0 for every X ⪰ 0. Indeed,
the eigenvalue decomposition of Y can be written as

Y = Σ_{i=1}^n λi(Y) ei ei^T,

where the ei are the orthonormal eigenvectors of Y. We now have

Tr(Y X) = Tr((Σ_{i=1}^n λi(Y) ei ei^T) X) = Σ_{i=1}^n λi(Y) Tr(ei ei^T X) = Σ_{i=1}^n λi(Y) Tr(ei^T X ei),   (A.7.6)

where the concluding equality is given by the following well-known property of the trace:
A.7.4.B.4: Whenever matrices A, B are such that the product AB makes sense and is a
square matrix, one has

Tr(AB) = Tr(BA).

Indeed, we should verify that if A ∈ Mp,q and B ∈ Mq,p, then Tr(AB) = Tr(BA). The
left hand side quantity in our hypothetical equality is Σ_{i=1}^p Σ_{j=1}^q Aij Bji, and the right hand side
quantity is Σ_{j=1}^q Σ_{i=1}^p Bji Aij; they indeed are equal.

Looking at the concluding quantity in (A.7.6), we see that it indeed is nonnegative whenever X ⪰ 0
(since Y ⪰ 0 and thus λi(Y) ≥ 0 by P.7.5).
“only if” part: We are given Y such that Tr(Y X) ≥ 0 for all matrices X ⪰ 0, and we should prove
that Y ⪰ 0. This is immediate: for every vector x, the matrix X = xx^T is positive semidefinite (Theorem
A.7.5.(iii)), so that 0 ≤ Tr(Y xx^T) = Tr(x^T Y x) = x^T Y x. Since the resulting inequality x^T Y x ≥ 0 is
valid for every x, we have Y ⪰ 0. 2
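The “if” direction of Theorem A.7.6 is easy to probe numerically. The sketch below (illustrative NumPy code with arbitrary random data, not part of the original text) draws random positive semidefinite Y and X and checks that Tr(Y X) ≥ 0:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_psd(n, rng):
    """A random positive semidefinite matrix of the form B B^T."""
    B = rng.standard_normal((n, n))
    return B @ B.T

# Frobenius inner products of a fixed PSD Y with random PSD X:
Y = random_psd(5, rng)
traces = [np.trace(Y @ random_psd(5, rng)) for _ in range(200)]
print(min(traces) >= -1e-8)              # True: all products are nonnegative
```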
Appendix B

Convex sets in Rn

B.1 Definition and basic properties


B.1.1 A convex set
In school geometry a figure is called convex if it contains, along with every pair of its points x, y,
also the entire segment [x, y] linking the points. This is exactly the definition of a convex set in the
multidimensional case; all we need is to say what “the segment [x, y] linking the points
x, y ∈ Rn” means. This is said by the following

Definition B.1.1 [Convex set]


1) Let x, y be two points in Rn . The set

[x, y] = {z = λx + (1 − λ)y : 0 ≤ λ ≤ 1}

is called a segment with the endpoints x, y.


2) A subset M of Rn is called convex, if it contains, along with every pair of its points x, y, also the
entire segment [x, y]:
x, y ∈ M, 0 ≤ λ ≤ 1 ⇒ λx + (1 − λ)y ∈ M.

Note that by this definition an empty set is convex (by convention, or better to say, by the exact
sense of the definition: for the empty set, you cannot present a counterexample to show that it is not
convex).

B.1.2 Examples of convex sets


B.1.2.A. Affine subspaces and polyhedral sets
Example B.1.1 A linear/affine subspace of Rn is convex.

Convexity of affine subspaces immediately follows from the possibility to represent these sets as solution
sets of systems of linear equations (Proposition A.3.7), due to the following simple and important fact:

Proposition B.1.1 The solution set of an arbitrary (possibly, infinite) system

aTα x ≤ bα , α ∈ A (!)

of nonstrict linear inequalities with n unknowns x – the set

S = {x ∈ Rn : aTα x ≤ bα , α ∈ A}

is convex.


In particular, the solution set of a finite system

Ax ≤ b

of m nonstrict inequalities with n variables (A is m × n matrix) is convex; a set of this latter type is
called polyhedral.

Exercise B.1 Prove Proposition B.1.1.
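Proposition B.1.1 is also easy to probe numerically for a finite system Ax ≤ b. The sketch below (illustrative data chosen so that the set contains the origin; a spot check, not a proof) samples points of a polyhedral set and verifies that midpoints of segments stay inside:

```python
import numpy as np

rng = np.random.default_rng(2)

# A hypothetical polyhedral set S = {x in R^3 : A x <= b} containing 0.
A = rng.standard_normal((5, 3))
b = np.abs(rng.standard_normal(5)) + 0.5     # b > 0, so 0 is in S

def in_S(x, tol=1e-9):
    return bool(np.all(A @ x <= b + tol))

pts = [x for x in rng.standard_normal((300, 3)) if in_S(x)]
pts.append(np.zeros(3))
# Convexity check on the sample: midpoints of segments remain in S.
ok = all(in_S(0.5 * (x + y)) for x in pts for y in pts)
print(len(pts) >= 2, ok)
```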

Remark B.1.1 Note that every set given by Proposition B.1.1 is not only convex, but also closed (why?).
In fact, from Separation Theorem (Theorem B.2.9 below) it follows that
Every closed convex set in Rn is the solution set of a (perhaps, infinite) system of nonstrict
linear inequalities.

Remark B.1.2 Note that replacing some of the nonstrict linear inequalities aTα x ≤ bα in (!) with their
strict versions aTα x < bα, we get a system whose solution set is still convex (why?), but now not
necessarily closed.

B.1.2.B. Unit balls of norms


Let k · k be a norm on Rn, i.e., a real-valued function on Rn satisfying the three characteristic properties
of a norm (Section A.4.1), specifically:

A. [positivity] kxk ≥ 0 for all x ∈ Rn; kxk = 0 if and only if x = 0;

B. [homogeneity] For x ∈ Rn and λ ∈ R, one has

kλxk = |λ|kxk;

C. [triangle inequality] For all x, y ∈ Rn one has

kx + yk ≤ kxk + kyk.

Example B.1.2 The unit ball of the norm k · k – the set

{x ∈ Rn : kxk ≤ 1},

same as every other k · k-ball

{x : kx − ak ≤ r}

(a ∈ Rn and r ≥ 0 are fixed) is convex.
In particular, Euclidean balls (k · k2-balls associated with the standard Euclidean norm kxk2 = √(x^T x))
are convex.

The standard examples of norms on Rn are the `p-norms

kxkp = (Σ_{i=1}^n |xi|^p)^{1/p},  1 ≤ p < ∞;
kxkp = max_{1≤i≤n} |xi|,  p = ∞.

These indeed are norms (which is not clear in advance). When p = 2, we get the usual Euclidean norm;
of course, you know how the Euclidean ball looks. When p = 1, we get

kxk1 = Σ_{i=1}^n |xi|,

and the unit ball is the hyperoctahedron

V = {x ∈ Rn : Σ_{i=1}^n |xi| ≤ 1}.

When p = ∞, we get

kxk∞ = max_{1≤i≤n} |xi|,

and the unit ball is the hypercube

V = {x ∈ Rn : −1 ≤ xi ≤ 1, 1 ≤ i ≤ n}.

Exercise B.2 † Prove that unit balls of norms on Rn are exactly the same as convex sets V in Rn
satisfying the following three properties:
1. V is symmetric w.r.t. the origin: x ∈ V ⇒ −x ∈ V ;
2. V is bounded and closed;
3. V contains a neighbourhood of the origin.
A set V satisfying the outlined properties is the unit ball of the norm

kxk = inf {t ≥ 0 : t^{−1} x ∈ V}.

Hint: You may find it useful to verify and to exploit the following facts:
1. A norm k · k on Rn is Lipschitz continuous with respect to the standard Euclidean distance: there
exists Ck·k < ∞ such that |kxk − kyk| ≤ Ck·k kx − yk2 for all x, y.
2. Vice versa, the Euclidean norm is Lipschitz continuous with respect to a given norm k · k: there
exists ck·k < ∞ such that |kxk2 − kyk2| ≤ ck·k kx − yk for all x, y.
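The three norm axioms A–C are cheap to spot-check for the `p-norms with NumPy (an illustrative sketch on random sample vectors; np.linalg.norm computes kxkp through its ord parameter):

```python
import numpy as np

rng = np.random.default_rng(3)
x, y = rng.standard_normal(6), rng.standard_normal(6)

for p in (1, 2, np.inf):
    nx = np.linalg.norm(x, p)
    ny = np.linalg.norm(y, p)
    # C. triangle inequality
    assert np.linalg.norm(x + y, p) <= nx + ny + 1e-12
    # B. homogeneity, with lambda = -2.5
    assert abs(np.linalg.norm(-2.5 * x, p) - 2.5 * nx) < 1e-9
    # A. positivity at a nonzero point
    assert nx > 0

print("norm axioms hold on the sample")
```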

B.1.2.C. Ellipsoids
Example B.1.3 [Ellipsoid] Let Q be an n × n matrix which is symmetric (Q = Q^T) and positive definite
(x^T Qx > 0 whenever x ≠ 0). Then, for every nonnegative r, the Q-ellipsoid of
radius r centered at a – the set

{x : (x − a)^T Q(x − a) ≤ r²}

is convex.

To see that an ellipsoid {x : (x − a)^T Q(x − a) ≤ r²} is convex, note that since Q is positive
definite, the matrix Q^{1/2} is well-defined and positive definite. Now, if k · k is a norm on Rn
and P is a nonsingular n × n matrix, the function kP xk is a norm along with k · k (why?).
Thus, the function kxkQ ≡ √(x^T Qx) = kQ^{1/2} xk2 is a norm along with k · k2, and the ellipsoid
in question clearly is just the k · kQ-ball of radius r centered at a.
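The identity kxkQ = kQ^{1/2} xk2 used above can be checked directly; the sketch below (illustrative NumPy code with arbitrary random data) builds Q^{1/2} from the spectral decomposition of Q:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
C = rng.standard_normal((n, n))
Q = C @ C.T + n * np.eye(n)              # symmetric positive definite

# Square root of Q via the spectral decomposition Q = U diag(w) U^T.
w, U = np.linalg.eigh(Q)
Q_half = U @ np.diag(np.sqrt(w)) @ U.T

x = rng.standard_normal(n)
norm_Q = np.sqrt(x @ Q @ x)              # ||x||_Q
print(abs(norm_Q - np.linalg.norm(Q_half @ x)) < 1e-10)   # True
```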

B.1.2.D. Neighbourhood of a convex set


Example B.1.4 Let M be a convex set in Rn, and let ε > 0. Then, for every norm k · k on Rn, the
ε-neighbourhood of M, i.e., the set

Mε = {y ∈ Rn : distk·k(y, M) ≡ inf_{x∈M} ky − xk ≤ ε}

is convex.

Exercise B.3 Justify the statement of Example B.1.4.



B.1.3 Inner description of convex sets: Convex combinations and convex hull
B.1.3.A. Convex combinations
Recall the notion of a linear combination y of vectors y1, ..., ym – this is a vector represented as

y = Σ_{i=1}^m λi yi,

where the λi are real coefficients. Specializing this definition, we come to the notion of an affine combination
– this is a linear combination with the sum of coefficients equal to one. The last notion in this
genre is the one of convex combination.

Definition B.1.2 A convex combination of vectors y1, ..., ym is their affine combination with nonnegative
coefficients, or, which is the same, a linear combination

y = Σ_{i=1}^m λi yi

with nonnegative coefficients with unit sum:

λi ≥ 0,  Σ_{i=1}^m λi = 1.

The following statement resembles those in Corollary A.3.2:

Proposition B.1.2 A set M in Rn is convex if and only if it is closed with respect to taking all convex
combinations of its elements, i.e., if and only if every convex combination of vectors from M again is a
vector from M .

Exercise B.4 Prove Proposition B.1.2.


Hint: Assuming λ1, ..., λm > 0, one has

Σ_{i=1}^m λi yi = λ1 y1 + (λ2 + λ3 + ... + λm) Σ_{i=2}^m µi yi,   µi = λi / (λ2 + λ3 + ... + λm).

B.1.3.B. Convex hull


As with the property of being a linear/affine subspace, the property of being convex is preserved by taking
intersections (why?):

Proposition B.1.3 Let {Mα }α be an arbitrary family of convex subsets of Rn . Then the intersection

M = ∩α Mα

is convex.

As an immediate consequence, we come to the notion of convex hull Conv(M ) of a nonempty subset
in Rn (cf. the notions of linear/affine hull):

Corollary B.1.1 [Convex hull]


Let M be a nonempty subset in Rn . Then among all convex sets containing M (these sets do exist, e.g.,
Rn itself ) there exists the smallest one, namely, the intersection of all convex sets containing M . This
set is called the convex hull of M [ notation: Conv(M )].

The linear span of M is the set of all linear combinations of vectors from M , the affine hull is the set
of all affine combinations of vectors from M . As you guess,
Proposition B.1.4 [Convex hull via convex combinations] For a nonempty M ⊂ Rn :
Conv(M ) = {the set of all convex combinations of vectors from M }.

Exercise B.5 Prove Proposition B.1.4.

B.1.3.C. Simplex
The convex hull of m + 1 affinely independent points y0, ..., ym (Section A.3.3) is called the m-dimensional
simplex with the vertices y0, ..., ym. By results of Section A.3.3, every point x of an m-dimensional simplex
with vertices y0, ..., ym admits exactly one representation as a convex combination of the vertices; the
corresponding coefficients form the unique solution to the system of linear equations

Σ_{i=0}^m λi yi = x,  Σ_{i=0}^m λi = 1.

This system is solvable if and only if x ∈ M = Aff({y0, ..., ym}), and the components of the solution (the
barycentric coordinates of x) are affine functions of x ∈ Aff(M); the simplex itself is comprised of the points
of M with nonnegative barycentric coordinates.
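Computing barycentric coordinates amounts to solving the linear system above. A minimal NumPy sketch (the triangle and the query point are arbitrary illustrative data; for a full-dimensional simplex in Rn the system is square and nonsingular):

```python
import numpy as np

# Vertices y0, y1, y2 of a 2-dimensional simplex in R^2 (rows of Y).
Y = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])

def barycentric(x, Y):
    """Solve  sum_i lam_i y_i = x,  sum_i lam_i = 1  for lam."""
    A = np.vstack([Y.T, np.ones(Y.shape[0])])    # coefficient matrix
    rhs = np.append(x, 1.0)
    return np.linalg.solve(A, rhs)               # square system here

lam = barycentric(np.array([0.25, 0.25]), Y)
print(lam)                                       # [0.5  0.25 0.25]
```

Since all coordinates are nonnegative, the query point lies inside the simplex.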

B.1.4 Cones
A nonempty subset M of Rn is called conic, if it contains, along with every point x ∈ M , the entire ray
Rx = {tx : t ≥ 0} spanned by the point:
x ∈ M ⇒ tx ∈ M ∀t ≥ 0.
A convex conic set is called a cone.
Proposition B.1.5 A nonempty subset M of Rn is a cone if and only if it possesses the following pair
of properties:
• is conic: x ∈ M, t ≥ 0 ⇒ tx ∈ M ;
• contains sums of its elements: x, y ∈ M ⇒ x + y ∈ M .

Exercise B.6 Prove Proposition B.1.5.


As an immediate consequence, we get that a cone is closed with respect to taking linear combinations
with nonnegative coefficients of the elements, and vice versa – a nonempty set closed with respect to
taking these combinations is a cone.
Example B.1.5 The solution set of an arbitrary (possibly, infinite) system
aTα x ≤ 0, α ∈ A
of homogeneous linear inequalities with n unknowns x – the set

K = {x : aTα x ≤ 0 ∀α ∈ A}
– is a cone.
In particular, the solution set of a finite system of m homogeneous linear inequalities
Ax ≤ 0
(A is m × n matrix) is a cone; a cone of this latter type is called polyhedral.

Note that the cones given by systems of linear homogeneous nonstrict inequalities necessarily are closed.
From Separation Theorem B.2.9 it follows that, vice versa, every closed convex cone is the solution set
to such a system, so that Example B.1.5 is the generic example of a closed convex cone.
Cones form a very important family of convex sets, and one can develop a theory of cones
completely similar (and in a sense, equivalent) to the theory of general convex sets. E.g., introducing
the notion of a conic combination of vectors x1, ..., xk as a linear combination of the vectors with
nonnegative coefficients, you can easily prove the following statements completely similar to
those for general convex sets, with conic combinations playing the role of convex ones:
• A set is a cone if and only if it is nonempty and is closed with respect to taking all
conic combinations of its elements;
• Intersection of a family of cones is again a cone; in particular, for every nonempty set
M ⊂ Rn there exists the smallest cone containing M – its conic hull Cone(M), and
this conic hull is comprised of all conic combinations of vectors from M.
In particular, the conic hull of a nonempty finite set M = {u1, ..., uN} of vectors in Rn is
the cone

Cone(M) = { Σ_{i=1}^N λi ui : λi ≥ 0, i = 1, ..., N }.
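For a cone spanned by linearly independent generators, membership in the conic hull reduces to a sign check on coordinates. A tiny illustrative sketch in R^2 (the generators u1, u2 are arbitrary assumed data):

```python
import numpy as np

u1 = np.array([1.0, 0.0])
u2 = np.array([1.0, 1.0])
U = np.column_stack([u1, u2])            # invertible: u1, u2 are independent

def in_cone(x, tol=1e-10):
    """x lies in Cone({u1, u2}) iff its coordinates in the basis (u1, u2)
    are nonnegative, i.e., x is a conic combination of the generators."""
    lam = np.linalg.solve(U, x)
    return bool(np.all(lam >= -tol))

print(in_cone(np.array([3.0, 1.0])))     # True  (x = 2*u1 + 1*u2)
print(in_cone(np.array([-1.0, 0.5])))    # False
```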

B.1.5 Calculus of convex sets


Proposition B.1.6 The following operations preserve convexity of sets:
1. Taking intersection: if Mα, α ∈ A, are convex sets, so is the set ∩_α Mα.

2. Taking direct product: if M1 ⊂ Rn1 and M2 ⊂ Rn2 are convex sets, so is the set

M1 × M2 = {y = (y1, y2) ∈ Rn1 × Rn2 = Rn1+n2 : y1 ∈ M1, y2 ∈ M2}.

3. Arithmetic summation and multiplication by reals: if M1, ..., Mk are convex sets in Rn and
λ1, ..., λk are arbitrary reals, then the set

λ1 M1 + ... + λk Mk = { Σ_{i=1}^k λi xi : xi ∈ Mi, i = 1, ..., k }

is convex.
4. Taking the image under an affine mapping: if M ⊂ Rn is convex and x 7→ A(x) ≡ Ax + b is an
affine mapping from Rn into Rm (A is an m × n matrix, b is an m-dimensional vector), then the set

A(M) = {y = A(x) ≡ Ax + b : x ∈ M}

is a convex set in Rm;
5. Taking the inverse image under an affine mapping: if M ⊂ Rn is convex and y 7→ A(y) ≡ Ay + b is an affine
mapping from Rm to Rn (A is an n × m matrix, b is an n-dimensional vector), then the set

A−1(M) = {y ∈ Rm : A(y) ∈ M}

is a convex set in Rm.

Exercise B.7 Prove Proposition B.1.6.



B.1.6 Topological properties of convex sets


Convex sets and closely related objects – convex functions – play the central role in Optimization. To
play this role properly, convexity alone is insufficient; we need convexity plus closedness.

B.1.6.A. The closure


It is clear from definition of a closed set (Section A.4.3) that the intersection of a family of closed sets
in Rn is also closed. From this fact it, as always, follows that for every subset M of Rn there exists the
smallest closed set containing M ; this set is called the closure of M and is denoted cl M . In Analysis
they prove the following inner description of the closure of a set in a metric space (and, in particular, in
Rn ):
The closure of a set M ⊂ Rn is exactly the set comprised of the limits of all converging sequences of
elements of M .
With this fact in mind, it is easy to prove that, e.g., the closure of the open Euclidean ball

{x : kx − ak2 < r}   [r > 0]

is the closed ball {x : kx − ak2 ≤ r}. Another useful application example is the closure of a set

M = {x : aTα x < bα , α ∈ A}

given by strict linear inequalities: if such a set is nonempty, then its closure is given by the nonstrict
versions of the same inequalities:

cl M = {x : aTα x ≤ bα , α ∈ A}.

Nonemptiness of M in the latter example is essential: the set M given by the two strict inequalities

x < 0, −x < 0

in R clearly is empty, so that its closure also is empty; in contrast to this, applying the
above rule formally, we would get the wrong answer

cl M = {x : x ≤ 0, x ≥ 0} = {0}.

B.1.6.B. The interior


Let M ⊂ Rn. We say that a point x ∈ M is an interior point of M, if some neighbourhood of the point
is contained in M, i.e., there exists a ball of positive radius centered at x which belongs to M:

∃r > 0: Br(x) ≡ {y : ky − xk2 ≤ r} ⊂ M.

The set of all interior points of M is called the interior of M [notation: int M ].
E.g.,
• The interior of an open set is the set itself;
• The interior of the closed ball {x : kx − ak2 ≤ r} is the open ball {x : kx − ak2 < r} (why?)
• The interior of a polyhedral set {x : Ax ≤ b} with matrix A not containing zero rows is the set
{x : Ax < b} (why?)
The latter statement is not, generally speaking, valid for sets of solutions of infinite
systems of linear inequalities. E.g., the system of inequalities

x ≤ 1/n,  n = 1, 2, ...

in R has, as its solution set, the nonpositive ray R− = {x ≤ 0}; the interior of this ray
is the negative ray {x < 0}. At the same time, the strict versions of our inequalities

x < 1/n,  n = 1, 2, ...

define the same nonpositive ray, not the negative one.
It is also easily seen (this fact is valid for arbitrary metric spaces, not for Rn only), that
• the interior of an arbitrary set is open
The interior of a set is, of course, contained in the set, which, in turn, is contained in its closure:

int M ⊂ M ⊂ cl M. (B.1.1)

The complement of the interior in the closure – the set

∂M = cl M \int M

– is called the boundary of M, and the points of the boundary are called boundary points of M (Warning:
these points do not necessarily belong to M, since M can be less than cl M; in fact, all boundary points
belong to M if and only if M = cl M, i.e., if and only if M is closed).
The boundary of a set clearly is closed (as the intersection of two closed sets cl M and Rn \int M ; the
latter set is closed as a complement to an open set). From the definition of the boundary,

M ⊂ int M ∪ ∂M [= cl M ],

so that a point from M is either an interior, or a boundary point of M .

B.1.6.C. The relative interior


Many of the constructions in Optimization possess nice properties in the interior of the set the construction
is related to, and may lose these nice properties at the boundary points of the set; this is why in many
cases we are especially interested in interior points of sets and want the set of these points to be “massive
enough”. What can we do if this is not the case – e.g., if there are no interior points at all (look at a segment in
the plane)? It turns out that in these cases we can use a good surrogate of the “normal” interior – the
relative interior, defined as follows.

Definition B.1.3 [Relative interior] Let M ⊂ Rn . We say that a point x ∈ M is relative interior for
M , if M contains the intersection of a small enough ball centered at x with Aff(M ):

∃r > 0 Br (x) ∩ Aff(M ) ≡ {y : y ∈ Aff(M ), ky − xk2 ≤ r} ⊂ M.

The set of all relative interior points of M is called its relative interior [notation: ri M ].

E.g. the relative interior of a singleton is the singleton itself (since a point in the 0-dimensional space is
the same as a ball of a positive radius); more generally, the relative interior of an affine subspace is the
set itself. The interior of a segment [x, y] (x ≠ y) in Rn is empty whenever n > 1; in contrast to this,
the relative interior is nonempty independently of n and is the interval (x, y) – the segment with deleted
endpoints. Geometrically speaking, the relative interior is the interior we get when regard M as a subset
of its affine hull (the latter, geometrically, is nothing but Rk , k being the affine dimension of Aff(M )).

Exercise B.8 Prove that the relative interior of a simplex with vertices y0, ..., ym is exactly the set

{x = Σ_{i=0}^m λi yi : λi > 0, Σ_{i=0}^m λi = 1}.

We can play with the notion of the relative interior in basically the same way as with the one of
interior, namely:

• since Aff(M), as every affine subspace, is closed and contains M, it also contains the smallest closed
set containing M, i.e., cl M. Therefore we have the following analogies of inclusions (B.1.1):

ri M ⊂ M ⊂ cl M [⊂ Aff(M )]; (B.1.2)

• we can define the relative boundary ∂ri M = cl M \ ri M, which is a closed set contained in Aff(M),
and, as for the “actual” interior and boundary, we have

ri M ⊂ M ⊂ cl M = ri M ∪ ∂ri M.

Of course, if Aff(M ) = Rn , then the relative interior becomes the usual interior, and similarly for
boundary; this for sure is the case when int M 6= ∅ (since then M contains a ball B, and therefore the
affine hull of M is the entire Rn , which is the affine hull of B).

B.1.6.D. Nice topological properties of convex sets


An arbitrary set M in Rn may possess very pathological topology: both inclusions in the chain

ri M ⊂ M ⊂ cl M

can be very “non-tight”. E.g., let M be the set of rational numbers in the segment [0, 1] ⊂ R. Then
ri M = int M = ∅ – since every neighbourhood of every rational real contains irrational reals – while
cl M = [0, 1]. Thus, ri M is “incomparably smaller” than M , cl M is “incomparably larger”, and M is
contained in its relative boundary (by the way, what is this relative boundary?).
The following proposition demonstrates that the topology of a convex set M is much better than it
might be for an arbitrary set.

Theorem B.1.1 Let M be a convex set in Rn . Then


(i) The interior int M , the closure cl M and the relative interior ri M are convex;
(ii) If M is nonempty, then the relative interior ri M of M is nonempty
(iii) The closure of M is the same as the closure of its relative interior:

cl M = cl ri M

(in particular, every point of cl M is the limit of a sequence of points from ri M )


(iv) The relative interior remains unchanged when we replace M with its closure:

ri M = ri cl M.

Proof. (i): prove yourself!


(ii): Let M be a nonempty convex set, and let us prove that ri M 6= ∅. By translation, we may
assume that 0 ∈ M . Further, we may assume that the linear span of M is the entire Rn . Indeed, as
far as linear operations and the Euclidean structure are concerned, the linear span L of M , as every
other linear subspace in Rn , is equivalent to certain Rk ; since the notion of relative interior deals only
with linear and Euclidean structures, we lose nothing thinking of Lin(M ) as of Rk and taking it as our
universe instead of the original universe Rn . Thus, in the rest of the proof of (ii) we assume that 0 ∈ M
and Lin(M ) = Rn ; what we should prove is that the interior of M (which in the case in question is the
same as relative interior) is nonempty. Note that since 0 ∈ M , we have Aff(M ) = Lin(M ) = Rn .
Since Lin(M ) = Rn , we can find in M n linearly independent vectors a1 , .., an . Let also a0 = 0. The
n + 1 vectors a0 , ..., an belong to M , and since M is convex, the convex hull of these vectors also belongs
to M. This convex hull is the set

∆ = {x = Σ_{i=0}^n λi ai : λ ≥ 0, Σ_{i=0}^n λi = 1} = {x = Σ_{i=1}^n µi ai : µ ≥ 0, Σ_{i=1}^n µi ≤ 1}.

We see that ∆ is the image of the standard full-dimensional simplex

{µ ∈ Rn : µ ≥ 0, Σ_{i=1}^n µi ≤ 1}

under the linear transformation µ 7→ Aµ, where A is the matrix with the columns a1, ..., an. The standard
simplex clearly has a nonempty interior (comprised of all vectors µ > 0 with Σ_i µi < 1); since A is
nonsingular (due to the linear independence of a1, ..., an), multiplication by A maps open sets onto open
ones, so that ∆ has a nonempty interior. Since ∆ ⊂ M, the interior of M is nonempty. 2

(iii): We should prove that the closure of ri M is exactly the same as the closure of M. In fact we
shall prove even more:
Lemma B.1.1 Let x ∈ ri M and y ∈ cl M . Then all points from the half-segment [x, y),

[x, y) = {z = (1 − λ)x + λy : 0 ≤ λ < 1}

belong to the relative interior of M .


Proof of the Lemma. Let Aff(M) = a + L, L being a linear subspace; since x ∈ M ⊂ Aff(M), we then also have

M ⊂ Aff(M) = x + L.

Let B be the unit ball in L:


B = {h ∈ L : khk2 ≤ 1}.
Since x ∈ ri M, there exists a positive radius r such that

x + rB ⊂ M. (B.1.3)

Now let λ ∈ [0, 1), and let z = (1 − λ)x + λy. Since y ∈ cl M, we have y = lim_{i→∞} yi for a certain sequence
of points from M. Setting zi = (1 − λ)x + λyi, we get zi → z as i → ∞. Now, from (B.1.3) and the
convexity of M it follows that the sets Zi = {u = (1 − λ)x′ + λyi : x′ ∈ x + rB} are contained in M;
clearly, Zi is exactly the set zi + r′B, where r′ = (1 − λ)r > 0. Thus, z is the limit of the sequence zi, and
the r′-neighbourhood (in Aff(M)) of every one of the points zi belongs to M. For every r′′ < r′ and for all i
such that zi is close enough to z, the r′-neighbourhood of zi contains the r′′-neighbourhood of z; thus, a
neighbourhood (in Aff(M)) of z belongs to M, whence z ∈ ri M. 2

A useful byproduct of Lemma B.1.1 is as follows:

Corollary B.1.2 Let M be a convex set. Then every convex combination

Σ_i λi xi

of points xi ∈ cl M in which at least one term with positive coefficient corresponds to xi ∈ ri M
is in fact a point from ri M.

(iv): The statement is evidently true when M is empty, so assume that M is nonempty. The inclusion
ri M ⊂ ri cl M is evident, and all we need is to prove the inverse inclusion. Thus, let z ∈ ri cl M, and let
us prove that z ∈ ri M. Let x ∈ ri M (we already know that the latter set is nonempty). Consider the
segment [x, z]; since z is in the relative interior of cl M, we can extend this segment a little bit through
the point z, not leaving cl M, i.e., there exists y ∈ cl M such that z ∈ [x, y). We are done, since by
Lemma B.1.1, from z ∈ [x, y) with x ∈ ri M, y ∈ cl M it follows that z ∈ ri M. 2

We see from the proof of Theorem B.1.1 that to get the closure of a (nonempty) convex set,
it suffices to subject it to the “radial” closure, i.e., to take a point x ∈ ri M, take all rays in
Aff(M) starting at x and look at the intersection of such a ray l with M; such an intersection
will be a convex set on the line which contains a one-sided neighbourhood of x, i.e., is either
a segment [x, yl], or the entire ray l, or a half-interval [x, yl). In the first two cases we should
not do anything; in the third we should add yl to M. After all rays are looked through and
all “missed” endpoints yl are added to M, we get the closure of M. To understand the role of
convexity here, look at the nonconvex set of rational numbers from [0, 1]; the
interior (≡ relative interior) of this “highly percolated” set is empty, the closure is [0, 1], and
there is no way to restore the closure in terms of the interior.

B.2 Main theorems on convex sets


B.2.1 Caratheodory Theorem
Let us call the affine dimension (or simply dimension) of a nonempty set M ⊂ Rn (notation: dim M) the
affine dimension of Aff(M).
Theorem B.2.1 [Caratheodory] Let M ⊂ Rn, and let dim Conv M = m. Then every point x ∈ Conv M
is a convex combination of at most m + 1 points from M.
Proof. Let x ∈ Conv M. By Proposition B.1.4 on the structure of the convex hull, x is a convex combination
of certain points x1, ..., xN from M:

x = Σ_{i=1}^N λi xi,   [λi ≥ 0, Σ_{i=1}^N λi = 1].

Let us choose among all these representations of x as a convex combination of points from M the one
with the smallest possible N, and let it be the above combination. I claim that N ≤ m + 1 (this claim
leads to the desired statement). Indeed, if N > m + 1, then the system of m + 1 homogeneous equations

Σ_{i=1}^N µi xi = 0,  Σ_{i=1}^N µi = 0

with N unknowns µ1, ..., µN has a nontrivial solution δ1, ..., δN:

Σ_{i=1}^N δi xi = 0,  Σ_{i=1}^N δi = 0,  (δ1, ..., δN) ≠ 0.

It follows that, for every real t,

(∗)  Σ_{i=1}^N [λi + tδi] xi = x.
What is to the left is an affine combination of the xi's. When t = 0, it is a convex combination – all
coefficients are nonnegative. When t is large, it is not a convex combination, since some of the δi's are
negative (indeed, not all of them are zero, and the sum of the δi's is 0). There exists, of course, the largest t
for which the combination (∗) has nonnegative coefficients, namely

t∗ = min_{i: δi<0} λi/|δi|.
For this value of t, the combination (*) is with nonnegative coefficients, and at least one of the coefficients
is zero; thus, we have represented x as a convex combination of less than N points from M , which
contradicts the definition of N . 2
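The proof is constructive and translates directly into an algorithm: as long as more than n + 1 points carry positive weight, find a null-space direction (δ1, ..., δN) and move along it until some weight hits zero. A NumPy sketch of this reduction (illustrative code for the full-dimensional case m = n, with arbitrary random data):

```python
import numpy as np

def caratheodory(X, lam, tol=1e-12):
    """Reduce a convex combination lam of the rows of X (N x n) to one
    using at most n + 1 points, following the elimination in the proof."""
    while X.shape[0] > X.shape[1] + 1:
        N = X.shape[0]
        # delta with sum_i delta_i x_i = 0 and sum_i delta_i = 0:
        A = np.vstack([X.T, np.ones(N)])
        delta = np.linalg.svd(A)[2][-1]          # a null-space vector of A
        neg = delta < -tol
        t = np.min(lam[neg] / -delta[neg])       # t* = min_{delta_i<0} lam_i/|delta_i|
        lam = lam + t * delta                    # still represents the same point
        keep = lam > tol                         # at least one weight is now ~0
        X, lam = X[keep], lam[keep] / lam[keep].sum()
    return X, lam

rng = np.random.default_rng(5)
X = rng.standard_normal((6, 2))                  # 6 points in the plane
lam = np.full(6, 1 / 6)
x = lam @ X                                      # the represented point
X2, lam2 = caratheodory(X, lam)
print(X2.shape[0] <= 3, np.allclose(lam2 @ X2, x))
```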

B.2.2 Radon Theorem


Theorem B.2.2 [Radon] Let S be a set of at least n + 2 points x1, ..., xN in Rn. Then one can split
the set into two nonempty subsets S1 and S2 with intersecting convex hulls: there exists a partitioning
I ∪ J = {1, ..., N}, I ∩ J = ∅, of the index set {1, ..., N} into two nonempty sets I and J and convex
combinations of the points {xi, i ∈ I}, {xj, j ∈ J} which coincide with each other, i.e., there exist
αi, i ∈ I, and βj, j ∈ J, such that

Σ_{i∈I} αi xi = Σ_{j∈J} βj xj;  Σ_{i∈I} αi = Σ_{j∈J} βj = 1;  αi, βj ≥ 0.

Proof. Since N > n + 1, the homogeneous system of n + 1 scalar equations with N unknowns µ1, ..., µN

Σ_{i=1}^N µi xi = 0,  Σ_{i=1}^N µi = 0

has a nontrivial solution λ1, ..., λN:

Σ_{i=1}^N λi xi = 0,  Σ_{i=1}^N λi = 0,  [(λ1, ..., λN) ≠ 0].

Let I = {i : λi ≥ 0}, J = {i : λi < 0}; then I and J are nonempty and form a partitioning of {1, ..., N}.
We have

a ≡ Σ_{i∈I} λi = Σ_{j∈J} (−λj) > 0

(since the sum of all λ's is zero and not all λ's are zero). Setting

αi = λi/a, i ∈ I,   βj = −λj/a, j ∈ J,

we get

αi ≥ 0, βj ≥ 0,  Σ_{i∈I} αi = 1,  Σ_{j∈J} βj = 1,

and

[Σ_{i∈I} αi xi] − [Σ_{j∈J} βj xj] = a^{−1} ([Σ_{i∈I} λi xi] − [Σ_{j∈J} (−λj) xj]) = a^{−1} Σ_{i=1}^N λi xi = 0. 2
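This proof, too, is constructive: a nontrivial solution of the homogeneous system yields the partition. A NumPy sketch (the points are arbitrary illustrative data; the null-space vector is taken from an SVD):

```python
import numpy as np

def radon_partition(X):
    """Split N >= n+2 points (rows of X) into two groups whose convex
    hulls intersect, following the construction in the proof."""
    N = X.shape[0]
    A = np.vstack([X.T, np.ones(N)])         # sum lam_i x_i = 0, sum lam_i = 0
    lam = np.linalg.svd(A)[2][-1]            # a nontrivial null-space solution
    I, J = lam >= 0, lam < 0
    a = lam[I].sum()                         # a > 0 as in the proof
    return (X[I], lam[I] / a), (X[J], -lam[J] / a)

rng = np.random.default_rng(6)
X = rng.standard_normal((4, 2))              # n + 2 = 4 points in the plane
(XI, alpha), (XJ, beta) = radon_partition(X)
# The two convex combinations coincide:
print(np.allclose(alpha @ XI, beta @ XJ))
```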

B.2.3 Helley Theorem


Theorem B.2.3 [Helley, I] Let F be a finite family of convex sets in Rn . Assume that every n + 1 sets
from the family have a point in common. Then all the sets have a point in common.

Proof. Let us prove the statement by induction on the number N of sets in the family. The case of
N ≤ n + 1 is evident. Now assume that the statement holds true for all families with certain number
N ≥ n + 1 of sets, and let S1 , ..., SN , SN +1 be a family of N + 1 convex sets which satisfies the premise
of the Helley Theorem; we should prove that the intersection of the sets S1, ..., SN, SN+1 is nonempty.
Deleting from our (N + 1)-set family the set Si, we get an N-set family which satisfies the premise of the
Helley Theorem and thus, by the inductive hypothesis, the intersection of its members is nonempty:

(∀i ≤ N + 1): T i = S1 ∩ S2 ∩ ... ∩ Si−1 ∩ Si+1 ∩ ... ∩ SN+1 ≠ ∅.

Let us choose a point xi in the (nonempty) set T i . We get N + 1 ≥ n + 2 points from Rn . By Radon’s
Theorem, we can partition the index set {1, ..., N + 1} into two nonempty subsets I and J in such a way

that certain convex combination x of the points xi , i ∈ I, is a convex combination of the points xj , j ∈ J,
as well. Let us verify that x belongs to all the sets S1 , ..., SN +1 , which will complete the proof. Indeed,
let i∗ be an index from our index set; let us prove that x ∈ Si∗ . We have either i∗ ∈ I, or i∗ ∈ J. In
the first case all the sets T j , j ∈ J, are contained in Si∗ (since Si∗ participates in all intersections which
give T i with i 6= i∗ ). Consequently, all the points xj , j ∈ J, belong to Si∗ , and therefore x, which is
a convex combination of these points, also belongs to Si∗ (all our sets are convex!), as required. In the
second case similar reasoning says that all the points xi , i ∈ I, belong to Si∗ , and therefore x, which is a
convex combination of these points, belongs to Si∗ . 2

Exercise B.9 Let S1 , ..., SN be a family of N convex sets in Rn , and let m be the affine dimension of
Aff(S1 ∪ ... ∪ SN ). Assume that every m + 1 sets from the family have a point in common. Prove that all
sets from the family have a point in common.
In the aforementioned version of the Helley Theorem we dealt with finite families of convex
sets. To extend the statement to the case of infinite families, we need to strengthen slightly
the assumption. The resulting statement is as follows:
Theorem B.2.4 [Helley, II] Let F be an arbitrary family of convex sets in Rn . Assume
that
(a) every n + 1 sets from the family have a point in common,
and
(b) every set in the family is closed, and the intersection of the sets from certain finite
subfamily of the family is bounded (e.g., one of the sets in the family is bounded).
Then all the sets from the family have a point in common.
Proof. By the previous theorem, all finite subfamilies of F have nonempty intersections, and
these intersections are convex (since the intersection of a family of convex sets is convex, Theorem
B.1.3); in view of (b) these intersections are also closed. Adding to F all intersections of
finite subfamilies of F, we get a larger family F ′ comprised of closed convex sets, and every finite
subfamily of this larger family again has a nonempty intersection. Besides this, from (b)
it follows that this new family contains a bounded set Q. Since all the sets are closed, the
family of sets
{Q ∩ Q′ : Q′ ∈ F}
is a nested family of compact sets (i.e., a family of compact sets with nonempty intersection
of sets from every finite subfamily); by the well-known Analysis theorem such a family has
a nonempty intersection1) . 2

B.2.4 Polyhedral representations and Fourier-Motzkin Elimination


B.2.4.A. Polyhedral representations
Recall that by definition a polyhedral set X in Rn is the set of solutions of a finite system of nonstrict
linear inequalities in variables x ∈ Rn :
X = {x ∈ Rn : Ax ≤ b} = {x ∈ Rn : aTi x ≤ bi , 1 ≤ i ≤ m}.
1) Here is the proof of this Analysis theorem: assume, on the contrary, that the compact sets Qα , α ∈ A, have
empty intersection. Choose a set Qα∗ from the family; for every x ∈ Qα∗ there is a set Qx in the family which
does not contain x (otherwise x would be a common point of all our sets). Since Qx is closed, there is an open
ball Vx centered at x which does not intersect Qx . The balls Vx , x ∈ Qα∗ , form an open covering of the compact
set Qα∗ , and therefore there exists a finite subcovering Vx1 , ..., VxN of Qα∗ by the balls from the covering. Since
Qxi does not intersect Vxi , we conclude that the intersection of the finite subfamily Qα∗ , Qx1 , ..., QxN is empty,
which is a contradiction.

We shall call such a representation of X its polyhedral description. A polyhedral set always is convex
and closed (Proposition B.1.1). Now let us introduce the notion of polyhedral representation of a set
X ⊂ Rn .

Definition B.2.1 We say that a set X ⊂ Rn is polyhedrally representable, if it admits a representation


as follows:
X = {x ∈ Rn : ∃u ∈ Rk : Ax + Bu ≤ c} (B.2.1)

where A, B are m × n and m × k matrices and c ∈ Rm . A representation of X of the form of (B.2.1) is


called a polyhedral representation of X, and variables u in such a representation are called slack variables.
Geometrically, a polyhedral representation of a set X ⊂ Rn is its representation as the projection
{x : ∃u : (x, u) ∈ Y } of a polyhedral set Y = {(x, u) : Ax + Bu ≤ c} in the space of n + k variables
(x ∈ Rn , u ∈ Rk ) under the linear mapping (the projection) (x, u) ↦ x : R^{n+k}_{x,u} → R^n_x of the
(n + k)-dimensional space of (x, u)-variables (the space where Y lives) to the n-dimensional space of
x-variables where X lives.

Note that every polyhedrally representable set is the image under linear mapping (even a projection) of
a polyhedral, and thus convex, set. It follows that a polyhedrally representable set definitely is convex
(Proposition B.1.6).
Examples: 1) Every polyhedral set X = {x ∈ Rn : Ax ≤ b} is polyhedrally representable – a polyhedral
description of X is nothing but a polyhedral representation with no slack variables (k = 0). Vice versa,
a polyhedral representation of a set X with no slack variables (k = 0) clearly is a polyhedral description
of the set (which therefore is polyhedral).
2) Looking at the set X = {x ∈ Rn : Σ_{i=1}^n |xi | ≤ 1}, we cannot say immediately whether it is or is not
polyhedral; at least the initial description of X is not of the form {x : Ax ≤ b}. However, X admits a
polyhedral representation, e.g., the representation

X = {x ∈ Rn : ∃u ∈ Rn : −ui ≤ xi ≤ ui (⇔ |xi | ≤ ui ), 1 ≤ i ≤ n, Σ_{i=1}^n ui ≤ 1}. (B.2.2)

Note that the set X in question can be described by a system of linear inequalities in x-variables only,
namely, as

X = {x ∈ Rn : Σ_{i=1}^n εi xi ≤ 1 ∀(εi = ±1, 1 ≤ i ≤ n)},

that is, X is polyhedral. However, the above polyhedral description of X (which in fact is minimal in
terms of the number of inequalities involved) requires 2^n inequalities, an astronomically large number
when n is just a few tens. In contrast to this, the polyhedral representation (B.2.2) of the same set requires
just n slack variables u and 2n + 1 linear inequalities on x, u: the “complexity” of this representation is
just linear in n.
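The economy of (B.2.2) is easy to see in code: to certify x ∈ X via the representation one just exhibits the witness u with ui = |xi |, while the slack-free description involves one inequality per sign pattern. A minimal sketch (the function name is ours, not from the text):

```python
import numpy as np

def in_X_via_representation(x, tol=1e-12):
    """Certify membership in X = {x : sum_i |x_i| <= 1} via (B.2.2):
    exhibit slacks u with -u_i <= x_i <= u_i and sum_i u_i <= 1.
    The natural witness is u_i = |x_i|."""
    x = np.asarray(x, dtype=float)
    u = np.abs(x)
    return bool((-u <= x + tol).all() and (x <= u + tol).all()
                and u.sum() <= 1 + tol)

# 2^n inequalities in the slack-free description vs 2n + 1 in (B.2.2):
n = 30
print(2 ** n, 2 * n + 1)   # 1073741824 vs 61
```

Already for n = 30 the direct description needs about a billion inequalities, while (B.2.2) uses 61.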
3) Let a1 , ..., am be given vectors in Rn . Consider the conic hull of the finite set {a1 , ..., am } – the set
Cone {a1 , ..., am } = {x = Σ_{i=1}^m λi ai : λ ≥ 0} (see Section B.1.4). It is absolutely unclear whether this set is
polyhedral. In contrast to this, its polyhedral representation is immediate:

Cone {a1 , ..., am } = {x ∈ Rn : ∃λ ≥ 0 : x = Σ_{i=1}^m λi ai }
                    = {x ∈ Rn : ∃λ ∈ Rm : −λ ≤ 0, x − Σ_{i=1}^m λi ai ≤ 0, −x + Σ_{i=1}^m λi ai ≤ 0}.

In other words, the original description of X is nothing but its polyhedral representation (in slight
disguise), with λi ’s in the role of slack variables.
B.2.4.B. Every polyhedrally representable set is polyhedral! (Fourier-Motzkin elimination)
A surprising and deep fact is that the situation in Example 2) above is quite general:

Theorem B.2.5 Every polyhedrally representable set is polyhedral.

Proof: Fourier-Motzkin Elimination. Recalling the definition of a polyhedrally representable set,
our claim can be rephrased equivalently as follows: the projection of a polyhedral set Y in the space
R^{n+k}_{x,u} of (x, u)-variables onto the subspace R^n_x of x-variables is a polyhedral set in Rn . All we
need is to prove this claim in the case of exactly one slack variable (since the projection which reduces the
dimension by k — “kills k slack variables” — is the result of k subsequent projections, each reducing the
dimension by 1, “killing one slack variable each”).
Thus, let
Y = {(x, u) ∈ Rn+1 : aTi x + bi u ≤ ci , 1 ≤ i ≤ m}
be a polyhedral set with n variables x and a single variable u; we want to prove that the projection

X = {x : ∃u : Ax + bu ≤ c}

of Y on the space of x-variables is polyhedral. To see it, let us split the inequalities defining Y into three
groups (some of them can be empty):
— “black” inequalities — those with bi = 0; these inequalities do not involve u at all;
— “red” inequalities — those with bi > 0. Such an inequality can be rewritten equivalently as
u ≤ bi^{−1} [ci − aTi x], and it imposes a (depending on x) upper bound on u;
— “green” inequalities — those with bi < 0. Such an inequality can be rewritten equivalently as
u ≥ bi^{−1} [ci − aTi x], and it imposes a (depending on x) lower bound on u.
Now it is clear when x ∈ X, that is, when x can be extended, by some u, to a point (x, u) from Y : this
is the case if and only if, first, x satisfies all black inequalities, and, second, the red upper bounds on u
specified by x are compatible with the green lower bounds on u specified by x, meaning that every lower
bound is ≤ every upper bound (the latter is necessary and sufficient to be able to find a value of u which
is ≥ all lower bounds and ≤ all upper bounds). Thus,
X = { x :  aTi x ≤ ci for all “black” indexes i (those with bi = 0);
           bj^{−1} [cj − aTj x] ≤ bk^{−1} [ck − aTk x] for all “green” indexes j (those with bj < 0)
           and all “red” indexes k (those with bk > 0) }.
We see that X is given by finitely many nonstrict linear inequalities in x-variables only, as claimed. 2

The outlined procedure for building polyhedral descriptions (i.e., polyhedral representations not in-
volving slack variables) for projections of polyhedral sets is called Fourier-Motzkin elimination.
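For readers who want to see the procedure run, here is a minimal sketch of a single elimination step (the tuple encoding and the function name fm_eliminate are ours, not from the text):

```python
from itertools import product

def fm_eliminate(rows):
    """One Fourier-Motzkin step.  Each row (a, b, c) encodes the
    inequality  a.x + b*u <= c, where a is a tuple of coefficients
    on the kept variables x.  Returns rows (a, c), meaning a.x <= c,
    describing the projection {x : exists u with all rows valid}."""
    black = [(a, c) for a, b, c in rows if b == 0]
    red   = [(a, b, c) for a, b, c in rows if b > 0]   # u <= (c - a.x)/b
    green = [(a, b, c) for a, b, c in rows if b < 0]   # u >= (c - a.x)/b
    out = list(black)
    for (ag, bg, cg), (ar, br, cr) in product(green, red):
        # aggregate the green row with weight br > 0 and the red row
        # with weight -bg > 0; the u-terms cancel: br*bg + (-bg)*br = 0
        a = tuple(br * g - bg * r for g, r in zip(ag, ar))
        out.append((a, br * cg - bg * cr))
    return out

# project Y = {(x, u) : -u <= x <= u, u <= 1} onto x; expect -1 <= x <= 1
rows = [((-1,), -1, 0),   # -x - u <= 0
        (( 1,), -1, 0),   #  x - u <= 0
        (( 0,),  1, 1)]   #       u <= 1
print(fm_eliminate(rows))
```

Repeating the step kills further slack variables one by one; each step outputs |black| + |red|·|green| rows, which is exactly how the m²/4 blowup discussed next arises.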

B.2.4.C. Some applications


As an immediate application of Fourier-Motzkin elimination, let us take a linear program
min_x {cT x : Ax ≤ b} and look at the set T of values of the objective at all feasible solutions, if any:

T = {t ∈ R : ∃x : cT x = t, Ax ≤ b}.

Rewriting the linear equality cT x = t as a pair of opposite inequalities, we see that T is polyhedrally
representable, and the above definition of T is nothing but a polyhedral representation of this set, with
x in the role of the vector of slack variables. By Fourier-Motzkin elimination, T is polyhedral – this set
is given by a finite system of nonstrict linear inequalities in variable t only. As such, as it is immediately
seen, T is
— either empty (meaning that the LP in question is infeasible),
— or is a below unbounded nonempty set of the form {t ∈ R : t ≤ b} with b ∈ R ∪ {+∞}
(meaning that the LP is feasible and unbounded),
— or is a below bounded nonempty set of the form {t ∈ R : a ≤ t ≤ b} with a ∈ R and +∞ ≥ b ≥ a.
In this case, the LP is feasible and bounded, and a is its optimal value.
Note that given the list of linear inequalities defining T (this list can be built algorithmically by Fourier-
Motzkin elimination as applied to the original polyhedral representation of T ), we can easily detect which
one of the above cases indeed takes place, i.e., identify the feasibility and boundedness status of the
LP and find its optimal value. When the optimal value is finite (case 3 above), we can use the Fourier-Motzkin
elimination backward, starting with t = a ∈ T and extending this value to a pair (t, x) with t = a = cT x
and Ax ≤ b, that is, we can augment the optimal value by an optimal solution. Thus, we can say that
Fourier-Motzkin elimination is a finite real-arithmetic algorithm which allows one to check whether an LP
is feasible and bounded, and when this is the case, to find the optimal value and an optimal solution.
An unpleasant fact of life is that this algorithm is completely impractical, since the elimination process
can blow up exponentially the number of inequalities. Indeed, from the description of the process it is
clear that if a polyhedral set is given by m linear inequalities, then eliminating one variable, we can end
up with as many as m²/4 inequalities (this is what happens if there are m/2 red, m/2 green and no black
inequalities). Eliminating the next variable, we again can “nearly square” the number of inequalities,
and so on. Thus, the number of inequalities in the description of T can become astronomically large
even when the dimension of x is something like 10. The actual importance of Fourier-Motzkin
elimination is of theoretical nature. For example, the LP-related reasoning we have just carried out
shows that every feasible and bounded LP program is solvable – has an optimal solution (we shall revisit
this result in more detail in Section B.2.9.B). This is a fundamental fact for LP, and the above reasoning
(even with the justification of the elimination “charged” to it) is the shortest and most transparent way
to prove this fundamental fact. Another application of the fact that polyhedrally representable sets are
polyhedral is the Homogeneous Farkas Lemma to be stated and proved in Section B.2.5.A; this lemma
will be instrumental in numerous subsequent theoretical developments.
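Fourier-Motzkin elimination is impractical, but the trichotomy for T is easy to observe with any LP solver. A sketch using scipy.optimize.linprog (the three toy LPs are ours; status codes 0/2/3 mean optimal/infeasible/unbounded, and the explicit bounds are needed because linprog defaults to variables ≥ 0):

```python
from scipy.optimize import linprog

FREE = [(None, None)]  # linprog's default bound is t >= 0; we want t free

# T empty: t <= -1 together with t >= 2 (written as -t <= -2)
empty = linprog(c=[1], A_ub=[[1], [-1]], b_ub=[-1, -2], bounds=FREE)
# T nonempty and unbounded below: only t <= 5
unbnd = linprog(c=[1], A_ub=[[1]], b_ub=[5], bounds=FREE)
# T = [-3, 5]: bounded below; the optimal value a = -3 is attained
bndd  = linprog(c=[1], A_ub=[[1], [-1]], b_ub=[5, 3], bounds=FREE)

# status codes: 0 = optimal, 2 = infeasible, 3 = unbounded
print(empty.status, unbnd.status, bndd.status, bndd.fun)
```

The third run also illustrates the fact just proved: a feasible and bounded LP attains its optimal value.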

B.2.4.D. Calculus of polyhedral representations


The fact that polyhedral sets are exactly the same as polyhedrally representable ones does not nullify
the notion of a polyhedral representation. The point is that a set can admit “quite compact” polyhedral
representation involving slack variables and require astronomically large, completely meaningless for any
practical purpose number of inequalities in its polyhedral description (think about the set (B.2.2) when
n = 100). Moreover, polyhedral representations admit a kind of “fully algorithmic calculus.” Specifically,
it turns out that all basic convexity-preserving operations (cf. Proposition B.1.6) as applied to polyhedral
operands preserve polyhedrality; moreover, polyhedral representations of the results are readily given by
polyhedral representations of the operands. Here is the “algorithmic polyhedral analogy” of Proposition
B.1.6:
1. Taking finite intersection: Let Mi , 1 ≤ i ≤ m, be polyhedral sets in Rn given by their polyhedral
representations
Mi = {x ∈ Rn : ∃ui ∈ Rki : Ai x + Bi ui ≤ ci }, 1 ≤ i ≤ m.
Then the intersection of the sets Mi is polyhedral with an explicit polyhedral representation,
specifically,

∩_{i=1}^m Mi = {x ∈ Rn : ∃u = (u1 , ..., um ) ∈ Rk1 +...+km : Ai x + Bi ui ≤ ci , 1 ≤ i ≤ m}

(the constraints form a system of nonstrict linear inequalities in x, u).

2. Taking direct product: Let Mi ⊂ Rni , 1 ≤ i ≤ m, be polyhedral sets given by polyhedral repre-
sentations
Mi = {xi ∈ Rni : ∃ui ∈ Rki : Ai xi + Bi ui ≤ ci }, 1 ≤ i ≤ m.
Then the direct product M1 × ... × Mm := {x = (x1 , ..., xm ) : xi ∈ Mi , 1 ≤ i ≤ m} of the sets is a
polyhedral set with explicit polyhedral representation, specifically,

M1 × ... × Mm = {x = (x1 , ..., xm ) ∈ Rn1 +...+nm :
∃u = (u1 , ..., um ) ∈ Rk1 +...+km : Ai xi + Bi ui ≤ ci , 1 ≤ i ≤ m}.

3. Arithmetic summation and multiplication by reals: Let Mi ⊂ Rn , 1 ≤ i ≤ m, be polyhedral sets
given by polyhedral representations

Mi = {x ∈ Rn : ∃ui ∈ Rki : Ai x + Bi ui ≤ ci }, 1 ≤ i ≤ m,

and let λ1 , ..., λm be reals. Then the set λ1 M1 + ... + λm Mm := {x = λ1 x1 + ... + λm xm : xi ∈
Mi , 1 ≤ i ≤ m} is polyhedral with explicit polyhedral representation, specifically,

λ1 M1 + ... + λm Mm = {x ∈ Rn : ∃(xi ∈ Rn , ui ∈ Rki , 1 ≤ i ≤ m) :
x ≤ Σi λi xi , x ≥ Σi λi xi , Ai xi + Bi ui ≤ ci , 1 ≤ i ≤ m}.

4. Taking the image under an affine mapping: Let M ⊂ Rn be a polyhedral set given by polyhedral
representation
M = {x ∈ Rn : ∃u ∈ Rk : Ax + Bu ≤ c}
and let P(x) = P x + p : Rn → Rm be an affine mapping. Then the image P(M ) := {y =
P x + p : x ∈ M } of M under the mapping is a polyhedral set with explicit polyhedral representation,
specifically,

P(M ) = {y ∈ Rm : ∃(x ∈ Rn , u ∈ Rk ) : y ≤ P x + p, y ≥ P x + p, Ax + Bu ≤ c}.

5. Taking the inverse image under an affine mapping: Let M ⊂ Rn be a polyhedral set given by
polyhedral representation

M = {x ∈ Rn : ∃u ∈ Rk : Ax + Bu ≤ c}

and let P(y) = P y + p : Rm → Rn be an affine mapping. Then the inverse image P −1 (M ) := {y :
P y + p ∈ M } of M under the mapping is a polyhedral set with explicit polyhedral representation,
specifically,
P −1 (M ) = {y ∈ Rm : ∃u : A(P y + p) + Bu ≤ c}.

Note that rules for intersection, taking direct products and taking inverse images, as applied to polyhedral
descriptions of operands, lead to polyhedral descriptions of the results. In contrast to this, the rules for
taking sums with coefficients and images under affine mappings heavily exploit the notion of polyhedral
representation: even when the operands in these rules are given by polyhedral descriptions, there are no
simple ways to point out polyhedral descriptions of the results.
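As an illustration, rule 1 is literally a matter of stacking the systems, each operand keeping its own slack block. A sketch of this single rule (the (A, B, c) data layout and the function names are ours):

```python
import numpy as np

def intersect(reps):
    """Rule 1: polyhedral representation of the intersection of sets
    M_i = {x : exists u_i, A_i x + B_i u_i <= c_i}.  The slack vector
    of the result is the concatenation u = (u_1, ..., u_m); each B_i
    acts only on its own slack block, with zeros elsewhere."""
    As = [np.atleast_2d(A) for A, B, c in reps]
    Bs = [np.atleast_2d(B) for A, B, c in reps]
    cs = [np.asarray(c, float) for A, B, c in reps]
    ks = [B.shape[1] for B in Bs]
    blocks = []
    for i, B in enumerate(Bs):
        left = np.zeros((B.shape[0], sum(ks[:i])))
        right = np.zeros((B.shape[0], sum(ks[i + 1:])))
        blocks.append(np.hstack([left, B, right]))
    return np.vstack(As), np.vstack(blocks), np.concatenate(cs)

def holds(rep, x, u, tol=1e-9):
    """Check that (x, u) satisfies A x + B u <= c."""
    A, B, c = rep
    return bool(np.all(A @ x + B @ u <= c + tol))

# l1-ball in R^2 (representation (B.2.2)) intersected with the box [0,1]^2
l1 = ([[-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]],
      [[-1, 0], [-1, 0], [0, -1], [0, -1], [1, 1]],
      [0, 0, 0, 0, 1])
box = ([[-1, 0], [1, 0], [0, -1], [0, 1]], np.zeros((4, 0)), [0, 1, 0, 1])

r = intersect([l1, box])
print(holds(r, [0.3, 0.4], [0.3, 0.4]),
      holds(r, [0.8, 0.8], [0.8, 0.8]))
```

The same stacking pattern, with obvious modifications, gives rules 2 and 5; rules 3 and 4 additionally introduce the pair of inequalities replacing an equality, exactly as in the text.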

Exercise B.10 Justify the above calculus rules.

Finally, we note that the problem of minimizing a linear form cT x over a set M given by polyhedral
representation:
M = {x ∈ Rn : ∃u ∈ Rk : Ax + Bu ≤ c}
can be immediately reduced to an explicit LP program, namely,

min_{x,u} { cT x : Ax + Bu ≤ c }.

A reader with some experience in Linear Programming has definitely made heavy use of the above “calculus of
polyhedral representations” when building LPs (perhaps without a clear understanding of what in fact is
going on, just as Molière’s Monsieur Jourdain spoke prose all his life without knowing it).
B.2.5 General Theorem on Alternative and Linear Programming Duality


Most of the contents of this Section is reproduced literally in the main body of the Notes (Section 1.2).
We repeat this text here to allow for smooth reading of the Appendix.

B.2.5.A. Homogeneous Farkas Lemma


Let a1 , ..., aN be vectors from Rn , and let a be another vector. Here we address the question: when does
a belong to the cone spanned by the vectors a1 , ..., aN , i.e., when can a be represented as a linear
combination of the ai with nonnegative coefficients? A necessary condition is evident: if

a = Σ_{i=1}^N λi ai [λi ≥ 0, i = 1, ..., N ]

then every vector h which has nonnegative inner products with all ai should also have a nonnegative inner
product with a:

a = Σi λi ai & λi ≥ 0 ∀i & hT ai ≥ 0 ∀i ⇒ hT a ≥ 0.

The Homogeneous Farkas Lemma says that this evident necessary condition is also sufficient:

Lemma B.2.1 [Homogeneous Farkas Lemma] Let a, a1 , ..., aN be vectors from Rn . The vector a is a
conic combination of the vectors ai (linear combination with nonnegative coefficients) if and only if every
vector h satisfying hT ai ≥ 0, i = 1, ..., N , satisfies also hT a ≥ 0. In other words, a homogeneous linear
inequality
aT h ≥ 0
in variable h is consequence of the system

aTi h ≥ 0, 1 ≤ i ≤ N

of homogeneous linear inequalities if and only if it can be obtained from the inequalities of the system by
“admissible linear aggregation” – taking their weighted sum with nonnegative weights.

Proof. The necessity – the “only if” part of the statement – was proved before the Farkas Lemma
was formulated. Let us prove the “if” part of the Lemma. Thus, assume that every vector h satisfying
hT ai ≥ 0 ∀i satisfies also hT a ≥ 0, and let us prove that a is a conic combination of the vectors ai .
An “intelligent” proof goes as follows. The set Cone {a1 , ..., aN } of all conic combinations of a1 , ..., aN
is polyhedrally representable (Example 3 in Section B.2.4.A) and as such is polyhedral (Theorem B.2.5):

Cone {a1 , ..., aN } = {x ∈ Rn : pTj x ≥ bj , 1 ≤ j ≤ J}. (!)

Observing that 0 ∈ Cone {a1 , ..., aN }, we conclude that bj ≤ 0 for all j; and since λai ∈ Cone {a1 , ..., aN }
for every i and every λ ≥ 0, we should have λpTj ai ≥ bj for all i, j and all λ ≥ 0, whence pTj ai ≥ 0 for
all i and j. For every j, the relation pTj ai ≥ 0 for all i implies, by the premise of the statement we want to
prove, that pTj a ≥ 0, and since bj ≤ 0, we see that pTj a ≥ bj for all j, meaning that a indeed belongs to
Cone {a1 , ..., aN } due to (!). 2

An interested reader can get a better understanding of the power of Fourier-Motzkin elimination,
which ultimately is the basis for the above intelligent proof, by comparing this proof with the one based
on Helley’s Theorem.

Proof based on Helley’s Theorem. As above, we assume that every vector h satisfying hT ai ≥ 0
∀i satisfies also hT a ≥ 0, and we want to prove that a is a conic combination of the vectors ai .
There is nothing to prove when a = 0 – the zero vector of course is a conic combination of the vectors
ai . Thus, from now on we assume that a 6= 0.
1°. Let
Π = {h : aT h = −1},
and let
Ai = {h ∈ Π : aTi h ≥ 0}.
Π is a hyperplane in Rn , and every Ai is a polyhedral set contained in this hyperplane and is therefore
convex.

2°. What we know is that the intersection of all the sets Ai , i = 1, ..., N , is empty (since a vector h
from the intersection would have nonnegative inner products with all ai and the inner product −1 with
a, and we are given that no such h exists). Let us choose the smallest, in the number of elements, of
those sub-families of the family of sets A1 , ..., AN which still have empty intersection of their members;
without loss of generality we may assume that this is the family A1 , ..., Ak . Thus, the intersection of all
k sets A1 , ..., Ak is empty, but the intersection of every k − 1 sets from the family A1 , ..., Ak is nonempty.

3°. We claim that
(A) a ∈ Lin({a1 , ..., ak });
(B) The vectors a1 , ..., ak are linearly independent.
(A) is easy: assuming that a ∉ E = Lin({a1 , ..., ak }), we conclude that the orthogonal
projection f of the vector a onto the orthogonal complement E ⊥ of E is nonzero. The inner
product of f and a is the same as f T f , i.e., is positive, while f T ai = 0, i = 1, ..., k. Taking
h = −(f T f )−1 f , we see that hT a = −1 and hT ai = 0, i = 1, ..., k. In other words, h belongs
to every set Ai , i = 1, ..., k, by definition of these sets, and therefore the intersection of the
sets A1 , ..., Ak is nonempty, which is a contradiction.
(B) is given by the Helley Theorem I. (B) is evident when k = 1, since in this case linear
dependence of a1 , ..., ak would mean that a1 = 0; by (A), this implies that a = 0, which
is not the case. Now let us prove (B) in the case of k > 1. Assume, contrary
to what should be proven, that a1 , ..., ak are linearly dependent, so that the dimension of
E = Lin({a1 , ..., ak }) is a certain m < k. We already know from (A) that a ∈ E. Now let
A′i = Ai ∩ E. We claim that every k − 1 of the sets A′i have a nonempty intersection, while
all k of these sets have empty intersection. The second claim is evident: since the sets A1 , ..., Ak
have empty intersection, the same is the case with their parts A′i . The first claim also is
easily supported: let us take k − 1 of the primed sets, say, A′1 , ..., A′k−1 . By construction,
the intersection of A1 , ..., Ak−1 is nonempty; let h be a vector from this intersection, i.e., a
vector with nonnegative inner products with a1 , ..., ak−1 and the product −1 with a. When
replacing h with its orthogonal projection h′ on E, we do not vary all these inner products,
since these are products with vectors from E; thus, h′ also is a common point of A1 , ..., Ak−1 ,
and since this is a point from E, it is a common point of the primed sets A′1 , ..., A′k−1 as well.
Now we can complete the proof of (B): the sets A′1 , ..., A′k are convex sets belonging to
the hyperplane Π′ = Π ∩ E = {h ∈ E : aT h = −1} (Π′ indeed is a hyperplane in E,
since 0 ≠ a ∈ E) in the m-dimensional linear subspace E. Π′ is an affine subspace of
the affine dimension ℓ = dim E − 1 = m − 1 < k − 1 (recall that we are in the situation
when m = dim E < k), and every ℓ + 1 ≤ k − 1 sets from the family A′1 ,...,A′k have a
nonempty intersection. From the Helley Theorem I (see Exercise B.9) it follows that all the
sets A′1 , ..., A′k have a point in common, which, as we know, is not the case. The contradiction
we have got proves that a1 , ..., ak are linearly independent.

4°. With (A) and (B) at our disposal, we can easily complete the proof of the “if” part of the Farkas
Lemma. Specifically, by (A), we have

a = Σ_{i=1}^k λi ai
with some real coefficients λi , and all we need is to prove that these coefficients are nonnegative. Assume,
on the contrary, that, say, λ1 < 0. Let us extend the (linearly independent in view of (B)) system of
vectors a1 , ..., ak by vectors f1 , ..., fn−k to a basis in Rn , and let ξi (x) be the coordinates of a vector x in
this basis. The function ξ1 (x) is a linear form of x and therefore is the inner product with a certain vector:

ξ1 (x) = f T x ∀x.

Now we have

f T a = ξ1 (a) = λ1 < 0

and

f T ai = 1 for i = 1,  f T ai = 0 for i = 2, ..., k,

so that f T ai ≥ 0, i = 1, ..., k. We conclude that a proper normalization of f – namely, the vector |λ1 |^{−1} f
– belongs to every one of the sets A1 , ..., Ak , which is the desired contradiction: by construction, the
intersection of these sets is empty.
2
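Numerically, the question treated by the Homogeneous Farkas Lemma, membership of a in Cone {a1 , ..., aN }, is a linear feasibility problem, so any LP solver can decide it. A sketch using scipy.optimize.linprog with a zero objective (the function name and the example vectors are ours; linprog's default bounds already enforce λ ≥ 0):

```python
import numpy as np
from scipy.optimize import linprog

def in_cone(a, gens, tol=1e-9):
    """a is a conic combination of the vectors `gens` iff the LP
    {find lambda >= 0 : G lambda = a} is feasible, where G has the
    generators as columns.  We only test feasibility (zero cost)."""
    G = np.column_stack(gens)
    res = linprog(c=np.zeros(G.shape[1]),
                  A_eq=G, b_eq=np.asarray(a, float))
    return res.status == 0

a1, a2 = [1.0, 0.0], [1.0, 1.0]
print(in_cone([2.0, 1.0], [a1, a2]))   # lambda = (1, 1) works -> True
print(in_cone([0.0, -1.0], [a1, a2]))  # would need lambda_2 < 0 -> False
```

When the LP is infeasible, the lemma guarantees a separating h with hT ai ≥ 0 for all i and hT a < 0; LP duality (below) is exactly the mechanism that produces it.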

B.2.5.B. General Theorem on Alternative


B.2.5.B.1 Certificates for solvability and insolvability. Consider a (finite) system of scalar
inequalities with n unknowns. To be as general as possible, we do not assume for the time being the
inequalities to be linear, and we allow for both non-strict and strict inequalities in the system, as well as
for equalities. Since an equality can be represented by a pair of non-strict inequalities, our system can
always be written as
fi (x) Ωi 0, i = 1, ..., m, (S)
where every Ωi is either the relation ” > ” or the relation ” ≥ ”.
The basic question about (S) is
(?) Whether (S) has a solution or not.
Knowing how to answer the question (?), we are able to answer many other questions. E.g., to verify
whether a given real a is a lower bound on the optimal value c∗ of (LP) is the same as to verify whether
the system
−cT x + a > 0
Ax − b ≥ 0

has no solutions.
The general question above is too difficult, and it makes sense to pass from it to a seemingly simpler
one:
(??) How to certify that (S) has, or does not have, a solution.
Imagine that you are very smart and know the correct answer to (?); how could you convince me that
your answer is correct? What could be an “evident for everybody” validity certificate for your answer?
If your claim is that (S) is solvable, a certificate could be just to point out a solution x∗ to (S). Given
this certificate, one can substitute x∗ into the system and check whether x∗ indeed is a solution.
Assume now that your claim is that (S) has no solutions. What could be a “simple certificate”
of this claim? How can one certify a negative statement? This is a highly nontrivial problem, and not
just in mathematics; consider, for example, criminal law: how should someone accused of a murder prove his
innocence? The “real life” answer to the question “how to certify a negative statement” is discouraging:
such a statement normally cannot be certified (this is where the rule “a person is presumed innocent until
proven guilty” comes from). In mathematics, however, the situation is different: in some cases there exist
“simple certificates” of negative statements. E.g., in order to certify that (S) has no solutions, it suffices
to demonstrate that a consequence of (S) is a contradictory inequality such as

−1 ≥ 0.
For example, assume that λi , i = 1, ..., m, are nonnegative weights. Combining inequalities from (S) with
these weights, we come to the inequality

Σ_{i=1}^m λi fi (x) Ω 0 (Comb(λ))

where Ω is either ” > ” (this is the case when the weight of at least one strict inequality from (S) is
positive), or ” ≥ ” (otherwise). Since the resulting inequality, due to its origin, is a consequence of the
system (S), i.e., it is satisfied by every solution to (S), it follows that if (Comb(λ)) has no solutions at
all, we can be sure that (S) has no solution. Whenever this is the case, we may treat the corresponding
vector λ as a “simple certificate” of the fact that (S) is infeasible.
Let us look what the outlined approach means when (S) is comprised of linear inequalities:

(S) : {aTi x Ωi bi , i = 1, ..., m}, where every Ωi is either ” > ” or ” ≥ ”.

Here the “combined inequality” is linear as well:

(Comb(λ)) : (Σ_{i=1}^m λi ai )T x Ω Σ_{i=1}^m λi bi

(Ω is ” > ” whenever λi > 0 for at least one i with Ωi = ” > ”, and Ω is ” ≥ ” otherwise). Now, when
can a linear inequality

dT x Ω e

be contradictory? Of course, this can happen only when d = 0. Whether in this case the inequality is
contradictory depends on the relation Ω: if Ω = ” > ”, then the inequality is contradictory
if and only if e ≥ 0, and if Ω = ” ≥ ”, it is contradictory if and only if e > 0. We have established the
following simple result:

Proposition B.2.1 Consider a system of linear inequalities

(S) :  aTi x > bi , i = 1, ..., ms ;
       aTi x ≥ bi , i = ms + 1, ..., m,

with n-dimensional vector of unknowns x. Let us associate with (S) two systems of linear inequalities
and equations with m-dimensional vector of unknowns λ:

TI :   (a) λ ≥ 0;
       (b) Σ_{i=1}^m λi ai = 0;
       (cI ) Σ_{i=1}^m λi bi ≥ 0;
       (dI ) Σ_{i=1}^{ms} λi > 0.

TII :  (a) λ ≥ 0;
       (b) Σ_{i=1}^m λi ai = 0;
       (cII ) Σ_{i=1}^m λi bi > 0.

Assume that at least one of the systems TI , TII is solvable. Then the system (S) is infeasible.
B.2.5.B.2 General Theorem on Alternative. Proposition B.2.1 says that in some cases it is
easy to certify infeasibility of a linear system of inequalities: a “simple certificate” is a solution to another
system of linear inequalities. Note, however, that the existence of a certificate of this latter type is so far
only a sufficient, but not a necessary, condition for the infeasibility of (S). A fundamental result
in the theory of linear inequalities is that the sufficient condition in question is in fact also necessary:

Theorem B.2.6 [General Theorem on Alternative] In the notation from Proposition B.2.1, system (S)
has no solutions if and only if either TI , or TII , or both these systems, are solvable.

Proof. GTA is a more or less straightforward corollary of the Homogeneous Farkas Lemma. Indeed, in
view of Proposition B.2.1, all we need to prove is that if (S) has no solution, then at least one of the
systems TI , or TII is solvable. Thus, assume that (S) has no solutions, and let us look at the consequences.
Let us associate with (S) the following system of homogeneous linear inequalities in variables x, τ , ε:

(a) τ − ε ≥ 0,
(b) aTi x − bi τ − ε ≥ 0, i = 1, ..., ms , (B.2.3)
(c) aTi x − bi τ ≥ 0, i = ms + 1, ..., m.

I claim that

(!) For every solution to (B.2.3), one has ε ≤ 0.

Indeed, assuming that (B.2.3) has a solution x, τ, ε with ε > 0, we conclude from (B.2.3.a)
that τ > 0; from (B.2.3.b − c) it now follows that τ^{−1} x is a solution to (S), while the latter
system is unsolvable.

Now, (!) says that the homogeneous linear inequality

−ε ≥ 0 (B.2.4)

is a consequence of the system of homogeneous linear inequalities (B.2.3). By the Homogeneous Farkas
Lemma, it follows that there exist nonnegative weights ν, λi , i = 1, ..., m, such that the vector of
coefficients of the variables x, τ, ε in the left hand side of (B.2.4) is a linear combination, with the coefficients
ν, λ1 , ..., λm , of the vectors of coefficients of the variables in the inequalities from (B.2.3):
(a) Σ_{i=1}^m λi ai = 0
(b) −Σ_{i=1}^m λi bi + ν = 0 (B.2.5)
(c) −Σ_{i=1}^{ms} λi − ν = −1

Recall that by their origin, ν and all λi are nonnegative. Now, it may happen that λ1 , ..., λms are all zero.
In this case ν > 0 by (B.2.5.c), and relations (B.2.5.a − b) say that λ1 , ..., λm solve TII . In the remaining
case (that is, when not all of λ1 , ..., λms are zero, or, which is the same, when Σ_{i=1}^{ms} λi > 0), the same
relations say that λ1 , ..., λm solve TI . 2
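For an all-nonstrict system aTi x ≥ bi (the case ms = 0, where only TII is relevant), a certificate of infeasibility can itself be found by an LP: maximize Σi λi bi over λ ≥ 0 with Σi λi ai = 0, adding the normalization Σi λi ≤ 1 to keep the LP bounded. A sketch (setup and names are ours) using scipy.optimize.linprog:

```python
import numpy as np
from scipy.optimize import linprog

def infeasibility_certificate(A, b):
    """Look for a T_II certificate for the nonstrict system A x >= b:
    lambda >= 0 with lambda^T A = 0 and lambda^T b > 0.  We maximize
    lambda^T b under the normalization sum(lambda) <= 1; linprog's
    default bounds already give lambda >= 0."""
    A, b = np.atleast_2d(A), np.asarray(b, float)
    m = A.shape[0]
    res = linprog(c=-b,                       # linprog minimizes
                  A_eq=A.T, b_eq=np.zeros(A.shape[1]),
                  A_ub=np.ones((1, m)), b_ub=[1.0])
    if res.status == 0 and -res.fun > 1e-9:
        return res.x                          # certificate found
    return None                               # no certificate of this kind

# x >= 1 together with -x >= 0 is infeasible; equal weights certify it
lam = infeasibility_certificate([[1.0], [-1.0]], [1.0, 0.0])
print(lam)   # e.g. [0.5 0.5]
```

By the General Theorem on Alternative, for nonstrict systems this search fails only when the system is in fact feasible.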

B.2.5.B.3 Corollaries of the Theorem on Alternative. We formulate here explicitly two


very useful principles following from the Theorem on Alternative:
A. A system of linear inequalities

aTi x Ωi bi , i = 1, ..., m

has no solutions if and only if one can combine the inequalities of the system in a linear fashion
(i.e., multiplying the inequalities by nonnegative weights, adding the results and passing, if
necessary, from an inequality aT x > b to the inequality aT x ≥ b) to get a contradictory


inequality, namely, either the inequality 0T x ≥ 1, or the inequality 0T x > 0.
B. A linear inequality
aT0 x Ω0 b0
is a consequence of a solvable system of linear inequalities

aTi x Ωi bi , i = 1, ..., m

if and only if it can be obtained by combining, in a linear fashion, the inequalities of the
system and the trivial inequality 0 > −1.

It should be stressed that the above principles are highly nontrivial and very deep. Consider,
e.g., the following system of 4 linear inequalities with two variables u, v:

−1 ≤ u ≤ 1
−1 ≤ v ≤ 1.

From these inequalities it follows that

u² + v² ≤ 2, (!)

which in turn implies, by the Cauchy inequality, the linear inequality u + v ≤ 2:

u + v = 1 × u + 1 × v ≤ √(1² + 1²) · √(u² + v²) ≤ √2 · √2 = 2. (!!)

The concluding inequality is linear and is a consequence of the original system, but in the
demonstration of this fact both steps (!) and (!!) are “highly nonlinear”. It is absolutely
unclear a priori why the same consequence can, as it is stated by Principle A, be derived
from the system in a linear manner as well [of course it can – it suffices just to add two
inequalities u ≤ 1 and v ≤ 1].
Note that the Theorem on Alternative and its corollaries A and B heavily exploit the fact
that we are speaking about linear inequalities. E.g., consider the following 2 quadratic and
2 linear inequalities with two variables:

(a) u² ≥ 1;
(b) v² ≥ 1;
(c) u ≥ 0;
(d) v ≥ 0;

along with the quadratic inequality

(e) uv ≥ 1.

The inequality (e) is clearly a consequence of (a) – (d). However, if we extend the system of
inequalities (a) – (d) by all “trivial” (i.e., identically true) linear and quadratic inequalities
with 2 variables, like 0 > −1, u² + v² ≥ 0, u² + 2uv + v² ≥ 0, u² − uv + v² ≥ 0, etc.,
and ask whether (e) can be derived in a linear fashion from the inequalities of the extended
system, the answer will be negative. Thus, Principle A fails to be true already for quadratic
inequalities (which is a great sorrow; otherwise there would be no difficult problems at all!)

B.2.5.C. Application: Linear Programming Duality


We are about to use the Theorem on Alternative to obtain the basic results of the LP duality theory.

B.2.5.C.1 Dual to an LP program: the origin  The motivation for constructing the problem
dual to an LP program

        c∗ = min_x { c^T x : Ax − b ≥ 0 },  where the rows of A ∈ R^{m×n} are a_1^T, a_2^T, ..., a_m^T,   (LP)
is the desire to generate, in a systematic way, lower bounds on the optimal value c∗ of (LP). An evident
way to bound from below a given function f(x) on the domain given by a system of inequalities

        g_i(x) ≥ b_i,  i = 1, ..., m,   (B.2.6)

is offered by what is called the Lagrange duality and is as follows:
Lagrange Duality:
• Let us look at all inequalities which can be obtained from (B.2.6) by linear aggregation,
i.e., at the inequalities of the form

        Σ_i y_i g_i(x) ≥ Σ_i y_i b_i   (B.2.7)

with the “aggregation weights” y_i ≥ 0. Note that the inequality (B.2.7), due to its origin, is
valid on the entire set X of solutions of (B.2.6).
• Depending on the choice of aggregation weights, it may happen that the left hand side
of (B.2.7) is ≤ f(x) for all x ∈ R^n. Whenever this is the case, the right hand side Σ_i y_i b_i of
(B.2.7) is a lower bound on f on X.

Indeed, on X the quantity Σ_i y_i b_i is a lower bound on Σ_i y_i g_i(x), and for the y in question
the latter function of x is everywhere ≤ f(x).
It follows that
• The optimal value in the problem

        max_y { Σ_i y_i b_i :  (a) y ≥ 0,  (b) Σ_i y_i g_i(x) ≤ f(x) ∀x ∈ R^n }   (B.2.8)

is a lower bound on the values of f on the set of solutions to the system (B.2.6).
Let us look at what happens with the Lagrange duality when f and g_i are homogeneous linear functions:
f = c^T x, g_i(x) = a_i^T x. In this case, the requirement (B.2.8.b) merely says that c = Σ_i y_i a_i (or, which
is the same, A^T y = c due to the origin of A). Thus, problem (B.2.8) becomes the Linear Programming
problem

        max_y { b^T y : A^T y = c, y ≥ 0 },   (LP∗)

which is nothing but the LP dual of (LP).


By the construction of the dual problem,
[Weak Duality] The optimal value in (LP∗ ) is less than or equal to the optimal value in (LP).
In fact, the “less than or equal to” in the latter statement is “equal”, provided that the optimal value c∗
in (LP) is a number (i.e., (LP) is feasible and below bounded). To see that this indeed is the case, note
that a real a is a lower bound on c∗ if and only if cT x ≥ a whenever Ax ≥ b, or, which is the same, if
and only if the system of linear inequalities
(Sa ) : −cT x > −a, Ax ≥ b
has no solution. We know by the Theorem on Alternative that the latter fact means that some other
system of linear inequalities (more exactly, at least one of a certain pair of systems) does have a solution.
More precisely,

(*) (Sa) has no solutions if and only if at least one of the following two systems with m + 1
unknowns:

        TI :  (a)   λ = (λ_0, λ_1, ..., λ_m) ≥ 0;
              (b)   −λ_0 c + Σ_{i=1}^m λ_i a_i = 0;
              (cI)  −λ_0 a + Σ_{i=1}^m λ_i b_i ≥ 0;
              (dI)  λ_0 > 0,

or

        TII : (a)   λ = (λ_0, λ_1, ..., λ_m) ≥ 0;
              (b)   −λ_0 c + Σ_{i=1}^m λ_i a_i = 0;
              (cII) −λ_0 a + Σ_{i=1}^m λ_i b_i > 0

– has a solution.
Now assume that (LP) is feasible. We claim that under this assumption (Sa ) has no solutions if and
only if TI has a solution.
The implication ”TI has a solution ⇒ (Sa) has no solution” is readily given by the above
remarks. To verify the inverse implication, assume that (Sa) has no solutions and the system
Ax ≥ b has a solution, and let us prove that then TI has a solution. If TI has no solution, then
by (*) TII has a solution and, moreover, λ0 = 0 for (every) solution to TII (since a solution
to the latter system with λ0 > 0 solves TI as well). But the fact that TII has a solution λ
with λ0 = 0 is independent of the values of a and c; if this were the case, it would
mean, by the same Theorem on Alternative, that, e.g., the following instance of (Sa ):

0^T x > −1,  Ax ≥ b

has no solutions. The latter means that the system Ax ≥ b has no solutions – a contradiction
with the assumption that (LP) is feasible. 2

Now, if TI has a solution, then it has a solution with λ_0 = 1 as well (to see this, pass from a solution
λ to λ/λ_0; this construction is well-defined, since λ_0 > 0 for every solution to TI). Now, an
(m + 1)-dimensional vector λ = (1, y) is a solution to TI if and only if the m-dimensional vector y solves
the system of linear inequalities and equations

        y ≥ 0;
        A^T y ≡ Σ_{i=1}^m y_i a_i = c;   (D)
        b^T y ≥ a

Summarizing our observations, we come to the following result.

Proposition B.2.2 Assume that system (D) associated with the LP program (LP) has a solution (y, a).
Then a is a lower bound on the optimal value in (LP). Vice versa, if (LP) is feasible and a is a lower
bound on the optimal value of (LP), then a can be extended by a properly chosen m-dimensional vector y
to a solution to (D).

We see that the entity responsible for lower bounds on the optimal value of (LP) is the system (D): every
solution to the latter system induces a bound of this type, and in the case when (LP) is feasible, all lower
bounds can be obtained from solutions to (D). Now note that if (y, a) is a solution to (D), then the pair
(y, bT y) also is a solution to the same system, and the lower bound bT y on c∗ is not worse than the lower
bound a. Thus, as far as lower bounds on c∗ are concerned, we lose nothing by restricting ourselves to
the solutions (y, a) of (D) with a = b^T y; the best lower bound on c∗ given by (D) is therefore the optimal
value of the problem max_y { b^T y : A^T y = c, y ≥ 0 }, which is nothing but the problem (LP∗) dual to (LP).
Note that (LP∗) is also a Linear Programming program.
All we know about the dual problem to the moment is the following:

Proposition B.2.3 Whenever y is a feasible solution to (LP∗ ), the corresponding value of the dual
objective bT y is a lower bound on the optimal value c∗ in (LP). If (LP) is feasible, then for every a ≤ c∗
there exists a feasible solution y of (LP∗ ) with bT y ≥ a.
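Proposition B.2.3 can be illustrated numerically. The sketch below, assuming scipy is available (the instance A, b, c is an illustration choice, not from the text), solves (LP) and (LP∗) with scipy.optimize.linprog and checks that a dual feasible solution yields a lower bound on c∗ which, here, is attained:

```python
import numpy as np
from scipy.optimize import linprog

# A small feasible and bounded instance of (LP): min c^T x s.t. Ax >= b.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 4.0])
c = np.array([1.0, 1.0])

# linprog minimizes subject to A_ub x <= b_ub, so Ax >= b is passed as -Ax <= -b;
# note that x must be declared free (linprog's default bounds are x >= 0).
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(None, None)] * 2)

# The dual (LP*): max b^T y s.t. A^T y = c, y >= 0, i.e. minimize -b^T y.
dual = linprog(-b, A_eq=A.T, b_eq=c, bounds=[(0, None)] * 3)

cstar = primal.fun    # optimal value of (LP)
dstar = -dual.fun     # optimal value of (LP*)

# Any dual feasible y gives a lower bound on c*; e.g. y = (1, 1, 0) is feasible:
y_feas = np.array([1.0, 1.0, 0.0])
assert np.allclose(A.T @ y_feas, c) and (y_feas >= 0).all()
assert b @ y_feas <= cstar + 1e-7     # the lower bound of Proposition B.2.3
assert abs(cstar - dstar) < 1e-7      # and the best such bound is attained
```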

B.2.5.C.2 Linear Programming Duality Theorem.  Proposition B.2.3 is in fact equivalent to
the following

Theorem B.2.7 [Duality Theorem in Linear Programming] Consider a linear programming program

        min_x { c^T x : Ax ≥ b }   (LP)

along with its dual

        max_y { b^T y : A^T y = c, y ≥ 0 }   (LP∗)
Then
1) The duality is symmetric: the problem dual to dual is equivalent to the primal;
2) The value of the dual objective at every dual feasible solution is ≤ the value of the primal objective
at every primal feasible solution
3) The following 5 properties are equivalent to each other:
(i) The primal is feasible and bounded below.
(ii) The dual is feasible and bounded above.
(iii) The primal is solvable.
(iv) The dual is solvable.
(v) Both primal and dual are feasible.
Whenever (i) ≡ (ii) ≡ (iii) ≡ (iv) ≡ (v) is the case, the optimal values of the primal and the dual problems
are equal to each other.

Proof. 1) is quite straightforward: writing the dual problem (LP∗) in our standard form, we get

        min_y { −b^T y : [ I_m ; A^T ; −A^T ] y − [ 0 ; c ; −c ] ≥ 0 },

where I_m is the m-dimensional unit matrix. Applying the duality transformation to the latter problem,
we come to the problem

        max_{ξ,η,ζ} { 0^T ξ + c^T η + (−c)^T ζ :  ξ ≥ 0,  η ≥ 0,  ζ ≥ 0,  ξ + Aη − Aζ = −b },

which is clearly equivalent to (LP) (set x = ζ − η).


2) is readily given by Proposition B.2.3.
3):

(i)⇒(iv): If the primal is feasible and bounded below, its optimal value c∗ (which of course is
a lower bound on itself) can, by Proposition B.2.3, be (non-strictly) majorized by a quantity
bT y ∗ , where y ∗ is a feasible solution to (LP∗ ). In the situation in question, of course,
bT y ∗ = c∗ (by already proved item 2)); on the other hand, in view of the same Proposition
B.2.3, the optimal value in the dual is ≤ c∗ . We conclude that the optimal value in the dual
is attained and is equal to the optimal value in the primal.
(iv)⇒(ii): evident;
(ii)⇒(iii): This implication, in view of the primal-dual symmetry, follows from the implica-
tion (i)⇒(iv).
(iii)⇒(i): evident.
We have seen that (i)≡(ii)≡(iii)≡(iv) and that the first (and consequently each) of these 4
equivalent properties implies that the optimal value in the primal problem is equal to the
optimal value in the dual one. All which remains is to prove the equivalence between (i)–(iv),
on one hand, and (v), on the other hand. This is immediate: (i)–(iv), of course, imply (v);
vice versa, in the case of (v) the primal is not only feasible, but also bounded below (this is
an immediate consequence of the feasibility of the dual problem, see 2)), and (i) follows. 2

An immediate corollary of the LP Duality Theorem is the following necessary and sufficient optimality
condition in LP:
Theorem B.2.8 [Necessary and sufficient optimality conditions in linear programming] Consider an LP
program (LP) along with its dual (LP∗ ). A pair (x, y) of primal and dual feasible solutions is comprised
of optimal solutions to the respective problems if and only if
        y_i [Ax − b]_i = 0,  i = 1, ..., m   [complementary slackness]
as well as if and only if
        c^T x − b^T y = 0   [zero duality gap]

Indeed, the “zero duality gap” optimality condition is an immediate consequence of the fact
that the value of primal objective at every primal feasible solution is ≥ the value of the
dual objective at every dual feasible solution, while the optimal values in the primal and the
dual are equal to each other, see Theorem B.2.7. The equivalence between the “zero duality
gap” and the “complementary slackness” optimality conditions is given by the following
computation: whenever x is primal feasible and y is dual feasible, the products yi [Ax − b]i ,
i = 1, ..., m, are nonnegative, while the sum of these products is precisely the duality gap:
y T [Ax − b] = (AT y)T x − bT y = cT x − bT y.
Thus, the duality gap can vanish at a primal-dual feasible pair (x, y) if and only if all products
yi [Ax − b]i for this pair are zeros.
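Both optimality certificates of Theorem B.2.8 can be checked on a solved primal–dual pair. A small sketch (again with scipy.optimize.linprog on an illustrative instance, not taken from the text):

```python
import numpy as np
from scipy.optimize import linprog

# An instance of (LP): min c^T x s.t. Ax >= b, together with its dual.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 4.0])
c = np.array([1.0, 1.0])

x = linprog(c, A_ub=-A, b_ub=-b, bounds=[(None, None)] * 2).x  # primal optimum
y = linprog(-b, A_eq=A.T, b_eq=c, bounds=[(0, None)] * 3).x    # dual optimum

gap = c @ x - b @ y                # zero duality gap at an optimal pair
slack_products = y * (A @ x - b)   # the terms y_i [Ax - b]_i

assert abs(gap) < 1e-7
assert np.allclose(slack_products, 0.0, atol=1e-7)
# The duality gap is the sum of the (nonnegative) slackness products:
assert abs(gap - slack_products.sum()) < 1e-6
```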

B.2.6 Separation Theorem


B.2.6.A. Separation: definition
Recall that a hyperplane M in R^n is, by definition, an affine subspace of dimension n − 1. By
Proposition A.3.7, hyperplanes are exactly the same as level sets of nontrivial linear forms:

        M ⊂ R^n is a hyperplane
        ⇕
        ∃ a ∈ R^n, b ∈ R, a ≠ 0 :  M = {x ∈ R^n : a^T x = b}

We can, consequently, associate with the hyperplane (or, better to say, with the associated linear form
a; this form is defined uniquely, up to multiplication by a nonzero real) the following sets:

• ”upper” and ”lower” open half-spaces M ++ = {x ∈ Rn : aT x > b}, M −− = {x ∈ Rn : aT x < b};


these sets clearly are convex, and since a linear form is continuous, and the sets are given by strict
inequalities on the value of a continuous function, they indeed are open.
Note that since a is uniquely defined by M, up to multiplication by a nonzero real, these open
half-spaces are uniquely defined by the hyperplane, up to swapping the ”upper” and the ”lower”
ones (which half-space is ”upper” depends on the particular choice of a);
• ”upper” and ”lower” closed half-spaces M + = {x ∈ Rn : aT x ≥ b}, M − = {x ∈ Rn : aT x ≤ b};
these are also convex sets, now closed (since they are given by non-strict inequalities on the value
of a continuous function). It is easily seen that the closed upper/lower half-space is the closure
of the corresponding open half-space, and M itself is the boundary (i.e., the complement of the
interior to the closure) of all four half-spaces.
It is clear that our half-spaces and M itself partition Rn :

Rn = M −− ∪ M ∪ M ++

(partitioning by disjoint sets),


Rn = M − ∪ M +
(M is the intersection of the right hand side sets).
Now we define the basic notion of separation of two convex sets T and S by a hyperplane.

Definition B.2.2 [separation] Let S, T be two nonempty convex sets in Rn .


• A hyperplane
M = {x ∈ R^n : a^T x = b}   [a ≠ 0]
is said to separate S and T , if, first,

S ⊂ {x : aT x ≤ b}, T ⊂ {x : aT x ≥ b}

(i.e., S and T belong to the opposite closed half-spaces into which M splits Rn ), and, second, at
least one of the sets S, T is not contained in M itself:

S ∪ T ⊄ M.

The separation is called strong, if there exist b′, b′′ with b′ < b < b′′ such that

        S ⊂ {x : a^T x ≤ b′},  T ⊂ {x : a^T x ≥ b′′}.

• A linear form a ≠ 0 is said to separate (strongly separate) S and T, if for properly chosen b the
hyperplane {x : a^T x = b} separates (strongly separates) S and T.
• We say that S and T can be (strongly) separated, if there exists a hyperplane which (strongly)
separates S and T .

E.g.,
• the hyperplane {x : aT x ≡ x2 − x1 = 1} in R2 strongly separates convex polyhedral sets T = {x ∈
R2 : 0 ≤ x1 ≤ 1, 3 ≤ x2 ≤ 5} and S = {x ∈ R2 : x2 = 0; x1 ≥ −1};
• the hyperplane {x : aT x ≡ x = 1} in R1 separates (but not strongly separates) the convex sets
S = {x ≤ 1} and T = {x ≥ 1};
• the hyperplane {x : a^T x ≡ x1 = 0} in R² separates (but not strongly separates) the sets S = {x ∈
R² : x1 < 0, x2 ≥ −1/x1} and T = {x ∈ R² : x1 > 0, x2 > 1/x1};

• the hyperplane {x : aT x ≡ x2 − x1 = 1} in R2 does not separate the convex sets S = {x ∈ R2 :


x2 ≥ 1} and T = {x ∈ R2 : x2 = 0};
• the hyperplane {x : aT x ≡ x2 = 0} in R2 does not separate the sets S = {x ∈ R2 : x2 = 0, x1 ≤ −1}
and T = {x ∈ R2 : x2 = 0, x1 ≥ 1}.
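For polytopes given by their vertices, a (strongly) separating hyperplane can itself be found by Linear Programming: maximize the margin t subject to a^T s ≤ β − t on the vertices of one set and a^T q ≥ β + t on the vertices of the other. A sketch of this search (the two sets, the normalization |a_j| ≤ 1 and the margin variable t are illustration choices, not part of the text):

```python
import numpy as np
from scipy.optimize import linprog

# Vertices of two polytopes (illustrative): a horizontal segment S and a box T.
S = np.array([[-1.0, 0.0], [2.0, 0.0]])
T = np.array([[0.0, 3.0], [1.0, 3.0], [0.0, 5.0], [1.0, 5.0]])

# Search for a, beta and a margin t with  a.s <= beta - t on S,  a.q >= beta + t
# on T.  Variables z = (a1, a2, beta, t); maximize t, with |a_j| <= 1 so that
# the problem stays bounded.
A_ub = np.vstack([
    np.hstack([S, -np.ones((len(S), 1)), np.ones((len(S), 1))]),  #  a.s - beta + t <= 0
    np.hstack([-T, np.ones((len(T), 1)), np.ones((len(T), 1))]),  # -a.q + beta + t <= 0
])
res = linprog(c=[0.0, 0.0, 0.0, -1.0], A_ub=A_ub, b_ub=np.zeros(len(A_ub)),
              bounds=[(-1, 1), (-1, 1), (None, None), (0, None)])

a, beta, t = res.x[:2], res.x[2], res.x[3]
assert t > 0.5                             # a positive margin: strong separation
assert (S @ a <= beta - t + 1e-7).all()    # S lies in {x : a^T x <= beta - t}
assert (T @ a >= beta + t - 1e-7).all()    # T lies in {x : a^T x >= beta + t}
```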
The following Exercise presents an equivalent description of separation:
Exercise B.11 Let S, T be nonempty convex sets in Rn . Prove that a linear form a separates S and T
if and only if
sup aT x ≤ inf aT y
x∈S y∈T

and
inf aT x < sup aT y.
x∈S y∈T

This separation is strong if and only if


sup aT x < inf aT y.
x∈S y∈T

Exercise B.12 Whether the sets S = {x ∈ R2 : x1 > 0, x2 ≥ 1/x1 } and T = {x ∈ R2 : x1 < 0, x2 ≥


−1/x1 } can be separated? Whether they can be strongly separated?

B.2.6.B. Separation Theorem


Theorem B.2.9 [Separation Theorem] Let S and T be nonempty convex sets in Rn .
(i) S and T can be separated if and only if their relative interiors do not intersect: ri S ∩ ri T = ∅.
(ii) S and T can be strongly separated if and only if the sets are at a positive distance from each other:
dist(S, T) ≡ inf{‖x − y‖₂ : x ∈ S, y ∈ T} > 0.
In particular, if S, T are closed nonempty non-intersecting convex sets and one of these sets is compact,
S and T can be strongly separated.
Proof takes several steps.

(i), Necessity. Assume that S, T can be separated, so that for certain a ≠ 0 we have

        sup_{x∈S} a^T x ≤ inf_{y∈T} a^T y;   inf_{x∈S} a^T x < sup_{y∈T} a^T y.   (B.2.9)

We should lead to a contradiction the assumption that ri S and ri T have a certain point x̄ in common.
Assume that this is the case; then from the first inequality in (B.2.9) it is clear that x̄ maximizes the linear
function f(x) = a^T x on S and simultaneously minimizes this function on T. Now, we have the following
simple and important
Lemma B.2.2 A linear function f (x) = aT x can attain its maximum/minimum over a
convex set Q at a point x ∈ ri Q if and only if the function is constant on Q.
Proof. The ”if” part is evident. To prove the ”only if” part, let x̄ ∈ ri Q be, say, a minimizer
of f over Q and y be an arbitrary point of Q; we should prove that f(x̄) = f(y). There is
nothing to prove if y = x̄, so let us assume that y ≠ x̄. Since x̄ ∈ ri Q, the segment [y, x̄],
which is contained in Q, can be extended a little bit through the point x̄ without leaving Q
(since x̄ ∈ ri Q), so that there exists z ∈ Q such that x̄ ∈ [y, z), i.e., x̄ = (1 − λ)y + λz with
certain λ ∈ (0, 1]; since y ≠ x̄, we have in fact λ ∈ (0, 1). Since f is linear, we have

        f(x̄) = (1 − λ)f(y) + λf(z);

since f(x̄) ≤ min{f(y), f(z)} and 0 < λ < 1, this relation can be satisfied only when
f(x̄) = f(y) = f(z). 2

By Lemma B.2.2, f (x) = f (x̄) on S and on T , so that f (·) is constant on S ∪ T , which yields the
desired contradiction with the second inequality in (B.2.9). 2

(i), Sufficiency. The proof of sufficiency part of the Separation Theorem is much more instructive.
There are several ways to prove it, and I choose the one which goes via the Homogeneous Farkas Lemma
B.2.1, which is extremely important in its own right.

(i), Sufficiency, Step 1: Separation of a convex polytope and a point outside the
polytope. Let us start with the seemingly very particular case of the Separation Theorem – the one where
S is the convex hull of points x_1, ..., x_N, and T is a singleton T = {x} which does not belong to S. We
intend to prove that in this case there exists a linear form which separates x and S; in fact we shall prove
even the existence of strong separation.

Let us associate with the n-dimensional vectors x_1, ..., x_N, x the (n+1)-dimensional vectors a = [x; 1]
and a_i = [x_i; 1], i = 1, ..., N. I claim that a does not belong to the conic hull of a_1, ..., a_N. Indeed, if a
were representable as a linear combination of a_1, ..., a_N with nonnegative coefficients, then, looking
at the last, (n+1)-st, coordinates in such a representation, we would conclude that the sum of the coefficients
should be 1, so that the representation actually represents x as a convex combination of x_1, ..., x_N, which
was assumed to be impossible.

Since a does not belong to the conic hull of a_1, ..., a_N, by the Homogeneous Farkas Lemma (Lemma
B.2.1) there exists a vector h = [f; α] ∈ R^{n+1} which “separates” a and a_1, ..., a_N in the sense that

        h^T a > 0,   h^T a_i ≤ 0,  i = 1, ..., N,

whence, of course,

        h^T a > max_i h^T a_i.

Since the components in all the inner products h^T a, h^T a_i coming from the (n+1)-st coordinates are
equal to each other, we conclude that the n-dimensional component f of h separates x and x_1, ..., x_N:

        f^T x > max_i f^T x_i.

Since for every convex combination y = Σ_i λ_i x_i of the points x_i one clearly has f^T y ≤ max_i f^T x_i, we
conclude, finally, that

        f^T x > max_{y ∈ Conv({x_1,...,x_N})} f^T y,

so that f strongly separates T = {x} and S = Conv({x_1, ..., x_N}). 2

(i), Sufficiency, Step 2: Separation of a convex set and a point outside of the set.
Now consider the case when S is an arbitrary nonempty convex set and T = {x} is a singleton outside S
(the difference with Step 1 is that now S is not assumed to be a polytope).
First of all, without loss of generality we may assume that S contains 0 (if this is not the case, we may
subject S and T to a translation S ↦ p + S, T ↦ p + T with p ∈ −S). Let L be the linear span of S. If
x ∉ L, the separation is easy: taking as f the component of x orthogonal to L, we get

        f^T x = f^T f > 0 = max_{y∈S} f^T y,

so that f strongly separates S and T = {x}.


It remains to consider the case when x ∈ L. Since S ⊂ L, x ∈ L and x ∉ S, L is a nonzero linear
subspace; w.l.o.g., we can assume that L = R^n.

Let Σ = {h : ‖h‖₂ = 1} be the unit sphere in L = R^n. This is a closed and bounded set in R^n
(boundedness is evident, and closedness follows from the fact that ‖·‖₂ is continuous). Consequently, Σ
is a compact set. Let us prove that there exists f ∈ Σ which separates x and S in the sense that

        f^T x ≥ sup_{y∈S} f^T y.   (B.2.10)

Assume, on the contrary, that no such f exists, and let us lead this assumption to a contradiction. Under
our assumption for every h ∈ Σ there exists yh ∈ S such that

hT yh > hT x.

Since the inequality is strict, it immediately follows that there exists a neighbourhood U_h of the vector
h such that

        (h′)^T y_h > (h′)^T x   ∀h′ ∈ U_h.   (B.2.11)

The family of open sets {U_h}_{h∈Σ} covers Σ; since Σ is compact, we can find a finite subfamily U_{h_1}, ..., U_{h_N}
of the family which still covers Σ. Let us take the corresponding points y_1 = y_{h_1}, y_2 = y_{h_2}, ..., y_N = y_{h_N}
and the polytope S′ = Conv({y_1, ..., y_N}) spanned by these points. Due to the origin of the y_i, all of them are
points from S; since S is convex, the polytope S′ is contained in S and, consequently, does not contain
x. By Step 1, x can be strongly separated from S′: there exists a such that

        a^T x > sup_{y∈S′} a^T y.   (B.2.12)

By normalization, we may also assume that ‖a‖₂ = 1, so that a ∈ Σ. Now we get a contradiction: since
a ∈ Σ and U_{h_1}, ..., U_{h_N} form a covering of Σ, a belongs to a certain U_{h_i}. By construction of U_{h_i} (see
(B.2.11)), we have

        a^T y_i ≡ a^T y_{h_i} > a^T x,

which contradicts (B.2.12) – recall that y_i ∈ S′.
The contradiction we get proves that there exists f ∈ Σ satisfying (B.2.10). We claim that f separates
S and {x}; in view of (B.2.10), all we need to verify our claim is to show that the linear form f(y) = f^T y
is non-constant on S ∪ T, which is evident: we are in the situation where 0 ∈ S, L ≡ Lin(S) = R^n
and f ≠ 0, so that f(y) is non-constant already on S. 2

Mathematically oriented reader should take into account that the simple-looking reasoning under-
lying Step 2 in fact brings us into a completely new world. Indeed, the considerations at Step
1 and in the proof of Homogeneous Farkas Lemma are “pure arithmetic” – we never used things
like convergence, compactness, etc., and used rational arithmetic only – no square roots, etc. It
means that the Homogeneous Farkas Lemma and the result stated at Step 1 remain valid if we, e.g.,
replace our universe Rn with the space Qn of n-dimensional rational vectors (those with rational
coordinates; of course, the multiplication by reals in this space should be restricted to multiplication
by rationals). The “rational” Farkas Lemma or the possibility to separate a rational vector from a
“rational” polytope by a rational linear form, which is the “rational” version of the result of Step 1,
definitely are of interest (e.g., for Integer Programming). In contrast to these “purely arithmetic”
considerations, at Step 2 we used compactness – something heavily exploiting the fact that our
universe is Rn and not, say, Qn (in the latter space bounded and closed sets are not necessarily
compact). Note also that we could not avoid things like compactness arguments at Step 2, since the
very fact we are proving is true in Rn but not in Qn . Indeed, consider the “rational plane” – the
universe comprised of all 2-dimensional vectors with rational entries, and let S be the half-plane in
this rational plane given by the linear inequality
x1 + αx2 ≤ 0,

where α is irrational. S clearly is a “convex set” in Q2 ; it is immediately seen that a point outside
this set cannot be separated from S by a rational linear form.

(i), Sufficiency, Step 3: Separation of two nonempty and non-intersecting convex


sets. Now we are ready to prove that two nonempty and non-intersecting convex sets S and T can be
separated. To this end consider the arithmetic difference

∆ = S − T = {x − y : x ∈ S, y ∈ T }.

By Proposition B.1.6.3, ∆ is a convex (and, of course, nonempty) set; since S and T do not intersect, ∆
does not contain 0. By Step 2, we can separate ∆ and {0}: there exists f ≠ 0 such that

        f^T 0 = 0 ≥ sup_{z∈∆} f^T z   &   f^T 0 > inf_{z∈∆} f^T z.

In other words,

        0 ≥ sup_{x∈S, y∈T} [f^T x − f^T y]   &   0 > inf_{x∈S, y∈T} [f^T x − f^T y],

which clearly means that f separates S and T. 2

(i), Sufficiency, Step 4: Separation of nonempty convex sets with non-intersecting
relative interiors. Now we are able to complete the proof of the “if” part of the Separation Theorem.
Let S and T be two nonempty convex sets with non-intersecting relative interiors; we should prove that S
and T can be properly separated. This is immediate: as we know from Theorem B.1.1, the sets S′ = ri S
and T′ = ri T are nonempty and convex; since we are given that they do not intersect, they can be
separated by Step 3: there exists f such that

        inf_{x∈T′} f^T x ≥ sup_{y∈S′} f^T y   &   sup_{x∈T′} f^T x > inf_{y∈S′} f^T y.   (B.2.13)

It is immediately seen that in fact f separates S and T. Indeed, the quantities in the left and the right
hand sides of the first inequality in (B.2.13) clearly remain unchanged when we replace S′ with cl S′ and
T′ with cl T′; by Theorem B.1.1, cl S′ = cl S ⊃ S and cl T′ = cl T ⊃ T, and we get inf_{x∈T} f^T x = inf_{x∈T′} f^T x,
and similarly sup_{y∈S} f^T y = sup_{y∈S′} f^T y. Thus, we get from (B.2.13)

        inf_{x∈T} f^T x ≥ sup_{y∈S} f^T y.

It remains to note that T′ ⊂ T, S′ ⊂ S, so that the second inequality in (B.2.13) implies that

        sup_{x∈T} f^T x > inf_{y∈S} f^T y. 2

(ii), Necessity: prove yourself.

(ii), Sufficiency: Assuming that ρ ≡ inf{‖x − y‖₂ : x ∈ S, y ∈ T} > 0, consider the set S′ = {x :
inf_{y∈S} ‖x − y‖₂ ≤ ρ/2}. Note that S′ is convex along with S (Example B.1.4) and that S′ ∩ T = ∅ (why?). By
(i), S′ and T can be separated, and if f is a linear form which separates S′ and T, then the same form
strongly separates S and T (why?). The “in particular” part of (ii) readily follows from the just proved
statement due to the fact that if two closed nonempty sets in R^n do not intersect and one of them is
compact, then the sets are at positive distance from each other (why?). 2

Exercise B.13 Derive the statement in Remark B.1.1 from the Separation Theorem.

Exercise B.14 Implement the following alternative approach to the proof of Separation Theorem:

1. Prove that if x is a point in R^n and S is a nonempty closed convex set in R^n, then the problem

        min_y { ‖x − y‖₂ : y ∈ S }

has a unique optimal solution x̄.


2. In the situation of 1), prove that if x ∉ S, then the linear form e = x − x̄ strongly separates {x}
and S:

        max_{y∈S} e^T y = e^T x̄ = e^T x − e^T e < e^T x,

thus getting a direct proof of the possibility to separate strongly a nonempty closed convex set and
a point outside this set.
3. Derive from 2) the Separation Theorem.
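When S has an easily computed metric projection, the route of this exercise can be carried out concretely. A minimal sketch for the closed unit Euclidean ball (the set and the test point are illustration choices, not from the text):

```python
import numpy as np

def project_to_unit_ball(x):
    """Metric projection onto the closed unit Euclidean ball {y : ||y||_2 <= 1}."""
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

x = np.array([3.0, 4.0])            # a point outside the ball (||x||_2 = 5)
xbar = project_to_unit_ball(x)      # its unique closest point in the ball (item 1)
e = x - xbar                        # the candidate separating form (item 2)

# Over the ball, the max of e^T y is ||e||_2 (attained at e/||e||_2, which is
# xbar here), and it is strictly smaller than e^T x: strong separation.
max_over_ball = np.linalg.norm(e)
assert np.isclose(max_over_ball, e @ xbar)
assert max_over_ball < e @ x - 1e-9

# e^T x - e^T xbar = e^T e, exactly as claimed in item 2).
assert np.isclose(e @ x - e @ xbar, e @ e)
```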

B.2.6.C. Supporting hyperplanes


By the Separation Theorem, a closed and nonempty convex set M is the intersection of all closed half-
spaces containing M . Among these half-spaces, the most interesting are the “extreme” ones – those with
the boundary hyperplane touching M . The notion makes sense for an arbitrary (not necessary closed)
convex set, but we shall use it for closed sets only, and include the requirement of closedness in the
definition:

Definition B.2.3 [Supporting plane] Let M be a convex closed set in Rn , and let x be a point from the
relative boundary of M . A hyperplane

        Π = {y : a^T y = a^T x}   [a ≠ 0]

is called supporting to M at x, if it separates M and {x}, i.e., if

        a^T x ≥ sup_{y∈M} a^T y   &   a^T x > inf_{y∈M} a^T y.   (B.2.14)

Note that since x is a point from the relative boundary of M and therefore belongs to cl M = M , the
first inequality in (B.2.14) in fact is equality. Thus, an equivalent definition of a supporting plane is as
follows:
Let M be a closed convex set and x be a relative boundary point of M . The hyperplane
{y : aT y = aT x} is called supporting to M at x, if the linear form a(y) = aT y attains its
maximum on M at the point x and is nonconstant on M .
E.g., the hyperplane {x1 = 1} in R^n clearly is supporting to the unit Euclidean ball {x : ‖x‖₂ ≤ 1} at the
point x = e1 = (1, 0, ..., 0).
The most important property of a supporting plane is its existence:

Proposition B.2.4 [Existence of supporting hyperplane] Let M be a convex closed set in Rn and x be
a point from the relative boundary of M . Then
(i) There exists at least one hyperplane which is supporting to M at x;
(ii) If Π is supporting to M at x, then the intersection M ∩ Π is of affine dimension less than the one
of M (recall that the affine dimension of a set is, by definition, the affine dimension of the affine hull of
the set).

Proof. (i) is easy: if x is a point from the relative boundary of M, then it is outside the relative interior
of M and therefore {x} and ri M can be separated by the Separation Theorem; the separating hyperplane
is exactly the desired hyperplane supporting to M at x.
To prove (ii), note that if Π = {y : a^T y = a^T x} is supporting to M at x ∈ ∂ri M, then the set
M′ = M ∩ Π is a nonempty (it contains x) convex set, and the linear form a^T y is constant on M′
and therefore (why?) on Aff(M′). At the same time, the form is nonconstant on M by definition of a
supporting plane. Thus, Aff(M′) is a proper (less than the entire Aff(M)) subset of Aff(M), and therefore
the affine dimension of Aff(M′) (i.e., the affine dimension of M′) is less than the affine dimension of Aff(M)
(i.e., than the affine dimension of M) 2). 2

B.2.7 Polar of a convex set and Milutin-Dubovitski Lemma


B.2.7.A. Polar of a convex set
Let M be a nonempty convex set in Rn . The polar Polar (M ) of M is the set of all linear forms which
do not exceed 1 on M , i.e., the set of all vectors a such that aT x ≤ 1 for all x ∈ M :

Polar (M) = {a : a^T x ≤ 1 ∀x ∈ M}.

For example, Polar (R^n) = {0}, Polar ({0}) = R^n; if L is a linear subspace in R^n, then Polar (L) = L⊥
(why?).
The following properties of the polar are evident:
1. 0 ∈ Polar (M );
2. Polar (M ) is convex;
3. Polar (M ) is closed.
It turns out that these properties characterize polars:

Proposition B.2.5 Every closed convex set M containing the origin is a polar, specifically, it is the polar of
its polar:

        M is closed and convex, 0 ∈ M
        ⇕
        M = Polar (Polar (M))

Proof. All we need is to prove that if M is closed and convex and 0 ∈ M , then M = Polar (Polar (M )).
By definition,
y ∈ Polar (M ), x ∈ M ⇒ y T x ≤ 1,
so that M ⊂ Polar (Polar (M )). To prove that this inclusion is in fact equality, assume, on the contrary,
that there exists x̄ ∈ Polar (Polar (M))\M. Since M is nonempty, convex and closed and x̄ ∉ M, the
point x̄ can be strongly separated from M (Separation Theorem, (ii)). Thus, for appropriate b one has

        b^T x̄ > sup_{x∈M} b^T x.

Since 0 ∈ M, the left hand side quantity in this inequality is positive; passing from b to a proportional
vector a = λb with appropriately chosen positive λ, we may ensure that

        a^T x̄ > 1 ≥ sup_{x∈M} a^T x.

This is the desired contradiction, since the relation 1 ≥ sup_{x∈M} a^T x implies that a ∈ Polar (M), so that the
relation a^T x̄ > 1 contradicts the assumption that x̄ ∈ Polar (Polar (M)). 2
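Proposition B.2.5 can be probed numerically on simple sets. For the square M = [−1, 1]², the polar is the cross-polytope {a : |a1| + |a2| ≤ 1}, and the polar of the latter is the square again; the sketch below checks both claims by random sampling (the closed-form descriptions of the two polars are standard facts, stated here as assumptions):

```python
import numpy as np

# M = [-1,1]^2 (the square); its polar is the cross-polytope {a : |a1|+|a2| <= 1}.
square_verts = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
cross_verts = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)

def in_polar(verts, a, tol=1e-12):
    # a lies in the polar of Conv(verts) iff a.x <= 1 at every vertex, since
    # x -> a.x is linear and attains its max over a polytope at a vertex.
    return (verts @ a).max() <= 1 + tol

rng = np.random.default_rng(1)
for a in rng.uniform(-2, 2, size=(2000, 2)):
    # Polar(M) computed from the definition agrees with the cross-polytope:
    assert in_polar(square_verts, a) == (np.abs(a).sum() <= 1 + 1e-12)
for x in rng.uniform(-2, 2, size=(2000, 2)):
    # Polar(Polar(M)) recovers M itself, as Proposition B.2.5 asserts:
    assert in_polar(cross_verts, x) == (np.abs(x).max() <= 1 + 1e-12)
```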

Exercise B.15 Let M be a convex set containing the origin, and let M 0 be the polar of M . Prove the
following facts:
2) In the latter reasoning we used the following fact: if P ⊂ Q are two affine subspaces, then the affine
dimension of P is ≤ that of Q, with equality if and only if P = Q. Please prove this fact.

1. Polar (M) = Polar (cl M);

2. M is bounded if and only if 0 ∈ int M′;
3. int M ≠ ∅ if and only if M′ does not contain straight lines;
4. M is a closed cone if and only if M′ is a closed cone. If M is a cone (not necessarily closed), then

        M′ = {a : a^T x ≤ 0 ∀x ∈ M}.   (B.2.15)

B.2.7.B. Dual cone


Let M ⊂ R^n be a cone. By Exercise B.15.4, the polar M′ of M is a closed cone given by (B.2.15). The
set M∗ = −M′ (which also is a closed cone), that is, the set

        M∗ = {a : a^T x ≥ 0 ∀x ∈ M}

of all vectors which have nonnegative inner products with all vectors from M, is called the cone dual to
M. By Proposition B.2.5 and Exercise B.15.4, the family of closed cones in R^n is closed with respect to
passing to a dual cone, and the duality is symmetric: for a closed cone M, M∗ also is a closed cone, and
(M∗)∗ = M.
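The symmetry (M∗)∗ = M is easy to test for polyhedral cones, where membership in the dual cone only needs to be checked on generators. A sketch for the cone M generated by (1, 0) and (1, 1) in R² (the closed-form description of M∗ used below is computed by hand for this example):

```python
import numpy as np

tol = 1e-9
# M = cone{(1,0),(1,1)} = {x : x2 >= 0, x1 >= x2}; by hand, its dual cone is
# M* = {a : a1 >= 0, a1 + a2 >= 0} = cone{(0,1),(1,-1)}.
in_M      = lambda x: x[1] >= -tol and x[0] - x[1] >= -tol
in_M_star = lambda a: a[0] >= -tol and a[0] + a[1] >= -tol

gens_M      = np.array([[1.0, 0.0], [1.0, 1.0]])   # generators of M
gens_M_star = np.array([[0.0, 1.0], [1.0, -1.0]])  # generators of M*

rng = np.random.default_rng(2)
for a in rng.uniform(-2, 2, size=(2000, 2)):
    # a in M* iff a^T x >= 0 for all x in M, i.e. on the generators of M:
    assert in_M_star(a) == bool((gens_M @ a >= -tol).all())
for x in rng.uniform(-2, 2, size=(2000, 2)):
    # (M*)* = M: nonnegativity on the generators of M* recovers M itself.
    assert in_M(x) == bool((gens_M_star @ x >= -tol).all())
```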

Exercise B.16 Let M be a closed cone in Rn , and M∗ be its dual cone. Prove that
1. M is pointed (i.e., does not contain lines) if and only if M∗ has a nonempty interior. Derive from
this fact that M is a closed pointed cone with a nonempty interior if and only if the dual cone has
the same properties.
2. Prove that a ∈ int M∗ if and only if a^T x > 0 for all nonzero vectors x ∈ M.

B.2.7.C. Dubovitski-Milutin Lemma


Let M1 , ..., Mk be cones (not necessarily closed), and M be their intersection; of course, M also is a cone.
How to compute the cone dual to M ?

Proposition B.2.6 Let M1, ..., Mk be cones. The cone M′ dual to the intersection M of the cones M1, ..., Mk contains the arithmetic sum M̃ of the cones M′1, ..., M′k dual to M1, ..., Mk. If all the cones M1, ..., Mk are closed, then M′ is equal to cl M̃. In particular, for closed cones M1, ..., Mk, M′ coincides with M̃ if and only if the latter set is closed.

Proof. Whenever ai ∈ M′i and x ∈ M, we have aTi x ≥ 0, i = 1, ..., k, whence (a1 + ... + ak)T x ≥ 0. Since the latter relation is valid for all x ∈ M, we conclude that a1 + ... + ak ∈ M′. Thus, M̃ ⊂ M′.
Now assume that the cones M1, ..., Mk are closed, and let us prove that M′ = cl M̃. Since M′ is closed and we have seen that M̃ ⊂ M′, all we should prove is that if a ∈ M′, then a ∈ M̂ = cl M̃ as well. Assume, on the contrary, that a ∈ M′\M̂. Since the set M̃ clearly is a cone, its closure M̂ is a closed cone; by assumption, a does not belong to this closed cone and therefore, by Separation Theorem (ii), a can be strongly separated from M̂ and therefore – from M̃ ⊂ M̂. Thus, for some x one has

aT x < inf_{b∈M̃} bT x = inf_{ai∈M′i, i=1,...,k} (a1 + ... + ak)T x = Σ_{i=1}^{k} inf_{ai∈M′i} aTi x. (B.2.16)

From the resulting inequality it follows that inf_{ai∈M′i} aTi x > −∞; since M′i is a cone, the latter is possible if and only if inf_{ai∈M′i} aTi x = 0, i.e., if and only if for every i one has x ∈ Polar (M′i) = Mi (recall that the cones Mi are closed). Thus, x ∈ Mi for all i, and the concluding quantity in (B.2.16) is 0. We see that x ∈ M = ∩i Mi, and that (B.2.16) reduces to aT x < 0. This contradicts the inclusion a ∈ M′. □
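Proposition B.2.6 can be illustrated on two halfplanes in R². The sketch below is our own (not from the text): with M1 = {(x, y) : x ≥ 0} and M2 = {(x, y) : y ≥ 0}, the dual cones are the rays spanned by (1, 0) and (0, 1), their sum is the nonnegative quadrant, and, since this sum is closed here, it coincides with the dual of M = M1 ∩ M2:

```python
import random

# M1 = {(x, y): x >= 0}, M2 = {(x, y): y >= 0}; M = M1 ∩ M2 is the
# nonnegative quadrant.  The dual of the halfplane {x >= 0} is the ray
# {(a, 0): a >= 0}, and similarly for {y >= 0}.
def in_dual_of_M(a, samples, tol=1e-9):
    # empirical test of a ∈ M': a.x >= 0 for all sampled x ∈ M
    return all(a[0] * x + a[1] * y >= -tol for (x, y) in samples)

def in_sum_of_duals(a):
    # M'_1 + M'_2 = {(a1, a2): a1 >= 0, a2 >= 0}
    return a[0] >= 0 and a[1] >= 0

random.seed(1)
samples = [(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(1000)]
samples += [(1.0, 0.0), (0.0, 1.0), (5.0, 0.0), (0.0, 5.0)]  # extreme directions

for a in [(1, 2), (0, 1), (3, 0), (-1, 1), (1, -0.5)]:
    # the sum of duals is closed in this example, so it agrees with M'
    assert in_dual_of_M(a, samples) == in_sum_of_duals(a)
```

The deterministic sample points along the axes guarantee that every vector outside the quadrant is refuted by some sampled point of M.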
650 APPENDIX B. CONVEX SETS IN RN

Note that in general M̃ can be non-closed even when all the cones M1, ..., Mk are closed. Indeed, take k = 2, let M′1 be the ice-cream cone {(x, y, z) ∈ R3 : z ≥ √(x² + y²)}, and let M′2 be the ray {z = x ≤ 0, y = 0} in R3. Observe that the points from M̃ ≡ M′1 + M′2 are exactly the points of the form (x − t, y, z − t) with t ≥ 0 and z ≥ √(x² + y²). In particular, for x positive the points (0, 1, √(x² + 1) − x) = (x − x, 1, √(x² + 1) − x) belong to M̃; as x → ∞, these points converge to p = (0, 1, 0), and thus p ∈ cl M̃. On the other hand, there clearly do not exist x, y, z, t with t ≥ 0 and z ≥ √(x² + y²) such that (x − t, y, z − t) = (0, 1, 0), that is, p ∉ M̃.
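The limit point p = (0, 1, 0) of the example above can be exhibited numerically; the sketch below (our own, with a hypothetical helper name) follows the computation in the text:

```python
import math

# Points q_x = (0, 1, sqrt(x^2 + 1) - x) lie in the sum M~ = M'_1 + M'_2
# (take t = x in the parametrization (x - t, y, z - t)); their third
# coordinate tends to 0, so p = (0, 1, 0) is a limit point of M~.
def third_coord(x):
    return math.sqrt(x * x + 1.0) - x

for x in [1.0, 10.0, 100.0, 1000.0]:
    # sqrt(x^2 + 1) - x = 1/(sqrt(x^2 + 1) + x) < 1/(2x) < 1/x
    assert 0.0 < third_coord(x) < 1.0 / x

# p itself is NOT in M~: we would need t >= 0 and z >= sqrt(x^2 + y^2)
# with (x - t, y, z - t) = (0, 1, 0), i.e. t = x = z and z >= sqrt(z^2 + 1),
# which is impossible.  Verify the inequality fails on a grid of z = t:
assert all(z < math.sqrt(z * z + 1.0) for z in [k * 0.5 for k in range(200)])
```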
Dubovitski-Milutin Lemma presents a simple sufficient condition for M̃ to be closed and thus to coincide with M′:

Proposition B.2.7 [Dubovitski-Milutin Lemma] Let M1, ..., Mk be cones such that Mk is closed and the set Mk ∩ int M1 ∩ int M2 ∩ ... ∩ int Mk−1 is nonempty, and let M = M1 ∩ ... ∩ Mk. Let also M′i be the cones dual to Mi. Then

(i) cl M = ∩_{i=1}^{k} cl Mi;

(ii) the cone M̃ = M′1 + ... + M′k is closed, and thus coincides with the cone M′ dual to cl M (or, which is the same by Exercise B.15.1, with the cone dual to M). In other words, every linear form which is nonnegative on M can be represented as a sum of k linear forms which are nonnegative on the respective cones M1, ..., Mk.
T
Proof. (i): We should prove that under the premise of the Dubovitski-Milutin Lemma, cl M = ∩_i cl Mi. The right hand side here contains M and is closed, so that all we should prove is that every point x in ∩_{i=1}^{k} cl Mi is the limit of an appropriate sequence xt ∈ M. By premise of the Lemma, there exists a point x̄ ∈ Mk ∩ int M1 ∩ int M2 ∩ ... ∩ int Mk−1; setting xt = t⁻¹x̄ + (1 − t⁻¹)x, t = 1, 2, ..., we get a sequence converging to x as t → ∞; at the same time, xt ∈ Mk (since x, x̄ are in cl Mk = Mk) and xt ∈ Mi for every i < k (by Lemma B.1.1; note that for i < k one has x̄ ∈ int Mi and x ∈ cl Mi), and thus xt ∈ M. Thus, every point x ∈ ∩_{i=1}^{k} cl Mi is the limit of a sequence from M. □

(ii): Under the premise of the Lemma, when replacing the cones M1, ..., Mk with their closures, we do not vary the duals M′i of the cones (and thus do not vary M̃) and replace the intersection of the sets M1, ..., Mk with its closure (by (i)), thus not varying the dual of the intersection. And of course when replacing the cones M1, ..., Mk with their closures, we preserve the premise of the Lemma. Thus, we lose nothing when assuming, in addition to the premise of the Lemma, that the cones M1, ..., Mk are closed. To prove the lemma for closed cones M1, ..., Mk, we use induction in k ≥ 2.

Base k = 2: Let a sequence {ft + gt}_{t=1}^∞ with ft ∈ M′1 and gt ∈ M′2 converge to a certain h; we should prove that h = f + g for appropriate f ∈ M′1 and g ∈ M′2. To achieve our goal, it suffices to verify that for an appropriate subsequence tj of indices there exists f ≡ lim_{j→∞} ftj. Indeed, if this is the case, then g = lim_{j→∞} gtj also exists (since ft + gt → h as t → ∞) and f + g = h; besides this, f ∈ M′1 and g ∈ M′2, since both the cones in question are closed. In order to verify the existence of the desired subsequence, it suffices to derive a contradiction from the assumption that ‖ft‖2 → ∞ as t → ∞. Let the latter assumption be true. Passing to a subsequence, we may assume that the unit vectors φt = ft/‖ft‖2 have a limit φ as t → ∞; since M′1 is a closed cone, φ is a unit vector from M′1. Now, since ft + gt → h as t → ∞, we have φ = lim_{t→∞} ft/‖ft‖2 = − lim_{t→∞} gt/‖ft‖2 (recall that ‖ft‖2 → ∞ as t → ∞, whence h/‖ft‖2 → 0 as t → ∞). We see that the vector −φ belongs to M′2. Now, by assumption M2 intersects the interior of the cone M1; let x̄ be a point in this intersection. We have φT x̄ ≥ 0 (since x̄ ∈ M1 and φ ∈ M′1) and φT x̄ ≤ 0 (since −φ ∈ M′2 and x̄ ∈ M2). We conclude that φT x̄ = 0, which contradicts the facts that 0 ≠ φ ∈ M′1 and x̄ ∈ int M1 (see Exercise B.16.2). □

Inductive step: Assume that the statement we are proving is valid in the case of k − 1 ≥ 2 cones, and let M1, ..., Mk be k cones satisfying the premise of the Dubovitski-Milutin Lemma. By this premise, the cone M¹ = M1 ∩ ... ∩ Mk−1 has a nonempty interior, and Mk intersects this interior. Applying to the pair of cones M¹, Mk the already proved 2-cone version of the Lemma, we see that the set (M¹)′ + M′k is closed; here (M¹)′ is the cone dual to M¹. Further, the cones M1, ..., Mk−1 satisfy the premise of the (k − 1)-cone version of the Lemma; by inductive hypothesis, the set M′1 + ... + M′k−1 is closed and therefore, by Proposition B.2.6, equals (M¹)′. Thus, M′1 + ... + M′k = (M¹)′ + M′k, and we have seen that the latter set is closed. □

B.2.8 Extreme points and Krein-Milman Theorem


Supporting planes are a useful tool for proving existence of extreme points of convex sets. Geometrically, an extreme point of a convex set M is a point in M which cannot be obtained as a convex combination of other points of the set; and the importance of the notion comes from the fact (which we shall prove in the mean time) that the set of all extreme points of a “good enough” convex set M is the “shortest worker’s instruction for building the set” – this is the smallest set of points for which M is the convex hull.

B.2.8.A. Extreme points: definition


The exact definition of an extreme point is as follows:

Definition B.2.4 [extreme points] Let M be a nonempty convex set in Rn. A point x ∈ M is called an extreme point of M, if there is no nontrivial (of positive length) segment [u, v] ⊂ M for which x is an interior point, i.e., if the relation

x = λu + (1 − λ)v

with λ ∈ (0, 1) and u, v ∈ M is valid if and only if

u = v = x.

E.g., the extreme points of a segment are exactly its endpoints; the extreme points of a triangle are
its vertices; the extreme points of a (closed) circle on the 2-dimensional plane are the points of the
circumference.
Equivalent definitions of an extreme point are as follows:

Exercise B.17 Let M be a convex set and let x ∈ M . Prove that


1. x is extreme if and only if the only vector h such that x ± h ∈ M is the zero vector;

2. x is extreme if and only if the set M \{x} is convex.
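Criterion 1 of the exercise suggests a simple numeric probe. The sketch below is our own illustration; probing finitely many directions can only disprove extremality, but for the unit square M = [0, 1]² the axis and diagonal probes suffice:

```python
# Probe criterion 1 for M = [0,1] x [0,1]: x is extreme iff no nonzero h
# has both x + h and x - h in M.
def in_square(p):
    return 0.0 <= p[0] <= 1.0 and 0.0 <= p[1] <= 1.0

def probably_extreme(x, eps=1e-3):
    probes = [(eps, 0.0), (0.0, eps), (eps, eps), (eps, -eps)]
    for h in probes:
        plus = (x[0] + h[0], x[1] + h[1])
        minus = (x[0] - h[0], x[1] - h[1])
        if in_square(plus) and in_square(minus):
            return False  # found a nontrivial segment through x inside M
    return True

assert probably_extreme((0.0, 0.0)) and probably_extreme((1.0, 1.0))
assert not probably_extreme((0.5, 0.0))   # edge midpoint
assert not probably_extreme((0.5, 0.5))   # interior point
```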

B.2.8.B. Krein-Milman Theorem


It is clear that a convex set M does not necessarily possess extreme points; as an example you may take the open unit ball in Rn. This example is not interesting – the set in question is not closed; when replacing it with its closure, we get a set (the closed unit ball) with plenty of extreme points – these are all points of the boundary. There are, however, closed convex sets which do not possess extreme points – e.g., a line or an affine subspace of larger dimension. A nice fact is that the absence of extreme points in a closed convex set M always has the standard reason – the set contains a line. Thus, a closed and nonempty convex set M which does not contain lines for sure possesses extreme points. And if M is a nonempty
convex compact set, it possesses a quite representative set of extreme points – their convex hull is the
entire M .

Theorem B.2.10 Let M be a closed and nonempty convex set in Rn . Then


(i) The set Ext(M ) of extreme points of M is nonempty if and only if M does not contain lines;
(ii) If M is bounded, then M is the convex hull of its extreme points:

M = Conv(Ext(M )),

so that every point of M is a convex combination of the points of Ext(M ).

Part (ii) of this theorem is the finite-dimensional version of the famous Krein-Milman Theorem.
Proof. Let us start with (i). The “only if” part is easy, due to the following simple

Lemma B.2.3 Let M be a closed convex set in Rn. Assume that for some x̄ ∈ M and h ∈ Rn, M contains the ray

{x̄ + th : t ≥ 0}

starting at x̄ with the direction h. Then M contains also all parallel rays starting at the points of M:

(∀x ∈ M) : {x + th : t ≥ 0} ⊂ M.

In particular, if M contains a line, then it contains also all parallel lines passing through the points of M.

Comment. For a closed convex set M, the set of all directions h such that x + th ∈ M
for some x and all t ≥ 0 (i.e., by Lemma – such that x + th ∈ M for all x ∈ M and all t ≥ 0)
is called the recessive cone of M [notation: Rec(M )]. With Lemma B.2.3 it is immediately
seen (prove it!) that Rec(M ) indeed is a closed cone, and that

M + Rec(M ) = M.

Directions from Rec(M ) are called recessive for M .
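For a polyhedral set M = {x : Ax ≤ b}, the recessive cone admits the explicit description Rec(M) = {h : Ah ≤ 0} (a standard fact consistent with the Comment above, though not proved there). The sketch below, our own illustration, sanity-checks M + Rec(M) = M on an unbounded 2D example:

```python
import random

# M = {(x, y): x >= 0, y >= 0, y - x <= 1}, written as Ax <= b;
# unbounded and containing no lines.
A = [(-1.0, 0.0), (0.0, -1.0), (-1.0, 1.0)]
b = [0.0, 0.0, 1.0]

def in_M(p, tol=1e-9):
    return all(a[0] * p[0] + a[1] * p[1] <= bb + tol for a, bb in zip(A, b))

def in_Rec(h, tol=1e-9):
    # recessive cone of {x: Ax <= b} is {h: Ah <= 0}
    return all(a[0] * h[0] + a[1] * h[1] <= tol for a in A)

random.seed(2)
for _ in range(500):
    x = (random.uniform(0, 3), random.uniform(0, 3))
    h = (random.uniform(0, 3), random.uniform(0, 3))
    if in_M(x) and in_Rec(h):
        # any recessive direction can be added in any amount without leaving M
        for t in (0.5, 1.0, 10.0):
            assert in_M((x[0] + t * h[0], x[1] + t * h[1]))
```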

Proof of the lemma is immediate: if x ∈ M and x̄ + th ∈ M for all t ≥ 0, then, due to convexity, for any fixed τ ≥ 0 we have

ε(x̄ + (τ/ε)h) + (1 − ε)x ∈ M

for all ε ∈ (0, 1). As ε → +0, the left hand side tends to x + τh, and since M is closed, x + τh ∈ M for every τ ≥ 0. □

Exercise B.18 Let M be a closed nonempty convex set. Prove that Rec(M) ≠ {0} if and
only if M is unbounded.

Lemma B.2.3, of course, resolves all our problems with the “only if” part. Indeed, here we should prove that if M possesses extreme points, then M does not contain lines, or, which is the same, that if M contains lines, then it has no extreme points. But the latter statement is immediate: if M contains a line, then, by Lemma, there is a line in M passing through every given point of M, so that no point can be extreme. □

Now let us prove the “if” part of (i). Thus, from now on we assume that M does not contain lines;
our goal is to prove that then M possesses extreme points. Let us start with the following

Lemma B.2.4 Let Q be a nonempty closed convex set, x̄ be a relative boundary point of Q
and Π be a hyperplane supporting to Q at x̄. Then all extreme points of the nonempty closed
convex set Π ∩ Q are extreme points of Q.
Proof of the Lemma. First, the set Π ∩ Q is closed and convex (as an intersection of
two sets with these properties); it is nonempty, since it contains x̄ (Π contains x̄ due to the
definition of a supporting plane, and Q contains x̄ due to the closedness of Q). Second, let
a be the linear form associated with Π:

Π = {y : aT y = aT x̄},

so that

inf_{x∈Q} aT x < sup_{x∈Q} aT x = aT x̄ (B.2.17)

(see Proposition B.2.4). Assume that y is an extreme point of Π ∩ Q; what we should do is


to prove that y is an extreme point of Q, or, which is the same, to prove that

y = λu + (1 − λ)v

for some u, v ∈ Q and λ ∈ (0, 1) is possible only if y = u = v. To this end it suffices to


demonstrate that under the above assumptions u, v ∈ Π ∩ Q (or, which is the same, to prove
that u, v ∈ Π, since the points are known to belong to Q); indeed, we know that y is an
extreme point of Π∩Q, so that the relation y = λu+(1−λ)v with λ ∈ (0, 1) and u, v ∈ Π∩Q
does imply y = u = v.
To prove that u, v ∈ Π, note that since y ∈ Π we have

aT y = aT x̄ ≥ max{aT u, aT v}

(the concluding inequality follows from (B.2.17)). On the other hand,

aT y = λaT u + (1 − λ)aT v;

combining these observations and taking into account that λ ∈ (0, 1), we conclude that

aT y = aT u = aT v.

But these equalities imply that u, v ∈ Π. 2

Equipped with the Lemma, we can easily prove (i) by induction on the dimension of the convex set
M (recall that this is nothing but the affine dimension of the affine span of M , i.e., the linear dimension
of the linear subspace L such that Aff(M ) = a + L).
There is nothing to do if the dimension of M is zero, i.e., if M is a point – then, of course, M = Ext(M ).
Now assume that we already have proved the nonemptiness of Ext(T ) for all nonempty closed and not
containing lines convex sets T of certain dimension k, and let us prove that the same statement is valid
for the sets of dimension k + 1. Let M be a closed convex nonempty and not containing lines set of
dimension k + 1. Since M does not contain lines and is of positive dimension, it differs from Aff(M) and therefore it possesses a relative boundary point x̄ 3). According to Proposition B.2.4, there exists a hyperplane Π = {x : aT x = aT x̄} which supports M at x̄:

inf_{x∈M} aT x < max_{x∈M} aT x = aT x̄.

3) Indeed, there exists z ∈ Aff(M)\M, so that the points
xλ = x + λ(z − x)
(x is an arbitrary fixed point of M) do not belong to M for some λ ≥ 1, while x0 = x belongs to M. The set of those λ ≥ 0 for which xλ ∈ M is therefore nonempty and bounded from above; this set clearly is closed (since M is closed). Thus, there exists the largest λ = λ∗ for which xλ ∈ M. We claim that xλ∗ is a relative boundary point of M. Indeed, by construction this is a point from M. If it were a point from the relative interior of M, then all the points xλ with λ close to λ∗ and greater than λ∗ would also belong to M, which contradicts the maximality of λ∗.

By the same Proposition, the set T = Π∩M (which is closed, convex and nonempty) is of affine dimension
less than the one of M , i.e., of the dimension ≤ k. T clearly does not contain lines (since even the larger
set M does not contain lines). By the inductive hypothesis, T possesses extreme points, and by Lemma
B.2.4 all these points are extreme also for M . The inductive step is completed, and (i) is proved. □

Now let us prove (ii). Thus, let M be nonempty, convex, closed and bounded; we should prove that

M = Conv(Ext(M )).

What is immediately seen is that the right hand side set is contained in the left hand side one. Thus, all
we need is to prove that every x ∈ M is a convex combination of points from Ext(M ). Here we again
use induction on the affine dimension of M . The case of 0-dimensional set M (i.e., a point) is trivial.
Assume that the statement in question is valid for all k-dimensional convex closed and bounded sets, and
let M be a convex closed and bounded set of dimension k + 1. Let x ∈ M; to represent x as a convex combination of points from Ext(M), let us pass through x an arbitrary line ℓ = {x + λh : λ ∈ R} (h ≠ 0) in the affine span Aff(M) of M. Moving along this line from x in each of the two possible directions, we eventually leave M (since M is bounded); as it was explained in the proof of (i), it means that there exist nonnegative λ+ and λ− such that the points

x̄± = x ± λ± h

both belong to the relative boundary of M. Let us verify that x̄± are convex combinations of the extreme points of M (this will complete the proof, since x clearly is a convex combination of the two points x̄±). Indeed, M admits a hyperplane Π supporting it at x̄+; as it was explained in the proof of (i), the set Π ∩ M (which clearly is convex, closed and bounded) is of affine dimension less than that of M; by the inductive hypothesis, the point x̄+ of this set is a convex combination of extreme points of the set, and by Lemma B.2.4 all these extreme points are extreme points of M as well. Thus, x̄+ is a convex combination of extreme points of M. Similar reasoning is valid for x̄−. □
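The step "move from x along ±h until the relative boundary is hit", used in the proof above, can be computed explicitly for a polyhedral set by the usual ratio test. A minimal sketch (our own helper names, assuming an interior starting point of a bounded set {y : Ay ≤ b}):

```python
# Exit step from an interior point x along direction h:
#   lambda_+ = min over {i: a_i . h > 0} of (b_i - a_i . x) / (a_i . h).
def exit_step(A, b, x, h):
    steps = []
    for a, bb in zip(A, b):
        ah = sum(ai * hi for ai, hi in zip(a, h))
        if ah > 1e-12:  # this constraint tightens when moving along h
            ax = sum(ai * xi for ai, xi in zip(a, x))
            steps.append((bb - ax) / ah)
    return min(steps)

# Unit square [0,1]^2 as {y: Ay <= b}
A = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]
b = [1.0, 0.0, 1.0, 0.0]
x = (0.25, 0.5)
h = (1.0, 0.0)
lam_plus = exit_step(A, b, x, h)                  # hits the facet x1 = 1
lam_minus = exit_step(A, b, x, (-h[0], -h[1]))    # hits the facet x1 = 0
assert abs(lam_plus - 0.75) < 1e-9
assert abs(lam_minus - 0.25) < 1e-9
```

The two boundary points x + λ+h and x − λ−h are the x̄± of the proof.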

B.2.8.C. Example: Extreme points of a polyhedral set.


Consider a polyhedral set

K = {x ∈ Rn : Ax ≤ b},

A being an m × n matrix and b being a vector from Rm. What are the extreme points of K? The answer is given by the following

Theorem B.2.11 [Extreme points of polyhedral set]


Let x ∈ K. The vector x is an extreme point of K if and only if some n linearly independent (i.e., with
linearly independent vectors of coefficients) inequalities of the system Ax ≤ b are equalities at x.

Proof. Let aTi , i = 1, ..., m, be the rows of A.


The “only if” part: let x be an extreme point of K, and let I be the set of those indices i for which aTi x = bi; we should prove that the set F of vectors {ai : i ∈ I} contains n linearly independent vectors, or, which is the same, that Lin(F) = Rn. Assume that it is not the case; then the orthogonal complement to F contains a nonzero vector h (since the dimension of F⊥ is equal to n − dim Lin(F) and is therefore positive). Consider the segment ∆ε = [x − εh, x + εh], ε > 0 being the parameter of our construction. Since h is orthogonal to the “active” vectors ai – those with i ∈ I – all points y of this segment satisfy the relations aTi y = aTi x = bi. Now, if i is a “nonactive” index – one with aTi x < bi – then aTi y ≤ bi for all y ∈ ∆ε, provided that ε is small enough. Since there are finitely many nonactive indices, we can choose ε > 0 in such a way that all y ∈ ∆ε will satisfy all “nonactive” inequalities aTi y ≤ bi, i ∉ I. Since every y ∈ ∆ε satisfies, as we have seen, also all “active” inequalities, we conclude that with the above choice of ε we get ∆ε ⊂ K, which is a contradiction: ε > 0 and h ≠ 0, so that ∆ε is a nontrivial segment with the midpoint x, and no such segment can be contained in K, since x is an extreme point of K. □

To prove the “if” part, assume that x ∈ K is such that among the inequalities aTi x ≤ bi which are equalities at x there are n linearly independent, say, those with indices 1, ..., n, and let us prove that x is an extreme point of K. This is immediate: assuming that x is not an extreme point, we would get the existence of a nonzero vector h such that x ± h ∈ K. In other words, for i = 1, ..., n we would have bi ± aTi h ≡ aTi (x ± h) ≤ bi, which is possible only if aTi h = 0, i = 1, ..., n. But the only vector which is orthogonal to n linearly independent vectors in Rn is the zero vector (why?), and we get h = 0, which was assumed not to be the case. □

Corollary B.2.1 The set of extreme points of a polyhedral set is finite.

Indeed, according to the above Theorem, every extreme point of a polyhedral set K = {x ∈ Rn : Ax ≤ b} satisfies the equality version of a certain n-inequality subsystem of the original system, the matrix of the subsystem being nonsingular. Due to the latter fact, an extreme point is uniquely defined by the corresponding subsystem, so that the number of extreme points does not exceed the number C_n^m = m!/(n!(m − n)!) of n × n submatrices of the matrix A and is therefore finite. □

Note that C_n^m is nothing but an upper (and typically very conservative) bound on the number of extreme points of a polyhedral set given by m inequalities in Rn: some n × n submatrices of A can be singular and, what is more important, the majority of the nonsingular ones normally produce “candidates” which do not satisfy the remaining inequalities.
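The Corollary's argument is effectively an algorithm: enumerate the n × n subsystems, solve each nonsingular one as equalities, and keep the solutions satisfying the remaining inequalities. A sketch for n = 2 (our own code, using Cramer's rule; the helper name is hypothetical):

```python
from itertools import combinations

# Enumerate the extreme points of K = {x in R^2: Ax <= b}: for every pair
# of rows with nonsingular 2x2 matrix, solve the equality subsystem and
# keep the solution iff it satisfies all remaining inequalities.
def extreme_points_2d(A, b, tol=1e-9):
    pts = []
    for i, j in combinations(range(len(A)), 2):
        (a11, a12), (a21, a22) = A[i], A[j]
        det = a11 * a22 - a12 * a21
        if abs(det) < tol:
            continue  # singular 2x2 submatrix, no candidate
        x1 = (b[i] * a22 - a12 * b[j]) / det
        x2 = (a11 * b[j] - b[i] * a21) / det
        if all(a[0] * x1 + a[1] * x2 <= bb + tol for a, bb in zip(A, b)):
            pts.append((round(x1, 9), round(x2, 9)))
    return sorted(set(pts))

# Unit square: 4 of the C(4,2) = 6 candidate pairs survive
A = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]
b = [1.0, 0.0, 1.0, 0.0]
verts = extreme_points_2d(A, b)
assert verts == [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
```

Maximizing cT x over the returned vertices is exactly the brute-force recipe for solvable LPs discussed in Remark B.2.1 below.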

Remark B.2.1 The result of Theorem B.2.11 is very important, in particular, for the theory
of the Simplex method – the traditional computational tool of Linear Programming. When
applied to the LP program in the standard form

min_x { cT x : P x = p, x ≥ 0 } [x ∈ Rn],

with k × n matrix P , the result of Theorem B.2.11 is that extreme points of the feasible
set are exactly the basic feasible solutions of the system P x = p, i.e., nonnegative vectors
x such that P x = p and the set of columns of P associated with positive entries of x is
linearly independent. Since the feasible set of an LP program in the standard form clearly
does not contain lines, among the optimal solutions (if they exist) to an LP program in
the standard form at least one is an extreme point of the feasible set (Theorem B.2.14.(ii)).
Thus, in principle we could look through the finite set of all extreme points of the feasible set (≡ through all basic feasible solutions) and choose the one with the best value of the objective. This recipe allows one to find an optimal solution in finitely many arithmetic operations, provided that the program is solvable, and is, basically, what the Simplex method does; this
latter method, of course, looks through the basic feasible solutions in a smart way which
normally allows to deal with a negligible part of them only.
Another useful consequence of Theorem B.2.11 is that if all the data in an LP program are
rational, then every extreme point of the feasible domain of the program is a vector with
rational entries. In particular, a solvable standard form LP program with rational data has
at least one rational optimal solution.

B.2.8.C.1. Illustration: Birkhoff Theorem An n × n matrix X is called double stochastic, if its entries are nonnegative and all column and row sums are equal to 1. These matrices (treated as elements of R^(n×n) = R^(n²)) form a bounded polyhedral set, specifically, the set

Πn = {X = [xij]_{i,j=1}^n : xij ≥ 0 ∀i, j; Σ_i xij = 1 ∀j; Σ_j xij = 1 ∀i}

By Krein-Milman Theorem, Πn is the convex hull of its extreme points. What are these extreme points?
The answer is given by the important

Theorem B.2.12 [Birkhoff Theorem] Extreme points of Πn are exactly the permutation matrices of
order n, i.e., n × n Boolean (i.e., with 0/1 entries) matrices with exactly one nonzero element (equal to
1) in every row and every column.

Exercise B.19 [Easy part] Prove the easy part of the Theorem, specifically, that every n×n permutation
matrix is an extreme point of Πn .

Proof of the difficult part. Now let us prove that every extreme point of Πn is a permutation matrix. To this end let us note that the 2n linear equations in the definition of Πn – those saying that all row and column sums are equal to 1 – are linearly dependent, and dropping one of them, say, Σ_i xin = 1, we do not alter the set. Indeed, the remaining equalities say that all row sums are equal to 1, so that the total sum of all entries in X is n, and that the first n − 1 column sums are equal to 1, meaning that the last column sum is n − (n − 1) = 1. Thus, we lose nothing when assuming that there are just 2n − 1 equality constraints in the description of Πn. Now let us prove the claim by induction in n. The base n = 1 is trivial. Let us justify the inductive step n − 1 ⇒ n. Thus, let X be an extreme point of Πn. By Theorem B.2.11, among the constraints defining Πn (i.e., 2n − 1 equalities and n² inequalities xij ≥ 0) there should be n² linearly independent constraints which are satisfied at X as equations. Thus, at least n² − (2n − 1) = (n − 1)² entries in X should be zeros. It follows that at least one of the columns of X contains at most 1 nonzero entry (since otherwise the number of zero entries in X would be at most n(n − 2) < (n − 1)²). Thus, there exists at least one column with at most 1 nonzero entry; since the sum of entries in this column is 1, this nonzero entry, let it be xīj̄, is equal to 1. Since the entries in row ī are nonnegative, sum up to 1 and xīj̄ = 1, xīj̄ is the only nonzero entry in its row and its column. Eliminating from X the row ī and the column j̄, we get an (n − 1) × (n − 1) double stochastic matrix. By inductive hypothesis, this matrix is a convex combination of (n − 1) × (n − 1) permutation matrices. Augmenting every one of these matrices by the column and the row we have eliminated, we get a representation of X as a convex combination of n × n permutation matrices: X = Σ_ℓ λℓ Pℓ with nonnegative λℓ summing up to 1. Since Pℓ ∈ Πn and X is an extreme point of Πn, all the matrices Pℓ with nonzero coefficients λℓ must be equal to X, so that X is one of the permutation matrices Pℓ and as such is a permutation matrix. □
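The decomposition whose existence the theorem guarantees can be computed greedily for small n (a Birkhoff–von-Neumann-style sketch of our own; brute force over all permutations, so only suitable for tiny matrices):

```python
from itertools import permutations

# Repeatedly find a permutation supported on positive entries and
# subtract the largest feasible multiple of the corresponding
# permutation matrix.
def birkhoff_decompose(X, tol=1e-9):
    n = len(X)
    X = [row[:] for row in X]  # work on a copy
    terms = []  # (weight, sigma) pairs, sigma mapping row i -> column sigma[i]
    while True:
        best = None
        for sigma in permutations(range(n)):
            w = min(X[i][sigma[i]] for i in range(n))
            if w > tol and (best is None or w > best[0]):
                best = (w, sigma)
        if best is None:
            break
        w, sigma = best
        terms.append((w, sigma))
        for i in range(n):
            X[i][sigma[i]] -= w
    return terms

X = [[0.5, 0.5, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 0.5, 0.5]]
terms = birkhoff_decompose(X)
assert abs(sum(w for w, _ in terms) - 1.0) < 1e-6
# the weighted permutation matrices reassemble X
Y = [[0.0] * 3 for _ in range(3)]
for w, sigma in terms:
    for i in range(3):
        Y[i][sigma[i]] += w
assert all(abs(Y[i][j] - X[i][j]) < 1e-6 for i in range(3) for j in range(3))
```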

B.2.9 Structure of polyhedral sets


B.2.9.A. Main result
By definition, a polyhedral set M is the set of all solutions to a finite system of nonstrict linear inequalities:

M = {x ∈ Rn : Ax ≤ b}, (B.2.18)

where A is a matrix of column size n and a certain row size m and b is an m-dimensional vector. This is an “outer” description of a polyhedral set. We are about to establish an important result on the equivalent “inner” representation of a polyhedral set.

Consider the following construction. Let us take two finite nonempty sets of vectors V (“vertices”) and R (“rays”) and build the set

M(V, R) = Conv(V) + Cone (R) = { Σ_{v∈V} λv v + Σ_{r∈R} μr r : λv ≥ 0, μr ≥ 0, Σ_{v∈V} λv = 1 }.

Thus, we take all vectors which can be represented as sums of convex combinations of the points from V and conic combinations of the points from R. The set M(V, R) clearly is convex (as the arithmetic sum of the two convex sets Conv(V) and Cone (R)). The promised inner description of polyhedral sets is as follows:

Theorem B.2.13 [Inner description of a polyhedral set] The sets of the form M (V, R) are exactly the
nonempty polyhedral sets: M (V, R) is polyhedral, and every nonempty polyhedral set M is M (V, R) for
properly chosen V and R.
The polytopes M (V, {0}) = Conv(V ) are exactly the nonempty and bounded polyhedral sets. The sets
of the type M ({0}, R) are exactly the polyhedral cones (sets given by finitely many nonstrict homogeneous
linear inequalities).

Remark B.2.2 In addition to the results of the Theorem, it can be proved that in the representation of a nonempty polyhedral set M as M = Conv(V) + Cone (R)
– the “conic” part Cone (R) (not the set R itself!) is uniquely defined by M and is the recessive cone of M (see Comment to Lemma B.2.3);
– if M does not contain lines, then V can be chosen as the set of all extreme points of M.

Postponing temporarily the proof of Theorem B.2.13, let us explain why this theorem is that important
– why it is so nice to know both inner and outer descriptions of a polyhedral set.
Consider a number of natural questions:
• A. Is it true that the inverse image of a polyhedral set M ⊂ Rn under an affine mapping y ↦ P(y) = P y + p : Rm → Rn, i.e., the set

P −1 (M ) = {y ∈ Rm : P y + p ∈ M }

is polyhedral?
• B. Is it true that the image of a polyhedral set M ⊂ Rn under an affine mapping x ↦ y = P(x) = P x + p : Rn → Rm – the set

P(M) = {P x + p : x ∈ M}
is polyhedral?
• C. Is it true that the intersection of two polyhedral sets is again a polyhedral set?
• D. Is it true that the arithmetic sum of two polyhedral sets is again a polyhedral set?
The answers to all these questions are positive; one way to see it is to use calculus of polyhedral rep-
resentations along with the fact that polyhedrally representable sets are exactly the same as polyhedral
sets (section B.2.4). Another very instructive way is to use the just outlined results on the structure of
polyhedral sets, which we intend to do now.
It is very easy to answer affirmatively to A, starting from the original – outer – definition of a
polyhedral set: if M = {x : Ax ≤ b}, then, of course,

P −1 (M ) = {y : A(P y + p) ≤ b} = {y : (AP )y ≤ b − Ap}

and therefore P −1 (M ) is a polyhedral set.


An attempt to answer affirmatively to B via the same definition fails – there is no easy way to
convert the linear inequalities defining a polyhedral set into those defining its image, and it is absolutely

unclear why the image indeed is given by finitely many linear inequalities. Note, however, that there
is no difficulty to answer affirmatively to B with the inner description of a nonempty polyhedral set: if
M = M (V, R), then, evidently,
P(M ) = M (P(V ), P R),
where P R = {P r : r ∈ R} is the image of R under the action of the homogeneous part of P.
Similarly, positive answer to C becomes evident, when we use the outer description of a polyhedral
set: taking intersection of the solution sets to two systems of nonstrict linear inequalities, we, of course,
again get the solution set to a system of this type – you simply should put together all inequalities from
the original two systems. And it is very unclear how to answer positively to D with the outer definition
of a polyhedral set – what happens with inequalities when we add the solution sets? In contrast to this,
the inner description gives the answer immediately:

M(V, R) + M(V′, R′) = Conv(V) + Cone (R) + Conv(V′) + Cone (R′)
  = [Conv(V) + Conv(V′)] + [Cone (R) + Cone (R′)]
  = Conv(V + V′) + Cone (R ∪ R′)
  = M(V + V′, R ∪ R′).

Note that in this computation we used two rules which should be justified: Conv(V) + Conv(V′) = Conv(V + V′) and Cone (R) + Cone (R′) = Cone (R ∪ R′). The second is evident from the definition of the conic hull, and only the first needs simple reasoning. To prove it, note that Conv(V) + Conv(V′) is a convex set which contains V + V′ and therefore contains Conv(V + V′). The inverse inclusion is proved as follows: if

x = Σ_i λi vi,  y = Σ_j λ′j v′j

are convex combinations of points from V, resp., V′, then, as it is immediately seen (please check!),

x + y = Σ_{i,j} λi λ′j (vi + v′j),

and the right hand side is a convex combination of points from V + V′.
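The identity just used can also be checked numerically. The sketch below (our own) draws random convex weights and confirms both the identity and that the products λiλ′j are legitimate convex weights:

```python
import random

# If x = sum_i lam_i v_i and y = sum_j lam'_j v'_j are convex combinations,
# then x + y = sum_{i,j} lam_i lam'_j (v_i + v'_j), where the weights
# lam_i lam'_j are nonnegative and sum to 1.
random.seed(3)
V = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
Vp = [(2.0, 2.0), (3.0, 2.0)]

def rand_simplex(k):
    w = [random.random() for _ in range(k)]
    s = sum(w)
    return [wi / s for wi in w]

lam, lam_p = rand_simplex(len(V)), rand_simplex(len(Vp))
x = tuple(sum(l * v[d] for l, v in zip(lam, V)) for d in range(2))
y = tuple(sum(l * v[d] for l, v in zip(lam_p, Vp)) for d in range(2))

weights = [li * lj for li in lam for lj in lam_p]
points = [(vi[0] + vj[0], vi[1] + vj[1]) for vi in V for vj in Vp]
assert abs(sum(weights) - 1.0) < 1e-12
z = tuple(sum(w * p[d] for w, p in zip(weights, points)) for d in range(2))
assert all(abs(z[d] - (x[d] + y[d])) < 1e-9 for d in range(2))
```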


We see that it is extremely useful to keep in mind both descriptions of polyhedral sets – what is
difficult to see with one of them, is absolutely clear with another.
As a seemingly “more important” application of the developed theory, let us look at Linear Program-
ming.

B.2.9.B. Theory of Linear Programming


A general Linear Programming program is the problem of maximizing a linear objective function over a
polyhedral set:
(P)   max_x { cT x : x ∈ M = {x ∈ Rn : Ax ≤ b} };

here c is a given n-dimensional vector – the objective, A is a given m × n constraint matrix and b ∈ Rm
is the right hand side vector. Note that (P) is called “Linear Programming program in the canonical
form”; there are other equivalent forms of the problem.

B.2.9.B.1. Solvability of a Linear Programming program. According to the Linear Pro-


gramming terminology which you for sure know, (P) is called
• feasible, if it admits a feasible solution, i.e., the system Ax ≤ b is solvable, and infeasible otherwise;
• bounded, if it is feasible and the objective is bounded from above on the feasible set, and unbounded, if it is feasible, but the objective is not bounded from above on the feasible set;
• solvable, if it is feasible and the optimal solution exists – the objective attains its maximum on the
feasible set.

If the program is bounded, then the least upper bound of the values of the objective on the feasible set is a real number; this real is called the optimal value of the program and is denoted by c∗. It is convenient to assign an
optimal value to unbounded and infeasible programs as well – for an unbounded program it, by definition,
is +∞, and for an infeasible one it is −∞.
Note that our terminology is aimed at maximization programs; if the program is to minimize the objective, the terminology is updated in the natural way: when defining bounded/unbounded programs, we should speak about boundedness of the objective from below rather than from above, etc. E.g., the optimal value of an unbounded minimization program is −∞, and of an infeasible one it is +∞. This terminology is consistent with the usual way of converting a minimization problem into an equivalent maximization one by replacing the original objective c with −c: the properties of feasibility – boundedness – solvability remain unchanged, and the optimal value in all cases changes its sign.
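The feasible/bounded/solvable trichotomy is easy to see in computation. Below is a small Python sketch added here for illustration (the helper solve_max and the toy instances are our own, not part of the text); it uses scipy.optimize.linprog, which minimizes, so max cT x is handled as min (−c)T x:

```python
# Illustrating feasible-and-bounded / unbounded / infeasible LPs numerically.
# scipy.optimize.linprog minimizes, so max c^T x becomes min (-c)^T x.
import numpy as np
from scipy.optimize import linprog

def solve_max(c, A, b):
    """Solve max c^T x s.t. Ax <= b over free variables; return (status, value)."""
    res = linprog(-np.asarray(c, dtype=float), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * len(c))
    # status codes: 0 = solved, 2 = infeasible, 3 = unbounded
    return res.status, (-res.fun if res.status == 0 else None)

# Bounded (hence, by Theorem B.2.14, solvable): max x + y over the box [0,1]^2
A_box = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
b_box = np.array([1, 0, 1, 0], dtype=float)
status, val = solve_max([1, 1], A_box, b_box)
assert status == 0 and abs(val - 2.0) < 1e-7

# Unbounded: max x subject to x >= 0 (the single inequality -x <= 0)
status, _ = solve_max([1], np.array([[-1.0]]), np.array([0.0]))
assert status == 3

# Infeasible: x <= -1 and x >= 1 cannot hold together
status, _ = solve_max([1], np.array([[1.0], [-1.0]]), np.array([-1.0, -1.0]))
assert status == 2
```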
I have said that you for sure know the above terminology; this is not exactly true, since you definitely have heard and used the words “infeasible LP program” and “unbounded LP program”, but hardly the words “bounded LP program” – only “solvable”. The reason is a fact which indeed is true, although absolutely unclear in advance: a bounded LP program always is solvable. We have already established this fact, even twice – via Fourier-Motzkin elimination (Section B.2.4) and via the LP Duality Theorem. Let us reestablish this fact, fundamental for Linear Programming, with the tools we have at our disposal now.

Theorem B.2.14 (i) A Linear Programming program is solvable if and only if it is bounded.
(ii) If the program is solvable and the feasible set of the program does not contain lines, then at least
one of the optimal solutions is an extreme point of the feasible set.

Proof. (i): The “only if” part of the statement is tautological: the definition of solvability includes
boundedness. What we should prove is the “if” part – that a bounded program is solvable. This is
immediately given by the inner description of the feasible set M of the program: this is a polyhedral set,
so that being nonempty (as it is for a bounded program), it can be represented as

M (V, R) = Conv(V ) + Cone (R)

for some nonempty finite sets V and R. I claim first of all that since (P) is bounded, the inner product of c with every vector from R is nonpositive. Indeed, otherwise there would be r ∈ R with cT r > 0; since M (V, R) clearly contains, along with every point x, the entire ray {x + tr : t ≥ 0}, and the objective evidently is unbounded from above on this ray, it would be unbounded from above on M , which is not the case.
Now let us choose in the finite and nonempty set V the point, let it be called v ∗ , which maximizes
the objective on V . I claim that v ∗ is an optimal solution to (P), so that (P) is solvable. The justification
of the claim is immediate: v ∗ clearly belongs to M ; now, a generic point of M = M (V, R) is
x = ∑_{v∈V} λv v + ∑_{r∈R} µr r

with nonnegative λv and µr and with ∑_v λv = 1, so that

cT x = ∑_v λv cT v + ∑_r µr cT r
     ≤ ∑_v λv cT v     [since µr ≥ 0 and cT r ≤ 0, r ∈ R]
     ≤ ∑_v λv cT v∗    [since λv ≥ 0 and cT v ≤ cT v∗ ]
     = cT v∗           [since ∑_v λv = 1]

(ii): if the feasible set of (P), let it be called M , does not contain lines, it, being convex and closed
(as a polyhedral set) possesses extreme points. It follows that (ii) is valid in the trivial case when the
objective of (ii) is constant on the entire feasible set, since then every extreme point of M can be taken
as the desired optimal solution. The case when the objective is nonconstant on M can be immediately
reduced to the aforementioned trivial case: if x∗ is an optimal solution to (P) and the linear form cT x is

nonconstant on M , then the hyperplane Π = {x : cT x = c∗ } is supporting to M at x∗ ; the set Π ∩ M is


closed, convex, nonempty and does not contain lines; therefore it possesses an extreme point x∗∗ which, on the one hand, clearly is an optimal solution to (P), and on the other hand is an extreme point of M by Lemma B.2.4. 2
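The argument in the proof of (i) can be spot-checked numerically: when cT r ≤ 0 for every generator r, no point of Conv(V ) + Cone (R) beats the best point of V . A sketch with a toy instance of our own choosing:

```python
# Spot-check of Theorem B.2.14(i): if c^T r <= 0 for all r in R, then the
# maximum of c^T x over M(V, R) = Conv(V) + Cone(R) equals max_{v in V} c^T v.
import numpy as np

rng = np.random.default_rng(0)
V = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 1.0]])   # the finite set V
R = np.array([[-1.0, 0.0], [0.0, -1.0]])             # recessive directions
c = np.array([1.0, 3.0])
assert (R @ c <= 0).all()            # hypothesis of the proof: c^T r <= 0

best_vertex_value = (V @ c).max()    # c^T v*, attained here at v = (0, 1)

for _ in range(1000):
    lam = rng.random(len(V)); lam /= lam.sum()   # convex weights
    mu = rng.random(len(R)) * 10.0               # nonnegative conic weights
    x = lam @ V + mu @ R                         # a generic point of M(V, R)
    assert c @ x <= best_vertex_value + 1e-9
```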

B.2.9.C. Structure of a polyhedral set: proofs


B.2.9.C.1. Structure of a bounded polyhedral set. Let us start with proving a significant
part of Theorem B.2.13 – the one describing bounded polyhedral sets.

Theorem B.2.15 [Structure of a bounded polyhedral set] A bounded and nonempty polyhedral set M
in Rn is a polytope, i.e., is the convex hull of a finite nonempty set:

M = M (V, {0}) = Conv(V );

one can choose as V the set of all extreme points of M .


Vice versa – a polytope is a bounded and nonempty polyhedral set.

Proof. The first part of the statement – that a bounded nonempty polyhedral set is a polytope – is
readily given by the Krein-Milman Theorem combined with Corollary B.2.1. Indeed, a polyhedral set
always is closed (as a set given by nonstrict inequalities involving continuous functions) and convex; if it
is also bounded and nonempty, it, by the Krein-Milman Theorem, is the convex hull of the set V of its
extreme points; V is finite by Corollary B.2.1. 2

Now let us prove the more difficult part of the statement – that a polytope is a bounded polyhedral
set. The fact that a convex hull of a finite set is bounded is evident. Thus, all we need is to prove that
the convex hull of finitely many points is a polyhedral set. To this end note that this convex hull clearly
is polyhedrally representable:
Conv{v1 , ..., vN } = { x : ∃λ ≥ 0 : ∑_i λi = 1, x = ∑_i λi vi }

and therefore is polyhedral by Theorem B.2.5. 2
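The polyhedral representation above also gives a practical membership test: x ∈ Conv{v1 , ..., vN } iff the system λ ≥ 0, ∑_i λi = 1, ∑_i λi vi = x is feasible, which an LP solver can decide. A sketch (the helper in_convex_hull and the unit-square example are ours, not the text's):

```python
# Membership of x in Conv{v_1,...,v_N} as LP feasibility in the lifted
# variables lambda: lambda >= 0, sum lambda = 1, V^T lambda = x.
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(x, V, ):
    """V: (N, n) array whose rows are the points; True iff x in Conv(rows)."""
    N, n = V.shape
    A_eq = np.vstack([V.T, np.ones((1, N))])   # V^T lambda = x and sum lambda = 1
    b_eq = np.concatenate([x, [1.0]])
    res = linprog(np.zeros(N), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * N)      # lambda >= 0
    return res.status == 0                     # 0 = feasible (solved)

square = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
assert in_convex_hull(np.array([0.5, 0.5]), square)       # the center
assert in_convex_hull(np.array([1.0, 0.0]), square)       # a vertex
assert not in_convex_hull(np.array([1.5, 0.5]), square)   # outside the square
```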

B.2.9.C.2. Structure of a general polyhedral set: completing the proof. Now let us
prove the general Theorem B.2.13. The proof basically follows the lines of that of Theorem B.2.15, but with one elaboration: now we cannot use the Krein-Milman Theorem to take part of the difficulties upon itself.
To simplify language let us call VR-sets (“V” from “vertex”, “R” from rays) the sets of the form
M (V, R), and P-sets the nonempty polyhedral sets. We should prove that every P-set is a VR-set, and
vice versa. We start with proving that every P-set is a VR-set.

B.2.9.C.2.A. P⇒VR:

P⇒VR, Step 1: reduction to the case when the P-set does not contain lines. Let
M be a P-set, so that M is the set of all solutions to a solvable system of linear inequalities:

M = {x ∈ Rn : Ax ≤ b} (B.2.19)

with m × n matrix A. Such a set may contain lines; if h is the direction of a line in M , then A(x + th) ≤ b
for some x and all t ∈ R, which is possible only if Ah = 0. Vice versa, if h is from the kernel of A, i.e., if
Ah = 0, then the line x + Rh with x ∈ M clearly is contained in M . Thus, we come to the following fact:

Lemma B.2.5 A nonempty polyhedral set (B.2.19) contains lines if and only if the kernel of A is nontrivial, and the nonzero vectors from the kernel are exactly the directions of the lines contained in M : if M contains a line with direction h, then h ∈ Ker A, and vice versa: if 0 ≠ h ∈ Ker A and x ∈ M , then M contains the entire line x + Rh.
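The Lemma lends itself to a quick numerical check (the toy matrix A below is our own; scipy.linalg.null_space returns an orthonormal basis of the kernel):

```python
# Lemma B.2.5 numerically: the directions of the lines contained in
# {x : Ax <= b} are exactly the nonzero vectors of Ker A.
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 0.0, 0.0],
              [-1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0]])    # constraints are blind to direction (0, 1, -1)
b = np.array([1.0, 1.0, 2.0])

L = null_space(A)                  # orthonormal basis of Ker A, shape (3, k)
assert L.shape[1] == 1             # here the kernel is one-dimensional

h = L[:, 0]
x0 = np.zeros(3)                   # a feasible point: A @ 0 = 0 <= b
for t in np.linspace(-100, 100, 11):
    # the whole line x0 + R h stays inside the polyhedral set
    assert (A @ (x0 + t * h) <= b + 1e-9).all()
```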
Given a nonempty set (B.2.19), let us denote by L the kernel of A and by L⊥ the orthogonal complement
to the kernel, and let M 0 be the cross-section of M by L⊥ :

M 0 = {x ∈ L⊥ : Ax ≤ b}.

The set M 0 clearly does not contain lines (the direction of every line in M 0 , on one hand, should belong to L⊥ due to M 0 ⊂ L⊥ , and on the other hand should belong to L = Ker A, since a line in M 0 ⊂ M is a line in M as well; since L ∩ L⊥ = {0}, no nonzero direction qualifies). The set M 0 is nonempty and, moreover, M = M 0 + L. Indeed, M 0
contains the orthogonal projections of all points from M onto L⊥ (since to project a point onto L⊥ , you
should move from this point along certain line with the direction in L, and all these movements, started
in M , keep you in M by the Lemma) and therefore is nonempty, first, and is such that M 0 + L ⊃ M ,
second. On the other hand, M 0 ⊂ M and M + L = M by Lemma B.2.5, whence M 0 + L ⊂ M . Thus,
M0 + L = M.
Finally, M 0 is a polyhedral set together with M , since the inclusion x ∈ L⊥ can be represented by
dim L linear equations (i.e., by 2dim L nonstrict linear inequalities): you should say that x is orthogonal
to dim L somehow chosen vectors a1 , ..., adim L forming a basis in L.
The results of our effort are as follows: given an arbitrary P-set M , we have represented it as the sum of a P-set M 0 not containing lines and a linear subspace L. With this decomposition in mind we see
that in order to achieve our current goal – to prove that every P-set is a VR-set – it suffices to prove the
same statement for P-sets not containing lines. Indeed, given that M 0 = M (V, R) and denoting by R0 a finite set such that L = Cone (R0 ) (to get R0 , take the set of 2 dim L vectors ±ai , i = 1, ..., dim L, where a1 , ..., adim L is a basis in L), we would obtain

M = M0 + L
= [Conv(V ) + Cone (R)] + Cone (R0 )
= Conv(V ) + [Cone (R) + Cone (R0 )]
= Conv(V ) + Cone (R ∪ R0 )
= M (V, R ∪ R0 )

We see that in order to establish that a P-set is a VR-set it suffices to prove the same statement for
the case when the P-set in question does not contain lines.

P⇒VR, Step 2: the P-set does not contain lines. Our situation is as follows: we are
given a not containing lines P-set in Rn and should prove that it is a VR-set. We shall prove this
statement by induction on the dimension n of the space. The case of n = 0 is trivial. Now assume that
the statement in question is valid for n ≤ k, and let us prove that it is valid also for n = k + 1. Let M
be a not containing lines P-set in Rk+1 :

M = {x ∈ Rk+1 : aTi x ≤ bi , i = 1, ..., m}. (B.2.20)

Without loss of generality we may assume that all ai are nonzero vectors (since M is nonempty, the
inequalities with ai = 0 are valid on the entire Rn , and removing them from the system, we do not vary
its solution set). Note that m > 0 – otherwise M would contain lines, since k ≥ 0.
1°. We may assume that M is unbounded – otherwise the desired result is given already by Theorem B.2.15. By Exercise B.18, there exists a recessive direction r ≠ 0 of M . Thus, M contains the ray {x + tr : t ≥ 0}, whence, by Lemma B.2.3, M + Cone ({r}) = M . 2

2°. For every i ≤ m, where m is the row size of the matrix A from (B.2.20), that is, the number
of linear inequalities in the description of M , let us denote by Mi the corresponding “facet” of M – the

polyhedral set given by the system of inequalities (B.2.20) with the inequality aTi x ≤ bi replaced by the
equality aTi x = bi . Some of these “facets” can be empty; let I be the set of indices i of nonempty Mi ’s.
When i ∈ I, the set Mi is a nonempty polyhedral set – i.e., a P-set – which does not contain lines
(since Mi ⊂ M and M does not contain lines). Besides this, Mi belongs to the hyperplane {aTi x = bi },
i.e., actually it is a P-set in Rk . By the inductive hypothesis, we have representations

Mi = M (Vi , Ri ), i ∈ I,

for properly chosen finite nonempty sets Vi and Ri . I claim that

M = M (∪i∈I Vi , ∪i∈I Ri ∪ {r}), (B.2.21)

where r is the recessive direction of M found in 1°; once the claim is supported, our induction will be complete.
To prove (B.2.21), note, first of all, that the right hand side of this relation is contained in the left
hand side one. Indeed, since Mi ⊂ M and Vi ⊂ Mi , we have Vi ⊂ M , whence also V = ∪i Vi ⊂ M ; since
M is convex, we have
Conv(V ) ⊂ M. (B.2.22)
Further, if r0 ∈ Ri , then r0 is a recessive direction of Mi ; since Mi ⊂ M , r0 is a recessive direction of M
by Lemma B.2.3. Thus, every vector from ∪i∈I Ri is a recessive direction for M , same as r; thus, every
vector from R = ∪i∈I Ri ∪ {r} is a recessive direction of M , whence, again by Lemma B.2.3,

M + Cone (R) = M.

Combining this relation with (B.2.22), we get M (V, R) ⊂ M , as claimed.


It remains to prove that M is contained in the right hand side of (B.2.21). Let x ∈ M , and let us move
from x along the direction (−r), i.e., move along the ray {x − tr : t ≥ 0}. After large enough step along
this ray we leave M . (Indeed, otherwise the ray with the direction −r started at x would be contained
in M , while the opposite ray for sure is contained in M since r is a recessive direction of M ; we would
conclude that M contains a line, which is not the case by assumption.) Since the ray {x − tr : t ≥ 0} eventually leaves M and M is closed, there exists the largest t, let it be called t∗ , such that x0 = x − t∗ r still belongs to M . It is clear that at x0 one of the linear inequalities defining M becomes equality – otherwise we could slightly increase the parameter t∗ still staying in M . Thus, x0 ∈ Mi for some i ∈ I.
Consequently,
x0 ∈ Conv(Vi ) + Cone (Ri ),
whence x = x0 + t∗ r ∈ Conv(Vi ) + Cone (Ri ∪ {r}) ⊂ M (V, R), as claimed. 2

B.2.9.C.2.B. VR⇒P: We already know that every P-set is a VR-set. Now we shall prove that
every VR-set is a P-set, thus completing the proof of Theorem B.2.13. This is immediate: a VR-set is
polyhedrally representable (why?) and thus is a P-set by Theorem B.2.5. 2
Appendix C

Convex functions

C.1 Convex functions: first acquaintance


C.1.1 Definition and Examples
Definition C.1.1 [convex function] A function f : Q → R defined on a nonempty subset Q of Rn and
taking real values is called convex, if
• the domain Q of the function is convex;
• for every x, y ∈ Q and every λ ∈ [0, 1] one has

f (λx + (1 − λ)y) ≤ λf (x) + (1 − λ)f (y). (C.1.1)

If the above inequality is strict whenever x ≠ y and 0 < λ < 1, f is called strictly convex.

A function f such that −f is convex is called concave; the domain Q of a concave function should be
convex, and the function itself should satisfy the inequality opposite to (C.1.1):

f (λx + (1 − λ)y) ≥ λf (x) + (1 − λ)f (y), x, y ∈ Q, λ ∈ [0, 1].

The simplest example of a convex function is an affine function

f (x) = aT x + b

– the sum of a linear form and a constant. This function clearly is convex on the entire space, and the
“convexity inequality” for it is equality. An affine function is both convex and concave; it is easily seen
that a function which is both convex and concave on the entire space is affine.
Here are several elementary examples of “nonlinear” convex functions of one variable:
• functions convex on the whole axis:
x2p , p is a positive integer;
exp{x};
• functions convex on the nonnegative ray:
xp , 1 ≤ p;
−xp , 0 ≤ p ≤ 1;
x ln x;
• functions convex on the positive ray:
1/xp , p > 0;
− ln x.
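Although the analytic criterion for convexity comes only later (Proposition C.2.1), the convexity of these examples can already be probed numerically by sampling the inequality (C.1.1); the helper below and its grids are our own illustration:

```python
# Random sampling of the convexity inequality (C.1.1) for the listed
# one-variable examples, on points drawn from their stated domains.
import math, random

def convexity_holds(f, lo, hi, trials=2000, tol=1e-9):
    rng = random.Random(1)
    for _ in range(trials):
        x, y = rng.uniform(lo, hi), rng.uniform(lo, hi)
        lam = rng.random()
        z = lam * x + (1 - lam) * y
        if f(z) > lam * f(x) + (1 - lam) * f(y) + tol:
            return False            # a violation of (C.1.1) was found
    return True

assert convexity_holds(lambda x: x**4, -5, 5)             # x^{2p} on the axis
assert convexity_holds(math.exp, -5, 5)                   # exp{x}
assert convexity_holds(lambda x: x**1.5, 0, 5)            # x^p, p >= 1
assert convexity_holds(lambda x: -x**0.5, 0, 5)           # -x^p, 0 <= p <= 1
assert convexity_holds(lambda x: x * math.log(x), 1e-6, 5)  # x ln x
assert convexity_holds(lambda x: 1 / x, 1e-3, 5)          # 1/x^p on x > 0
assert convexity_holds(lambda x: -math.log(x), 1e-3, 5)   # -ln x
assert not convexity_holds(lambda x: -x**2, -5, 5)        # sanity: concave
```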


To the moment it is not clear why these functions are convex; we shall soon derive a simple analytic criterion for detecting convexity which immediately demonstrates that the above functions indeed are convex.
A very convenient equivalent definition of a convex function is in terms of its epigraph. Given a
real-valued function f defined on a nonempty subset Q of Rn , we define its epigraph as the set

Epi(f ) = {(t, x) ∈ Rn+1 : x ∈ Q, t ≥ f (x)};

geometrically, to define the epigraph, you should take the graph of the function – the surface {t =
f (x), x ∈ Q} in Rn+1 – and add to this surface all points which are “above” it. The equivalent, more
geometrical, definition of a convex function is given by the following simple statement (prove it!):

Proposition C.1.1 [definition of convexity in terms of the epigraph]


A function f defined on a subset of Rn is convex if and only if its epigraph is a nonempty convex set
in Rn+1 .

More examples of convex functions: norms. Equipped with Proposition C.1.1, we can extend
our initial list of convex functions (several one-dimensional functions and affine ones) by more examples
– norms. Let π(x) be a norm on Rn (see Section B.1.2.B). To the moment we know three examples of norms – the Euclidean norm ‖x‖2 = √(xT x), the 1-norm ‖x‖1 = ∑_i |xi |, and the ∞-norm ‖x‖∞ = max_i |xi |.
It was also claimed (although not proved) that these are three members of an infinite family of norms

‖x‖p = ( ∑_{i=1}^n |xi |^p )^{1/p} ,   1 ≤ p ≤ ∞

(the right hand side of the latter relation for p = ∞ is, by definition, max_i |xi |).
We are about to prove that every norm is convex:

Proposition C.1.2 Let π(x) be a real-valued function on Rn which is positively homogeneous of degree
1:
π(tx) = tπ(x) ∀x ∈ Rn , t ≥ 0.
π is convex if and only if it is subadditive:

π(x + y) ≤ π(x) + π(y) ∀x, y ∈ Rn .

In particular, a norm (which by definition is positively homogeneous of degree 1 and is subadditive) is convex.

Proof is immediate: the epigraph of a positively homogeneous of degree 1 function π clearly is a conic
set: (t, x) ∈ Epi(π) ⇒ λ(t, x) ∈ Epi(π) whenever λ ≥ 0. Now, by Proposition C.1.1 π is convex if and
only if Epi(π) is convex. It is clear that a conic set is convex if and only if it contains the sum of any two of its elements (why?); this latter property is satisfied for the epigraph of a real-valued function if and only if the function is subadditive (evident). 2
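Proposition C.1.2 can be illustrated on the p-norms: homogeneity plus the triangle inequality, and hence the convexity inequality (C.1.1), all hold up under random sampling (an added sketch, with our own choice of dimensions and tolerances):

```python
# Proposition C.1.2 for the p-norms: positive homogeneity of degree 1 plus
# subadditivity (the triangle inequality) imply the convexity inequality.
import numpy as np

rng = np.random.default_rng(2)
for p in (1.0, 1.5, 2.0, 3.0, np.inf):
    norm = lambda x: np.linalg.norm(x, ord=p)
    for _ in range(500):
        x, y = rng.normal(size=4), rng.normal(size=4)
        t, lam = rng.random() * 5, rng.random()
        assert abs(norm(t * x) - t * norm(x)) < 1e-9        # homogeneity
        assert norm(x + y) <= norm(x) + norm(y) + 1e-9      # subadditivity
        # ... and therefore the convexity inequality (C.1.1):
        assert norm(lam * x + (1 - lam) * y) <= lam * norm(x) + (1 - lam) * norm(y) + 1e-9
```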

C.1.2 Elementary properties of convex functions


C.1.2.1 Jensen’s inequality
The following elementary observation is, I believe, one of the most useful observations in the world:

Proposition C.1.3 [Jensen’s inequality] Let f be convex and Q be the domain of f . Then for every convex combination

∑_{i=1}^N λi xi

of points from Q one has

f ( ∑_{i=1}^N λi xi ) ≤ ∑_{i=1}^N λi f (xi ).

The proof is immediate: the points (f (xi ), xi ) clearly belong to the epigraph of f ; since f is convex, its epigraph is a convex set, so that the convex combination

∑_{i=1}^N λi (f (xi ), xi ) = ( ∑_{i=1}^N λi f (xi ), ∑_{i=1}^N λi xi )

of the points also belongs to Epi(f ). By definition of the epigraph, the latter means exactly that ∑_{i=1}^N λi f (xi ) ≥ f ( ∑_{i=1}^N λi xi ). 2

Note that the definition of convexity of a function f is exactly the requirement on f to satisfy the
Jensen inequality for the case of N = 2; we see that to satisfy this inequality for N = 2 is the same as to
satisfy it for all N .
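Jensen’s inequality is easy to sample numerically, e.g. for f (x) = exp{x} (an added sketch with random points and weights of our own choosing):

```python
# Jensen's inequality for f(x) = exp(x): f(sum lam_i x_i) <= sum lam_i f(x_i).
import math, random

rng = random.Random(3)
f = math.exp
for _ in range(200):
    N = rng.randint(2, 10)
    xs = [rng.uniform(-3, 3) for _ in range(N)]
    lams = [rng.random() for _ in range(N)]
    s = sum(lams)
    lams = [l / s for l in lams]                  # normalize: convex weights
    lhs = f(sum(l * x for l, x in zip(lams, xs)))
    rhs = sum(l * f(x) for l, x in zip(lams, xs))
    assert lhs <= rhs + 1e-12
```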

C.1.2.2 Convexity of level sets of a convex function


The following simple observation is also very useful:
Proposition C.1.4 [convexity of level sets] Let f be a convex function with the domain Q. Then, for
every real α, the set
levα (f ) = {x ∈ Q : f (x) ≤ α}
– the level set of f – is convex.
The proof takes one line: if x, y ∈ levα (f ) and λ ∈ [0, 1], then f (λx + (1 − λ)y) ≤ λf (x) + (1 − λ)f (y) ≤
λα + (1 − λ)α = α, so that λx + (1 − λ)y ∈ levα (f ). 2

Note that the convexity of level sets does not characterize convex functions; there are nonconvex
functions which share this property (e.g., every monotone function on the axis). The “proper” character-
ization of convex functions in terms of convex sets is given by Proposition C.1.1 – convex functions are
exactly the functions with convex epigraphs. Convexity of level sets specifies a wider family of functions, the so-called quasiconvex ones.

C.1.3 What is the value of a convex function outside its domain?


Literally, the question which entitles this subsection is senseless. Nevertheless, when speaking about
convex functions, it is extremely convenient to think that the function outside its domain also has a
value, namely, takes the value +∞; with this convention, we can say that
a convex function f on Rn is a function taking values in the extended real axis R ∪ {+∞} such that
the domain Dom f of the function – the set of those x’s where f (x) is finite – is nonempty, and for all
x, y ∈ Rn and all λ ∈ [0, 1] one has
f (λx + (1 − λ)y) ≤ λf (x) + (1 − λ)f (y). (C.1.2)
If the expression in the right hand side involves infinities, it is assigned the value according to the
standard and reasonable conventions on what are arithmetic operations in the “extended real axis”
R ∪ {+∞} ∪ {−∞}:

• arithmetic operations with reals are understood in their usual sense;


• the sum of +∞ and a real, same as the sum of +∞ and +∞ is +∞; similarly, the sum of a real
and −∞, same as the sum of −∞ and −∞ is −∞. The sum of +∞ and −∞ is undefined;
• the product of a real and +∞ is +∞, 0 or −∞, depending on whether the real is positive, zero or
negative, and similarly for the product of a real and −∞. The product of two ”infinities” is again
infinity, with the usual rule for assigning the sign to the product.
Note that it is not clear in advance that our new definition of a convex function is equivalent to the initial one: initially we included in the definition the requirement for the domain to be convex, and now we omit indicating this requirement explicitly. In fact, of course, the definitions are equivalent: convexity of Dom f – i.e., of the set where f is finite – is an immediate consequence of the “convexity inequality” (C.1.2).
It is convenient to think of a convex function as of something which is defined everywhere, since it
saves a lot of words. E.g., with this convention I can write f + g (f and g are convex functions on Rn ),
and everybody will understand what is meant; without this convention, I am supposed to add to this
expression the explanation as follows: “f + g is a function with the domain being the intersection of those
of f and g, and in this intersection it is defined as (f + g)(x) = f (x) + g(x)”.

C.2 How to detect convexity


In an optimization problem

min_x { f (x) : gj (x) ≤ 0, j = 1, ..., m }

convexity of the objective f and the constraints gj is crucial: it turns out that problems with this property
possess nice theoretical properties (e.g., the local necessary optimality conditions for these problems are
sufficient for global optimality); and what is much more important, convex problems can be efficiently
(both in theoretical and, to some extent, in the practical meaning of the word) solved numerically, which
is not, unfortunately, the case for general nonconvex problems. This is why it is so important to know
how one can detect convexity of a given function. This is the issue we are coming to.
The scheme of our investigation is typical for mathematics. Let me start with the example which
you know from Analysis. How do you detect continuity of a function? Of course, there is a definition
of continuity in terms of ε and δ, but it would be an actual disaster if each time we need to prove continuity of a function, we were supposed to write down the proof that “for every positive ε there exists positive δ such that ...”. In fact we use another approach: we list once for ever a number of standard
operations which preserve continuity, like addition, multiplication, taking superpositions, etc., and point
out a number of standard examples of continuous functions – like the power function, the exponent,
etc. To prove that the operations in the list preserve continuity, same as to prove that the standard functions are continuous, takes a certain effort and indeed is done in ε–δ terms; but after this effort is once invested, we normally have no difficulties with proving continuity of a given function: it suffices
to demonstrate that the function can be obtained, in finitely many steps, from our ”raw materials” – the
standard functions which are known to be continuous – by applying our machinery – the combination
rules which preserve continuity. Normally this demonstration is given by a single word ”evident” or even
is understood by default.
This is exactly the case with convexity. Here we also should point out the list of operations which
preserve convexity and a number of standard convex functions.

C.2.1 Operations preserving convexity of functions


These operations are as follows:
• [stability under taking weighted sums] if f, g are convex functions on Rn , then their linear combi-
nation λf + µg with nonnegative coefficients again is convex, provided that it is finite at least at
one point;
[this is given by straightforward verification of the definition]

• [stability under affine substitutions of the argument] the superposition f (Ax + b) of a convex
function f on Rn and affine mapping x 7→ Ax + b from Rm into Rn is convex, provided that it is
finite at least at one point.
[you can prove it directly by verifying the definition or by noting that the epigraph of the super-
position, if nonempty, is the inverse image of the epigraph of f under an affine mapping]
• [stability under taking pointwise sup] the upper bound sup_α fα (·) of every family of convex functions on Rn is convex, provided that this bound is finite at least at one point.
[to understand it, note that the epigraph of the upper bound clearly is the intersection of epigraphs
of the functions from the family; recall that the intersection of every family of convex sets is convex]
• [“Convex Monotone superposition”] Let f (x) = (f1 (x), ..., fk (x)) be a vector-function on Rn with convex components fi , and assume that F is a convex function on Rk which is monotone, i.e., such that z ≤ z 0 always implies that F (z) ≤ F (z 0 ). Then the superposition

φ(x) = F (f (x)) = F (f1 (x), ..., fk (x))

is convex on Rn , provided that it is finite at least at one point.

Remark C.2.1 The expression F (f1 (x), ..., fk (x)) makes no evident sense at a point x where some
of fi ’s are +∞. By definition, we assign the superposition at such a point the value +∞.

[To justify the rule, note that if λ ∈ (0, 1) and x, x0 ∈ Dom φ, then z = f (x), z 0 = f (x0 ) are vectors
from Rk which belong to Dom F , and due to the convexity of the components of f we have

f (λx + (1 − λ)x0 ) ≤ λz + (1 − λ)z 0 ;

in particular, the left hand side is a vector from Rk – it has no “infinite entries”, and we may
further use the monotonicity of F :

φ(λx + (1 − λ)x0 ) = F (f (λx + (1 − λ)x0 )) ≤ F (λz + (1 − λ)z 0 )

and now use the convexity of F :

F (λz + (1 − λ)z 0 ) ≤ λF (z) + (1 − λ)F (z 0 )

to get the required relation

φ(λx + (1 − λ)x0 ) ≤ λφ(x) + (1 − λ)φ(x0 ).

]
Imagine how many extra words would be necessary here if there were no convention on the value of a
convex function outside its domain!
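The pointwise-sup rule in action: a maximum of finitely many affine functions (a piecewise linear function) passes a random check of (C.1.1). The pieces below are an arbitrary example of ours:

```python
# Stability of convexity under pointwise sup: f(x) = max_i (a_i x + b_i),
# an upper bound of affine (hence convex) functions, satisfies (C.1.1).
import random

pieces = [(-1.0, 0.0), (0.5, -1.0), (2.0, -4.0)]   # (slope, intercept) pairs
f = lambda x: max(a * x + b for a, b in pieces)

rng = random.Random(4)
for _ in range(1000):
    x, y, lam = rng.uniform(-10, 10), rng.uniform(-10, 10), rng.random()
    assert f(lam * x + (1 - lam) * y) <= lam * f(x) + (1 - lam) * f(y) + 1e-9
```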
Two more rules are as follows:

• [stability under partial minimization] if f (x, y) : Rn_x × Rm_y → R ∪ {+∞} is convex (as a function of z = (x, y); this is called joint convexity) and the function

g(x) = inf_y f (x, y)

is proper, i.e., is > −∞ everywhere and is finite at least at one point, then g is convex
[this can be proved as follows. We should prove that if x, x0 ∈ Dom g and x00 =
λx + (1 − λ)x0 with λ ∈ [0, 1], then x00 ∈ Dom g and g(x00 ) ≤ λg(x) + (1 − λ)g(x0 ).
Given positive ε, we can find y and y 0 such that (x, y) ∈ Dom f , (x0 , y 0 ) ∈ Dom f and g(x) + ε ≥ f (x, y), g(x0 ) + ε ≥ f (x0 , y 0 ). Taking the weighted sum of these two inequalities, we get

λg(x) + (1 − λ)g(x0 ) + ε ≥ λf (x, y) + (1 − λ)f (x0 , y 0 )
                          ≥ f (λx + (1 − λ)x0 , λy + (1 − λ)y 0 ) = f (x00 , λy + (1 − λ)y 0 )

(the last ≥ follows from the convexity of f ). The concluding quantity in the chain is ≥ g(x00 ), and we get g(x00 ) ≤ λg(x) + (1 − λ)g(x0 ) + ε. In particular, x00 ∈ Dom g (recall that g is assumed to take only values from R and the value +∞). Moreover, since the resulting inequality is valid for all ε > 0, we come to g(x00 ) ≤ λg(x) + (1 − λ)g(x0 ), as required.]
• the “conic transformation” of a convex function f on Rn – the function g(y, x) =
yf (x/y) – is convex in the half-space y > 0 in Rn+1 .
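For f (x) = x2 the conic transformation gives g(y, x) = x2 /y, which a random check confirms to be jointly convex on the half-space y > 0 (an added sketch; the choice f (x) = x2 is ours):

```python
# The "conic transformation" g(y, x) = y f(x/y) with f(x) = x^2, i.e.
# g(y, x) = x^2 / y, sampled against the convexity inequality on {y > 0}.
import random

g = lambda y, x: y * (x / y) ** 2

rng = random.Random(5)
for _ in range(1000):
    y1, y2 = rng.uniform(0.1, 5), rng.uniform(0.1, 5)
    x1, x2 = rng.uniform(-5, 5), rng.uniform(-5, 5)
    lam = rng.random()
    ym, xm = lam * y1 + (1 - lam) * y2, lam * x1 + (1 - lam) * x2
    assert g(ym, xm) <= lam * g(y1, x1) + (1 - lam) * g(y2, x2) + 1e-9
```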

Now we know what are the basic operations preserving convexity. Let us look what are the standard
functions these operations can be applied to. A number of examples was already given, but we still do
not know why the functions in the examples are convex. The usual way to check convexity of a “simple”
– given by a simple formula – function is based on differential criteria of convexity. Let us look what are
these criteria.

C.2.2 Differential criteria of convexity


From the definition of convexity of a function it immediately follows that convexity is a one-dimensional property: a proper (i.e., finite at least at one point) function f on Rn taking values in R ∪ {+∞} is convex if and only if its restriction to every line, i.e., every function of the type g(t) = f (x + th) on the axis, is either convex, or is identically +∞.
It follows that to detect convexity of a function, it, in principle, suffices to know how to detect
convexity of functions of one variable. This latter question can be resolved by the standard Calculus
tools. Namely, in the Calculus they prove the following simple

Proposition C.2.1 [Necessary and Sufficient Convexity Condition for smooth functions on the axis] Let
(a, b) be an interval in the axis (we do not exclude the case of a = −∞ and/or b = +∞). Then
(i) A differentiable everywhere on (a, b) function f is convex on (a, b) if and only if its derivative f 0
is monotonically nondecreasing on (a, b);
(ii) A twice differentiable everywhere on (a, b) function f is convex on (a, b) if and only if its second
derivative f 00 is nonnegative everywhere on (a, b).
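Proposition C.2.1(ii) suggests a mechanical test: estimate f 00 by a central finite difference on a grid and check its sign. The helper below, with its grid and tolerances, is our own illustration:

```python
# Proposition C.2.1(ii) via a central finite difference: estimate f'' on a
# grid of the interval and check nonnegativity for convex examples.
import math

def second_derivative_nonneg(f, lo, hi, steps=200, h=1e-4, tol=1e-4):
    for k in range(steps + 1):
        t = lo + (hi - lo) * k / steps
        f2 = (f(t + h) - 2 * f(t) + f(t - h)) / h**2   # approximates f''(t)
        if f2 < -tol:
            return False
    return True

assert second_derivative_nonneg(math.exp, -3, 3)                    # (e^x)'' = e^x
assert second_derivative_nonneg(lambda x: x * math.log(x), 0.1, 5)  # (x ln x)'' = 1/x
assert second_derivative_nonneg(lambda x: -math.log(x), 0.1, 5)     # (-ln x)'' = 1/x^2
assert not second_derivative_nonneg(math.sin, 0, 3)   # sin'' = -sin < 0 on (0, pi)
```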

With the Proposition, you can immediately verify that the functions listed as examples of convex functions
in Section C.1.1 indeed are convex. The only difficulty which you may meet is that some of these functions
(e.g., xp , p ≥ 1, and −xp , 0 ≤ p ≤ 1) were claimed to be convex on the half-interval [0, +∞), while the Proposition speaks about convexity of functions on intervals. To overcome this difficulty, you may use
the following simple

Proposition C.2.2 Let M be a convex set and f be a function with Dom f = M . Assume that f is
convex on ri M and is continuous on M , i.e.,

f (xi ) → f (x), i → ∞,

whenever xi , x ∈ M and xi → x as i → ∞. Then f is convex on M .

Proof of Proposition C.2.1:


(i), necessity. Assume that f is differentiable and convex on (a, b); we should prove that then
f 0 is monotonically nondecreasing. Let x < y be two points of (a, b), and let us prove that

f 0 (x) ≤ f 0 (y). Indeed, let z ∈ (x, y). We clearly have the following representation of z as a
convex combination of x and y:
z = [(y − z)/(y − x)] x + [(z − x)/(y − x)] y,

whence, from convexity,


f (z) ≤ [(y − z)/(y − x)] f (x) + [(z − x)/(y − x)] f (y),

whence

[f (z) − f (x)]/(z − x) ≤ [f (y) − f (z)]/(y − z).
Passing here to limit as z → x + 0, we get

f 0 (x) ≤ [f (y) − f (x)]/(y − x),

and passing in the same inequality to limit as z → y − 0, we get

f 0 (y) ≥ [f (y) − f (x)]/(y − x),

whence f 0 (x) ≤ f 0 (y), as claimed. 2

(i), sufficiency. We should prove that if f is differentiable on (a, b) and f 0 is monotonically nondecreasing on (a, b), then f is convex on (a, b). It suffices to verify that if x < y,
x, y ∈ (a, b), and z = λx + (1 − λ)y with 0 < λ < 1, then

f (z) ≤ λf (x) + (1 − λ)f (y),

or, which is the same (write f (z) as λf (z) + (1 − λ)f (z)), that

[f (z) − f (x)]/(1 − λ) ≤ [f (y) − f (z)]/λ.

Noticing that z − x = (1 − λ)(y − x) and y − z = λ(y − x), we see that the inequality we should prove is equivalent to

[f (z) − f (x)]/(z − x) ≤ [f (y) − f (z)]/(y − z).

But in this equivalent form the inequality is evident: by the Lagrange Mean Value Theorem,
its left hand side is f 0 (ξ) with some ξ ∈ (x, z), while the right hand one is f 0 (η) with some
η ∈ (z, y). Since f 0 is nondecreasing and ξ ≤ z ≤ η, we have f 0 (ξ) ≤ f 0 (η), and the left hand
side in the inequality we should prove indeed is ≤ the right hand one. 2

(ii) is immediate consequence of (i), since, as we know from the very beginning of Calculus,
a differentiable function – in the case in question, it is f 0 – is monotonically nondecreasing
on an interval if and only if its derivative is nonnegative on this interval. 2

In fact, for functions of one variable there is a differential criterion of convexity which does
not assume any smoothness (we shall not prove this criterion):

Proposition C.2.3 [convexity criterion for univariate functions]


Let g : R → R ∪ {+∞} be a function. Let the domain ∆ = {t : g(t) < ∞} of the function be
a convex set which is not a singleton, i.e., let it be an interval (a, b) with possibly added one
or both endpoints (−∞ ≤ a < b ≤ ∞). g is convex if and only if it satisfies the following 3
requirements:
1) g is continuous on (a, b);
2) g is differentiable everywhere on (a, b), excluding, possibly, a countable set of points, and
the derivative g 0 (t) is nondecreasing on its domain;
3) at each endpoint u of the interval (a, b) which belongs to ∆, g is upper semicontinuous:

g(u) ≥ lim sup_{t∈(a,b), t→u} g(t).

Proof of Proposition C.2.2: Let x, y ∈ M and z = λx + (1 − λ)y, λ ∈ [0, 1], and let us
prove that
f (z) ≤ λf (x) + (1 − λ)f (y).
As we know from Theorem B.1.1.(iii), there exist sequences xi ∈ ri M and yi ∈ ri M con-
verging, respectively to x and to y. Then zi = λxi + (1 − λ)yi converges to z as i → ∞, and
since f is convex on ri M , we have

f (zi ) ≤ λf (xi ) + (1 − λ)f (yi );

passing to the limit and taking into account that f is continuous on M and xi, yi, zi converge,
as i → ∞, to x, y, z ∈ M, respectively, we obtain the required inequality. 2

From Propositions C.2.1.(ii) and C.2.2 we get the following convenient necessary and sufficient condition
for convexity of a smooth function of n variables:

Corollary C.2.1 [convexity criterion for smooth functions on Rn ]


Let f : Rn → R ∪ {+∞} be a function. Assume that the domain Q of f is a convex set with a
nonempty interior and that f is
• continuous on Q
and
• twice differentiable on the interior of Q.
Then f is convex if and only if its Hessian is positive semidefinite on the interior of Q:

h^T f″(x)h ≥ 0 ∀x ∈ int Q ∀h ∈ Rn.

Proof. The "only if" part is evident: if f is convex and x ∈ Q0 = int Q, then the function
of one variable
g(t) = f(x + th)
(h is an arbitrary fixed direction in Rn) is convex in a certain neighbourhood of the point
t = 0 on the axis (recall that affine substitutions of argument preserve convexity). Since f
is twice differentiable in a neighbourhood of x, g is twice differentiable in a neighbourhood
of t = 0, so that g″(0) = h^T f″(x)h ≥ 0 by Proposition C.2.1. 2

Now let us prove the "if" part, so that we are given that h^T f″(x)h ≥ 0 for every x ∈ int Q
and every h ∈ Rn, and we should prove that f is convex.

Let us first prove that f is convex on the interior Q0 of the domain Q. By Theorem B.1.1, Q0
is a convex set. Since, as it was already explained, the convexity of a function on a convex
set is one-dimensional fact, all we should prove is that every one-dimensional function
g(t) = f (x + t(y − x)), 0 ≤ t ≤ 1
(x and y are from Q0 ) is convex on the segment 0 ≤ t ≤ 1. Since f is continuous on Q ⊃ Q0 ,
g is continuous on the segment; and since f is twice continuously differentiable on Q0 , g is
continuously differentiable on (0, 1) with the second derivative
g″(t) = (y − x)^T f″(x + t(y − x))(y − x) ≥ 0.
Consequently, g is convex on [0, 1] (Propositions C.2.1.(ii) and C.2.2). Thus, f is convex on
Q0 . It remains to note that f , being convex on Q0 and continuous on Q, is convex on Q by
Proposition C.2.2. 2
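As a numerical illustration of Corollary C.2.1 (an addition to the text; the function below is a hypothetical example whose Hessian is known in closed form), one can sample the condition h^T f″(x)h ≥ 0 at random points and directions:

```python
import math
import random

random.seed(0)

# Hypothetical example: f(x1, x2) = exp(x1) + exp(x2) + x1**2 has Hessian
# diag(exp(x1) + 2, exp(x2)), which is positive semidefinite everywhere,
# so Corollary C.2.1 predicts that f is convex on the whole of R^2.
def hessian(x):
    return [[math.exp(x[0]) + 2.0, 0.0],
            [0.0, math.exp(x[1])]]

def quad_form(H, h):
    # h^T H h for a 2x2 matrix H
    return sum(h[i] * H[i][j] * h[j] for i in range(2) for j in range(2))

# Sample h^T f''(x) h >= 0 at random points x and directions h.
psd_sampled = all(
    quad_form(hessian([random.uniform(-2, 2), random.uniform(-2, 2)]),
              [random.uniform(-1, 1), random.uniform(-1, 1)]) >= -1e-12
    for _ in range(1000)
)
```

For this diagonal Hessian the quadratic form is a sum of nonnegative terms, so every sample passes; the point of the sketch is only to show what the Hessian test looks like operationally.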

Applying the combination rules preserving convexity to simple functions which pass the "infinitesimal"
convexity tests, we can prove convexity of many complicated functions. Consider, e.g., an exponential
posynomial – a function
f(x) = ∑_{i=1}^{N} c_i exp{a_i^T x}

with positive coefficients c_i (this is why the function is called a posynomial). How could we prove that the
function is convex? This is immediate:
• exp{t} is convex (since its second order derivative is positive and therefore the first derivative is
monotone, as required by the infinitesimal convexity test for smooth functions of one variable);
• consequently, all functions exp{a_i^T x} are convex (stability of convexity under affine substitutions of
argument);
• consequently, f is convex (stability of convexity under taking linear combinations with nonnegative
coefficients).
And if we were supposed to prove that the maximum of three posynomials is convex? Ok, we could
add to our three steps the fourth, which refers to stability of convexity under taking pointwise supremum.
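The three-step argument (plus the fourth step, pointwise supremum, for maxima) can be probed numerically. In this added sketch the coefficients c_i and vectors a_i are arbitrary hypothetical choices; we sample the convexity inequality for the maximum of three exponential posynomials:

```python
import math
import random

random.seed(1)

# Build a posynomial x -> sum_i c_i * exp(a_i^T x) on R^2 from
# hypothetical positive coefficients c and vectors a (illustration only).
def posynomial(c, A):
    return lambda x: sum(ci * math.exp(ai[0] * x[0] + ai[1] * x[1])
                         for ci, ai in zip(c, A))

f1 = posynomial([1.0, 0.5], [(1.0, -1.0), (0.0, 2.0)])
f2 = posynomial([2.0], [(-1.0, 1.0)])
f3 = posynomial([0.3, 0.7], [(2.0, 0.0), (1.0, 1.0)])

# The pointwise maximum of the three should again be convex.
g = lambda x: max(f1(x), f2(x), f3(x))

def convex_on_samples(h, trials=500):
    # Sample the convexity inequality h(z) <= lam*h(x) + (1-lam)*h(y).
    for _ in range(trials):
        x = (random.uniform(-1, 1), random.uniform(-1, 1))
        y = (random.uniform(-1, 1), random.uniform(-1, 1))
        lam = random.random()
        z = (lam * x[0] + (1 - lam) * y[0], lam * x[1] + (1 - lam) * y[1])
        if h(z) > lam * h(x) + (1 - lam) * h(y) + 1e-9:
            return False
    return True

max_is_convex_sampled = convex_on_samples(g)
```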

C.3 Gradient inequality


An extremely important property of a convex function is given by the following
Proposition C.3.1 [Gradient inequality] Let f be a function taking finite values and, possibly, the value +∞, x
be an interior point of the domain of f , and Q be a convex set containing x. Assume that
• f is convex on Q
and
• f is differentiable at x,
and let ∇f(x) be the gradient of the function at x. Then the following inequality holds:

(∀y ∈ Q) : f(y) ≥ f(x) + (y − x)^T ∇f(x). (C.3.1)

Geometrically: the graph
{(y, t) ∈ Rn+1 : y ∈ Dom f ∩ Q, t = f(y)}
of the function f restricted onto the set Q is above the graph
{(y, t) ∈ Rn+1 : t = f(x) + (y − x)^T ∇f(x)}
of the linear form tangent to f at x.

Proof. Let y ∈ Q. There is nothing to prove if y ∉ Dom f (since there the right hand side in the gradient
inequality is +∞), same as there is nothing to prove when y = x. Thus, we can assume that y ≠ x and
y ∈ Dom f . Let us set
yτ = x + τ (y − x), 0 < τ ≤ 1,
so that y1 = y and yτ is an interior point of the segment [x, y] for 0 < τ < 1. Now let us use the following
extremely simple
Lemma C.3.1 Let x, x′, x″ be three distinct points with x′ ∈ [x, x″], and let f be convex
and finite on [x, x″]. Then

(f(x′) − f(x))/‖x′ − x‖2 ≤ (f(x″) − f(x))/‖x″ − x‖2. (C.3.2)
Proof of the Lemma. We clearly have

x′ = x + λ(x″ − x), λ = ‖x′ − x‖2/‖x″ − x‖2 ∈ (0, 1),

or, which is the same,
x′ = (1 − λ)x + λx″.
From the convexity inequality,
f(x′) ≤ (1 − λ)f(x) + λf(x″),
or, which is the same,
f(x′) − f(x) ≤ λ(f(x″) − f(x)).
Dividing by λ and substituting the value of λ, we come to (C.3.2). 2

Applying the Lemma to the triple x, x′ = yτ, x″ = y, we get

(f(x + τ(y − x)) − f(x))/(τ‖y − x‖2) ≤ (f(y) − f(x))/‖y − x‖2;

as τ → +0, the left hand side in this inequality, by the definition of the gradient, tends to
(y − x)^T ∇f(x)/‖y − x‖2, and we get

(y − x)^T ∇f(x)/‖y − x‖2 ≤ (f(y) − f(x))/‖y − x‖2,

or, which is the same,

(y − x)^T ∇f(x) ≤ f(y) − f(x);

this is exactly the inequality (C.3.1). 2

It is worthy of mentioning that in the case when Q is a convex set with a nonempty interior
and f is continuous on Q and differentiable on int Q, f is convex on Q if and only if the
Gradient inequality (C.3.1) is valid for every pair x ∈ int Q and y ∈ Q.
Indeed, the "only if" part, i.e., the implication
convexity of f ⇒ Gradient inequality for all x ∈ int Q and all y ∈ Q
is given by Proposition C.3.1. To prove the "if" part, i.e., to establish the implication inverse
to the above, assume that f satisfies the Gradient inequality for all x ∈ int Q and all y ∈ Q,
and let us verify that f is convex on Q. It suffices to prove that f is convex on the interior
Q0 of the set Q (see Proposition C.2.2; recall that by assumption f is continuous on Q and
Q is convex). To prove that f is convex on Q0, note that Q0 is convex (Theorem B.1.1) and
that, due to the Gradient inequality, on Q0 f is the upper bound of the family of affine (and
therefore convex) functions:
f(y) = sup_{x∈Q0} f_x(y), where f_x(y) = f(x) + (y − x)^T ∇f(x). 2
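A quick numerical check of the Gradient inequality (C.3.1) — an illustration added here, with a hypothetical convex quadratic f whose gradient is known in closed form — samples random pairs x, y:

```python
import random

random.seed(2)

# Hypothetical example: f(x) = x1^2 + 2*x2^2 is convex and differentiable,
# with gradient (2*x1, 4*x2); (C.3.1) then says
# f(y) >= f(x) + (y - x)^T grad f(x) for all x, y.
def f(x):
    return x[0] ** 2 + 2.0 * x[1] ** 2

def grad_f(x):
    return (2.0 * x[0], 4.0 * x[1])

gradient_inequality_holds = all(
    f(y) >= f(x) + g[0] * (y[0] - x[0]) + g[1] * (y[1] - x[1]) - 1e-9
    for _ in range(1000)
    for x in [(random.uniform(-3, 3), random.uniform(-3, 3))]
    for y in [(random.uniform(-3, 3), random.uniform(-3, 3))]
    for g in [grad_f(x)]
)
```

For this quadratic the gap f(y) − f(x) − (y − x)^T ∇f(x) equals (y1 − x1)² + 2(y2 − x2)², which is visibly nonnegative, so the sampled check is exact up to rounding.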

C.4 Boundedness and Lipschitz continuity of a convex function


Convex functions possess nice local properties.
Theorem C.4.1 [local boundedness and Lipschitz continuity of convex function]
Let f be a convex function and let K be a closed and bounded set contained in the relative interior of
the domain Dom f of f . Then f is Lipschitz continuous on K – there exists a constant L – the Lipschitz
constant of f on K – such that

|f(x) − f(y)| ≤ L‖x − y‖2 ∀x, y ∈ K. (C.4.1)

In particular, f is bounded on K.

Remark C.4.1 All three assumptions on K – (1) closedness, (2) boundedness, and (3) K ⊂ ri Dom f –
are essential, as it is seen from the following three examples:
• f(x) = 1/x, Dom f = (0, +∞), K = (0, 1]. We have (2), (3) but not (1); f is neither bounded, nor
Lipschitz continuous on K.
• f(x) = x², Dom f = R, K = R. We have (1), (3) and not (2); f is neither bounded nor Lipschitz
continuous on K.
• f(x) = −√x, Dom f = [0, +∞), K = [0, 1]. We have (1), (2) and not (3); f is not Lipschitz
continuous on K 1), although it is bounded. With a properly chosen convex function f of two variables
and a non-polyhedral compact domain (e.g., with Dom f being the unit circle), we could also demonstrate
that lack of (3), even in presence of (1) and (2), may cause unboundedness of f on K as well.

Remark C.4.2 Theorem C.4.1 says that a convex function f is bounded on every compact (i.e., closed
and bounded) subset of the relative interior of Dom f . In fact there is a much stronger statement on the
below boundedness of f : f is below bounded on any bounded subset of Rn!

Proof of Theorem C.4.1. We shall start with the following local version of the Theorem.
Proposition C.4.1 Let f be a convex function, and let x̄ be a point from the relative interior of the
domain Dom f of f . Then
(i) f is bounded at x̄: there exists a positive r such that f is bounded in the r-neighbourhood Ur (x̄)
of x̄ in the affine span of Dom f :

∃r > 0, C : |f(x)| ≤ C ∀x ∈ Ur(x̄) = {x ∈ Aff(Dom f) : ‖x − x̄‖2 ≤ r};

(ii) f is Lipschitz continuous at x̄, i.e., there exists a positive ρ and a constant L such that

|f(x) − f(x′)| ≤ L‖x − x′‖2 ∀x, x′ ∈ Uρ(x̄).

Implication “Proposition C.4.1 ⇒ Theorem C.4.1” is given by standard Analysis reasoning. All we
need is to prove that if K is a bounded and closed (i.e., a compact) subset of ri Dom f , then f is Lipschitz
continuous on K (the boundedness of f on K is an evident consequence of its Lipschitz continuity on K
and boundedness of K). Assume, on the contrary, that f is not Lipschitz continuous on K; then for every
integer i there exists a pair of points xi, yi ∈ K such that

f(xi) − f(yi) ≥ i‖xi − yi‖2. (C.4.2)

Since K is compact, passing to a subsequence we can ensure that xi → x ∈ K and yi → y ∈ K.


1) Indeed, we have lim_{t→+0} (f(0) − f(t))/t = lim_{t→+0} t^{-1/2} = +∞, while for a Lipschitz continuous f the ratios t^{-1}(f(0) − f(t)) should be bounded.

By Proposition C.4.1 the case x = y is impossible – by the Proposition, f is Lipschitz continuous in a

neighbourhood B of x = y; since xi → x, yi → y, this neighbourhood should contain all xi and yi with


large enough indices i; but then, from the Lipschitz continuity of f in B, the ratios (f(xi) − f(yi))/‖xi −
yi‖2 form a bounded sequence, which we know is not the case. Thus, the case x = y is impossible. The
case x ≠ y is "even less possible" – since, by the Proposition, f is continuous on Dom f at both the points
x and y (note that Lipschitz continuity at a point clearly implies the usual continuity at it), so that
we would have f(xi) → f(x) and f(yi) → f(y) as i → ∞. Thus, the left hand side in (C.4.2) remains
bounded as i → ∞. In the right hand side one factor – i – tends to ∞, and the other one has a nonzero
limit ‖x − y‖2, so that the right hand side tends to ∞ as i → ∞; this is the desired contradiction. 2

Proof of Proposition C.4.1.


1°. We start with proving the above boundedness of f in a neighbourhood of x̄. This is immediate:
we know that there exists a neighbourhood Ur̄(x̄) which is contained in Dom f (since, by assumption,
x̄ is a relative interior point of Dom f). Now, we can find a small simplex ∆ of the dimension m =
dim Aff(Dom f) with the vertices x0, ..., xm in Ur̄(x̄) in such a way that x̄ will be a convex combination
of the vectors xi with positive coefficients, even with the coefficients 1/(m + 1):

x̄ = ∑_{i=0}^{m} (1/(m + 1)) xi. 2)

We know that x̄ is the point from the relative interior of ∆ (Exercise B.8); since ∆ spans the same affine
subspace as Dom f , it means that ∆ contains Ur(x̄) with certain r > 0. Now, we have

∆ = {∑_{i=0}^{m} λi xi : λi ≥ 0, ∑_i λi = 1},

so that in ∆ f is bounded from above by the quantity max_{0≤i≤m} f(xi) by Jensen's inequality:

f(∑_{i=0}^{m} λi xi) ≤ ∑_{i=0}^{m} λi f(xi) ≤ max_{0≤i≤m} f(xi).

Consequently, f is bounded from above, by the same quantity, in Ur (x̄).


2°. Now let us prove that if f is above bounded, by some C, in Ur = Ur(x̄), then it in fact is
below bounded in this neighbourhood (and, consequently, is bounded in Ur). Indeed, let x ∈ Ur, so that
x ∈ Aff(Dom f) and ‖x − x̄‖2 ≤ r. Setting x′ = x̄ − [x − x̄] = 2x̄ − x, we get x′ ∈ Aff(Dom f) and
‖x′ − x̄‖2 = ‖x − x̄‖2 ≤ r, so that x′ ∈ Ur. Since x̄ = (1/2)[x + x′], we have

2f(x̄) ≤ f(x) + f(x′),

whence
f(x) ≥ 2f(x̄) − f(x′) ≥ 2f(x̄) − C, x ∈ Ur(x̄),
and f is indeed below bounded in Ur.
(i) is proved.
2) To see that the required ∆ exists, let us act as follows: first, the case of Dom f being a singleton is evident, so that we can assume that Dom f is a convex set of dimension m ≥ 1. Without loss of generality, we may assume that x̄ = 0, so that 0 ∈ Dom f and therefore Aff(Dom f) = Lin(Dom f). By Linear Algebra, we can find m vectors y1, ..., ym in Dom f which form a basis in Lin(Dom f) = Aff(Dom f). Setting y0 = −∑_{i=1}^{m} yi and taking into account that 0 = x̄ ∈ ri Dom f , we can find ε > 0 such that the vectors xi = εyi, i = 0, ..., m, belong to Ur̄(x̄). By construction, x̄ = 0 = (1/(m + 1)) ∑_{i=0}^{m} xi.

3°. (ii) is an immediate consequence of (i) and Lemma C.3.1. Indeed, let us prove that f is Lipschitz
continuous in the neighbourhood Ur/2(x̄), where r > 0 is such that f is bounded in Ur(x̄) (we already

know from (i) that the required r does exist). Let |f| ≤ C in Ur, and let x, x′ ∈ Ur/2, x ≠ x′. Let
us extend the segment [x, x′] through the point x′ until it reaches, at a certain point x″, the (relative)
boundary of Ur. We have

x′ ∈ (x, x″); ‖x″ − x̄‖2 = r.

From (C.3.2) we have

f(x′) − f(x) ≤ ‖x′ − x‖2 · (f(x″) − f(x))/‖x″ − x‖2.

The second factor in the right hand side does not exceed the quantity (2C)/(r/2) = 4C/r; indeed, the
numerator is, in absolute value, at most 2C (since |f| is bounded by C in Ur and both x, x″ belong to
Ur), and the denominator is at least r/2 (indeed, x is at the distance at most r/2 from x̄, and x″ is at
the distance exactly r from x̄, so that the distance between x and x″, by the triangle inequality, is at
least r/2). Thus, we have

f(x′) − f(x) ≤ (4C/r)‖x′ − x‖2, x, x′ ∈ Ur/2;

swapping x and x′, we come to

f(x) − f(x′) ≤ (4C/r)‖x′ − x‖2,

whence

|f(x) − f(x′)| ≤ (4C/r)‖x − x′‖2, x, x′ ∈ Ur/2,

as required in (ii). 2
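The constant 4C/r produced in step 3° can be observed on a concrete instance. The sketch below is an added illustration; the choice f(x) = x², x̄ = 0, r = 1 is hypothetical, giving |f| ≤ C = 1 on U_r = [−1, 1] and the Lipschitz constant 4C/r = 4 on U_{r/2} = [−1/2, 1/2].

```python
import random

random.seed(3)

# Hypothetical instance: f(x) = x^2, xbar = 0, r = 1, so C = 1 bounds |f|
# on U_r = [-1, 1] and the proof yields Lipschitz constant L = 4C/r = 4
# on U_{r/2} = [-1/2, 1/2].
f = lambda x: x * x
C, r = 1.0, 1.0
L = 4 * C / r

lipschitz_ok = all(
    abs(f(x) - f(y)) <= L * abs(x - y) + 1e-12
    for _ in range(1000)
    for x in [random.uniform(-0.5, 0.5)]
    for y in [random.uniform(-0.5, 0.5)]
)
```

For this f the true Lipschitz constant on [−1/2, 1/2] is 1 (since |x² − y²| = |x + y||x − y|), so the bound 4C/r is valid but, as the proof's generality suggests, not tight.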

C.5 Maxima and minima of convex functions


As it was already mentioned, optimization problems involving convex functions possess nice theoretical
properties. One of the most important of these properties is given by the following

Theorem C.5.1 ["Unimodality"] Let f be a convex function on a convex set Q ⊂ Rn, and let x∗ ∈
Q ∩ Dom f be a local minimizer of f on Q:

(∃r > 0) : f(y) ≥ f(x∗) ∀y ∈ Q, ‖y − x∗‖2 < r. (C.5.1)

Then x∗ is a global minimizer of f on Q:

f(y) ≥ f(x∗) ∀y ∈ Q. (C.5.2)

Moreover, the set Argmin_Q f of all local (≡ global) minimizers of f on Q is convex.
If f is strictly convex (i.e., the convexity inequality f(λx + (1 − λ)y) ≤ λf(x) + (1 − λ)f(y) is strict
whenever x ≠ y and λ ∈ (0, 1)), then the above set is either empty or is a singleton.

Proof. 1) Let x∗ be a local minimizer of f on Q and y ∈ Q, y ≠ x∗; we should prove that f(y) ≥ f(x∗).
There is nothing to prove if f(y) = +∞, so that we may assume that y ∈ Dom f . Note that also
x∗ ∈ Dom f for sure – by definition of a local minimizer.
For all τ ∈ (0, 1) we have, by Lemma C.3.1,

(f(x∗ + τ(y − x∗)) − f(x∗))/(τ‖y − x∗‖2) ≤ (f(y) − f(x∗))/‖y − x∗‖2.

Since x∗ is a local minimizer of f , the left hand side in this inequality is nonnegative for all small enough
values of τ > 0. We conclude that the right hand side is nonnegative, i.e., f(y) ≥ f(x∗). 2

2) To prove convexity of Argmin_Q f , note that Argmin_Q f is nothing but the level set lev_α(f) of f
associated with the minimal value min_Q f of f on Q; as a level set of a convex function, this set is convex
(Proposition C.1.4).
3) To prove that the set Argmin_Q f associated with a strictly convex f is, if nonempty, a singleton,
note that if there were two distinct minimizers x′, x″, then, from strict convexity, we would have

f((1/2)x′ + (1/2)x″) < (1/2)[f(x′) + f(x″)] = min_Q f ,

which clearly is impossible – the argument in the left hand side is a point from Q! 2

Another pleasant fact is that in the case of differentiable convex functions the known from Calculus
necessary optimality condition (the Fermat rule) is sufficient for global optimality:

Theorem C.5.2 [Necessary and sufficient optimality condition for a differentiable convex function]
Let f be a convex function on a convex set Q ⊂ Rn, and let x∗ be an interior point of Q. Assume that
f is differentiable at x∗. Then x∗ is a minimizer of f on Q if and only if

∇f (x∗ ) = 0.

Proof. As a necessary condition for local optimality, the relation ∇f (x∗ ) = 0 is known from Calculus;
it has nothing in common with convexity. The essence of the matter is, of course, the sufficiency of the
condition ∇f (x∗ ) = 0 for global optimality of x∗ in the case of convex f . This sufficiency is readily given
by the Gradient inequality (C.3.1): by virtue of this inequality and due to ∇f (x∗ ) = 0,

f(y) ≥ f(x∗) + (y − x∗)^T ∇f(x∗) = f(x∗)

for all y ∈ Q. 2

A natural question is what happens if x∗ in the above statement is not necessarily an interior point
of Q. Thus, assume that x∗ is an arbitrary point of a convex set Q and that f is convex on Q and
differentiable at x∗ (the latter means exactly that Dom f contains a neighbourhood of x∗ and f possesses
the first order derivative at x∗ ). Under these assumptions, when x∗ is a minimizer of f on Q?
The answer is as follows: let

TQ (x∗ ) = {h ∈ Rn : x∗ + th ∈ Q ∀ small enough t > 0}

be the radial cone of Q at x∗ ; geometrically, this is the set of all directions leading from x∗ inside Q, so
that a small enough positive step from x∗ along the direction keeps the point in Q. From the convexity of
Q it immediately follows that the radial cone indeed is a convex cone (not necessary closed). E.g., when
x∗ is an interior point of Q, then the radial cone to Q at x∗ clearly is the entire Rn . A more interesting
example is the radial cone to a polyhedral set

Q = {x : a_i^T x ≤ bi, i = 1, ..., m}; (C.5.3)

for x∗ ∈ Q the corresponding radial cone clearly is the polyhedral cone

{h : a_i^T h ≤ 0 ∀i : a_i^T x∗ = bi} (C.5.4)

corresponding to the constraints a_i^T x ≤ bi from the description of Q which are active at x∗ (i.e.,
satisfied at the point as equalities rather than as strict inequalities).
Now, for the functions in question the necessary and sufficient condition for x∗ to be a minimizer of
f on Q is as follows:

Proposition C.5.1 Let Q be a convex set, let x∗ ∈ Q, and let f be a function which is convex on Q and
differentiable at x∗. The necessary and sufficient condition for x∗ to be a minimizer of f on Q is that
the derivative of f taken at x∗ along every direction from TQ(x∗) should be nonnegative:

h^T ∇f(x∗) ≥ 0 ∀h ∈ TQ(x∗).

Proof is immediate. The necessity is an evident fact which has nothing in common with convexity:
assuming that x∗ is a local minimizer of f on Q, we note that if there were h ∈ TQ(x∗) with h^T ∇f(x∗) < 0,
then we would have
f (x∗ + th) < f (x∗ )
for all small enough positive t. On the other hand, x∗ + th ∈ Q for all small enough positive t due to
h ∈ TQ (x∗ ). Combining these observations, we conclude that in every neighbourhood of x∗ there are
points from Q with strictly better than the one at x∗ values of f ; this contradicts the assumption that
x∗ is a local minimizer of f on Q.
The sufficiency is given by the Gradient Inequality, exactly as in the case when x∗ is an interior point
of Q. 2

Proposition C.5.1 says that whenever f is convex on Q and differentiable at x∗ ∈ Q, the necessary
and sufficient condition for x∗ to be a minimizer of f on Q is that the linear form given by the gradient
∇f (x∗ ) of f at x∗ should be nonnegative at all directions from the radial cone TQ (x∗ ). The linear forms
nonnegative at all directions from the radial cone also form a cone; it is called the cone normal to Q at
x∗ and is denoted NQ(x∗). Thus, the Proposition says that the necessary and sufficient condition for x∗ to
minimize f on Q is the inclusion ∇f(x∗) ∈ NQ(x∗). What this condition actually means depends
on what the normal cone is: whenever we have an explicit description of it, we have an explicit form of
the optimality condition.
E.g., when TQ (x∗ ) = Rn (it is the same as to say that x∗ is an interior point of Q), then the normal
cone is comprised of the linear forms nonnegative at the entire space, i.e., it is the trivial cone {0};
consequently, for the case in question the optimality condition becomes the Fermat rule ∇f (x∗ ) = 0, as
we already know.
When Q is the polyhedral set (C.5.3), the radial cone is the polyhedral cone (C.5.4); it is comprised
of all directions which have nonpositive inner products with all ai coming from the active, in the afore-
mentioned sense, constraints. The normal cone is comprised of all vectors which have nonnegative inner
products with all these directions, i.e., of vectors a such that the inequality h^T a ≥ 0 is a consequence
of the inequalities h^T ai ≤ 0, i ∈ I(x∗) ≡ {i : a_i^T x∗ = bi}. From the Homogeneous Farkas Lemma we
conclude that the normal cone is simply the conic hull of the vectors −ai, i ∈ I(x∗). Thus, in the case in
question the condition ∇f(x∗) ∈ NQ(x∗) reads:
x∗ ∈ Q is a minimizer of f on Q if and only if there exist nonnegative reals λ∗i associated with the "active"
(those from I(x∗)) values of i such that

∇f(x∗) + ∑_{i∈I(x∗)} λ∗i ai = 0.

These are the famous Karush-Kuhn-Tucker optimality conditions; these conditions are necessary for
optimality in an essentially wider situation.
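The Karush-Kuhn-Tucker conditions can be verified on a small instance. In this added sketch (a hypothetical example, not from the text) we minimize ‖x − p‖2² over the nonnegative orthant, i.e. over (C.5.3) with a_i = −e_i, b_i = 0; there the minimizer is the componentwise projection x∗ = max(p, 0), and the active-set multipliers come out nonnegative as the conditions require.

```python
# Hypothetical instance: minimize f(x) = ||x - p||_2^2 over
# Q = {x : -x_i <= 0}, i.e. a_i = -e_i, b_i = 0 (the nonnegative orthant).
# The minimizer is the projection x* = max(p, 0) componentwise.
p = [1.5, -2.0, 0.7, -0.1]
x_star = [max(pi, 0.0) for pi in p]
grad = [2.0 * (xi - pi) for xi, pi in zip(x_star, p)]  # grad f(x*)

# Active constraints: I(x*) = {i : x*_i = 0}. With a_i = -e_i, the KKT
# equation grad f(x*) + sum_i lambda_i * a_i = 0 forces lambda_i = grad_i
# on the active coordinates.
active = [i for i, xi in enumerate(x_star) if xi == 0.0]
lambdas = {i: grad[i] for i in active}

kkt_multipliers_nonnegative = all(lam >= 0.0 for lam in lambdas.values())
# On inactive coordinates the gradient must vanish (Fermat rule there):
kkt_stationarity = all(abs(grad[i]) < 1e-12
                       for i in range(len(p)) if i not in active)
```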
The indicated results demonstrate that the fact that a point x∗ ∈ Dom f is a global minimizer of a
convex function f depends only on the local behaviour of f at x∗. This is not the case with maximizers
of a convex function. First of all, such a maximizer, if it exists, in all nontrivial cases should belong to the
boundary of the domain of the function:

Theorem C.5.3 Let f be convex, and let Q be the domain of f . Assume that f attains its maximum
on Q at a point x∗ from the relative interior of Q. Then f is constant on Q.

Proof. Let y ∈ Q; we should prove that f(y) = f(x∗). There is nothing to prove if y = x∗, so that we
may assume that y ≠ x∗. Since, by assumption, x∗ ∈ ri Q, we can extend the segment [x∗, y] through the
endpoint x∗, keeping the left endpoint of the segment in Q; in other words, there exists a point y′ ∈ Q
such that x∗ is an interior point of the segment [y′, y]:

x∗ = λy′ + (1 − λ)y

for certain λ ∈ (0, 1). From the definition of convexity,

f(x∗) ≤ λf(y′) + (1 − λ)f(y).

Since both f(y′) and f(y) do not exceed f(x∗) (x∗ is a maximizer of f on Q!) and both the weights λ
and 1 − λ are strictly positive, the indicated inequality can be valid only if f(y′) = f(y) = f(x∗). 2

The next theorem gives further information on maxima of convex functions:

Theorem C.5.4 Let f be a convex function on Rn and E be a subset of Rn. Then

sup_{Conv E} f = sup_E f. (C.5.5)

In particular, if S ⊂ Rn is a convex and compact set, then the supremum of f on S is equal to the supremum
of f on the set of extreme points of S:

sup_S f = sup_{Ext(S)} f. (C.5.6)
Proof. To prove (C.5.5), let x ∈ Conv E, so that x is a convex combination of points from E (Theorem
B.1.4 on the structure of convex hull):

x = ∑_i λi xi [xi ∈ E, λi ≥ 0, ∑_i λi = 1].

Applying Jensen's inequality (Proposition C.1.3), we get

f(x) ≤ ∑_i λi f(xi) ≤ ∑_i λi sup_E f = sup_E f,

so that the left hand side in (C.5.5) is ≤ the right hand one; the inverse inequality is evident, since
Conv E ⊃ E. 2

To derive (C.5.6) from (C.5.5), it suffices to note that from the Krein-Milman Theorem (Theorem
B.2.10) for a convex compact set S one has S = Conv Ext(S). 2
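Equality (C.5.6) is easy to observe numerically. In this added sketch (the quadratic f and the square S = [0, 1]² are hypothetical choices), a dense grid over S never exceeds the best of the four vertices, which are exactly Ext(S):

```python
# Hypothetical convex f on the square S = [0, 1]^2, whose extreme points
# are the four vertices.
f = lambda x: (x[0] - 0.3) ** 2 + (x[1] + 0.5) ** 2

vertices = [(0, 0), (0, 1), (1, 0), (1, 1)]
max_over_vertices = max(f(v) for v in vertices)

# A dense grid over S never beats the best vertex value:
n = 50
grid_max = max(f((i / n, j / n)) for i in range(n + 1) for j in range(n + 1))

attained_at_extreme_point = grid_max <= max_over_vertices + 1e-12
```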

The last theorem on maxima of convex functions is as follows:

Theorem C.5.5 Let f be a convex function such that the domain Q of f is closed and does
not contain lines. Then
(i) If the set
Argmax_Q f ≡ {x ∈ Q : f(x) ≥ f(y) ∀y ∈ Q}
of global maximizers of f is nonempty, then it intersects the set Ext(Q) of the extreme points
of Q, so that at least one of the maximizers of f is an extreme point of Q;
(ii) If the set Q is polyhedral and f is above bounded on Q, then the maximum of f on Q is
achieved: Argmax_Q f ≠ ∅.

Proof. Let us start with (i). We shall prove this statement by induction on the dimension
of Q. The base dim Q = 0, i.e., the case of a singleton Q, is trivial, since here Q = Ext Q =
Argmax_Q f. Now assume that the statement is valid for the case of dim Q ≤ p, and let us
prove that it is valid also for the case of dim Q = p + 1. Let us first verify that the set
Argmax_Q f intersects with the (relative) boundary of Q. Indeed, let x ∈ Argmax_Q f. There
is nothing to prove if x itself is a relative boundary point of Q; and if x is not a boundary
point, then, by Theorem C.5.3, f is constant on Q, so that Argmax_Q f = Q; and since Q is
closed, every relative boundary point of Q (such a point does exist, since Q does not contain
lines and is of positive dimension) is a maximizer of f on Q, so that here again Argmax_Q f
intersects ∂ri Q.
Thus, among the maximizers of f there exists at least one, let it be x, which belongs to the
relative boundary of Q. Let H be the hyperplane which supports Q at x (see Section B.2.6),
and let Q0 = Q ∩ H. The set Q0 is closed and convex (since Q and H are), nonempty (it
contains x) and does not contain lines (since Q does not). We have max_Q f = f(x) = max_{Q0} f
(note that Q0 ⊂ Q), whence

∅ ≠ Argmax_{Q0} f ⊂ Argmax_Q f.

Same as in the proof of the Krein-Milman Theorem (Theorem B.2.10), we have dim Q0 <
dim Q. In view of this inequality we can apply to f and Q0 our inductive hypothesis to get

Ext(Q0) ∩ Argmax_{Q0} f ≠ ∅.

Since Ext(Q0) ⊂ Ext(Q) by Lemma B.2.4 and, as we just have seen, Argmax_{Q0} f ⊂ Argmax_Q f,
we conclude that the set Ext(Q) ∩ Argmax_Q f is not smaller than Ext(Q0) ∩ Argmax_{Q0} f and is
therefore nonempty, as required. 2

To prove (ii), let us use the results, known to us from Section B.2.9, on the structure of a
polyhedral convex set:
Q = Conv(V) + Cone(R),
where V and R are finite sets. We are about to prove that the upper bound of f on Q is
exactly the maximum of f on the finite set V :

∀x ∈ Q : f(x) ≤ max_{v∈V} f(v). (C.5.7)

This will mean, in particular, that f attains its maximum on Q – e.g., at the point of V
where f attains its maximum on V .
To prove the announced statement, I first claim that if f is above bounded on Q, then every
direction r ∈ Cone(R) is descent for f , i.e., is such that every step in this direction taken
from every point x ∈ Q does not increase f :

f(x + tr) ≤ f(x) ∀x ∈ Q ∀t ≥ 0. (C.5.8)

Indeed, if, on the contrary, there were x ∈ Q, r ∈ R and t ≥ 0 such that f(x + tr) > f(x), we
would have t > 0 and, by Lemma C.3.1,

f(x + sr) ≥ f(x) + (s/t)(f(x + tr) − f(x)), s ≥ t.

Since x ∈ Q and r ∈ Cone(R), x + sr ∈ Q for all s ≥ 0, and since f is above bounded on Q,
the left hand side in the latter inequality is above bounded, while the right hand one, due to
f(x + tr) > f(x), goes to +∞ as s → ∞, which is the desired contradiction.
Now we are done: to prove (C.5.7), note that a generic point x ∈ Q can be represented as

x = ∑_{v∈V} λv v + r [r ∈ Cone(R); ∑_v λv = 1, λv ≥ 0],

and we have

f(x) = f(∑_{v∈V} λv v + r)
     ≤ f(∑_{v∈V} λv v)   [by (C.5.8)]
     ≤ ∑_{v∈V} λv f(v)   [Jensen's Inequality]
     ≤ max_{v∈V} f(v). 2

C.6 Subgradients and Legendre transformation


C.6.1 Proper functions and their representation
According to one of two equivalent definitions, a convex function f on Rn is a function taking
values in R ∪ {+∞} such that the epigraph
Epi(f) = {(t, x) ∈ Rn+1 : t ≥ f(x)}
is a nonempty convex set. Thus, there is no essential difference between convex functions
and convex sets: a convex function generates a convex set – its epigraph – which of course
remembers everything about the function. And the only specific property of the epigraph as
a convex set is that it has a recessive direction – namely, e = (1, 0) – such that the intersection
of the epigraph with every line directed by e is either empty, or is a closed ray. Whenever a
nonempty convex set possesses such a property with respect to a certain direction, it can be
represented, in properly chosen coordinates, as the epigraph of some convex function. Thus,
a convex function is, basically, nothing but a way to look, in the literal meaning of the latter
verb, at a convex set.
Now, we know that “actually good” convex sets are closed ones: they possess a lot of
important properties (e.g., admit a good outer description) which are not shared by arbitrary
convex sets. It means that among convex functions there also are “actually good” ones –
those with closed epigraphs. Closedness of the epigraph can be “translated” to the functional
language and there becomes a special kind of continuity – lower semicontinuity:
Definition C.6.1 [Lower semicontinuity] Let f be a function (not necessarily convex) de-
fined on Rn and taking values in R ∪ {+∞}. We say that f is lower semicontinuous at a
point x̄, if for every sequence of points {xi } converging to x̄ one has
f(x̄) ≤ lim inf_{i→∞} f(xi)

(here, of course, lim inf of a sequence with all terms equal to +∞ is +∞).
f is called lower semicontinuous, if it is lower semicontinuous at every point.
A trivial example of a lower semicontinuous function is a continuous one. Note, however,
that a lower semicontinuous function is not obliged to be continuous; what it is obliged to do is to make
only "jumps down". E.g., the function

f(x) = { 0, x ≠ 0;  a, x = 0 }

is lower semicontinuous if a ≤ 0 ("jump down at x = 0 or no jump at all"), and is not lower
semicontinuous if a > 0 ("jump up").
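The two-valued example can be checked mechanically; the sketch below is an added illustration that approximates the liminf along the sequence x_i = 1/i → 0 and confirms that a ≤ 0 gives lower semicontinuity at 0 while a > 0 does not:

```python
# The two-valued function from the text: f_a(x) = 0 for x != 0, f_a(0) = a.
def f(a, x):
    return a if x == 0 else 0.0

def lsc_at_zero(a, n_terms=1000):
    # Along x_i = 1/i -> 0 all values equal 0, so the liminf is 0;
    # lower semicontinuity at 0 requires f_a(0) <= liminf f_a(x_i).
    tail_inf = min(f(a, 1.0 / i) for i in range(1, n_terms + 1))
    return f(a, 0) <= tail_inf

jump_down_is_lsc = lsc_at_zero(-1.0)   # a <= 0: lower semicontinuous
jump_up_is_not = not lsc_at_zero(1.0)  # a > 0: condition fails
```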
The following statement links lower semicontinuity with the geometry of the epigraph:

Proposition C.6.1 A function f defined on Rn and taking values from R ∪ {+∞} is lower
semicontinuous if and only if its epigraph is closed (e.g., due to its emptiness).

I shall not prove this statement, same as most of the other statements in this Section; the reader
definitely is able to restore the (very simple) proofs I am skipping.
An immediate consequence of the latter proposition is as follows:

Corollary C.6.1 The upper bound

f(x) = sup_{α∈A} fα(x)

of an arbitrary family of lower semicontinuous functions is lower semicontinuous.


[from now till the end of the Section, if the opposite is not explicitly stated, “a function”
means “a function defined on the entire Rn and taking values in R ∪ {+∞}”]

Indeed, the epigraph of the upper bound is the intersection of the epigraphs of the functions
forming the bound, and the intersection of closed sets always is closed. 2

Now let us look at convex lower semicontinuous functions; according to our general conven-
tion, “convex” means “satisfying the convexity inequality and finite at least at one point”,
or, which is the same, “with convex nonempty epigraph”; and as we just have seen, “lower
semicontinuous” means “with closed epigraph”. Thus, we are interested in functions with
closed convex nonempty epigraphs; to save words, let us call these functions proper.
What we are about to do is to translate to the functional language several constructions and
results related to convex sets. In the usual life, a translation (e.g. of poetry) typically results
in something less rich than the original; in contrast to this, in mathematics this is a powerful
source of new ideas and constructions.

"Outer description" of a proper function. We know that a closed convex set is an
intersection of closed half-spaces. What does this fact imply when the set is the epigraph
of a proper function f ? First of all, note that the epigraph is not a completely arbitrary
convex set: it has a recessive direction e = (1, 0) – the basic orth of the t-axis in the space
of a proper function f ? First of all, note that the epigraph is not a completely arbitrary
convex set: it has a recessive direction e = (1, 0) – the basic orth of the t-axis in the space
of variables t ∈ R, x ∈ Rn where the epigraph lives. This direction, of course, should be
recessive for every closed half-space

(∗) Π = {(t, x) : αt ≥ dT x − a} [|α| + |d| > 0]

containing Epi(f ) (note that what is written in the right hand side of the latter relation,
is one of many universal forms of writing down a general nonstrict linear inequality in the
space where the epigraph lives; this is the form the most convenient for us now). Thus, e
should be a recessive direction of Π ⊃ Epi(f ); as it is immediately seen, recessivity of e for
Π means exactly that α ≥ 0. Thus, speaking about closed half-spaces containing Epi(f ), we
in fact are considering some of the half-spaces (*) with α ≥ 0.
Now, there are two essentially different possibilities for α to be nonnegative – (A) to be
positive, and (B) to be zero. In the case of (B) the boundary hyperplane of Π is “vertical”
– it is parallel to e, and in fact it “bounds” only x – Π is comprised of all pairs (t, x) with x
belonging to a certain half-space in the x-subspace and t being an arbitrary real. These “vertical”
half-spaces will be of no interest for us.
682 APPENDIX C. CONVEX FUNCTIONS

The half-spaces which indeed are of interest for us are the “nonvertical” ones: those given
by the case (A), i.e., with α > 0. For a nonvertical half-space Π, we can always divide the
inequality defining Π by α and thus make α = 1. Thus, a “nonvertical” candidate to the role
of a closed half-space containing Epi(f ) always can be written down as

(∗∗) Π = {(t, x) : t ≥ dT x − a},

i.e., can be represented as the epigraph of an affine function of x.


Now, when is such a candidate indeed a half-space containing Epi(f )? The answer is clear:
this is the case if and only if the affine function dT x − a is ≤ f (·) everywhere on Rn – as
we shall say, “is an affine minorant of f ”; indeed, the smaller is the epigraph, the larger
is the function. If we knew that Epi(f ) – which definitely is the intersection of all closed
half-spaces containing Epi(f ) – is in fact the intersection of already nonvertical closed half-
spaces containing Epi(f ), or, which is the same, the intersection of the epigraphs of all affine
minorants of f , we would be able to get a nice and nontrivial result:
(!) a proper convex function is the upper bound of affine functions – all its affine
minorants.
(indeed, we already know that it is the same – to say that a function is an upper bound of
certain family of functions, and to say that the epigraph of the function is the intersection
of the epigraphs of the functions of the family).
(!) indeed is true:

Proposition C.6.2 A proper convex function f is the upper bound of all its affine mino-
rants. Moreover, at every point x̄ ∈ ri Dom f from the relative interior of the domain of f ,
f is not merely the upper bound, but simply the maximum of its minorants: there exists an
affine function fx̄ (x) which is ≤ f (x) everywhere in Rn and is equal to f at x = x̄.

Proof. I. We start with the “Moreover” part of the statement; this is the key to the entire
statement. Thus, we are about to prove that if x̄ ∈ ri Dom f , then there exists an affine
function fx̄ (x) which is everywhere ≤ f (x), and at x = x̄ the inequality becomes an equality.
I.1⁰. First of all, we can easily reduce the situation to the one where Dom f is full-dimensional.
Indeed, by shifting f we may make the affine span Aff(Dom f ) of the domain of f to be a
linear subspace L in Rn ; restricting f onto this linear subspace, we clearly get a proper
function on L. If we believe that our statement is true for the case when the domain of f is
full-dimensional, we can conclude that there exists an affine function

dT x − a [x ∈ L]

on L (d ∈ L) such that

f (x) ≥ dT x − a ∀x ∈ L; f (x̄) = dT x̄ − a.

The affine function we get clearly can be extended, by the same formula, from L to the
entire Rn and is a minorant of f on the entire Rn – outside of L ⊃ Dom f , f simply is +∞!
This minorant on Rn is exactly what we need.
I.2⁰. Now let us prove that our statement is valid when Dom f is full-dimensional, so that
x̄ is an interior point of the domain of f . Let us look at the point y = (f (x̄), x̄). This is a
point from the epigraph of f , and I claim that it is a point from the relative boundary of
the epigraph. Indeed, if y were a relative interior point of Epi(f ), then, taking y′ = y + e,
we would get a segment [y′ , y] contained in Epi(f ); since the endpoint y of the segment is
assumed to be relative interior for Epi(f ), we could extend this segment a little through this
endpoint, not leaving Epi(f ); but this clearly is impossible, since the t-coordinate of the new
endpoint would be < f (x̄), and the x-component of it still would be x̄.
C.6. SUBGRADIENTS AND LEGENDRE TRANSFORMATION 683

Thus, y is a point from the relative boundary of Epi(f ). Now I claim that y′ is an interior
point of Epi(f ). This is immediate: we know from Theorem C.4.1 that f is continuous at x̄,
so that there exists a neighbourhood U of x̄ in Aff(Dom f ) = Rn such that f (x) ≤ f (x̄) + 0.5
whenever x ∈ U , or, in other words, the set

V = {(t, x) : x ∈ U, t > f (x̄) + 0.5}

is contained in Epi(f ); but this set clearly contains a neighbourhood of y′ in Rn+1 .


Now let us look at the supporting linear form to Epi(f ) at the point y of the relative boundary
of Epi(f ). This form gives us a linear inequality on Rn+1 which is satisfied everywhere on
Epi(f ) and becomes equality at y; besides this, the inequality is not an identical equality on
Epi(f ) – it is strict somewhere on Epi(f ). Without loss of generality we may assume that the
inequality is of the form
(+) αt ≥ dT x − a.
Now, since our inequality is satisfied at y′ = y + e and becomes equality at (t, x) = y, α
should be ≥ 0; it cannot be 0, since in the latter case the inequality in question would be
equality also at y′ ∈ int Epi(f ). But a linear inequality which is satisfied on a convex set and
is equality at an interior point of the set is trivial – it comes from the zero linear form (this
is exactly the statement that a linear form attaining its minimum on a convex set at a point
from the relative interior of the set is constant on the set and on its affine hull).
Thus, inequality (+) which is satisfied on Epi(f ) and becomes equality at y is an inequality
with α > 0. Let us divide both sides of the inequality by α; we get a new inequality of the
form
(&) t ≥ dT x − a
(I keep the same notation for the right hand side coefficients – we never will come back to
the old coefficients); this inequality is valid on Epi(f ) and is equality at y = (f (x̄), x̄). Since
the inequality is valid on Epi(f ), it is valid at every pair (t, x) with x ∈ Dom f and t = f (x):

(#) f (x) ≥ dT x − a ∀x ∈ Dom f ;

so that the right hand side is an affine minorant of f on Dom f and therefore – on Rn
(f = +∞ outside Dom f !). It remains to note that (#) is equality at x̄, since (&) is equality
at y. □

II. We have proved that if F is the set of all affine functions which are minorants of f , then
the function

f¯(x) = sup_{φ∈F} φ(x)

is equal to f on ri Dom f (and at x from the latter set in fact sup in the right hand side can
be replaced with max); to complete the proof of the Proposition, we should prove that f¯ is
equal to f also outside ri Dom f .
II.1⁰. Let us first prove that f¯ is equal to f outside cl Dom f , or, which is the same, prove
that f¯(x) = +∞ outside cl Dom f . This is easy: if x̄ is a point outside cl Dom f , it can be
strongly separated from Dom f , see Separation Theorem (ii) (Theorem B.2.9). Thus, there
exists z ∈ Rn such that

z T x̄ ≥ z T x + ζ ∀x ∈ Dom f [ζ > 0]. (C.6.1)

Besides this, we already know that there exists at least one affine minorant of f , or, which
is the same, there exist a and d such that

f (x) ≥ dT x − a ∀x ∈ Dom f. (C.6.2)



Let us add to (C.6.2) inequality (C.6.1) multiplied by positive weight λ; we get

f (x) ≥ φλ (x) ≡ (d + λz)T x + [λζ − a − λz T x̄] ∀x ∈ Dom f.

This inequality clearly says that φλ (·) is an affine minorant of f on Rn for every λ > 0.
The value of this minorant at x = x̄ is equal to dT x̄ − a + λζ and therefore it goes to +∞
as λ → +∞. We see that the upper bound of affine minorants of f at x̄ indeed is +∞, as
claimed.
II.2⁰. Thus, we know that the upper bound f¯ of all affine minorants of f is equal to f
everywhere on the relative interior of Dom f and everywhere outside the closure of Dom f ;
all we should prove is that this equality is also valid at the points of the relative boundary of
Dom f . Let x̄ be such a point. There is nothing to prove if f¯(x̄) = +∞, since by construction
f¯ is everywhere ≤ f . Thus, we should prove that if f¯(x̄) = c < ∞, then f (x̄) = c. Since
f¯ ≤ f everywhere, to prove that f (x̄) = c is the same as to prove that f (x̄) ≤ c. This
is immediately given by lower semicontinuity of f : let us choose x′ ∈ ri Dom f and look
what happens along a sequence of points xi ∈ [x′ , x̄) converging to x̄. All the points of this
sequence are relative interior points of Dom f (Lemma B.1.1), and consequently

f (xi ) = f¯(xi ).

Now, xi = (1 − λi )x̄ + λi x′ with λi → +0 as i → ∞; since f¯ clearly is convex (as the upper
bound of a family of affine and therefore convex functions), we have

f¯(xi ) ≤ (1 − λi )f¯(x̄) + λi f¯(x′ ).

Putting things together, we get

f (xi ) ≤ (1 − λi )f¯(x̄) + λi f (x′ );

as i → ∞, xi → x̄, and the right hand side in our inequality converges to f¯(x̄) = c; since f
is lower semicontinuous, we get f (x̄) ≤ c. □
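Proposition C.6.2 lends itself to a quick numerical sanity check: for a smooth convex f , the tangent lines f (x̄) + f ′(x̄)(x − x̄) are exact affine minorants, and their pointwise supremum should reproduce f . A minimal sketch, not from the text – the choice f (x) = x² and the grids are illustrative assumptions:

```python
# Numerical illustration of Proposition C.6.2 for f(x) = x^2:
# each tangent line l_s(x) = s^2 + 2s(x - s) is an affine minorant of f,
# and their pointwise supremum over many tangency points recovers f.
def f(x):
    return x * x

def tangent(s, x):           # affine minorant of f exact at the point s
    return s * s + 2 * s * (x - s)

slopes = [i / 100.0 for i in range(-300, 301)]   # tangency points in [-3, 3]
for x in [-2.0, -0.5, 0.0, 1.0, 2.5]:
    sup_val = max(tangent(s, x) for s in slopes)
    assert sup_val <= f(x) + 1e-12               # minorants never exceed f
    assert abs(sup_val - f(x)) < 1e-3            # the sup reproduces f(x)
print("sup of affine minorants matches f on the test points")
```

The first assertion reflects that a minorant never exceeds f ; the second, that the supremum over enough tangency points recovers f up to grid error.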

We see why “translation of mathematical facts from one mathematical language to another”
– in our case, from the language of convex sets to the language of convex functions – may
be fruitful: because we invest a lot into the process rather than run it mechanically.

Closure of a convex function. We got a nice result on the “outer description” of a
proper convex function: it is the upper bound of a family of affine functions. Note that,
vice versa, the upper bound of every family of affine functions is a proper function, provided
that this upper bound is finite at least at one point (indeed, as we know from Section C.2.1,
upper bound of every family of convex functions is convex, provided that it is finite at least
at one point; and Corollary C.6.1 says that upper bound of lower semicontinuous functions
(e.g., affine ones – they are even continuous) is lower semicontinuous).
Now, what to do with a convex function which is not lower semicontinuous? The analogous
question about convex sets – what to do with a convex set which is not closed – can be
resolved very simply: we can pass from the set to its closure and thus get a “normal” object
which is very “close” to the original one: the “main part” of the original set – its relative
interior – remains unchanged, and the “correction” adds to the set something relatively small
– the relative boundary. The same approach works for convex functions: if a convex function
f is not proper (i.e., its epigraph, being convex and nonempty, is not closed), we can “correct”
the function – replace it with a new function with the epigraph being the closure of Epi(f ).
To justify this approach, we, of course, should be sure that the closure of the epigraph of
a convex function is also an epigraph of such a function. This indeed is the case, and to
see it, it suffices to note that a set G in Rn+1 is the epigraph of a function taking values in
R ∪ {+∞} if and only if the intersection of G with every vertical line {x = const, t ∈ R} is
either empty, or is a closed ray of the form {x = const, t ≥ t̄ > −∞}. Now, it is absolutely
evident that if G is the closure of the epigraph of a function f , then its intersection with a
vertical line is either empty, or is a closed ray, or is the entire line (the last case indeed can
take place – look at the closure of the epigraph of the function equal to − x1 for x > 0 and
+∞ for x ≤ 0). We see that in order to justify our idea of “proper correction” of a convex
function we should prove that if f is convex, then the last of the indicated three cases –
the intersection of cl Epi(f ) with a vertical line is the entire line – never occurs. This fact
evidently is a corollary of the following simple

Proposition C.6.3 A convex function is below bounded on every bounded subset of Rn .

Proof. Without loss of generality we may assume that the domain of the function f is
full-dimensional and that 0 is an interior point of the domain. According to Theorem C.4.1,
there exists a neighbourhood U of the origin – which can be thought of as a ball of some
radius r > 0 centered at the origin – where f is bounded from above by some C. Now, if
R > 0 is arbitrary and x is an arbitrary point with |x| ≤ R, then the point

y = −(r/R) x

belongs to U , and we have

0 = (r/(r + R)) x + (R/(r + R)) y;

since f is convex, we conclude that

f (0) ≤ (r/(r + R)) f (x) + (R/(r + R)) f (y) ≤ (r/(r + R)) f (x) + (R/(r + R)) C,

and we get the lower bound

f (x) ≥ ((r + R)/r) f (0) − (R/r) C

for the values of f in the ball of radius R centered at 0. □
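The lower bound derived in the proof is easy to spot-check numerically. A sketch under stated assumptions: f (x) = exp(x) and r = 1 are illustrative choices, not from the text; in the notation of the proof, C is an upper bound on f over the ball of radius r.

```python
# Numerical check of the lower bound in Proposition C.6.3 for f(x) = exp(x):
# with r = 1 and C = max f on [-1, 1] = e, convexity gives
#   f(x) >= ((r + R)/r) * f(0) - (R/r) * C   whenever |x| <= R.
import math

r, C = 1.0, math.e          # f is bounded above by C on the ball of radius r
f = math.exp

for R in [1.0, 2.0, 5.0]:
    bound = ((r + R) / r) * f(0.0) - (R / r) * C
    worst = min(f(x / 100.0) for x in range(int(-100 * R), int(100 * R) + 1))
    assert worst >= bound   # f is indeed below bounded on the ball of radius R
print("lower bound holds on each tested ball")
```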

Thus, we conclude that the closure of the epigraph of a convex function f is the epigraph of
certain function, let it be called the closure cl f of f . Of course, this latter function is convex
(its epigraph is convex – it is the closure of a convex set), and since its epigraph is closed,
cl f is proper. The following statement gives a direct description of cl f in terms of f :

Proposition C.6.4 Let f be a convex function and cl f be its closure. Then


(i) For every x one has

cl f (x) = lim_{r→+0} inf_{x′ : ‖x′ −x‖2 ≤r} f (x′ ).

In particular,
f (x) ≥ cl f (x)
for all x, and
f (x) = cl f (x)
whenever x ∈ ri Dom f , same as whenever x ∉ cl Dom f .
Thus, the “correction” f ↦ cl f may vary f only at the points from the relative boundary of
Dom f ,
Dom f ⊂ Dom cl f ⊂ cl Dom f ,

whence also
ri Dom f = ri Dom cl f .

(ii) The family of affine minorants of cl f is exactly the family of affine minorants of f , so
that
cl f (x) = sup{φ(x) : φ is an affine minorant of f },
and the sup in the right hand side can be replaced with max whenever x ∈ ri Dom cl f =
ri Dom f .
[“so that” comes from the fact that cl f is proper and is therefore the upper bound of its
affine minorants]

C.6.2 Subgradients
Let f be a convex function, and let x ∈ Dom f . It may happen that there exists an affine
minorant dT x − a of f which coincides with f at x:

f (y) ≥ dT y − a ∀y, f (x) = dT x − a.

From the equality in the latter relation we get a = dT x − f (x), and substituting this repre-
sentation of a into the first inequality, we get

f (y) ≥ f (x) + dT (y − x) ∀y. (C.6.3)

Thus, if f admits an affine minorant which is exact at x, then there exists d which gives rise
to inequality (C.6.3). Vice versa, if d is such that (C.6.3) takes place, then the right hand
side of (C.6.3), regarded as a function of y, is an affine minorant of f which is exact at x.
Now note that (C.6.3) expresses certain property of a vector d. A vector satisfying, for a
given x, this property – i.e., the slope of an exact at x affine minorant of f – is called a
subgradient of f at x, and the set of all subgradients of f at x is denoted ∂f (x).
Subgradients of convex functions play an important role in the theory and numerical methods
of Convex Programming – they are quite reasonable surrogates of gradients. The most
elementary properties of subgradients are summarized in the following statement:

Proposition C.6.5 Let f be a convex function and x be a point from Dom f . Then
(i) ∂f (x) is a closed convex set which for sure is nonempty when x ∈ ri Dom f
(ii) If x ∈ int Dom f and f is differentiable at x, then ∂f (x) is the singleton comprised of
the usual gradient of f at x.

Proof. (i): Closedness and convexity of ∂f (x) are evident – (C.6.3) is an infinite system
of nonstrict linear inequalities with respect to d, the inequalities being indexed by y ∈ Rn .
Nonemptiness of ∂f (x) for the case when x ∈ ri Dom f – this is the most important fact
about the subgradients – is readily given by our preceding results. Indeed, we should prove
that if x ∈ ri Dom f , then there exists an affine minorant of f which is exact at x. But this
is an immediate consequence of Proposition C.6.4: part (ii) of the proposition says that there
exists an affine minorant of f which is equal to cl f (x) at the point x, and part (i) says that
f (x) = cl f (x).
(ii): If x ∈ int Dom f and f is differentiable at x, then ∇f (x) ∈ ∂f (x) by the Gradient
Inequality. To prove that in the case in question ∇f (x) is the only subgradient of f at x,
note that if d ∈ ∂f (x), then, by definition,

f (y) − f (x) ≥ dT (y − x) ∀y

Substituting y − x = th, h being a fixed direction and t being > 0, dividing both sides of the
resulting inequality by t and passing to the limit as t → +0, we get

hT ∇f (x) ≥ hT d.

This inequality should be valid for all h, which is possible if and only if d = ∇f (x). □

Proposition C.6.5 explains why subgradients are good surrogates of gradients: at a point
where the gradient exists, it is the only subgradient, but, in contrast to the gradient, a
subgradient exists basically everywhere (for sure in the relative interior of the domain of the
function). E.g., let us look at the simple function

f (x) = |x|

on the axis. It is, of course, convex (as the maximum of two linear forms x and −x). Whenever
x ≠ 0, f is differentiable at x with the derivative +1 for x > 0 and −1 for x < 0. At the point
x = 0, f is not differentiable; nevertheless, it must have subgradients at this point (since 0
is an interior point of the domain of the function). And indeed, it is immediately seen that
the subgradients of |x| at x = 0 are exactly the reals from the segment [−1, 1]. Thus,

∂|x| = {−1} for x < 0,  [−1, 1] for x = 0,  {+1} for x > 0.
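The defining inequality (C.6.3) makes this example directly checkable by machine. A sketch (the grids and tolerances are illustrative assumptions):

```python
# The subdifferential of f(x) = |x|: d is a subgradient at x iff
# |y| >= |x| + d*(y - x) for all y.  We test candidate slopes on a grid of y's.
def is_subgradient(d, x, ys):
    return all(abs(y) >= abs(x) + d * (y - x) - 1e-12 for y in ys)

ys = [i / 10.0 for i in range(-50, 51)]

assert is_subgradient(1.0, 2.0, ys) and not is_subgradient(0.5, 2.0, ys)    # x>0: only +1
assert is_subgradient(-1.0, -2.0, ys) and not is_subgradient(0.0, -2.0, ys) # x<0: only -1
assert all(is_subgradient(d / 10.0, 0.0, ys) for d in range(-10, 11))       # x=0: [-1,1]
assert not is_subgradient(1.1, 0.0, ys)                                     # slopes outside fail
print("subdifferential of |x| behaves as stated")
```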

Note also that if x is a relative boundary point of the domain of a convex function, even
a “good” one, the set of subgradients of f at x may be empty, as is the case with the
function

f (y) = −√y for y ≥ 0,  +∞ for y < 0;

it is clear that there is no non-vertical supporting line to the epigraph of the function at the
point (0, f (0)), and, consequently, there is no affine minorant of the function which is exact
at y = 0.
A significant – and important – part of Convex Analysis deals with subgradient calculus –
with the rules for computing subgradients of “composite” functions, like sums, superposi-
tions, maxima, etc., given subgradients of the operands. These rules extend onto nonsmooth
convex case the standard Calculus rules and are very nice and instructive; the related con-
siderations, however, are beyond our scope.

C.6.3 Legendre transformation


Let f be a convex function. We know that f “basically” is the upper bound of all its affine
minorants; this is exactly the case when f is proper, otherwise the corresponding equality
takes place everywhere except, perhaps, some points from the relative boundary of Dom f .
Now, when is an affine function dT x − a an affine minorant of f ? This is the case if and only if

f (x) ≥ dT x − a

for all x or, which is the same, if and only if

a ≥ dT x − f (x)

for all x. We see that if the slope d of an affine function dT x − a is fixed, then in order for
the function to be a minorant of f we should have

a ≥ sup_{x∈Rn} [dT x − f (x)].

The supremum in the right hand side of the latter relation is a certain function of d; this
function is called the Legendre transformation of f and is denoted f ∗ :

f ∗ (d) = sup_{x∈Rn} [dT x − f (x)].

Geometrically, the Legendre transformation answers the following question: given a slope d
of an affine function, i.e., given the hyperplane t = dT x in Rn+1 , what is the minimal “shift
down” of the hyperplane which places it below the graph of f ?
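This "minimal shift down" reading can be made concrete with a brute-force computation. A sketch, not from the text: f (x) = x² is an illustrative choice, for which the shift works out in closed form to d²/4.

```python
# The geometric meaning of f*(d): the minimal "shift down" a such that the
# affine function d*x - a lies below f.  For f(x) = x^2 one has f*(d) = d^2/4.
xs = [i / 100.0 for i in range(-500, 501)]       # grid on [-5, 5]

def min_shift(d):
    # smallest a with d*x - a <= x^2 on the grid, i.e. sup_x [d*x - x^2]
    return max(d * x - x * x for x in xs)

for d in [-3.0, -1.0, 0.0, 2.0]:
    assert abs(min_shift(d) - d * d / 4.0) < 1e-3       # matches d^2/4
    assert all(d * x - min_shift(d) <= x * x + 1e-12 for x in xs)
print("minimal shift equals d^2/4 for f(x) = x^2")
```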
From the definition of the Legendre transformation it follows that f ∗ is a proper function.
Indeed, we lose nothing when replacing sup_{x∈Rn} [dT x − f (x)] by sup_{x∈Dom f} [dT x − f (x)], so that
the Legendre transformation is the upper bound of a family of affine functions of d. Since this
bound is finite at least at one point (namely, at every d coming from an affine minorant of f ; we
know that such a minorant exists), it is a convex lower semicontinuous function, as claimed.
The most elementary (and the most fundamental) fact about the Legendre transformation
is its symmetry:

Proposition C.6.6 Let f be a convex function. Then twice taken Legendre transformation
of f is the closure cl f of f :
(f ∗ )∗ = cl f.
In particular, if f is proper, then it is the Legendre transformation of its Legendre transfor-
mation (which also is proper).

Proof is immediate. The Legendre transformation of f ∗ at the point x is, by definition,

sup_{d∈Rn} [xT d − f ∗ (d)] = sup_{d∈Rn , a≥f ∗ (d)} [dT x − a];

the second sup here is exactly the supremum of all affine minorants of f (this is the origin of
the Legendre transformation: a ≥ f ∗ (d) if and only if the affine form dT x − a is a minorant
of f ). And we already know that the upper bound of all affine minorants of f is the closure
of f . □
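The symmetry of Proposition C.6.6 can be observed numerically by conjugating twice on a grid. A sketch under stated assumptions: f (x) = exp(x) is an illustrative proper convex function (its conjugate is d ln d − d for d > 0), and the grids are chosen so that the maximizers stay inside them.

```python
# Proposition C.6.6 numerically: for the proper convex f(x) = exp(x), applying
# the (grid-approximated) Legendre transformation twice recovers f itself.
import math

def conjugate(g, zs, ws):
    # returns the map w -> sup_z [w*z - g(z)], the sup taken over the grid zs
    return {w: max(w * z - g(z) for z in zs) for w in ws}

xs = [i / 100.0 for i in range(-1000, 301)]     # x-grid on [-10, 3]
ds = [i / 100.0 for i in range(10, 301)]        # d-grid on [0.1, 3]

f_star = conjugate(math.exp, xs, ds)            # here f*(d) = d ln d - d
f_2star = {x: max(x * d - f_star[d] for d in ds) for x in [-1.0, 0.0, 0.5, 1.0]}

for x, v in f_2star.items():
    assert abs(v - math.exp(x)) < 1e-3          # (f*)* = f at the test points
print("double Legendre transformation recovers exp on the test points")
```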

The Legendre transformation is a very powerful tool – this is a “global” transformation, so
that local properties of f ∗ correspond to global properties of f . E.g.,
• d = 0 belongs to the domain of f ∗ if and only if f is below bounded, and if it is the
case, then f ∗ (0) = − inf f ;
• if f is proper, then the subgradients of f ∗ at d = 0 are exactly the minimizers of f on
Rn ;
• Dom f ∗ is the entire Rn if and only if f (x) grows, as ‖x‖2 → ∞, faster than ‖x‖2 :
there exists a function r(t) → ∞, as t → ∞, such that

f (x) ≥ r(‖x‖2 )‖x‖2 ∀x,
etc. Thus, whenever we can compute explicitly the Legendre transformation of f , we get
a lot of “global” information on f . Unfortunately, the more detailed investigation of the
properties of Legendre transformation is beyond our scope; I simply list several simple facts
and examples:
• From the definition of the Legendre transformation,

f (x) + f ∗ (d) ≥ xT d ∀x, d.



Specifying here f and f ∗ , we get a certain inequality, e.g., the following one:
[Young’s Inequality] if p and q are positive reals such that 1/p + 1/q = 1, then

|x|p /p + |d|q /q ≥ xd ∀x, d ∈ R

(indeed, as is immediately seen, the Legendre transformation of the function |x|p /p
is |d|q /q)

Consequences. The very simple-looking Young’s inequality gives rise to a very nice
and useful Hölder inequality:
Let 1 ≤ p ≤ ∞ and let q be such that 1/p + 1/q = 1 (p = 1 ⇒ q = ∞, p = ∞ ⇒ q = 1). For
every two vectors x, y ∈ Rn one has

Σ_{i=1}^{n} |xi yi | ≤ ‖x‖p ‖y‖q (C.6.4)

Indeed, there is nothing to prove if p or q is ∞ – if it is the case, the inequality becomes
the evident relation

Σ_i |xi yi | ≤ (max_i |xi |)(Σ_i |yi |).

Now let 1 < p < ∞, so that also 1 < q < ∞. In this case we should prove that

Σ_i |xi yi | ≤ (Σ_i |xi |p )^{1/p} (Σ_i |yi |q )^{1/q} .

There is nothing to prove if one of the factors in the right hand side vanishes; thus, we
can assume that x ≠ 0 and y ≠ 0. Now, both sides of the inequality are of homogeneity
degree 1 with respect to x (when we multiply x by t, both sides are multiplied by
|t|), and similarly with respect to y. Multiplying x and y by appropriate reals, we can
make both factors in the right hand side equal to 1: ‖x‖p = ‖y‖q = 1. Now we should
prove that under this normalization the left hand side in the inequality is ≤ 1, which
is immediately given by the Young inequality:

Σ_i |xi yi | ≤ Σ_i [|xi |p /p + |yi |q /q] = 1/p + 1/q = 1.

Note that the Hölder inequality says that

|xT y| ≤ ‖x‖p ‖y‖q ; (C.6.5)

when p = q = 2, we get the Cauchy inequality. Now, inequality (C.6.5) is exact in the
sense that for every x there exists y with ‖y‖q = 1 such that

xT y = ‖x‖p [= ‖x‖p ‖y‖q ];

it suffices to take

yi = ‖x‖p^{1−p} |xi |^{p−1} sign(xi )

(here x ≠ 0; the case of x = 0 is trivial – here y can be an arbitrary vector with
‖y‖q = 1).
Combining our observations, we come to an extremely important, although simple,
fact:

‖x‖p = max{y T x : ‖y‖q ≤ 1} [1/p + 1/q = 1]. (C.6.6)

It follows, in particular, that ‖x‖p is convex (as an upper bound of a family of linear
forms), whence

‖x′ + x′′ ‖p = 2‖(1/2)x′ + (1/2)x′′ ‖p ≤ 2(‖x′ ‖p /2 + ‖x′′ ‖p /2) = ‖x′ ‖p + ‖x′′ ‖p ;

this is nothing but the triangle inequality. Thus, ‖x‖p satisfies the triangle inequality;
it clearly possesses the two other characteristic properties of a norm – positivity and
homogeneity. Consequently, ‖ · ‖p is a norm – the fact that we announced twice and
have finally proven now.
• The Legendre transformation of the function

f (x) ≡ −a

is the function which is equal to a at the origin and is +∞ outside the origin; similarly,
the Legendre transformation of an affine function d̄T x − a is equal to a at d = d̄ and is
+∞ when d ≠ d̄;
• The Legendre transformation of the strictly convex quadratic form

f (x) = (1/2) xT Ax

(A is a positive definite symmetric matrix) is the quadratic form

f ∗ (d) = (1/2) dT A−1 d
• The Legendre transformation of the Euclidean norm

f (x) = ‖x‖2

is the function which is equal to 0 in the closed unit ball centered at the origin and is
+∞ outside the ball.
The latter example is a particular case of the following statement:
Let ‖x‖ be a norm on Rn , and let

‖d‖∗ = sup{dT x : ‖x‖ ≤ 1}

be the norm conjugate to ‖ · ‖.


Exercise C.1 Prove that ‖ · ‖∗ is a norm, and that the norm conjugate to ‖ · ‖∗ is the
original norm ‖ · ‖.
Hint: Observe that the unit ball of ‖ · ‖∗ is exactly the polar of the unit ball of ‖ · ‖.
The Legendre transformation of ‖x‖ is the characteristic function of the unit ball of
the conjugate norm, i.e., the function of d equal to 0 when ‖d‖∗ ≤ 1 and +∞
otherwise.
E.g., (C.6.6) says that the norm conjugate to ‖ · ‖p , 1 ≤ p ≤ ∞, is ‖ · ‖q , 1/p + 1/q = 1;
consequently, the Legendre transformation of the p-norm is the characteristic function of
the unit ‖ · ‖q -ball.
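The equality case of Hölder's inequality used above – the extremal y with ‖y‖_q = 1 attaining the max in (C.6.6) – is easy to verify on a concrete vector. A sketch, not from the text; the vector x and exponent p are illustrative assumptions:

```python
# The extremal y in (C.6.6): for y_i = ||x||_p^(1-p) * |x_i|^(p-1) * sign(x_i)
# one gets ||y||_q = 1 and y^T x = ||x||_p, so the max in (C.6.6) is attained.
def norm(v, p):
    return sum(abs(t) ** p for t in v) ** (1.0 / p)

x = [3.0, -1.0, 0.0, 2.0]
p = 3.0
q = p / (p - 1.0)                       # conjugate exponent, 1/p + 1/q = 1
nx = norm(x, p)
y = [nx ** (1.0 - p) * abs(t) ** (p - 1.0) * (1 if t > 0 else -1 if t < 0 else 0)
     for t in x]

assert abs(norm(y, q) - 1.0) < 1e-9                       # y lies on the unit q-sphere
assert abs(sum(a * b for a, b in zip(x, y)) - nx) < 1e-9  # y^T x = ||x||_p
print("equality case of Holder's inequality verified")
```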
Appendix D

Convex Programming, Lagrange Duality, Saddle Points

D.1 Mathematical Programming Program


A (constrained) Mathematical Programming program is a problem as follows:

(P) min {f (x) : x ∈ X, g(x) ≡ (g1 (x), ..., gm (x)) ≤ 0, h(x) ≡ (h1 (x), ..., hk (x)) = 0} . (D.1.1)

The standard terminology related to (D.1.1) is:

• [domain] X is called the domain of the problem

• [objective] f is called the objective

• [constraints] gi , i = 1, ..., m, are called the (functional) inequality constraints; hj , j = 1, ..., k, are
called the equality constraints1)

In the sequel, if the opposite is not explicitly stated, it always is assumed that the objective and the
constraints are well-defined on X.

• [feasible solution] a point x ∈ Rn is called a feasible solution to (D.1.1), if x ∈ X, gi (x) ≤ 0,
i = 1, ..., m, and hj (x) = 0, j = 1, ..., k, i.e., if x satisfies all restrictions imposed by the formulation
of the problem

– [feasible set] the set of all feasible solutions is called the feasible set of the problem
– [feasible problem] a problem with a nonempty feasible set (i.e., the one which admits feasible
solutions) is called feasible (or consistent)
– [active constraints] an inequality constraint gi (·) ≤ 0 is called active at a given feasible solution
x, if this constraint is satisfied at the point as an equality rather than strict inequality, i.e., if

gi (x) = 0.

An equality constraint hj (x) = 0 by definition is active at every feasible solution x.

1)
Rigorously speaking, the constraints are not the functions gi , hj , but the relations gi (x) ≤ 0, hj (x) = 0; in
fact the word “constraints” is used in both of these senses, and it is always clear what is meant. For example, saying
that x satisfies the constraints, we mean the relations, and saying that the constraints are differentiable, we mean
the functions.


• [optimal value] the quantity

f∗ = inf{f (x) : x ∈ X, g(x) ≤ 0, h(x) = 0} if the problem is feasible, and f∗ = +∞ if the problem is infeasible,

is called the optimal value of the problem


– [below boundedness] the problem is called below bounded, if its optimal value is > −∞, i.e.,
if the objective is below bounded on the feasible set

• [optimal solution] a point x ∈ Rn is called an optimal solution to (D.1.1), if x is feasible and
f (x) ≤ f (x′ ) for every feasible solution x′ , i.e., if

x ∈ Argmin_{x′ ∈X: g(x′ )≤0, h(x′ )=0} f (x′ )

– [solvable problem] a problem is called solvable, if it admits optimal solutions


– [optimal set] the set of all optimal solutions to a problem is called its optimal set
To solve the problem exactly means to find its optimal solution or to detect that no optimal solution
exists.

D.2 Convex Programming program and Lagrange Duality Theorem
A Mathematical Programming program (P) is called convex (or Convex Programming program), if
• X is a convex subset of Rn
• f, g1 , ..., gm are real-valued convex functions on X,
and
• there are no equality constraints at all.
Note that instead of saying that there are no equality constraints, we could say that there are constraints
of this type, but only linear ones; this latter case can be immediately reduced to the one without equality
constraints by replacing Rn with the affine subspace given by the (linear) equality constraints.

D.2.1 Convex Theorem on Alternative


The simplest case of a convex program is, of course, a Linear Programming program – the one where
X = Rn and the objective and all the constraints are linear. We already know what the optimality
conditions are for this particular case – they are given by the Linear Programming Duality Theorem. How
did we get these conditions?
We started with the observation that the fact that a point x∗ is an optimal solution can be expressed
in terms of solvability/unsolvability of certain systems of inequalities: in our present terms, these systems
are
x ∈ G, f (x) ≤ c, gj (x) ≤ 0, j = 1, ..., m (D.2.1)
and
x ∈ G, f (x) < c, gj (x) ≤ 0, j = 1, ..., m; (D.2.2)
here c is a parameter. Optimality of x∗ for the problem means exactly that for appropriately chosen c
(this choice, of course, is c = f (x∗ )) the first of these systems is solvable and x∗ is its solution, while the
second system is unsolvable. Given this trivial observation, we converted the “negative” part of it – the
claim that (D.2.2) is unsolvable – into a positive statement, using the General Theorem on Alternative,
and this gave us the LP Duality Theorem.
Now we are going to use the same approach. What we need is a “convex analogy” to the Theorem
on Alternative – something like the latter statement, but for the case when the inequalities in question
are given by convex functions rather than the linear ones (and, besides it, we have a “convex inclusion”
x ∈ X).
It is easy to guess the result we need. How did we come to the formulation of the Theorem on
Alternative? The question we were interested in was, basically, how to express in an affirmative manner
the fact that a system of linear inequalities has no solutions; to this end we observed that if we can
combine, in a linear fashion, the inequalities of the system and get an obviously false inequality like
0 ≤ −1, then the system is unsolvable; this condition is a certain affirmative statement with respect to the
weights with which we are combining the original inequalities.
Now, the scheme of the above reasoning has nothing in common with linearity (and even convexity)
of the inequalities in question. Indeed, consider an arbitrary inequality system of the type (D.2.2):

(I) :
f (x) < c
gj (x) ≤ 0, j = 1, ..., m
x ∈ X;

all we assume is that X is a nonempty subset in Rn and f, g1 , ..., gm are real-valued functions on X. It
is absolutely evident that
if there exist nonnegative λ1 , ..., λm such that the inequality

f (x) + Σ_{j=1}^{m} λj gj (x) < c (D.2.3)

has no solutions in X, then (I) also has no solutions.


Indeed, a solution to (I) clearly is a solution to (D.2.3) – the latter inequality is nothing but a combination
of the inequalities from (I) with the weights 1 (for the first inequality) and λj (for the remaining ones).
Now, what does it mean that (D.2.3) has no solutions? A necessary and sufficient condition for this
is that the infimum of the left hand side of (D.2.3) in x ∈ X is ≥ c. Thus, we come to the following
evident

Proposition D.2.1 [Sufficient condition for insolvability of (I)] Consider a system (I) with arbitrary
data and assume that the system

(II) :    inf_{x∈X} [ f(x) + ∑_{j=1}^{m} λ_j g_j(x) ] ≥ c,    λ_j ≥ 0, j = 1, ..., m

with unknowns λ1 , ..., λm has a solution. Then (I) is infeasible.

Let us stress that this result is completely general; it does not require any assumptions on the entities
involved.
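As a quick numeric illustration of the Proposition (a toy instance of our own, not from the text): take X = R sampled on a grid, f(x) = x, a single constraint g_1(x) = 1 − x, and c = 0. With the weight λ_1 = 1 the aggregated left hand side is identically 1, so system (II) is solvable, and (I) must be infeasible:

```python
# Toy instance (our own) of Proposition D.2.1: X = R sampled on a grid,
# f(x) = x, g1(x) = 1 - x, c = 0.  With lam = 1 the aggregate
# f(x) + lam*g1(x) equals 1 identically, so its inf is >= c and (I) is infeasible.
def f(x):
    return x

def g1(x):
    return 1.0 - x

lam, c = 1.0, 0.0
grid = [i / 100.0 for i in range(-1000, 1001)]     # crude stand-in for X = R

inf_aggregate = min(f(x) + lam * g1(x) for x in grid)
assert inf_aggregate >= c                                  # lam solves system (II) ...
assert not any(f(x) < c and g1(x) <= 0.0 for x in grid)    # ... so (I) has no solutions
```

Of course the grid only samples X; the analytic computation inf_x [x + 1·(1 − x)] = 1 ≥ 0 settles the claim exactly.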
The result we have obtained, unfortunately, does not help us: the actual power of the Theorem on
Alternative (and the fact used to prove the Linear Programming Duality Theorem) is not the sufficiency
of the condition of Proposition D.2.1 for infeasibility of (I), but the necessity of this condition. Justification of
necessity of the condition in question has nothing in common with the evident reasoning which gives the
sufficiency. The necessity in the linear case (X = Rn , f , g1 , ..., gm are linear) can be established via the
Homogeneous Farkas Lemma. Now we shall prove the necessity of the condition for the convex case, and
already here we need some additional, although minor, assumptions; and in the general nonconvex case
694 APPENDIX D. CONVEX PROGRAMMING, LAGRANGE DUALITY, SADDLE POINTS

the condition in question simply is not necessary for infeasibility of (I) [and this is very bad – this is the
reason why there exist difficult optimization problems which we do not know how to solve efficiently].
The just presented “preface” explains what we should do; now let us carry out our plan. We start
with the aforementioned “minor regularity assumptions”.
Definition D.2.1 [Slater Condition] Let X ⊂ Rn and g1 , ..., gm be real-valued functions on X. We say
that these functions satisfy the Slater condition on X, if there exists x in the relative interior of X such
that gj (x) < 0, j = 1, ..., m.
An inequality constrained program
(IC) min {f (x) : gj (x) ≤ 0, j = 1, ..., m, x ∈ X}
(f, g1 , ..., gm are real-valued functions on X) is said to satisfy the Slater condition if g1 , ..., gm satisfy
this condition on X.

Definition D.2.2 [Relaxed Slater Condition] Let X ⊂ Rn and g1 , ..., gm be real-valued functions on X.
We say that these functions satisfy the Relaxed Slater condition on X, if, after appropriate reordering of
g1 , ..., gm , for properly selected k ∈ {0, 1, ..., m} the functions g1 , ..., gk are affine, and there exists x in
the relative interior of X such that gj (x) ≤ 0, 1 ≤ j ≤ k, and gj (x) < 0, k + 1 ≤ j ≤ m.
An inequality constrained problem (IC) is said to satisfy the Relaxed Slater condition if g1 , ..., gm
satisfy this condition on X.
Clearly, the validity of Slater condition implies the validity of the Relaxed Slater condition (in the latter,
we should set k = 0). We are about to establish the following fundamental fact:
Theorem D.2.1 [Convex Theorem on Alternative]
Let X ⊂ Rn be convex, let f, g1 , ..., gm be real-valued convex functions on X, and let g1 , ..., gm satisfy the
Relaxed Slater condition on X. Then system (I) is solvable if and only if system (II) is unsolvable.

D.2.1.1 Conic case


We shall obtain Theorem D.2.1 as a particular case of Convex Theorem on Alternative in conic form
dealing with the convex system (I) in conic form:

f(x) < c,   g(x) := Ax − b ≤ 0,   ĝ(x) ≤_K 0,   x ∈ X,    (D.2.4)

where
• X is a nonempty convex set in R^n, and f : X → R is convex,
• K ⊂ R^ν is a regular cone, regularity meaning that K is closed, convex, possesses a nonempty
interior, and is pointed, that is, K ∩ [−K] = {0},
• ĝ(x) : X → R^ν is K-convex, meaning that whenever x, y ∈ X and λ ∈ [0, 1], we have

λĝ(x) + (1 − λ)ĝ(y) − ĝ(λx + (1 − λ)y) ∈ K.

Note that in the simplest case of K = R^ν_+ (the nonnegative orthant is a regular cone!) K-convexity means
exactly that ĝ is a vector function whose components are convex on X.
Given a regular cone K ⊂ R^ν , we can associate with it the K-inequality between vectors of R^ν ,
saying that a ∈ R^ν is K-greater than or equal to b ∈ R^ν (notation: a ≥_K b, or, equivalently,
b ≤_K a) when a − b ∈ K:

a ≥_K b  ⇔  b ≤_K a  ⇔  a − b ∈ K.

For example, when K = R^ν_+ is the nonnegative orthant, ≥_K is the standard coordinate-wise
vector inequality ≥: a ≥ b means that every entry of a is greater than or equal to, in the
standard arithmetic sense, the corresponding entry of b. The K-inequality possesses all the
algebraic properties of ≥:
D.2. CONVEX PROGRAMMING PROGRAM AND LAGRANGE DUALITY THEOREM695

1. it is a partial order on R^ν , meaning that the relation a ≥_K b is
   • reflexive: a ≥_K a for all a;
   • anti-symmetric: a ≥_K b and b ≥_K a if and only if a = b;
   • transitive: if a ≥_K b and b ≥_K c, then a ≥_K c;
2. it is compatible with linear operations, meaning that ≥_K -inequalities can be
   • summed up: if a ≥_K b and c ≥_K d, then a + c ≥_K b + d;
   • multiplied by nonnegative reals: if a ≥_K b and λ is a nonnegative real, then λa ≥_K λb;
3. it is compatible with convergence, meaning that one can pass to sidewise limits in a ≥_K -inequality:
   • if a_t ≥_K b_t , t = 1, 2, ..., and a_t → a, b_t → b as t → ∞, then a ≥_K b;
4. it gives rise to the strict version >_K of the ≥_K -inequality: a >_K b (equivalently: b <_K a) meaning
   that a − b ∈ int K. The strict K-inequality possesses the basic properties of the coordinate-wise >,
   specifically
   • >_K is stable: if a >_K b and a′, b′ are close enough to a, b, respectively, then a′ >_K b′;
   • if a >_K b, λ is a positive real, and c ≥_K d, then λa >_K λb and a + c >_K b + d.

In summary, the arithmetic of the ≥_K and >_K inequalities is completely similar to that of
the usual ≥ and >.

Exercise D.1 Verify the above claims.
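A numeric spot-check of a few of these properties for K = R²_+ (our own illustration on sample vectors; the exercise, of course, asks for actual proofs):

```python
# Numeric spot-check of some >=_K properties for K = R^2_+ (coordinate-wise order);
# Exercise D.1 asks for proofs, this only illustrates the claims on sample vectors.
def K_geq(a, b):                      # a >=_K b  <=>  a - b in K = R^2_+
    return all(ai >= bi for ai, bi in zip(a, b))

a, b, c = [3.0, 5.0], [1.0, 5.0], [0.0, -2.0]
assert K_geq(a, a)                                         # reflexivity
assert K_geq(a, b) and K_geq(b, c) and K_geq(a, c)         # a transitivity instance
assert K_geq(a, b) and not K_geq(b, a) and a != b          # consistent with anti-symmetry
assert K_geq([ai + bi for ai, bi in zip(a, b)],
             [bi + ci for bi, ci in zip(b, c)])            # sums: a + b >=_K b + c
assert K_geq([2.0 * ai for ai in a], [2.0 * bi for bi in b])  # scaling by 2 >= 0
```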

Since the 1990s it has been realized that, as far as nonlinear Convex Optimization is concerned, it is
extremely convenient to consider, along with the usual Mathematical Programming form

min_{x∈X} { g_0(x) : g(x) := [g_1(x); ...; g_m(x)] ≤ 0, h_j(x) = 0, 1 ≤ j ≤ k }

[X: convex set; g_i(x) : X → R, 0 ≤ i ≤ m: convex; h_j(x), j ≤ k: affine]

of a convex problem, where the nonlinearity “sits” in the g_i ’s and/or in the non-polyhedrality of X,
the conic form, where the nonlinearity “sits” in ≤: the usual coordinate-wise ≤ is replaced with
≤_K , K being a regular cone. The resulting convex program in conic form reads

min_{x∈X} { f(x) : g(x) := Ax − b ≤ 0, ĝ(x) ≤_K 0 [⇔ ĝ(x) ∈ −K] }    (D.2.5)

where (cf. (D.2.4)) X is a convex set, f : X → R is a convex function, K ⊂ R^ν is a regular cone,
and ĝ : X → R^ν is K-convex. Note that K-convexity of ĝ in our new notation takes the
quite familiar form

∀(x, y ∈ X, λ ∈ [0, 1]) :  ĝ(λx + (1 − λ)y) ≤_K λĝ(x) + (1 − λ)ĝ(y).

Note also that when K is the nonnegative orthant, (D.2.5) recovers the Mathematical Program-
ming form of a convex problem.

Exercise D.2 Let K be the cone of m × m positive semidefinite matrices in the space S^m
of m × m symmetric matrices², and let ĝ(x) = xxᵀ : X := R^{m×n} → S^m . Verify that ĝ is
K-convex.

² So far, we have assumed that K “lives” in some R^ν , while now we are speaking about something living in the
space of symmetric matrices. Well, we can identify the latter linear space with R^ν , ν = m(m+1)/2, by representing a
symmetric matrix [a_ij] by the collection of its diagonal and below-diagonal entries arranged into a column vector,
writing down these entries, say, row by row: [a_ij] ↦ [a_{1,1}; a_{2,1}; a_{2,2}; ...; a_{m,1}; ...; a_{m,m}].
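A numeric spot-check of the exercise in the vector case n = 1 (our own sketch, pure Python, no linear-algebra library): for x, y ∈ R^m and w = λx + (1 − λ)y, the difference λxxᵀ + (1 − λ)yyᵀ − wwᵀ equals λ(1 − λ)(x − y)(x − y)ᵀ and hence should be positive semidefinite; we test the quadratic form on a few probe vectors z.

```python
# Numeric spot-check for Exercise D.2 with n = 1, m = 3 (vectors x, y in R^3).
# For w = lam*x + (1-lam)*y, the matrix
#     D = lam*x x^T + (1-lam)*y y^T - w w^T   ( = lam*(1-lam)*(x-y)(x-y)^T )
# should be positive semidefinite; we test z^T D z >= 0 on a few probe vectors z.
def outer(u, v):
    return [[ui * vj for vj in v] for ui in u]

def quad(D, z):          # the quadratic form z^T D z
    n = len(z)
    return sum(z[i] * D[i][j] * z[j] for i in range(n) for j in range(n))

x, y, lam = [1.0, -2.0, 0.5], [0.0, 3.0, 1.0], 0.3
w = [lam * xi + (1.0 - lam) * yi for xi, yi in zip(x, y)]
D = [[lam * X + (1.0 - lam) * Y - W for X, Y, W in zip(rx, ry, rw)]
     for rx, ry, rw in zip(outer(x, x), outer(y, y), outer(w, w))]

for z in ([1.0, 0.0, 0.0], [1.0, 1.0, 1.0], [-2.0, 0.5, 3.0]):
    assert quad(D, z) >= -1e-12          # PSD on probes, consistent with K-convexity
```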

It turns out that with the “conic approach” to Convex Programming we lose nothing when
restricting ourselves to X = R^n, linear f(x), and affine ĝ(x); this specific version of (D.2.5)
is called a “conic problem” (to be considered in more detail later). In our course, where convex
problems are not the only subject, it makes sense to speak about a “less extreme” conic form
of a convex program, specifically, the one presented in (D.2.5); we call problems of this form
“convex problems in conic form,” reserving the words “conic problems” for problems (D.2.5)
with X = R^n, linear f, and affine ĝ.
What we have done so far in this section can be naturally extended to the conic case. Specifically (in
what follows X ⊂ R^n is a nonempty set, K ⊂ R^ν is a regular cone, f is a real-valued function on X, and
ĝ(·) is a mapping from X into R^ν ):

1. Instead of feasibility/infeasibility of system (I) we can speak about feasibility/infeasibility of the
system of constraints

(ConI) :    f(x) < c,    g(x) := Ax − b ≤ 0,    ĝ(x) ≤_K 0 [⇔ ĝ(x) ∈ −K],    x ∈ X

in variables x. Denoting by K∗ the cone dual to K, a sufficient condition for infeasibility of (ConI)
is solvability of the system of constraints

(ConII) :    inf_{x∈X} [ f(x) + λᵀg(x) + λ̂ᵀĝ(x) ] ≥ c,    λ ≥ 0,    λ̂ ≥_{K∗} 0 [⇔ λ̂ ∈ K∗]

in variables λ = (λ, λ̂).

Indeed, given a feasible solution (λ, λ̂) to (ConII) and “aggregating” the constraints in
(ConI) with the weights 1, λ, λ̂ (i.e., taking sidewise inner products of the constraints in (ConI)
and the weights and summing up the results), we arrive at the inequality

f(x) + λᵀg(x) + λ̂ᵀĝ(x) < c

which, due to λ ≥ 0 and λ̂ ∈ K∗ , is a consequence of (ConI) – it must be satisfied at every
feasible solution to (ConI). On the other hand, the aggregated inequality contradicts
the first constraint in (ConII), and we conclude that under the circumstances (ConI) is
infeasible.

2. The “conic” version of the Slater/Relaxed Slater condition is as follows: we say that the system of
constraints

(S)    g(x) := Ax − b ≤ 0  &  ĝ(x) ≤_K 0

in variables x satisfies
— the Slater condition on X, if there exists x̄ ∈ rint X such that g(x̄) < 0 and ĝ(x̄) <_K 0 (i.e.,
ĝ(x̄) ∈ −int K),
— the Relaxed Slater condition on X, if there exists x̄ ∈ rint X such that g(x̄) ≤ 0 and ĝ(x̄) <_K 0.
We say that an optimization problem in conic form – a problem of the form

(ConIC)    min_{x∈X} { f(x) : g(x) := Ax − b ≤ 0, ĝ(x) ≤_K 0 }

– satisfies the Slater/Relaxed Slater condition if the system of its constraints satisfies this condition on
X.

Note that in the case of K = R^ν_+ , (ConI) and (ConII) become, respectively, (I) and (II), and the conic
versions of the Slater/Relaxed Slater condition become the usual ones.
The conic version of Theorem D.2.1 reads as follows:
Theorem D.2.2 [Convex Theorem on Alternative in conic form]
Let K ⊂ R^ν be a regular cone, let X ⊂ R^n be convex, let f be a real-valued convex function on X,
g(x) = Ax − b be affine, and ĝ(x) : X → R^ν be K-convex. Let also system (S) satisfy the Relaxed Slater
condition on X. Then system (ConI) is solvable if and only if system (ConII) is unsolvable.
Note: “In reality” (ConI) may have no polyhedral part g(x) := Ax − b ≤ 0 and/or no “general part”
ĝ(x) ≤_K 0; the absence of one or both of these parts leads to self-evident modifications in (ConII). To unify
our forthcoming considerations, it is convenient to assume that both these parts are present. This assumption
is for free: it is immediately seen that in our context, in the absence of one or both of the g-constraints in
(ConI) we lose nothing when adding the artificial polyhedral part g(x) := 0ᵀx − 1 ≤ 0 instead of the missing
polyhedral part, and/or the artificial general part ĝ(x) := 0ᵀx − 1 ≤_K 0 with K = R_+ instead of the missing
general part. Thus, we lose nothing when assuming from the very beginning that both the polyhedral and the
general parts are present.
It is immediately seen that the Convex Theorem on Alternative (Theorem D.2.1) is the special case of the
latter Theorem corresponding to K being a nonnegative orthant.
Proof of Theorem D.2.2. The first part of the statement – “if (ConII) has a solution, then (ConI) has
no solutions” – has already been verified. What we need is to prove the inverse statement. Thus, let us
assume that (ConI) has no solutions, and let us prove that then (ConII) has a solution.
0⁰. Without loss of generality we may assume that X is full-dimensional: rint X = int X (indeed,
otherwise we could replace our “universe” R^n with the affine span of X). Besides this, shifting f by a
constant, we can assume that c = 0. Thus, we are in the case where

(ConI) :    f(x) < 0,    g(x) := Ax − b ≤ 0,    ĝ(x) ≤_K 0 [⇔ ĝ(x) ∈ −K],    x ∈ X;

(ConII) :    inf_{x∈X} [ f(x) + λᵀg(x) + λ̂ᵀĝ(x) ] ≥ 0,    λ ≥ 0,    λ̂ ≥_{K∗} 0 [⇔ λ̂ ∈ K∗]

We are in the case when g(x̄) ≤ 0 and ĝ(x̄) ∈ −int K for some x̄ ∈ rint X = int X; by shifting X (this
clearly does not affect the statement we need to prove) we can assume that x̄ is just the origin, so that

0 ∈ int X,   b ≥ 0,   ĝ(0) <_K 0.    (D.2.6)

Recall that we are in the situation when (ConI) is infeasible, that is, the optimization problem

Opt(P) = min_{x∈X} { f(x) : g(x) ≤ 0, ĝ(x) ≤_K 0 }    (P)

satisfies Opt(P) ≥ 0, and our goal is to show that (ConII) is feasible.


1⁰. Let

Y = {x ∈ X : g(x) ≤ 0} = {x ∈ X : Ax − b ≤ 0},
S = {t = [t_0; t_1] ∈ R × R^ν : t_0 < 0, t_1 ≤_K 0},
T = {t = [t_0; t_1] ∈ R × R^ν : ∃x ∈ Y : f(x) ≤ t_0, ĝ(x) ≤_K t_1},

so that S and T are nonempty (since (P) is feasible by (D.2.6)), non-intersecting (since Opt(P) ≥ 0),
convex (since X and f are convex, and ĝ(x) is K-convex on X) sets. By the Separation Theorem (Theorem
B.2.9), S and T can be separated by an appropriately chosen linear form α. Thus,

sup_{t∈S} αᵀt ≤ inf_{t∈T} αᵀt    (D.2.7)

and the form is non-constant on S ∪ T, implying that α ≠ 0. We clearly have α ∈ R_+ × K∗ , since
otherwise sup_{t∈S} αᵀt would be +∞. Since α ∈ R_+ × K∗ , the left hand side in (D.2.7) is 0 (look at
what S is), and therefore (D.2.7) reads

0 ≤ inf_{t∈T} αᵀt.    (D.2.8)

2⁰. Denoting α = [α_0; α_1] with α_0 ∈ R_+ and α_1 ∈ K∗ , we claim that α_0 > 0. Indeed, the point
t̄ = [t̄_0; t̄_1] with the components t̄_0 = f(0) and t̄_1 = ĝ(0) belongs to T, so that by (D.2.8) it holds that
α_0 t̄_0 + α_1ᵀ t̄_1 ≥ 0. Assuming that α_0 = 0, it follows that α_1ᵀ t̄_1 ≥ 0, and since t̄_1 ∈ −int K (see (D.2.6))
and α_1 ∈ K∗ , we conclude (see observation (!) on p. 385) that α_1 = 0 on top of α_0 = 0, which is
impossible, since α ≠ 0.
In the sequel, we set ᾱ_1 = α_1/α_0 and

h(x) = f(x) + ᾱ_1ᵀ ĝ(x).

Observing that (D.2.8) remains valid when replacing α with ᾱ = α/α_0 and that the vector [f(x); ĝ(x)]
belongs to T when x ∈ Y, we conclude that

x ∈ Y ⇒ h(x) ≥ 0.    (D.2.9)

3⁰.1. Consider the convex sets

S = {[x; τ] ∈ R^n × R : x ∈ X, g(x) := Ax − b ≤ 0, τ < 0},   T = {[x; τ] ∈ R^n × R : x ∈ X, τ ≥ h(x)}.

These sets clearly are nonempty and do not intersect (since the x-component x of a point from S ∩ T
would satisfy the premise and violate the conclusion in (D.2.9)). By the Separation Theorem, there exists
[e; α] ≠ 0 such that

sup_{[x;τ]∈S} [eᵀx + ατ] ≤ inf_{[x;τ]∈T} [eᵀx + ατ].

Taking into account what S is, we have α ≥ 0 (since otherwise the left hand side in the inequality would
be +∞). With this in mind, the inequality reads

sup_x { eᵀx : x ∈ X, g(x) ≤ 0 } ≤ inf_x { eᵀx + αh(x) : x ∈ X }.    (D.2.10)

3⁰.2. We need the following fact:

Lemma D.2.1 Let X ⊂ R^n be a convex set with 0 ∈ int X, let g(x) = Ax + a be such that
g(0) ≤ 0, and let dᵀx + δ be an affine function such that

x ∈ X, g(x) ≤ 0 ⇒ dᵀx + δ ≤ 0.

Then there exists µ ≥ 0 such that

dᵀx + δ ≤ µᵀg(x) ∀x ∈ X.

Note that when X = R^n, Lemma D.2.1 is nothing but the Inhomogeneous Farkas Lemma.
Proof of Lemma D.2.1. Consider the cones

M_1 = cl {[x; t] : t > 0, x/t ∈ X},   M_2 = {y = [x; t] : Ax + ta ≤ 0, t ≥ 0}.

M_2 is a polyhedral cone, and M_1 is a closed convex cone with a nonempty interior (since
the point [0; 1] belongs to int M_1 due to 0 ∈ int X); moreover, [int M_1] ∩ M_2 ≠ ∅ (since
the point [0; 1] ∈ int M_1 belongs to M_2 due to g(0) ≤ 0). Observe that the linear form
fᵀ[x; t] := dᵀx + tδ is nonpositive on M_1 ∩ M_2, that is, (−f) ∈ [M_1 ∩ M_2]∗ , where, as usual,
M∗ is the cone dual to a cone M.

Indeed, let y = [z; t] ∈ M_1 ∩ M_2 , and let y_s = [z_s; t_s] = (1 − s)y + s[0; 1]. Observe that
y_s ∈ M_1 ∩ M_2 when 0 ≤ s ≤ 1. When 0 < s ≤ 1, we have t_s > 0, and y_s ∈ M_1
implies that w_s := z_s/t_s ∈ cl X (why?), while y_s ∈ M_2 implies that g(w_s) ≤ 0.
Since 0 ∈ int X, w_s ∈ cl X implies that θw_s ∈ X for all θ ∈ [0, 1), while g(0) ≤ 0
along with g(w_s) ≤ 0 implies that g(θw_s) ≤ 0 for θ ∈ [0, 1). Invoking the premise
of the Lemma, we conclude that
dᵀ(θw_s) + δ ≤ 0 ∀θ ∈ [0, 1),
whence dᵀw_s + δ ≤ 0, or, which is the same, fᵀy_s ≤ 0. As s → +0, we have
fᵀy_s → fᵀ[z; t], implying that fᵀ[z; t] ≤ 0, as claimed.

Thus, M_1, M_2 are closed cones such that [int M_1] ∩ M_2 ≠ ∅ and (−f) ∈ [M_1 ∩ M_2]∗ . Applying to
M_1, M_2 the Dubovitski-Milutin Lemma (Proposition B.2.7), we conclude that [M_1 ∩ M_2]∗ =
(M_1)∗ + (M_2)∗ . Since −f ∈ [M_1 ∩ M_2]∗ , there exist ψ ∈ (M_1)∗ and φ ∈ (M_2)∗ such that
[d; δ] = −φ − ψ. The inclusion φ ∈ (M_2)∗ means that the homogeneous linear inequality
φᵀy ≥ 0 is a consequence of the system [A, a]y ≤ 0 of homogeneous linear inequalities;
by the Homogeneous Farkas Lemma (Lemma B.2.1), it follows that identically in x it holds that
φᵀ[x; 1] = −µᵀg(x) with some nonnegative µ. Thus,

∀x ∈ R^n : dᵀx + δ = [d; δ]ᵀ[x; 1] = µᵀg(x) − ψᵀ[x; 1],

and since [x; 1] ∈ M_1 whenever x ∈ X, we have ψᵀ[x; 1] ≥ 0, x ∈ X, so that µ satisfies the
requirements stated in the lemma we are proving. □

3⁰.3. Recall that we have seen that in (D.2.10), α ≥ 0. We claim that in fact α > 0.

Indeed, assuming α = 0 and setting a = sup_x { eᵀx : x ∈ X, g(x) ≤ 0 }, (D.2.10) implies that
a ∈ R and

{∀(x ∈ X, g(x) ≤ 0) : eᵀx − a ≤ 0}  &  {∀x ∈ X : eᵀx ≥ a}.    (D.2.11)

By Lemma D.2.1, the first of these relations implies that for some nonnegative µ it holds that

eᵀx − a ≤ µᵀg(x) ∀x ∈ X.

Setting x = 0, we get a ≥ 0 due to µ ≥ 0 and g(0) ≤ 0, so that the second relation in
(D.2.11) implies eᵀx ≥ 0 for all x ∈ X, which is impossible: since 0 ∈ int X, eᵀx ≥ 0 for all
x ∈ X would imply that e = 0, and since we are in the case of α = 0, we would get [e; α] = 0,
which is not the case due to the origin of [e; α].
3⁰.4. Thus, α in (D.2.10) is strictly positive; replacing e with α⁻¹e, we can assume that (D.2.10)
holds true with α = 1. Setting a = sup_x { eᵀx : x ∈ X, g(x) ≤ 0 }, (D.2.10) reads

∀(x ∈ X) : h(x) + eᵀx ≥ a,

while the definition of a along with Lemma D.2.1 implies that there exists µ ≥ 0 such that

∀x ∈ X : eᵀx − a ≤ µᵀg(x).

Combining these relations, we conclude that

h(x) + µᵀg(x) ≥ 0 ∀x ∈ X.    (D.2.12)

Recalling that h(x) = f(x) + ᾱ_1ᵀ ĝ(x) with ᾱ_1 ∈ K∗ and setting λ = µ, λ̂ = ᾱ_1, we get λ ≥ 0, λ̂ ∈ K∗ ,
while by (D.2.12) it holds that

f(x) + λᵀg(x) + λ̂ᵀĝ(x) ≥ 0 ∀x ∈ X,

meaning that (λ, λ̂) solve (ConII) (recall that we are in the case of c = 0). □

D.2.2 Lagrange Function and Lagrange Duality


D.2.2.A. Lagrange function
The Convex Theorem on Alternative brings to our attention the function

L(λ) = inf_{x∈X} [ f(x) + ∑_{j=1}^{m} λ_j g_j(x) ],    (D.2.13)

as well as the aggregate

L(x, λ) = f(x) + ∑_{j=1}^{m} λ_j g_j(x)    (D.2.14)

from which this function comes. Aggregate (D.2.14) is called the Lagrange function of the inequality
constrained optimization program

(IC)    min { f(x) : g_j(x) ≤ 0, j = 1, ..., m, x ∈ X }.

The Lagrange function of an optimization program is a very important entity: most optimality condi-
tions are expressed in terms of this function. Let us start by translating what we already know into
the language of the Lagrange function.

D.2.2.B. Convex Programming Duality Theorem


Theorem D.2.3 Consider an arbitrary inequality constrained optimization program (IC). Then
(i) The infimum
L(λ) = inf_{x∈X} L(x, λ)
of the Lagrange function in x ∈ X is, for every λ ≥ 0, a lower bound on the optimal value in (IC), so
that the optimal value in the optimization program

(IC∗)    sup_{λ≥0} L(λ)

also is a lower bound for the optimal value in (IC);
(ii) [Convex Duality Theorem] If (IC)
• is convex,
• is below bounded,
and
• satisfies the Relaxed Slater condition,
then the optimal value in (IC∗) is attained and is equal to the optimal value in (IC).

Proof. (i) is nothing but Proposition D.2.1 (why?). It makes sense, however, to repeat here the corre-
sponding one-line reasoning:

Let λ ≥ 0; in order to prove that

L(λ) ≡ inf_{x∈X} L(x, λ) ≤ c∗    [L(x, λ) = f(x) + ∑_{j=1}^{m} λ_j g_j(x)],

where c∗ is the optimal value in (IC), note that if x is feasible for (IC), then evidently
L(x, λ) ≤ f(x), so that the infimum of L over x ∈ X is ≤ the infimum c∗ of f over the
feasible set of (IC). □

(ii) is an immediate consequence of the Convex Theorem on Alternative. Indeed, let c∗ be the optimal
value in (IC). Then the system

f(x) < c∗ ,   g_j(x) ≤ 0, j = 1, ..., m

has no solutions in X, and by the above Theorem the system (II) associated with c = c∗ has a solution,
i.e., there exists λ∗ ≥ 0 such that L(λ∗) ≥ c∗ . But we know from (i) that strict inequality here is
impossible and, besides this, that L(λ) ≤ c∗ for every λ ≥ 0. Thus, L(λ∗) = c∗ and λ∗ is a maximizer of
L over λ ≥ 0. □
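A numeric sanity check of the Theorem on a toy program of our own, min { x² : 1 − x ≤ 0, x ∈ X = R }: here L(λ) = inf_x [x² + λ(1 − x)] = λ − λ²/4 (inner minimizer x = λ/2), maximized at λ* = 2 with value 1, which is exactly the primal optimal value (the Slater condition holds, e.g. at x = 2).

```python
# Toy convex program, our own example: min { x^2 : 1 - x <= 0, x in X = R }.
# Primal optimum: x* = 1, value 1.  Dual function: L(lam) = inf_x [x^2 + lam*(1 - x)]
# = lam - lam^2/4 (inner minimizer x = lam/2); it is maximized at lam* = 2 with value 1.
xs = [i / 100.0 for i in range(-500, 501)]         # grid stand-in for X = R

def L_dual(lam):
    return min(x * x + lam * (1.0 - x) for x in xs)

primal_opt = min(x * x for x in xs if 1.0 - x <= 0.0)      # feasible means x >= 1
dual_opt = max(L_dual(i / 10.0) for i in range(0, 51))     # lam ranges over [0, 5]

assert dual_opt <= primal_opt + 1e-6       # weak duality, part (i)
assert abs(primal_opt - dual_opt) < 1e-3   # strong duality, part (ii)
```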

D.2.2.C. Dual program


Theorem D.2.3 establishes a certain connection between two optimization programs – the “primal” program

(IC)    min { f(x) : g_j(x) ≤ 0, j = 1, ..., m, x ∈ X }

and its Lagrange dual program

(IC∗)    max { L(λ) ≡ inf_{x∈X} L(x, λ) : λ ≥ 0 }

(the variables λ of the dual problem are called the Lagrange multipliers of the primal problem). The
Theorem says that the optimal value in the dual problem is ≤ the one in the primal, and under some
favourable circumstances (the primal problem is convex, below bounded, and satisfies the Relaxed Slater
condition) the optimal values in the two programs are equal to each other.
In our formulation there is some asymmetry between the primal and the dual programs. In fact both
programs are related to the Lagrange function in a quite symmetric way. Indeed, consider the
program

min_{x∈X} L(x),   L(x) = sup_{λ≥0} L(x, λ).

The objective in this program clearly is +∞ at every point x ∈ X which is not feasible for (IC) and is
f(x) on the feasible set of (IC), so that the program is equivalent to (IC). We see that both the primal
and the dual programs come from the Lagrange function: in the primal problem, we minimize over X
the result of maximization of L(x, λ) in λ ≥ 0, and in the dual program we maximize over λ ≥ 0 the
result of minimization of L(x, λ) in x ∈ X. This is a particular (and the most important) example of a
zero sum two person game – an issue we will speak about later.
We have seen that under certain convexity and regularity assumptions the optimal values in (IC)
and (IC∗ ) are equal to each other. There is also another way to say when these optimal values are equal
– this is always the case when the Lagrange function possesses a saddle point, i.e., there exists a pair
x∗ ∈ X, λ∗ ≥ 0 such that L(x, λ∗) attains its minimum in x ∈ X at x = x∗, and L(x∗, λ) attains its
maximum in λ ≥ 0 at λ = λ∗:

L(x, λ∗) ≥ L(x∗, λ∗) ≥ L(x∗, λ)  ∀x ∈ X, λ ≥ 0.
It can be easily demonstrated (do it by yourself or look at Theorem D.4.1) that
Proposition D.2.2 (x∗ , λ∗ ) is a saddle point of the Lagrange function L of (IC) if and only if x∗ is
an optimal solution to (IC), λ∗ is an optimal solution to (IC∗ ) and the optimal values in the indicated
problems are equal to each other.
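The Proposition is easy to check numerically on a toy program of our own, min { x² : 1 − x ≤ 0, x ∈ X = R }, whose Lagrange function is L(x, λ) = x² + λ(1 − x):

```python
# Saddle-point check on a toy program of our own: min { x^2 : 1 - x <= 0, x in X = R },
# with Lagrange function L(x, lam) = x^2 + lam*(1 - x).  The pair (x*, lam*) = (1, 2)
# satisfies L(x, lam*) >= L(x*, lam*) >= L(x*, lam), so by Proposition D.2.2 the point
# x* is primal optimal, lam* is dual optimal, and the two optimal values coincide (= 1).
def L(x, lam):
    return x * x + lam * (1.0 - x)

x_star, lam_star = 1.0, 2.0
assert all(L(i / 100.0, lam_star) >= L(x_star, lam_star) for i in range(-500, 501))
assert all(L(x_star, i / 100.0) <= L(x_star, lam_star) for i in range(0, 501))
```

Indeed, L(x, 2) = (x − 1)² + 1 is minimized at x = 1, while L(1, λ) = 1 for every λ, so both saddle-point inequalities hold.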

D.2.2.D. Conic forms of Lagrange Function, Lagrange Duality, and Convex Pro-
gramming Duality Theorem
The above results related to convex optimization problems in the standard MP format admit instructive
extensions to the case of convex problems in conic form. These extensions are as follows:

D.2.2.D.1. A convex problem in conic form is an optimization problem of the form

Opt(P) = min_{x∈X} { f(x) : g(x) := Ax − b ≤ 0, ĝ(x) ≤_K 0 }    (P)

where X ⊂ R^n is a nonempty convex set, f : X → R is a convex function, K ⊂ R^ν is a regular cone,
and ĝ(·) : X → R^ν is K-convex.

D.2.2.D.2. The conic Lagrange function of (P) is

L(x; λ := [λ; λ̂]) = f(x) + λᵀg(x) + λ̂ᵀĝ(x) :  X × Λ → R,   Λ := R^k_+ × K∗   [k = dim b].

By construction, for λ ∈ Λ, L(x; λ) as a function of x underestimates f(x) everywhere on the feasible
set of (P).

D.2.2.D.3. The conic Lagrange dual of (P) is the optimization problem

Opt(D) = max_λ { L(λ) := inf_{x∈X} L(x; λ) : λ ∈ Λ := R^k_+ × K∗ }    (D)

From L(x; λ) ≤ f(x) for all x feasible for (P) and all λ ∈ Λ we clearly extract that

Opt(D) ≤ Opt(P);    [Weak Duality]

note that this fact is independent of any assumptions of convexity of f and X and of K-convexity of ĝ.

D.2.2.D.4. Convex Programming Duality Theorem in conic form reads:

Theorem D.2.4 [Convex Programming Duality Theorem in conic form] Consider the convex problem in
conic form (P), that is, X is convex, f is real-valued and convex on X, and ĝ(·) is well defined and K-convex
on X. Assume that the problem is below bounded and satisfies the Relaxed Slater condition. Then (D) is
solvable and Opt(P) = Opt(D).

Note that the only nontrivial part, (ii), of Theorem D.2.3 is nothing but the special case of Theorem D.2.4
where K is a nonnegative orthant.
Proof of Theorem D.2.4 is immediate. Under the premise of the Theorem, c := Opt(P) is a real number,
and the associated with this c system of constraints (ConI) has no solutions. The Relaxed Slater condition
combines with the Convex Theorem on Alternative in conic form (Theorem D.2.2) to imply solvability of
(ConII), or, which is the same, existence of λ∗ = [λ∗; λ̂∗] ∈ Λ such that

L(λ∗) = inf_{x∈X} { f(x) + λ∗ᵀg(x) + λ̂∗ᵀĝ(x) } ≥ c = Opt(P).

We see that (D) has a feasible solution with the value of the objective ≥ Opt(P). By Weak Duality, this
value is exactly Opt(P), the solution in question is optimal for (D), and Opt(P) = Opt(D). □

D.2.2.E. Conic Programming and Conic Duality Theorem


Consider the special case of a convex problem in conic form – one where f(x) is linear, X is the entire
space, and ĝ(x) = P x − p is affine; in today's optimization, problems of this type are called conic. Note
that an affine mapping is K-convex whatever the cone K. Thus, a conic problem automatically satisfies
the convexity restrictions from the Conic Lagrange Duality Theorem and is an optimization problem of
the form

Opt(P) = min_{x∈R^n} { cᵀx : Ax − b ≤ 0, P x − p ≤_K 0 }    (P)

where K is a regular cone in a certain R^ν . The conic Lagrange dual problem reads

max_{λ,λ̂} [ inf_x { cᵀx + λᵀ[Ax − b] + λ̂ᵀ[P x − p] } : λ ≥ 0, λ̂ ∈ K∗ ].

As is immediately seen (the inner infimum over x ∈ R^n is −∞ unless Aᵀλ + Pᵀλ̂ + c = 0, in which case
it equals −bᵀλ − pᵀλ̂), this is nothing but the problem

Opt(D) = max_{λ,λ̂} { −bᵀλ − pᵀλ̂ : Aᵀλ + Pᵀλ̂ + c = 0, λ ≥ 0, λ̂ ∈ K∗ }    (D)

It is easy to show that the cone dual to a regular cone is itself a regular cone; as a result, problem (D),
called the conic dual of the conic problem (P), also is a conic problem. An immediate computation
(utilizing the fact that [K∗]∗ = K for every regular cone K) shows that conic duality is symmetric – the
conic dual to (D) is (equivalent to) (P).

Indeed, rewriting (D) in minimization form with ≤ 0 polyhedral constraints, as required
by our recipe for building the conic dual to a conic problem, we get

−Opt(D) = min_{λ,λ̂} { bᵀλ + pᵀλ̂ : Aᵀλ + Pᵀλ̂ + c ≤ 0, −Aᵀλ − Pᵀλ̂ − c ≤ 0, −λ ≤ 0, −λ̂ ≤_{K∗} 0 }

The dual to this problem, in view of [K∗]∗ = K, reads

max_{u,v,w,y} { cᵀ[u − v] : b + A[u − v] − w = 0, P[u − v] + p − y = 0, u ≥ 0, v ≥ 0, w ≥ 0, y ∈ K }.

After setting x = v − u and eliminating y and w, the latter problem becomes

max_x { −cᵀx : Ax − b ≤ 0, P x − p ≤_K 0 },

which is nothing but (P).


In view of primal-dual symmetry, Convex Conic Duality Theorem in the case in question takes the
following nice form:

Theorem D.2.5 [Conic Duality Theorem] Consider a primal-dual pair of conic problems

Opt(P) = min_{x∈R^n} { cᵀx : Ax − b ≤ 0, P x − p ≤_K 0 }    (P)
Opt(D) = max_{λ,λ̂} { −bᵀλ − pᵀλ̂ : Aᵀλ + Pᵀλ̂ + c = 0, λ ≥ 0, λ̂ ∈ K∗ }    (D)

One always has Opt(D) ≤ Opt(P). Besides this, if one of the problems in the pair is bounded and satisfies
the Relaxed Slater condition, then the other problem in the pair is solvable, and Opt(P) = Opt(D).
Finally, if both problems satisfy the Relaxed Slater condition, then both are solvable with equal optimal
values.

Proof is immediate. Weak duality has already been verified. To verify the second claim, note that by
primal-dual symmetry we can assume that the bounded problem satisfying the Relaxed Slater condition is
(P); but then the claim in question is given by Theorem D.2.4. Finally, if both problems satisfy the Relaxed
Slater condition (and in particular are feasible), both are bounded, and therefore solvable with equal
optimal values by the second claim. □
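A tiny numeric instance of the primal-dual pair with K = R_+ (so that both problems are just LPs; the data are our own, with A = [1], b = [1], P = [−1], p = [0], c = [1]):

```python
# Tiny conic primal-dual pair with K = R_+, our own instance (n = 1, c = 1):
#   (P): min { x : x - 1 <= 0, -x <=_K 0 }   i.e. min x over [0, 1], so Opt(P) = 0
#   (D): max { -1*lam - 0*lamh : lam - lamh + 1 = 0, lam >= 0, lamh >= 0 }
# The equality constraint of (D) forces lamh = lam + 1, so the dual objective is
# -lam and Opt(D) = 0 at (lam, lamh) = (0, 1).
primal_vals = [x / 100.0 for x in range(0, 101)]   # c*x on the primal feasible set [0, 1]
dual_vals = [-lam / 10.0 for lam in range(0, 51)]  # -lam along dual feasible (lam, lam+1)

opt_P = min(primal_vals)
opt_D = max(dual_vals)
assert all(d <= p for d in dual_vals for p in primal_vals)  # weak duality, pair by pair
assert opt_P == opt_D == 0.0    # strong duality; Relaxed Slater holds, e.g. at x = 1/2
```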

D.2.3 Optimality Conditions in Convex Programming


Our current goal is to extract from what we already know optimality conditions for convex programs.

D.2.3.A. Saddle point form of optimality conditions


Theorem D.2.6 [Saddle Point formulation of Optimality Conditions in Convex Programming]
Let (IC) be an optimization program, L(x, λ) be its Lagrange function, and let x∗ ∈ X. Then
(i) A sufficient condition for x∗ to be an optimal solution to (IC) is the existence of the vector of
Lagrange multipliers λ∗ ≥ 0 such that (x∗ , λ∗ ) is a saddle point of the Lagrange function L(x, λ), i.e., a
point where L(x, λ) attains its minimum as a function of x ∈ X and attains its maximum as a function
of λ ≥ 0:
L(x, λ∗ ) ≥ L(x∗ , λ∗ ) ≥ L(x∗ , λ) ∀x ∈ X, λ ≥ 0. (D.2.15)
(ii) if the problem (IC) is convex and satisfies the Relaxed Slater condition, then the above condition
is necessary for optimality of x∗ : if x∗ is optimal for (IC), then there exists λ∗ ≥ 0 such that (x∗ , λ∗ ) is
a saddle point of the Lagrange function.

Proof. (i): assume that for a given x∗ ∈ X there exists λ∗ ≥ 0 such that (D.2.15) is satisfied, and let us
prove that then x∗ is optimal for (IC). First of all, x∗ is feasible: indeed, if g_j(x∗) > 0 for some j, then,
of course, sup_{λ≥0} L(x∗, λ) = +∞ (look what happens when all λ's, except λ_j , are fixed, and
λ_j → +∞); but sup_{λ≥0} L(x∗, λ) = +∞ is forbidden by the second inequality in (D.2.15).
Since x∗ is feasible, sup_{λ≥0} L(x∗, λ) = f(x∗), and we conclude from the second inequality in (D.2.15)
that L(x∗, λ∗) = f(x∗). Now the first inequality in (D.2.15) reads

f(x) + ∑_{j=1}^{m} λ∗_j g_j(x) ≥ f(x∗) ∀x ∈ X.

This inequality immediately implies that x∗ is optimal: indeed, if x is feasible for (IC), then the left hand
side in the latter inequality is ≤ f(x) (recall that λ∗ ≥ 0), and the inequality implies that f(x) ≥ f(x∗).
□

(ii): Assume that (IC) is a convex program, x∗ is its optimal solution and the problem satisfies the
Relaxed Slater condition; we should prove that then there exists λ∗ ≥ 0 such that (x∗ , λ∗ ) is a saddle
point of the Lagrange function, i.e., that (D.2.15) is satisfied. As we know from the Convex Programming
Duality Theorem (Theorem D.2.3.(ii)), the dual problem (IC∗ ) has a solution λ∗ ≥ 0 and the optimal
value of the dual problem is equal to the optimal value in the primal one, i.e., to f (x∗ ):

f(x∗) = L(λ∗) ≡ inf_{x∈X} L(x, λ∗).    (D.2.16)

We immediately conclude that

λ∗_j > 0 ⇒ g_j(x∗) = 0

(this is called complementary slackness: positive Lagrange multipliers can be associated only with active
(satisfied at x∗ as equalities) constraints). Indeed, from (D.2.16) it for sure follows that

f(x∗) ≤ L(x∗, λ∗) = f(x∗) + ∑_{j=1}^{m} λ∗_j g_j(x∗);

the terms in the sum in the right hand side are nonpositive (since x∗ is feasible for (IC)), and the sum
itself is nonnegative due to our inequality; this is possible if and only if all the terms in the sum are zero,
and this is exactly complementary slackness.
From complementary slackness we immediately conclude that f(x∗) = L(x∗, λ∗), so that (D.2.16)
results in

L(x∗, λ∗) = f(x∗) = inf_{x∈X} L(x, λ∗).

On the other hand, since x∗ is feasible for (IC), we have L(x∗, λ) ≤ f(x∗) whenever λ ≥ 0. Combining
our observations, we conclude that

L(x∗, λ) ≤ L(x∗, λ∗) ≤ L(x, λ∗)

for all x ∈ X and all λ ≥ 0. □
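A small numeric illustration of complementary slackness (a toy instance of our own): for min { x² : 1 − x ≤ 0, −x − 5 ≤ 0, x ∈ R }, the optimum x* = 1 makes the first constraint active and the second slack, so the multiplier of the slack constraint must vanish.

```python
# Complementary slackness on a toy program of our own:
#   min { x^2 : g1(x) = 1 - x <= 0,  g2(x) = -x - 5 <= 0,  x in R }.
# The optimum is x* = 1: g1 is active there, g2 is slack, so its multiplier is 0;
# the multipliers lam* = (2, 0) also make x* a minimizer of L(x, lam*) over R.
x_star = 1.0
lam_star = (2.0, 0.0)                 # lam1* = 2 (active g1), lam2* = 0 (slack g2)
g = (lambda x: 1.0 - x, lambda x: -x - 5.0)

# lam_j* g_j(x*) = 0 for every j:
assert all(abs(l * gj(x_star)) < 1e-12 for l, gj in zip(lam_star, g))
# L(x, lam*) = x^2 + 2*(1 - x) = (x - 1)^2 + 1 attains its minimum 1 at x = x*:
assert all((i / 100.0) ** 2 + 2.0 * (1.0 - i / 100.0) >= 1.0 - 1e-12
           for i in range(-500, 501))
```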

Note that (i) is valid for an arbitrary inequality constrained optimization program, not necessarily
convex. However, in the nonconvex case the sufficient condition for optimality given by (i) is extremely
far from being necessary and is “almost never” satisfied. In contrast to this, in the convex case the
condition in question is not only sufficient, but also “nearly necessary” – it for sure is necessary when
(IC) is a convex program satisfying the Relaxed Slater condition.
We have established Saddle Point form of optimality conditions for convex problems in the standard
Mathematical Programming form (that is, with constraints represented by scalar convex inequalities).
Similar results can be obtained for convex conic problems:

Theorem D.2.7 [Saddle Point formulation of Optimality Conditions in Convex Conic Programming]
Consider a convex conic problem

Opt(P) = min_{x∈X} {f(x) : g(x) := Ax − b ≤ 0, ĝ(x) ≤_K 0} (P)

(X is convex, f : X → R is convex, K is a regular cone, and ĝ is K-convex on X) along with its Conic
Lagrange dual problem

Opt(D) = max_{λ=[λ;λ̂]} { L(λ) := inf_{x∈X} [f(x) + λ^T g(x) + λ̂^T ĝ(x)] : λ ≥ 0, λ̂ ∈ K∗ } (D)

and assume that (P) is bounded and satisfies the Relaxed Slater condition. Then a point x∗ ∈ X is an optimal
solution to (P) if and only if x∗ can be augmented by a properly selected λ∗ ∈ Λ := {λ = [λ; λ̂] : λ ≥ 0, λ̂ ∈ K∗} to a saddle
point of the conic Lagrange function

L(x; λ = [λ; λ̂]) := f(x) + λ^T g(x) + λ̂^T ĝ(x)

on X × Λ.

The proof repeats, with evident modifications, the proof of the relevant part of Theorem D.2.6, with the Convex Conic
Programming Duality Theorem (Theorem D.2.4) in the role of the Convex Programming Duality Theorem
(Theorem D.2.3). 2

D.2.3.B. Karush-Kuhn-Tucker form of optimality conditions


Theorem D.2.6 provides, basically, the strongest optimality conditions for a Convex Programming pro-
gram. These conditions, however, are “implicit” – they are expressed in terms of saddle point of the
Lagrange function, and it is unclear how to verify that something is or is not the saddle point of the
Lagrange function. Fortunately, the proof of Theorem D.2.6 yields more or less explicit optimality con-
ditions. We start with a useful

Definition D.2.3 [Normal Cone] Let X ⊂ Rn and x ∈ X. The normal cone NX(x) of X taken at
the point x is the set

NX(x) = {h ∈ Rn : h^T(x′ − x) ≥ 0 ∀x′ ∈ X}.

Note that NX (x) indeed is a closed convex cone.


Examples:

1. when x ∈ int X, we have NX (x) = {0};


2. when x ∈ ri X, we have NX (x) = L⊥ , where L = Lin(X − x) is the linear subspace parallel to the
affine hull Aff(X) of X.

Theorem D.2.8 [Karush-Kuhn-Tucker Optimality Conditions in Convex Programming] Let (IC) be a
convex program, let x∗ be its feasible solution, and let the functions f, g1, ..., gm be differentiable at x∗.
Then
(i) [Sufficiency] The Karush-Kuhn-Tucker condition:

There exist nonnegative Lagrange multipliers λ∗_j, j = 1, ..., m, such that

λ∗_j g_j(x∗) = 0, j = 1, ..., m [complementary slackness] (D.2.17)

and

∇f(x∗) + Σ_{j=1}^m λ∗_j ∇g_j(x∗) ∈ NX(x∗) (D.2.18)

(see Definition D.2.3)

is sufficient for x∗ to be an optimal solution to (IC).
(ii) [Necessity and sufficiency] If, in addition to the premise, the Relaxed Slater condition holds, then
the Karush-Kuhn-Tucker condition from (i) is necessary and sufficient for x∗ to be an optimal solution to
(IC).

Proof. Observe that under the premise of Theorem D.2.8 the Karush-Kuhn-Tucker condition is necessary
and sufficient for (x∗, λ∗) to be a saddle point of the Lagrange function. Indeed, (x∗, λ∗) is a saddle
point of the Lagrange function if and only if
a) L(x∗, λ) = f(x∗) + Σ_{j=1}^m λ_j g_j(x∗), as a function of λ ≥ 0, attains its maximum at λ = λ∗. Since L(x∗, λ) is linear in λ, this is exactly the same as to say that
for all j we have g_j(x∗) ≤ 0 (which we knew in advance – x∗ is feasible!) and λ∗_j g_j(x∗) = 0, which is
complementary slackness;
b) L(x, λ∗), as a function of x ∈ X, attains its minimum at x∗. Since L(x, λ∗) is convex in x ∈ X due to
λ∗ ≥ 0 and L(x, λ∗) is differentiable at x∗ by the Theorem's premise, Proposition C.5.1 says that L(x, λ∗)
achieves its minimum at x∗ if and only if ∇_x L(x∗, λ∗) = ∇f(x∗) + Σ_{i=1}^m λ∗_i ∇g_i(x∗) has nonnegative inner
products with all vectors h from the radial cone TX(x∗), i.e., all h such that x∗ + th ∈ X for all small
enough t > 0; this is exactly the same as to say that ∇f(x∗) + Σ_{i=1}^m λ∗_i ∇g_i(x∗) ∈ NX(x∗) (since for a
convex set X and all x ∈ X it clearly holds that NX(x) = {f : f^T h ≥ 0 ∀h ∈ TX(x)}). The bottom line is that
for a feasible x∗ and λ∗ ≥ 0, (x∗, λ∗) is a saddle point of the Lagrange function if and only if (x∗, λ∗)
meets the Karush-Kuhn-Tucker condition. This observation proves (i), and combined with Theorem D.2.3,
proves (ii) as well. 2

Note that the optimality conditions stated in Theorem C.5.2 and Proposition C.5.1 are particular
cases of the above Theorem corresponding to m = 0.
Note that in the case when x∗ ∈ int X, we have NX(x∗) = {0}, so that (D.2.18) reads

∇f(x∗) + Σ_{i=1}^m λ∗_i ∇g_i(x∗) = 0;

when x∗ ∈ ri X, NX(x∗) is the orthogonal complement to the linear subspace L to which Aff(X) is
parallel, so that (D.2.18) reads

∇f(x∗) + Σ_{i=1}^m λ∗_i ∇g_i(x∗) is orthogonal to L = Lin(X − x∗).
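To make the conditions concrete, here is a small numerical sanity check (a sketch only: the instance, the functions and the multiplier below are our own illustrative choices, not from the text). We take X = R², so NX(x∗) = {0} and (D.2.18) becomes the stationarity equation above:

```python
import numpy as np

# Hypothetical instance: minimize f(x) = x1^2 + x2^2 over X = R^2
# subject to the single constraint g1(x) = 1 - x1 <= 0.
# The optimum is clearly x* = (1, 0); we check the KKT conditions there.

def f(x): return x[0]**2 + x[1]**2
def grad_f(x): return np.array([2*x[0], 2*x[1]])
def g1(x): return 1.0 - x[0]
grad_g1 = np.array([-1.0, 0.0])

x_star = np.array([1.0, 0.0])
lam_star = 2.0                       # multiplier found from stationarity

# Complementary slackness (D.2.17): lambda* g1(x*) = 0
assert abs(lam_star * g1(x_star)) < 1e-12

# Stationarity (D.2.18) with X = R^2, where N_X(x*) = {0}:
# grad f(x*) + lambda* grad g1(x*) = 0
assert np.allclose(grad_f(x_star) + lam_star * grad_g1, 0.0)

# Sanity check: nearby feasible points are no better
rng = np.random.default_rng(0)
for _ in range(1000):
    x = x_star + 0.1 * rng.standard_normal(2)
    if g1(x) <= 0:                   # keep only feasible perturbations
        assert f(x) >= f(x_star) - 1e-12
print("KKT conditions verified at x* =", x_star)
```

As the theorem promises, the KKT pair (x∗, λ∗) certifies optimality of x∗ for this toy convex program.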

D.2.3.C. Optimality conditions in Conic Programming


Theorem D.2.9 [Optimality Conditions in Conic Programming] Consider a primal-dual pair of conic
problems (cf. Section D.2.2.E)

Opt(P) = min_{x∈Rn} { c^T x : Ax − b ≤ 0, Px − p ≤_K 0 } (P)
Opt(D) = max_{λ=[λ;λ̂]} { −b^T λ − p^T λ̂ : λ ≥ 0, λ̂ ∈ K∗, A^T λ + P^T λ̂ + c = 0 } (D)

and assume that both problems satisfy the Relaxed Slater condition. Then a pair of feasible solutions x∗ to
(P) and λ∗ := [λ∗; λ̂∗] to (D) is comprised of optimal solutions to the respective problems
— [Zero Duality Gap] if and only if

DualityGap(x∗; λ∗) := c^T x∗ − [−b^T λ∗ − p^T λ̂∗] = 0,

and
— [Complementary Slackness] if and only if

λ∗^T [b − Ax∗] + λ̂∗^T [p − Px∗] = 0.

Note: Under the premise of the Theorem, from feasibility of x∗ and λ∗ for the respective problems it follows
that b − Ax∗ ≥ 0 and p − Px∗ ∈ K. Therefore Complementary Slackness (which says that the sum of two
inner products, each of a vector from a regular cone and a vector from the dual of this cone, and as
such automatically nonnegative, is zero) is a really strong restriction.
Proof of Theorem D.2.9 is immediate. By the Conic Duality Theorem (Theorem D.2.5) we are in the case
when Opt(P) = Opt(D) ∈ R, and therefore

DualityGap(x∗; λ∗) = [c^T x∗ − Opt(P)] + [Opt(D) − [−b^T λ∗ − p^T λ̂∗]]

is the sum of nonoptimalities, in terms of the objectives, of x∗ as a candidate solution to (P) and of λ∗
as a candidate solution to (D). Since the candidate solutions in question are feasible for the respective
problems, the sum of nonoptimalities is always nonnegative and is 0 if and only if the solutions in question
are optimal for the respective problems.
It remains to note that the Complementary Slackness condition is equivalent to the Zero Duality Gap one,
since for x∗ and λ∗ feasible for the respective problems we have

DualityGap(x∗; λ∗) = c^T x∗ + b^T λ∗ + p^T λ̂∗ = −[A^T λ∗ + P^T λ̂∗]^T x∗ + b^T λ∗ + p^T λ̂∗
= λ∗^T [b − Ax∗] + λ̂∗^T [p − Px∗],

so that Complementary Slackness is, for x∗ and λ∗ feasible for the respective problems, exactly the same
as Zero Duality Gap. 2
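The algebraic identity at the heart of this proof is easy to observe numerically. The following sketch (the random instance and the choice K = R^k_+, which is self-dual, are our own illustrative assumptions) checks that for any x and any dual-feasible λ the duality gap coincides with the complementary-slackness expression:

```python
import numpy as np

# For any x and any dual-feasible (lam, lamhat), i.e., A^T lam + P^T lamhat + c = 0,
# the duality gap c^T x + b^T lam + p^T lamhat equals
# lam^T (b - Ax) + lamhat^T (p - Px).  Here K = R^k_+ (hypothetical choice).
rng = np.random.default_rng(1)
m, k, n = 3, 4, 5
A, P = rng.standard_normal((m, n)), rng.standard_normal((k, n))
b, p, x = rng.standard_normal(m), rng.standard_normal(k), rng.standard_normal(n)
lam, lamhat = rng.random(m), rng.random(k)        # nonnegative multipliers
c = -(A.T @ lam + P.T @ lamhat)                   # enforce dual feasibility

gap = c @ x - (-b @ lam - p @ lamhat)
slack = lam @ (b - A @ x) + lamhat @ (p - P @ x)
assert abs(gap - slack) < 1e-10
print("duality gap equals complementary-slackness expression")
```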

D.3 Duality in Linear and Convex Quadratic Programming


The fundamental role of the Lagrange function and Lagrange Duality in Optimization is clear already
from the Optimality Conditions given by Theorem D.2.6, but this role is not restricted to this Theorem
only. There are several cases when we can explicitly write down the Lagrange dual, and whenever this is the
case, we get a pair of explicitly formulated and closely related optimization programs – the
primal-dual pair; analyzing the problems simultaneously, we get more information about their properties
(and get a possibility to solve the problems numerically in a more efficient way) than is possible when
we restrict ourselves to only one problem of the pair. The detailed investigation of Duality in “well-
structured” Convex Programming – in the cases when we can explicitly write down both the primal and
the dual problems – goes beyond the scope of our course (mainly because the Lagrange duality is not the
best possible approach here; the best approach is given by the Fenchel Duality – something similar, but
not identical). There are, however, simple cases when already the Lagrange duality is quite appropriate.
Let us look at two of these particular cases.

D.3.1 Linear Programming Duality


Let us start with some general observation. Note that the Karush-Kuhn-Tucker condition under the
assumptions of the Theorem ((IC) is convex, x∗ is an interior point of X, f, g1, ..., gm are differentiable at
x∗) is exactly the condition that (x∗, λ∗ = (λ∗_1, ..., λ∗_m)) is a saddle point of the Lagrange function

L(x, λ) = f(x) + Σ_{j=1}^m λ_j g_j(x): (D.3.1)

(D.2.17) says that L(x∗, λ) attains its maximum in λ ≥ 0 at λ = λ∗, and (D.2.18) says that L(x, λ∗) attains its
minimum in x at x = x∗.
Now consider the particular case of (IC) where X = Rn is the entire space, the objective f is convex
and everywhere differentiable and the constraints g1 , ..., gm are linear. For this case, Theorem D.2.8 says
to us that the KKT (Karush-Kuhn-Tucker) condition is necessary and sufficient for optimality of x∗ ; as
we just have explained, this is the same as to say that the necessary and sufficient condition of optimality
for x∗ is that x∗ along with certain λ∗ ≥ 0 form a saddle point of the Lagrange function. Combining
these observations with Proposition D.2.2, we get the following simple result:

Proposition D.3.1 Let (IC) be a convex program with X = Rn, everywhere differentiable objective f
and linear constraints g1, ..., gm. Then x∗ is an optimal solution to (IC) if and only if there exists λ∗ ≥ 0
such that (x∗, λ∗) is a saddle point of the Lagrange function (D.3.1) (regarded as a function of x ∈ Rn
and λ ≥ 0). In particular, (IC) is solvable if and only if L has saddle points, and if it is the case, then
both (IC) and its Lagrange dual

(IC∗): max_λ {L(λ) : λ ≥ 0}

are solvable with equal optimal values.

Let us look at what this proposition says in the Linear Programming case, i.e., when (IC) is the program

min_x { f(x) = c^T x : g_j(x) ≡ b_j − a_j^T x ≤ 0, j = 1, ..., m }. (P)

In order to get the Lagrange dual, we should form the Lagrange function

L(x, λ) = f(x) + Σ_{j=1}^m λ_j g_j(x) = [c − Σ_{j=1}^m λ_j a_j]^T x + Σ_{j=1}^m λ_j b_j

of (IC) and minimize it in x ∈ Rn; this will give us the dual objective. In our case the minimization
in x is immediate: the minimal value is −∞ if c − Σ_{j=1}^m λ_j a_j ≠ 0, and is Σ_{j=1}^m λ_j b_j otherwise. We see that
the Lagrange dual is

max_λ { b^T λ : Σ_{j=1}^m λ_j a_j = c, λ ≥ 0 }. (D)

The problem we get is the usual LP dual to (P), and Proposition D.3.1 is one of the equivalent forms of
the Linear Programming Duality Theorem which we already know.
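As a small illustration (a hypothetical instance of our own, used only to exercise the statement), weak duality lets a feasible dual solution certify optimality of a feasible primal solution with the same objective value:

```python
import numpy as np

# Hypothetical LP instance: minimize c^T x subject to a_j^T x >= b_j,
# i.e., g_j(x) = b_j - a_j^T x <= 0.  Rows of A are the a_j^T.
c = np.array([1.0, 2.0])
A = np.array([[1.0, 0.0],    # x1 >= 1
              [0.0, 1.0],    # x2 >= 1
              [1.0, 1.0]])   # x1 + x2 >= 3
b = np.array([1.0, 1.0, 3.0])

x = np.array([2.0, 1.0])          # candidate primal solution
lam = np.array([0.0, 1.0, 1.0])   # candidate dual solution

# Primal feasibility: Ax >= b;  dual feasibility: A^T lam = c, lam >= 0.
assert np.all(A @ x >= b - 1e-12)
assert np.all(lam >= 0) and np.allclose(A.T @ lam, c)

# Weak duality: for ANY such pair, b^T lam <= c^T x.  Here the two values
# coincide, which certifies that both candidates are optimal.
assert b @ lam <= c @ x + 1e-12
assert abs(c @ x - b @ lam) < 1e-12
print("common optimal value:", c @ x)
```

The equality of the two objective values is exactly the zero duality gap of Proposition D.3.1.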

D.3.2 Quadratic Programming Duality


Now consider the case when the original problem is the linearly constrained convex quadratic program

min_x { f(x) = ½ x^T Dx + c^T x : g_j(x) ≡ b_j − a_j^T x ≤ 0, j = 1, ..., m }, (P)

where the objective is a strictly convex quadratic form, so that D = D^T is a positive definite matrix:
x^T Dx > 0 whenever x ≠ 0. It is convenient to rewrite the constraints in the vector-matrix form

g(x) = b − Ax ≤ 0, b = [b_1; ...; b_m], A = [a_1^T; ...; a_m^T].

In order to form the Lagrange dual to (P), we write down the Lagrange function

L(x, λ) = f(x) + Σ_{j=1}^m λ_j g_j(x)
= c^T x + λ^T(b − Ax) + ½ x^T Dx
= ½ x^T Dx − [A^T λ − c]^T x + b^T λ

and minimize it in x. Since the function is convex and differentiable in x, the minimum, if it exists, is given
by the Fermat rule

∇_x L(x, λ) = 0,

which in our situation becomes

Dx = [A^T λ − c].

Since D is positive definite, it is nonsingular, so that the Fermat equation has a unique solution which is
the minimizer of L(·, λ); this solution is

x = D^{−1}[A^T λ − c].

Substituting this value of x into the expression for the Lagrange function, we get the dual objective:

L(λ) = −½ [A^T λ − c]^T D^{−1} [A^T λ − c] + b^T λ,

and the dual problem is to maximize this objective over the nonnegative orthant. Usually people rewrite
this dual problem equivalently by introducing additional variables

t = −D^{−1}[A^T λ − c]   [so that [A^T λ − c]^T D^{−1} [A^T λ − c] = t^T Dt];

with this substitution, the dual problem becomes

max_{λ,t} { −½ t^T Dt + b^T λ : A^T λ + Dt = c, λ ≥ 0 }. (D)

We see that the dual problem also turns out to be a linearly constrained convex quadratic program.
Note also that in the case in question a feasible problem (P) automatically is solvable3)
With this observation, we get from Proposition D.3.1 the following
Theorem D.3.1 [Duality Theorem in Quadratic Programming]
Let (P) be a feasible quadratic program with positive definite symmetric matrix D in the objective. Then
both (P) and (D) are solvable, and the optimal values in the problems are equal to each other.
The pair (x; (λ, t)) of feasible solutions to the problems is comprised of the optimal solutions to them
(i) if and only if the primal objective at x is equal to the dual objective at (λ, t) [“zero duality gap”
optimality condition]
same as
(ii) if and only if

λ_i(Ax − b)_i = 0, i = 1, ..., m, and t = −x. (D.3.2)
3) since its objective, due to positive definiteness of D, goes to infinity as |x| → ∞, and due to the following
general fact:
Let (IC) be a feasible program with closed domain X, continuous on X objective and constraints, and such that
f(x) → ∞ as x ∈ X “goes to infinity” (i.e., |x| → ∞). Then (IC) is solvable.
You are welcome to prove this simple statement (it is among the exercises accompanying this Lecture).

Proof. (i): we know from Proposition D.3.1 that the optimal value in the minimization problem (P) is equal
to the optimal value in the maximization problem (D). It follows that the value of the primal objective at
any primal feasible solution is ≥ the value of the dual objective at any dual feasible solution, and equality
is possible if and only if these values coincide with the optimal values in the problems, as claimed in (i).
(ii): Let us compute the difference ∆ between the values of the primal objective at a primal feasible
solution x and the dual objective at a dual feasible solution (λ, t):

∆ = c^T x + ½ x^T Dx − [b^T λ − ½ t^T Dt]
= [A^T λ + Dt]^T x + ½ x^T Dx + ½ t^T Dt − b^T λ   [since A^T λ + Dt = c]
= λ^T[Ax − b] + ½ [x + t]^T D[x + t].

Since Ax − b ≥ 0 and λ ≥ 0 due to primal feasibility of x and dual feasibility of (λ, t), both terms in the
resulting expression for ∆ are nonnegative. Thus, ∆ = 0 (which, by (i), is equivalent to optimality of
x for (P) and optimality of (λ, t) for (D)) if and only if Σ_{j=1}^m λ_j(Ax − b)_j = 0 and (x + t)^T D(x + t) = 0.
The first of these equalities, due to λ ≥ 0 and Ax ≥ b, is equivalent to λ_j(Ax − b)_j = 0, j = 1, ..., m; the
second, due to positive definiteness of D, is equivalent to x + t = 0. 2
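A quick numerical illustration (a toy instance of our own, not from the text): for D = I, c = 0 and the single constraint x1 + x2 ≥ 1, the optimal primal/dual pair can be written down by hand, and conditions (i)–(ii) of the theorem can be checked directly:

```python
import numpy as np

# Toy instance: minimize (1/2) x^T D x + c^T x  s.t.  Ax >= b,
# with D = I, c = 0, and the single constraint x1 + x2 >= 1.
D = np.eye(2)
c = np.zeros(2)
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

lam = np.array([0.5])                       # dual candidate
x = np.linalg.solve(D, A.T @ lam - c)       # primal minimizer of L(., lam)
t = -x                                      # auxiliary dual variable t = -D^{-1}[A^T lam - c]

primal = 0.5 * x @ D @ x + c @ x
dual = -0.5 * t @ D @ t + b @ lam

# Feasibility of both candidates, and (i): zero duality gap
assert np.all(A @ x >= b - 1e-12) and np.all(lam >= 0)
assert np.allclose(A.T @ lam + D @ t, c)
assert abs(primal - dual) < 1e-12           # both equal 0.25
# (ii): complementary slackness and t = -x
assert np.allclose(lam * (A @ x - b), 0) and np.allclose(t, -x)
print("optimal value:", primal)
```

Here x = (1/2, 1/2), λ = 1/2, t = −x, and both objectives equal 1/4, exactly as (D.3.2) predicts.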

D.4 Saddle Points


D.4.1 Definition and Game Theory interpretation
When speaking about the ”saddle point” formulation of optimality conditions in Convex Programming,
we touched a topic which is very interesting in its own right: Saddle Points. This notion is related to the following
situation. Let X ⊂ Rn and Λ ⊂ Rm be two nonempty sets, and let

L(x, λ) : X × Λ → R

be a real-valued function of x ∈ X and λ ∈ Λ. We say that a point (x∗, λ∗) ∈ X × Λ is a saddle point
of L on X × Λ, if L attains at this point its maximum in λ ∈ Λ and its minimum in
x ∈ X:

L(x, λ∗) ≥ L(x∗, λ∗) ≥ L(x∗, λ) ∀(x, λ) ∈ X × Λ. (D.4.1)

The notion of a saddle point admits a natural interpretation in game terms. Consider what is called a two
person zero sum game where player I chooses x ∈ X and player II chooses λ ∈ Λ; after the players have
chosen their decisions, player I pays to player II the sum L(x, λ). Of course, I is interested to minimize
his payment, while II is interested to maximize his income. What is the natural notion of the equilibrium
in such a game – what are the choices (x, λ) of the players I and II such that neither of the players
is interested to vary his choice, independently of whether he knows the choice of his opponent? It
is immediately seen that the equilibria are exactly the saddle points of the cost function L. Indeed, if
(x∗, λ∗) is such a point, then player I is not interested to pass from x∗ to another choice, given that
II keeps his choice λ∗ fixed: the first inequality in (D.4.1) shows that such a change cannot decrease the
payment of I. Similarly, player II is not interested to choose something different from λ∗, given that I
keeps his choice x∗ – such an action cannot increase the income of II. On the other hand, if (x∗, λ∗) is
not a saddle point, then either player I can decrease his payment by passing from x∗ to another choice,
given that II keeps his choice at λ∗ – this is the case when the first inequality in (D.4.1) is violated – or
similarly for player II; thus, equilibria are exactly the saddle points.
The game interpretation of the notion of a saddle point motivates deep insight into the structure of
the set of saddle points. Consider the following two situations:
(A) player I makes his choice first, and player II makes his choice already knowing the choice of I;
(B) vice versa, player II chooses first, and I makes his choice already knowing the choice of II.

In the case (A) the reasoning of I is: If I choose some x, then II of course will choose λ which
maximizes, for my x, my payment L(x, λ), so that I shall pay the sum

L(x) = sup_{λ∈Λ} L(x, λ).

Consequently, my policy should be to choose x which minimizes my loss function L, i.e., the one which
solves the optimization problem

(I) min_{x∈X} L(x);

with this policy my anticipated payment will be

inf_{x∈X} L(x) = inf_{x∈X} sup_{λ∈Λ} L(x, λ).

In the case (B), similar reasoning of II enforces him to choose λ maximizing his profit function

L(λ) = inf_{x∈X} L(x, λ),

i.e., the one which solves the optimization problem

(II) max_{λ∈Λ} L(λ);

with this policy, the anticipated profit of II is

sup_{λ∈Λ} L(λ) = sup_{λ∈Λ} inf_{x∈X} L(x, λ).

Note that these two reasonings relate to two different games: the one with priority of II (when making
his decision, II already knows the choice of I), and the one with similar priority of I. Therefore we should
not, generally speaking, expect that the anticipated loss of I in (A) is equal to the anticipated profit of II
in (B). What can be guessed is that the anticipated loss of I in (B) is less than or equal to the anticipated
profit of II in (A), since the conditions of the game (B) are better for I than those of (A). Thus, we may
guess that independently of the structure of the function L(x, λ), there is the inequality

sup_{λ∈Λ} inf_{x∈X} L(x, λ) ≤ inf_{x∈X} sup_{λ∈Λ} L(x, λ). (D.4.2)

This inequality indeed is true, which is seen from the following reasoning:

∀y ∈ X : inf_{x∈X} L(x, λ) ≤ L(y, λ) ⇒
∀y ∈ X : sup_{λ∈Λ} inf_{x∈X} L(x, λ) ≤ sup_{λ∈Λ} L(y, λ) ≡ L(y);

consequently, the quantity sup_{λ∈Λ} inf_{x∈X} L(x, λ) is a lower bound for the function L(y), y ∈ X, and is therefore
a lower bound for the infimum of the latter function over y ∈ X, i.e., is a lower bound for inf_{y∈X} sup_{λ∈Λ} L(y, λ).
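Inequality (D.4.2) is easy to observe numerically; the following sketch checks it for randomly generated payoff matrices on finite grids (a hypothetical setup of our own):

```python
import numpy as np

# For a finite game, L is just a matrix: L[i, j] = L(x_i, lambda_j).
# The minimax inequality (D.4.2) then reads
#   max_j min_i L[i, j]  <=  min_i max_j L[i, j],
# and holds for ANY matrix, convex-concave or not.
rng = np.random.default_rng(42)
for _ in range(100):
    L = rng.standard_normal((7, 5))
    sup_inf = L.min(axis=0).max()   # sup over lambda of inf over x
    inf_sup = L.max(axis=1).min()   # inf over x of sup over lambda
    assert sup_inf <= inf_sup + 1e-12
print("sup inf <= inf sup held for all 100 random matrices")
```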
Now let us look at what happens when the game in question has a saddle point (x∗, λ∗), so that

L(x, λ∗) ≥ L(x∗, λ∗) ≥ L(x∗, λ) ∀(x, λ) ∈ X × Λ. (D.4.3)

We claim that if it is the case, then

(*) x∗ is an optimal solution to (I), λ∗ is an optimal solution to (II) and the optimal values in these
two optimization problems are equal to each other (and are equal to the quantity L(x∗, λ∗)).

Indeed, from (D.4.3) it follows that

L(λ∗) ≥ L(x∗, λ∗) ≥ L(x∗),

whence, of course,

sup_{λ∈Λ} L(λ) ≥ L(λ∗) ≥ L(x∗, λ∗) ≥ L(x∗) ≥ inf_{x∈X} L(x).

The very first quantity in the latter chain is ≤ the very last quantity by (D.4.2), which is possible if and
only if all the inequalities in the chain are equalities, which is exactly what is claimed in (*).
Thus, if (x∗ , λ∗ ) is a saddle point of L, then (*) takes place. We are about to demonstrate that the
inverse also is true:

Theorem D.4.1 [Structure of the saddle point set] Let L : X × Λ → R be a function. The set of saddle
points of L is nonempty if and only if the related optimization problems (I) and (II) are solvable
and the optimal values in the problems are equal to each other. If it is the case, then the saddle points of
L are exactly all pairs (x∗, λ∗) with x∗ being an optimal solution to (I) and λ∗ being an optimal solution
to (II), and the value of the cost function L(·, ·) at every one of these points is equal to the common
optimal value in (I) and (II).

Proof. We already have established “half” of the theorem: if there are saddle points of L, then their
components are optimal solutions to (I), respectively (II), and the optimal values in these two problems
are equal to each other and to the value of L at the saddle point in question. To complete the proof,
we should demonstrate that if x∗ is an optimal solution to (I), λ∗ is an optimal solution to (II) and the
optimal values in the problems are equal to each other, then (x∗, λ∗) is a saddle point of L. This is
immediate: we have

L(x, λ∗) ≥ L(λ∗) [by the definition of L(λ)]
= L(x∗) [by assumption]
≥ L(x∗, λ) [by the definition of L(x)],

whence

L(x, λ∗) ≥ L(x∗, λ) ∀x ∈ X, λ ∈ Λ;

substituting λ = λ∗ in the right hand side of this inequality, we get L(x, λ∗) ≥ L(x∗, λ∗), and substituting
x = x∗ in the left hand side of our inequality, we get L(x∗, λ∗) ≥ L(x∗, λ); thus, (x∗, λ∗) indeed is a
saddle point of L. 2

D.4.2 Existence of Saddle Points


It is easily seen that a ”quite respectable” cost function may have no saddle points, e.g., the function
L(x, λ) = (x − λ)² on the unit square [0, 1] × [0, 1]. Indeed, here

L(x) = sup_{λ∈[0,1]} (x − λ)² = max{x², (1 − x)²},
L(λ) = inf_{x∈[0,1]} (x − λ)² = 0, λ ∈ [0, 1],

so that the optimal value in (I) is 1/4, and the optimal value in (II) is 0; according to Theorem D.4.1 it
means that L has no saddle points.
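This computation is easy to replicate on a grid (a sketch; the grid discretization is our own device):

```python
import numpy as np

# L(x, lambda) = (x - lambda)^2 on [0,1] x [0,1], discretized on a common grid.
g = np.linspace(0.0, 1.0, 101)
L = (g[:, None] - g[None, :]) ** 2      # L[i, j] = (x_i - lambda_j)^2

inf_sup = L.max(axis=1).min()           # optimal value in (I): expect 1/4
sup_inf = L.min(axis=0).max()           # optimal value in (II): expect 0

assert abs(inf_sup - 0.25) < 1e-12
assert sup_inf < 1e-15                  # x = lambda lies on the grid, so exactly 0
print("inf sup =", inf_sup, " sup inf =", sup_inf)
```

The strict gap 1/4 > 0 between the two optimal values is precisely what rules out a saddle point here.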
On the other hand, there are generic cases when L has a saddle point, e.g., when

L(x, λ) = f(x) + Σ_{i=1}^m λ_i g_i(x) : X × R^m_+ → R

is the Lagrange function of a solvable convex program satisfying the Slater condition. Note that in this
case L is convex in x for every λ ∈ Λ ≡ R^m_+ and is linear (and therefore concave) in λ for every fixed x.
As we shall see in a while, it is these structural properties of L which take upon themselves the “main
responsibility” for the fact that in the case in question the saddle points exist. Namely, there exists the
following

Theorem D.4.2 [Existence of saddle points of a convex-concave function (Sion-Kakutani)] Let X and
Λ be convex compact sets in Rn and Rm , respectively, and let

L(x, λ) : X × Λ → R

be a continuous function which is convex in x ∈ X for every fixed λ ∈ Λ and is concave in λ ∈ Λ for
every fixed x ∈ X. Then L has saddle points on X × Λ.

Proof. According to Theorem D.4.1, we should prove that

• (i) the optimization problems (I) and (II) are solvable;
• (ii) the optimal values in (I) and (II) are equal to each other.

(i) is valid independently of convexity-concavity of L and is given by the following routine reasoning from
Analysis:
Since X and Λ are compact sets and L is continuous on X × Λ, due to the well-known Analysis
theorem L is uniformly continuous on X × Λ: for every ε > 0 there exists δ(ε) > 0 such that

|x − x′| + |λ − λ′| ≤ δ(ε) ⇒ |L(x, λ) − L(x′, λ′)| ≤ ε 4) (D.4.4)

In particular,

|x − x′| ≤ δ(ε) ⇒ |L(x, λ) − L(x′, λ)| ≤ ε,

whence, of course, also

|x − x′| ≤ δ(ε) ⇒ |L(x) − L(x′)| ≤ ε,

so that the function L is continuous on X. Similarly, L is continuous on Λ. Taking into account that X
and Λ are compact sets, we conclude that the problems (I) and (II) are solvable.
(ii) is the essence of the matter; here, of course, the entire construction heavily exploits the convexity-concavity of L.
0°. To prove (ii), we first establish the following statement, which is important in its own right:
Lemma D.4.1 [Minmax Lemma] Let X be a convex compact set and f0, ..., fN be a collection
of N + 1 convex and continuous functions on X. Then the minmax

min_{x∈X} max_{i=0,...,N} f_i(x) (D.4.5)

of the collection is equal to the minimum in x ∈ X of certain convex combination of the
functions: there exist nonnegative µ_i, i = 0, ..., N, with unit sum such that

min_{x∈X} max_{i=0,...,N} f_i(x) = min_{x∈X} Σ_{i=0}^N µ_i f_i(x).

Remark D.4.1 The minimum of every convex combination of a collection of arbitrary functions
is ≤ the minmax of the collection; this evident fact can be also obtained from (D.4.2) as
applied to the function

M(x, µ) = Σ_{i=0}^N µ_i f_i(x)

on the direct product of X and the standard simplex

∆ = {µ ∈ R^{N+1} : µ ≥ 0, Σ_i µ_i = 1}.

The Minmax Lemma says that if f_i are convex and continuous on a convex compact set X,
then the indicated inequality is in fact equality; you can easily verify that this is nothing but
the claim that the function M possesses a saddle point. Thus, the Minmax Lemma is in fact
a particular case of the Sion-Kakutani Theorem; we are about to give a direct proof of this
particular case of the Theorem and then to derive the general case from this particular one.

4) For those not too familiar with Analysis, I wish to stress the difference between the usual continuity and
the uniform continuity: continuity of L means that given ε > 0 and a point (x, λ), it is possible to choose δ > 0
such that (D.4.4) is valid; the corresponding δ may depend on (x, λ), not only on ε. Uniform continuity means
that this positive δ may be chosen as a function of ε only. The fact that a function continuous on a compact set
automatically is uniformly continuous on the set is one of the most useful features of compact sets.

Proof of the Minmax Lemma. Consider the optimization program

(S) min_{t,x} {t : f_0(x) − t ≤ 0, f_1(x) − t ≤ 0, ..., f_N(x) − t ≤ 0, x ∈ X}.

This clearly is a convex program with the optimal value

t∗ = min_{x∈X} max_{i=0,...,N} f_i(x)

(note that (t, x) is a feasible solution for (S) if and only if x ∈ X and t ≥ max_{i=0,...,N} f_i(x)). The
problem clearly satisfies the Slater condition and is solvable (since X is a compact set and f_i,
i = 0, ..., N, are continuous on X; therefore their maximum also is continuous on X and
thus attains its minimum on the compact set X). Let (t∗, x∗) be an optimal solution to the
problem. According to Theorem D.2.6, there exists λ∗ ≥ 0 such that ((t∗, x∗), λ∗) is a saddle
point of the corresponding Lagrange function

L(t, x; λ) = t + Σ_{i=0}^N λ_i(f_i(x) − t) = t(1 − Σ_{i=0}^N λ_i) + Σ_{i=0}^N λ_i f_i(x),

and the value of this function at ((t∗, x∗), λ∗) is equal to the optimal value in (S), i.e., to t∗.
Now, since L(t, x; λ∗) attains its minimum in (t, x) over the set {t ∈ R, x ∈ X} at (t∗, x∗),
we should have

Σ_{i=0}^N λ∗_i = 1

(otherwise the minimum of L in (t, x) would be −∞). Thus,

[min_{x∈X} max_{i=0,...,N} f_i(x) =] t∗ = min_{t∈R, x∈X} [t · 0 + Σ_{i=0}^N λ∗_i f_i(x)],

so that

min_{x∈X} max_{i=0,...,N} f_i(x) = min_{x∈X} Σ_{i=0}^N λ∗_i f_i(x)

with some λ∗_i ≥ 0, Σ_{i=0}^N λ∗_i = 1, as claimed. 2
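A tiny concrete instance (our own, for illustration): on X = [0, 1] with f0(x) = x² and f1(x) = (1 − x)², the minmax equals 1/4, and the convex combination with weights µ = (1/2, 1/2) attains the same minimum, as the Lemma promises:

```python
import numpy as np

# X = [0, 1], f0(x) = x^2, f1(x) = (1 - x)^2, discretized on a fine grid.
x = np.linspace(0.0, 1.0, 100001)
f0, f1 = x**2, (1.0 - x)**2

minmax = np.maximum(f0, f1).min()          # min_x max_i f_i(x) = 1/4 at x = 1/2
mu = np.array([0.5, 0.5])                  # weights predicted by the lemma
min_comb = (mu[0]*f0 + mu[1]*f1).min()     # min_x of the convex combination

assert abs(minmax - 0.25) < 1e-9
assert abs(min_comb - minmax) < 1e-9
print("minmax =", minmax, " min of convex combination =", min_comb)
```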

From the Minmax Lemma to the Sion-Kakutani Theorem. We should prove that the
optimal values in (I) and (II) (which, by (i), are well defined reals) are equal to each other, i.e., that

inf_{x∈X} sup_{λ∈Λ} L(x, λ) = sup_{λ∈Λ} inf_{x∈X} L(x, λ).

We know from (D.4.2) that the first of these two quantities is greater than or equal to the second, so that
all we need is to prove the inverse inequality. For me it is convenient to assume that the right quantity
(the optimal value in (II)) is 0, which, of course, does not restrict generality; and all we need to prove is
that the left quantity – the optimal value in (I) – cannot be positive.
1°. What does it mean that the optimal value in (II) is zero? It means that the function L(λ)
is nonpositive for every λ, or, which is the same, that the convex continuous function of x ∈ X – the function
L(x, λ) – has nonpositive minimal value over x ∈ X. Since X is compact, this minimal value is achieved,
so that the set

X(λ) = {x ∈ X : L(x, λ) ≤ 0}

is nonempty; and since X is convex and L is convex in x ∈ X, the set X(λ) is convex (as a level set of
a convex function, Proposition C.1.4). Note also that the set is closed (since X is closed and L(x, λ) is
continuous in x ∈ X).
2°. Thus, if the optimal value in (II) is zero, then the set X(λ) is a nonempty convex compact set for
every λ ∈ Λ. And what does it mean that the optimal value in (I) is nonpositive? It means exactly that
there is a point x ∈ X where the function L(x) is nonpositive, i.e., a point x ∈ X where L(x, λ) ≤ 0 for
all λ ∈ Λ. In other words, to prove that the optimal value in (I) is nonpositive is the same as to prove
that the sets X(λ), λ ∈ Λ, have a point in common.
3°. With the above observations we see that the situation is as follows: we are given a family of closed
nonempty convex subsets X(λ), λ ∈ Λ, of a compact set X, and we should prove that these sets have a
point in common. To this end, in turn, it suffices to prove that every finite number of sets from our family
have a point in common (to justify this claim, I can refer to the Helly Theorem II, which gives us a much
stronger result: to prove that all X(λ) have a point in common, it suffices to prove that every (n + 1)
sets of this family, n being the affine dimension of X, have a point in common). Let X(λ_0), ..., X(λ_N) be
N + 1 sets from our family; we should prove that the sets have a point in common. In other words, let

f_i(x) = L(x, λ_i), i = 0, ..., N;

all we should prove is that there exists a point x where all our functions are nonpositive, or, which is the
same, that the minmax of our collection of functions – the quantity

α ≡ min_{x∈X} max_{i=0,...,N} f_i(x)

– is nonpositive.
The proof of the inequality α ≤ 0 is as follows. According to the Minmax Lemma (which can be
applied in our situation, since L is convex and continuous in x, so that all f_i are convex and continuous, and X
is compact), α is the minimum in x ∈ X of certain convex combination φ(x) = Σ_{i=0}^N ν_i f_i(x) of the functions
f_i(x). We have

φ(x) = Σ_{i=0}^N ν_i f_i(x) ≡ Σ_{i=0}^N ν_i L(x, λ_i) ≤ L(x, Σ_{i=0}^N ν_i λ_i)

(the last inequality follows from concavity of L in λ; this is the only – and crucial – point where we use
this assumption). We see that φ(·) is majorized by L(·, λ) for a properly chosen λ ∈ Λ (namely, λ = Σ_{i=0}^N ν_i λ_i); it follows that the
minimum of φ in x ∈ X – and we already know that this minimum is exactly α – is nonpositive (recall
that the minimum of L in x is nonpositive for every λ). 2

Semi-Bounded case. The next theorem lifts the assumption of boundedness of X and Λ in Theorem
D.4.2 – now only one of these sets should be bounded – at the price of some weakening of the conclusion.

Theorem D.4.3 [Swapping min and max in convex-concave saddle point problem (Sion-Kakutani)] Let
X and Λ be convex sets in Rn and Rm , respectively, with X being compact, and let

L(x, λ) : X × Λ → R

be a continuous function which is convex in x ∈ X for every fixed λ ∈ Λ and is concave in λ ∈ Λ for
every fixed x ∈ X. Then
inf sup L(x, λ) = sup inf L(x, λ). (D.4.6)
x∈X λ∈Λ λ∈Λ x∈X

Proof. By general theory, in (D.4.6) the left hand side is ≥ the right hand side, so that there is nothing to prove when the right hand side is +∞. Assume that this is not the case. Since X is compact and L is continuous in x ∈ X, we have inf_{x∈X} L(x, λ) > −∞ for every λ ∈ Λ, so that the right hand side in (D.4.6) cannot be −∞; since it is not +∞ either, it is a real number, and by a shift we may assume w.l.o.g. that this number is 0:

    sup_{λ∈Λ} inf_{x∈X} L(x, λ) = 0.

All we need to prove now is that the left hand side in (D.4.6) is nonpositive. Assume, on the contrary, that it is positive, so that it is > c for some c > 0. Then for every x ∈ X there exists λ_x ∈ Λ such that L(x, λ_x) > c. By continuity, there exists a neighborhood V_x of x in X such that L(x′, λ_x) ≥ c for all x′ ∈ V_x. Since X is compact, we can find finitely many points x_1, ..., x_n in X such that the union of V_{x_1}, ..., V_{x_n} is all of X, implying that max_{1≤i≤n} L(x, λ_{x_i}) ≥ c for every x ∈ X. Now let Λ̄ be the convex hull of {λ_{x_1}, ..., λ_{x_n}}, so that max_{λ∈Λ̄} L(x, λ) ≥ c for every x ∈ X. Applying Theorem D.4.2 to L and the convex compact sets X, Λ̄, we get the equality in the following chain:

    c ≤ min_{x∈X} max_{λ∈Λ̄} L(x, λ) = max_{λ∈Λ̄} min_{x∈X} L(x, λ) ≤ sup_{λ∈Λ} min_{x∈X} L(x, λ) = 0,

which is the desired contradiction (recall that c > 0). □
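A small numeric illustration of (D.4.6) with Λ unbounded (a hedged sketch, not from the original text): take L(x, λ) = λx − λ² on X = [−1, 1], Λ = R. Analytically, sup_λ inf_x L = 0 (attained at λ = 0) and inf_x sup_λ L = min_x x²/4 = 0, so the two sides coincide even though Λ is unbounded. The grid check below truncates Λ to [−5, 5], which is harmless here since any |λ| > 1 makes λx − λ² negative and so can never improve either side.

```python
# Numeric check of inf sup = sup inf for L(x, lam) = lam*x - lam**2
# on X = [-1, 1] (compact) and Lambda = R (unbounded, truncated for the grid).

def L(x, lam):
    return lam * x - lam * lam

X = [j / 50.0 for j in range(-50, 51)]          # grid on [-1, 1]
LAM = [j / 50.0 for j in range(-250, 251)]      # grid on [-5, 5]

inf_sup = min(max(L(x, lam) for lam in LAM) for x in X)
sup_inf = max(min(L(x, lam) for x in X) for lam in LAM)

print(inf_sup, sup_inf)  # both equal 0 on these grids
```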

Slightly strengthening the premise of Theorem D.4.3, we can replace (D.4.6) with the existence of a saddle point:

Theorem D.4.4 [Existence of saddle point in convex-concave saddle point problem (Sion-Kakutani,
Semi-Bounded case)] Let X and Λ be closed convex sets in Rn and Rm , respectively, with X being
compact, and let
L(x, λ) : X × Λ → R
be a continuous function which is convex in x ∈ X for every fixed λ ∈ Λ and is concave in λ ∈ Λ for
every fixed x ∈ X. Assume that for every a ∈ R there exists a collection x_1^a, ..., x_{n_a}^a ∈ X such that the set

    {λ ∈ Λ : L(x_i^a, λ) ≥ a, 1 ≤ i ≤ n_a}

is bounded.⁵ Then L has saddle points on X × Λ.

⁵ This definitely is the case when L(x̄, λ) is coercive in λ for some x̄ ∈ X, meaning that the sets {λ ∈ Λ : L(x̄, λ) ≥ a} are bounded for every a ∈ R, or, equivalently, that whenever λ_i ∈ Λ and ‖λ_i‖₂ → ∞ as i → ∞, we have L(x̄, λ_i) → −∞.

Proof. Since X is compact and L is continuous, the function L̲(λ) = min_{x∈X} L(x, λ) is real-valued and continuous on Λ. Further, for every a ∈ R, the set {λ ∈ Λ : L̲(λ) ≥ a} clearly is contained in the set {λ : L(x_i^a, λ) ≥ a, 1 ≤ i ≤ n_a} and thus is bounded. Thus, L̲ is a continuous function on the closed set Λ with bounded level sets {λ ∈ Λ : L̲(λ) ≥ a}, so L̲ attains its maximum on Λ. Invoking Theorem D.4.3, it follows that inf_{x∈X} [L̄(x) := sup_{λ∈Λ} L(x, λ)] is finite, implying that the function L̄(·) is not identically +∞ on X. Since L is continuous, L̄ is lower semicontinuous. Thus, L̄ : X → R ∪ {+∞} is a lower semicontinuous proper (i.e., not identically +∞) function on X; since X is compact, L̄ attains its minimum on X. Thus, both problems max_{λ∈Λ} L̲(λ) and min_{x∈X} L̄(x) are solvable, and their optimal values are equal by Theorem D.4.3. Invoking Theorem D.4.1, L has a saddle point. □
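To make the coercivity condition in the footnote concrete (an added sketch with a hypothetical example, not from the original text): L(x, λ) = x² + 2xλ − λ² on X = [−1, 1], Λ = R is convex in x, concave in λ, and coercive in λ at x̄ = 0, since L(0, λ) = −λ² → −∞ as ‖λ‖ → ∞. Theorem D.4.4 therefore guarantees a saddle point; here it is (x∗, λ∗) = (0, 0), and the grid check below verifies the saddle point inequalities L(x∗, λ) ≤ L(x∗, λ∗) ≤ L(x, λ∗).

```python
# Saddle point check for L(x, lam) = x**2 + 2*x*lam - lam**2 on
# X = [-1, 1], Lambda = R (truncated):  L(0, lam) <= L(0, 0) <= L(x, 0).

def L(x, lam):
    return x * x + 2.0 * x * lam - lam * lam

X = [j / 100.0 for j in range(-100, 101)]       # grid on [-1, 1]
LAM = [j / 10.0 for j in range(-100, 101)]      # grid on [-10, 10]

x_star, lam_star = 0.0, 0.0
ok = (all(L(x_star, lam) <= L(x_star, lam_star) for lam in LAM) and
      all(L(x_star, lam_star) <= L(x, lam_star) for x in X))
print(ok)  # True: (0, 0) satisfies both saddle point inequalities
```

Indeed, L(0, λ) = −λ² ≤ 0 and L(x, 0) = x² ≥ 0, so both inequalities hold with L(0, 0) = 0.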
Index

canonical barriers,


properties of, 377–378
affine canonical cones,
basis, 586 central points of, 374
combination, 584 properties of, 372
dependence/independence, 585 scalings of, 373–375
dimension chance constraints, 247
of affine subspace, 586 Lagrangian relaxation of, 250–251
of set, 625 safe tractable approximation of, 249
hull, 583 characteristic polynomial, 607
structure of, 584 chip design, 185–187
plane, 589 closure, 621
subspace, 583 combination
representation by linear equations, 587 conic, 620
affinely convex, 618
independent set, 585
complementary slackness
spanning set, 584
in Convex Programming, 704
algorithm
in Linear Programming, 29, 641
Bundle Mirror, 444–446, 453
complexity
Mirror Descent, 409–420
information-based, 391
for stochastic minimization/saddle point
cone, 51, 619
problems, 424–436
dual, 54, 649
saddle point, 420, 423
interior of, 75
setup for, 403–409
Lorentz/second-order/ice-cream, 52
summary on, 474–494
normal, 677
Mirror Prox, 468–474
pointed, 51, 649
summary on, 474–494
polyhedral, 619, 657
Truncated Bundle Mirror, 444–458
radial, 676
annulator, see orthogonal complement
recessive, 652
ball regular, 52
Euclidean, 616 semidefinite, 52, 161, see semidefinite cone,
unit balls of norms, 616 612
basis self-duality of, 613
in Rn , 575 cones
in linear subspace, 575 calculus of, 75
orthonormal, 580 examples of, 76
existence of, 581 operations with, 76
black box primal-dual pairs of, 76
oriented algorithm/method, 391 conic
oriented optimization, 391 dual of conic program, 56
BM, see algorithm, Bundle Mirror duality, 53, 56, 702
boundary, 622 theorem, 702
relative, 623 Duality Theorem, 59
Bundle Mirror, see algorithm, Bundle Mirror optimization problems


feasible and level sets of, 77 CQr, see Conic Quadratic representable
program, 702
programming, 79 derivatives, 595
conic hull, 620 calculus of, 599
Conic Programming, 702 computation of, 599
Conic Quadratic Programming, 87 directional, 596
applications of existence, 599
problems with static friction, 89 of higher order, 601
robust LP, 125 DGF, see distance generating function
program, 87 differential, see derivatives
conic quadratic programming, 159 Dikin ellipsoid, 375–377
applications of dimension formula, 576
robust LP, 139 distance generating function, 403
Conic Quadratic representable Dynamic Stability Analysis/Synthesis, 181–183
functions, 93
eigenvalue decomposition
sets, 91
of a symmetric matrix, 607
Conic Quadratic Representations
simultaneous, 608
examples of, 94, 136
eigenvalues, 607
Conic Quadratic representations
variational characterization of, 273
calculus of, 94–107
variational description of, 609
examples of, 93–110, 139
vector of, 608
of functions, 93
eigenvectors, 607
of sets, 91
ellipsoid, 255, 617
conic quadratic representations
Ellipsoid method, 323–331
examples of, 140
ellipsoidal approximation, 255–272
Conic representability
inner, 257
of convex-concave functions, 114–125
of intersections of ellipsoids, 261
of functions and sets, 113–125 of polytopes, 257–258
of monotone vector fields, 495–504 of sums of ellipsoids, 269–271
Conic Theorem on Alternative, 66 of sums of ellipsoids, 262–272, 294–299
Refined, 68 outer, 257
constraint of finite sets, 258–259
conic, 53 of sums of ellipsoids, 263–268
constraints of unions of ellipsoids, 262
active, 691 ellitope, 210
of optimization program, 691 basic, 210
convex Euclidean
optimization problem distance, 590
in conic form, 701 norm, 589
convex hull, 618 structure, 578
convex program extreme points, 651
in conic form, 694 of polyhedral sets, 654
Convex Programming Duality Theorem, 700
conic form, 702 feasibility of conic inequality
Convex Programming optimality conditions essentially strict, 59
in saddle point form, 703 strict, 59
in the Karush-Kuhn-Tucker form, 705 feasibility of conic problem
Convex Theorem on Alternative, 694 essentially strict, 59
in conic form, 697 strict, 59
coordinates, 576 first order
barycentric, 586 algorithms, 391
CQP, see Conic Quadratic Programming oracle, 391, 421
CQR, see Conic Quadratic Representation functions

affine, 663 Frobenius, 52, 161, 607


concave, 663 representation of linear forms, 579
continuous, 592 interior
basic properties of, 594 relative, 622
calculus of, 593
convex Lagrange
below boundedness of, 685 dual
calculus of, 666 in conic form, 701
closures of, 685 dual of an optimization program, 701
convexity of level sets of, 665 duality, 25, 638
definition of, 663, 664 function, 700
differential criteria of convexity, 668 for problem in conic form, 702
domains of, 665 saddle points of, 701
elementary properties of, 664 multipliers, 701
Karush-Kuhn-Tucker optimality conditions Lagrange Duality Theorem, 300
for, 677 Lagrange relaxation, 197
Legendre transformations of, 687 lemma
Lipschitz continuity of, 673 S-Lemma, 192, 235–238
maxima and minima of, 675 approximate, 239–241
proper, 680 inhomogeneous, 200, 238–239
subgradients of, 686 Dubovitski-Milutin, 650
values outside the domains of, 665 Homogeneous Farkas, 22
differentiable, 595–611 Inhomogeneous Farkas, 69
Minmax, 713
General Theorem on Alternative, 22, 636 on the Schur complement, 170
gradient, 597
line, 588
Gram-Schmidt orthogonalization, 581
linear
graph,
dependence/independence, 575
chromatic number of, 288
form, 579
GTA, see General Theorem on Alternative, see
mapping, 576
General Theorem on Alternative
representation by matrices, 576
Helley’s Theorem, 301 span, 574
Hessian, 604 subspace, 574
HFL, see Homogeneous Farkas Lemma basis of, 575
hypercube, 617 dimension of, 575
hyperoctahedron, 617 orthogonal complement of, 579
hyperplane, 589 representation by homogeneous linear equa-
supporting, 647 tions, 587
Linear Matrix Inequality, 163
inequality Linear Programming
Cauchy, 589 approximation of Conic Quadratic programs,
conic, 53 149–154
Gradient, 671 Duality Theorem, 27, 640
Hölder, 689 engineering applications of, 29–49
Jensen, 664 compressed sensing, 29, 39
triangle, 589 Support Vector Machines, 39–41
vector, 49, 50, 52, 53 synthesis of linear controllers, 41–49
geometry of, 50 necessary and sufficient optimality condi-
Young, 689 tions, 29, 641
information based program, 19, 658
complexity, 391–394 bounded/unbounded, 19, 658
theory, 391–394 dual of, 25, 638
inner product, 578 feasible/infeasible, 19, 658

optimal value of, 19 optimization program, 691


solvability of, 659 below bounded, 692
solvable/unsolvable, 19, 658 conic, 53
uncertain, 125 data of, 70
robust, 125–139 dual of, 56
linearly feasible and level sets of, 78
dependent/independent set, 575 robust bounded/unbounded, 71
spanning set, 576 robust feasible/infeasible, 71
LMI, see Linear Matrix Inequality robust solvability of, 73
Lovász capacity of a graph, 202 sufficient condition for infeasibility, 66
Lovász's Sandwich Theorem, 288 constraints of, 691
Löwner–Fritz John Theorem, 257 convex, 692
LP, see Linear Programming convex, 692
Lyapunov domain of, 691
stability analysis/synthesis, 293–294 feasible, 691
Lyapunov Stability Analysis/Synthesis, 187–196 feasible set of, 691
under interval uncertainty, 225–228 feasible solutions of, 691
objective of, 691
Majorization Principle, 280 optimal solutions of, 691, 692
mappings, see functions optimal value of, 691
Mathematical Programming program, see opti- solvable, 692
mization program orthogonal
matrices complement, 579
Frobenius inner product of, 607 projection, 580
positive (semi)definite, 52, 610
square roots of, 612 path-following methods,
spaces of, 607 infeasible start -, 380–387
symmetric, 607
primal and dual, 378–380
eigenvalue decomposition of, 607
PET, see Positron Emission Tomography
matrix game, 429
polar, 648
MD, see algorithm, Mirror Descent
polyhedral
Mirror Descent, see algorithm, Mirror Descent
cone, 619
Mirror Prox, see algorithm, Mirror Prox
representation, 628
MP, see algorithm, Mirror Prox
set, 615
Newton decrement, 379 positive semidefiniteness
geometric interpretation, 380 criteria for, 273
nonnegative orthant, 52 Positron Emission Tomography, 389–390
norms primal-dual pair of conic problems, 56
convexity of, 664 problems
properties of, 589 with convex structure, 481
Convex Equilibrium, 485
objective Convex Minimization, 481
of optimization program, 691 Convex Nash Equilibrium, 483
online regret minimization Convex-Concave Saddle Point, 482
deterministic, 416–420 descriptive results, 486, 487–490
stochastic, 438–443 Monotone Variational Inequality, 484
optimality conditions proximal setup, 403–404
in Conic Programming, 59 `1 /`2 , 406
in Linear Programming, 29, 641 Ball, 405
in saddle point form, 703 Entropy, 405
in the Karush-Kuhn-Tucker form, 705 Euclidean, 405
optimization Nuclear norm, 408
black box oriented, 391 Simplex, 407

Spectahedron, 409 closed, 591


compact, 592
robust counterpart convex
adjustable/affinely adjustable, 140–143 calculus of, 620
of conic quadratic constraint, 243–247 definition of, 615
of uncertain LP, 125–127 topological properties of, 623
open, 591
S-Lemma, 299–310
polyhedral, 615
relaxed versions of, 308–310
calculus of, 630
with multi-inequality premise, 302–310
extreme points of, 654
saddle point
structure of, 656
reformulation of an optimization program,
Shannon capacity of a graph, 200
465
simplex, 619
representation of a convex function, 465
Slater condition, 694
examples of, 466
relaxed, 694
representation of convex functions
subgradient, 686
examples of, 468
synthesis
saddle points, 701, 710
of antenna array
existence of, 712
via SDP, 294
game theory interpretation of, 710
structure of the set of, 712 Taylor expansion, 606
SDP, see Semidefinite Programming TBM, see algorithm, Truncated Bundle Mirror
SDR, see Semidefinite representation theorem
Semidefinite Birkhoff, 172, 276
program, 162 Caratheodory, 625
Programming, 161–299, 313 Eigenvalue Decomposition, 607
applications of, 181, 185, 187, 196 Eigenvalue Interlacement, 611
relaxation Goemans and Williamson, 206
and chance constraints, 247 Helley, 626
and stability number of a graph, 200–205 Krein-Milman, 651
of intractable problem, 255 Matrix Cube, 221
of the MAXCUT problem, 205–206 Nesterov, 207
Shor’s scheme of, 197–199 Radon, 626
representable function/set, 165 Separation, 643
representation Sion-Kakutani, 712
of epigraph of convex polynomial, 284 Truss Topology Design, 183–185
of function of eigenvalues, 167–173
of function of singular values, 173–175 Variational Inequality
of function/set, 165 see problems with convex structure
of nonnegative univariate polynomial, 178 Monotone Variational Inequality, 484
semidefinite
cone, 161 Weak
programming, 313 Conic Duality Theorem, 56
programs, 299 Duality Theorem in LP, 25, 638
relaxation
of intractable problem, 196
Semidefinite Programming
applications of, 180–272
semidefinite programs, 313
separation
of convex sets, 641
strong, 642
theorem, 643
sets
