1152ec - Optimization Technique-Draft Copy - 20th July
a) Course Category
Program Elective
b) Preamble
Optimization plays a critical role in contemporary areas such as decision and control, signal processing, and machine learning. This course intends to present a thorough treatment of optimization techniques, with specific emphasis on modern applications. It will provide students with a sound background in the area and benefit those who wish to pursue doctoral or master's level theses in this subject, or to apply these techniques to their own work.
c) Prerequisite
Nil
d) Related Courses
Machine Learning
Deep Learning
e) Course Outcomes
Upon the successful completion of the course, students will be able to:

CO Nos.    Course Outcomes    Knowledge Level (Based on Revised Bloom's Taxonomy)
g) Course Content
UNIT-I
Introduction
Introduction to Optimization, sequences and limits, derivative matrix, level sets and gradients, Taylor series;
unconstrained optimization - necessary and sufficient conditions for optima. 9 Hours
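As a concrete illustration of this unit's optimality conditions, the sketch below numerically checks the first-order necessary condition (a vanishing gradient at an unconstrained minimizer). The quadratic test function and the finite-difference gradient helper are assumptions chosen for the example, not course material.

```python
import numpy as np

def f(x):
    # Illustrative quadratic with known minimizer x* = (1.0, -0.5)
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2

def grad_fd(func, x, h=1e-6):
    # Central-difference approximation of the gradient
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (func(x + e) - func(x - e)) / (2.0 * h)
    return g

x_star = np.array([1.0, -0.5])
grad_norm = np.linalg.norm(grad_fd(f, x_star))
# First-order necessary condition: the gradient vanishes at the minimizer,
# while at a non-optimal point it is bounded away from zero.
```

A second-order check would additionally verify that the Hessian is positive semidefinite at the candidate point.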
UNIT-II
Convex Optimization:
Convex sets, convex functions, optima of convex functions, convex optimization problems; steepest descent, Newton and quasi-Newton methods, conjugate direction methods. 9 Hours
UNIT-III
Optimization for Machine Learning:
Convex optimization - introduction, incremental gradient, subgradient and proximal methods; nonsmooth convex optimization; DC (Difference of Convex functions) programming. 9 Hours
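The proximal methods in this unit can be illustrated with proximal gradient descent (ISTA) on a lasso problem, where the proximal operator of the l1 norm is soft thresholding. The problem instance below (dimensions, data, regularization weight) is an assumption for the sketch.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Assumed lasso instance: min 0.5*||Ax - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.5, 0.0, -2.0, 0.0, 0.0])
b = A @ x_true
lam = 0.1
L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the smooth part's gradient

# Proximal gradient (ISTA): gradient step on the smooth part,
# then the prox of the nonsmooth part
x = np.zeros(5)
for _ in range(500):
    g = A.T @ (A @ x - b)
    x = soft_threshold(x - g / L, lam / L)
```

The subgradient method handles the same nonsmooth term without the prox step, but at a slower convergence rate.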
UNIT-IV
Non-convex Optimization:
Sparse recovery, affine rank minimization, low-rank matrix completion; non-convex approaches - projected gradient descent, alternating minimization. 9 Hours
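Projected gradient descent for sparse recovery can be sketched as iterative hard thresholding: a gradient step followed by projection onto the (non-convex) set of s-sparse vectors. The sensing matrix, sparsity level, and signal below are assumptions for the illustration.

```python
import numpy as np

def project_sparse(x, s):
    # Euclidean projection onto {x : ||x||_0 <= s}:
    # keep the s largest-magnitude entries, zero out the rest
    z = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    z[keep] = x[keep]
    return z

# Assumed compressed-sensing instance with a 2-sparse ground truth
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10)) / np.sqrt(30.0)
x_true = np.zeros(10)
x_true[2], x_true[7] = 1.0, -1.0
b = A @ x_true

# Projected gradient descent (iterative hard thresholding)
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(10)
for _ in range(500):
    x = project_sparse(x - step * A.T @ (A @ x - b), s=2)
```

Despite the non-convex constraint set, such iterations recover the true sparse signal when the measurement matrix is sufficiently well conditioned on sparse vectors.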
UNIT-V
Special Topics:
Accelerated first-order methods, Bayesian methods, coordinate methods, cutting plane methods, interior point methods, optimization methods for deep learning, parallel and distributed methods. 9 Hours
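The effect of acceleration can be sketched by comparing plain gradient descent with Nesterov's accelerated gradient on an ill-conditioned quadratic; the diagonal test problem and iteration budget below are assumptions for the example.

```python
import numpy as np

# Assumed ill-conditioned quadratic: f(x) = 0.5 * x^T diag(d) x
d = np.array([1.0, 100.0])
f = lambda x: 0.5 * np.sum(d * x * x)
grad = lambda x: d * x
L = d.max()              # gradient Lipschitz constant
x0 = np.array([1.0, 1.0])
n_iter = 100

# Plain gradient descent with step 1/L
x_gd = x0.copy()
for _ in range(n_iter):
    x_gd = x_gd - grad(x_gd) / L

# Nesterov's accelerated gradient: gradient step at an extrapolated
# point y, followed by a momentum update of y
x_acc, y, t = x0.copy(), x0.copy(), 1.0
for _ in range(n_iter):
    x_next = y - grad(y) / L
    t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    y = x_next + ((t - 1.0) / t_next) * (x_next - x_acc)
    x_acc, t = x_next, t_next
```

After the same number of iterations the accelerated iterate attains a lower objective value, reflecting the improved O(1/k^2) rate over gradient descent's O(1/k).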
Total: 45 Hours
Learning Resources
References:
S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004. (Units I & II)
Suvrit Sra, Sebastian Nowozin and Stephen Wright (Editors), Optimization for Machine Learning, The MIT Press, Dec. 2011. (Unit III)
T. Hastie, R. Tibshirani and M. J. Wainwright, Statistical Learning with Sparsity: The Lasso and Generalizations, Chapman and Hall/CRC Press, 2015. http://web.stanford.edu/~hastie/StatLearnSparsity/
Papers from leading conferences and journals in optimization, as well as applied areas such as signal processing, information theory, and machine learning.