Optimization Theory with Applications


Optimization theory (convex, non-convex, discrete) is a foundational field with applications across almost every technical and non-technical domain, including wireless communication, networking, machine learning, security, transportation systems, finance (portfolio management), and operations research (supply chains, inventory). It has recently attracted even more interest as the underlying toolbox for advanced machine learning on big data (deep neural networks, stochastic gradient descent). This course covers the fundamentals (algorithms, duality, convergence, optimality) as well as a range of applications in (network) resource allocation, distributed optimization, and machine learning. Special emphasis is placed on applying optimization techniques to modern engineering and computer science problems, with the objective of developing the skills and background needed to recognize, formulate, and solve optimization problems.

Teaching and Learning Methods: Lectures supported by exercise sessions, and homework assignments that include both problem solving and programming of the methods covered (CVX, Matlab, Python).

Course Policies: Attendance at lectures and exercise sessions is not mandatory but highly recommended.

  • Book: SUNDARAM R. K. A First Course in Optimization Theory. Cambridge University Press, 1996, 376p.
  • Book: BOYD S., VANDENBERGHE L. Convex Optimization. Cambridge University Press, 2004, 727p.


Prerequisites: eigenvalues, gradient, Hessian, multivariate calculus


Optimization Fundamentals

  • Optimization problem types: unconstrained, constrained, linear, convex, geometric, etc.
  • Optimality conditions (local/global minima), analytical solutions
  • Convexity theory (sets, functions, composition)
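
The optimality conditions above can be illustrated on a small convex quadratic; the following numpy sketch (with made-up illustrative data) checks that the zero-gradient point of f(x) = ½xᵀQx − bᵀx is the global minimizer:

```python
import numpy as np

# Illustrative convex quadratic f(x) = 0.5 x^T Q x - b^T x.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, 1.0])

def f(x):
    return 0.5 * x @ Q @ x - b @ x

# First-order optimality: grad f(x*) = Q x* - b = 0  =>  x* = Q^{-1} b.
x_star = np.linalg.solve(Q, b)

# Convexity promotes this stationary point to a global minimum:
# random perturbations of x* can never decrease f.
rng = np.random.default_rng(0)
assert all(f(x_star + rng.normal(size=2)) >= f(x_star) for _ in range(100))
```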

Unconstrained Optimization Algorithms

  • First-order methods: gradient descent, steepest descent, subgradient methods, convergence theory
  • Second-order methods: Newton, quasi-Newton
  • Acceleration & momentum: Nesterov, heavy-ball
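
As a minimal sketch of two of the methods listed above, the snippet below runs fixed-step gradient descent and a heavy-ball (momentum) variant on an illustrative quadratic; the step size 1/L and momentum 0.5 are example choices, not tuned values:

```python
import numpy as np

# Gradient descent on the convex quadratic f(x) = 0.5 x^T Q x - b^T x.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])   # illustrative positive definite Q
b = np.array([1.0, 1.0])
L = np.linalg.eigvalsh(Q).max()          # smoothness constant (largest eigenvalue)

x = np.zeros(2)
for _ in range(500):
    grad = Q @ x - b                     # gradient of f
    x = x - (1.0 / L) * grad             # fixed step 1/L

# Heavy-ball variant: x_{k+1} = x_k - (1/L) grad + beta (x_k - x_{k-1}).
y, y_prev = np.zeros(2), np.zeros(2)
for _ in range(500):
    y, y_prev = y - (1.0 / L) * (Q @ y - b) + 0.5 * (y - y_prev), y

x_star = np.linalg.solve(Q, b)           # exact minimizer, for comparison
```

Both iterates approach x*; on ill-conditioned problems the momentum term typically converges in noticeably fewer iterations.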

Constrained Optimization Algorithms

  • Lagrange, duality, KKT, primal-dual algorithms
  • Interior-point and penalty methods
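
For a taste of Lagrange duality and the KKT conditions, the sketch below solves the toy equality-constrained problem min ½‖x‖² s.t. aᵀx = 1 (illustrative data) in closed form from the stationarity and feasibility conditions, then verifies both numerically:

```python
import numpy as np

# Toy problem: minimize 0.5*||x||^2 subject to a^T x = 1.
a = np.array([2.0, 1.0])                 # illustrative constraint vector

# Lagrangian: L(x, nu) = 0.5*||x||^2 + nu*(a^T x - 1).
# Stationarity  : x + nu*a = 0
# Feasibility   : a^T x = 1
# Solving the two gives nu = -1/||a||^2 and x* = a/||a||^2.
nu = -1.0 / (a @ a)
x_star = a / (a @ a)

assert np.isclose(a @ x_star, 1.0)            # primal feasibility
assert np.allclose(x_star + nu * a, 0.0)      # stationarity of the Lagrangian
```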

Advanced Topics

  • Distributed optimization, primal/dual decomposition, ADMM
  • Proximal algorithms, mirror-descent
  • (Block) coordinate descent, stochastic gradient descent, batch methods, Adam, Adagrad
  • Online convex optimization (multi-armed bandits)
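
Stochastic gradient descent, from the list above, can be sketched on a synthetic least-squares problem; the data, step size, and iteration count below are illustrative choices (noiseless data so the iterates settle at the true weights):

```python
import numpy as np

# SGD for least squares: min_w 0.5 * sum_i (a_i^T w - y_i)^2 / n.
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 3))                   # synthetic feature rows a_i
w_true = np.array([1.0, -2.0, 0.5])
y = A @ w_true                                  # noiseless targets, for clarity

w = np.zeros(3)
for t in range(20000):
    i = rng.integers(200)                       # sample one data point
    grad_i = (A[i] @ w - y[i]) * A[i]           # gradient of 0.5*(a_i^T w - y_i)^2
    w -= 0.01 * grad_i                          # small constant step
```

With noisy data a decaying step size (or averaging, or Adam-style adaptive steps) would be needed to converge rather than oscillate.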

Discrete and Non-Convex Optimization

  • Non-convex methods
  • Approximation algorithms
  • Submodularity, semidefinite programming (SDP)
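
Submodularity is what makes simple greedy algorithms provably good: for maximizing a monotone submodular function (such as set coverage) under a cardinality constraint, greedy achieves a (1 − 1/e) approximation. A minimal coverage example with made-up sets:

```python
# Greedy maximization of a coverage function (submodular) with k = 2 picks.
sets = {
    "A": {1, 2, 3},
    "B": {3, 4},
    "C": {4, 5, 6},
    "D": {1, 6},
}
k = 2
chosen, covered = [], set()
for _ in range(k):
    # Pick the set with the largest marginal gain in coverage.
    best = max(sets, key=lambda s: len(sets[s] - covered) if s not in chosen else -1)
    chosen.append(best)
    covered |= sets[best]

# Greedy picks "A" then "C", covering all 6 elements (optimal here).
```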


Applications

  • Regularization for machine learning problems (Lasso, L1/L2 regularizers)
  • Matrix completion (recommendation systems, compressed sensing)
  • Resource allocation problems (wireless communication, cloud computing, caching)
  • Deep neural networks and optimization theory
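
The Lasso ties several course threads together (regularization, proximal algorithms, sparse recovery). Below is a minimal ISTA (proximal gradient) sketch on a synthetic sparse-recovery instance; all data and parameter choices are illustrative:

```python
import numpy as np

# ISTA for the Lasso: min_x 0.5*||Ax - y||^2 + lam*||x||_1.
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20)
x_true[[2, 7]] = [1.5, -2.0]                    # sparse ground truth
y = A @ x_true                                  # noiseless measurements
lam = 0.1

L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the gradient
x = np.zeros(20)
for _ in range(500):
    g = A.T @ (A @ x - y)                       # gradient of the smooth part
    z = x - g / L                               # gradient step
    # Proximal step for lam*||.||_1: soft-thresholding.
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
```

The iterate recovers the two nonzero coefficients while shrinking the rest to (near) zero; FISTA adds Nesterov momentum on top of exactly this update.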

Learning Outcomes:

  • Be able to recognize, formulate, and solve optimization problems.
  • Be able to transform certain classes of problems into equivalent convex problems.
  • Be able to understand the pros and cons of different optimization algorithms.
  • Be able to understand the limits of the application of techniques for convex programming to nonconvex programming.
  • Be able to follow recent developments in optimization for big data and machine learning.

Nb hours: 21.00


Grading:

  • Exam (80% of the final grade);
  • Homework (20% of the final grade).