Optimization Theory with Applications

Abstract

Optimization theory (convex, non-convex, discrete) is an important field with applications in almost every technical and non-technical domain, including wireless communication, networking, machine learning, security, transportation systems, finance (portfolio management), and operations research (supply chain, inventory). It has recently attracted even more interest as the underlying tool for advanced machine learning techniques for big data (deep neural networks, stochastic gradient descent). In this course we will cover both the fundamentals (algorithms, duality, convergence, optimality) and a range of applications in (network) resource allocation, distributed optimization, and machine learning. Special emphasis will be placed on exemplifying applications of optimization techniques to modern engineering and CS problems, with the objective of developing the skills and background necessary to recognize, formulate, and solve optimization problems.

Teaching and Learning Methods: Lectures supported by exercise sessions and homework assignments that include both problem solving and programming of the learned methods (CVX, MATLAB, Python).

Course Policies: Attendance at lectures and exercise sessions is not mandatory but is highly recommended.

Bibliography
  • Sundaram, "A First Course in Optimization Theory", Cambridge University Press
  • Boyd and Vandenberghe, "Convex Optimization", Cambridge University Press

Requirements

Eigenvalues, gradients, Hessians, multivariate calculus

Description

1. Optimization Fundamentals

  • Optimization problem types: unconstrained, constrained, linear, convex, geometric, etc.
  • Optimality conditions (local/global minima), analytical solutions
  • Convexity theory (sets, functions, composition); the standard definition is recalled after this list
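
As a concrete anchor for the convexity topic above, the standard textbook definition (as in Boyd and Vandenberghe) reads:

    A function $f : \mathbb{R}^n \to \mathbb{R}$ is convex if its domain is a
    convex set and, for all $x, y$ in the domain and all $\theta \in [0, 1]$,

        $f(\theta x + (1 - \theta) y) \le \theta f(x) + (1 - \theta) f(y)$.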

2. Unconstrained Optimization Algorithms

  • First-order methods: gradient descent, steepest descent, subgradient methods, convergence theory (a minimal gradient-descent sketch follows this list)
  • Second-order methods: Newton, quasi-Newton
  • Acceleration and momentum: Nesterov, heavy-ball
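
To fix ideas, here is a minimal gradient-descent sketch in Python for the least-squares objective f(x) = 0.5 ||Ax - b||^2; the step size and iteration count are illustrative choices, not values prescribed by the course:

    import numpy as np

    def gradient_descent(A, b, step=1e-3, iters=500):
        """Minimize 0.5 * ||Ax - b||^2 with a fixed-step gradient method."""
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            grad = A.T @ (A @ x - b)  # gradient of the quadratic objective
            x -= step * grad          # move along the negative gradient
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 10))
    b = rng.standard_normal(50)
    print(np.linalg.norm(A @ gradient_descent(A, b) - b))

For a fixed step, the iteration converges only when the step is below 2/L, where L is the largest eigenvalue of A^T A; making such conditions precise is the business of the convergence-theory part of this unit.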

3. Constrained Optimization Algorithms

  • Lagrange multipliers, duality, KKT conditions, primal-dual algorithms (see the CVXPY sketch after this list)
  • Interior-point and penalty methods
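
The following hedged CVXPY sketch (a toy projection problem with made-up data) shows how Lagrange multipliers appear in practice: the solver exposes each constraint's optimal dual variable, which together with the primal optimum satisfies the KKT conditions:

    import numpy as np
    import cvxpy as cp

    # Project the point (2, 1) onto the half-space x1 + x2 <= 1.
    x = cp.Variable(2)
    constraint = x[0] + x[1] <= 1
    prob = cp.Problem(cp.Minimize(cp.sum_squares(x - np.array([2.0, 1.0]))),
                      [constraint])
    prob.solve()

    print("primal optimum:", x.value)                # approx [1, 0]
    print("KKT multiplier:", constraint.dual_value)  # approx 2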

4. Advanced Topics

  • Distributed optimization, primal/dual decomposition, ADMM
  • Proximal algorithms, mirror descent
  • (Block) coordinate descent, stochastic gradient descent, batch methods, Adam, AdaGrad (a minimal SGD sketch follows this list)
  • Online convex optimization (multi-armed bandits)
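
As a minimal illustration of the stochastic methods in this unit, the sketch below runs mini-batch SGD on the same least-squares loss as before; the batch size, step size, and epoch count are again arbitrary illustrative choices:

    import numpy as np

    def sgd(A, b, step=1e-2, batch=5, epochs=100, seed=0):
        """Mini-batch SGD for 0.5 * ||Ax - b||^2."""
        rng = np.random.default_rng(seed)
        n, d = A.shape
        x = np.zeros(d)
        for _ in range(epochs):
            order = rng.permutation(n)            # reshuffle each epoch
            for start in range(0, n, batch):
                sel = order[start:start + batch]  # draw a mini-batch
                grad = A[sel].T @ (A[sel] @ x - b[sel]) / len(sel)
                x -= step * grad                  # noisy gradient step
        return x

Adaptive methods such as Adam and AdaGrad replace the fixed step with coordinate-wise steps built from past gradients.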

5. Discrete and Non-Convex Optimization

  • Non-convex methods
  • Approximation algorithms
  • Submodularity, semidefinite programming (SDP); a greedy submodular-maximization sketch follows this list
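
For the submodularity topic, here is a hedged sketch of the classic greedy rule for monotone submodular maximization under a cardinality constraint, instantiated on a toy set-cover objective (the sets are invented for illustration):

    def greedy_max_cover(sets, k):
        """Greedily pick at most k sets to maximize coverage."""
        covered, chosen = set(), []
        for _ in range(k):
            best = max(sets, key=lambda s: len(s - covered))
            if not best - covered:
                break                 # no marginal gain remains
            chosen.append(best)
            covered |= best
        return chosen, covered

    sets = [frozenset(s) for s in ({1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7})]
    print(greedy_max_cover(sets, k=2)[1])  # prints the covered universe {1, ..., 7}

For monotone submodular objectives this greedy rule carries the well-known (1 - 1/e) approximation guarantee.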

6. Applications

  • Regularization for machine learning problems (Lasso, L1/L2 regularizers); a Lasso sketch follows this list
  • Matrix completion (recommendation systems, compressed sensing)
  • Resource allocation problems (wireless communication, cloud computing, caching)
  • Deep neural networks and optimization theory
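
Connecting the applications back to the tools, the hedged sketch below solves a Lasso problem in CVXPY; the design matrix, sparsity level, noise, and regularization weight lam are all assumptions made for illustration:

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(1)
    A = rng.standard_normal((30, 60))    # under-determined design matrix
    x_true = np.zeros(60)
    x_true[:5] = rng.standard_normal(5)  # sparse ground truth
    b = A @ x_true + 0.01 * rng.standard_normal(30)

    x = cp.Variable(60)
    lam = 0.1                            # L1 regularization weight
    prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(A @ x - b)
                                  + lam * cp.norm1(x)))
    prob.solve()
    print("nonzeros in estimate:", int(np.sum(np.abs(x.value) > 1e-4)))

The L1 term promotes sparsity; the matrix-completion application uses the same idea with the nuclear norm in place of the L1 norm.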

Learning Outcomes

  • be able to recognize, formulate, and solve optimization problems
  • be able to transform certain classes of problems into equivalent convex problems
  • be able to understand the pros and cons of different optimization algorithms
  • be able to understand the limits of applying convex programming techniques to non-convex programming
  • be able to follow recent developments in optimization for big data and machine learning

Nb hours: 21.00, organized into 6 teaching sessions, each with at least 30% of the time (1 hour) dedicated to examples and illustrative exercises, plus 1 dedicated exercise session (3 hours).

Grading Policy: 80% exam, 20% homework and take-home programming exercises.