COMSYS TALK EURECOM

Mikael Johansson - Professor at KTH, Sweden
Communication systems

Date: Friday, February 1st, 2pm
Location: Eurecom

Title: "Novel theory for large-scale and distributed optimization : asynchrony, communication-efficiency and acceleration" Speaker: Prof. Mikael Johansson, KTH Room: Shannon meeting room, EURECOM Date: Friday Feb 1st. 2pm Abstract : Distributed optimization problems appear naturally in many engineering systems. Examples include radio resource management in wireless networks, decentralized coordination of multi-robot systems, and machine learning on large distributed data sets. However, the development of distributed optimization algorithms involves many design decisions for which we have very limited theoretical support. For example, what information needs to be shared among devices, how often, and at what accuracy? How should one best process delayed and compressed data to guarantee real-time performance with limited information exchange? This talk describes our recent efforts in understanding and addressing some of these challenges. We first describe a framework for analyzing asynchronous iterations, which allows to guarantee convergence under total and partial asynchronism and to provide explicit bounds on how the amount of asynchrony impacts the convergence times. We use this framework to design a proximal incremental aggregate gradient method, and demonstrate how our theoretical tools are easy to apply and offer strong performance guarantees. We then turn our attention to gradient compression techniques for reducing the communication overhead in distributed learning. We show how several compression heuristics can be analyzed in a uniform manner and provide theoretical bounds on convergence rates and information exchange for convex optimization using compressed gradients. Simulations and real-world implementations illustrate our theoretical findings. Finally, we demonstrate how general-purpose schemes for improving the convergence of iterative methods can be extended to deal with constrained optimization problems. We focus on Anderson acceleration of the classical projected gradient descent (PGD) method. Due to the presence of a constraint set, the relevant fixed-point mapping for PGD is not differentiable. However, we show that the existing convergence results for Anderson acceleration of smooth fixed-point iterations can be extended to the non-smooth case. The talk is based on joint work with students and collaborators, including Arda Aytekin, Hamid Reza Feyzmahdavian, Sarit Khirirat, Vien Vann Mai and Dan Alistarh. ----------------------------------------------- Résumé: L'optimisation distribuée apparait dans de nombreux problèmes d'ingénierie, communication sans-fil, coordination de robots, et big data. Les algorithmes reposent sur une théorie de la décision distribuée qui est encore très limitée. Par exemple quelle information doit être partagée et avec quelle précision. Cette présentation donne un état des progrès récents dans le domaine dans notre groupe. ---------------------------------------------- Biography : Mikael Johansson is a Professor at KTH, Sweden. He earned the M.Sc. and Ph.D. in Electrical Engineering from Lund University, Sweden, in 1994 and 1999, respectively. He held postdoctoral research positions at Stanford University and UC Berkeley before joining KTH in 2002, where he now serves as a full professor. His research interests are in distributed optimization, wireless networking, and control. 
He has published several ISI highly cited papers and has served on the editorial boards of Automatica and the IEEE Transactions on Control of Network Systems, as well as on the program committees of conferences such as IEEE CDC, IEEE Infocom, and ACM SenSys.
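
-----------------------------------------------

Illustrative sketches:

First sketch. To make the proximal incremental aggregate gradient (PIAG) method mentioned in the abstract concrete, here is a minimal Python sketch. The least-squares components, the l1 regularizer, the cyclic update order, the artificial delay bound tau and the conservative step size are all assumptions made for illustration; this shows the structure of such an iteration, not the speaker's implementation.

    import numpy as np
    from collections import deque

    def soft_threshold(v, t):
        # proximal operator of t * ||.||_1
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    rng = np.random.default_rng(0)
    m, n, d = 8, 5, 20                        # m components f_i(x) = 0.5*||A_i x - b_i||^2
    A = rng.standard_normal((m, n, d))
    b = rng.standard_normal((m, n))
    lam = 0.1                                 # weight of the l1 regularizer (assumed)

    def grad(i, x):
        return A[i].T @ (A[i] @ x - b[i])

    L = sum(np.linalg.norm(A[i], 2) ** 2 for i in range(m))  # smoothness bound for sum_i f_i
    tau = 5                                   # assumed bound on gradient staleness
    gamma = 1.0 / (L * (tau + 2))             # conservative step size under bounded delay

    x = np.zeros(d)
    history = deque([x.copy()], maxlen=tau + 1)         # past iterates, to simulate delay
    table = np.array([grad(i, x) for i in range(m)])    # stored (possibly stale) gradients

    for k in range(20000):
        i = k % m                             # cyclic component selection
        table[i] = grad(i, history[0])        # refresh one gradient at a stale iterate
        x = soft_threshold(x - gamma * table.sum(axis=0), gamma * lam)
        history.append(x.copy())

    obj = 0.5 * sum(np.linalg.norm(A[i] @ x - b[i]) ** 2 for i in range(m)) \
        + lam * np.abs(x).sum()
    print("objective after PIAG:", obj)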
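Second sketch. The abstract does not say which compression heuristics are analyzed; top-k sparsification is one common member of that family, so the sketch below applies it to gradient descent on an assumed least-squares problem. The sparsity budget k and the step size 1/L are likewise assumptions for illustration.

    import numpy as np

    def top_k(g, k):
        # keep the k largest-magnitude entries of g, zero out the rest
        idx = np.argpartition(np.abs(g), -k)[-k:]
        out = np.zeros_like(g)
        out[idx] = g[idx]
        return out

    rng = np.random.default_rng(1)
    N, d = 200, 100
    A = rng.standard_normal((N, d))
    b = rng.standard_normal(N)

    def grad(x):
        return A.T @ (A @ x - b) / N

    L = np.linalg.norm(A, 2) ** 2 / N         # smoothness constant of the loss
    x = np.zeros(d)
    k = 10                                    # transmit only 10 of 100 coordinates per round

    for t in range(3000):
        x -= (1.0 / L) * top_k(grad(x), k)    # descend along the compressed gradient

    print("gradient norm with top-k compression:", np.linalg.norm(grad(x)))

Since <top_k(g), g> = ||top_k(g)||^2 >= (k/d) ||g||^2, the compressed step remains a descent direction; uniform properties of this kind are what make a unified analysis of several compressors possible.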
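Third sketch. Anderson acceleration of projected gradient descent, applied to the fixed-point map T(x) = P_C(x - gamma * grad f(x)), which is non-differentiable because of the projection, as the abstract points out. The box constraint, the quadratic objective, the memory size m and the objective-decrease safeguard are assumptions, not details from the talk.

    import numpy as np

    rng = np.random.default_rng(2)
    d = 30
    M = rng.standard_normal((d, d))
    Q = M.T @ M + np.eye(d)                   # positive definite Hessian
    c = rng.standard_normal(d)

    f = lambda x: 0.5 * x @ Q @ x + c @ x
    grad = lambda x: Q @ x + c
    proj = lambda x: np.clip(x, 0.0, 1.0)     # projection onto the box [0, 1]^d
    gamma = 1.0 / np.linalg.norm(Q, 2)
    T = lambda x: proj(x - gamma * grad(x))   # PGD fixed-point map (non-differentiable)

    m = 5                                     # Anderson memory
    x = np.zeros(d)
    Ts, Rs = [], []                           # recent T(x_j) values and residuals T(x_j) - x_j

    for k in range(100):
        Tx = T(x)
        Ts.append(Tx)
        Rs.append(Tx - x)
        Ts, Rs = Ts[-(m + 1):], Rs[-(m + 1):]
        p = len(Rs)
        if p == 1:
            x = Tx                            # plain PGD step until history builds up
            continue
        dR = np.column_stack([Rs[j + 1] - Rs[j] for j in range(p - 1)])
        dT = np.column_stack([Ts[j + 1] - Ts[j] for j in range(p - 1)])
        coef, *_ = np.linalg.lstsq(dR, Rs[-1], rcond=None)
        cand = Ts[-1] - dT @ coef             # Anderson-extrapolated iterate
        # safeguard: keep the extrapolation only if it does not increase f;
        # cand may leave the box, but the next application of T re-projects it
        x = cand if f(cand) <= f(Tx) else Tx

    print("fixed-point residual ||T(x) - x||:", np.linalg.norm(T(x) - x))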