Machine Learning for Communication Systems


This course introduces fundamental concepts in machine learning with applications to networked systems and the Internet of Intelligent Things (IoIT). Students will gain foundational knowledge of cutting-edge methods, including autoencoders, deep generative models, and reinforcement learning. They will also become familiar with fundamental information-theoretic frameworks (e.g., the information bottleneck) and theoretical principles. We will also introduce large-scale distributed and decentralized learning over wireless networks, in particular under constraints (completion time, radio resources, computational efficiency, etc.). Finally, we highlight key theoretical and practical challenges, together with emerging topics such as trustworthiness, fairness, and energy efficiency.

Teaching and Learning Methods: Lectures, exercise sessions, and lab sessions. Each lecture starts with a summary of the key concepts from the previous lecture, and part of each lecture is often dedicated to illustrative examples and exercises.

Course Policies: Attendance at lab sessions is mandatory. Attendance at lectures and exercise sessions is highly recommended.

Bibliography:

  • S. Shalev-Shwartz and S. Ben-David, “Understanding Machine Learning”, Cambridge University Press
  • M. Mohri, A. Rostamizadeh, and A. Talwalkar, “Foundations of Machine Learning”, MIT Press


Prerequisites: Basic knowledge of linear algebra, probability, and calculus.


1. Machine Learning Techniques

  • Preliminaries & Recap on ML basics
  • Fundamentals of deep learning
  • Autoencoders and End-to-End Communication Systems
  • Deep generative models (VAEs and GANs)
  • Applications to autonomous networked systems and Internet of Intelligent Things (IoIT)
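The autoencoder view of end-to-end communication (third bullet above) can be sketched with a toy NumPy example. Everything here is an illustrative assumption, not course material: four messages, two real channel uses, an AWGN channel, a linear encoder with an average power constraint, and a simplified encoder gradient that ignores the normalization term.

```python
import numpy as np

rng = np.random.default_rng(0)
M, n = 4, 2                          # 4 messages, 2 real channel uses (toy choice)
W_enc = rng.normal(0, 0.5, (M, n))   # encoder: one-hot message -> channel symbols
W_dec = rng.normal(0, 0.5, (n, M))   # decoder: noisy symbols -> class logits
lr, sigma = 0.1, 0.1                 # illustrative learning rate and noise level

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for step in range(3000):
    labels = rng.integers(0, M, 64)
    X = np.eye(M)[labels]                      # one-hot messages
    S = X @ W_enc                              # linear encoder
    S = S / np.sqrt((S**2).mean() + 1e-9)      # average power constraint
    Y = S + sigma * rng.normal(size=S.shape)   # AWGN channel
    P = softmax(Y @ W_dec)                     # decoder posterior over messages
    G = (P - X) / len(X)                       # cross-entropy gradient w.r.t. logits
    W_dec -= lr * (Y.T @ G)
    # simplified encoder gradient: ignores the power-normalization term
    W_enc -= lr * (X.T @ (G @ W_dec.T))

# evaluate the learned system's symbol error rate
labels = rng.integers(0, M, 2000)
X = np.eye(M)[labels]
S = X @ W_enc
S = S / np.sqrt((S**2).mean() + 1e-9)
Y = S + sigma * rng.normal(size=S.shape)
err = ((Y @ W_dec).argmax(axis=1) != labels).mean()
```

The learned rows of `W_enc` play the role of a constellation: training pushes the four symbol points apart so the decoder can separate them under noise, which is the basic idea behind learned end-to-end transceivers.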

2. Theoretical Aspects

  • Information-theoretic measures
  • Statistical distances
  • Information bottleneck and rate-distortion theory
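The first two bullets can be made concrete with a short sketch; the channel and the distributions below are illustrative numbers, not course material. It computes the mutual information of a binary symmetric channel from its joint distribution and shows that the KL divergence, a basic statistical "distance", is not symmetric.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits, ignoring zero-probability entries."""
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def mutual_information(joint):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) from a joint probability table."""
    return entropy(joint.sum(axis=1)) + entropy(joint.sum(axis=0)) - entropy(joint.ravel())

def kl(p, q):
    """KL divergence D(p || q) in bits (assumes q > 0 wherever p > 0)."""
    mask = p > 0
    return (p[mask] * np.log2(p[mask] / q[mask])).sum()

# Binary symmetric channel, crossover 0.1, uniform input: I = 1 - H2(0.1)
eps = 0.1
joint = 0.5 * np.array([[1 - eps, eps], [eps, 1 - eps]])
I = mutual_information(joint)

# KL is asymmetric: D(p||q) != D(q||p) in general
p, q = np.array([0.5, 0.5]), np.array([0.9, 0.1])
kl_pq, kl_qp = kl(p, q), kl(q, p)
```

For this channel, `I` equals the BSC capacity 1 − H2(0.1) ≈ 0.531 bits, and the two KL values differ, which is why KL is a divergence rather than a metric.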

3. Distributed Machine Learning over Networks

  • Distributed optimization in resource-constrained systems
  • Communication-Efficient Distributed Edge Learning
  • Federated learning
  • Decentralized learning
  • Low-latency and on-device AI
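A single federated-averaging (FedAvg) loop, the canonical federated learning baseline, can be sketched as follows. The setup is entirely illustrative: five clients with synthetic linear-regression data, full-batch local gradient steps standing in for local SGD, and arbitrary learning-rate/round settings.

```python
import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0])       # ground-truth model (illustrative)

# Five hypothetical clients, each holding its own local dataset
clients = []
for _ in range(5):
    n = int(rng.integers(20, 50))
    X = rng.normal(size=(n, 2))
    y = X @ w_true + 0.01 * rng.normal(size=n)
    clients.append((X, y))

def local_update(w, X, y, epochs=5, lr=0.05):
    """Local training: full-batch gradient steps on the client's squared loss."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

w_global = np.zeros(2)
for rnd in range(30):                          # communication rounds
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(w_global, X, y))   # client computes locally
        sizes.append(len(y))
    # server aggregates: average weighted by local dataset size (FedAvg)
    w_global = np.average(updates, axis=0, weights=np.array(sizes, float))
```

Only model parameters travel between clients and server; the raw data stays local, which is the communication pattern that the "communication-efficient" and "decentralized" variants in this part then refine.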

4. Reinforcement Learning

  • Markov decision processes
  • Q-learning and policy optimization methods
  • Deep Reinforcement Learning (DRL)
  • Multi-agent systems
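The tabular Q-learning update can be sketched on a toy MDP; the five-state chain, the reward of 1 at the goal, and the hyperparameters below are illustrative assumptions, not course material.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2          # chain MDP: action 0 = left, 1 = right
GOAL = n_states - 1
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.2   # step size, discount, exploration rate

def step(s, a):
    """Deterministic chain dynamics: reward 1 only on reaching the goal."""
    s2 = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == GOAL else 0.0
    return s2, r, s2 == GOAL

for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s2, r, done = step(s, a)
        # Q-learning update: Q(s,a) <- Q(s,a) + alpha * (target - Q(s,a))
        target = r + (0.0 if done else gamma * Q[s2].max())
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2

policy = Q.argmax(axis=1)           # greedy policy extracted from Q
```

After training, the greedy policy moves right in every non-goal state, and the learned Q-values decay geometrically (by γ) with distance from the goal, matching the optimal value function of this chain.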

5. Emerging Topics

  • Trustworthiness and Fairness
  • Explainability & Interpretability
  • Sustainable and Green AI

Learning outcomes: upon completion of this course, students will be able to:

  • understand the fundamentals of machine learning and deep learning
  • apply learning algorithms to communication problems and networked systems
  • understand the communication aspects involved in distributed edge learning
  • follow recent developments and emerging directions in ML theory and applications

Nb hours: 42.00 (including 9 hours of lab sessions).

Grading Policy: Lab reports (30%), written final exam (70%).