This course introduces fundamental concepts in machine learning (ML), with particular emphasis on communication-efficient distributed learning and applications to networked systems. After a brief introduction to ML methods and deep neural networks (adapted to enrolled students’ prior knowledge), we present the key aspects of their efficient application to communication systems. We will cover applications that span different layers and system configurations, including the physical layer (signaling, detection), multiple access, and radio resource management. We then focus on large-scale distributed and decentralized learning in wireless networks, in particular under constraints (completion time, radio resources, computational efficiency, etc.). We also cover reinforcement learning and theoretical ML topics (generalization, approximation, fairness). Finally, we highlight key challenges in realizing the promise of machine learning for communication networks.
Teaching and Learning Methods: Lectures, exercise sessions, lab sessions, and potentially homework assignments that combine problem solving and programming of the methods covered. Each session starts by summarizing key concepts from the previous lecture. Part of each lecture is dedicated to illustrative examples and exercises.
Course Policies: Attendance at lab sessions is mandatory. Attendance at lectures and exercise sessions is highly recommended.
Bibliography:
- S. Shalev-Shwartz and S. Ben-David, “Understanding Machine Learning”, Cambridge University Press
- M. Mohri, A. Rostamizadeh, and A. Talwalkar, “Foundations of Machine Learning”, MIT Press
- T. Hastie, R. Tibshirani, and J. Friedman, “The Elements of Statistical Learning”, Springer
Requirements: Basic knowledge of linear algebra, probability, and calculus.
1. Machine Learning Techniques
- Preliminaries & ML basics
- Supervised (regression, classification) and unsupervised learning
- Deep learning
- Convolutional Neural Networks
- Generative models (VAEs and GANs)
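To give a flavor of the programming exercises in this part, here is a minimal supervised-learning sketch (a logistic-regression classifier trained on hypothetical toy data by gradient descent; purely illustrative, not taken from the course material):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs in 2D.
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(+1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Logistic regression trained by batch gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted class-1 probabilities
    grad_w = X.T @ (p - y) / len(y)      # gradient of the mean cross-entropy loss
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

# Training accuracy of the fitted linear classifier.
accuracy = np.mean((1 / (1 + np.exp(-(X @ w + b))) > 0.5) == y)
```

The same gradient-descent loop generalizes directly to the deep models above, with the closed-form gradient replaced by backpropagation.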
2. Applications to Communication Systems
- PHY layer: modulation, coding, channel estimation, detection, MIMO
- Autoencoders and End-to-End Communication Systems
- Multiple access and resource allocation (power control, scheduling, spectrum management)
- Autonomous networks, Internet-of-Things (IoT)
3. Distributed Machine Learning in Networks
- Distributed optimization & SGD in resource-constrained systems
- Communication-Efficient Distributed Learning
- Low-latency ML
- Edge and On-device AI
- Federated learning
- Decentralized learning
4. Reinforcement Learning (RL)
- Markov decision processes
- Q-learning and Policy Optimization methods
- Deep Reinforcement Learning (DRL)
- Multi-agent systems
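As an illustration of the tabular methods in this part, a minimal Q-learning sketch on a hypothetical 5-state chain MDP (the environment, reward, and hyperparameters are invented for illustration, not course material); the behavior policy is uniform random, and Q-learning still recovers the optimal greedy policy because it is off-policy:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical chain MDP: states 0..4, actions 0 (left) / 1 (right);
# reward 1 for reaching the rightmost state, which ends the episode.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9

s = 0
for _ in range(10000):
    a = int(rng.integers(n_actions))  # uniform-random behavior policy (off-policy)
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s_next == n_states - 1 else 0.0
    # Q-learning temporal-difference update toward the greedy target
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
    s = 0 if s_next == n_states - 1 else s_next  # restart episode at the goal

policy = np.argmax(Q, axis=1)  # greedy policy w.r.t. the learned Q-table
```

Deep RL replaces the Q-table with a neural network, which is what makes the multi-agent and networking applications in this course tractable at scale.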
5. Theoretical Aspects
- Representation and Approximation
- Explainability & Interpretability
- Algorithmic fairness
Learning Outcomes: at the end of the course, students should:
- understand the fundamentals of machine learning and deep learning
- be able to apply learning algorithms to communication and networking problems
- understand the communication aspects involved in ML-empowered wireless networks
- be able to follow recent developments and emerging directions in ML theory and applications
Nb hours: 42.00 (including 9 hours of lab sessions).
Grading Policy: Lab reports (30%), written final exam (70%); optional project (20% bonus).
Nb hours per week: 3.00