Graduate School and Research Center in Digital Sciences

Machine Learning for Communication Systems

Technical Teaching


This course introduces fundamental concepts and methods in machine learning, with particular emphasis on applications to wireless communications and networking. After a brief overview (a crash course adapted to the class's background and prior knowledge) of several important algorithms and their theoretical underpinnings, we illustrate key aspects of their practical application to communication systems. We cover applications that span different layers and system configurations, such as the physical layer (channels, signal transmission, detection) and resource allocation at the link/access layer. We then focus on distributed and large-scale learning in wireless networks under various constraints (resource, computational, latency, etc.). Finally, we highlight key challenges in realizing the promise of machine learning for communication networks.

Teaching and Learning Methods: Lectures, exercise sessions, and homework assignments that include both problem solving and programming of the methods covered. Each session starts by summarizing key concepts from the previous lecture. Part of each lecture is dedicated to illustrative examples and exercises.

Course Policies: Attendance at lab sessions is mandatory. Attendance at lectures and exercise sessions is highly recommended.


Bibliography:

  • S. Shalev-Shwartz and S. Ben-David, “Understanding Machine Learning”, Cambridge University Press
  • M. Mohri, A. Rostamizadeh, and A. Talwalkar, “Foundations of Machine Learning”, MIT Press
  • T. Hastie, R. Tibshirani, and J. Friedman, “The Elements of Statistical Learning”, Springer


Requirements: Basic knowledge of linear algebra, probability, and calculus


1. Preliminaries (ML basics)

  • Supervised learning (linear and logistic regression)
  • Classification (SVM)
  • Unsupervised learning (RL, geometric, etc.)
  • Generative models (GANs, VAEs)
  • Deep networks


2. Applications to Communication Systems

  • Wireless channel estimation and prediction
  • PHY layer: modulation, coding, detection, MIMO, beamforming, hardware impairments/non-linearities, end-to-end systems
  • Resource allocation: power control, scheduling
  • Traffic management
  • Spectrum management: sensing, access
  • Vehicular networks, Internet-of-Things (IoT)


3. Distributed Learning in Constrained Communication Systems

  • Distributed optimization, stochastic gradient descent in resource-constrained systems
  • Edge learning
  • Federated learning
  • Low-latency ML
  • Straggler-avoidance techniques


4. Theoretical Aspects

  • Representation and Approximation
  • Optimization and Online learning
  • Generalization (concentration of measure, Rademacher complexity, VC dimension)


5. Special topics

  • Reducing complexity: quantization, binarization
  • Active batch learning
  • Transfer learning
  • Few-shot learning


Learning outcomes:

  • Understand the fundamentals of machine learning
  • Be able to apply learning algorithms to communication system problems
  • Understand the communication aspects involved in ML-enabled networks
  • Be able to follow recent developments in neural networks and machine learning


Grading Policy: Lab reports (25%), final written exam (75%); optional project (20% bonus)

Nb hours: 42.00

Nb hours per week: 3.00