This tutorial presents current research on implementing machine learning algorithms in wireless systems. Specifically, we provide comprehensive coverage of a distributed learning paradigm based on over-the-air computing, also known as machine learning over-the-air (ML-OTA). We present the general architecture, the model training algorithm, and an analytical framework that establishes the convergence rate of ML-OTA. The analysis accounts for the key effects of wireless transmission, i.e., channel fading and interference, on convergence performance, revealing how interference degrades the model training process. We then elaborate on several improvements to ML-OTA from different aspects. In particular, we introduce model pruning schemes that reduce the computation and communication overhead of ML-OTA. We also discuss algorithmic approaches to system enhancement: adopting adaptive optimization methods to accelerate model training, leveraging gradient clipping to improve the robustness of the training process, and employing a personalization framework to cope with system heterogeneity. Finally, we introduce an analysis of the generalization error of statistical models trained by ML-OTA, which shows that wireless interference has the potential to improve generalization capability. A few signal processing methods that exploit interference for better generalization are also discussed.
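To make the ML-OTA idea concrete, the following is a minimal simulation sketch, not the tutorial's actual algorithm: devices clip their local gradients (for robustness, as mentioned above), precode against a known Rayleigh fading coefficient, and transmit simultaneously so the receiver observes the superposed signal corrupted by interference-plus-noise. All function names, the channel model, and the parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def clip(g, c):
    # Scale the gradient so its L2 norm is at most c
    # (gradient clipping, used here for training robustness).
    n = np.linalg.norm(g)
    return g if n <= c else g * (c / n)

def ota_aggregate(grads, noise_std=0.1, clip_thresh=1.0):
    """Hypothetical over-the-air aggregation sketch.

    Each device inverts its (assumed known) fading magnitude and all
    devices transmit at once; the multiple-access channel adds their
    signals, and the receiver divides by the number of devices to
    estimate the average gradient.
    """
    d = grads[0].size
    rx = np.zeros(d)
    for g in grads:
        h = rng.rayleigh(scale=1.0)       # fading magnitude (assumed known;
                                          # real systems truncate small h)
        x = clip(g, clip_thresh) / h      # channel-inversion precoding
        rx += h * x                       # superposition over the air
    rx += noise_std * rng.standard_normal(d)  # interference + receiver noise
    return rx / len(grads)                # noisy estimate of the average

grads = [rng.standard_normal(4) for _ in range(10)]
est = ota_aggregate(grads)
```

With perfect channel inversion the superposition yields the sum of clipped gradients, so `est` deviates from the true average only by the scaled interference-plus-noise term, which is the effect the convergence analysis quantifies.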
Tutorial: A tale of interference in machine learning over the air
VCC 2024, IEEE Virtual Conference on Communications, 3-5 December 2024 (Virtual Conference)
Type:
Tutorial
Date:
2024-12-03
Department:
Communication systems
Eurecom Ref:
8057
PERMALINK : https://www.eurecom.fr/publication/8057