Communication-efficient decentralized learning for intelligent networked systems

Jeong, Eunjeong
Thesis

The advent of the Internet of Things (IoT) has heightened the importance of fully decentralized (serverless) learning, which lets individual models preserve local preferences, protect privacy, and cope with unstable connectivity. This thesis provides a comprehensive guide to the problems and behaviors that arise in decentralized networks whose participants use neural networks to perform various tasks.

The first part of the thesis explores asynchronous communication in decentralized learning over unreliable networks. Two types of impairment that cause the straggler problem, communication delay and computation delay, are discussed and analyzed. The objective is to reach a global consensus in such systems.
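The consensus objective above can be illustrated with a minimal sketch (this is an illustrative toy, not the thesis's algorithm): in asynchronous pairwise gossip, a random pair of nodes exchanges and averages at each step, so a delayed node never blocks the rest of the network, yet all local values still contract toward the global average.

```python
import random

random.seed(0)
values = [0.0, 2.0, 4.0, 6.0]        # one scalar "model" per node
target = sum(values) / len(values)   # the consensus point (3.0)

for _ in range(2000):
    # An asynchronous pairwise exchange: only two nodes participate,
    # so stragglers do not stall the other nodes.
    i, j = random.sample(range(len(values)), 2)
    avg = (values[i] + values[j]) / 2
    values[i] = values[j] = avg

# Pairwise averaging preserves the sum, so the fixed point is the mean.
assert all(abs(v - target) < 1e-6 for v in values)
```

Because each exchange preserves the sum of the values, the only stable state is agreement at the average, which is the consensus property the analysis in this part targets.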

The second part of the thesis delves into asynchronous decentralized federated learning over row-stochastic wireless communications. We underline the decoupling of the communication and computation timelines, which grants network users full autonomy, unlike coupled methods in which gradient updates and gossip exchanges occur in a predefined order or sequentially. In particular, our scheme is the first to consider collaborative model updates on a continuous timeline. Convergence analysis and numerical experiments support the validity of the scheme, and several intriguing open questions are raised that may further enhance the performance and scope of the framework.
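The role of row-stochastic weights can be sketched as follows (the matrix and values are illustrative assumptions, not taken from the thesis): in a mixing step x ← Wx, each row of W sums to 1, which matches receive-side averaging over wireless links — a node only needs the weights it assigns to what it hears, not knowledge of who hears it.

```python
import numpy as np

# Illustrative row-stochastic mixing matrix for a 3-node network.
W = np.array([[0.6, 0.4, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.5, 0.5]])
assert np.allclose(W.sum(axis=1), 1.0)  # each row sums to 1

x = np.array([1.0, 5.0, 9.0])           # local model parameters (scalars)
for _ in range(100):
    x = W @ x                           # receive-side averaging step

# Repeated mixing drives all nodes to a common value (consensus); with
# row-stochastic weights it is a weighted, not uniform, average of the
# initial values.
assert np.ptp(x) < 1e-8
```

The spread of the iterates shrinks geometrically at a rate set by the second-largest eigenvalue modulus of W, which is why consensus is reached even though no node ever coordinates a global round.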

In the third part, the thesis addresses personalization in decentralized collaborative networks. Each agent exhibits distinct behaviors or patterns during its learning process, rather than blindly pursuing a single global model applied uniformly to all agents. To address this challenge, a novel algorithm that leverages knowledge distillation for similarity measurement is proposed. The approach quantifies the statistical distance between local models, strengthening the connections between agents with higher relevance and significantly improving per-client performance.
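One common way to realize a distillation-based distance (the helper names and batch here are hypothetical, shown only to make the idea concrete) is to compare two agents' temperature-softened predictions on a shared batch via the KL divergence; agents whose models behave similarly get a small distance and can then be weighted as more relevant neighbors.

```python
import numpy as np

def softmax(z, T=2.0):
    """Temperature-softened softmax over the class axis."""
    e = np.exp((z - z.max(axis=1, keepdims=True)) / T)
    return e / e.sum(axis=1, keepdims=True)

def distillation_distance(logits_a, logits_b, T=2.0):
    """Mean KL(p_a || p_b) between softened predictions on a shared batch."""
    p, q = softmax(logits_a, T), softmax(logits_b, T)
    return float(np.mean(np.sum(p * np.log(p / q), axis=1)))

rng = np.random.default_rng(0)
base = rng.normal(size=(8, 5))                 # logits of one agent's model
near = base + 0.01 * rng.normal(size=(8, 5))   # a behaviorally similar model
far = rng.normal(size=(8, 5))                  # an unrelated model

# A similar model yields a smaller statistical distance than an
# unrelated one, so relevance between agents can be quantified.
assert distillation_distance(base, near) < distillation_distance(base, far)
```

Because the comparison uses only model outputs on shared data, agents can measure relevance without exchanging raw training data, which fits the privacy motivation of the decentralized setting.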

By combining these three key aspects, the thesis provides comprehensive guidance for developing communication-efficient decentralized learning approaches for intelligent networked systems. This research strives to push the boundaries of fully decentralized learning and to contribute to the advancement of distributed machine learning techniques.


Type:
Thesis
Date:
2024-09-27
Department:
Communication Systems
Eurecom Ref:
7710
Copyright:
© EURECOM. Personal use of this material is permitted. The definitive version of this paper was published in Thesis and is available at:
PERMALINK : https://www.eurecom.fr/publication/7710