Federated learning (FL) has enabled training machine learning models that exploit the data of multiple agents without compromising privacy. However, FL is known to be vulnerable to data heterogeneity, partial device participation, and infrequent communication with the server, which are nonetheless distinctive characteristics of this framework. While much of the literature has tackled these weaknesses using different tools, only a few works have considered inter-agent communication to improve FL's performance. In this work, we present FedDec, an algorithm that interleaves peer-to-peer communication and parameter averaging between the local gradient updates of FL. We analyze the convergence of FedDec and show that inter-agent communication alleviates the negative impact of infrequent communication rounds with the server by reducing the dependence on the number of local updates H from O(H2) to O(H). Furthermore, our analysis reveals that the improved term in the bound vanishes quickly as the network becomes more connected. We confirm the predictions of our theory in numerical simulations, where we show that FedDec converges faster than FedAvg, and that the gains are greater as either H or the connectivity of the network increases.
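To make the interleaving concrete, below is a minimal sketch of one server round of a FedDec-style update, assuming quadratic local objectives and a doubly stochastic mixing matrix W on a ring topology; all names (H, W, local_grad, lr) and the exact ordering of the gradient step and the gossip averaging are illustrative assumptions, not the paper's reference implementation.

```python
# Minimal FedDec-style round (illustrative sketch, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)
N, d, H = 8, 5, 10          # agents, model dimension, local steps per server round
lr = 0.05

# Hypothetical heterogeneous local objectives: f_i(x) = 0.5 * ||x - b_i||^2
b = rng.normal(size=(N, d))
def local_grad(i, x):
    return x - b[i]

# Doubly stochastic mixing matrix for a ring topology (assumed for illustration)
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i - 1) % N] = 0.25
    W[i, (i + 1) % N] = 0.25

x = np.tile(rng.normal(size=d), (N, 1))    # all agents start from the server model

for k in range(H):                          # H local steps between two server rounds
    grads = np.stack([local_grad(i, x[i]) for i in range(N)])
    x = W @ (x - lr * grads)                # local gradient step + peer-to-peer averaging

x_server = x.mean(axis=0)                   # server aggregates, as in FedAvg
print("distance to optimum:", np.linalg.norm(x_server - b.mean(axis=0)))
```

In this toy setting the global optimum is the mean of the b_i, so the printed distance shrinks over rounds; the gossip step W keeps the agents' iterates from drifting apart during the H local updates, which is the mechanism the abstract credits for reducing the O(H2) dependence to O(H).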
FedDec: Peer-to-peer aided federated learning
SPAWC 2024, 25th IEEE International Workshop on Signal Processing Advances in Wireless Communications, 10-13 September 2024, Lucca, Italy
Type: Conference
City: Lucca
Date: 2024-09-10
Department: Communication systems
Eurecom Ref: 7837
Copyright: © 2024 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
See also:
PERMALINK: https://www.eurecom.fr/publication/7837