Learning from failures: Secure and fault-tolerant aggregation for federated learning

Mansouri, Mohamad; Önen, Melek; Ben Jaballah, Wafa
ACSAC 2022, Annual Computer Security Applications Conference, 5-9 December 2022, Austin, Texas, USA

Federated learning allows multiple parties to collaboratively train a global machine learning (ML) model without sharing their private datasets. To ensure that these local datasets are not leaked, existing works rely on secure aggregation schemes that let parties encrypt their model updates before sending them to the central server, which aggregates the encrypted inputs. In this work, we design and evaluate a new secure and fault-tolerant aggregation scheme for federated learning that is robust against client failures. We first develop a threshold variant of the secure aggregation scheme proposed by Joye and Libert. Using this new building block together with a dedicated decentralized key management scheme and an input encoding solution, we design a privacy-preserving federated learning protocol that, when executed among n clients, can recover from client failures up to a configurable threshold. Our solution is secure against a malicious aggregator who manipulates messages to learn clients' individual inputs. We show that our solution outperforms state-of-the-art fault-tolerant secure aggregation schemes in terms of computation cost on the client: for example, with an ML model of 100,000 parameters trained with 600 clients, our protocol is 5.5x faster (1.6x faster when 180 clients drop out).
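The Joye–Libert scheme underlying the protocol is additively homomorphic: each client masks its update using a keyed power of a public hash of the round tag, and the masks cancel under the aggregator's key, so only the sum of the inputs is revealed. Below is a minimal Python sketch of the basic (non-threshold) Joye–Libert aggregation; the parameters are toy-sized and insecure, and all function and variable names (`H`, `encrypt`, `aggregate`, `sk0`) are illustrative choices, not the paper's implementation.

```python
import hashlib
import math
import secrets

# Toy RSA modulus -- far too small to be secure; for illustration only.
p, q = 1_000_003, 1_000_033
N = p * q
N2 = N * N

def H(tag: int) -> int:
    """Deterministically hash a round tag into Z*_{N^2}."""
    ctr = 0
    while True:
        h = int.from_bytes(
            hashlib.sha256(f"{tag}|{ctr}".encode()).digest(), "big") % N2
        if math.gcd(h, N2) == 1:
            return h
        ctr += 1

n_clients = 5
sk = [secrets.randbelow(N2) for _ in range(n_clients)]  # client keys
sk0 = -sum(sk)                                          # aggregator key

def encrypt(x: int, sk_i: int, tag: int) -> int:
    """Client-side encryption of a small non-negative update x."""
    return (1 + x * N) * pow(H(tag), sk_i, N2) % N2

def aggregate(cts: list[int], tag: int) -> int:
    """Aggregator combines ciphertexts; only the sum is revealed."""
    V = pow(H(tag), sk0, N2)  # negative exponent -> modular inverse
    for c in cts:
        V = V * c % N2
    return (V - 1) // N  # V = 1 + (sum of x_i) * N  mod N^2

xs = [3, 1, 4, 1, 5]
cts = [encrypt(x, k, tag=1) for x, k in zip(xs, sk)]
assert aggregate(cts, tag=1) == sum(xs)
```

In the paper's threshold variant, the aggregator key is effectively secret-shared among the clients so that decryption still succeeds when some clients drop out; in this sketch the aggregator simply holds `sk0` directly.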


DOI:
https://doi.org/10.1145/3564625.3568135
Type:
Conference
City:
Austin
Date:
2022-12-05
Department:
Digital Security
Eurecom Ref:
7109
Copyright:
© ACM, 2022. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACSAC 2022, Annual Computer Security Applications Conference, 5-9 December 2022, Austin, Texas, USA https://doi.org/10.1145/3564625.3568135

PERMALINK : https://www.eurecom.fr/publication/7109