SACMAT 2021, 26th ACM Symposium on Access Control Models and Technologies, 16-18 June 2021, Barcelona, Spain
Secure aggregation protocols allow an aggregator to compute the sum of multiple users’ data in a privacy-preserving manner. Existing protocols assume that the users from whom the data is collected are fully trusted with respect to the correctness of their individual inputs. We believe that this assumption is too strong, for example when such protocols are used for federated learning, whereby the aggregator receives all users’ contributions and aggregates them to train and obtain the joint model. A malicious user contributing incorrect inputs can mount model poisoning or backdoor injection attacks without being detected. In this paper, we propose the first secure aggregation protocol that considers users as potentially malicious. This new protocol enables the correct computation of the aggregate result, in a privacy-preserving manner, only if the individual inputs belong to a legitimate interval. To this end, the solution relies on a newly designed oblivious programmable pseudo-random function. We validate our solution as a proof of concept in a federated learning scenario where potential backdoor injection attacks exist.
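To illustrate the masking idea underlying secure aggregation, the following is a minimal toy sketch, not the paper's actual protocol: each ordered pair of users shares a random mask that one user adds and the other subtracts, so all masks cancel in the aggregate sum while individual masked inputs reveal nothing on their own. The modulus, the legitimate interval, and the plain assertion standing in for the cryptographic range check (which the paper realizes via an oblivious programmable pseudo-random function) are all assumptions made for illustration.

```python
import random

MODULUS = 2**32               # arithmetic is done modulo a public value
LEGIT_RANGE = range(0, 101)   # hypothetical legitimate interval for inputs


def mask_inputs(inputs, seed=0):
    """Return per-user masked values whose sum equals sum(inputs) mod MODULUS.

    For each pair (i, j) with i < j, a shared random mask r is added to
    user i's value and subtracted from user j's value, so every mask
    cancels when all masked values are summed.
    """
    rng = random.Random(seed)  # stands in for pairwise-agreed shared randomness
    n = len(inputs)
    masked = [x % MODULUS for x in inputs]
    for i in range(n):
        for j in range(i + 1, n):
            r = rng.randrange(MODULUS)          # pairwise shared mask r_ij
            masked[i] = (masked[i] + r) % MODULUS
            masked[j] = (masked[j] - r) % MODULUS
    return masked


def aggregate(masked):
    """Aggregator's view: it only ever sums masked values."""
    return sum(masked) % MODULUS


inputs = [10, 25, 40]
# Stand-in for the protocol's oblivious range check on each input.
assert all(x in LEGIT_RANGE for x in inputs)
masked = mask_inputs(inputs, seed=1)
assert aggregate(masked) == sum(inputs) % MODULUS
```

A malicious user who submits an out-of-range value (e.g. a poisoned model update) would pass undetected in this toy version unless the range check is enforced obliviously, which is precisely the gap the paper's protocol addresses.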
© ACM, 2021. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in SACMAT 2021, 26th ACM Symposium on Access Control Models and Technologies, 16-18 June 2021, Barcelona, Spain https://doi.org/10.1145/3450569.3463572