The Fundamental Price of Secure Aggregation in Differentially Private Federated Learning

03/07/2022
by   Wei-Ning Chen, et al.

We consider the problem of training a d-dimensional model with distributed differential privacy (DP), where secure aggregation (SecAgg) is used to ensure that the server only sees the noisy sum of the n model updates in every training round. Taking into account the constraints imposed by SecAgg, we characterize the fundamental communication cost required to obtain the best accuracy achievable under ε central DP (i.e., under a fully trusted server and no communication constraints). Our results show that Õ(min(n^2ε^2, d)) bits per client are both sufficient and necessary, and that this fundamental limit can be achieved by a linear scheme based on sparse random projections. This is a significant improvement over state-of-the-art SecAgg distributed DP schemes, which use Õ(d log(d/ε^2)) bits per client. Empirically, we evaluate our proposed scheme on real-world federated learning tasks and find that our theoretical analysis is well matched in practice. In particular, we show that the communication cost can be reduced to under 1.2 bits per parameter in realistic privacy settings without degrading test-time performance. Our work thus specifies, both theoretically and empirically, the fundamental price of using SecAgg.
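To make the idea concrete, the following is a minimal sketch of linear compression via a sparse random projection, in the spirit of the scheme the abstract describes. It is an illustration under assumed parameters, not the authors' exact construction: each client multiplies its d-dimensional update by a shared sparse random sign matrix and sends only the k-dimensional result, the server receives (via SecAgg) the sum of the compressed updates, and a linear decoder recovers an unbiased estimate of the summed update. DP noise addition is omitted for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 1000        # model dimension (illustrative)
n = 10          # number of clients (illustrative)
k = 64          # compressed dimension; the paper suggests k ~ min(n^2 eps^2, d)
density = 0.1   # fraction of nonzero entries in the sparse projection

# Shared sparse random projection P (k x d): each entry is nonzero with
# probability `density`, taking value +-1/sqrt(k * density) so that
# E[P^T P] = I and the decoder below is unbiased.
mask = rng.random((k, d)) < density
signs = rng.choice([-1.0, 1.0], size=(k, d))
P = mask * signs / np.sqrt(k * density)

# Each client compresses its local update; SecAgg would reveal only the sum.
updates = rng.normal(size=(n, d))    # clients' local model updates
compressed = updates @ P.T           # each client transmits k numbers, not d
aggregate = compressed.sum(axis=0)   # what the server sees (DP noise omitted)

# Linear decoding: an unbiased estimate of the summed update.
estimate = P.T @ aggregate
true_sum = updates.sum(axis=0)
```

With this normalization the per-client payload shrinks from d to k values, matching the Õ(min(n^2ε^2, d)) regime: once k reaches the noise floor dictated by the DP guarantee, sending more coordinates cannot improve accuracy.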


Related research

11/18/2022
The communication cost of security and privacy in federated frequency estimation
We consider the federated frequency estimation problem, where each user ...

06/22/2023
DP-BREM: Differentially-Private and Byzantine-Robust Federated Learning with Client Momentum
Federated Learning (FL) allows multiple participating clients to train m...

04/04/2023
Privacy Amplification via Compression: Achieving the Optimal Privacy-Accuracy-Communication Trade-off in Distributed Mean Estimation
Privacy and communication constraints are two major bottlenecks in feder...

09/26/2022
Taming Client Dropout for Distributed Differential Privacy in Federated Learning
Federated learning (FL) is increasingly deployed among multiple clients ...

03/02/2021
Privacy Amplification for Federated Learning via User Sampling and Wireless Aggregation
In this paper, we study the problem of federated learning over a wireles...

02/12/2021
The Distributed Discrete Gaussian Mechanism for Federated Learning with Secure Aggregation
We consider training models on private data that is distributed across u...

07/25/2023
Federated Heavy Hitter Recovery under Linear Sketching
Motivated by real-life deployments of multi-round federated analytics wi...
