How Much Privacy Does Federated Learning with Secure Aggregation Guarantee?

08/03/2022
by Ahmed Roushdy Elkordy, et al.

Federated learning (FL) has attracted growing interest for enabling privacy-preserving machine learning on data stored at multiple users without moving the data off-device. However, while raw data never leaves users' devices, privacy still cannot be guaranteed, since significant computations on users' training data are shared in the form of trained local models. These local models have recently been shown to pose a substantial privacy threat through privacy attacks such as model inversion. As a remedy, Secure Aggregation (SA) has been developed as a framework to preserve privacy in FL by guaranteeing that the server learns only the global aggregated model update and not the individual model updates. While SA ensures that no additional information about an individual model update is leaked beyond the aggregated model update, there are no formal guarantees on how much privacy FL with SA can actually offer, since information about an individual user's dataset can still leak through the aggregated model computed at the server. In this work, we perform the first analysis of the formal privacy guarantees of FL with SA. Specifically, we use Mutual Information (MI) as a quantification metric and derive upper bounds on how much information about each user's dataset can leak through the aggregated model update. When the FedSGD aggregation algorithm is used, our theoretical bounds show that the amount of privacy leakage reduces linearly with the number of users participating in FL with SA. To validate our theoretical bounds, we use an MI Neural Estimator to empirically evaluate the privacy leakage under different FL setups on both the MNIST and CIFAR10 datasets. Our experiments verify our theoretical bounds for FedSGD, showing that privacy leakage decreases as the number of users and the local batch size grow, and increases with the number of training rounds.
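The secure aggregation setting described above can be illustrated with a minimal sketch (not the authors' implementation; the masking scheme and function names below are illustrative assumptions): each pair of users agrees on a random additive mask that one user adds and the other subtracts, so the masks cancel in the sum and the server recovers only the aggregate update, which is the quantity whose leakage the paper bounds via mutual information.

```python
# Minimal sketch of FedSGD with additive-mask secure aggregation.
# Hypothetical helper names; pairwise masks are assumed to come from a
# shared key agreement between users i and j (simulated here with one RNG).
import numpy as np

def masked_updates(local_updates, seed=0):
    """local_updates: list of per-user gradient vectors (np.ndarray)."""
    rng = np.random.default_rng(seed)
    n = len(local_updates)
    masked = [u.astype(float).copy() for u in local_updates]
    for i in range(n):
        for j in range(i + 1, n):
            # Pairwise mask: user i adds it, user j subtracts it.
            m = rng.normal(size=local_updates[i].shape)
            masked[i] += m
            masked[j] -= m
    return masked

def server_aggregate(masked):
    # The server sums the masked updates; the pairwise masks cancel,
    # so only the average (FedSGD aggregate) is revealed.
    return np.sum(masked, axis=0) / len(masked)

# Usage: the recovered aggregate equals the average of the raw updates.
updates = [np.array([1.0, 2.0]), np.array([3.0, -1.0]), np.array([0.5, 0.5])]
agg = server_aggregate(masked_updates(updates))
assert np.allclose(agg, np.mean(updates, axis=0))
```

Because the server only ever observes this aggregate, any per-user leakage must flow through it; the paper's MI bounds quantify that leakage and show it shrinking as more users contribute to the sum.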


