Securing Secure Aggregation: Mitigating Multi-Round Privacy Leakage in Federated Learning

06/07/2021
by Jinhyun So, et al.

Secure aggregation is a critical component of federated learning that enables the server to learn the aggregate model of the users without observing their local models. Conventionally, secure aggregation algorithms focus only on ensuring the privacy of individual users in a single training round. We contend that such designs can lead to significant privacy leakage over multiple training rounds, due to partial user selection/participation at each round of federated learning. In fact, we empirically show that conventional random user selection strategies for federated learning leak users' individual models within a number of rounds that is linear in the number of users. To address this challenge, we introduce a secure aggregation framework with multi-round privacy guarantees. In particular, we introduce a new metric to quantify the privacy guarantees of federated learning over multiple training rounds, and develop a structured user selection strategy that guarantees the long-term privacy of each user (over any number of training rounds). Our framework also carefully accounts for fairness and the average number of participating users at each round. We perform several experiments on the MNIST and CIFAR-10 datasets in both the IID and non-IID settings to demonstrate the improvement over baseline algorithms, in terms of both privacy protection and test accuracy.
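As a rough illustration of the multi-round leakage claim (a toy sketch, not the attack analyzed in the paper), the code below assumes that each user's local model stays fixed across rounds and that the server knows which users were sampled in each round. Since every round reveals only the sum over the sampled subset, the observations form a linear system in the individual models; once the accumulated participation matrix reaches full column rank, which with random selection happens after roughly N rounds, every individual model can be recovered by least squares. All names and parameters here are illustrative.

import numpy as np

# Toy illustration: N users hold fixed local model vectors. Each round the
# server samples K users uniformly at random and, via secure aggregation,
# observes only the sum of the sampled models. Across rounds, the
# participation pattern forms a linear system A X = Y; once A has full
# column rank, all individual models are recoverable by least squares.

rng = np.random.default_rng(0)
N, D, K = 20, 5, 8            # users, model dimension, users sampled per round
X = rng.normal(size=(N, D))   # ground-truth local models (assumed static)

A_rows, Y_rows = [], []
for t in range(1000):
    chosen = rng.choice(N, size=K, replace=False)
    a = np.zeros(N)
    a[chosen] = 1.0
    A_rows.append(a)          # participation indicator for this round
    Y_rows.append(a @ X)      # the only thing the server observes: the aggregate
    A = np.array(A_rows)
    if np.linalg.matrix_rank(A) >= N:
        X_hat, *_ = np.linalg.lstsq(A, np.array(Y_rows), rcond=None)
        err = np.max(np.abs(X_hat - X))
        print(f"recovered all {N} individual models after {t + 1} rounds, "
              f"max error {err:.2e}")
        break

Real federated learning updates the local models between rounds, so exact recovery as above does not directly apply, but the leakage intuition carries over. The structured user selection strategy proposed in the paper constrains which user subsets can participate together across rounds, so that the collection of observed aggregates never isolates any single user's model, regardless of the number of training rounds.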
