When the Curious Abandon Honesty: Federated Learning Is Not Private

12/06/2021
by Franziska Boenisch, et al.

In federated learning (FL), data does not leave personal devices when they are jointly training a machine learning model. Instead, the devices share gradients with a central party (e.g., a company). Because data never "leaves" personal devices, FL is presented as privacy-preserving. Yet, it was recently shown that this protection is but a thin facade: even a passive attacker observing gradients can reconstruct data of individual users. In this paper, we argue that prior work still largely underestimates the vulnerability of FL, because prior efforts exclusively consider passive, honest-but-curious attackers. Instead, we introduce an active and dishonest attacker acting as the central party, who can modify the shared model's weights before users compute model gradients. We call the modified weights "trap weights". Our active attacker recovers user data perfectly and at near-zero cost: the attack requires no complex optimization objective. Instead, it exploits the inherent data leakage of model gradients and amplifies this effect by maliciously altering the weights of the shared model. These properties enable our attack to scale to models trained with large mini-batches of data. Whereas attacks from prior work require hours to recover a single data point, our method needs milliseconds to capture the full mini-batch of data from both fully-connected and convolutional deep neural networks. Finally, we consider mitigations. We observe that current implementations of differential privacy (DP) in FL are flawed: they explicitly trust the central party with the crucial task of adding DP noise, and thus provide no protection against a malicious central party. We also consider other defenses and explain why they are similarly inadequate. A significant redesign of FL is required for it to provide any meaningful form of data privacy to users.
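The leakage the abstract builds on is simple to state: for a fully-connected layer y = Wx + b, the weight gradient of a single example factors as ∂L/∂W = (∂L/∂y) xᵀ, while ∂L/∂b = ∂L/∂y, so dividing row i of the weight gradient by the i-th bias gradient returns the input x exactly. The PyTorch sketch below is our own illustration of that mechanism, not the authors' code; the layer sizes, the toy loss, and the random "isolation" condition in part 2 are assumptions chosen for demonstration.

```python
import torch

torch.manual_seed(0)
d_in, d_out, batch = 8, 16, 4

W = torch.randn(d_out, d_in, requires_grad=True)
b = torch.randn(d_out, requires_grad=True)

# --- 1. Single input: closed-form reconstruction from one gradient ----
x = torch.randn(d_in)                        # the "private" user input
loss = (x @ W.T + b).pow(2).sum()            # any scalar loss will do
loss.backward()
# dL/dW[i] = (dL/db[i]) * x, so one division recovers x exactly.
i = int(b.grad.abs().argmax())               # a row with nonzero gradient
x_rec = W.grad[i] / b.grad[i]
print(torch.allclose(x_rec, x, atol=1e-5))   # True: exact recovery

# --- 2. Mini-batch: gradient rows mix samples, UNLESS a ReLU neuron
#        happens to fire for exactly one sample (the situation that the
#        paper's trap weights deliberately engineer) --------------------
W.grad = None
b.grad = None
X = torch.randn(batch, d_in)                 # a batch of "private" inputs
pre = X @ W.T + b                            # (batch, d_out) pre-activations
torch.relu(pre).pow(2).sum().backward()

for i in range(d_out):
    active = (pre[:, i] > 0).nonzero().flatten()
    if len(active) == 1:                     # neuron i isolated one sample
        j = int(active)
        x_rec = W.grad[i] / b.grad[i]        # reveals sample j alone
        print(j, torch.allclose(x_rec, X[j], atol=1e-4))
```

With random weights, a neuron isolating a single sample is a matter of luck; the paper's trap weights adjust the shared model so that, with high probability, every sample in the mini-batch activates some neuron alone, which turns the occasional isolation in part 2 into a reliable, batch-wide reconstruction at essentially no computational cost.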
