
When the Curious Abandon Honesty: Federated Learning Is Not Private

12/06/2021
by Franziska Boenisch, et al.

In federated learning (FL), data does not leave personal devices when they are jointly training a machine learning model. Instead, the devices share gradients with a central party (e.g., a company). Because data never "leaves" personal devices, FL is presented as privacy-preserving. Yet, it was recently shown that this protection is but a thin facade, as even a passive attacker observing gradients can reconstruct data of individual users. In this paper, we argue that prior work still largely underestimates the vulnerability of FL, because prior efforts exclusively consider passive, honest-but-curious attackers. Instead, we introduce an active and dishonest attacker acting as the central party, who is able to modify the shared model's weights before users compute model gradients. We call the modified weights "trap weights". Our active attacker is able to recover user data perfectly and at near-zero cost: the attack requires no complex optimization objectives. Instead, it exploits inherent data leakage from model gradients and amplifies this effect by maliciously altering the weights of the shared model. These properties enable our attack to scale to models trained with large mini-batches of data. Where attackers from prior work require hours to recover a single data point, our method needs milliseconds to capture the full mini-batch of data from both fully-connected and convolutional deep neural networks. Finally, we consider mitigations. We observe that current implementations of differential privacy (DP) in FL are flawed: they explicitly trust the central party with the crucial task of adding DP noise, and thus provide no protection against a malicious central party. We also consider other defenses and explain why they are similarly inadequate. A significant redesign of FL is required for it to provide any meaningful form of data privacy to users.
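The "inherent data leakage from model gradients" the abstract refers to can be made concrete for a single fully-connected layer. If the layer computes y = Wx + b, the loss gradients satisfy dL/dW[i, j] = dL/dy[i] * x[j] and dL/db[i] = dL/dy[i], so any neuron i with a nonzero bias gradient reveals the input exactly via x = grad_W[i] / grad_b[i]. The PyTorch sketch below is a minimal illustration of this single-example leakage, not the paper's trap-weight attack; the layer sizes, loss, and data are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's implementation) of gradient
# leakage in a fully-connected layer: since dL/dW[i, j] = dL/dy[i] * x[j]
# and dL/db[i] = dL/dy[i], dividing row i of the weight gradient by the
# i-th bias gradient recovers the input x exactly.
import torch

torch.manual_seed(0)

in_dim, out_dim = 8, 4                 # illustrative layer sizes
layer = torch.nn.Linear(in_dim, out_dim)

x = torch.randn(1, in_dim)             # a single user data point
loss = layer(x).pow(2).mean()          # any scalar loss will do
loss.backward()

grad_W = layer.weight.grad             # shape (out_dim, in_dim)
grad_b = layer.bias.grad               # shape (out_dim,)

# Pick a neuron with a nonzero bias gradient and divide it out.
i = torch.argmax(grad_b.abs())
x_reconstructed = grad_W[i] / grad_b[i]

print(torch.allclose(x_reconstructed, x.squeeze(0), atol=1e-4))  # True
```

With a mini-batch, these gradients are averaged over examples, so the ratio yields a weighted mixture of inputs rather than a single input; as the abstract describes, the trap weights are crafted to break that averaging so that individual examples fall out of individual neurons.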


