Cocktail Party Attack: Breaking Aggregation-Based Privacy in Federated Learning using Independent Component Analysis

09/12/2022
by Sanjay Kariyappa, et al.

Federated learning (FL) aims to perform privacy-preserving machine learning on distributed data held by multiple data owners. To this end, FL requires the data owners to train locally and share gradient updates (instead of the private inputs) with the central server; these updates are then securely aggregated over multiple data owners. Although aggregation by itself does not provably offer privacy protection, prior work showed that it may suffice if the batch size is sufficiently large. In this paper, we propose the Cocktail Party Attack (CPA), which, contrary to prior belief, is able to recover the private inputs from gradients aggregated over a very large batch size. CPA leverages the crucial insight that the aggregate gradient of a fully connected layer is a linear combination of its inputs, which leads us to frame gradient inversion as a blind source separation (BSS) problem (informally known as the cocktail party problem). We adapt independent component analysis (ICA), a classic solution to the BSS problem, to recover private inputs for fully connected and convolutional networks, and show that CPA significantly outperforms prior gradient inversion attacks, scales to ImageNet-sized inputs, and works with batch sizes of up to 1024.
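To make the linear-combination insight concrete, the sketch below simulates the attacker's view: it computes the batch-aggregated weight gradient of a fully connected layer and then unmixes its rows with off-the-shelf FastICA. This is a minimal illustration under assumed conditions (synthetic non-Gaussian inputs, scikit-learn's FastICA as a stand-in for the paper's unmixing procedure), not the authors' CPA implementation; all variable names are illustrative.

```python
# Minimal sketch of the linear-mixture observation behind CPA, assuming
# synthetic non-Gaussian inputs and scikit-learn's FastICA as the unmixer.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import FastICA

torch.manual_seed(0)
batch_size, in_dim, out_dim = 8, 256, 512

# "Private" batch: independent non-Gaussian (uniform) inputs, a setting ICA handles.
x = 2 * torch.rand(batch_size, in_dim) - 1
labels = torch.randint(0, out_dim, (batch_size,))

layer = nn.Linear(in_dim, out_dim)
loss = nn.CrossEntropyLoss()(layer(x), labels)

# The only thing the attacker sees: the weight gradient aggregated over the batch.
# Row j of grad_w equals sum_i g[i, j] * x[i], a linear mixture of the inputs.
grad_w = torch.autograd.grad(loss, layer.weight)[0]  # shape: (out_dim, in_dim)

# Treat each of the out_dim gradient rows as one mixed observation of
# batch_size sources and unmix with ICA (sklearn expects (n_samples, n_mixtures)).
ica = FastICA(n_components=batch_size, whiten="unit-variance", random_state=0)
recovered = ica.fit_transform(grad_w.detach().numpy().T).T  # (batch_size, in_dim)

# ICA recovers sources only up to permutation and sign/scale, so compare
# each true input with its best-correlated recovered component.
corr = np.corrcoef(np.vstack([x.numpy(), recovered]))[:batch_size, batch_size:]
print("best |correlation| per input:", np.abs(corr).max(axis=1).round(2))
```

Because ICA is blind to permutation and scale, the final check matches each input to its most-correlated recovered component rather than comparing positionally; high absolute correlations indicate the batch inputs were separated from a single aggregated gradient.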


Related research

12/06/2021 · When the Curious Abandon Honesty: Federated Learning Is Not Private
In federated learning (FL), data does not leave personal devices when th...

06/13/2023 · Temporal Gradient Inversion Attacks with Robust Optimization
Federated Learning (FL) has emerged as a promising approach for collabor...

05/31/2023 · Surrogate Model Extension (SME): A Fast and Accurate Weight Update Attack on Federated Learning
In Federated Learning (FL) and many other distributed training framework...

02/17/2022 · LAMP: Extracting Text from Gradients with Language Model Priors
Recent work shows that sensitive user data can be reconstructed from gra...

03/27/2023 · The Resource Problem of Using Linear Layer Leakage Attack in Federated Learning
Secure aggregation promises a heightened level of privacy in federated l...

05/17/2022 · Recovering Private Text in Federated Learning of Language Models
Federated learning allows distributed users to collaboratively train a m...

06/05/2023 · Hiding in Plain Sight: Disguising Data Stealing Attacks in Federated Learning
Malicious server (MS) attacks have enabled the scaling of data stealing ...
