
Cocktail Party Attack: Breaking Aggregation-Based Privacy in Federated Learning using Independent Component Analysis

09/12/2022 · Sanjay Kariyappa, et al.
Facebook, Penn State University, Georgia Institute of Technology

Federated learning (FL) aims to perform privacy-preserving machine learning on distributed data held by multiple data owners. To this end, FL requires the data owners to perform training locally and share the gradient updates (instead of the private inputs) with the central server; these updates are then securely aggregated over multiple data owners. Although aggregation by itself does not provably offer privacy protection, prior work showed that it may suffice if the batch size is sufficiently large. In this paper, we propose the Cocktail Party Attack (CPA), which, contrary to prior belief, is able to recover the private inputs from gradients aggregated over a very large batch size. CPA leverages the crucial insight that the aggregate gradient of a fully connected layer is a linear combination of its inputs, which leads us to frame gradient inversion as a blind source separation (BSS) problem (informally called the cocktail party problem). We adapt independent component analysis (ICA), a classic solution to the BSS problem, to recover private inputs for fully connected and convolutional networks, and show that CPA significantly outperforms prior gradient inversion attacks, scales to ImageNet-sized inputs, and works on batch sizes of up to 1024.
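The abstract's key insight lends itself to a short demonstration. The sketch below is a toy reconstruction, not the paper's implementation: the layer sizes, the toy loss, and the use of off-the-shelf FastICA from scikit-learn (rather than the paper's adapted ICA) are all assumptions. It shows that the aggregated weight gradient of a fully connected layer is a stack of linear mixtures of the batch inputs, which ICA can unmix.

```python
# Minimal sketch (not the authors' released code) of the core observation
# behind CPA: for a fully connected layer z = x W^T, the aggregated weight
# gradient is dL/dW = sum_i g_i x_i^T, so every row of dL/dW is a linear
# mixture of the private batch inputs x_i -- a blind source separation
# problem. All sizes and the toy loss are illustrative assumptions.
import numpy as np
import torch
from sklearn.decomposition import FastICA

torch.manual_seed(0)
batch, d_in, d_out = 8, 256, 64  # hypothetical layer dimensions

# Non-Gaussian "private" inputs (ICA cannot separate Gaussian sources).
x = 2 * torch.rand(batch, d_in) - 1

W = torch.randn(d_out, d_in, requires_grad=True)
loss = torch.sigmoid(x @ W.T).sum()  # any scalar loss yields per-sample g_i
loss.backward()

# Row j of the aggregated gradient is sum_i g[i, j] * x[i]: d_out mixtures
# of `batch` sources, each observed over d_in coordinates.
G = W.grad.detach().numpy()  # shape (d_out, d_in)

# Unmix with FastICA; recovery is only up to permutation, sign, and scale.
ica = FastICA(n_components=batch, whiten="unit-variance", random_state=0)
recovered = ica.fit_transform(G.T).T  # shape (batch, d_in)

# Sanity check: correlate each true input with its best-matching source.
corr = np.abs(np.corrcoef(np.vstack([recovered, x.numpy()])))[:batch, batch:]
print("best |corr| per private input:", corr.max(axis=0).round(3))
```

Scaled up, this is the setting the paper attacks: with images as the sources, the recovered independent components correspond to the private inputs up to permutation, sign, and scale.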


Related research:

12/06/2021 · When the Curious Abandon Honesty: Federated Learning Is Not Private
08/03/2022 · How Much Privacy Does Federated Learning with Secure Aggregation Guarantee?
10/19/2022 · Learning to Invert: Simple Adaptive Attacks for Gradient Inversion in Federated Learning
02/17/2022 · LAMP: Extracting Text from Gradients with Language Model Priors
03/27/2023 · The Resource Problem of Using Linear Layer Leakage Attack in Federated Learning
04/01/2020 · An Overview of Federated Deep Learning Privacy Attacks and Defensive Strategies
10/15/2020 · R-GAP: Recursive Gradient Attack on Privacy