Gradient Disaggregation: Breaking Privacy in Federated Learning by Reconstructing the User Participant Matrix

06/10/2021
by Maximilian Lam et al.

We show that aggregated model updates in federated learning may be insecure. An untrusted central server may disaggregate user updates from sums of updates across participants given repeated observations, enabling the server to recover privileged information about individual users' private training data via traditional gradient inference attacks. Our method revolves around reconstructing participant information (e.g., which rounds of training users participated in) from aggregated model updates by leveraging summary information from device analytics commonly used to monitor, debug, and manage federated learning systems. Our attack is parallelizable, and we successfully disaggregate user updates in settings with up to thousands of participants. We quantitatively and qualitatively demonstrate significant improvements in the capability of various inference attacks on the disaggregated updates. Our attack enables the attribution of learned properties to individual users, violating anonymity, and shows that a determined central server may undermine the secure aggregation protocol to break individual users' data privacy in federated learning.
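To illustrate the core linear-algebraic idea, the sketch below shows only the final disaggregation step, under the simplifying assumptions that the server has already reconstructed the participant matrix P (in the paper this is recovered from aggregated updates plus device analytics) and that each user's update is roughly constant across rounds. All sizes and variable names are hypothetical; this is not the paper's full algorithm.

```python
import numpy as np

# Minimal sketch of the disaggregation step, assuming the participant
# matrix P is already known. Illustrative only; the paper's full attack
# also reconstructs P itself from analytics and aggregated updates.

rng = np.random.default_rng(0)
num_rounds, num_users, dim = 200, 50, 10          # hypothetical problem sizes

# Each user's individual update, assumed (for illustration) to stay fixed
# across the rounds in which that user participates.
G_user = rng.normal(size=(num_users, dim))

# P[r, u] = 1 if user u participated in round r, else 0.
P = rng.integers(0, 2, size=(num_rounds, num_users)).astype(float)

# The only quantity secure aggregation reveals per round: the sum of the
# updates from that round's participants.
G_agg = P @ G_user

# With enough observed rounds, P has full column rank and the individual
# updates can be recovered by least squares: solve P @ X ≈ G_agg for X.
G_recovered, *_ = np.linalg.lstsq(P, G_agg, rcond=None)

print("max reconstruction error:", np.abs(G_recovered - G_user).max())
```

In this idealized noiseless setting the recovery is essentially exact once the number of observed rounds exceeds the number of users; the recovered per-user updates can then be fed to standard gradient inference attacks.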
