Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models

10/25/2021
by Liam Fowl, et al.

Federated learning has quickly gained popularity with its promises of increased user privacy and efficiency. Previous works have shown that federated gradient updates contain information that can be used to approximately recover user data in some situations. These previous attacks on user privacy have been limited in scope and do not scale to gradient updates aggregated over even a handful of data points, leaving some to conclude that data privacy is still intact for realistic training regimes. In this work, we introduce a new threat model based on minimal but malicious modifications of the shared model architecture which enable the server to directly obtain a verbatim copy of user data from gradient updates without solving difficult inverse problems. Even user data aggregated over large batches – where previous methods fail to extract meaningful content – can be reconstructed by these minimally modified models.
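The mechanism behind this direct recovery rests on a simple identity of linear layers: for a fully connected layer with weight W and bias b, the gradient of any loss L with respect to row W_i satisfies dL/dW_i = (dL/db_i) * x, so a lone input x is recovered verbatim as grad_W[i] / grad_b[i]. Below is a minimal PyTorch sketch of how a malicious server could extend this to aggregated batches via binning. It illustrates the idea only, not the paper's exact imprint module; the layer sizes, measurement vector, cutoffs, and loss are placeholder assumptions.

```python
import torch
import torch.nn as nn

# Hedged sketch of the binning idea (simplified; cutoffs here are
# hand-picked placeholders, not the adaptively chosen ones an actual
# attack would use).
torch.manual_seed(0)
d, batch, k = 8, 4, 32          # input dim, batch size, number of bins

x = torch.randn(batch, d)       # a batch of "user" data points
w = torch.randn(d)              # shared measurement direction
cuts = torch.linspace(-4.0, 4.0, k)

layer = nn.Linear(d, k)         # the maliciously configured layer
with torch.no_grad():
    layer.weight.copy_(w.repeat(k, 1))  # every row measures <w, x>
    layer.bias.copy_(-cuts)             # row i fires iff <w, x> > cuts[i]

# Any loss that backpropagates a nonzero signal through active ReLU
# rows suffices for this illustration.
loss = torch.relu(layer(x)).sum()
loss.backward()

gW, gb = layer.weight.grad, layer.bias.grad
for i in range(k - 1):
    count = gb[i] - gb[i + 1]   # number of examples landing in bin i
    if torch.isclose(count, torch.tensor(1.0)):
        # Bin caught exactly one example: recover it verbatim.
        recovered = (gW[i] - gW[i + 1]) / count
        residual = (recovered - x).abs().sum(dim=1).min()
        print(f"bin {i}: isolated one input, residual {residual.item():.2e}")
```

Differencing adjacent rows works because row i's gradient sums the contributions of every example whose measurement exceeds its cutoff; subtracting row i+1 leaves exactly the examples that fell between the two cutoffs. In the full attack, the cutoffs would be tuned to the data distribution so that most bins capture at most one example, which is what lets verbatim recovery survive aggregation over large batches.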


Related research

02/01/2022 · Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification
Federated learning (FL) has rapidly risen in popularity due to its promi...

06/10/2021 · Gradient Disaggregation: Breaking Privacy in Federated Learning by Reconstructing the User Participant Matrix
We show that aggregated model updates in federated learning may be insec...

08/22/2019 · An End-to-End Encrypted Neural Network for Gradient Updates Transmission in Federated Learning
Federated learning is a distributed learning method to train a shared mo...

03/21/2023 · Secure Aggregation in Federated Learning is not Private: Leaking User Data at Large Scale through Model Modification
Security and privacy are important concerns in machine learning. End use...

06/21/2020 · Free-rider Attacks on Model Aggregation in Federated Learning
Free-rider attacks on federated learning consist in dissimulating partic...

06/12/2020 · Backdoor Attacks on Federated Meta-Learning
Federated learning allows multiple users to collaboratively train a shar...

02/17/2022 · LAMP: Extracting Text from Gradients with Language Model Priors
Recent work shows that sensitive user data can be reconstructed from gra...
