Evaluating Gradient Inversion Attacks and Defenses in Federated Learning

11/30/2021
by Yangsibo Huang, et al.

Gradient inversion attack (or input recovery from gradient) is an emerging threat to the security and privacy of federated learning, whereby malicious eavesdroppers or participants in the protocol can partially recover clients' private data. This paper evaluates existing attacks and defenses. We find that some attacks make strong assumptions about the setup; relaxing such assumptions can substantially weaken these attacks. We then evaluate the benefits of three proposed defense mechanisms against gradient inversion attacks. We show the trade-offs between privacy leakage and data utility for these defense methods, and find that combining them appropriately makes the attack less effective, even under the original strong assumptions. We also estimate the computation cost of end-to-end recovery of a single image under each evaluated defense. Our findings suggest that the state-of-the-art attacks can currently be defended against with minor data utility loss, as summarized in a list of potential strategies. Our code is available at: https://github.com/Princeton-SysML/GradAttack.
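The attack class discussed in the abstract follows a gradient-matching formulation: the adversary optimizes a dummy input (and a relaxed label) so that the gradients it induces on the shared model match the gradients reported by a client. Below is a minimal, illustrative PyTorch sketch of that formulation; the function name, hyperparameters, and soft-label relaxation are assumptions made for exposition and do not reflect the GradAttack implementation.

```python
import torch
import torch.nn.functional as F

def invert_gradients(model, observed_grads, input_shape, num_classes,
                     steps=100, lr=0.1):
    """Optimize a dummy (input, label) pair whose gradients match the observed gradients."""
    dummy_x = torch.randn(1, *input_shape, requires_grad=True)   # candidate private input
    dummy_y = torch.randn(1, num_classes, requires_grad=True)    # relaxed (soft) label logits

    optimizer = torch.optim.LBFGS([dummy_x, dummy_y], lr=lr)

    def closure():
        optimizer.zero_grad()
        pred = model(dummy_x)
        # Cross-entropy of the model output against the relaxed label distribution
        loss = torch.mean(torch.sum(
            -F.softmax(dummy_y, dim=-1) * F.log_softmax(pred, dim=-1), dim=-1))
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # Gradient-matching objective: squared L2 distance to the observed client gradients
        grad_diff = sum(((g - og) ** 2).sum()
                        for g, og in zip(grads, observed_grads))
        grad_diff.backward()
        return grad_diff

    for _ in range(steps):
        optimizer.step(closure)
    return dummy_x.detach(), dummy_y.detach()
```

A typical call would pass the per-step gradients reported by a single client, e.g. `invert_gradients(model, client_grads, (3, 32, 32), 10)`. The strong assumptions the abstract refers to (for example, knowledge of BatchNorm statistics or of the private labels) would enter as additional constraints or priors on this optimization, and the defenses evaluated in the paper aim to make the matching objective uninformative.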


Related research

10/19/2022
Learning to Invert: Simple Adaptive Attacks for Gradient Inversion in Federated Learning
Gradient inversion attack enables recovery of training samples from mode...

04/26/2022
Enhancing Privacy against Inversion Attacks in Federated Learning by using Mixing Gradients Strategies
Federated learning reduces the risk of information leakage, but remains ...

03/01/2023
Single Image Backdoor Inversion via Robust Smoothed Classifiers
Backdoor inversion, the process of finding a backdoor trigger inserted i...

11/21/2022
SPIN: Simulated Poisoning and Inversion Network for Federated Learning-Based 6G Vehicular Networks
The applications concerning vehicular networks benefit from the vision o...

04/06/2023
Quantifying and Defending against Privacy Threats on Federated Knowledge Graph Embedding
Knowledge Graph Embedding (KGE) is a fundamental technique that extracts...

10/15/2020
R-GAP: Recursive Gradient Attack on Privacy
Federated learning frameworks have been regarded as a promising approach...

06/11/2022
Bilateral Dependency Optimization: Defending Against Model-inversion Attacks
Through using only a well-trained classifier, model-inversion (MI) attac...
