R-GAP: Recursive Gradient Attack on Privacy

10/15/2020
by Junyi Zhu, et al.

Federated learning frameworks have been regarded as a promising approach to resolving the dilemma between the demand for privacy and the promise of learning from large collections of distributed data. Many such frameworks only ask collaborators to share their local updates of a common model, i.e., gradients with respect to locally stored data, instead of exposing their raw data to other collaborators. However, recent optimization-based gradient attacks show that raw data can often be accurately recovered from gradients: minimizing the Euclidean distance between the true gradients and those calculated from estimated data is often sufficient to fully recover private data. Yet there is a fundamental lack of theoretical understanding of how and when gradients lead to unique recovery of the original data. Our research fills this gap by providing a closed-form recursive procedure that recovers data from gradients in deep neural networks. We demonstrate that a gradient attack amounts to recursively solving a sequence of systems of linear equations. Furthermore, our closed-form approach, which we name Recursive Gradient Attack on Privacy (R-GAP), works as well as or better than optimization-based approaches at a fraction of their computational cost. Additionally, we propose a rank analysis method that can be used to estimate a network architecture's susceptibility to gradient attacks. Experimental results confirm the validity of the closed-form attack and the rank analysis, and demonstrate the attack's superior computational properties and lack of susceptibility to local optima vis-à-vis optimization-based attacks. Source code is available for download from https://github.com/JunyiZhu-AI/R-GAP.
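To make the optimization-based baseline concrete, here is a minimal sketch of such a gradient attack in the spirit of "deep leakage from gradients", assuming a toy fully connected PyTorch model, a single private sample, and a label the attacker already knows. The architecture, learning rate, and iteration count are illustrative choices, not values from the paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 10), nn.ReLU(), nn.Linear(10, 2))
loss_fn = nn.CrossEntropyLoss()

# Private sample held by the client (unknown to the attacker).
x_true = torch.randn(1, 20)
y_true = torch.tensor([1])  # label assumed known to the attacker in this sketch

# Gradients the client would share in federated learning.
true_grads = torch.autograd.grad(loss_fn(model(x_true), y_true),
                                 model.parameters())

# Attacker: optimize a dummy input so its gradients match the shared ones.
x_dummy = torch.randn(1, 20, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.1)

for _ in range(2000):
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(loss_fn(model(x_dummy), y_true),
                                      model.parameters(),
                                      create_graph=True)
    # Euclidean distance between true and dummy gradients.
    grad_dist = sum(((dg - tg) ** 2).sum()
                    for dg, tg in zip(dummy_grads, true_grads))
    grad_dist.backward()
    opt.step()

print("reconstruction error:", (x_dummy - x_true).norm().item())
```

Because the attacker can only descend the loss surface of this gradient-matching objective, attacks of this style can stall in local optima, which is the failure mode the closed-form recursion avoids.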
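The closed-form view can be illustrated at a single fully connected layer. For z = Wx with loss l, the shared weight gradient is the rank-one outer product dl/dW = (dl/dz) x^T, so once the upstream gradient dl/dz is known (R-GAP derives it recursively from the layers above), the input x is the solution of an overdetermined linear system. The NumPy sketch below assumes this single-layer setting with a given upstream gradient k; it is a simplified instance of one recursion step, not the full R-GAP procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 20))  # layer weights; used when propagating the
                                   # recursion to the layer below, not needed here
x = rng.standard_normal(20)        # private input
k = rng.standard_normal(10)        # upstream gradient dl/dz (recursively derived)

# Weight gradient shared by the client: dl/dW = outer(dl/dz, x), rank one.
G = np.outer(k, x)

# G = k x^T is an overdetermined linear system in x; its least-squares
# solution recovers the input exactly (up to floating-point error).
x_hat = G.T @ k / (k @ k)

print("max reconstruction error:", np.abs(x_hat - x).max())
```

Whether each such per-layer system has a unique solution depends on its rank, which is the quantity the proposed rank analysis inspects when estimating an architecture's risk.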


Related research

05/31/2023: Surrogate Model Extension (SME): A Fast and Accurate Weight Update Attack on Federated Learning
    In Federated Learning (FL) and many other distributed training framework...

09/14/2020: SAPAG: A Self-Adaptive Privacy Attack From Gradients
    Distributed learning such as federated learning or collaborative learnin...

11/30/2021: Evaluating Gradient Inversion Attacks and Defenses in Federated Learning
    Gradient inversion attack (or input recovery from gradient) is an emergi...

06/08/2020: Attacks to Federated Learning: Responsive Web User Interface to Recover Training Data from User Gradients
    Local differential privacy (LDP) is an emerging privacy standard to prot...

06/13/2023: Temporal Gradient Inversion Attacks with Robust Optimization
    Federated Learning (FL) has emerged as a promising approach for collabor...

06/15/2023: Your Room is not Private: Gradient Inversion Attack for Deep Q-Learning
    The prominence of embodied Artificial Intelligence (AI), which empowers ...

03/31/2020: Inverting Gradients – How easy is it to break privacy in federated learning?
    The idea of federated learning is to collaboratively train a neural netw...
