SAPAG: A Self-Adaptive Privacy Attack From Gradients

09/14/2020
by Yijue Wang, et al.

Distributed learning paradigms such as federated learning and collaborative learning enable model training on decentralized user data: only local gradients are collected, and data is processed close to its source. Because the training data is never centralized, these paradigms are often assumed to protect privacy-sensitive data. Recent studies, however, show that a third party can reconstruct the true training data in a distributed machine learning system from the publicly shared gradients. Existing reconstruction attack frameworks lack generalizability across different Deep Neural Network (DNN) architectures and weight initializations, and succeed only in the early training phase. To address these limitations, we propose a more general privacy attack from gradients, SAPAG, which uses a Gaussian kernel of the gradient difference as its distance measure. Our experiments demonstrate that SAPAG can reconstruct the training data on different DNNs with different weight initializations, and on DNNs in any training phase.
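
To make the attack concrete, here is a minimal PyTorch sketch of gradient-matching reconstruction scored by a Gaussian kernel on the gradient difference, in the spirit of the abstract. The model, the soft-label trick, the optimizer settings, and a fixed kernel bandwidth `sigma` are illustrative assumptions; the paper's self-adaptive choice of the bandwidth is not reproduced here, and `gaussian_kernel_distance` and `reconstruct` are hypothetical names.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel_distance(grads_a, grads_b, sigma):
    # 1 - exp(-||gA - gB||^2 / sigma): approaches 0 as the dummy
    # gradients align with the observed gradients.
    sq_diff = sum(((ga - gb) ** 2).sum() for ga, gb in zip(grads_a, grads_b))
    return 1.0 - torch.exp(-sq_diff / sigma)

def reconstruct(model, true_grads, input_shape, num_classes,
                sigma=1.0, steps=300, lr=0.1):
    # Dummy input and soft label, optimized so that the gradients they
    # induce on the (known) model match the publicly shared gradients.
    x = torch.randn(input_shape, requires_grad=True)
    y = torch.randn(1, num_classes, requires_grad=True)
    opt = torch.optim.Adam([x, y], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Probability targets for cross_entropy require PyTorch >= 1.10.
        loss = F.cross_entropy(model(x), F.softmax(y, dim=-1))
        dummy_grads = torch.autograd.grad(loss, model.parameters(),
                                          create_graph=True)
        dist = gaussian_kernel_distance(dummy_grads, true_grads, sigma)
        dist.backward()  # second-order backprop into x and y
        opt.step()
    return x.detach(), y.detach()
```

In use, `true_grads` would be the list of per-parameter gradient tensors intercepted from a victim's update, and `model` a local copy of the shared architecture and weights; the returned `x` is the reconstructed input.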

Related research

04/25/2022
Analysing the Influence of Attack Configurations on the Reconstruction of Medical Images in Federated Learning
The idea of federated learning is to train deep neural network models co...

12/07/2022
Reconstructing Training Data from Model Gradient, Provably
Understanding when and how much a model gradient leaks information about...

10/15/2020
R-GAP: Recursive Gradient Attack on Privacy
Federated learning frameworks have been regarded as a promising approach...

06/16/2023
Just One Byte (per gradient): A Note on Low-Bandwidth Decentralized Language Model Finetuning Using Shared Randomness
Language model training in distributed settings is limited by the commun...

03/11/2021
TAG: Transformer Attack from Gradient
Although federated learning has increasingly gained attention in terms o...

08/10/2021
PRECODE - A Generic Model Extension to Prevent Deep Gradient Leakage
Collaborative training of neural networks leverages distributed data by ...

10/31/2021
Revealing and Protecting Labels in Distributed Training
Distributed learning paradigms such as federated learning often involve ...
