Reconstructing Training Data from Model Gradient, Provably

12/07/2022
by Zihan Wang, et al.

Understanding when and how much a model gradient leaks information about the training samples is an important question in privacy. In this paper, we present a surprising result: even without training on or memorizing the data, we can fully reconstruct the training samples from a single gradient query at a randomly chosen parameter value. We prove identifiability of the training data under mild conditions: for shallow and deep neural networks with a wide range of activation functions. We also present a statistically and computationally efficient algorithm based on tensor decomposition that reconstructs the training data. As a provable attack that reveals sensitive training data, our findings point to severe potential threats to privacy, especially in federated learning.
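To make the leakage concrete, here is a minimal numpy sketch. It is not the paper's algorithm: the two-layer tanh network, the squared loss, and the helper name grad_first_layer are illustrative assumptions, not taken from the paper. The sketch only demonstrates the structural fact the identifiability result builds on: at a single, randomly chosen parameter value, every row of the first-layer gradient is a weighted combination of the training samples, so one gradient query already reveals the exact span of the data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic "training set" the attacker wants to reconstruct.
n, d, m = 3, 8, 64          # samples, input dim, hidden width (illustrative)
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Two-layer network f(x) = sum_j a_j * tanh(w_j . x), queried at a RANDOM
# parameter value -- no training has taken place.
W = rng.standard_normal((m, d))
a = rng.standard_normal(m)

def grad_first_layer(W, a, X, y):
    """Gradient of 0.5 * sum_i (f(x_i) - y_i)^2 w.r.t. the first-layer W."""
    Z = X @ W.T                      # (n, m) pre-activations
    f = np.tanh(Z) @ a               # (n,) network outputs
    r = f - y                        # residuals
    S = 1.0 - np.tanh(Z) ** 2        # sigma'(z), (n, m)
    # Row j of the result is  a_j * sum_i r_i * sigma'(w_j . x_i) * x_i,
    # i.e. a weighted combination of the training samples.
    return (a[None, :] * S * r[:, None]).T @ X   # (m, d)

G = grad_first_layer(W, a, X, y)

# Every gradient row lies in span{x_1, ..., x_n}, so G has rank at most n.
print("numerical rank of gradient:", np.linalg.matrix_rank(G))   # -> n

# Recover the span of the training data from this single gradient query.
_, _, Vt = np.linalg.svd(G)
span = Vt[:n]                        # orthonormal basis for the row space
for i in range(n):
    proj = span.T @ (span @ X[i])
    print(f"sample {i}: residual outside span = {np.linalg.norm(X[i] - proj):.2e}")
```

Recovering the span is weaker than the paper's guarantee, which identifies the individual samples; the tool for that step is tensor decomposition. The sketch below shows a generic such primitive, Jennrich's simultaneous-diagonalization algorithm, applied to a synthetic third-order moment tensor T = sum_i lam_i * x_i (x) x_i (x) x_i that stands in for a gradient-derived tensor (how such a tensor is obtained from gradient queries is the paper's contribution and is not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 3, 8
Xs = rng.standard_normal((n, d))       # hidden "samples" to recover
lam = rng.uniform(0.5, 2.0, size=n)    # positive component weights

# Synthetic moment tensor T = sum_i lam_i * x_i (x) x_i (x) x_i, shape (d, d, d).
T = np.einsum('i,ij,ik,il->jkl', lam, Xs, Xs, Xs)

# Jennrich's algorithm: contract T along two random directions, then
# diagonalize the resulting matrix pencil.
avec, bvec = rng.standard_normal(d), rng.standard_normal(d)
Ma = np.einsum('jkl,l->jk', T, avec)   # sum_i lam_i (a . x_i) x_i x_i^T
Mb = np.einsum('jkl,l->jk', T, bvec)
M = Ma @ np.linalg.pinv(Mb)

# Eigenvectors of M with nonzero eigenvalue are the x_i up to sign/scale,
# with eigenvalues (a . x_i) / (b . x_i).
eigvals, eigvecs = np.linalg.eig(M)
idx = np.argsort(-np.abs(eigvals))[:n]
recovered = np.real(eigvecs[:, idx])

# Each recovered direction should match one true sample up to sign/scale.
for i in range(n):
    cos = np.abs(Xs @ recovered[:, i]) / (
        np.linalg.norm(Xs, axis=1) * np.linalg.norm(recovered[:, i]))
    print(f"direction {i}: best |cosine| with a true sample = {cos.max():.4f}")
```

Up to the inherent sign/scale ambiguity of rank-one components, the recovered eigenvector directions coincide with the hidden samples whenever the x_i are linearly independent and the contracted eigenvalues are distinct, which holds generically for the random directions above.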

Related research

09/14/2020
SAPAG: A Self-Adaptive Privacy Attack From Gradients
Distributed learning such as federated learning or collaborative learnin...

06/15/2022
Reconstructing Training Data from Trained Neural Networks
Understanding to what extent neural networks memorize training data is a...

05/05/2023
Reconstructing Training Data from Multiclass Neural Networks
Reconstructing samples from the training set of trained neural networks ...

06/08/2020
Responsive Web User Interface to Recover Training Data from User Gradients in Federated Learning
Local differential privacy (LDP) is an emerging privacy standard to prot...

06/08/2020
Attacks to Federated Learning: Responsive Web User Interface to Recover Training Data from User Gradients
Local differential privacy (LDP) is an emerging privacy standard to prot...

04/25/2022
Analysing the Influence of Attack Configurations on the Reconstruction of Medical Images in Federated Learning
The idea of federated learning is to train deep neural network models co...

11/05/2021
Reconstructing Training Data from Diverse ML Models by Ensemble Inversion
Model Inversion (MI), in which an adversary abuses access to a trained M...
