Reconstructing Training Data from Trained Neural Networks

06/15/2022
by Niv Haim, et al.

Understanding to what extent neural networks memorize training data is an intriguing question with practical and theoretical implications. In this paper we show that in some cases a significant fraction of the training data can in fact be reconstructed from the parameters of a trained neural network classifier. We propose a novel reconstruction scheme that stems from recent theoretical results on the implicit bias of training neural networks with gradient-based methods. To the best of our knowledge, our results are the first to show that reconstructing a large portion of the actual training samples from a trained neural network classifier is generally possible. This has negative implications for privacy, as it can be used as an attack that reveals sensitive training data. We demonstrate our method on binary MLP classifiers trained on a few standard computer vision datasets.
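The abstract does not spell out the reconstruction scheme, but the implicit-bias results it builds on (gradient-trained homogeneous classifiers converging in direction to a KKT point of the max-margin problem) suggest the following sketch: the trained parameters should approximately satisfy the stationarity condition theta ≈ sum_i lambda_i * y_i * grad_theta f(theta; x_i) with lambda_i ≥ 0, so one can search for candidate samples x_i and coefficients lambda_i that minimize the residual of this identity. The PyTorch code below is a minimal illustration under that assumption; the architecture, sample count m, label guesses, and optimizer settings are all hypothetical and not taken from the paper.

import torch
import torch.nn as nn

# Hypothetical trained binary classifier with a single logit output f(x).
# In a real attack this would be the victim model; here it is a stand-in.
model = nn.Sequential(nn.Linear(784, 1000), nn.ReLU(), nn.Linear(1000, 1))

def kkt_loss(model, x, lam, y):
    # Stationarity residual || theta - sum_i lam_i y_i grad_theta f(x_i) ||^2.
    # One backward pass suffices because
    # grad_theta sum_i c_i f(x_i) = sum_i c_i grad_theta f(x_i).
    params = list(model.parameters())
    s = (lam.clamp(min=0.0) * y * model(x).squeeze(-1)).sum()
    grads = torch.autograd.grad(s, params, create_graph=True)
    return sum(((p.detach() - g) ** 2).sum() for p, g in zip(params, grads))

m = 100                                          # number of candidates (a guess)
x = torch.randn(m, 784, requires_grad=True)      # candidate training samples
lam = torch.rand(m, requires_grad=True)          # dual coefficients, kept >= 0 via clamp
y = torch.cat([torch.ones(m // 2), -torch.ones(m - m // 2)])  # fixed label guesses
opt = torch.optim.Adam([x, lam], lr=0.01)

for step in range(10_000):
    opt.zero_grad()
    model.zero_grad()                            # discard unused second-order grads on theta
    loss = kkt_loss(model, x, lam, y)
    loss.backward()                              # gradients flow only into x and lam
    opt.step()

In a sketch like this, only samples that end up with nonzero lambda_i (i.e., candidates playing the role of margin points) can plausibly be recovered, and one would presumably run many random restarts and keep the candidates with the smallest residual.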


Related research

05/05/2023 · Reconstructing Training Data from Multiclass Neural Networks
Reconstructing samples from the training set of trained neural networks ...

09/26/2019 · Mildly Overparametrized Neural Nets can Memorize Training Data Efficiently
It has been observed (Zhang et al., 2016) that deep neural networks ca...

12/07/2022 · Reconstructing Training Data from Model Gradient, Provably
Understanding when and how much a model gradient leaks information about...

06/10/2023 · Revealing Model Biases: Assessing Deep Neural Networks via Recovered Sample Analysis
This paper proposes a straightforward and cost-effective approach to ass...

03/01/2021 · Computing the Information Content of Trained Neural Networks
How much information does a learning algorithm extract from the training...

06/12/2019 · Does Learning Require Memorization? A Short Tale about a Long Tail
State-of-the-art results on image recognition tasks are achieved using o...

12/19/2018 · A Note on Lazy Training in Supervised Differentiable Programming
In a series of recent theoretical works, it has been shown that strongly...
