Reconstructing Training Data from Trained Neural Networks

06/15/2022
by Niv Haim, et al.

Understanding to what extent neural networks memorize training data is an intriguing question with practical and theoretical implications. In this paper we show that in some cases a significant fraction of the training data can in fact be reconstructed from the parameters of a trained neural network classifier. We propose a novel reconstruction scheme that stems from recent theoretical results about the implicit bias in training neural networks with gradient-based methods. To the best of our knowledge, our results are the first to show that reconstructing a large portion of the actual training samples from a trained neural network classifier is generally possible. This has negative implications for privacy, as it can be used as an attack to reveal sensitive training data. We demonstrate our method for binary MLP classifiers on a few standard computer vision datasets.
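The abstract does not spell out the reconstruction scheme itself. As a rough, non-authoritative sketch of how such an implicit-bias-based attack could be set up: known results on the implicit bias of gradient methods (e.g., Lyu and Li, 2020, for homogeneous networks trained on separable binary data) imply that the trained parameters approximately satisfy a KKT stationarity condition of the max-margin problem, theta ≈ sum_i lambda_i * y_i * grad_theta f(x_i; theta) with lambda_i >= 0, taken over the training samples. One can therefore treat candidate samples x_i and coefficients lambda_i as free variables and minimize the residual of that condition. The PyTorch code below illustrates this idea only; the architecture, input size, number of candidates, and checkpoint path are assumptions made for the example, not details taken from the paper.

```python
import torch
import torch.nn as nn

# Illustrative trained binary MLP classifier; the architecture, input size and
# checkpoint path are assumptions for this sketch, not taken from the paper.
model = nn.Sequential(nn.Linear(784, 1000), nn.ReLU(), nn.Linear(1000, 1))
model.load_state_dict(torch.load("trained_mlp.pt"))  # hypothetical checkpoint
params = list(model.parameters())
theta = [p.detach().clone() for p in params]         # frozen target parameters

m = 100                                               # assumed number of candidates
x = torch.randn(m, 784, requires_grad=True)           # candidate training inputs
y = torch.cat([torch.ones(m // 2), -torch.ones(m // 2)])  # fixed +/-1 labels
lam = torch.rand(m, requires_grad=True)               # dual coefficients, clamped >= 0

opt = torch.optim.Adam([x, lam], lr=1e-2)
for step in range(10_000):
    opt.zero_grad()
    # Accumulate sum_i lam_i * y_i * grad_theta f(x_i; theta), parameter by parameter.
    recon = [torch.zeros_like(t) for t in theta]
    for i in range(m):
        out = model(x[i:i + 1]).squeeze()
        grads = torch.autograd.grad(out, params, create_graph=True)
        recon = [r + lam[i].clamp(min=0) * y[i] * g for r, g in zip(recon, grads)]
    # Stationarity residual: distance between theta and the weighted gradient sum.
    loss = sum(((t - r) ** 2).sum() for t, r in zip(theta, recon))
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        print(f"step {step}: KKT residual {loss.item():.4f}")
```

In this reading, candidates that end up with large lambda values are the ones the residual pushes toward actual (margin-attaining) training samples. The paper reports demonstrating its method for binary MLP classifiers on a few standard computer vision datasets; in a sketch like this, reconstructed candidates would be judged by how closely they match real training images.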

Related research

09/26/2019
Mildly Overparametrized Neural Nets can Memorize Training Data Efficiently
It has been observed (Zhang et al., 2016) that deep neural networks ca...

12/19/2018
A Note on Lazy Training in Supervised Differentiable Programming
In a series of recent theoretical works, it has been shown that strongly...

03/01/2021
Computing the Information Content of Trained Neural Networks
How much information does a learning algorithm extract from the training...

02/05/2022
The Implicit Bias of Gradient Descent on Generalized Gated Linear Networks
Understanding the asymptotic behavior of gradient-descent training of de...

06/12/2019
Does Learning Require Memorization? A Short Tale about a Long Tail
State-of-the-art results on image recognition tasks are achieved using o...

12/19/2019
Optimization for deep learning: theory and algorithms
When and why can a neural network be successfully trained? This article ...

05/20/2022
Unintended memorisation of unique features in neural networks
Neural networks pose a privacy risk due to their propensity to memorise ...