Eternal Sunshine of the Spotless Net: Selective Forgetting in Deep Networks

11/12/2019
by Aditya Golatkar, et al.

We explore the problem of selectively forgetting a particular set of data used for training a deep neural network. While the effects of the data to be forgotten can be hidden from the output of the network, insights may still be gleaned by probing deep into its weights. We propose a method for "scrubbing" the weights clean of information about a particular set of training data. The method does not require retraining from scratch, nor access to the data originally used for training. Instead, the weights are modified so that any probing function of the weights, computed with no knowledge of the random seed used for training, is indistinguishable from the same function applied to the weights of a network trained without the data to be forgotten. This condition is a generalized, weaker form of Differential Privacy. Exploiting ideas related to the stability of stochastic gradient descent, we introduce an upper bound on the amount of information remaining in the weights, which can be estimated efficiently even for deep neural networks.

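The abstract does not spell out the scrubbing procedure itself. As a rough, hedged illustration of the idea, the sketch below shows one plausible instantiation in PyTorch: inject Gaussian noise into the weights, scaled so that directions the retained data barely constrains (where traces of the forgotten cohort could hide) are perturbed the most. The diagonal Fisher approximation, the F^(-1/4) noise scaling, and the helper names (diagonal_fisher, scrub) are all assumptions for illustration, not necessarily the paper's exact method.

```python
# Illustrative sketch of weight "scrubbing" via calibrated noise injection.
# Assumptions (not from the abstract): a diagonal Fisher Information
# approximation computed on the RETAINED data, and a single Gaussian
# noise step scaled by F^(-1/4). The paper's actual procedure may differ.

import torch


def diagonal_fisher(model, retain_loader, loss_fn, device="cpu"):
    """Estimate the diagonal of the Fisher Information Matrix from
    squared gradients of the loss on the data we want to keep."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.to(device).eval()
    n_batches = 0
    for x, y in retain_loader:
        model.zero_grad()
        loss = loss_fn(model(x.to(device)), y.to(device))
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
        n_batches += 1
    return {n: f / max(n_batches, 1) for n, f in fisher.items()}


@torch.no_grad()
def scrub(model, fisher, sigma=0.1, eps=1e-8):
    """Perturb each weight with noise inversely related to its Fisher
    value: weights the retained data barely constrains (where traces of
    the forgotten cohort could hide) receive the largest perturbation."""
    for n, p in model.named_parameters():
        p.add_(torch.randn_like(p) * sigma / (fisher[n] + eps) ** 0.25)
```

Choosing sigma trades forgetting against utility: larger noise removes more information about the forgotten cohort but also degrades accuracy on the retained data.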

Related research

Mixed-Privacy Forgetting in Deep Networks (12/24/2020)
We show that the influence of a subset of the training samples can be re...

Robust Differentially Private Training of Deep Neural Networks (06/19/2020)
Differentially private stochastic gradient descent (DPSGD) is a variatio...

Convergence of Deep Neural Networks to a Hierarchical Covariance Matrix Decomposition (03/14/2017)
We show that in a deep neural network trained with ReLU, the low-lying l...

Forgetting Outside the Box: Scrubbing Deep Networks of Information Accessible from Input-Output Observations (03/05/2020)
We describe a procedure for removing dependency on a cohort of training ...

On the Stability of Deep Networks (12/18/2014)
In this work we study the properties of deep neural networks (DNN) with ...

Computing the Information Content of Trained Neural Networks (03/01/2021)
How much information does a learning algorithm extract from the training...
