Eternal Sunshine of the Spotless Net: Selective Forgetting in Deep Networks
We explore the problem of selectively forgetting a particular set of data used for training a deep neural network. While the effects of the data to be forgotten can be hidden from the output of the network, insights may still be gleaned by probing deep into its weights. We propose a method for "scrubbing" the weights clean of information about a particular set of training data. The method does not require retraining from scratch, nor access to the data originally used for training. Instead, the weights are modified so that any probing function of the weights, computed with no knowledge of the random seed used for training, is indistinguishable from the same function applied to the weights of a network trained without the data to be forgotten. This condition is a generalized and weaker form of Differential Privacy. Exploiting ideas related to the stability of stochastic gradient descent, we introduce an upper bound on the amount of information remaining in the weights, which can be estimated efficiently even for deep neural networks.
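To make the idea concrete, here is a minimal, hedged sketch of the kind of scrubbing the abstract describes, specialized to a quadratic (ridge regression) loss where the update has a closed form. All names and the noise scale `sigma` are illustrative assumptions, not the paper's actual procedure: the deterministic part is a Newton step toward the minimizer of the loss on the retained data, and Gaussian noise (scaled by the inverse Hessian, standing in for the inverse Fisher information) masks whatever residual information a probing function might extract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 samples, 5 features; the last 20 samples are the cohort to forget.
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=100)
X_keep, y_keep = X[:80], y[:80]

# Weights trained on ALL data (ridge-regularized least squares).
lam = 1e-2
w_all = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)

# Hessian and gradient of the retained-data loss, evaluated at w_all.
H_keep = X_keep.T @ X_keep + lam * np.eye(5)
g_keep = H_keep @ w_all - X_keep.T @ y_keep

# Newton "scrubbing" step: because the loss is exactly quadratic here, this
# lands precisely on the weights a retrain without the forgotten data yields.
w_scrubbed = w_all - np.linalg.solve(H_keep, g_keep)

# Add noise shaped by the inverse Hessian (illustrative stand-in for the
# inverse Fisher) to hide residual information that, for a real non-quadratic
# deep-network loss, the deterministic step alone would leave behind.
sigma = 1e-3
w_private = w_scrubbed + rng.multivariate_normal(
    np.zeros(5), sigma * np.linalg.inv(H_keep)
)

# Sanity check: the deterministic part matches retraining from scratch.
w_retrain = np.linalg.solve(H_keep, X_keep.T @ y_keep)
print(np.allclose(w_scrubbed, w_retrain))  # True
```

In a deep network the loss is not quadratic, so no single step recovers the retrained weights exactly; that gap is precisely why the noise term, and the paper's upper bound on remaining information, matter.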