Eternal Sunshine of the Spotless Net: Selective Forgetting in Deep Neural Networks

11/12/2019
by Aditya Golatkar et al.

We explore the problem of selectively forgetting a particular set of data used for training a deep neural network. While the effects of the data to be forgotten can be hidden from the output of the network, insights may still be gleaned by probing deep into its weights. We propose a method for “scrubbing” the weights clean of information about a particular set of training data. The method requires neither retraining from scratch nor access to the data originally used for training. Instead, the weights are modified so that any probing function of the weights, computed with no knowledge of the random seed used for training, is indistinguishable from the same function applied to the weights of a network trained without the data to be forgotten. This condition is weaker than Differential Privacy, which seeks protection against adversaries with access to the entire training process, and is more appropriate for deep learning, where a potential adversary might have access to the trained network but generally has no knowledge of how it was trained.
