Forgetting Outside the Box: Scrubbing Deep Networks of Information Accessible from Input-Output Observations

03/05/2020
by Aditya Golatkar, et al.

We describe a procedure for removing dependency on a cohort of training data from a trained deep network that improves upon and generalizes previous methods to different readout functions and can be extended to ensure forgetting in the activations of the network. We introduce a new bound on how much information can be extracted per query about the forgotten cohort from a black-box network for which only the input-output behavior is observed. The proposed forgetting procedure has a deterministic part derived from the differential equations of a linearized version of the model, and a stochastic part that ensures information destruction by adding noise tailored to the geometry of the loss landscape. We exploit the connections between the activation and weight dynamics of a DNN inspired by Neural Tangent Kernels to compute the information in the activations.
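The procedure described above has two parts: a deterministic weight shift derived from the dynamics of a linearized (NTK-style) model, and a stochastic part that adds noise shaped by the geometry of the loss landscape. As a minimal sketch, assuming a quadratic approximation of the retained-data loss with Hessian `hessian_retain`, one can picture the scrub as a Newton-style step that removes the forgotten cohort's pull on the weights, plus Gaussian noise with covariance proportional to the inverse Hessian (so flat directions, where residual information could hide, receive more noise). All names, the damping term, and the noise scale `sigma` here are illustrative, not the paper's exact algorithm:

```python
import numpy as np

def scrub(w, grad_forget, hessian_retain, sigma=0.01, damping=1e-3, rng=None):
    """Hedged sketch of a quadratic-approximation scrubbing update.

    Deterministic part: a Newton-style step that removes the forgotten
    cohort's contribution to the weights under a quadratic model of the
    retained-data loss.
    Stochastic part: Gaussian noise with covariance ~ sigma^2 * H^{-1},
    i.e. larger noise along flat directions of the loss landscape.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = w.shape[0]
    # Damping keeps the Hessian invertible when it is near-singular.
    H = hessian_retain + damping * np.eye(d)
    H_inv = np.linalg.inv(H)
    # Deterministic shift: undo the forgotten cohort's pull on the weights.
    w_det = w - H_inv @ grad_forget
    # Stochastic part: sample noise with covariance sigma^2 * H^{-1}
    # via a Cholesky factor of the inverse Hessian.
    L = np.linalg.cholesky(H_inv)
    noise = sigma * (L @ rng.standard_normal(d))
    return w_det + noise
```

With `sigma=0` this reduces to the purely deterministic step `w - H^{-1} g_f`; the noise term is what provides the information-destruction guarantee against a black-box observer.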

Related research

- 12/22/2020 · Selective Forgetting of Deep Networks at a Finer Level than Samples. Selective forgetting or removing information from deep neural networks (...
- 12/24/2020 · Mixed-Privacy Forgetting in Deep Networks. We show that the influence of a subset of the training samples can be re...
- 11/12/2019 · Eternal Sunshine of the Spotless Net: Selective Forgetting in Deep Networks. We explore the problem of selectively forgetting a particular set of dat...
- 01/17/2022 · Evaluating Inexact Unlearning Requires Revisiting Forgetting. Existing works in inexact machine unlearning focus on achieving indistin...
- 10/10/2019 · Coloring the Black Box: Visualizing neural network behavior with a self-introspective model. The following work presents how autoencoding all the possible hidden act...
- 07/16/2017 · Theoretical insights into the optimization landscape of over-parameterized shallow neural networks. In this paper we study the problem of learning a shallow artificial neur...
- 02/22/2023 · Considering Layerwise Importance in the Lottery Ticket Hypothesis. The Lottery Ticket Hypothesis (LTH) showed that by iteratively training ...
