Evaluating Inexact Unlearning Requires Revisiting Forgetting

01/17/2022
by   Shashwat Goel, et al.

Existing work on inexact machine unlearning focuses on achieving indistinguishability from models retrained after removing the deletion set. We argue that indistinguishability is unnecessary, infeasible to measure, and that its practical relaxations can be insufficient. We redefine the goal of unlearning as forgetting all information specific to the deletion set while maintaining high utility and resource efficiency. Motivated by the practical application of removing mislabelled and biased data from models, we introduce a novel test, Interclass Confusion (IC), to measure the degree of forgetting. It lets us analyze two aspects of forgetting: (i) memorization and (ii) property generalization. Despite being a black-box test, IC can investigate whether information from the deletion set was erased down to the early layers of the network. We empirically show that two simple unlearning methods, exact-unlearning and catastrophic-forgetting the final k layers of a network, scale well to large deletion sets, unlike prior unlearning methods; k controls the forgetting-efficiency tradeoff at similar utility. Overall, we believe our formulation of unlearning and the IC test will guide the design of better unlearning algorithms.
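The "final k layers" idea can be sketched for k = 1: keep every layer except the last frozen, discard the last layer, and refit it on the retained data only, so the refit parameters never see the deleted examples. A minimal sketch, assuming a toy ReLU MLP; the layer sizes, synthetic data, and the least-squares refit (a stand-in for the paper's SGD retraining) are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_features(layers, x):
    """Run x through all layers except the last (ReLU MLP, no biases)."""
    for w in layers[:-1]:
        x = np.maximum(x @ w, 0.0)
    return x

def predict(layers, x):
    return forward_features(layers, x) @ layers[-1]

def exact_unlearn_last_layer(layers, x_retain, y_retain):
    """EU-1 sketch: freeze earlier layers, discard the final layer,
    and refit it on retained data only. The least-squares solve is an
    illustrative stand-in for retraining the layer with SGD."""
    frozen = [w.copy() for w in layers[:-1]]
    feats = forward_features(layers, x_retain)
    # The refit weights depend only on the retained examples, so no
    # information from the deletion set flows into the new final layer.
    w_new, *_ = np.linalg.lstsq(feats, y_retain, rcond=None)
    return frozen + [w_new]

# Toy setup: pretend `layers` was trained on retain + delete data.
x_all = rng.normal(size=(64, 8))
layers = [rng.normal(0, 0.5, (8, 16)), rng.normal(0, 0.5, (16, 1))]
x_retain = x_all[:48]
y_retain = predict(layers, x_retain) + rng.normal(0, 0.01, (48, 1))

unlearned = exact_unlearn_last_layer(layers, x_retain, y_retain)
# Earlier layers are untouched; only the final layer changed.
assert all(np.allclose(a, b) for a, b in zip(unlearned[:-1], layers[:-1]))
assert not np.allclose(unlearned[-1], layers[-1])
```

The catastrophic-forgetting variant (CF-k) differs only in that the last k layers keep their trained weights and are finetuned on the retained data rather than reinitialized; larger k erases more deletion-set information at higher retraining cost, which is the forgetting-efficiency tradeoff the abstract describes.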


Related research:

- 12/03/2020: Online Forgetting Process for Linear Regression Models. Motivated by the EU's "Right To Be Forgotten" regulation, we initiate a ...
- 07/14/2020: Anatomy of Catastrophic Forgetting: Hidden Representations and Task Semantics. A central challenge in developing versatile machine learning systems is ...
- 12/12/2018: An Empirical Study of Example Forgetting during Deep Neural Network Learning. Inspired by the phenomenon of catastrophic forgetting, we investigate th...
- 12/22/2020: Selective Forgetting of Deep Networks at a Finer Level than Samples. Selective forgetting or removing information from deep neural networks (...
- 03/05/2020: Forgetting Outside the Box: Scrubbing Deep Networks of Information Accessible from Input-Output Observations. We describe a procedure for removing dependency on a cohort of training ...
- 10/30/2017: Forgetting the Forgotten with Letheia, Concealing Content Deletion from Persistent Observers. Most people are susceptible to oversharing their personal information pu...
- 06/15/2021: Bridge Networks. Despite rapid progress, current deep learning methods face a number of c...
