Adaptive Machine Unlearning

by Varun Gupta et al.

Data deletion algorithms aim to remove the influence of deleted data points from trained models at a cheaper computational cost than fully retraining those models. However, for sequences of deletions, most prior work in the non-convex setting gives valid guarantees only for sequences chosen independently of the published models. If people choose to delete their data as a function of the published models (because they don't like what the models reveal about them, for example), then the update sequence is adaptive. In this paper, we give a general reduction from deletion guarantees against adaptive sequences to deletion guarantees against non-adaptive sequences, using differential privacy and its connection to max information. Combined with ideas from prior work that give guarantees for non-adaptive deletion sequences, this leads to extremely flexible algorithms able to handle arbitrary model classes and training methodologies, with strong provable deletion guarantees for adaptive deletion sequences. We show in theory how prior work for non-convex models fails against adaptive deletion sequences, and use this intuition to design a practical attack against the SISA algorithm of Bourtoule et al. [2021] on CIFAR-10, MNIST, and Fashion-MNIST.
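The adaptive-vs-non-adaptive distinction can be illustrated with a minimal sketch of a SISA-style sharded learner. Everything here is a toy stand-in, not the paper's construction: each "model" is just the mean of its shard, the published model averages the shard models, and exact unlearning retrains only the affected shard. The adaptive adversary then picks its deletion as a function of the published model, choosing the point whose removal shifts that model the most.

```python
# Toy SISA-style sharded learner. Each shard trains an independent "model"
# (here simply the shard mean, standing in for a real learner); predictions
# aggregate the per-shard models. Deleting a point retrains only its shard,
# which is what makes sharded unlearning cheap. All names and data are
# illustrative assumptions, not taken from the paper.

def train_shard(shard):
    return sum(shard) / len(shard)

def aggregate(models):
    return sum(models) / len(models)

def preview_after_delete(shards, shard_idx, point):
    """Published model if `point` were deleted from shard `shard_idx`."""
    trial = [list(s) for s in shards]
    trial[shard_idx].remove(point)
    return aggregate([train_shard(s) for s in trial])

shards = [[1.0, 2.0, 3.0], [10.0, 20.0, 60.0]]
models = [train_shard(s) for s in shards]
published = aggregate(models)  # the adversary observes this

# Adaptive deletion: pick the point whose removal would shift the published
# model the most -- a choice that is itself a function of `published`,
# unlike a non-adaptive (e.g. uniformly random) deletion request.
shard_idx, point = max(
    ((i, x) for i, s in enumerate(shards) for x in s if len(s) > 1),
    key=lambda ix: abs(published - preview_after_delete(shards, *ix)),
)

# Honor the deletion by retraining only the affected shard (exact unlearning).
shards[shard_idx].remove(point)
models[shard_idx] = train_shard(shards[shard_idx])
republished = aggregate(models)
```

Even though each individual retrain is exact, the adversary's choice correlates the deletion request with the published model, which is precisely the adaptivity that breaks non-adaptive deletion guarantees.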




Forget Unlearning: Towards True Data-Deletion in Machine Learning

Unlearning has emerged as a technique to efficiently erase information o...

Approximate Data Deletion from Machine Learning Models: Algorithms and Evaluations

Deleting data from a trained machine learning (ML) model is a critical t...

Descent-to-Delete: Gradient-Based Methods for Machine Unlearning

We study the data deletion problem for convex models. By leveraging tech...

Approximate Data Deletion in Generative Models

Users have the right to have their data deleted by third-party learned s...

Tutorial on algebraic deletion correction codes

The deletion channel is known to be a notoriously difficult channel to de...

To Be Forgotten or To Be Fair: Unveiling Fairness Implications of Machine Unlearning Methods

The right to be forgotten (RTBF) is motivated by the desire of people no...

Algorithms that Approximate Data Removal: New Results and Limitations

We study the problem of deleting user data from machine learning models ...
