Fast Machine Unlearning Without Retraining Through Selective Synaptic Dampening

08/15/2023
by Jack Foster, et al.

Machine unlearning, the ability of a machine learning model to forget, is becoming increasingly important to comply with data privacy regulations, as well as to remove harmful, manipulated, or outdated information. The key challenge lies in forgetting specific information while protecting model performance on the remaining data. While current state-of-the-art methods perform well, they typically require some level of retraining over the retained data in order to protect or restore model performance. This adds computational overhead and mandates that the training data remain available and accessible, which may not be feasible. In contrast, other methods employ a retrain-free paradigm; however, these approaches are prohibitively computationally expensive and do not perform on par with their retrain-based counterparts. We present Selective Synaptic Dampening (SSD), a novel two-step, post hoc, retrain-free approach to machine unlearning that is fast, performant, and does not require long-term storage of the training data. First, SSD uses the Fisher information matrix of the training and forgetting data to select parameters that are disproportionately important to the forget set. Second, SSD induces forgetting by dampening these parameters in proportion to their relative importance to the forget set with respect to the wider training data. We evaluate our method against several existing unlearning methods in a range of experiments using ResNet18 and Vision Transformer. Results show that the performance of SSD is competitive with retrain-based post hoc methods, demonstrating the viability of retrain-free post hoc unlearning approaches.
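To make the two steps concrete, below is a minimal PyTorch sketch of the idea as described in the abstract: estimate parameter importances on the full training data and on the forget set, select parameters whose forget-set importance is disproportionately high, and dampen them in proportion to the importance ratio. The diagonal Fisher approximation (mean squared gradients), the function names, and the hyperparameters alpha (selection threshold) and lam (dampening constant) are illustrative assumptions for exposition, not the paper's exact formulation.

    import torch

    def diagonal_fisher(model, loader, loss_fn, device="cpu"):
        """Approximate the diagonal of the Fisher information matrix
        as the mean squared gradient of the loss over a dataset."""
        fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
        model.eval()
        n_batches = 0
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            model.zero_grad()
            loss_fn(model(x), y).backward()
            for n, p in model.named_parameters():
                if p.grad is not None:
                    fisher[n] += p.grad.detach() ** 2
            n_batches += 1
        return {n: f / max(n_batches, 1) for n, f in fisher.items()}

    @torch.no_grad()
    def selective_synaptic_dampening(model, fisher_full, fisher_forget,
                                     alpha=10.0, lam=1.0):
        """Step 1: select parameters whose forget-set importance exceeds
        alpha times their importance on the full training data.
        Step 2: dampen the selected parameters in proportion to that
        importance ratio."""
        for n, p in model.named_parameters():
            imp_f, imp_d = fisher_forget[n], fisher_full[n]
            selected = imp_f > alpha * imp_d
            # The scale factor shrinks toward 0 as forget-set importance
            # dominates; clamping at 1 ensures no parameter is amplified.
            beta = torch.clamp(lam * imp_d / (imp_f + 1e-12), max=1.0)
            p[selected] *= beta[selected]

Under these assumptions, the whole procedure needs only one gradient pass over each dataset to estimate the Fisher diagonals and a single in-place pass over the parameters, which is consistent with the abstract's claim that the method is fast and fully post hoc.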


Related research

06/10/2022 - Explaining Neural Networks without Access to Training Data
06/09/2023 - One-Shot Machine Unlearning with Mnemonic Code
12/22/2020 - Selective Forgetting of Deep Networks at a Finer Level than Samples
02/22/2021 - Post-hoc Overall Survival Time Prediction from Brain MRI
08/14/2022 - Forgetting Fast in Recommender Systems
04/25/2023 - SAFE: Machine Unlearning With Shard Graphs
06/11/2020 - Understanding Regularisation Methods for Continual Learning
