One-Shot Machine Unlearning with Mnemonic Code

06/09/2023
by Tomoya Yamashita, et al.

Deep learning has achieved significant improvements in accuracy and has been applied to a wide range of fields. With its spread, a new problem has emerged: trained deep learning models can retain information that is undesirable from an ethical standpoint. This problem must be resolved if deep learning is to be used for sensitive decisions such as hiring or sentencing. Machine unlearning (MU) is the research area that responds to such demands. MU aims to make a trained deep learning model forget undesirable training data. A naive MU approach is to re-train the whole model on the training data with the undesirable data removed. However, re-training the whole model can take a huge amount of time and consume significant computational resources. To make MU more practical, a simple yet effective MU method is required. In this paper, we propose a one-shot MU method that requires no additional training. To achieve one-shot MU, we add noise to the model parameters that are sensitive to the undesirable information. Our method uses the Fisher information matrix (FIM) to estimate which parameters are sensitive. Existing methods typically evaluate the FIM on the training data; in contrast, we avoid the need to retain the training data by computing the FIM from class-specific synthetic signals called mnemonic codes. Extensive experiments on artificial and natural datasets demonstrate that our method outperforms existing methods.
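
The idea can be pictured as a single gradient pass: run one backward pass on the mnemonic code for the class to be forgotten, treat the squared gradients as a diagonal FIM estimate of parameter sensitivity, and perturb only the most sensitive parameters with noise. The sketch below is a minimal illustration under these assumptions, not the authors' implementation; the names (one_shot_unlearn, mnemonic_code, forget_label, noise_scale, top_ratio) and the top-k selection rule are illustrative choices.

    import torch
    import torch.nn.functional as F

    def one_shot_unlearn(model, mnemonic_code, forget_label,
                         noise_scale=0.1, top_ratio=0.01):
        """Perturb the parameters most sensitive to the class to forget.

        mnemonic_code: class-specific synthetic input, shape (1, C, H, W)
        forget_label:  class index to forget, shape (1,)
        (hypothetical signature; details differ from the paper)
        """
        model.zero_grad()
        loss = F.cross_entropy(model(mnemonic_code), forget_label)
        loss.backward()  # gradients w.r.t. the forgetting target

        with torch.no_grad():
            for p in model.parameters():
                if p.grad is None:
                    continue
                fim_diag = p.grad.pow(2)                 # diagonal FIM estimate
                k = max(1, int(top_ratio * fim_diag.numel()))
                threshold = fim_diag.flatten().topk(k).values.min()
                mask = (fim_diag >= threshold).float()   # most sensitive entries
                p.add_(noise_scale * torch.randn_like(p) * mask)
        model.zero_grad()
        return model

In this sketch, top_ratio controls how many parameters are perturbed and noise_scale the strength of the perturbation; the paper's actual noise design and sensitivity criterion may differ.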


