LegoNet: A Fast and Exact Unlearning Architecture

10/28/2022
by Sihao Yu, et al.

Machine unlearning aims to erase the impact of specific training samples from a trained model upon deletion requests. Re-training the model on the retained data after deletion is effective but not efficient, due to the huge number of model parameters and re-training samples. A natural way to speed this up is to reduce the number of parameters and samples involved. However, such a strategy typically leads to a loss in model performance, which poses the challenge of increasing unlearning efficiency while maintaining acceptable performance. In this paper, we present a novel network, namely LegoNet, which adopts the framework of “fixed encoder + multiple adapters”. We fix the encoder (the backbone for representation learning) of LegoNet to reduce the parameters that need to be re-trained during unlearning. Since the encoder occupies a major part of the model parameters, unlearning efficiency is significantly improved. However, fixing the encoder empirically leads to a significant performance drop. To compensate for this loss, we adopt an ensemble of multiple adapters, which are independent sub-models that infer predictions from the encoding (the output of the encoder). Furthermore, we design an activation mechanism for the adapters to further trade off unlearning efficiency against model performance. This mechanism guarantees that each sample can impact only a few adapters, so during unlearning, both the parameters and the samples that need to be re-trained are reduced. Empirical experiments verify that LegoNet accomplishes fast and exact unlearning while maintaining acceptable performance, outperforming unlearning baselines overall.
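To make the “fixed encoder + multiple adapters” idea concrete, here is a minimal sketch in a PyTorch-style setup. It is illustrative only, not the authors' implementation: the class name, adapter shapes, and the hash-based routing rule in `active_adapters` are assumptions standing in for whatever activation mechanism the paper actually uses; the sketch only shows the structural point that the encoder is frozen and each sample touches just a few small adapters.

```python
# Illustrative sketch only -- not the authors' code. Assumes PyTorch and a
# hypothetical deterministic routing rule that maps each sample to k adapters,
# mimicking the "each sample impacts very few adapters" property above.
import torch
import torch.nn as nn


class LegoNetSketch(nn.Module):
    def __init__(self, encoder: nn.Module, enc_dim: int, num_classes: int,
                 num_adapters: int = 8, k_active: int = 2):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():    # fixed encoder: never re-trained
            p.requires_grad = False
        # Independent adapters: small sub-models that predict from the encoding.
        self.adapters = nn.ModuleList(
            nn.Sequential(nn.Linear(enc_dim, 128), nn.ReLU(),
                          nn.Linear(128, num_classes))
            for _ in range(num_adapters)
        )
        self.k_active = k_active

    def active_adapters(self, sample_id: int) -> list:
        # Hypothetical activation mechanism: each sample deterministically
        # activates k distinct adapters, so deleting the sample only requires
        # re-training those k adapters on their retained samples.
        n = len(self.adapters)
        return [(sample_id * 31 + i) % n for i in range(self.k_active)]

    def forward(self, x: torch.Tensor, sample_ids: list) -> torch.Tensor:
        with torch.no_grad():                  # encoder output, kept frozen
            z = self.encoder(x)
        outs = []
        for zi, sid in zip(z, sample_ids):
            idxs = self.active_adapters(sid)
            # Ensemble only the few adapters activated for this sample.
            logits = torch.stack([self.adapters[j](zi) for j in idxs]).mean(0)
            outs.append(logits)
        return torch.stack(outs)
```

Under this layout, a deletion request is handled by re-initializing and re-training only the handful of adapters the deleted sample activated, using only the retained samples routed to them, while the frozen encoder and all other adapters are untouched.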


