Disposable Transfer Learning for Selective Source Task Unlearning

08/19/2023
by Seunghee Koh, et al.

Transfer learning is widely used to train deep neural networks (DNNs) on top of a powerful pre-trained representation. Even after the pre-trained model is adapted to the target task, the feature extractor retains its representation performance to some extent. Since the performance of a pre-trained model can be considered the private property of its owner, it is natural to seek exclusive rights to the generalized performance of the pre-trained weights. To address this issue, we suggest a new paradigm of transfer learning called disposable transfer learning (DTL), which disposes of only the source task without degrading performance on the target task. To achieve knowledge disposal, we propose a novel loss named Gradient Collision loss (GC loss). GC loss selectively unlearns source knowledge by steering the gradient vectors of different mini-batches in opposing directions. Whether the model has successfully unlearned the source task is measured by piggyback learning accuracy (PL accuracy), which estimates the vulnerability to knowledge leakage by retraining the scrubbed model on a subset of the source data or on new downstream data. We demonstrate that GC loss is an effective approach to the DTL problem: a model trained with GC loss retains its performance on the target task while showing a significantly reduced PL accuracy.
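The abstract describes GC loss only at a high level: it pushes the gradients of different mini-batches on the source task into conflicting directions. One natural way to instantiate that idea is to penalize the cosine similarity between two flattened mini-batch gradient vectors, so that minimizing the loss drives the gradients apart. The sketch below is an assumption for illustration, not the paper's exact formulation; the function names (`cosine_similarity`, `gradient_collision_loss`) and the choice of cosine similarity as the collision measure are hypothetical.

```python
import math

def cosine_similarity(g1, g2):
    # Cosine similarity between two flattened gradient vectors.
    dot = sum(a * b for a, b in zip(g1, g2))
    n1 = math.sqrt(sum(a * a for a in g1))
    n2 = math.sqrt(sum(b * b for b in g2))
    return dot / (n1 * n2 + 1e-12)  # epsilon guards against zero gradients

def gradient_collision_loss(grads_a, grads_b):
    # Hypothetical GC-style objective: minimizing this value pushes the
    # two mini-batch gradients toward opposing directions (cosine -> -1),
    # disrupting the representation they share for the source task.
    return cosine_similarity(grads_a, grads_b)

# Two source-task mini-batch gradients that currently point the same way
# incur a high collision loss, so the optimizer is driven to separate them:
ga = [1.0, 2.0, -0.5]
gb = [0.9, 2.1, -0.4]
print(round(gradient_collision_loss(ga, gb), 3))  # close to 1.0
```

In a real DNN this would be applied to per-mini-batch gradients obtained from the training framework (e.g. via a double-backward pass), jointly with the ordinary target-task loss so that target performance is preserved while source knowledge is scrubbed.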


Related research

TransTailor: Pruning the Pre-trained Model for Improved Transfer Learning (03/02/2021)
The increasing of pre-trained models has significantly facilitated the p...

Optimal transfer protocol by incremental layer defrosting (03/02/2023)
Transfer learning is a powerful tool enabling model training with limite...

Transfer Learning via Test-Time Neural Networks Aggregation (06/27/2022)
It has been demonstrated that deep neural networks outperform traditiona...

A Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning (04/08/2019)
Due to the lack of enough training data and high computational cost to t...

PAC-Net: A Model Pruning Approach to Inductive Transfer Learning (06/12/2022)
Inductive transfer learning aims to learn from a small amount of trainin...

TransferD2: Automated Defect Detection Approach in Smart Manufacturing using Transfer Learning Techniques (02/26/2023)
Quality assurance is crucial in the smart manufacturing industry as it i...

Scalable Weight Reparametrization for Efficient Transfer Learning (02/26/2023)
This paper proposes a novel, efficient transfer learning method, called ...
