Learning with Recoverable Forgetting

07/17/2022
by   Jingwen Ye, et al.

Life-long learning aims to learn a sequence of tasks without forgetting previously acquired knowledge. However, the involved training data may not remain legitimate to retain for the lifetime of the model, due to privacy or copyright reasons. In practical scenarios, for instance, the model owner may wish to enable or disable the knowledge of specific tasks or specific samples from time to time. Such flexible control over knowledge transfer has unfortunately been largely overlooked in previous incremental or decremental learning methods, even at the problem-setup level. In this paper, we explore a novel learning scheme, termed Learning wIth Recoverable Forgetting (LIRF), that explicitly handles task- or sample-specific knowledge removal and recovery. Specifically, LIRF introduces two schemes, namely knowledge deposit and withdrawal, which allow user-designated knowledge to be isolated from a pre-trained network and injected back when necessary. During knowledge deposit, the specified knowledge is extracted from the target network and stored in a deposit module, while the insensitive or general knowledge of the target network is preserved and further augmented. During knowledge withdrawal, the deposited knowledge is added back to the target network. The deposit and withdrawal processes require only a few epochs of finetuning on the removal data, ensuring both data and time efficiency. We conduct experiments on several datasets and demonstrate that the proposed LIRF strategy yields encouraging results with strong generalization capability.
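To make the deposit/withdrawal workflow concrete, below is a minimal PyTorch-style sketch. The names (KnowledgeBank, deposit, withdraw) and the uniform-prediction suppression loss are illustrative assumptions for this example only, not the paper's actual modules or objectives; LIRF's real deposit module and losses are described in the full text.

```python
# Illustrative sketch only: a naive stand-in for the described idea of splitting
# a pretrained classifier into a retained "target" network and a detachable
# "deposit" module. All names and losses here are assumptions, not LIRF itself.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class KnowledgeBank:
    """Holds the detachable ("deposited") knowledge for the removed classes."""
    def __init__(self, deposit_module: nn.Module, removed_classes):
        self.deposit_module = deposit_module
        self.removed_classes = list(removed_classes)


def deposit(pretrained: nn.Module, removal_loader, removed_classes,
            epochs: int = 3, lr: float = 1e-3):
    """Return (target_net, bank): target_net keeps the general knowledge,
    bank stores the knowledge of the user-designated (removed) classes."""
    target_net = copy.deepcopy(pretrained)      # model the owner keeps serving
    deposit_module = copy.deepcopy(pretrained)  # naive stand-in deposit container
    opt = torch.optim.SGD(target_net.parameters(), lr=lr)

    target_net.train()
    for _ in range(epochs):                     # "a few epochs of finetuning"
        for x, _ in removal_loader:             # only the removal data is needed
            log_probs = F.log_softmax(target_net(x), dim=1)
            # Push predictions on the removal data toward a uniform distribution,
            # i.e. erase the designated knowledge from the target network.
            uniform = torch.full_like(log_probs, 1.0 / log_probs.size(1))
            loss = F.kl_div(log_probs, uniform, reduction="batchmean")
            opt.zero_grad()
            loss.backward()
            opt.step()

    return target_net, KnowledgeBank(deposit_module, removed_classes)


def withdraw(target_net: nn.Module, bank: KnowledgeBank) -> nn.Module:
    """Recover full capability by routing removed classes to the deposit module."""
    class Recovered(nn.Module):
        def __init__(self):
            super().__init__()
            self.target = target_net
            self.deposit = bank.deposit_module

        def forward(self, x):
            out = self.target(x).clone()
            out[:, bank.removed_classes] = self.deposit(x)[:, bank.removed_classes]
            return out

    return Recovered()
```

In this sketch the deposit module is simply a frozen copy of the pretrained model; in LIRF the deposited knowledge is meant to be stored compactly and the general knowledge of the target network is further augmented, which this toy example does not attempt.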


Related research

02/26/2023 · Knowledge Restore and Transfer for Multi-label Class-Incremental Learning
Current class-incremental learning research mainly focuses on single-lab...

06/03/2019 · Random Path Selection for Incremental Learning
Incremental life-long learning is a main challenge towards the long-stan...

04/17/2019 · A Multi-Task Learning Framework for Overcoming the Catastrophic Forgetting in Automatic Speech Recognition
Recently, data-driven based Automatic Speech Recognition (ASR) systems h...

03/19/2023 · Partial Network Cloning
In this paper, we study a novel task that enables partial knowledge tran...

05/17/2021 · Class-Incremental Few-Shot Object Detection
Conventional detection networks usually need abundant labeled training s...

01/27/2023 · Learning to Unlearn: Instance-wise Unlearning for Pre-trained Classifiers
Since the recent advent of regulations for data protection (e.g., the Ge...

12/15/2021 · Forgiveness is an Adaptation in Iterated Prisoner's Dilemma with Memory
The Prisoner's Dilemma is used to represent many real life phenomena whe...
