
Learning with Recoverable Forgetting

by Jingwen Ye et al.
National University of Singapore
Zhejiang University

Life-long learning aims at learning a sequence of tasks without forgetting the previously acquired knowledge. However, the involved training data may not remain legitimate to use throughout the model's lifetime, due to privacy or copyright reasons. In practical scenarios, for instance, the model owner may wish to enable or disable the knowledge of specific tasks or specific samples from time to time. Such flexible control over knowledge transfer, unfortunately, has been largely overlooked in previous incremental or decremental learning methods, even at the problem-setup level. In this paper, we explore a novel learning scheme, termed Learning wIth Recoverable Forgetting (LIRF), that explicitly handles task- or sample-specific knowledge removal and recovery. Specifically, LIRF introduces two innovative schemes, namely knowledge deposit and withdrawal, which allow user-designated knowledge to be isolated from a pre-trained network and injected back when necessary. During the knowledge deposit process, the specified knowledge is extracted from the target network and stored in a deposit module, while the insensitive or general knowledge of the target network is preserved and further augmented. During knowledge withdrawal, the removed knowledge is added back to the target network. The deposit and withdrawal processes require only a few epochs of finetuning on the removal data, ensuring both data and time efficiency. We conduct experiments on several datasets and demonstrate that the proposed LIRF strategy yields encouraging results with gratifying generalization capability.
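The deposit-and-withdrawal idea can be illustrated with a deliberately simplified sketch. This is not the paper's actual method (LIRF operates on a pre-trained network with a learned deposit module and finetuning); here we merely mimic the interface with a toy linear classifier whose weight rows for "deposited" classes are moved into external storage and later restored. All names (`LIRFToy`, `deposit_knowledge`, `withdraw_knowledge`) are illustrative assumptions, not from the paper.

```python
import numpy as np

class LIRFToy:
    """Toy illustration of the deposit/withdrawal interface (NOT the
    paper's method): class-specific 'knowledge' is modeled as the rows
    of a linear classifier's weight matrix."""

    def __init__(self, num_classes, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(num_classes, dim))
        self.deposit = {}  # class index -> stored weight row

    def deposit_knowledge(self, classes):
        # Extract the designated classes' weights into the deposit
        # module and erase them from the target model.
        for c in classes:
            self.deposit[c] = self.W[c].copy()
            self.W[c] = 0.0  # model can no longer score class c

    def withdraw_knowledge(self, classes):
        # Inject the stored weights back into the target model.
        for c in classes:
            self.W[c] = self.deposit.pop(c)

    def predict(self, x):
        return int(np.argmax(self.W @ x))
```

In the real LIRF setting, "depositing" is not a simple weight copy: the general knowledge remaining in the target network is preserved and augmented via finetuning, and the deposit module is itself a trained component. The sketch only conveys the reversible remove-then-recover contract.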



Knowledge Restore and Transfer for Multi-label Class-Incremental Learning

Current class-incremental learning research mainly focuses on single-lab...

Random Path Selection for Incremental Learning

Incremental life-long learning is a main challenge towards the long-stan...

A Multi-Task Learning Framework for Overcoming the Catastrophic Forgetting in Automatic Speech Recognition

Recently, data-driven based Automatic Speech Recognition (ASR) systems h...

Partial Network Cloning

In this paper, we study a novel task that enables partial knowledge tran...

Class-Incremental Few-Shot Object Detection

Conventional detection networks usually need abundant labeled training s...

Learning to Unlearn: Instance-wise Unlearning for Pre-trained Classifiers

Since the recent advent of regulations for data protection (e.g., the Ge...

Forgiveness is an Adaptation in Iterated Prisoner's Dilemma with Memory

The Prisoner's Dilemma is used to represent many real life phenomena whe...