Self-distilled Knowledge Delegator for Exemplar-free Class Incremental Learning

05/23/2022
by Fanfan Ye, et al.

Exemplar-free incremental learning is extremely challenging due to the inaccessibility of data from old tasks. In this paper, we exploit the knowledge encoded in a previously trained classification model to handle the catastrophic forgetting problem in continual learning. Specifically, we introduce a knowledge delegator, which transfers knowledge from the trained model to a randomly re-initialized new model by generating informative samples. Given only the previous model, the delegator is learned effectively with a self-distillation mechanism in a data-free manner. The knowledge extracted by the delegator is then used to maintain the model's performance on old tasks during incremental learning. This simple incremental learning framework surpasses existing exemplar-free methods by a large margin on four widely used class-incremental benchmarks, namely CIFAR-100, ImageNet-Subset, Caltech-101 and Flowers-102. Notably, we achieve performance comparable to some exemplar-based methods without accessing any exemplars.
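As a rough sketch of how such a delegator could be realized, the PyTorch snippet below illustrates the general data-free idea: a small generator is trained against the frozen old classifier alone to synthesize informative samples, and those samples are then replayed as distillation targets while a new model is trained on the new task. All names (Delegator, train_delegator, incremental_step) and the specific generator objective (confidence plus class-balance terms, a common data-free distillation heuristic) are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of a data-free "knowledge delegator" for exemplar-free CIL.
# Assumes small CNN classifiers on 32x32 RGB inputs; names and losses are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Delegator(nn.Module):
    """Tiny generator mapping noise vectors to 32x32 image-like tensors."""
    def __init__(self, z_dim=128, img_ch=3):
        super().__init__()
        self.fc = nn.Linear(z_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.BatchNorm2d(128),
            nn.Upsample(scale_factor=2), nn.Conv2d(128, 64, 3, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2), nn.Conv2d(64, img_ch, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(z.size(0), 128, 8, 8))

def train_delegator(old_model, z_dim=128, steps=1000, batch=64, device="cpu"):
    """Learn the delegator from the frozen old model alone (no real old data)."""
    old_model.eval().requires_grad_(False)
    gen = Delegator(z_dim).to(device)
    opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
    for _ in range(steps):
        z = torch.randn(batch, z_dim, device=device)
        logits = old_model(gen(z))
        # Push the frozen model toward confident predictions on generated samples...
        conf_loss = F.cross_entropy(logits, logits.argmax(dim=1))
        # ...while keeping the predicted class distribution balanced across the batch.
        q = F.softmax(logits, dim=1).mean(dim=0)
        balance_loss = (q * q.clamp_min(1e-8).log()).sum()
        loss = conf_loss + balance_loss
        opt.zero_grad(); loss.backward(); opt.step()
    return gen

def incremental_step(new_model, old_model, gen, new_loader, z_dim=128,
                     alpha=1.0, T=2.0, epochs=1, device="cpu"):
    """Fit the new task while distilling old knowledge through generated samples."""
    old_model.eval()
    opt = torch.optim.SGD(new_model.parameters(), lr=0.01, momentum=0.9)
    for _ in range(epochs):
        for x_new, y_new in new_loader:
            x_new, y_new = x_new.to(device), y_new.to(device)
            with torch.no_grad():
                x_old = gen(torch.randn(x_new.size(0), z_dim, device=device))
                t_logits = old_model(x_old)
            # Plasticity: standard cross-entropy on the new task.
            ce = F.cross_entropy(new_model(x_new), y_new)
            # Stability: distil over old-class logits only (assumed to occupy the
            # first positions of the new model's classification head).
            kd = F.kl_div(
                F.log_softmax(new_model(x_old)[:, :t_logits.size(1)] / T, dim=1),
                F.softmax(t_logits / T, dim=1),
                reduction="batchmean") * (T * T)
            loss = ce + alpha * kd
            opt.zero_grad(); loss.backward(); opt.step()
    return new_model
```

In this sketch, the generator objective encourages confident yet class-balanced predictions from the frozen model so that the synthesized samples carry its decision-boundary knowledge; the delegator objective and self-distillation procedure used in the paper may differ in detail.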


Related research

10/11/2021  Addressing the Stability-Plasticity Dilemma via Knowledge-Aware Continual Learning
Continual learning agents should incrementally learn a sequence of tasks...

12/16/2022  Robust Saliency Guidance for Data-free Class Incremental Learning
Data-Free Class Incremental Learning (DFCIL) aims to sequentially learn ...

07/18/2019  Autoencoder-Based Incremental Class Learning without Retraining on Old Data
Incremental class learning, a scenario in continual learning context whe...

08/21/2023  When Prompt-based Incremental Learning Does Not Meet Strong Pretraining
Incremental learning aims to overcome catastrophic forgetting when learn...

08/18/2023  Adapt Your Teacher: Improving Knowledge Distillation for Exemplar-free Continual Learning
In this work, we investigate exemplar-free class incremental learning (C...

02/03/2019  Incremental Learning with Maximum Entropy Regularization: Rethinking Forgetting and Intransigence
Incremental learning suffers from two challenging problems; forgetting o...

08/29/2023  Rotation Augmented Distillation for Exemplar-Free Class Incremental Learning with Detailed Analysis
Class incremental learning (CIL) aims to recognize both the old and new ...
