M2KD: Multi-model and Multi-level Knowledge Distillation for Incremental Learning

04/03/2019
by Peng Zhou, et al.

Incremental learning aims to achieve good performance on new categories without forgetting old ones. Knowledge distillation has been shown to be critical for preserving performance on old classes. Conventional methods, however, sequentially distill knowledge only from the last model, leading to performance degradation on the old classes in later incremental learning steps. In this paper, we propose a multi-model and multi-level knowledge distillation strategy. Instead of sequentially distilling knowledge only from the last model, we directly leverage all previous model snapshots. In addition, we incorporate an auxiliary distillation to further preserve knowledge encoded at the intermediate feature levels. To make the model more memory efficient, we adopt mask-based pruning to reconstruct all previous models with a small memory footprint. Experiments on standard incremental learning benchmarks show that our method preserves the knowledge on old classes better and improves the overall performance over standard distillation techniques.
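The abstract's training objective combines three terms: a classification loss on the new classes, a distillation loss matching every previous model snapshot (not just the last one), and an auxiliary loss on intermediate features. A minimal pure-Python sketch of such a combined loss, assuming illustrative weights `lam` and `beta` and per-snapshot class slicing (the function names and formulation here are assumptions for illustration, not the paper's exact equations):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_div(p, q):
    """KL(p || q) between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def m2kd_loss(ce_loss, student_logits, snapshot_logits_list,
              student_feats, snapshot_feats_list,
              T=2.0, lam=1.0, beta=0.1):
    """Combined loss: classification + multi-model logit distillation
    + multi-level (intermediate feature) distillation."""
    # Multi-model distillation: match every previous snapshot's
    # softened output on the classes that snapshot was trained on.
    distill = 0.0
    for old_logits in snapshot_logits_list:
        k = len(old_logits)                 # snapshot covers only its old classes
        p = softmax(old_logits, T)          # teacher distribution
        q = softmax(student_logits[:k], T)  # student restricted to those classes
        distill += kl_div(p, q)
    # Multi-level (auxiliary) distillation: mean squared error between
    # intermediate feature vectors of student and snapshots.
    feat = 0.0
    for s_f, t_f in zip(student_feats, snapshot_feats_list):
        feat += sum((a - b) ** 2 for a, b in zip(s_f, t_f)) / len(s_f)
    return ce_loss + lam * distill + beta * feat
```

When the student reproduces every snapshot's outputs and features exactly, the two distillation terms vanish and the loss reduces to the classification term alone.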


Related research

10/12/2022: Decomposed Knowledge Distillation for Class-Incremental Semantic Segmentation
Class-incremental semantic segmentation (CISS) labels each pixel of an i...

02/23/2022: Multi-Teacher Knowledge Distillation for Incremental Implicitly-Refined Classification
Incremental learning methods can learn new classes continually by distil...

03/09/2020: Faster ILOD: Incremental Learning for Object Detectors based on Faster RCNN
The human vision and perception system is inherently incremental where n...

04/02/2022: Class-Incremental Learning by Knowledge Distillation with Adaptive Feature Consolidation
We present a novel class incremental learning approach based on deep neu...

04/26/2022: Improving Feature Generalizability with Multitask Learning in Class Incremental Learning
Many deep learning applications, like keyword spotting, require the inco...

04/01/2020: Memory-Efficient Incremental Learning Through Feature Adaptation
In this work we introduce an approach for incremental learning, which pr...

01/12/2023: Effective Decision Boundary Learning for Class Incremental Learning
Rehearsal approaches in class incremental learning (CIL) suffer from dec...
