Multi-Teacher Knowledge Distillation for Incremental Implicitly-Refined Classification

02/23/2022
by   Longhui Yu, et al.

Incremental learning methods can learn new classes continually by distilling knowledge from the last model (as a teacher model) to the current model (as a student model) in the sequential learning process. However, these methods cannot handle Incremental Implicitly-Refined Classification (IIRC), an incremental learning extension in which incoming classes may have two levels of granularity: a superclass label and a subclass label. The reason is that previously learned superclass knowledge may be overwritten by the subclass knowledge learned in later tasks. To solve this problem, we propose a novel Multi-Teacher Knowledge Distillation (MTKD) strategy. To preserve the subclass knowledge, we use the last model as a general teacher that distills the previously learned knowledge for the student model. To preserve the superclass knowledge, we use the initial model as a superclass teacher, since the initial model contains abundant superclass knowledge. However, distilling knowledge from two teacher models can lead the student model to make redundant predictions. We therefore propose a post-processing mechanism, called Top-k prediction restriction, to reduce these redundant predictions. Our experimental results on IIRC-ImageNet120 and IIRC-CIFAR100 show that the proposed method achieves better classification accuracy than existing state-of-the-art methods.
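As a rough illustration of the two-teacher distillation and the Top-k prediction restriction described above, the following PyTorch-style sketch combines a classification loss with two distillation terms and then restricts predictions at inference time. It is a minimal sketch, not the authors' implementation: the names (mtkd_loss, old_class_idx, superclass_idx, the lambda weights, k) and the assumption that all models share an aligned, sigmoid-based output space are illustrative choices, since the abstract does not specify these details.

```python
# Minimal MTKD-style sketch (assumptions: multi-label sigmoid outputs as in
# IIRC, and all three models' logits aligned to the same full class space).
import torch
import torch.nn.functional as F


def mtkd_loss(student_logits, general_teacher_logits, super_teacher_logits,
              targets, old_class_idx, superclass_idx,
              lambda_general=1.0, lambda_super=1.0):
    """Classification loss plus two distillation terms:
    - the last model (general teacher) supervises all previously seen classes,
    - the initial model (superclass teacher) supervises only superclass outputs.
    All index lists and weights are illustrative, not from the paper."""
    # Multi-label classification loss on the current task's targets.
    cls = F.binary_cross_entropy_with_logits(student_logits, targets)

    # Distill previously learned knowledge (including subclasses) from the
    # last model's soft predictions.
    kd_general = F.binary_cross_entropy_with_logits(
        student_logits[:, old_class_idx],
        torch.sigmoid(general_teacher_logits[:, old_class_idx]))

    # Distill superclass knowledge from the initial model, which learned the
    # superclasses first and therefore retains them best.
    kd_super = F.binary_cross_entropy_with_logits(
        student_logits[:, superclass_idx],
        torch.sigmoid(super_teacher_logits[:, superclass_idx]))

    return cls + lambda_general * kd_general + lambda_super * kd_super


def topk_prediction_restriction(probs, k=2, threshold=0.5):
    """Post-processing sketch: keep at most the k most confident predictions
    per sample, suppressing redundant positives induced by the two teachers."""
    topk_vals, topk_idx = probs.topk(k, dim=1)
    restricted = torch.zeros_like(probs)
    restricted.scatter_(1, topk_idx, topk_vals)
    return restricted > threshold
```

In this sketch, the two distillation terms are simply weighted and summed with the classification loss, and the Top-k restriction is applied only at inference; how the paper balances the teachers or chooses k is not stated in the abstract.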

