Fixing the Teacher-Student Knowledge Discrepancy in Distillation

03/31/2021
by Jiangfan Han, et al.

Training a small student network under the guidance of a larger teacher network is an effective way to improve the student's performance. Although previous knowledge distillation methods differ in the type of knowledge they transfer, the distilled knowledge is always kept fixed across different teacher-student pairs. However, we find that teacher and student models with different architectures, or trained from different initializations, can develop distinct feature representations across channels (e.g., which channels activate most strongly for a given category). We name this incongruous channel representation the teacher-student knowledge discrepancy in the distillation process. Ignoring this discrepancy between teacher and student models makes it harder for the student to learn from the teacher. To solve this problem, we propose a novel student-dependent distillation method, knowledge consistent distillation, which makes the teacher's knowledge more consistent with the student and provides each student network with the knowledge best suited to it. Extensive experiments on different datasets (CIFAR-100, ImageNet, COCO) and tasks (image classification, object detection) reveal that the knowledge discrepancy problem between teachers and students is widespread, and demonstrate the effectiveness of our proposed method. Our method is also flexible and can be easily combined with other state-of-the-art approaches.
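For background, the teacher-student setup the abstract builds on is the classic distillation loss (Hinton et al.), where the student matches the teacher's temperature-softened outputs alongside the ground-truth labels. The sketch below is a minimal, pure-Python illustration of that standard loss; it is not the paper's channel-consistency method, and the function names and default hyperparameters (`T=4.0`, `alpha=0.9`) are illustrative assumptions.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, label, T=4.0, alpha=0.9):
    """Classic distillation loss: soft teacher targets plus hard labels.

    Note: this is the standard baseline, not the knowledge consistent
    distillation method proposed in the paper.
    """
    p_t = softmax(teacher_logits, T)  # softened teacher distribution
    p_s = softmax(student_logits, T)  # softened student distribution
    # KL(teacher || student), scaled by T^2 to keep gradients comparable
    soft = (T * T) * sum(pt * math.log(pt / ps) for pt, ps in zip(p_t, p_s))
    # Cross-entropy of the unsoftened student against the true class index
    hard = -math.log(softmax(student_logits)[label])
    return alpha * soft + (1.0 - alpha) * hard
```

The discrepancy the paper targets arises one level below these logits: the teacher's and student's intermediate feature channels may encode categories in different orders, so naively matching features channel-by-channel transfers misaligned knowledge.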

