Knowledge Distillation from A Stronger Teacher

05/21/2022
by Tao Huang, et al.

Unlike existing knowledge distillation methods, which focus on baseline settings where the teacher models and training strategies are not as strong and competitive as state-of-the-art approaches, this paper presents a method dubbed DIST to distill better from a stronger teacher. We empirically find that the discrepancy between the predictions of the student and a stronger teacher tends to be fairly severe. As a result, the exact match of predictions enforced by KL divergence disturbs training and makes existing methods perform poorly. In this paper, we show that simply preserving the relations between the predictions of the teacher and student suffices, and we propose a correlation-based loss to explicitly capture the intrinsic inter-class relations from the teacher. Moreover, considering that different instances have different semantic similarities to each class, we also extend this relational match to the intra-class level. Our method is simple yet practical, and extensive experiments demonstrate that it adapts well to various architectures, model sizes, and training strategies, and consistently achieves state-of-the-art performance on image classification, object detection, and semantic segmentation tasks. Code is available at: https://github.com/hunto/DIST_KD.
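To make the idea concrete, here is a minimal sketch of a correlation-based relational distillation loss of the kind the abstract describes: instead of exactly matching predictions, it matches inter-class relations (per instance) and intra-class relations (per class) via Pearson correlation. This is not the authors' released implementation (see the repository above); the function and hyper-parameter names (dist_like_loss, tau, beta, gamma) are illustrative assumptions.

```python
# Minimal sketch of a correlation-based relational KD loss (illustrative only).
import torch
import torch.nn.functional as F


def pearson_correlation(a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Row-wise Pearson correlation between two 2-D tensors of the same shape."""
    a = a - a.mean(dim=1, keepdim=True)
    b = b - b.mean(dim=1, keepdim=True)
    a = a / (a.norm(dim=1, keepdim=True) + eps)
    b = b / (b.norm(dim=1, keepdim=True) + eps)
    return (a * b).sum(dim=1)


def dist_like_loss(student_logits: torch.Tensor, teacher_logits: torch.Tensor,
                   tau: float = 1.0, beta: float = 1.0, gamma: float = 1.0) -> torch.Tensor:
    """Match relations between predictions instead of the predictions themselves.

    Inter-class term: for each instance, correlate the student's and teacher's
    class distributions. Intra-class term: for each class, correlate the scores
    assigned across the batch.
    """
    p_s = F.softmax(student_logits / tau, dim=1)  # (batch, classes)
    p_t = F.softmax(teacher_logits / tau, dim=1)  # (batch, classes)
    inter_loss = (1.0 - pearson_correlation(p_s, p_t)).mean()
    intra_loss = (1.0 - pearson_correlation(p_s.t(), p_t.t())).mean()
    return beta * inter_loss + gamma * intra_loss


if __name__ == "__main__":
    # Hypothetical usage with random logits for 8 instances and 10 classes.
    s = torch.randn(8, 10)
    t = torch.randn(8, 10)
    print(dist_like_loss(s, t).item())
```

Note that a perfect correlation is achieved whenever the student's predictions are an affine transformation of the teacher's, which is a looser requirement than the exact match imposed by KL divergence.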
