Decoupled Knowledge Distillation

03/16/2022
by Borui Zhao, et al.

State-of-the-art distillation methods are mainly based on distilling deep features from intermediate layers, while the significance of logit distillation is greatly overlooked. To provide a novel viewpoint on logit distillation, we reformulate the classical KD loss into two parts, i.e., target class knowledge distillation (TCKD) and non-target class knowledge distillation (NCKD). We empirically investigate and demonstrate the effects of the two parts: TCKD transfers knowledge concerning the "difficulty" of training samples, while NCKD is the prominent reason why logit distillation works. More importantly, we reveal that the classical KD loss is a coupled formulation, which (1) suppresses the effectiveness of NCKD and (2) limits the flexibility to balance these two parts. To address these issues, we present Decoupled Knowledge Distillation (DKD), enabling TCKD and NCKD to play their roles more efficiently and flexibly. Compared with complex feature-based methods, our DKD achieves comparable or even better results and better training efficiency on CIFAR-100, ImageNet, and MS-COCO for image classification and object detection tasks. This paper demonstrates the great potential of logit distillation, and we hope it will be helpful for future research. The code is available at https://github.com/megvii-research/mdistiller.
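To make the decomposition concrete, the sketch below illustrates the idea in PyTorch: the KD loss is split into a binary target/non-target term (TCKD) and a term over the non-target classes only (NCKD), and the two are reweighted independently. This is a minimal sketch under assumed hyperparameter values (alpha, beta, temperature) and a hypothetical function name dkd_loss; it is not the authors' mdistiller implementation, which is available at the repository linked above.

import torch
import torch.nn.functional as F

def dkd_loss(logits_student, logits_teacher, target, alpha=1.0, beta=8.0, temperature=4.0):
    """Sketch of a decoupled KD loss: alpha * TCKD + beta * NCKD (illustrative weights)."""
    num_classes = logits_student.size(1)
    gt_mask = F.one_hot(target, num_classes).bool()

    # Softened probabilities from student and teacher.
    p_s = F.softmax(logits_student / temperature, dim=1)
    p_t = F.softmax(logits_teacher / temperature, dim=1)

    # TCKD: binary KL divergence over (target, non-target) probability mass.
    pt_s = torch.stack([p_s[gt_mask], 1.0 - p_s[gt_mask]], dim=1)
    pt_t = torch.stack([p_t[gt_mask], 1.0 - p_t[gt_mask]], dim=1)
    tckd = F.kl_div(pt_s.log(), pt_t, reduction="batchmean")

    # NCKD: KL divergence among non-target classes only; masking the target
    # logit with a large negative value excludes it from the softmax.
    logits_s_nt = logits_student.masked_fill(gt_mask, -1e9)
    logits_t_nt = logits_teacher.masked_fill(gt_mask, -1e9)
    nckd = F.kl_div(
        F.log_softmax(logits_s_nt / temperature, dim=1),
        F.softmax(logits_t_nt / temperature, dim=1),
        reduction="batchmean",
    )

    # The classical KD loss couples the two terms (NCKD is scaled by the
    # teacher's non-target mass); DKD weights them independently.
    return (alpha * tckd + beta * nckd) * (temperature ** 2)

In training, such a loss would typically be added to the standard cross-entropy objective on the student's logits.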


