Knowledge Transfer via Dense Cross-Layer Mutual-Distillation

08/18/2020
by Anbang Yao et al.

Knowledge Distillation (KD) based methods adopt a one-way Knowledge Transfer (KT) scheme in which the training of a lower-capacity student network is guided by a pre-trained high-capacity teacher network. Recently, Deep Mutual Learning (DML) presented a two-way KT strategy, showing that the student network can also help improve the teacher network. In this paper, we propose Dense Cross-layer Mutual-distillation (DCM), an improved two-way KT method in which the teacher and student networks are trained collaboratively from scratch. To augment knowledge representation learning, well-designed auxiliary classifiers are added to certain hidden layers of both the teacher and student networks. To boost KT performance, we introduce dense bidirectional KD operations between the layers appended with classifiers. After training, all auxiliary classifiers are discarded, so no extra parameters are introduced into the final models. We evaluate our method on a variety of KT tasks, showing its superiority over related methods. Code is available at https://github.com/sundw2014/DCM
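To make the training objective concrete, below is a minimal PyTorch-style sketch of the idea, not the authors' implementation (see the repository above for that). The `forward_stages` interface, the `NetWithAuxHeads` wrapper, the `alpha` weighting, and the simple pooling-plus-linear auxiliary heads are all assumptions; the paper uses more carefully designed auxiliary classifiers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=3.0):
    # Standard softened-softmax KD loss: KL divergence between temperature-
    # scaled distributions. The target side is detached so gradients only
    # flow into the first argument.
    p = F.log_softmax(student_logits / T, dim=1)
    q = F.softmax(teacher_logits.detach() / T, dim=1)
    return F.kl_div(p, q, reduction="batchmean") * (T * T)

class NetWithAuxHeads(nn.Module):
    """Backbone wrapped with auxiliary classifiers on chosen hidden stages.

    Assumes `backbone.forward_stages(x)` returns (hidden feature maps,
    final logits); this interface is hypothetical.
    """
    def __init__(self, backbone, stage_channels, num_classes):
        super().__init__()
        self.backbone = backbone
        # Simple GAP + linear heads stand in for the paper's auxiliary
        # classifiers; they are discarded after training.
        self.aux_heads = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                          nn.Linear(c, num_classes))
            for c in stage_channels)

    def forward(self, x):
        feats, final_logits = self.backbone.forward_stages(x)
        aux_logits = [head(f) for head, f in zip(self.aux_heads, feats)]
        return aux_logits + [final_logits]  # all classifier outputs

def dcm_step(net_a, net_b, x, y, alpha=1.0):
    """One collaborative step: cross-entropy on every classifier of both
    networks, plus dense bidirectional KD between all cross-network
    classifier pairs. The `alpha` weighting is an assumption."""
    logits_a, logits_b = net_a(x), net_b(x)
    loss = sum(F.cross_entropy(l, y) for l in logits_a + logits_b)
    for la in logits_a:
        for lb in logits_b:
            # Bidirectional: each network's classifier distills from the
            # other's, with the target side detached in kd_loss.
            loss = loss + alpha * (kd_loss(la, lb) + kd_loss(lb, la))
    return loss
```

In this sketch a single optimizer over both networks' parameters (or one per network) backpropagates the joint loss; at inference time only each backbone's final classifier is kept, consistent with the claim that the auxiliary classifiers add no parameters to the final models.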


Related research

- Alignahead: Online Cross-Layer Knowledge Extraction on Graph Neural Networks (05/05/2022)
- NORM: Knowledge Distillation via N-to-One Representation Matching (05/23/2023)
- Knowledge Condensation Distillation (07/12/2022)
- Online Knowledge Distillation via Mutual Contrastive Learning for Visual Recognition (07/23/2022)
- Learning from Higher-Layer Feature Visualizations (03/06/2019)
- Deep Mutual Learning (06/01/2017)
- Lightweight Self-Knowledge Distillation with Multi-source Information Fusion (05/16/2023)
