Learning from a Lightweight Teacher for Efficient Knowledge Distillation

05/19/2020
by   Yuang Liu, et al.

Knowledge Distillation (KD) is an effective framework for compressing deep learning models, realized through a student-teacher paradigm in which a small student network is required to mimic the soft targets generated by a well-trained teacher. However, the teacher is commonly assumed to be complex and must be trained on the same dataset as the student, which leads to a time-consuming training process. A recent study shows that vanilla KD plays a role similar to label smoothing and develops teacher-free KD, which is efficient and mitigates the issue of learning from heavy teachers. However, because teacher-free KD relies on manually crafted output distributions that are kept the same for all data instances of the same class, its flexibility and performance are relatively limited. To address the above issues, this paper proposes an efficient knowledge distillation framework, LW-KD, short for lightweight knowledge distillation. It first trains a lightweight teacher network on a synthesized simple dataset whose class number is adjustable and set equal to that of the target dataset. The teacher then generates soft targets, from which an enhanced KD loss guides student learning; this loss combines the standard KD loss with an adversarial loss that makes the student output indistinguishable from the teacher output. Experiments on several public datasets with different modalities demonstrate that LW-KD is effective and efficient, confirming the rationality of its main design principles.
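As a rough illustration of the enhanced loss described in the abstract, the following PyTorch-style sketch combines a temperature-scaled KD term with an adversarial term in which a discriminator tries to tell student outputs from teacher outputs. The discriminator architecture, the temperature T, and the weight beta are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch (assumptions noted above): enhanced KD loss = soft-target KD loss
# + adversarial loss encouraging the student's output distribution to be
# indistinguishable from the lightweight teacher's.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Discriminator(nn.Module):
    """Scores whether a softened class distribution came from the teacher (1) or the student (0)."""

    def __init__(self, num_classes: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, probs: torch.Tensor) -> torch.Tensor:
        return self.net(probs)  # raw logit; pair with BCE-with-logits losses


def kd_loss(student_logits, teacher_logits, T: float = 4.0) -> torch.Tensor:
    """Temperature-scaled KL divergence between teacher and student distributions."""
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)


def enhanced_kd_loss(student_logits, teacher_logits, discriminator,
                     T: float = 4.0, beta: float = 0.1) -> torch.Tensor:
    """Student objective: KD loss plus an adversarial term that tries to fool the discriminator."""
    kd = kd_loss(student_logits, teacher_logits, T)
    d_out = discriminator(F.softmax(student_logits / T, dim=1))
    adv = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    return kd + beta * adv


def discriminator_loss(student_logits, teacher_logits, discriminator,
                       T: float = 4.0) -> torch.Tensor:
    """Discriminator objective: teacher outputs labeled real (1), student outputs fake (0)."""
    real = discriminator(F.softmax(teacher_logits.detach() / T, dim=1))
    fake = discriminator(F.softmax(student_logits.detach() / T, dim=1))
    return (F.binary_cross_entropy_with_logits(real, torch.ones_like(real)) +
            F.binary_cross_entropy_with_logits(fake, torch.zeros_like(fake)))
```

In training, the student and the discriminator would be updated alternately, with the lightweight teacher kept frozen after its training on the synthesized dataset.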
