Knowledge Distillation: A Survey

06/09/2020
by Jianping Gou, et al.

In recent years, deep neural networks have achieved great success in both industry and academia, especially in applications such as visual recognition and natural language processing. This success owes much to their scalability to both large-scale data and billions of model parameters. However, it also poses a great challenge for deploying these cumbersome deep models on devices with limited resources, e.g., mobile phones and embedded devices, not only because of the high computational complexity but also the large storage requirements. To this end, a variety of model compression and acceleration techniques have been developed, such as pruning, quantization, and neural architecture search. As a typical model compression and acceleration method, knowledge distillation aims to learn a small student model from a large teacher model and has received increasing attention from the community. In this paper, we provide a comprehensive survey of knowledge distillation from the perspectives of knowledge categories, training schemes, distillation algorithms, and applications. Furthermore, we briefly review challenges in knowledge distillation and provide some insights into future research.
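For readers who want a concrete picture of the vanilla teacher-student setup the abstract refers to, the sketch below shows the classic response-based distillation loss of Hinton et al. (temperature-softened KL divergence on logits plus the usual cross-entropy). It is a minimal illustrative example, not code from the survey; the temperature T, weight alpha, and the toy teacher/student networks are placeholder choices.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Classic response-based KD loss: soft teacher targets + hard labels."""
    # Soften both output distributions with temperature T; the T**2 factor
    # keeps the gradient scale comparable to the hard-label term.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T ** 2)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage: distill a larger "teacher" into a smaller "student" on dummy data.
teacher = torch.nn.Sequential(torch.nn.Linear(32, 256), torch.nn.ReLU(), torch.nn.Linear(256, 10))
student = torch.nn.Sequential(torch.nn.Linear(32, 10))
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
with torch.no_grad():
    t_logits = teacher(x)  # the teacher stays fixed during distillation
loss = distillation_loss(student(x), t_logits, y)
loss.backward()
```

The survey also covers feature-based and relation-based knowledge as well as online and self-distillation schemes; this sketch corresponds only to the simplest offline, response-based case.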


