Quantifying the Knowledge in a DNN to Explain Knowledge Distillation for Classification

08/18/2022
by   Quanshi Zhang, et al.

Compared to traditional learning from scratch, knowledge distillation sometimes enables a DNN to achieve superior performance. This paper provides a new perspective on explaining the success of knowledge distillation, i.e., quantifying the knowledge points encoded in intermediate layers of a DNN for classification, based on information theory. To this end, we consider the signal processing in a DNN as layer-wise information discarding. A knowledge point is defined as an input unit whose information is discarded much less than that of other input units. Based on this quantification of knowledge points, we propose three hypotheses for knowledge distillation. 1. The DNN learned via knowledge distillation encodes more knowledge points than the DNN learned from scratch. 2. Knowledge distillation makes the DNN more likely to learn different knowledge points simultaneously, whereas the DNN learned from scratch tends to encode knowledge points sequentially. 3. The DNN learned via knowledge distillation is usually optimized more stably than the DNN learned from scratch. To verify these hypotheses, we design three types of metrics, based on annotations of foreground objects, to analyze the feature representations of a DNN, i.e., the quantity and the quality of knowledge points, the learning speed of different knowledge points, and the stability of optimization directions. In experiments, we diagnosed various DNNs on different classification tasks, i.e., image classification, 3D point cloud classification, binary sentiment classification, and question answering, which verified the above hypotheses.
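To make the notion of layer-wise information discarding more concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' released code or exact formulation). It learns a per-pixel Gaussian noise scale such that an intermediate feature stays close to its original value, and uses the log of that scale as an entropy proxy for how much information of each input unit the layer discards; input units that tolerate only a little noise (a low entropy proxy) would be treated as knowledge points. The model (resnet18 with random weights), the chosen layer, the loss weight lam, and the threshold in the usage example are illustrative assumptions, not values from the paper.

```python
import torch
import torchvision.models as models


def estimate_information_discarding(model, layer, image, steps=200, lam=1.0, lr=0.01):
    """Learn a per-pixel noise scale sigma; log(sigma) serves as an entropy proxy
    for how much information of each input unit the layer's feature discards."""
    feats = {}
    handle = layer.register_forward_hook(lambda m, inp, out: feats.update(f=out))
    model.eval()

    # Reference feature on the clean input.
    with torch.no_grad():
        model(image)
    f_star = feats["f"].detach()

    # Per-pixel log noise scale, optimized by gradient descent.
    log_sigma = torch.full_like(image, -3.0, requires_grad=True)
    opt = torch.optim.Adam([log_sigma], lr=lr)
    for _ in range(steps):
        noise = torch.randn_like(image) * log_sigma.exp()  # reparameterized Gaussian noise
        model(image + noise)
        recon = ((feats["f"] - f_star) ** 2).mean()        # keep the feature nearly unchanged
        loss = recon - lam * log_sigma.mean()              # while maximizing tolerated noise
        opt.zero_grad()
        loss.backward()
        opt.step()

    handle.remove()
    return log_sigma.detach()  # low values -> little discarding -> candidate knowledge points


# Usage sketch (hypothetical threshold): average the entropy proxy over channels and
# mark pixels well below the image-wide mean as knowledge points.
model = models.resnet18(weights=None)   # random weights, only to make the sketch runnable
image = torch.randn(1, 3, 224, 224)
h = estimate_information_discarding(model, model.layer4, image).mean(dim=1)
knowledge_points = h < (h.mean() - 1.0)  # boolean map of candidate knowledge points
```

In the paper's setting, such a map would be compared against foreground-object annotations, so that knowledge points inside the foreground count toward the quality of the representation; the threshold used above is only a placeholder for that comparison.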

