Controlling the Quality of Distillation in Response-Based Network Compression

12/19/2021
by Vibhas Vats, et al.

The performance of a distillation-based compressed network is governed by the quality of distillation. Suboptimal distillation from a large network (teacher) to a smaller network (student) is largely attributed to the gap in the learning capacities of the given teacher-student pair. While it is hard to distill all of a teacher's knowledge, the quality of distillation can be controlled to a large extent to achieve better performance. Our experiments show that the quality of distillation is largely governed by the quality of the teacher's response, which in turn is heavily affected by the presence of similarity information in that response. A well-trained, large-capacity teacher loses similarity information between classes while learning fine-grained discriminative properties for classification. The absence of similarity information reduces the distillation process from one-example-many-class learning to one-example-one-class learning, thereby throttling the flow of diverse knowledge from the teacher. Under the implicit assumption that only instilled knowledge can be distilled, we scrutinize the knowledge inculcation process rather than focusing only on the knowledge distillation process. We argue that, for a given teacher-student pair, the quality of distillation can be improved by finding the sweet spot between batch size and number of epochs when training the teacher, and we describe the steps to find this sweet spot. We also propose a distillation hypothesis to differentiate the behavior of the distillation process between knowledge distillation and a regularization effect. We conduct all our experiments on three different datasets.
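For context, the sketch below shows the standard response-based distillation objective the abstract builds on (Hinton-style temperature-softened soft targets plus a hard-label term). It is a minimal illustration, not the paper's implementation; the temperature, weighting factor, and toy tensors are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Response-based KD loss: a temperature-softened KL term that carries the
    teacher's inter-class similarity information, plus a hard cross-entropy
    term on the ground-truth labels. T and alpha are illustrative choices."""
    # Softened teacher response: a higher temperature spreads probability mass
    # over similar classes, which is the "one example, many classes" signal.
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_student = F.log_softmax(student_logits / T, dim=1)
    kd_term = F.kl_div(log_student, soft_targets, reduction="batchmean") * (T * T)

    # Hard-label term keeps the student anchored to the true class.
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term

# Toy usage with random logits: if the teacher's response is nearly one-hot
# (an overconfident, heavily trained teacher), the KD term adds little beyond
# the hard labels; a softer response exposes class-similarity structure.
student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
```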


Related research

07/03/2020
Interactive Knowledge Distillation
Knowledge distillation is a standard teacher-student learning framework ...

05/25/2023
On the Impact of Knowledge Distillation for Model Interpretability
Several recent studies have elucidated why knowledge distillation (KD) i...

05/24/2020
Joint learning of interpretation and distillation
The extra trust brought by the model interpretation has made it an indis...

03/08/2019
Everything old is new again: A multi-view learning approach to learning using privileged information and distillation
We adopt a multi-view approach for analyzing two knowledge transfer sett...

07/11/2019
Privileged Features Distillation for E-Commerce Recommendations
Features play an important role in most prediction tasks of e-commerce r...

10/23/2022
Respecting Transfer Gap in Knowledge Distillation
Knowledge distillation (KD) is essentially a process of transferring a t...

06/13/2022
Robust Distillation for Worst-class Performance
Knowledge distillation has proven to be an effective technique in improv...
