
Controlling the Quality of Distillation in Response-Based Network Compression

by Vibhas Vats, et al.
Indiana University Bloomington

The performance of a distillation-based compressed network is governed by the quality of distillation. The suboptimal distillation of a large network (teacher) into a smaller network (student) is largely attributed to the gap in the learning capacities of a given teacher-student pair. While it is hard to distill all of a teacher's knowledge, the quality of distillation can be controlled to a large extent to achieve better performance. Our experiments show that the quality of distillation is largely governed by the quality of the teacher's response, which in turn is heavily affected by the presence of similarity information in that response. A well-trained, large-capacity teacher loses inter-class similarity information in the process of learning fine-grained discriminative properties for classification. The absence of similarity information reduces the distillation process from one-example-many-class learning to one-example-one-class learning, throttling the flow of diverse knowledge from the teacher. Under the implicit assumption that only instilled knowledge can be distilled, we scrutinize the knowledge inculcation process rather than focusing solely on the knowledge distillation process. We argue that, for a given teacher-student pair, the quality of distillation can be improved by finding the sweet spot between batch size and number of epochs while training the teacher, and we describe the steps to find this sweet spot. We also propose a distillation hypothesis to distinguish the knowledge-distillation behavior of the process from its regularization effect. We conduct all our experiments on three different datasets.
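For context, the "response" in response-based distillation is the teacher's temperature-softened class distribution, and the "similarity information" the abstract refers to lives in the non-target probabilities of that distribution. A minimal sketch of the standard Hinton-style distillation loss (not code from this paper; the temperature value is illustrative) shows how an overconfident, near-one-hot teacher response carries almost no inter-class similarity for the student to learn from:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; a higher temperature exposes more
    of the teacher's inter-class similarity information."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(teacher_logits, student_logits, temperature=4.0):
    """KL(teacher || student) over temperature-softened distributions.

    The T**2 factor is the usual rescaling that keeps gradient
    magnitudes comparable across temperatures."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return temperature ** 2 * kl

# A teacher that has learned fine-grained discriminative features may
# emit near-one-hot responses (little similarity information), whereas
# a softer response retains the relative similarity between classes:
overconfident_teacher = [10.0, 0.0, 0.0]   # effectively one-example-one-class
softer_teacher = [4.0, 2.0, 0.0]           # preserves class-similarity structure
```

The temperature here is a common default, not a value reported by the paper; in practice it is tuned per teacher-student pair alongside the hard-label cross-entropy term.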
