Revisiting Self-Distillation

06/17/2022
by Minh Pham, et al.

Knowledge distillation is the procedure of transferring "knowledge" from a large model (the teacher) to a more compact one (the student), and is often used in the context of model compression. When both models share the same architecture, the procedure is called self-distillation. Several works have anecdotally observed that a self-distilled student can outperform the teacher on held-out data. In this work, we systematically study self-distillation across a number of settings. We first show that even with a highly accurate teacher, self-distillation allows the student to surpass the teacher in all cases. Second, we revisit existing theoretical explanations of (self-)distillation and identify contradicting examples, revealing possible drawbacks of these explanations. Finally, we provide an alternative explanation for the dynamics of self-distillation through the lens of loss landscape geometry. We conduct extensive experiments showing that self-distillation leads to flatter minima, thereby resulting in better generalization.
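For readers unfamiliar with the mechanics, the sketch below illustrates a single self-distillation step in PyTorch: the student minimizes a blend of cross-entropy on the labels and a KL term on temperature-softened teacher logits, with teacher and student sharing the same architecture. The toy model, temperature, and loss weighting here are illustrative assumptions, not the paper's exact setup.

# Minimal sketch of one self-distillation step (assumed hyperparameters).
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
    """Blend cross-entropy on labels with KL divergence on softened logits."""
    ce = F.cross_entropy(student_logits, targets)
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so the soft-target gradient matches the hard-target scale
    return alpha * ce + (1.0 - alpha) * kl

# Self-distillation: teacher and student have identical architectures.
teacher = nn.Linear(32, 10)   # stand-in for a previously trained network
student = nn.Linear(32, 10)   # fresh copy of the same architecture
teacher.eval()

optimizer = torch.optim.SGD(student.parameters(), lr=0.1)
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))

with torch.no_grad():
    t_logits = teacher(x)     # teacher predictions serve as fixed soft targets
loss = distillation_loss(student(x), t_logits, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()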


