Distilling Double Descent

02/13/2021
by Andrew Cotter, et al.

Distillation is the technique of training a "student" model on examples that are labeled by a separate "teacher" model, which is itself trained on a labeled dataset. The most common explanations for why distillation "works" rest on the assumption that the student is provided with soft labels (probabilities or confidences) from the teacher model. In this work, we show that, even when the teacher model is highly overparameterized and provides only hard labels, using a very large held-out unlabeled dataset to train the student model can produce a model that outperforms more "traditional" approaches. Our explanation for this phenomenon is based on recent work on "double descent": it has been observed that, once a model's complexity roughly exceeds the amount required to memorize the training data, increasing the complexity further can, counterintuitively, result in better generalization. Researchers have identified several settings in which this phenomenon occurs, and others have made various attempts to explain it (thus far, with only partial success). In contrast, we avoid these questions and instead seek to exploit the phenomenon, demonstrating that a highly overparameterized teacher can avoid overfitting via double descent, while a student trained on a larger independent dataset labeled by this teacher avoids overfitting due to the size of its training set.
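As a rough illustration of the hard-label setup described above (not the authors' experimental pipeline), the sketch below uses scikit-learn stand-ins: a teacher is fit on a small labeled set, assigns hard labels to a much larger unlabeled pool, and a student is then trained on those pseudo-labels. The model classes, dataset sizes, and the random-forest/logistic-regression pairing are illustrative assumptions.

    # Minimal sketch of hard-label distillation with a large unlabeled pool.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Small labeled set for the teacher, large unlabeled pool for the student.
    X, y = make_classification(n_samples=60_000, n_features=20, random_state=0)
    X_labeled, X_unlabeled, y_labeled, _ = train_test_split(
        X, y, train_size=2_000, random_state=0
    )

    # Flexible teacher fit on the labeled data (it can interpolate the training set).
    teacher = RandomForestClassifier(n_estimators=500, random_state=0)
    teacher.fit(X_labeled, y_labeled)

    # Hard labels only: argmax class predictions, no probabilities or confidences.
    pseudo_labels = teacher.predict(X_unlabeled)

    # Student trained on the large teacher-labeled dataset.
    student = LogisticRegression(max_iter=1_000)
    student.fit(X_unlabeled, pseudo_labels)

The point of the sketch is only the data flow: the student never sees the original labels or the teacher's soft outputs, only hard labels over a training set much larger than the teacher's.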


02/10/2020  Subclass Distillation
After a large "teacher" neural network has been trained on labeled data,...

06/17/2022  Revisiting Self-Distillation
Knowledge distillation is the procedure of transferring "knowledge" from...

04/20/2021  Knowledge Distillation as Semiparametric Inference
A popular approach to model compression is to train an inexpensive stude...

07/06/2021  Self-training with noisy student model and semi-supervised loss function for DCASE 2021 challenge task 4
This report proposes a polyphonic sound event detection (SED) method for...

06/09/2021  Reliable Adversarial Distillation with Unreliable Teachers
In ordinary distillation, student networks are trained with soft labels ...

02/10/2020  FAU, Facial Expressions, Valence and Arousal: A Multi-task Solution
In the paper, we aim to train a unified model that performs three tasks:...

08/22/2018  Approximation Trees: Statistical Stability in Model Distillation
This paper examines the stability of learned explanations for black-box ...