Towards Understanding Ensemble, Knowledge Distillation and Self-Distillation in Deep Learning

12/17/2020
by   Zeyuan Allen-Zhu, et al.

We formally study how an ensemble of deep learning models can improve test accuracy, and how the superior performance of the ensemble can be distilled into a single model using knowledge distillation. We consider the challenging case where the ensemble is simply an average of the outputs of a few independently trained neural networks with the SAME architecture, trained using the SAME algorithm on the SAME data set, differing only in the random seeds used for initialization. We empirically show that ensemble/knowledge distillation in deep learning works very differently from traditional learning theory, and in particular differently from ensembles of random feature mappings or neural-tangent-kernel feature mappings, and is potentially outside the scope of existing theorems. Thus, to properly understand ensemble and knowledge distillation in deep learning, we develop a theory showing that when the data has a structure we refer to as "multi-view", an ensemble of independently trained neural networks can provably improve test accuracy, and this superior test accuracy can also be provably distilled into a single model by training it to match the output of the ensemble instead of the true labels. Our result sheds light on how ensembles work in deep learning in a way that is completely different from traditional theorems, and on how the "dark knowledge" hidden in the outputs of the ensemble, compared to the true data labels, can be exploited by knowledge distillation. Finally, we prove that self-distillation can also be viewed as implicitly combining ensemble and knowledge distillation to improve test accuracy.
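To make the setting concrete, below is a minimal sketch (not the authors' code) of the procedure the abstract describes: several copies of one architecture trained from different random seeds, ensembled by averaging their outputs, and then distilled into a single student that matches the ensemble's soft outputs rather than the true labels. It is written in PyTorch; the `Net` architecture, the temperature `T`, and the `distill` training loop are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    """Placeholder architecture; the setting only requires that all ensemble
    members share one architecture and differ in their random initialization."""

    def __init__(self, in_dim=784, n_classes=10):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, n_classes)
        )

    def forward(self, x):
        return self.f(x)


def ensemble_logits(models, x):
    # Ensemble = plain average of the independently trained models' outputs.
    with torch.no_grad():
        return torch.stack([m(x) for m in models]).mean(dim=0)


def distillation_loss(student_logits, teacher_logits, T=4.0):
    # Train the student to match the ensemble's soft outputs ("dark knowledge")
    # instead of the one-hot true labels; T is an assumed temperature value.
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)


def distill(teachers, loader, epochs=1, lr=1e-3):
    # `teachers` are independently trained copies of Net (same data, same
    # algorithm, different seeds); `loader` yields (x, y) batches.
    student = Net()
    opt = torch.optim.SGD(student.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x, _y in loader:
            loss = distillation_loss(student(x), ensemble_logits(teachers, x))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student
```

Self-distillation, as studied in the paper, corresponds to the special case where the "ensemble" of teachers is a single previously trained copy of the same architecture.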
