On the Orthogonality of Knowledge Distillation with Other Techniques: From an Ensemble Perspective

09/09/2020
by Seonguk Park, et al.

To put a state-of-the-art neural network to practical use, it is necessary to design a model with a good trade-off between resource consumption and performance on the test set. Many researchers and engineers are therefore developing methods for training or designing models more efficiently. Building an efficient model involves strategies such as network architecture search, pruning, quantization, knowledge distillation, cheap convolution operations, and regularization, as well as any other technique that improves the performance-resource trade-off. When these techniques are combined, it is ideal if one source of performance improvement does not conflict with the others. We call this property the orthogonality in model efficiency. In this paper, we focus on knowledge distillation and demonstrate, both analytically and empirically, that knowledge distillation methods are orthogonal to other efficiency-enhancing methods. Analytically, we argue that knowledge distillation functions analogously to an ensemble method, bootstrap aggregating (bagging). This explanation is given from the perspective of the implicit data-augmentation property of knowledge distillation. Empirically, we verify that knowledge distillation is a powerful tool for the practical deployment of efficient neural networks, and we introduce ways to integrate it effectively with other methods.
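The abstract itself contains no code; as a concrete reference point for the kind of knowledge distillation discussed here, the sketch below implements the standard soft-target distillation loss of Hinton et al. (2015), in which a student is trained against temperature-softened teacher predictions alongside the usual hard labels. The function name, temperature, and loss weighting are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the standard soft-target knowledge distillation loss
# (Hinton et al., 2015). Names, temperature, and the alpha weighting are
# illustrative assumptions, not the method used in this specific paper.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.9):
    """Combine a soft-target KL term with the usual cross-entropy term."""
    # Soften teacher and student distributions with the same temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=1)

    # KL divergence between teacher and student soft predictions.
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    kd_term = F.kl_div(log_soft_student, soft_teacher,
                       reduction="batchmean") * (temperature ** 2)

    # Standard cross-entropy on the hard labels.
    ce_term = F.cross_entropy(student_logits, labels)

    return alpha * kd_term + (1.0 - alpha) * ce_term
```

In this formulation the teacher's softened predictions act as extra supervisory signal per training example, which is the sense in which distillation can be viewed as an implicit form of data augmentation.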

Related research

12/20/2019: The State of Knowledge Distillation for Classification
We survey various knowledge distillation (KD) strategies for simple clas...

06/24/2022: Online Distillation with Mixed Sample Augmentation
Mixed Sample Regularization (MSR), such as MixUp or CutMix, is a powerfu...

10/24/2018: HAKD: Hardware Aware Knowledge Distillation
Despite recent developments, deploying deep neural networks on resource ...

04/08/2019: Knowledge Distillation For Recurrent Neural Network Language Modeling With Trust Regularization
Recurrent Neural Networks (RNNs) have dominated language modeling becaus...

03/14/2023: MetaMixer: A Regularization Strategy for Online Knowledge Distillation
Online knowledge distillation (KD) has received increasing attention in ...

02/17/2023: Explicit and Implicit Knowledge Distillation via Unlabeled Data
Data-free knowledge distillation is a challenging model lightweight task...

02/13/2020: Self-Distillation Amplifies Regularization in Hilbert Space
Knowledge distillation introduced in the deep learning context is a meth...
