VanillaKD: Revisit the Power of Vanilla Knowledge Distillation from Small Scale to Large Scale

05/25/2023
by Zhiwei Hao, et al.

The tremendous success of large models trained on extensive datasets demonstrates that scale is a key ingredient in achieving superior results. It is therefore imperative to reconsider whether knowledge distillation (KD) approaches for limited-capacity architectures should be designed solely on the basis of small-scale datasets. In this paper, we identify the small data pitfall present in previous KD methods, which results in underestimating the power of the vanilla KD framework on large-scale datasets such as ImageNet-1K. Specifically, we show that employing stronger data augmentation techniques and using larger datasets can directly narrow the gap between vanilla KD and other meticulously designed KD variants. This highlights the necessity of designing and evaluating KD approaches in practical scenarios, casting off the limitations of small-scale datasets. Our investigation of vanilla KD and its variants under more complex schemes, including stronger training strategies and different model capacities, demonstrates that vanilla KD is elegantly simple yet astonishingly effective at large scale. Without bells and whistles, we obtain state-of-the-art ResNet-50, ViT-S, and ConvNeXtV2-T models on ImageNet, achieving 83.1%, 84.3%, and 85.0% top-1 accuracy, respectively. PyTorch code and checkpoints can be found at https://github.com/Hao840/vanillaKD.
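For readers unfamiliar with the vanilla KD framework the abstract refers to, the sketch below shows the standard distillation objective (soft-label KL term plus hard-label cross-entropy) in PyTorch. The function name, temperature, and loss weighting are illustrative assumptions, not the authors' exact training recipe; see the linked repository for the actual implementation.

```python
# Minimal sketch of the vanilla KD objective revisited in the paper.
# Assumptions: temperature and alpha values are placeholders, not the paper's settings.
import torch
import torch.nn.functional as F

def vanilla_kd_loss(student_logits, teacher_logits, targets,
                    temperature=1.0, alpha=0.5):
    """Combine the soft-label KL term with the hard-label cross-entropy."""
    # Soften both output distributions with the temperature, then match them with KL.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    kd_term = F.kl_div(log_p_student, p_teacher, reduction="batchmean")
    kd_term = kd_term * (temperature ** 2)  # conventional rescaling of the soft term

    ce_term = F.cross_entropy(student_logits, targets)
    return alpha * kd_term + (1.0 - alpha) * ce_term

# Usage inside a training loop: the teacher runs in eval mode without gradients.
# with torch.no_grad():
#     teacher_logits = teacher(images)
# loss = vanilla_kd_loss(student(images), teacher_logits, labels)
```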


