Circumventing Outliers of AutoAugment with Knowledge Distillation

03/25/2020
by Longhui Wei, et al.

AutoAugment has been a powerful algorithm that improves the accuracy of many vision tasks, yet it is sensitive to the operator space as well as hyper-parameters, and an improper setting may degrade network optimization. This paper delves into its working mechanism and reveals that AutoAugment may remove part of the discriminative information from the training image, so insisting on the ground-truth label is no longer the best option. To relieve the resulting inaccuracy of supervision, we make use of knowledge distillation, which refers to the output of a teacher model to guide network training. Experiments are performed on standard image classification benchmarks and demonstrate the effectiveness of our approach in suppressing the noise introduced by data augmentation and stabilizing training. By combining knowledge distillation with AutoAugment, we claim a new state of the art on ImageNet classification with a top-1 accuracy of 85.8%.
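The core idea is to soften the supervision on heavily augmented images by mixing the ground-truth loss with a distillation term from a teacher network. The following is a minimal sketch of such a training step, assuming a PyTorch setup; the temperature, loss weight, and the `augment`, `student`, and `teacher` objects are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend hard-label cross-entropy with a distillation term.

    The distillation term matches the student's softened predictions to the
    teacher's, which tolerates images whose discriminative content was
    weakened by aggressive augmentation.
    """
    # Standard cross-entropy against the (possibly unreliable) ground-truth label.
    ce = F.cross_entropy(student_logits, labels)
    # KL divergence between temperature-softened distributions (scaled by T^2).
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return (1.0 - alpha) * ce + alpha * kd

def train_step(student, teacher, images, labels, optimizer, augment):
    # `augment` stands in for an AutoAugment policy; it may erase class evidence.
    images = augment(images)
    with torch.no_grad():
        teacher_logits = teacher(images)   # teacher sees the same augmented view
    student_logits = student(images)
    loss = kd_loss(student_logits, teacher_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```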


