CEKD: Cross Ensemble Knowledge Distillation for Augmented Fine-grained Data

03/13/2022
by   Ke Zhang, et al.

Data augmentation has proved effective in training deep models. Existing data augmentation methods tackle the fine-grained problem by blending image pairs and fusing the corresponding labels according to the statistics of the mixed pixels, which introduces additional label noise that harms network performance. Motivated by this, we present a simple yet effective cross ensemble knowledge distillation (CEKD) model for fine-grained feature learning. We propose a cross distillation module that provides additional supervision to alleviate the noise problem, and a collaborative ensemble module to overcome the target conflict problem. The proposed model can be trained in an end-to-end manner and requires only image-level label supervision. Extensive experiments on widely used fine-grained benchmarks demonstrate the effectiveness of the proposed model. Specifically, with a ResNet-101 backbone, CEKD reaches 89.59% accuracy, outperforming the state-of-the-art API-Net by 0.99 and 1.16 percentage points on the respective datasets.
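
To make the cross distillation idea concrete, the PyTorch-style sketch below trains two peer networks on mixed images and lets each additionally match the softened predictions of the other, so the peer signal supplements the noisy blended labels. This is only a minimal illustration under assumptions: the function names (mixed_ce, cross_distill_loss, cekd_style_step), the temperature T and the weight alpha are illustrative choices, not the authors' released implementation.

    # Minimal sketch of cross distillation between two peer networks trained
    # on mixed (blended) images; names and hyperparameters are assumptions.
    import torch
    import torch.nn.functional as F

    def mixed_ce(logits, y_a, y_b, lam):
        """Cross-entropy against a blended label, as used by mixing augmentations."""
        return lam * F.cross_entropy(logits, y_a) + (1.0 - lam) * F.cross_entropy(logits, y_b)

    def cross_distill_loss(student_logits, teacher_logits, T=4.0):
        """KL divergence between softened peer predictions (the peer acting as teacher is detached)."""
        p_teacher = F.softmax(teacher_logits.detach() / T, dim=1)
        log_p_student = F.log_softmax(student_logits / T, dim=1)
        return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

    def cekd_style_step(logits_1, logits_2, y_a, y_b, lam, alpha=1.0):
        """Total loss for one step: hard loss on mixed labels plus mutual soft supervision."""
        hard = mixed_ce(logits_1, y_a, y_b, lam) + mixed_ce(logits_2, y_a, y_b, lam)
        soft = cross_distill_loss(logits_1, logits_2) + cross_distill_loss(logits_2, logits_1)
        return hard + alpha * soft

In use, logits_1 and logits_2 would come from two networks fed the same mixed batch, and the combined loss is backpropagated through both; the detached teacher term is what supplies the extra supervision mentioned in the abstract.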


Related research

Attribute Mix: Semantic Data Augmentation for Fine Grained Recognition (04/06/2020)
Collecting fine-grained labels usually requires expert-level domain know...

Channel Self-Supervision for Online Knowledge Distillation (03/22/2022)
Recently, researchers have shown an increased interest in the online kno...

SnapMix: Semantically Proportional Mixing for Augmenting Fine-grained Data (12/09/2020)
Data mixing augmentation has proved effective in training deep models. R...

Circumventing Outliers of AutoAugment with Knowledge Distillation (03/25/2020)
AutoAugment has been a powerful algorithm that improves the accuracy of ...

Learning Attentive Pairwise Interaction for Fine-Grained Classification (02/24/2020)
Fine-grained classification is a challenging problem, due to subtle diff...

Understanding the Effect of Data Augmentation on Knowledge Distillation (05/21/2023)
Knowledge distillation (KD) requires sufficient data to transfer knowled...

Fine-grained Classification via Categorical Memory Networks (12/12/2020)
Motivated by the desire to exploit patterns shared across classes, we pr...
