CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features

05/13/2019
by Sangdoo Yun, et al.

Regional dropout strategies have been proposed to enhance the performance of convolutional neural network classifiers. They have proved effective for guiding the model to attend to less discriminative parts of objects (e.g., the leg as opposed to the head of a person), thereby letting the network generalize better and localize objects more accurately. On the other hand, current regional dropout methods remove informative pixels from training images by overlaying a patch of either black pixels or random noise. Such removal is not desirable because it leads to information loss and inefficiency during training. We therefore propose the CutMix augmentation strategy: patches are cut and pasted among training images, and the ground truth labels are mixed proportionally to the area of the patches. By making efficient use of training pixels while retaining the regularization effect of regional dropout, CutMix consistently outperforms state-of-the-art augmentation strategies on CIFAR and ImageNet classification tasks, as well as on the ImageNet weakly-supervised localization task. Moreover, unlike previous augmentation methods, our CutMix-trained ImageNet classifier, when used as a pretrained model, yields consistent performance gains on the Pascal VOC detection and MS-COCO image captioning benchmarks. We also show that CutMix improves model robustness against input corruptions and its out-of-distribution detection performance.
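
A minimal sketch of the patch-and-label mixing described above, in PyTorch. The batch layout (NCHW images, integer class labels), the Beta(alpha, alpha) sampling of the mixing ratio, and the helper names rand_bbox and cutmix_batch are illustrative assumptions, not the authors' reference implementation.

    import numpy as np
    import torch

    def rand_bbox(height, width, lam):
        # Sample a box whose area is roughly (1 - lam) of the image.
        cut_ratio = np.sqrt(1.0 - lam)
        cut_h, cut_w = int(height * cut_ratio), int(width * cut_ratio)
        cy, cx = np.random.randint(height), np.random.randint(width)
        y1, y2 = np.clip(cy - cut_h // 2, 0, height), np.clip(cy + cut_h // 2, 0, height)
        x1, x2 = np.clip(cx - cut_w // 2, 0, width), np.clip(cx + cut_w // 2, 0, width)
        return y1, y2, x1, x2

    def cutmix_batch(images, labels, alpha=1.0):
        # Paste a patch cut from a shuffled copy of the batch into each image,
        # and return both label sets plus the mixing ratio lambda.
        lam = np.random.beta(alpha, alpha)
        perm = torch.randperm(images.size(0))
        y1, y2, x1, x2 = rand_bbox(images.size(2), images.size(3), lam)
        images[:, :, y1:y2, x1:x2] = images[perm, :, y1:y2, x1:x2]
        # Recompute lambda from the pasted area, since clipping can shrink the box.
        lam = 1.0 - (y2 - y1) * (x2 - x1) / float(images.size(2) * images.size(3))
        return images, labels, labels[perm], lam

    # Usage inside a training loop: mix the two losses with the same lambda, e.g.
    #   mixed, y_a, y_b, lam = cutmix_batch(inputs, targets)
    #   outputs = model(mixed)
    #   loss = lam * criterion(outputs, y_a) + (1 - lam) * criterion(outputs, y_b)

Recomputing lambda from the clipped box keeps the mixed label consistent with the number of pixels actually pasted, which is the "proportional to the area of the patches" rule stated above.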


Related research

SaliencyMix: A Saliency Guided Data Augmentation Strategy for Better Regularization (06/02/2020)
Advanced data augmentation strategies have widely been studied to improv...

Attentive CutMix: An Enhanced Data Augmentation Approach for Deep Learning Based Image Classification (03/29/2020)
Convolutional neural networks (CNN) are capable of learning robust repre...

Evolving Image Compositions for Feature Representation Learning (06/16/2021)
Convolutional neural networks for visual recognition require large amoun...

ViewMix: Augmentation for Robust Representation in Self-Supervised Learning (09/06/2023)
Joint Embedding Architecture-based self-supervised learning methods have...

VideoMix: Rethinking Data Augmentation for Video Classification (12/07/2020)
State-of-the-art video action classifiers often suffer from overfitting....

Hierarchical Complementary Learning for Weakly Supervised Object Localization (11/16/2020)
Weakly supervised object localization (WSOL) is a challenging problem wh...

Selective Output Smoothing Regularization: Regularize Neural Networks by Softening Output Distributions (03/29/2021)
In this paper, we propose Selective Output Smoothing Regularization, a n...
