When to Learn What: Model-Adaptive Data Augmentation Curriculum

09/09/2023
by Chengkai Hou, et al.

Data augmentation (DA) is widely used to improve the generalization of neural networks by enforcing invariance to pre-defined transformations applied to the input data. However, a fixed augmentation policy may affect each sample differently at different training stages, and existing approaches cannot adapt the policy to each sample and to the model being trained. In this paper, we propose Model Adaptive Data Augmentation (MADAug), which jointly trains an augmentation policy network to teach the model when to learn what. Unlike previous work, MADAug selects augmentation operators for each input image via a model-adaptive policy that varies across training stages, producing a data augmentation curriculum optimized for better generalization. In MADAug, the policy is trained through a bi-level optimization scheme that minimizes the validation-set loss of a model trained on the policy-produced augmentations. We conduct an extensive evaluation of MADAug on multiple image classification tasks and network architectures, with thorough comparisons to existing DA approaches. MADAug outperforms or is on par with other baselines and exhibits better fairness: it improves accuracy on all classes, and on the difficult ones most. Moreover, the policy learned by MADAug shows better performance when transferred to fine-grained datasets. In addition, the auto-optimized policy gradually introduces increasing perturbations and naturally forms an easy-to-hard curriculum.
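
As a rough sketch of the bi-level scheme described above (using illustrative symbols rather than the paper's notation), the inner level trains model weights w on data augmented by a policy with parameters θ, while the outer level updates θ to minimize the validation loss:

    min_θ  L_val( w*(θ) )    subject to    w*(θ) = argmin_w  L_train( w ; A_θ(x_train) )

Here A_θ denotes the augmentation applied to training inputs by the policy network; w, θ, L_val, L_train, and A_θ are placeholder symbols for this sketch, not definitions taken from the paper.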

Related research

SelectAugment: Hierarchical Deterministic Sample Selection for Data Augmentation (12/06/2021)
Data augmentation (DA) has been widely investigated to facilitate model ...

Universal Adaptive Data Augmentation (07/14/2022)
Existing automatic data augmentation (DA) methods either ignore updating...

DADA: Differentiable Automatic Data Augmentation (03/08/2020)
Data augmentation (DA) techniques aim to increase data variability, and ...

Augmentation-induced Consistency Regularization for Classification (05/25/2022)
Deep neural networks have become popular in many supervised learning tas...

You Only Cut Once: Boosting Data Augmentation with a Single Cut (01/28/2022)
We present You Only Cut Once (YOCO) for performing data augmentations. Y...

Challenges of Adversarial Image Augmentations (11/24/2021)
Image augmentations applied during training are crucial for the generali...

Patch AutoAugment (03/20/2021)
Data augmentation (DA) plays a critical role in training deep neural net...
