Hypernetwork-Based Augmentation

06/11/2020
by Chih-Yang Chen, et al.

Data augmentation is an effective technique for improving the generalization of deep neural networks. Recently, AutoAugment proposed a well-designed search space and a search algorithm that automatically finds augmentation policies in a data-driven manner. However, AutoAugment is computationally intensive. In this paper, we propose an efficient gradient-based search algorithm, called Hypernetwork-Based Augmentation (HBA), which learns model parameters and augmentation hyperparameters simultaneously in a single training run. HBA uses a hypernetwork to approximate a population-based training algorithm, which enables augmentation hyperparameters to be tuned by gradient descent. In addition, we introduce a weight-sharing strategy that simplifies the hypernetwork architecture and speeds up the search. We conduct experiments on CIFAR-10, CIFAR-100, SVHN, and ImageNet. Our results demonstrate that HBA is significantly faster than state-of-the-art methods while achieving competitive accuracy.
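To make the core idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of joint gradient-based tuning of a model and an augmentation hyperparameter, where a small hypernetwork modulates the model's weights as a function of that hyperparameter. All names (`log_magnitude`, `hyper`, the noise-based augmentation) are illustrative assumptions, and the paper's actual policies, search space, and weight-sharing scheme are far richer.

```python
# Illustrative sketch: a hypernetwork conditions model weights on an
# augmentation hyperparameter, so the hyperparameter receives gradients
# and can be optimized alongside the model parameters.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Augmentation hyperparameter, learned jointly (name is illustrative).
log_magnitude = nn.Parameter(torch.tensor(-2.0))

# Base classifier; a hypernetwork maps the hyperparameter to per-input
# weight scales, roughly approximating how the trained weights would
# vary as a function of the augmentation setting.
base = nn.Linear(8, 2)
hyper = nn.Linear(1, 8)

def forward(x, magnitude):
    # Differentiable "augmentation": additive noise scaled by magnitude.
    x = x + magnitude * torch.randn_like(x)
    # Hypernetwork-modulated weights (broadcast over output units).
    scale = 1.0 + hyper(magnitude.view(1, 1)).squeeze(0)
    return x @ (base.weight * scale.unsqueeze(0)).t() + base.bias

opt = torch.optim.Adam(
    list(base.parameters()) + list(hyper.parameters()) + [log_magnitude],
    lr=1e-2,
)
x, y = torch.randn(32, 8), torch.randint(0, 2, (32,))
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(forward(x, log_magnitude.exp()), y)
    loss.backward()
    opt.step()
print(loss.item())
```

In this toy setup the gradient of the training loss flows both into the model weights and, through the noise scale and the hypernetwork, into the augmentation hyperparameter, which is the mechanism that lets HBA avoid the repeated full trainings required by black-box search.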


Related research

- Direct Differentiable Augmentation Search (04/09/2021)
- Fast AutoAugment (05/01/2019)
- Population Based Augmentation: Efficient Learning of Augmentation Policy Schedules (05/14/2019)
- Entropy-Based Search Algorithm for Experimental Design (08/29/2010)
- Training Protocol Matters: Towards Accurate Scene Text Recognition via Training Protocol Searching (03/13/2022)
- A Dual-Dimer Method for Training Physics-Constrained Neural Networks with Minimax Architecture (05/01/2020)
- An Asymptotically Optimal Multi-Armed Bandit Algorithm and Hyperparameter Optimization (07/11/2020)
