Fast AutoAugment

05/01/2019 ∙ by Sungbin Lim, et al. ∙ Kakao Corp. ∙ Université de Montréal

Data augmentation is an indispensable technique for improving generalization and for dealing with imbalanced datasets. Recently, AutoAugment was proposed to search augmentation policies automatically from a dataset, and it has significantly improved performance on many image recognition tasks. However, its search method requires thousands of GPU hours even in a reduced setting. In this paper, we propose the Fast AutoAugment algorithm, which learns augmentation policies using a more efficient search strategy based on density matching. Compared to AutoAugment, the proposed algorithm speeds up the search by orders of magnitude while maintaining comparable performance on image recognition tasks with various models and datasets, including CIFAR-10, CIFAR-100, and ImageNet.


1 Introduction

Deep learning has become a state-of-the-art technique for computer vision tasks, including object recognition [yamada2018shakedrop, real2018regularized, hu2018squeeze], detection [ren2015faster, liu2016ssd], and segmentation [chen2018deeplab, he2017mask]. However, deep learning models with large capacity often suffer from overfitting unless large amounts of labeled data are available. Data augmentation (DA) has been shown to be a useful regularization technique that increases both the quantity and the diversity of training data. Notably, applying a carefully designed set of augmentations during training, rather than naive random transformations, significantly improves the generalization ability of a network [krizhevsky2012imagenet, paschali2019data]. However, in most cases, designing such augmentations has relied on human experts with prior knowledge of the dataset.

With the recent advancement of automated machine learning (AutoML), there have been efforts to design an automated process for searching augmentation strategies directly from a dataset. AutoAugment [cubuk2018autoaugment] uses reinforcement learning (RL) to automatically find a data augmentation policy for a given target dataset and model. It samples one augmentation policy at a time using a controller RNN, trains the model with that policy, and uses the resulting validation accuracy as a reward to update the controller. AutoAugment achieves dramatic performance improvements on several image recognition benchmarks. However, it requires thousands of GPU hours even in a reduced setting in which both the target dataset and the network are small. The recently proposed Population Based Augmentation (PBA) [ho2019pba] addresses this problem with a method based on the population based training approach to hyperparameter optimization. In contrast to previous methods, we propose a new search strategy that does not require any repeated training of child models. Instead, the proposed algorithm directly searches for augmentation policies that maximize the match between the distribution of an augmented split of the training data and the distribution of another, unaugmented split, using a single model.

Dataset    AutoAugment [cubuk2018autoaugment]    Fast AutoAugment
CIFAR-10   5000                                  3.5
SVHN       1000                                  1.5
ImageNet   15000                                 450

Table 1: GPU-hours comparison of Fast AutoAugment with AutoAugment. We estimate the computation cost of Fast AutoAugment on an NVIDIA Tesla V100, while AutoAugment measured its computation cost on a Tesla P100.

In this paper, we propose an efficient search method for augmentation policies, called Fast AutoAugment, motivated by Bayesian DA [tran2017bayesian]. Our strategy is to improve the generalization performance of a given network by learning augmentation policies which treat augmented data as missing data points of the training data. However, unlike Bayesian DA, the proposed method recovers those missing data points through the exploitation and exploration of a family of inference-time augmentations [simonyan2014very, szegedy2016rethinking] via Bayesian optimization in the policy search phase. We realize this with an efficient density-matching algorithm that does not require any back-propagation for network training during each policy evaluation. The proposed algorithm can be easily implemented on top of distributed learning frameworks such as Ray [ray].

Our experiments show that the proposed method searches augmentation policies significantly faster than AutoAugment (see Table 1), while retaining comparable performance on diverse image datasets and networks, particularly in two use cases: (a) direct augmentation search on the dataset of interest, and (b) transfer of learned augmentation policies to new datasets. On ImageNet, we achieve an error rate of 19.4% for ResNet-200 trained with our searched policy, which is 0.6% better than the 20.0% obtained with AutoAugment.

This paper is organized as follows. First, we review related work on automatic data augmentation in Section 2. Then, we present our problem setting and propose the Fast AutoAugment algorithm, which solves the objective efficiently, in Section 3. Finally, we demonstrate the efficiency of our method through comparisons with baseline augmentation methods and AutoAugment in Section 4.

2 Related Work

There are many studies on data augmentation, especially for image recognition. On benchmark image datasets such as CIFAR and ImageNet, random crop, flip, rotation, scaling, and color transformations have served as baseline augmentation methods [krizhevsky2012imagenet, Han_2017_CVPR, sato2015apac]. Mixup [zhang2017mixup], Cutout [devries2017cutout], and CutMix [yun2019cutmix] have recently been proposed to randomly replace or mask out image patches, and they obtain further improvements on image recognition tasks. However, these methods are designed manually based on domain knowledge.

Naturally, automatically finding data augmentation methods from data has emerged as a way to overcome the performance limitations that stem from a human's cumbersome manual exploration. Smart Augmentation [lemley2017smart] introduced a network that learns to generate augmented data by merging two or more samples of the same class. Shrivastava et al. [shrivastava2017learning] employed a generative adversarial network (GAN) [goodfellow2014generative] to generate images that augment datasets. Bayesian DA [tran2017bayesian] combined a Monte Carlo expectation maximization algorithm with a GAN to generate data, treating augmented data as missing data points on the distribution of the training set.

Due to the remarkable successes of neural architecture search (NAS) algorithms on various computer vision tasks [real2018regularized, zoph2018learning, kim2018scalable], several recent studies also deal with automated search algorithms that obtain augmentation policies for given datasets and models. The main difference between the aforementioned learned methods and these automated augmentation search methods is that the former exploit generative models to create augmented data directly, whereas the latter find optimal combinations of predefined transformation functions. AutoAugment [cubuk2018autoaugment] introduced an RL-based search strategy that alternately trains a child model and an RNN controller, and it showed state-of-the-art performance on various datasets with different models. Recently, PBA [ho2019pba] proposed a new algorithm that generates augmentation policy schedules based on population based training [jaderberg2017population]. Similar to PBA, our method also employs hyperparameter optimization to search for optimal policies, but it uses the Tree-structured Parzen Estimator (TPE) algorithm [bergstra2011algorithms] in its practical implementation.

3 Fast AutoAugment

In this section, we first introduce the search space of symbolic augmentation operations and formulate a new search strategy, efficient density matching, to find optimal augmentation policies. We then describe our implementation, which incorporates Bayesian hyperparameter optimization into a distributed learning framework.

3.1 Search Space

Figure 1: An example of images augmented by a sub-policy in the search space $\mathcal{S}$. Each sub-policy consists of 2 operations; [cutout, autocontrast] is used in this figure. Each operation has two parameters: the probability $p$ of calling the operation and the magnitude $\lambda$ of the operation. The operations are applied with their corresponding probabilities, so the sub-policy randomly maps an input image to one of 4 possible outputs. Note that the identity map (no augmentation) is also possible, with probability $(1 - p_1)(1 - p_2)$.

Let $\mathbb{O}$ be a set of augmentation (image transformation) operations $\mathcal{O} : \mathcal{X} \to \mathcal{X}$ defined on the input image space $\mathcal{X}$. Each operation $\mathcal{O}$ has two parameters: the calling probability $p$ and the magnitude $\lambda$, which determines the variability of the operation. Some operations (e.g. invert, flip) do not use the magnitude. Let $\mathcal{S}$ be the set of sub-policies, where a sub-policy $\tau \in \mathcal{S}$ consists of $N_\tau$ consecutive operations $\{\bar{\mathcal{O}}_n^{(\tau)}(x; p_n^{(\tau)}, \lambda_n^{(\tau)}) : n = 1, \ldots, N_\tau\}$, each applied to an input image sequentially with probability $p_n$ as follows:

$$\bar{\mathcal{O}}(x; p, \lambda) = \begin{cases} \mathcal{O}(x; \lambda) & \text{with probability } p \\ x & \text{with probability } 1 - p. \end{cases} \tag{1}$$

Hence, the output of a sub-policy $\tau$ can be described by a composition of operations, $\tau(x) = \bar{\mathcal{O}}_{N_\tau}^{(\tau)} \circ \cdots \circ \bar{\mathcal{O}}_1^{(\tau)}(x)$, where $x_0 = x$ and $x_n = \bar{\mathcal{O}}_n^{(\tau)}(x_{n-1})$. Figure 1 shows a specific example of images augmented by a sub-policy $\tau$. Note that each sub-policy is a random sequence of image transformations depending on $p$ and $\lambda$, which enables a sub-policy to cover a wide range of data augmentations. Our final policy $\mathcal{T}$ is a collection of $N_{\mathcal{T}}$ sub-policies, and $\mathcal{T}(D)$ indicates the set of augmented images of dataset $D$ transformed by every sub-policy $\tau \in \mathcal{T}$:

$$\mathcal{T}(D) = \{ \tau(x) : x \in D,\ \tau \in \mathcal{T} \}.$$

Our search space is similar to those of previous methods [cubuk2018autoaugment, ho2019pba], except that we use continuous values of the probability $p$ and the magnitude $\lambda$ in $[0, 1]$, which admits more possibilities than a discretized search space.
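To make the sub-policy mechanics of Eq. (1) concrete, here is a minimal Python sketch, assuming Pillow for the image transformations; the two operations, their magnitude-to-parameter mappings, and the helper names are illustrative assumptions rather than the authors' implementation.

```python
import random
from PIL import Image, ImageOps

def rotate(img, magnitude):
    # Map a normalized magnitude in [0, 1] to an angle in [-30, 30] degrees.
    return img.rotate((magnitude * 2 - 1) * 30)

def autocontrast(img, magnitude):
    # Some operations (like autocontrast) ignore the magnitude parameter.
    return ImageOps.autocontrast(img)

def apply_sub_policy(img, sub_policy):
    """Apply each operation in sequence with its calling probability, as in Eq. (1)."""
    for op, prob, magnitude in sub_policy:
        if random.random() < prob:
            img = op(img, magnitude)
    return img

# A sub-policy tau with N_tau = 2 operations, each carrying (p, lambda);
# an input is randomly mapped to one of 2^2 = 4 possible outputs.
tau = [(rotate, 0.6, 0.8), (autocontrast, 0.5, 0.0)]
augmented = apply_sub_policy(Image.new("RGB", (32, 32)), tau)
```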

3.2 Search Strategy

In Fast AutoAugment, we cast the search for augmentation policies as a density matching between a pair of splits of the training dataset. Let $\mathcal{D}$ be a probability distribution on $\mathcal{X} \times \mathcal{Y}$, and assume the dataset $D$ is sampled from this distribution. For a given classification model $\mathcal{M}(\cdot \mid \theta)$ parameterized by $\theta$, the expected accuracy and the expected loss of $\mathcal{M}$ on a dataset $D$ are denoted by $\mathcal{R}(\theta \mid D)$ and $\mathcal{L}(\theta \mid D)$, respectively.

3.2.1 Efficient Density Matching for Augmentation Policy Search

Figure 2: The overall procedure of augmentation search in the Fast AutoAugment algorithm. For exploration, the proposed method splits the train dataset $D_{\text{train}}$ into $K$ folds, each consisting of two datasets $D_{\mathcal{M}}^{(k)}$ and $D_{\mathcal{A}}^{(k)}$. The model parameter $\theta$ is then trained in parallel on each $D_{\mathcal{M}}^{(k)}$. After training $\theta$, the algorithm evaluates $B$ bundles of augmentation policies on $D_{\mathcal{A}}^{(k)}$ without retraining $\theta$. The top-$N$ policies obtained from each of the $K$ folds are appended to an augmentation list $\mathcal{T}_*$.

For any given pair of $D_{\mathcal{M}}$ and $D_{\mathcal{A}}$, our goal is to improve the generalization ability by searching for augmentation policies that match the density of $D_{\mathcal{M}}$ with the density of the augmented $D_{\mathcal{A}}$. However, it is impractical to compare these two distributions directly for the evaluation of every candidate policy. Therefore, we perform this evaluation by measuring how closely one dataset follows the pattern of the other, making use of the model predictions on both datasets. In detail, let us split $D_{\text{train}}$ into $D_{\mathcal{M}}$ and $D_{\mathcal{A}}$, used for learning the model parameter $\theta$ and exploring the augmentation policy $\mathcal{T}$, respectively. We employ the following objective to find a set of learned augmentation policies:

$$\mathcal{T}_* = \operatorname*{argmax}_{\mathcal{T}} \mathcal{R}\big(\theta^* \mid \mathcal{T}(D_{\mathcal{A}})\big), \quad \text{where } \theta^* = \operatorname*{argmin}_{\theta} \mathcal{L}(\theta \mid D_{\mathcal{M}}), \tag{2}$$

i.e., the model parameter $\theta^*$ is trained on $D_{\mathcal{M}}$. In this objective, $\mathcal{T}$ approximately minimizes the distance between the density of $D_{\mathcal{M}}$ and the density of $\mathcal{T}(D_{\mathcal{A}})$ from the perspective of maximizing the performance of model predictions on both datasets with the same parameter $\theta^*$.

To achieve (2), we propose an efficient strategy for augmentation policy search (see Figure 2). First, we conduct $K$-fold stratified shuffling [shahrokh2013effect] to split the train dataset into $\{(D_{\mathcal{M}}^{(k)}, D_{\mathcal{A}}^{(k)})\}_{k=1}^{K}$, where each fold consists of two datasets $D_{\mathcal{M}}^{(k)}$ and $D_{\mathcal{A}}^{(k)}$. For convenience, we omit the superscript $(k)$ in the remainder of this section. Next, we train the model parameter $\theta$ on $D_{\mathcal{M}}$ from scratch, without data augmentation. Contrary to previous methods [cubuk2018autoaugment, ho2019pba], our method does not need to reduce the given network to child models or proxy tasks.

After training the model parameter, for each step $1 \le t \le T$, we explore $B$ candidate policies via a Bayesian optimization method that repeatedly samples a sequence of sub-policies from the search space $\mathcal{S}$ to construct a policy $\mathcal{T}$ and tunes the corresponding calling probabilities $p$ and magnitudes $\lambda$ to minimize the expected loss $\mathcal{L}(\theta \mid \mathcal{T}(D_{\mathcal{A}}))$ on the augmented dataset (see line 6 in Algorithm 1). Note that, during this exploration-and-exploitation procedure, the proposed algorithm never retrains the model parameter from scratch; hence it finds augmentation policies significantly faster than AutoAugment. The concrete Bayesian optimization method is explained in Section 3.2.2.
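The key property of this step, that a candidate policy is scored by the loss of an already-trained, frozen model on the augmented split, can be summarized in a short sketch. The following assumes PyTorch-style names: `model` is the network trained on $D_{\mathcal{M}}$, `policy` is a callable applying a sampled sub-policy to a tensor image, and `loader_DA` iterates over $D_{\mathcal{A}}$; these names are placeholders, not the authors' code.

```python
import torch
import torch.nn.functional as F

def evaluate_policy(model, policy, loader_DA, device="cpu"):
    """Average loss of the frozen model on the policy-augmented split D_A."""
    model.eval()
    total_loss, total_n = 0.0, 0
    with torch.no_grad():  # no back-propagation is needed to evaluate a policy
        for images, labels in loader_DA:
            augmented = torch.stack([policy(img) for img in images])
            logits = model(augmented.to(device))
            total_loss += F.cross_entropy(
                logits, labels.to(device), reduction="sum").item()
            total_n += labels.size(0)
    return total_loss / total_n  # lower loss = better density match
```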

As the algorithm completes each exploration step, we select the top-$N$ policies over the explored candidates, denoted collectively by $\mathcal{T}_t$. Finally, we merge every $\mathcal{T}_t$ into $\mathcal{T}_*$. See Algorithm 1 for the overall procedure. At the end of the process, we augment the whole dataset $D_{\text{train}}$ with $\mathcal{T}_*$ and retrain the model parameter $\theta$. Through the proposed method, we can expect the performance on the augmented dataset to be statistically higher than that on the original dataset:

$$\mathcal{R}\big(\theta \mid \mathcal{T}_*(D_{\text{train}})\big) \ge \mathcal{R}(\theta \mid D_{\text{train}}),$$

since the learned augmentation policy works as an optimized inference-time augmentation [simonyan2014very, szegedy2016rethinking] that makes the model predict correct answers robustly. Consequently, the learned augmentation policies approach the objective (2) and improve generalization performance as desired.

3.2.2 Policy Exploration via Bayesian Optimization

Policy exploration is an essential ingredient of automated augmentation search. Since evaluating the model performance for every candidate policy is computationally expensive, we apply Bayesian optimization to the exploration of augmentation strategies. Precisely, at line 6 in Algorithm 1, we employ the following expected improvement (EI) criterion [jones2001taxonomy]

$$\mathrm{EI}(\mathcal{T}) = \mathbb{E}\Big[\max\big(\mathcal{L}^* - \mathcal{L}(\theta \mid \mathcal{T}(D_{\mathcal{A}})),\ 0\big)\Big] \tag{3}$$

as the acquisition function for efficient exploration. Here, $\mathcal{L}^*$ denotes a constant threshold determined by a quantile of the losses observed among previously explored policies. We employ variable kernel density estimation [terrell1992adaptive-kde] on the graph-structured search space to approximate criterion (3). In practice, since this optimization method is already realized by the tree-structured Parzen estimator (TPE) algorithm [bergstra2011algorithms], we apply its HyperOpt library for a parallelized implementation.
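As an illustration of this step, the sketch below drives the exploration with HyperOpt's TPE optimizer, which realizes the EI-style criterion (3) internally, so only the search space and the loss need to be supplied. The operation subset and `make_policy` are assumptions of this example, `evaluate_policy` is reused from the sketch above, and the search runs sequentially here, whereas the paper parallelizes it through Ray.

```python
from hyperopt import Trials, fmin, hp, tpe

OPS = ["ShearX", "Rotate", "AutoContrast", "Cutout"]  # subset, for illustration

# Two operations per sub-policy; p and lambda are continuous in [0, 1].
space = [
    {
        "op": hp.choice(f"op_{i}", OPS),
        "prob": hp.uniform(f"prob_{i}", 0.0, 1.0),
        "mag": hp.uniform(f"mag_{i}", 0.0, 1.0),
    }
    for i in range(2)
]

def objective(params):
    policy = make_policy(params)                      # hypothetical helper
    return evaluate_policy(model, policy, loader_DA)  # loss on T(D_A)

trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=200, trials=trials)  # B = 200, as in Section 4
```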

3.3 Implementation

Input: θ, D_train, K, T, B, N
1:  Split D_train into K-fold data {(D_M^(k), D_A^(k))}, k = 1, ..., K   // stratified shuffling
2:  for k = 1, ..., K do
3:      T_k ← ∅   // initialize
4:      Train θ on D_M^(k)
5:      for t = 1, ..., T do
6:          B_t ← BayesOptim(S, L(θ | ·(D_A^(k))), B)   // explore-and-exploit B candidate policies
7:          T_t ← top-N policies in B_t
8:          T_k ← T_k ∪ T_t   // merge augmentation policies
9:      end for
10: end for
11: return T_* = ∪_{k=1,...,K} T_k
Algorithm 1: Fast AutoAugment

Fast AutoAugment searches for the desired augmentation policies by applying the aforementioned Bayesian optimization to distributed training splits. In other words, the overall search process consists of two steps: (1) training model parameters on the K-fold training data with default augmentation rules, and (2) exploration-and-exploitation using HyperOpt to search for the optimal augmentation policies. Below, we describe the practical implementation of these steps in Algorithm 1. The procedures are mostly parallelizable, which makes the proposed method efficient in actual usage. We utilize Ray [ray] to implement Fast AutoAugment, which enables us to train models and search policies in a distributed manner.

Shuffle (Line 1): We split the training set while preserving the percentage of samples for each class (stratified shuffling), using the StratifiedShuffleSplit method of scikit-learn [scikit-learn].
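A minimal usage sketch of this split follows; the 80/20 ratio between $D_{\mathcal{M}}$ and $D_{\mathcal{A}}$ is an illustrative assumption, not a value stated in this paper.

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

X = np.arange(50000).reshape(-1, 1)   # stand-in for CIFAR-10 example indices
y = np.repeat(np.arange(10), 5000)    # class labels (balanced here)

sss = StratifiedShuffleSplit(n_splits=5, test_size=0.2, random_state=0)
# folds[k] = (indices of D_M^(k), indices of D_A^(k)); class ratios preserved.
folds = [(model_idx, aug_idx) for model_idx, aug_idx in sss.split(X, y)]
```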

Train (Line 4): We train a model on each training split $D_{\mathcal{M}}^{(k)}$. We implement this to run in parallel across multiple machines to reduce the total running time when sufficient computational resources are available.
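A minimal sketch of the parallel per-fold training with Ray is shown below, reusing the folds from the split sketch above; `build_model` and `train_model` are hypothetical helpers standing in for ordinary supervised training, not the authors' exact code.

```python
import ray

ray.init()  # connect to (or start) a Ray cluster

@ray.remote  # add e.g. num_gpus=1 to reserve a GPU per task
def train_fold(fold_id, model_indices):
    model = build_model()              # hypothetical model constructor
    train_model(model, model_indices)  # train on D_M^(k), no augmentation
    return fold_id, model

# Launch all K folds at once; ray.get blocks until every task finishes.
futures = [train_fold.remote(k, folds[k][0]) for k in range(5)]
trained = dict(ray.get(futures))
```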

Explore-and-Exploit (Line 6): We use the HyperOpt library with Ray, with B = 200 search evaluations (see Section 4) and a maximum of 20 concurrent evaluations. Unlike AutoAugment, we do not discretize the search space, since our search algorithm can handle continuous values. We explore each of the possible operations with probability $p$ and magnitude $\lambda$; both values are sampled uniformly from $[0, 1]$ at the beginning, and HyperOpt then modulates them to optimize the objective $\mathcal{L}(\theta \mid \mathcal{T}(D_{\mathcal{A}}))$.

Merge (Lines 7-9): We select the top-$N$ policies for each split and then combine the policies obtained from all splits. This final set of policies $\mathcal{T}_*$ is used for re-training.
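As a usage note, re-training with the merged set $\mathcal{T}_*$ amounts to drawing one sub-policy at random for each training sample; a minimal sketch, reusing `apply_sub_policy` from the Section 3.1 example, is given below. The wrapper class is an assumption of this sketch, not the authors' code.

```python
import random

class AugmentedDataset:
    """Wrap a dataset so each access applies a random sub-policy from T_*."""
    def __init__(self, base_dataset, final_policies):
        self.base = base_dataset
        self.policies = final_policies  # merged top-N policies from all K folds

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        img, label = self.base[idx]
        tau = random.choice(self.policies)  # one sub-policy per sample
        return apply_sub_policy(img, tau), label
```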

4 Experiments and Results

In this section, we examine the performance of Fast AutoAugment on the CIFAR-10, CIFAR-100 [cifar], and ImageNet [imagenet] datasets and compare the results with baseline preprocessing, Cutout [devries2017cutout], AutoAugment [cubuk2018autoaugment], and PBA [ho2019pba]. For ImageNet, we compare only against the baseline and AutoAugment, since PBA does not report experiments on ImageNet. We follow the experimental setting of AutoAugment for fair comparison, except that we omit the evaluation of the proposed method on the AmoebaNet-B model [real2018regularized]. As in AutoAugment, each sub-policy consists of two operations ($N_\tau = 2$), each policy consists of five sub-policies ($N_{\mathcal{T}} = 5$), and the search space consists of the same 16 operations (ShearX, ShearY, TranslateX, TranslateY, Rotate, AutoContrast, Invert, Equalize, Solarize, Posterize, Contrast, Color, Brightness, Sharpness, Cutout, Sample Pairing). In the Fast AutoAugment algorithm, we utilize 5-fold stratified shuffling ($K = 5$), a search width of 2 ($T = 2$), a search depth of 200 ($B = 200$), and 10 selected policies ($N = 10$) for policy evaluation. We increase the batch size and adapt the learning rate accordingly to speed up training [you2017large]. Otherwise, we set the hyperparameters equal to those of AutoAugment where possible. For the unknown hyperparameters, we follow the values from the original references or tune them to match baseline performances.

Model                   Baseline  Cutout [devries2017cutout]  AA [cubuk2018autoaugment]  PBA [ho2019pba]  Fast AA (transfer / direct)
Wide-ResNet-40-2        5.3       4.1                         3.7                        -                3.6 / 3.7
Wide-ResNet-28-10       3.9       3.1                         2.6                        2.6              2.7 / 2.7
Shake-Shake(26 2x32d)   3.6       3.0                         2.5                        2.5              2.7 / 2.5
Shake-Shake(26 2x96d)   2.9       2.6                         2.0                        2.0              2.0 / 2.0
Shake-Shake(26 2x112d)  2.8       2.6                         1.9                        2.0              2.0 / 1.9
PyramidNet+ShakeDrop    2.7       2.3                         1.5                        1.5              1.8 / 1.7
Table 2: Test set error rate (%) on CIFAR-10.
Model                   Baseline  Cutout [devries2017cutout]  AA [cubuk2018autoaugment]  PBA [ho2019pba]  Fast AA (transfer / direct)
Wide-ResNet-40-2        26.0      25.2                        20.7                       -                20.7 / 20.6
Wide-ResNet-28-10       18.8      18.4                        17.1                       16.7             17.3 / 17.3
Shake-Shake(26 2x96d)   17.1      16.0                        14.3                       15.3             14.9 / 14.6
PyramidNet+ShakeDrop    14.0      12.2                        10.7                       10.9             11.9 / 11.7
Table 3: Test set error rate (%) on CIFAR-100.

4.1 CIFAR-10 and CIFAR-100

For both CIFAR-10 and CIFAR-100, we conduct two experiments using Fast AutoAugment: (1) direct search on the full dataset with the target network, and (2) transfer of the policies found with Wide-ResNet-40-2 on the reduced CIFAR-10, which consists of 4,000 randomly chosen examples. As shown in Tables 2 and 3, Fast AutoAugment significantly improves over the baseline and Cutout for every network while achieving performance comparable to that of AutoAugment.

CIFAR-10 Results

In Table 2, we present the test set error rates for the different models. We examine Wide-ResNet-40-2 and Wide-ResNet-28-10 [zagoruyko2016wide], Shake-Shake [gastaldi2017shake], and PyramidNet+ShakeDrop [yamada2018shakedrop] models to evaluate Fast AutoAugment. Fast AutoAugment achieves results comparable to AutoAugment and PBA in both experiments. We emphasize that the policy search takes only 3.5 GPU-hours on the reduced CIFAR-10. We also estimate the search time of a direct search on the full dataset: even in the worst case, PyramidNet+ShakeDrop requires 780 GPU-hours, which is still less than the computation time of AutoAugment (5000 GPU-hours).

CIFAR-100 Results

Results are shown in Table 3. Again, Fast AutoAugment achieves significantly better results than the baseline and Cutout. However, except for Wide-ResNet-40-2, Fast AutoAugment shows slightly worse results than AutoAugment and PBA. Nevertheless, the search costs of the proposed method on CIFAR-100 are the same as those on CIFAR-10. We conjecture that the performance gaps between the other methods and Fast AutoAugment are caused by insufficient policy search in the exploration procedure or by over-training of the model parameters in the proposed algorithm.

4.2 SVHN

Model              Baseline  Cutout [devries2017cutout]  AA [cubuk2018autoaugment]  PBA [ho2019pba]  Fast AA
Wide-ResNet-28-10  1.5       1.3                         1.1                        1.2              1.1
Table 4: Test set error rate (%) on SVHN.

We conducted an experiment on the SVHN dataset [netzer2011reading] with the same settings as in AutoAugment. We chose 1,000 examples at random and applied Fast AutoAugment to find augmentation policies, then applied the obtained policies to an initial model. Results are shown in Table 4: Wide-ResNet-28-10 trained with the searched policies performs better than the baseline and Cutout, and it is comparable with the other methods. We emphasize that we use the same settings as for CIFAR, while AutoAugment tuned several hyperparameters on the validation dataset.

4.3 ImageNet

Model       Baseline    AutoAugment [cubuk2018autoaugment]  Fast AutoAugment
ResNet-50   23.7 / 6.9  22.4 / 6.2                          22.4 / 6.3
ResNet-200  21.5 / 5.8  20.0 / 5.0                          19.4 / 4.7
Table 5: Validation set Top-1 / Top-5 error rate (%) on ImageNet.

Following the experimental setting of AutoAugment, we use a reduced subset of the ImageNet training data composed of 6,000 samples from 120 randomly selected classes. ResNet-50 [he2016deep] was trained on each fold for 90 epochs during the policy search phase, and we then trained ResNet-50 [he2016deep] and ResNet-200 [he2016identity] with the searched augmentation policies. In Table 5, we compare the validation errors of Fast AutoAugment with those of the baseline and of AutoAugment for ResNet-50 and ResNet-200. We exclude AmoebaNet [real2018regularized] from this test since its exact implementation is not publicly available. As the table shows, the proposed method outperforms the benchmarks. Furthermore, our search method is 33 times faster than AutoAugment under the same experimental settings (see Table 1). Since extensive data augmentation protects the network from overfitting [hernandez2018data], we believe the performance can be improved further by reducing the weight decay, which was tuned for the model trained with default augmentation rules.

5 Discussion

Figure 3: Validation error (%) of Wide-ResNet-40-2 and Wide-ResNet-28-10 trained on CIFAR-10 and CIFAR-100, as a function of the number of sub-policies used in training.

Similar to AutoAugment, we hypothesize that as we increase the number of sub-policies searched by Fast AutoAugment, the given neural network should show improved generalization performance. We investigate this hypothesis by testing the trained models Wide-ResNet-40-2 and Wide-ResNet-28-10 on CIFAR-10 and CIFAR-100. We select sub-policy sets of varying sizes from a pool of 400 searched sub-policies and train the models again with each of these sets. Figure 3 shows the relation between the average validation error and the number of sub-policies used in training. The result verifies that performance improves with more sub-policies, up to 100-125 sub-policies.

As one can observe in Tables 2-3, there are small gaps between the performance of policies found by direct search and that of policies transferred from the reduced CIFAR-10 with Wide-ResNet-40-2. These gaps increase as the model capacity increases, since augmentation policies searched with a small model have a limited ability to improve the generalization performance of a large model (e.g., Shake-Shake). Nevertheless, the transferred policies are better than the default augmentations; hence, one can apply them to different image recognition tasks.

Taking advantage of the efficiency of the algorithm, we experimented with searching augmentation policies per class on CIFAR-100 with the Wide-ResNet-40-2 model. We changed the search depth to B = 100 and kept the other parameters the same. With the 70 best-performing policies per class, we obtained a slightly improved error rate of 17.2% for Wide-ResNet-28-10. Although it is difficult to see a definite improvement over AutoAugment and Fast AutoAugment, we believe that further optimization in this direction may improve performance. In particular, we expect the effect to be greater for datasets in which between-class differences, such as object scale, are large.

One can also try tuning the other meta-parameters of Bayesian optimization, such as the search depth or the kernel type in the TPE algorithm, in the augmentation search phase. However, in our experience this does not significantly improve model performance.

6 Conclusion

We propose an automatic process for learning augmentation policies for a given task and convolutional neural network. Our search method is significantly faster than AutoAugment, and its performance surpasses that of human-crafted augmentation methods.

One can apply Fast AutoAugment to advanced architectures such as AmoebaNet and consider additional augmentation operations in the proposed search algorithm without increasing search costs. Moreover, the joint optimization of NAS and Fast AutoAugment is an intriguing direction in AutoML. We leave these for future work. We also plan to apply Fast AutoAugment to various computer vision tasks beyond image classification in the near future.

References