1 Introduction
Deep learning has become a state-of-the-art technique for computer vision tasks, including object recognition yamada2018shakedrop ; real2018regularized ; hu2018squeeze , detection ren2015faster ; liu2016ssd , and segmentation chen2018deeplab ; he2017mask . However, deep learning models with large capacity often suffer from overfitting unless sufficiently large amounts of labeled data are available. Data augmentation (DA) is a useful regularization technique that increases both the quantity and the diversity of training data. Notably, applying a carefully designed set of augmentations rather than naive random transformations during training improves the generalization ability of a network significantly krizhevsky2012imagenet ; paschali2019data . However, in most cases, designing such augmentations has relied on human experts with prior knowledge of the dataset.
With the recent advancement of automated machine learning (AutoML), there have been efforts to design an automated process that searches for augmentation strategies directly from a dataset. AutoAugment
cubuk2018autoaugment uses reinforcement learning (RL) to automatically find a data augmentation policy for a given target dataset and model. It samples one augmentation policy at a time using a controller RNN, trains the model with that policy, and uses the resulting validation accuracy as a reward to update the controller. AutoAugment achieves dramatic performance improvements on several image recognition benchmarks. However, it requires thousands of GPU-hours even in a reduced setting in which both the target dataset and the network are small. The recently proposed Population Based Augmentation (PBA)
ho2019pba addresses this problem with a method based on population-based training for hyperparameter optimization. In contrast to these previous methods, we propose a new search strategy that does not require any repeated training of child models. Instead, the proposed algorithm directly searches for augmentation policies that maximize the match between the distribution of an augmented split and the distribution of another, unaugmented split, using a single trained model.
Dataset | AutoAugment cubuk2018autoaugment | Fast AutoAugment
CIFAR-10 | 5000 | 3.5
SVHN | 1000 | 1.5
ImageNet | 15000 | 450

Table 1: GPU-hours comparison of Fast AutoAugment with AutoAugment. We estimate our computation cost on an NVIDIA Tesla V100, while AutoAugment reported its computation cost on a Tesla P100.
In this paper, we propose an efficient search method for augmentation policies, called Fast AutoAugment, motivated by Bayesian DA tran2017bayesian . Our strategy is to improve the generalization performance of a given network by learning augmentation policies that treat augmented data as missing data points of the training data. However, unlike Bayesian DA, the proposed method recovers those missing data points through the exploitation and exploration of a family of inference-time augmentations simonyan2014very ; szegedy2016rethinking via Bayesian optimization in the policy search phase. We realize this with an efficient density-matching algorithm that does not require any back-propagation through the network for each policy evaluation. The proposed algorithm can be easily implemented on top of distributed learning frameworks such as Ray ray .
Our experiments show that the proposed method can search augmentation policies significantly faster than AutoAugment (see Table 1
), while retaining performances comparable to AutoAugment on diverse image datasets and networks, especially in two use cases: (a) direct augmentation search on the dataset of interest, and (b) transfer of learned augmentation policies to new datasets. On ImageNet, we achieve an error rate of 19.4% for ResNet-200 trained with our searched policy, which is 0.6% better than the 20.0% obtained with AutoAugment.
This paper is organized as follows. First, we introduce related works on automatic data augmentation in Section 2. Then, we present our problem setting and propose the Fast AutoAugment algorithm to solve the objective efficiently in Section 3. Finally, we demonstrate the efficiency of our method through comparisons with baseline augmentation methods and AutoAugment in Section 4.
2 Related Work
There are many studies on data augmentation, especially for image recognition. On benchmark image datasets such as CIFAR and ImageNet, random crop, flip, rotation, scaling, and color transformations have been used as baseline augmentation methods krizhevsky2012imagenet ; Han_2017_CVPR ; sato2015apac . Mixup zhang2017mixup , Cutout devries2017cutout , and CutMix yun2019cutmix have recently been proposed to randomly replace or mask out image patches, and have further improved performance on image recognition tasks. However, these methods are designed manually based on domain knowledge.
Naturally, automatically finding data augmentation methods from data has emerged as a way to overcome the performance limitations that originate from a human's cumbersome exploration of methods. Smart Augmentation lemley2017smart introduced a network that learns to generate augmented data by merging two or more samples of the same class. shrivastava2017learning employed a generative adversarial network (GAN) goodfellow2014generative to generate images that augment datasets. Bayesian DA tran2017bayesian combined a Monte Carlo expectation-maximization algorithm with a GAN to generate data, treating augmented data as missing data points on the distribution of the training set.
Due to the remarkable successes of neural architecture search (NAS) algorithms on various computer vision tasks real2018regularized ; zoph2018learning ; kim2018scalable , several recent studies also develop automated search algorithms that obtain augmentation policies for given datasets and models. The main difference between the generative methods above and these automated augmentation search methods is that the former exploit generative models to create augmented data directly, whereas the latter find optimal combinations of predefined transformation functions. AutoAugment cubuk2018autoaugment introduced an RL-based search strategy that alternately trains a child model and an RNN controller, and showed state-of-the-art performances on various datasets with different models. Recently, PBA ho2019pba proposed a new algorithm that generates augmentation policy schedules based on population-based training jaderberg2017population . Similar to PBA, our method also employs hyperparameter optimization to search for optimal policies, but uses the Tree-structured Parzen Estimator (TPE) algorithm bergstra2011algorithms for its practical implementation.
3 Fast AutoAugment
In this section, we first introduce the search space of the symbolic augmentation operations and formulate a new search strategy, efficient density matching, to find the optimal augmentation policies efficiently. We then describe our implementation based on Bayesian hyperparameter optimization incorporated into a distributed learning framework.
3.1 Search Space
Let $\mathbb{O}$ be a set of augmentation (image transformation) operations $\mathcal{O} : \mathcal{X} \to \mathcal{X}$ defined on the input image space $\mathcal{X}$. Each operation $\mathcal{O}$ has two parameters: the calling probability $p$ and the magnitude $\lambda$, which determines the variability of the operation. Some operations (e.g. invert, flip) do not use the magnitude. Let $\mathcal{S}$ be the set of sub-policies, where a sub-policy $\tau \in \mathcal{S}$ consists of $N_\tau$ consecutive operations $\{\bar{\mathcal{O}}^{(\tau)}_n(x; p^{(\tau)}_n, \lambda^{(\tau)}_n) : n = 1, \ldots, N_\tau\}$, and each operation is applied to an input image $x$ sequentially with probability $p$ as follows:
$$\bar{\mathcal{O}}(x; p, \lambda) = \begin{cases} \mathcal{O}(x; \lambda) & \text{with probability } p, \\ x & \text{with probability } 1 - p. \end{cases} \quad (1)$$
Hence, the output of a sub-policy $\tau$ can be described by a composition of operations $\tau(x) = \tau_{(N_\tau)}(x)$,
where $\tau_{(n)}(x) = \bar{\mathcal{O}}^{(\tau)}_n\big(\tau_{(n-1)}(x); p^{(\tau)}_n, \lambda^{(\tau)}_n\big)$ and $\tau_{(0)}(x) = x$. Figure 1 shows a specific example of images augmented by a sub-policy $\tau$. Note that each sub-policy is a random sequence of image transformations that depends on $p$ and $\lambda$, and this enables a single sub-policy to cover a wide range of data augmentations. Our final policy $\mathcal{T}$ is a collection of $N_{\mathcal{T}}$ sub-policies, and $\mathcal{T}(D)$ indicates the set of augmented images of dataset $D$ transformed by every sub-policy $\tau \in \mathcal{T}$:

$$\mathcal{T}(D) = \{\tau(x) : x \in D, \; \tau \in \mathcal{T}\}.$$
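As a toy illustration of these definitions, the sketch below applies a stochastic sub-policy as in (1); the operations and scalar "images" are hypothetical stand-ins, not the paper's actual 16 transformations:

```python
import random

# Toy "operations": each maps an image (a float here, for illustration only)
# to a new one, with a magnitude controlling the strength of the transform.
def brighten(x, magnitude):
    return x + magnitude

def scale(x, magnitude):
    return x * (1.0 + magnitude)

def apply_op(x, op, p, magnitude, rng):
    """Stochastic operation: apply op with probability p, else return x unchanged."""
    return op(x, magnitude) if rng.random() < p else x

def apply_subpolicy(x, subpolicy, rng):
    """A sub-policy is a sequence of (op, p, magnitude) applied consecutively."""
    for op, p, magnitude in subpolicy:
        x = apply_op(x, op, p, magnitude, rng)
    return x

def augment(dataset, policy, rng):
    """Augment every image with every sub-policy in the policy."""
    return [apply_subpolicy(x, tau, rng) for tau in policy for x in dataset]

rng = random.Random(0)
tau = [(brighten, 1.0, 0.5), (scale, 1.0, 0.5)]  # p = 1.0 makes this run deterministic
print(apply_subpolicy(1.0, tau, rng))            # prints 2.25: (1.0 + 0.5) * 1.5
```

With probabilities below 1.0, repeated applications of the same sub-policy yield different outputs, which is what lets a handful of sub-policies cover a wide range of augmentations.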
Our search space is similar to that of previous methods, except that we use continuous values for both probability and magnitude, which admits strictly more candidates than a discretized search space.
3.2 Search Strategy
In Fast AutoAugment, we cast the search for an augmentation policy as a density matching between a pair of train datasets. Let $p(x, y)$ be a probability distribution on the space of images and labels, and assume dataset $D$ is sampled from this distribution. For a given classification model $\mathcal{M}(\cdot \,|\, \theta)$ parameterized by $\theta$, the expected accuracy and the expected loss of $\mathcal{M}(\cdot \,|\, \theta)$ on dataset $D$ are denoted by $\mathcal{R}(\theta \,|\, D)$ and $\mathcal{L}(\theta \,|\, D)$, respectively.
3.2.1 Efficient Density Matching for Augmentation Policy Search
For any given pair of datasets $D_{\mathcal{M}}$ and $D_{\mathcal{A}}$, our goal is to improve the generalization ability by searching for augmentation policies that match the density of $D_{\mathcal{M}}$ with the density of the augmented $D_{\mathcal{A}}$. However, it is impractical to compare these two distributions directly for an evaluation of every candidate policy. Therefore, we perform this evaluation by measuring how much one dataset follows the pattern of the other, making use of the model predictions on both datasets. In detail, we split $D_{\text{train}}$ into $D_{\mathcal{M}}$ and $D_{\mathcal{A}}$, which are used for learning the model parameter $\theta$ and for exploring the augmentation policy $\mathcal{T}$, respectively. We employ the following objective to find a set of learned augmentation policies $\mathcal{T}_*$:
$$\mathcal{T}_* = \operatorname*{argmax}_{\mathcal{T}} \; \mathcal{R}\big(\theta_{\mathcal{M}} \,\big|\, \mathcal{T}(D_{\mathcal{A}})\big) \quad (2)$$
where the model parameter $\theta_{\mathcal{M}}$ is trained on $D_{\mathcal{M}}$. In this objective, $\mathcal{T}$ approximately minimizes the distance between the density of $D_{\mathcal{M}}$ and the density of $\mathcal{T}(D_{\mathcal{A}})$, from the perspective of maximizing the performance of model predictions on both datasets with the same parameter $\theta_{\mathcal{M}}$.
To achieve (2), we propose an efficient strategy for augmentation policy search (see Figure 2). First, we conduct $K$-fold stratified shuffling shahrokh2013effect to split the train dataset into $\{D^{(1)}, \ldots, D^{(K)}\}$, where each $D^{(k)}$ consists of two datasets $D^{(k)}_{\mathcal{M}}$ and $D^{(k)}_{\mathcal{A}}$. For convenience, we omit the superscript $(k)$ in the notation of datasets in the remaining parts. Next, we train the model parameter $\theta$ on each $D_{\mathcal{M}}$ from scratch without data augmentation. Contrary to previous methods cubuk2018autoaugment ; ho2019pba , our method does not need to reduce the given network to child models or proxy tasks.
After training the model parameter, for each step $t = 1, \ldots, T$, we explore $B$ candidate policies via a Bayesian optimization method that repeatedly samples a sequence of sub-policies from the search space to construct a policy $\mathcal{T}$, and tunes the corresponding calling probabilities $p$ and magnitudes $\lambda$ to minimize the expected loss $\mathcal{L}(\theta \,|\, \mathcal{T}(D_{\mathcal{A}}))$ on the augmented dataset (see line 6 in Algorithm 1). Note that, during this policy exploration-and-exploitation procedure, the proposed algorithm never trains the model parameter from scratch again; hence, it finds augmentation policies significantly faster than AutoAugment. The concrete Bayesian optimization method is explained in Section 3.2.2.
As the algorithm completes each exploration step, we select the top-$N$ policies over the $B$ candidates and denote them collectively by $\mathcal{T}_t$. Finally, we merge every $\mathcal{T}_t$ into $\mathcal{T}_*$. See Algorithm 1 for the overall procedure. At the end of the process, we augment the whole dataset with $\mathcal{T}_*$ and retrain the model parameter $\theta$. Through the proposed method, we can expect that the performance on the augmented dataset is statistically higher than that on $D$:

$$\mathcal{R}\big(\theta \,\big|\, \mathcal{T}_*(D)\big) \geq \mathcal{R}(\theta \,|\, D),$$
since the learned augmentation policy works as an optimized inference-time augmentation simonyan2014very ; szegedy2016rethinking that makes the model predict correct answers robustly. Consequently, the learned augmentation policies approach objective (2) and improve the generalization performance as desired.
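The search procedure above can be sketched in a few lines. The candidate sampler and the loss below are toy stand-ins (in the actual algorithm, a TPE optimizer proposes candidates and the loss is the trained network's loss on the augmented split); the point of the sketch is that every candidate is scored with forward passes of one fixed model, never by retraining:

```python
import random

def search_policies(model_loss, d_augment, sample_candidate,
                    search_depth=200, top_n=10, rng=None):
    """Score candidate policies by the loss of an already-trained model on the
    augmented split; no per-candidate retraining or back-propagation."""
    rng = rng or random.Random(0)
    scored = []
    for _ in range(search_depth):
        policy = sample_candidate(rng)        # in the real method, TPE proposes this
        loss = model_loss(policy, d_augment)  # forward passes only
        scored.append((loss, policy))
    scored.sort(key=lambda pair: pair[0])     # lower loss = better density match
    return [policy for _, policy in scored[:top_n]]

# Toy stand-ins: a "policy" is a scalar shift; the fixed "model" prefers augmented
# data centered at 0, mimicking a match to the distribution it was trained on.
d_a = [-1.0, 0.0, 1.0]
toy_loss = lambda shift, data: sum((x + shift) ** 2 for x in data)
sampler = lambda rng: rng.uniform(-2.0, 2.0)

best = search_policies(toy_loss, d_a, sampler, search_depth=50, top_n=3)
# best holds the three sampled shifts whose loss is smallest, i.e. closest to 0
```

Because each evaluation is only an inference pass, the cost per candidate is a tiny fraction of a full child-model training run, which is where the speedup over AutoAugment comes from.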
3.2.2 Policy Exploration via Bayesian Optimization
Policy exploration is an essential ingredient of automated augmentation search. Since evaluating the model performance for every candidate policy is computationally expensive, we apply Bayesian optimization to the exploration of augmentation strategies. Precisely, at line 6 in Algorithm 1, we employ the following Expected Improvement (EI) criterion jones2001taxonomy
$$EI(\mathcal{T}) = \mathbb{E}\big[\max\big(L^* - \mathcal{L}(\theta \,|\, \mathcal{T}(D_{\mathcal{A}})),\, 0\big)\big] \quad (3)$$

as the acquisition function for efficient exploration. Here, $L^*$ denotes a constant threshold determined by the quantile of losses among previously explored policies. We employ variable kernel density estimation
terrell1992adaptivekde on the graph-structured search space to approximate criterion (3). In practice, since this optimization method is already provided by the tree-structured Parzen estimator (TPE) algorithm bergstra2011algorithms , we use its HyperOpt implementation for parallelized search.
3.3 Implementation
Fast AutoAugment searches for the desired augmentation policies by applying the aforementioned Bayesian optimization to distributed train splits. In other words, the overall search process consists of two steps: (1) training model parameters on $K$-fold train data with default augmentation rules, and (2) exploration-and-exploitation using HyperOpt to search for the optimal augmentation policies. Below, we describe the practical implementation of the overall steps in Algorithm 1. These procedures are mostly parallelizable, which makes the proposed method efficient in actual usage. We utilize Ray ray to implement Fast AutoAugment, which enables us to train models and search policies in a distributed manner.
Shuffle (Line 1): We split training sets while preserving the percentage of samples for each class (stratified shuffling) using the StratifiedShuffleSplit method in scikit-learn scikitlearn .
Train (Line 4): Train models on each training split. We implement this to run in parallel across multiple machines to reduce the total running time when sufficient computational resources are available.
Explore-and-Exploit (Line 6): We use the HyperOpt library from Ray with $B$ search iterations and 20 maximum concurrent evaluations. Different from AutoAugment, we do not discretize the search space, since our search algorithm can handle continuous values. We explore one of the possible operations with probability $p$ and magnitude $\lambda$. The values of probability and magnitude are sampled uniformly at the beginning; HyperOpt then modulates these values to minimize the objective $\mathcal{L}$.
Merge (Lines 7-9): Select the top-$N$ best policies for each split, and then combine the obtained policies from all splits. This set of final policies is used to retrain the model.
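The Shuffle, Explore-and-Exploit, and Merge steps can be sketched in pure Python. The real implementation delegates splitting to scikit-learn and exploration to HyperOpt; the helpers below are simplified, self-contained illustrations of the same logic:

```python
import random
from collections import defaultdict

def stratified_split(labels, rng=None):
    """Shuffle: split indices into (D_M, D_A) halves, preserving per-class proportions."""
    rng = rng or random.Random(0)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    d_m, d_a = [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        half = len(idxs) // 2
        d_m.extend(idxs[:half])
        d_a.extend(idxs[half:])
    return d_m, d_a

def quantile_threshold(losses, q=0.25):
    """Explore: the EI threshold L* is a quantile of the losses observed so far,
    so 'improvement' means beating the best fraction q of explored policies."""
    ordered = sorted(losses)
    return ordered[min(int(q * len(ordered)), len(ordered) - 1)]

def merge_top_policies(per_split_rankings, top_n):
    """Merge: take the top-N policies of each split (best-first) and concatenate
    them into the final policy set T*."""
    final = []
    for ranked in per_split_rankings:
        final.extend(ranked[:top_n])
    return final
```

With $K = 5$ splits and $N = 10$, the merge step yields 50 policies in total, and each split's search can run on a different machine since the splits share no state.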
4 Experiments and Results
In this section, we examine the performance of Fast AutoAugment on the CIFAR-10, CIFAR-100 cifar , and ImageNet imagenet datasets, and compare the results with baseline preprocessing, Cutout devries2017cutout , AutoAugment cubuk2018autoaugment , and PBA ho2019pba . For ImageNet, we only compare the baseline, AutoAugment, and Fast AutoAugment, since PBA does not report experiments on ImageNet. We follow the experimental setting of AutoAugment for fair comparison, except that an evaluation of the proposed method on the AmoebaNet-B model real2018regularized is omitted. As in AutoAugment, each sub-policy consists of two operations ($N_\tau = 2$), each policy consists of five sub-policies ($N_{\mathcal{T}} = 5$), and the search space consists of the same 16 operations (ShearX, ShearY, TranslateX, TranslateY, Rotate, AutoContrast, Invert, Equalize, Solarize, Posterize, Contrast, Color, Brightness, Sharpness, Cutout, Sample Pairing). In the Fast AutoAugment algorithm, we use 5-fold stratified shuffling ($K = 5$), a search width of 2 ($T = 2$), a search depth of 200 ($B = 200$), and 10 selected policies ($N = 10$) for policy evaluation. We increase the batch size and adapt the learning rate accordingly to accelerate training you2017large . Otherwise, we set hyperparameters equal to those of AutoAugment where possible. For unknown hyperparameters, we follow the values from the original references or tune them to match baseline performances.
Model | Baseline | Cutout devries2017cutout | AA cubuk2018autoaugment | PBA ho2019pba | Fast AA (direct / transfer)
Wide-ResNet-40-2 | 5.3 | 4.1 | 3.7 | - | 3.6 / 3.7
Wide-ResNet-28-10 | 3.9 | 3.1 | 2.6 | 2.6 | 2.7 / 2.7
Shake-Shake(26 2x32d) | 3.6 | 3.0 | 2.5 | 2.5 | 2.7 / 2.5
Shake-Shake(26 2x96d) | 2.9 | 2.6 | 2.0 | 2.0 | 2.0 / 2.0
Shake-Shake(26 2x112d) | 2.8 | 2.6 | 1.9 | 2.0 | 2.0 / 1.9
PyramidNet+ShakeDrop | 2.7 | 2.3 | 1.5 | 1.5 | 1.8 / 1.7

Table 2: Test set error rates (%) on CIFAR-10.
Model | Baseline | Cutout devries2017cutout | AA cubuk2018autoaugment | PBA ho2019pba | Fast AA (direct / transfer)
Wide-ResNet-40-2 | 26.0 | 25.2 | 20.7 | - | 20.7 / 20.6
Wide-ResNet-28-10 | 18.8 | 18.4 | 17.1 | 16.7 | 17.3 / 17.3
Shake-Shake(26 2x96d) | 17.1 | 16.0 | 14.3 | 15.3 | 14.9 / 14.6
PyramidNet+ShakeDrop | 14.0 | 12.2 | 10.7 | 10.9 | 11.9 / 11.7

Table 3: Test set error rates (%) on CIFAR-100.
4.1 CIFAR-10 and CIFAR-100
For both CIFAR-10 and CIFAR-100, we conduct two experiments with Fast AutoAugment: (1) direct search on the full dataset with the target network, and (2) transfer of the policies found by Wide-ResNet-40-2 on reduced CIFAR-10, which consists of 4,000 randomly chosen examples. As shown in Tables 2 and 3, Fast AutoAugment significantly improves on the performances of the baseline and Cutout for every network, while achieving performances comparable to those of AutoAugment.
CIFAR-10 Results
In Table 2, we present the test set error rates for different models. We examine Wide-ResNet-40-2, Wide-ResNet-28-10 zagoruyko2016wide , Shake-Shake gastaldi2017shake , and ShakeDrop yamada2018shakedrop models to evaluate Fast AutoAugment. Fast AutoAugment achieves results comparable to AutoAugment and PBA in both experiments. We emphasize that the policy search on reduced CIFAR-10 takes only 3.5 GPU-hours. We also estimate the search time of a full direct search: even in the worst case, PyramidNet+ShakeDrop requires 780 GPU-hours, which is still less than the computation time of AutoAugment (5000 GPU-hours).
CIFAR-100 Results
Results are shown in Table 3. Again, Fast AutoAugment achieves significantly better results than the baseline and Cutout. However, except for Wide-ResNet-40-2, Fast AutoAugment shows slightly worse results than AutoAugment and PBA. Nevertheless, the search costs of the proposed method on CIFAR-100 are the same as those on CIFAR-10. We conjecture that the performance gaps between the other methods and Fast AutoAugment are caused by insufficient policy search in the exploration procedure or by overtraining of the model parameters in the proposed algorithm.
4.2 SVHN
Model | Baseline | Cutout devries2017cutout | AA cubuk2018autoaugment | PBA ho2019pba | Fast AA
Wide-ResNet-28-10 | 1.5 | 1.3 | 1.1 | 1.2 | 1.1

Table 4: Test set error rates (%) on SVHN.
We conduct an experiment on the SVHN dataset netzer2011reading with the same settings as in AutoAugment. We choose 1,000 examples randomly and apply Fast AutoAugment to find augmentation policies. Applying the obtained policies, we achieve performance comparable to AutoAugment. As shown in Table 4, the Wide-ResNet-28-10 model with the searched policies performs better than the baseline and Cutout, and is comparable with the other methods. We emphasize that we use the same settings as CIFAR, while AutoAugment tuned several hyperparameters on the validation dataset.
4.3 ImageNet
Model | Baseline | AutoAugment cubuk2018autoaugment | Fast AutoAugment
ResNet-50 | 23.7 / 6.9 | 22.4 / 6.2 | 22.4 / 6.3
ResNet-200 | 21.5 / 5.8 | 20.0 / 5.0 | 19.4 / 4.7

Table 5: Validation set top-1 / top-5 error rates (%) on ImageNet.
Following the experimental setting of AutoAugment, we use a reduced subset of the ImageNet train data composed of 6,000 samples from 120 randomly selected classes. During the policy search phase, the ResNet-50 he2016deep models on each fold were trained for 90 epochs; we then trained ResNet-50 he2016deep and ResNet-200 he2016identity with the searched augmentation policy. In Table 5, we compare the validation error rates of Fast AutoAugment with those of the baseline and of AutoAugment on ResNet-50 and ResNet-200. In this test, we exclude AmoebaNet real2018regularized since its exact implementation is not publicly available. As one can see from the table, the proposed method outperforms the benchmarks. Furthermore, our search method is 33 times faster than AutoAugment in the same experimental setting (see Table 1). Since extensive data augmentation protects the network from overfitting hernandez2018data , we believe the performance can be further improved by reducing the weight decay, which was tuned for the model with default augmentation rules.
5 Discussion
Similar to AutoAugment, we hypothesize that as we increase the number of sub-policies searched by Fast AutoAugment, the given neural network should show improved generalization performance. We investigate this hypothesis by testing trained Wide-ResNet-40-2 and Wide-ResNet-28-10 models on CIFAR-10 and CIFAR-100. We select sub-policy sets from a pool of 400 searched sub-policies and train the models again with each of these sub-policy sets. Figure 3 shows the relation between the average validation error and the number of sub-policies used in training. This result verifies that performance improves with more sub-policies, up to 100-125 sub-policies.

As one can observe in Tables 2 and 3, there are small gaps between the performance of policies from direct search and of the policies transferred from reduced CIFAR-10 with Wide-ResNet-40-2. These gaps increase as the model capacity increases, since augmentation policies searched by a small model are limited in how much they can improve the generalization performance of a large model (e.g., Shake-Shake). Nevertheless, transferred policies are better than default augmentations; hence, one can apply those policies to different image recognition tasks.
Taking advantage of the algorithm's efficiency, we experimented with searching for augmentation policies per class on CIFAR-100 with the Wide-ResNet-40-2 model. We changed the search depth to 100 and kept the other parameters the same. With the 70 best-performing policies per class, we obtained a slightly improved error rate of 17.2% for Wide-ResNet-28-10. Although it is difficult to see a definite improvement over AutoAugment and Fast AutoAugment, we believe that further optimization in this direction may improve performance. In particular, we expect the effect to be greater for datasets in which differences between classes, such as object scale, are large.
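Per-class search only changes how policies are applied at training time: each example is augmented with a policy drawn from the set searched for its own class. A minimal sketch, with hypothetical scalar "images" and toy per-class policies standing in for the searched transformations:

```python
import random

def augment_per_class(dataset, class_policies, rng):
    """Apply a class-conditional policy: each (image, label) pair is transformed
    by a policy drawn from the set searched for that label's class."""
    out = []
    for x, y in dataset:
        tau = rng.choice(class_policies[y])  # one of the top policies for class y
        out.append((tau(x), y))
    return out

# Hypothetical per-class policies: class 0 brightens, class 1 scales.
policies = {
    0: [lambda x: x + 1.0],
    1: [lambda x: x * 2.0],
}
rng = random.Random(0)
print(augment_per_class([(1.0, 0), (1.0, 1)], policies, rng))  # [(2.0, 0), (2.0, 1)]
```

The search itself is unchanged; it simply runs once per class over that class's examples, which is only practical because each policy evaluation is cheap.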
One can try tuning the other meta-parameters of the Bayesian optimization, such as the search depth or the kernel type in the TPE algorithm, in the augmentation search phase. However, empirically, this does not significantly improve model performance.
6 Conclusion
We propose an automatic process for learning augmentation policies for a given task and convolutional neural network. Our search method is significantly faster than AutoAugment, and its performance surpasses the human-crafted augmentation methods.
One can apply Fast AutoAugment to advanced architectures such as AmoebaNet and consider additional augmentation operations in the proposed search algorithm without increasing search costs. Moreover, the joint optimization of NAS and Fast AutoAugment is an intriguing area in AutoML. We leave these for future work. We also plan to apply Fast AutoAugment to various computer vision tasks beyond image classification in the near future.
References
 (1) J. S. Bergstra, R. Bardenet, Y. Bengio, and B. Kégl. Algorithms for hyperparameter optimization. In Advances in Neural Information Processing Systems, pages 2546–2554, 2011.
 (2) L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4):834–848, 2018.

 (3) E. D. Cubuk, B. Zoph, D. Mane, V. Vasudevan, and Q. V. Le. AutoAugment: Learning augmentation strategies from data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019.
 (4) J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.
 (5) T. DeVries and G. W. Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017.
 (6) X. Gastaldi. Shake-shake regularization. arXiv preprint arXiv:1705.07485, 2017.
 (7) I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
 (8) D. Han, J. Kim, and J. Kim. Deep pyramidal residual networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
 (9) K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, pages 2961–2969, 2017.
 (10) K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
 (11) K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In European conference on computer vision, pages 630–645. Springer, 2016.
 (12) A. Hernández-García and P. König. Data augmentation instead of explicit regularization. arXiv preprint arXiv:1806.03852, 2018.
 (13) D. Ho, E. Liang, I. Stoica, P. Abbeel, and X. Chen. Population based augmentation: Efficient learning of augmentation policy schedules. In ICML, 2019.
 (14) J. Hu, L. Shen, and G. Sun. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7132–7141, 2018.
 (15) M. Jaderberg, V. Dalibard, S. Osindero, W. M. Czarnecki, J. Donahue, A. Razavi, O. Vinyals, T. Green, I. Dunning, K. Simonyan, et al. Population based training of neural networks. arXiv preprint arXiv:1711.09846, 2017.
 (16) D. R. Jones. A taxonomy of global optimization methods based on response surfaces. Journal of global optimization, 21(4):345–383, 2001.
 (17) S. Kim, I. Kim, S. Lim, C. Kim, W. Baek, H. Cho, B. Yoon, and T. Kim. Scalable neural architecture search for 3d medical image segmentation. 2018.
 (18) A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
 (19) A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
 (20) J. Lemley, S. Bazrafkan, and P. Corcoran. Smart augmentation learning an optimal data augmentation strategy. IEEE Access, 5:5858–5869, 2017.
 (21) W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg. SSD: Single shot multibox detector. In European Conference on Computer Vision, pages 21–37. Springer, 2016.
 (22) P. Moritz, R. Nishihara, S. Wang, A. Tumanov, R. Liaw, E. Liang, M. Elibol, Z. Yang, W. Paul, M. I. Jordan, et al. Ray: A distributed framework for emerging AI applications. In 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18), pages 561–577, 2018.
 (23) Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng. Reading digits in natural images with unsupervised feature learning. 2011.
 (24) M. Paschali, W. Simson, A. G. Roy, M. F. Naeem, R. Göbl, C. Wachinger, and N. Navab. Data augmentation with manifold exploring geometric transformations for increased performance and robustness. arXiv preprint arXiv:1901.04420, 2019.
 (25) F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikitlearn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
 (26) E. Real, A. Aggarwal, Y. Huang, and Q. V. Le. Regularized evolution for image classifier architecture search. arXiv preprint arXiv:1802.01548, 2018.
 (27) S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, pages 91–99, 2015.
 (28) I. Sato, H. Nishimura, and K. Yokoi. APAC: Augmented pattern classification with neural networks. arXiv preprint arXiv:1505.03229, 2015.
 (29) M. Shahrokh Esfahani and E. R. Dougherty. Effect of separate sampling on classification accuracy. Bioinformatics, 30(2):242–250, 2013.
 (30) A. Shrivastava, T. Pfister, O. Tuzel, J. Susskind, W. Wang, and R. Webb. Learning from simulated and unsupervised images through adversarial training. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2107–2116, 2017.
 (31) K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7–9, 2015, Conference Track Proceedings, 2015.
 (32) C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2818–2826, 2016.
 (33) G. R. Terrell, D. W. Scott, et al. Variable kernel density estimation. The Annals of Statistics, 20(3):1236–1265, 1992.
 (34) T. Tran, T. Pham, G. Carneiro, L. Palmer, and I. Reid. A bayesian data augmentation approach for learning deep models. In Advances in Neural Information Processing Systems, pages 2797–2806, 2017.
 (35) Y. Yamada, M. Iwamura, T. Akiba, and K. Kise. Shakedrop regularization for deep residual learning. arXiv preprint arXiv:1802.02375, 2018.
 (36) Y. You, I. Gitman, and B. Ginsburg. Large batch training of convolutional networks. arXiv preprint arXiv:1708.03888, 2017.
 (37) S. Yun, D. Han, S. J. Oh, S. Chun, J. Choe, and Y. Yoo. Cutmix: Regularization strategy to train strong classifiers with localizable features. arXiv preprint arXiv:1905.04899, 2019.
 (38) S. Zagoruyko and N. Komodakis. Wide residual networks. In British Machine Vision Conference 2016. British Machine Vision Association, 2016.
 (39) H. Zhang, M. Cisse, Y. N. Dauphin, and D. Lopez-Paz. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412, 2017.
 (40) B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le. Learning transferable architectures for scalable image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8697–8710, 2018.