Recent years have seen remarkable success of deep neural networks in various computer vision applications. Driven by large amounts of labeled data, network performance has reached an impressive level. A common issue in training deep networks is that the process is prone to overfitting, owing to the sharp contrast between the huge model capacity and the limited dataset. Data augmentation, which applies randomized modifications to the training samples, has been shown to be an effective way to mitigate this problem. Simple image transforms, including random crops, horizontal flips, rotations and translations, are utilized to create label-preserving new training data that promote network generalization and accuracy. Recent efforts such as [8, 34] further develop the transform strategy and boost models to state-of-the-art accuracy on the CIFAR-10 and ImageNet datasets.
However, it is nontrivial to find a good augmentation policy for a given task, as policies vary significantly across datasets and tasks. Devising a reasonable policy often requires extensive experience combined with tremendous effort. Intuitively, automatically searching for a data augmentation strategy on the target dataset becomes an appealing alternative. Cubuk et al. described a procedure to automatically search for an augmentation strategy from data. The search process involves sampling thousands of policies, each evaluated by training a child model to measure its performance and further update the augmentation policy distribution represented by a controller. Despite its promising empirical performance, the whole process is computationally expensive and time consuming. In particular, it takes 15,000 policy samples, each trained for 120 epochs. This immense amount of computation limits its applicability in practical environments; the authors thus subsampled the dataset to 4,000 and 6,000 images for CIFAR-10 and ImageNet respectively. An inherent cause of this computational bottleneck is the fact that the augmentation strategy is searched in an offline manner.
In this paper, we present a new approach that treats auto-augmentation as a hyper-parameter optimization problem and drastically improves the efficiency of the auto-augmentation strategy. Specifically, we formulate the augmentation policy as a probability distribution, whose parameters are regarded as hyper-parameters. We further propose a bilevel framework that allows the distribution parameters to be optimized along with network training. In this bilevel setting, the inner objective is the minimization of the vanilla training loss w.r.t. the network parameters, while the outer objective is the maximization of validation accuracy w.r.t. the augmentation policy distribution parameters, optimized via REINFORCE. These two objectives are optimized simultaneously, so that the augmentation distribution parameters are tuned alongside the network parameters in an online manner. As the whole process eliminates the need for retraining, the computational cost is greatly reduced. Our proposed OHL-Auto-Aug remarkably improves the search efficiency while achieving significant accuracy improvements over baseline models. This ensures our method can be easily performed on large-scale datasets, including CIFAR-10 and ImageNet, without any data reduction.
Our main contributions can be summarized as follows: 1) We propose an online hyper-parameter learning approach for auto-augmentation that treats the augmentation policy as a parameterized probability distribution. 2) We introduce a bilevel framework to train the distribution parameters along with the network parameters, thus eliminating the need for re-training during search. 3) Our proposed OHL-Auto-Aug dramatically improves efficiency while achieving significant performance improvements. Compared to the previous state-of-the-art auto-augmentation approach, our OHL-Auto-Aug is 60× faster on CIFAR-10 and 24× faster on ImageNet with comparable accuracies.
2 Related Work
2.1 Data Augmentation
Data augmentation lies at the heart of all successful applications of deep neural networks. In order to improve network generalization, substantial domain knowledge is leveraged to design suitable data transformations. Early work applied various affine transforms, including horizontal and vertical translation, squeezing, scaling, and horizontal shearing, to improve model accuracy. Krizhevsky et al. exploited principal component analysis on ImageNet to randomly adjust colour and intensity values. Wu et al. further utilized a wide range of colour casting, vignetting, rotation, and lens distortion to train Deep Image on ImageNet.
There also exist attempts to learn a data augmentation strategy, including Smart Augmentation, which proposed a network that automatically generates augmented data by merging two or more samples from the same class, and a Bayesian approach that generates data based on the distribution learned from the training set. Generative adversarial networks have also been used for the purpose of generating additional data [22, 35, 1]. The work most closely related to our proposed method is Auto-Augment, which formulated auto-augmentation search as a discrete search problem and exploited a reinforcement learning framework to search for a policy consisting of possible augmentation operations. Our proposed method optimizes the distribution over the discrete augmentation operations but is much more computationally economical, benefiting from the proposed hyper-parameter optimization formulation.
2.2 Hyper-parameter Optimization
Hyper-parameter optimization was historically accomplished by selecting several values for each hyper-parameter, computing the Cartesian product of all these values, then running a full training for each set of values. Bergstra and Bengio showed that performing a random search is more efficient than a grid search, since it avoids excessive training with hyper-parameters that are set to a poor value. This random search process was later refined by quasi-random search. Gaussian processes have further been utilized to model the validation error as a function of the hyper-parameters; each training run refines this function so as to minimize the number of hyper-parameter sets to try. All these methods are “black-box” methods since they assume no knowledge about the internal training procedure: in particular, they have no access to the gradient of the validation loss w.r.t. the hyper-parameters. To address this issue, [21, 9] explicitly utilized the parameter learning process to obtain such a gradient by proposing a Lagrangian formulation associated with the parameter optimization dynamics. Maclaurin et al. used a reverse-mode differentiation approach, where the dynamics correspond to stochastic gradient descent with momentum. Our auto-augmentation builds on forward-mode hyper-parameter optimization, in which the hyper-parameters are changed after a certain number of parameter updates in a forward manner. This forward-mode procedure is suitable for the auto-augmentation search process and drastically reduces its cost.
2.3 Auto Machine Learning and Neural Architecture Search
Auto Machine Learning (AutoML) aims to free human practitioners and researchers from these menial tasks. Many recent advances focus on automatically searching neural network architectures. One of the first attempts [37, 36] utilized reinforcement learning to train a controller representing a policy that generates a sequence of symbols describing the network architecture. The generation of a neural architecture was formulated as the controller’s action, whose space is identical to the architecture search space. An alternative to reinforcement learning is evolutionary algorithms, which evolve the topology of architectures by mutating the best architectures found so far [24, 32, 25]. There also exist surrogate-model-based search methods that utilize sequential model-based optimization for parameter optimization. However, all the above methods require massive computation during the search, typically thousands of GPU days. Recent efforts such as [19, 20, 23] utilized several techniques to reduce the search cost: introducing a real-valued architecture parameter jointly trained with the weight parameters, embedding architectures into a latent space and performing optimization before decoding, or sharing architecture weights among sampled models instead of training each of them individually. Several methods attempt to automatically search for architectures with fast inference speed, either explicitly taking the inference latency as a constraint or implicitly encoding the topology information. Note that the auto-augmentation search of Cubuk et al. is guided by a similar controller, whose training is time consuming. Our auto-augmentation strategy is much more efficient and economical compared to these methods.
In this section, we first present the formulation of hyper-parameter optimization for auto-augmentation strategy. Then we introduce the framework of solving the optimization in an online manner. Finally we detail the search space and training pipeline. The pipeline of our OHL-Auto-Aug is shown in Figure 2.
3.1 Problem Formulation
The purpose of an auto-augmentation strategy is to automatically find a set of augmentation operations, performed on the training data, that improves model generalization on the given dataset. In this paper, we formulate the augmentation strategy as $p_{\theta}$, a probability distribution over augmentation operations. Suppose we have $K$ candidate data augmentation operations $\{\mathcal{O}_1, \dots, \mathcal{O}_K\}$, each of which is selected with probability $p_{\theta}(\mathcal{O}_k)$. Given a network model parameterized by $w$, a train dataset $D_{train}$, and a validation dataset $D_{val}$, the purpose of data augmentation is to maximize the validation accuracy with respect to $\theta$, while the weights associated with the model are obtained by minimizing the training loss:

$$\max_{\theta} \; \mathrm{Acc}\big(w^{*}(\theta), D_{val}\big), \qquad \text{s.t.} \quad w^{*}(\theta) = \arg\min_{w} \; \mathcal{L}\big(w; \theta, D_{train}\big), \tag{1}$$

where $\mathcal{L}$ denotes the loss function and the input training data are augmented by the selected candidate operations.
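As a concrete toy illustration of this parameterization, the sketch below draws a per-image operation from a categorical distribution $p_{\theta}$. The operation names and the softmax parameterization of $\theta$ are illustrative assumptions, not the paper's exact implementation (the real search space pairs two elements per operation, see Section 3.3).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical candidate operations (illustrative names only).
ops = ["identity", "flip_lr", "rotate", "color_invert"]

# theta are unconstrained logits; a softmax turns them into the
# selection probabilities p_theta(O_k).
theta = np.zeros(len(ops))
probs = np.exp(theta) / np.exp(theta).sum()

# Each training image draws its own augmentation operation.
sampled_ops = rng.choice(ops, size=8, p=probs)
```

With all-zero logits the distribution is uniform; training shifts probability mass toward operations that yield higher validation accuracy.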
This is a bilevel optimization problem, where the augmentation distribution parameters $\theta$ are regarded as hyper-parameters. At the outer level we look for an augmentation distribution parameter $\theta$ under which we can obtain the best performing model $w^{*}(\theta)$, where $w^{*}(\theta)$ is the solution of the inner-level problem. Previous state-of-the-art methods resort to sampling augmentation strategies with a surrogate model and then solving the inner optimization problem exactly for each sampled strategy, which raises the time-consumption issue. To tackle this problem, we propose to train the augmentation distribution parameters alongside the network weights, getting rid of training thousands of networks from scratch.
3.2 Online Optimization framework
Inspired by forward hyper-gradient methods, we propose an online optimization framework that trains the hyper-parameters and the network weights simultaneously. To clarify the notation, we use $t$ to index the update steps of the outer loop and $i$ to index the update iterations of the inner loop. Between two adjacent outer optimization updates, the inner loop runs for $T$ steps, each of which trains the network weights on a batch of $B$ images. In this section, we focus on the process of the $t$-th period of the outer loop.
For each image, an augmentation operation is sampled according to the current augmentation policy distribution $p_{\theta_t}$. We use the trajectory $\tau$ to denote all the augmentation operations sampled in the $t$-th period. For the iterations of the inner loop, we have

$$w^{i+1}_{t} = w^{i}_{t} - \eta_{w} \, \nabla_{w} \mathcal{L}\big(\mathcal{O}^{i}, w^{i}_{t}, \mathcal{B}^{i}\big), \quad i = 0, \dots, T-1, \tag{2}$$

where $\eta_{w}$ is the learning rate for the network parameters and $\mathcal{L}$ denotes the batched loss, which takes three inputs: the augmentation operations $\mathcal{O}^{i}$ for the $i$-th batch, the current inner-loop parameters $w^{i}_{t}$, and a mini-batch of data $\mathcal{B}^{i}$, and returns the average loss on the $i$-th batch.
As illustrated in Equation 2, $w^{T}_{t}$ is affected by the trajectory $\tau$. Since $w^{T}_{t}$ is only influenced by the operations in the trajectory $\tau$, $w^{T}_{t}$ should be regarded as a random variable with probability $p_{\theta_t}(\tau)$, where $p_{\theta_t}(\tau)$ denotes the probability of $\tau$ being sampled under $p_{\theta_t}$, calculated as $p_{\theta_t}(\tau) = \prod_{i} p_{\theta_t}(\mathcal{O}^{i})$. We update the augmentation distribution parameters by taking one step of optimization for the outer objective,

$$\theta_{t+1} = \theta_{t} + \eta_{\theta} \, \nabla_{\theta} \, \mathbb{E}_{\tau \sim p_{\theta_t}}\Big[\mathrm{Acc}\big(w^{T}_{t}(\tau), D_{val}\big)\Big], \tag{3}$$

where $\eta_{\theta}$ is the learning rate for the augmentation distribution parameters. As the outer objective is a maximization, the update is ‘gradient ascent’ instead of ‘gradient descent’. The above process continues iteratively until the network model converges.
The motivation of this online framework is that we approximate $w^{*}(\theta)$ using only $T$ steps of training, instead of completely solving the inner optimization in Equation 1 until convergence. At the outer level, we look for the augmentation distribution that trains the network to a high validation accuracy given the $T$-step optimized network weights. A similar approximation has been exploited in meta-learning and neural architecture search. Although convergence is not guaranteed, in practice the optimization reaches a fixed point, as our experiments demonstrate and as has also been observed in prior work.
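To make the inner/outer alternation concrete, here is a minimal runnable sketch with toy stand-ins: a scalar "network" trained by $T$ inner SGD steps, and a single-sample REINFORCE ascent step on the logits $\theta$. The objectives, values, and two-operation search space are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5                       # inner SGD steps per outer period
eta_w, eta_th = 0.1, 0.5    # learning rates for w and theta
theta = np.zeros(2)         # logits over two toy "operations"
w = 5.0                     # scalar stand-in for the network weights

def train_grad(w, op):
    # toy inner loss: op 0 pulls w toward 0, op 1 toward 1
    return 2.0 * (w - float(op))

def val_score(w):
    # toy outer objective: higher when w is near 0
    return -abs(w)

for t in range(3):                                # a few outer periods
    probs = np.exp(theta) / np.exp(theta).sum()
    traj = rng.choice(2, size=T, p=probs)         # sample a trajectory tau
    for op in traj:                               # inner loop: T SGD steps
        w -= eta_w * train_grad(w, op)
    # outer loop: one REINFORCE ascent step (single-trajectory estimate)
    counts = np.bincount(traj, minlength=2)
    grad_log_p = counts - T * probs               # d log p(tau) / d theta
    theta = theta + eta_th * val_score(w) * grad_log_p
```

The key structural point is that $\theta$ is updated once per $T$ weight updates, so the policy search rides along with a single training run instead of requiring thousands of runs from scratch.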
It is nontrivial to calculate the gradient of validation accuracy with respect to $\theta$ in Equation 3, mainly due to two facts: 1. the validation accuracy is non-differentiable with respect to $\theta$; 2. analytically calculating the integral over $\tau$ is intractable. To address these two issues, we utilize REINFORCE to approximate the gradient in Equation 3 by Monte-Carlo sampling. This is achieved by training $N$ networks in parallel for the inner loop, which are treated as $N$ sampled trajectories. We calculate the average gradient over them based on the REINFORCE algorithm:

$$\nabla_{\theta} \, \mathbb{E}_{\tau}\Big[\mathrm{Acc}\big(w^{T}_{t}(\tau), D_{val}\big)\Big] \approx \frac{1}{N} \sum_{n=1}^{N} \mathrm{Acc}\big(w^{T}_{t}(\tau_{n}), D_{val}\big) \, \nabla_{\theta} \log p_{\theta_t}(\tau_{n}), \tag{4}$$

where $\tau_{n}$ is the $n$-th trajectory. Substituting Equation 4 into Equation 3,

$$\theta_{t+1} = \theta_{t} + \eta_{\theta} \, \frac{1}{N} \sum_{n=1}^{N} \mathrm{Acc}\big(w^{T}_{t}(\tau_{n}), D_{val}\big) \, \nabla_{\theta} \log p_{\theta_t}(\tau_{n}). \tag{5}$$
In practice, we also utilize the baseline trick to reduce the variance of the estimator:

$$\theta_{t+1} = \theta_{t} + \eta_{\theta} \, \frac{1}{N} \sum_{n=1}^{N} \mathrm{norm}\Big(\mathrm{Acc}\big(w^{T}_{t}(\tau_{n}), D_{val}\big)\Big) \, \nabla_{\theta} \log p_{\theta_t}(\tau_{n}), \tag{6}$$

where $\mathrm{norm}(\cdot)$ denotes the function that normalizes the accuracies returned by the sampled trajectories to zero mean and unit variance.
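The normalized REINFORCE estimate can be sketched in a few lines; the trajectories and accuracies below are random stand-ins for the $N$ parallel training runs, and the softmax-logit form of $\nabla_{\theta} \log p_{\theta}(\tau)$ is an assumed parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, T = 8, 4, 10           # trajectories, candidate operations, inner steps
theta = np.zeros(K)
probs = np.exp(theta) / np.exp(theta).sum()

# Stand-ins for the N parallel runs: sampled trajectories and the
# validation accuracies their networks reached (hypothetical values).
trajs = [rng.choice(K, size=T, p=probs) for _ in range(N)]
accs = rng.uniform(0.85, 0.95, size=N)

# Baseline trick: normalize accuracies to zero mean and unit variance.
norm_accs = (accs - accs.mean()) / (accs.std() + 1e-8)

# Monte-Carlo REINFORCE estimate of the outer gradient.
grad = np.zeros(K)
for tau, a in zip(trajs, norm_accs):
    counts = np.bincount(tau, minlength=K)
    grad += a * (counts - T * probs)     # a * grad_theta log p_theta(tau)
grad /= N

eta_th = 0.1
theta = theta + eta_th * grad            # gradient ascent step
```

Because the normalized rewards are zero-mean, trajectories with below-average accuracy push their operations' probabilities down rather than merely pushing all probabilities up, which is the variance-reduction effect of the baseline.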
3.3 Search Space and Training Pipeline
In this paper, we formulate the auto-augmentation strategy as distribution optimization. For each input training image, we sample an augmentation operation from the search space and apply it to the image. Each augmentation operation consists of two augmentation elements. The candidate augmentation elements are listed in Table 1.
|Element name||Range of magnitude|
A single operation is defined as the combination of two elements, so the search space contains roughly the square of the number of elements as operations; some operations may have the same effect on the data. In our work, the distribution over augmentation operations is formulated as a joint probability of the two elements,

$$p_{\theta}(\mathcal{O}) = p_{\theta}\big(e^{(1)}, e^{(2)}\big),$$

where $\theta$ is a matrix of parameters over element pairs.
The whole training pipeline is described in Algorithm 1.
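The joint parameterization over element pairs can be sketched as a softmax over a logit matrix; the element count and the softmax form are illustrative assumptions. The row-sum marginal mirrors the analysis of the first element in Section 4.

```python
import numpy as np

E = 6                                    # hypothetical number of elements
rng = np.random.default_rng(0)
theta = rng.normal(size=(E, E))          # one logit per (e1, e2) pair

# Joint probability of an operation (e1, e2): softmax over the whole matrix.
joint = np.exp(theta) / np.exp(theta).sum()

# Marginal of the first element: sum each row of the joint distribution.
marginal_e1 = joint.sum(axis=1)
```

Since the joint already sums to one, the row sums directly give a normalized marginal distribution over the first element.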
4.1 Implementation Details
|Model||Baseline||Cutout||Auto-Augment||OHL-Auto-Aug|
|AmoebaNet-B(6, 128) ||
We perform our OHL-Auto-Aug on two classification datasets: CIFAR-10 and ImageNet. Here we describe the experimental details that we used to search the augmentation policies.
CIFAR-10 The CIFAR-10 dataset consists of natural images with resolution 32×32. There are 60,000 images in 10 classes, with 6,000 images per class. The train and test sets contain 50,000 and 10,000 images respectively. For our OHL-Auto-Aug on CIFAR-10, we use a validation set of 5,000 images, randomly split from the training set, to calculate the validation accuracy used to train the augmentation distribution parameters.
In the training phase, the basic pre-processing follows the convention for state-of-the-art CIFAR-10 models: standardizing the data, random horizontal flips with 50% probability, zero-padding and random crops, and finally Cutout with 16×16 pixels. Our OHL-Auto-Aug strategy is applied in addition to this basic pre-processing: for each training input, we first perform the basic pre-processing, then our OHL-Auto-Aug strategy, and finally Cutout.
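The ordering described above can be written as a simple composition. The functions below are illustrative stubs (not real image operations) that only record the order in which the stages run.

```python
def standardize(x): return x + ["standardize"]
def random_flip(x): return x + ["flip_50pct"]
def pad_and_crop(x): return x + ["pad_crop"]
def ohl_op(x): return x + ["ohl_sampled_op"]   # drawn from p_theta per image
def cutout_16(x): return x + ["cutout_16x16"]

def augment(img):
    # basic pre-processing, then the OHL-sampled operation, then Cutout
    for step in (standardize, random_flip, pad_and_crop, ohl_op, cutout_16):
        img = step(img)
    return img

applied = augment([])
```

Keeping Cutout as the final stage means the sampled OHL operation always acts on an intact (un-occluded) image.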
All the networks are trained from scratch on CIFAR-10. For the training of the network parameters, we use a mini-batch size of 256 and a standard SGD optimizer. The momentum rate is set to 0.9 and the weight decay to 0.0005. The cosine learning rate scheme is utilized with an initial learning rate of 0.2. The total number of training epochs is set to 300. For AmoebaNet-B, there are several hyper-parameter modifications: the initial learning rate is set to 0.024, and we use an additional learning rate warmup stage before training, in which the learning rate is linearly increased from 0.005 to the initial learning rate 0.024 over 40 epochs.
ImageNet The ImageNet dataset contains 1.28 million training images and 50,000 validation images from 1000 classes. As in our experiments on CIFAR-10, we set aside an additional validation set of 50,000 images split from the training dataset.
For basic data pre-processing, we follow the standard practice and perform a random-size crop to 224×224 and random horizontal flipping. The standard mean channel subtraction is adopted to normalize the input images for both training and testing. Our OHL-Auto-Aug operations are performed after this basic data pre-processing.
For the training on ImageNet, we use synchronous SGD with a Nesterov momentum of 0.9 and a weight decay of 1e-4. The mini-batch size is set to 2048 and the base learning rate to 0.8. The cosine learning rate scheme is utilized with a warmup stage of 2 epochs. The total number of training epochs is 150.
Augmentation Distribution Parameters For the training of the augmentation policy parameters, we use the Adam optimizer. On the CIFAR-10 dataset, the number of trajectory samples $N$ is set to 8; this number is reduced to 4 for ImageNet due to the large computation cost. Having finished the outer update of the distribution parameters, we broadcast the network parameters with the best validation accuracy to the other sampled trajectories using multiprocessing utilities.
CIFAR-10 Results: On CIFAR-10, we evaluate our OHL-Auto-Aug on four popular network architectures: ResNet, WideResNet, DualPathNet and AmoebaNet-B. For ResNet, an 18-layer ResNet with 11.17M parameters is used. For WideResNet, we use a 28-layer WideResNet with a widening factor of 10, which has 36.48M parameters. For Dual Path Network, we choose DualPathNet-92 with 34.23M parameters. AmoebaNet-B is an architecture found automatically by the regularized evolution approach; we use the AmoebaNet-B (6, 128) setting, which has 33.4M parameters, for a fair comparison. WideResNet-28-10, DualPathNet-92 and AmoebaNet-B are relatively heavy architectures for CIFAR-10 classification.
We report the test set error rates of our OHL-Auto-Aug for different network architectures in Table 2. All the experiments follow the same setting described in Section 4.1. We compare against standard augmentation (Baseline), standard augmentation with Cutout (Cutout), and the searched augmentation strategy of Auto-Augment. For ResNet-18 and DualPathNet-92, our OHL-Auto-Aug achieves a significant improvement over both Cutout and Auto-Augment, and the error rates are clearly reduced relative to the Baseline models for ResNet-18, DualPathNet-92 and WideResNet-28-10. For AmoebaNet-B, although we have not reproduced the accuracy reported in the original paper, our OHL-Auto-Aug still achieves a 1.09% error rate reduction compared to the Baseline model we implemented. For all the models, our OHL-Auto-Aug exhibits a significant error-rate drop compared with Cutout.
We also find that our OHL-Auto-Aug outperforms Auto-Augment on all the networks except AmoebaNet-B. We believe the main reason is that we could not train the Baseline AmoebaNet-B to an accuracy comparable with the original paper. By applying our OHL-Auto-Aug, the accuracy gap is narrowed from 0.42% (comparing the reported Baseline and our implementation) to 0.14% (comparing OHL-Auto-Aug with Auto-Augment), which demonstrates the effectiveness of our approach.
ImageNet Results: On the ImageNet dataset, we evaluate our method on two networks, ResNet-50 and SE-ResNeXt101, to show the effectiveness of our augmentation strategy search. ResNet-50 contains 25.58M parameters and SE-ResNeXt101 contains 48.96M parameters; we regard ResNet-50 as a medium architecture and SE-ResNeXt101 as a heavy one to illustrate the generalization of our OHL-Auto-Aug.
|Method||ResNet-50 ||SE-ResNeXt-101 |
|Auto-Augment ||22.37/6.18||20.03/5.23 (our impl.)|
|No Need to Retrain|
In Table 3, we show the Top-1/Top-5 error rates on the validation set. We note that another augmentation strategy, mixup, which trains the network on convex combinations of pairs of examples and their labels, also achieves performance improvements on ImageNet; we report the results of mixup together with Auto-Augment for comparison. All the experiments are conducted following the same setting described in Section 4.1. Since the original Auto-Augment paper does not perform experiments on SE-ResNeXt101, we train it with the searched augmentation policy provided by the authors. As can be seen from the table, OHL-Auto-Aug achieves state-of-the-art accuracy on both models. For the medium model ResNet-50, our OHL-Auto-Aug improves the accuracy by 3.7 percentage points over the Baseline and 1.37 points over Auto-Augment, which is a significant improvement. For the heavy model SE-ResNeXt101, our OHL-Auto-Aug still boosts the performance by 1.4% even with a very high baseline. Both experiments demonstrate the effectiveness of our OHL-Auto-Aug.
Search Cost: The principal motivation of our OHL-Auto-Aug is to improve the efficiency of auto-augmentation search. To demonstrate this, we report the search cost of our method together with Auto-Augment in Table 4. For a fair comparison, we convert the total training iterations to a common batch size of 1024. For example, Auto-Augment performs its search on CIFAR-10 by sampling 15,000 child models, each trained on a ‘reduced CIFAR-10’ of 4,000 randomly chosen images for 120 epochs, which amounts to 15,000 × 4,000 × 120 / 1024 ≈ 7.0 × 10^6 iterations. Similarly, on ImageNet Auto-Augment trains its sampled child models each on a ‘reduced ImageNet’ of 6,000 images for 200 epochs, which equals roughly 1.8 × 10^7 iterations. Our OHL-Auto-Aug trains on the whole CIFAR-10 for 300 epochs with 8 trajectory samples, which equals 50,000 × 300 × 8 / 1024 ≈ 1.2 × 10^5 iterations; for ImageNet, the total is roughly 7.5 × 10^5 iterations.
As illustrated in Table 4, even though it trains on the whole dataset, our OHL-Auto-Aug is 60× faster on CIFAR-10 and 24× faster on ImageNet than Auto-Augment. Instead of training on a reduced dataset, this high efficiency enables our OHL-Auto-Aug to train on the whole dataset, a guarantee of better performance. It also ensures our algorithm can easily be applied to large datasets such as ImageNet without any bias in the input data. Our OHL-Auto-Aug additionally gets rid of the need for retraining from scratch, further saving computational resources.
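The iteration-count conversion is plain arithmetic; this sketch reproduces it, assuming Auto-Augment uses the same 15,000 child models on ImageNet as on CIFAR-10 (the text states the count explicitly only for CIFAR-10).

```python
BATCH = 1024  # common batch size used for the conversion

# Auto-Augment: child models trained on reduced datasets
aa_cifar = 15_000 * 4_000 * 120 / BATCH        # ≈ 7.0e6 iterations
aa_imagenet = 15_000 * 6_000 * 200 / BATCH     # ≈ 1.8e7 (assumed sample count)

# OHL-Auto-Aug: full datasets, all trajectories trained within one run
ohl_cifar = 50_000 * 300 * 8 / BATCH           # ≈ 1.2e5 iterations
ohl_imagenet = 1_280_000 * 150 * 4 / BATCH     # ≈ 7.5e5 iterations

speedup_cifar = aa_cifar / ohl_cifar           # -> 60.0
speedup_imagenet = aa_imagenet / ohl_imagenet  # ≈ 23.4, i.e. roughly 24x
```

Note that the CIFAR-10 ratio comes out to exactly 60 under these counts, matching the reported speedup.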
In this section, we present several analyses to illustrate the effectiveness of our proposed OHL-Auto-Aug.
Comparisons with different augmentation distribution learning rates: We run our OHL-Auto-Aug with different augmentation distribution learning rates $\eta_{\theta}$. All the experiments use ResNet-18 on CIFAR-10 following the same setting as described in Section 4.1 except for $\eta_{\theta}$. The final accuracies are illustrated in Figure 3, including the $\eta_{\theta}$ chosen by our OHL-Auto-Aug.
Comparing the result of $\eta_{\theta} = 0$, i.e., uniformly sampling from the whole search space, with the other results, we find that updating the augmentation distribution produces better performing networks than uniform sampling. As can be observed, too large an $\eta_{\theta}$ makes the distribution parameters difficult to converge, while too small an $\eta_{\theta}$ slows down the convergence process. The chosen $\eta_{\theta}$ is a trade-off between convergence speed and performance.
Comparisons with different numbers of trajectory samples: We further analyze the sensitivity to the number of trajectory samples $N$. We run experiments with ResNet-18 on CIFAR-10 with different numbers of trajectories. All the experiments follow the same setting as described in Section 4.1 except for $N$. The final accuracies are illustrated in Figure 4.
In principle, a larger number of trajectories benefits the training of the augmentation distribution parameters, due to the more accurate gradient calculated by Equation 4. Meanwhile, a larger number of trajectories increases the memory requirement and the computational cost. As observed in Figure 4, increasing the number of trajectories from 1 to 8 steadily improves the performance of the network, while increasing it further brings only minor gains. We therefore select $N = 8$ on CIFAR-10 as a trade-off between computational cost and performance.
Analysis of augmentation distribution parameters:
We visualize the augmentation distribution parameters over the whole training stage. We calculate the marginal distribution of the first element of our augmentation operations, computed by summing up each row vector of $\theta$ followed by normalization. The results are illustrated in Figure 5(a) and Figure 5(b) for CIFAR-10 and ImageNet respectively. On CIFAR-10, we choose ResNet-18 and show the results every 5 epochs; on ImageNet we choose ResNet-50 and show the results every 2 epochs. For both settings, we find that the augmentation distribution parameters converge during training: some augmentation operations are discarded, while the probabilities of others increase. We also notice that the distributions of augmentation operations differ across datasets. For instance, on CIFAR-10 the 36th operation, ‘Color Invert’, is discarded, while on ImageNet this operation is preserved with high probability. This further demonstrates that our OHL-Auto-Aug can easily be applied to different datasets to search for different augmentation policies.
In this paper, we have proposed an online hyper-parameter learning method for auto-augmentation, which formulates the augmentation policy as a parameterized probability distribution. Benefiting from the proposed bilevel framework, our OHL-Auto-Aug is able to optimize the distribution parameters iteratively along with the network parameters in an online manner. Experimental results illustrate that our proposed OHL-Auto-Aug achieves remarkable auto-augmentation search efficiency, while establishing significant accuracy improvements over baseline models, on large-scale datasets including CIFAR-10 and ImageNet without any data reduction.
-  A. Antoniou, A. Storkey, and H. Edwards. Data augmentation generative adversarial networks. arXiv preprint arXiv:1711.04340, 2017.
-  J. Bergstra and Y. Bengio. Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13(Feb):281–305, 2012.
-  O. Bousquet, S. Gelly, K. Kurach, O. Teytaud, and D. Vincent. Critical hyper-parameters: No random, no cry. arXiv preprint arXiv:1706.03200, 2017.
-  Y. Chen, J. Li, H. Xiao, X. Jin, S. Yan, and J. Feng. Dual path networks. In NIPS, pages 4467–4475, 2017.
-  B. Colson, P. Marcotte, and G. Savard. An overview of bilevel optimization. Annals of operations research, 153(1):235–256, 2007.
-  E. D. Cubuk, B. Zoph, D. Mane, V. Vasudevan, and Q. V. Le. Autoaugment: Learning augmentation policies from data. arXiv preprint arXiv:1805.09501, 2018.
-  J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, pages 248–255, 2009.
-  T. DeVries and G. W. Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017.
-  L. Franceschi, M. Donini, P. Frasconi, and M. Pontil. Forward and reverse gradient-based hyperparameter optimization. In Proceedings of the 34th International Conference on Machine Learning, pages 1165–1173. JMLR.org, 2017.
-  P. Goyal, P. Dollár, R. Girshick, P. Noordhuis, L. Wesolowski, A. Kyrola, A. Tulloch, Y. Jia, and K. He. Accurate, large minibatch sgd: Training imagenet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.
-  M. Guo, Z. Zhong, W. Wu, D. Lin, and J. Yan. Irlas: Inverse reinforcement learning for architecture search. arXiv preprint arXiv:1812.05285, 2018.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, pages 770–778, 2016.
-  J. Hu, L. Shen, and G. Sun. Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7132–7141, 2018.
-  A. Krizhevsky, V. Nair, and G. Hinton. The cifar-10 dataset. Online: http://www.cs.toronto.edu/kriz/cifar.html, 2014.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
-  Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
-  J. Lemley, S. Bazrafkan, and P. Corcoran. Smart augmentation learning an optimal data augmentation strategy. IEEE Access, 5:5858–5869, 2017.
-  C. Liu, B. Zoph, M. Neumann, J. Shlens, W. Hua, L.-J. Li, L. Fei-Fei, A. Yuille, J. Huang, and K. Murphy. Progressive neural architecture search. In ECCV, September 2018.
-  H. Liu, K. Simonyan, and Y. Yang. Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055, 2018.
-  R. Luo, F. Tian, T. Qin, E. Chen, and T.-Y. Liu. Neural architecture optimization. In Advances in Neural Information Processing Systems, pages 7827–7838, 2018.
-  D. Maclaurin, D. Duvenaud, and R. Adams. Gradient-based hyperparameter optimization through reversible learning. In International Conference on Machine Learning, pages 2113–2122, 2015.
-  L. Perez and J. Wang. The effectiveness of data augmentation in image classification using deep learning. arXiv preprint arXiv:1712.04621, 2017.
-  H. Pham, M. Y. Guan, B. Zoph, Q. V. Le, and J. Dean. Efficient neural architecture search via parameter sharing. arXiv preprint arXiv:1802.03268, 2018.
-  E. Real, A. Aggarwal, Y. Huang, and Q. V. Le. Regularized evolution for image classifier architecture search. arXiv preprint arXiv:1802.01548, 2018.
-  S. Saxena and J. Verbeek. Convolutional neural fabrics. In NIPS, pages 4053–4061, 2016.
-  D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. Mastering the game of go with deep neural networks and tree search. nature, 529(7587):484, 2016.
-  J. Snoek, H. Larochelle, and R. P. Adams. Practical bayesian optimization of machine learning algorithms. In Advances in neural information processing systems, pages 2951–2959, 2012.
-  M. Tan, B. Chen, R. Pang, V. Vasudevan, and Q. V. Le. Mnasnet: Platform-aware neural architecture search for mobile. arXiv preprint arXiv:1807.11626, 2018.
-  T. Tran, T. Pham, G. Carneiro, L. Palmer, and I. Reid. A bayesian data augmentation approach for learning deep models. In Advances in Neural Information Processing Systems, pages 2797–2806, 2017.
-  R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256, 1992.
-  R. Wu, S. Yan, Y. Shan, Q. Dang, and G. Sun. Deep image: Scaling up image recognition. arXiv preprint arXiv:1501.02876, 2015.
-  L. Xie and A. L. Yuille. Genetic cnn. In ICCV, pages 1388–1397, 2017.
-  S. Zagoruyko and N. Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
-  H. Zhang, M. Cisse, Y. N. Dauphin, and D. Lopez-Paz. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412, 2017.
-  X. Zhu, Y. Liu, Z. Qin, and J. Li. Data augmentation in emotion classification using generative adversarial networks. arXiv preprint arXiv:1711.00648, 2017.
-  B. Zoph and Q. V. Le. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578, 2016.
-  B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le. Learning transferable architectures for scalable image recognition. arXiv preprint arXiv:1707.07012, 2(6), 2017.