Designing a suitable network architecture for a specific problem is a challenging task, and better-designed architectures usually lead to significant performance improvements. In recent years, neural architecture search (NAS) [39, 40, 2, 8, 21, 23, 28, 24] has demonstrated success in designing neural architectures automatically. Many architectures produced by NAS methods have achieved higher accuracy than manually designed ones in tasks such as image classification [5], semantic segmentation [3, 20] and object detection. NAS methods not only boost model performance, but also liberate human experts from tedious architecture-tweaking work.
So far, three basic frameworks have gained growing interest: evolutionary algorithm (EA)-based NAS [22, 28, 35], reinforcement learning (RL)-based NAS [39, 40, 27], and gradient-based NAS [23, 36, 7]. In both EA-based and RL-based approaches, the search procedure requires the validation accuracy of numerous architecture candidates, which is computationally expensive. For example, the reinforcement learning method [39, 40] trains and evaluates more than 20,000 neural networks across 500 GPUs over 4 days. Such approaches consume a large amount of computational resources, which is inefficient and often unaffordable.
To eliminate this deficiency, gradient-based NAS methods [23, 36, 7, 4] such as DARTS and GDAS have recently been presented. They construct a super network and relax the architecture representation by assigning continuous weights to the candidate operations. In DARTS, a computation cell is searched as the building block of the final architecture, and each cell is represented as a directed acyclic graph (DAG) consisting of an ordered sequence of N nodes. The discrete search space is relaxed into a continuous one, so that both network and architecture parameters can be optimized by gradient descent. DARTS achieved performance comparable to EA-based and RL-based methods while requiring a search cost of only a few GPU-days. To further accelerate the search procedure, GDAS samples one sub-graph according to the architecture weights in a differentiable way at each training iteration.
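As a concrete illustration of this continuous relaxation, the sketch below mixes a few toy scalar "operations" with softmax-weighted architecture parameters. The operation names and weights are hypothetical stand-ins, not taken from DARTS itself; in practice each operation is a neural-network layer acting on feature maps.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Toy candidate operations on an edge (hypothetical stand-ins).
candidate_ops = {
    "identity": lambda x: x,
    "zero":     lambda x: 0.0,
    "scale2x":  lambda x: 2.0 * x,   # stand-in for a parameterized op
}

def mixed_edge(x, alphas):
    """Relaxed edge: softmax-weighted sum of all candidate operations,
    so the architecture parameters `alphas` become differentiable."""
    weights = softmax(alphas)
    return sum(w * op(x) for w, op in zip(weights, candidate_ops.values()))

# With equal alphas, each operation contributes one third of its output:
# (3.0 + 0.0 + 6.0) / 3 = 3.0.
y = mixed_edge(3.0, [0.0, 0.0, 0.0])
```

Because the mixture is a smooth function of `alphas`, gradient descent can adjust how much each candidate operation contributes to the edge.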
Existing methods derive the target architecture by selecting candidate operations based on their architecture weights. However, architecture weights cannot accurately reflect the importance of each operation. To illustrate this issue, we construct stand-alone models for possible architectures in the search space and compare their obtained accuracy with the corresponding architecture weights; their correlation is plotted in Figure 1. We can see that the operation with the highest architecture weight does not achieve the best accuracy. Furthermore, the architecture weights of candidate operations are often close to each other, which makes it difficult to decide which candidate operation is optimal. Figure 2 illustrates the procedure of NAS.
Given the limitation of architecture weights, it is natural to ask the question: will we be able to improve architecture search performance if we apply a more effective indicator to guide the model search? To this end, we propose a simple yet effective solution to neural architecture search, termed as exploiting operation importance for effective neural architecture search (EoiNAS). The main idea of our method has two parts:
1) It is well-recognized that operation A is better than operation B if A reaches a higher validation accuracy with fewer training epochs during the search process. Since the training iterations and validation accuracy of each operation can be recorded during the search, we propose a new indicator based on this criterion to fully exploit the operation importance and guide the model search.
2) Based on this new indicator, we propose a gradual operation pruning strategy to further improve the search efficiency and accuracy. We denote the training of every k epochs as a step. In each step, we prune the most inferior operation according to the new indicator. This process continues until only one operation remains; this operation can be regarded as the best operation to derive the final architecture. Owing to the gradual operation pruning strategy, our super network exhibits fast convergence.
The effectiveness of EoiNAS is verified on the standard vision setting, i.e., searching on CIFAR-10, and evaluating on both CIFAR-10/100 and ImageNet datasets. We achieve state-of-the-art performance of 2.50% test error on CIFAR-10 using 3.4M parameters. When transferred to ImageNet, it achieves top-1/5 errors of 25.6%/8.3% respectively, comparable to the state-of-the-art performance under the mobile setting.
The remainder of this paper is organized as follows. In Section 2, we review related work on neural architecture search; Section 3 describes our search method. Experiments are presented in Section 4, and we conclude the paper in Section 5.
2 Related Work
With the rapid development of deep learning, significant performance gains have been brought to a wide range of computer vision problems, most of which are owed to manually designed network architectures [11, 15, 19, 33, 34, 10]. Recently, a new research field named neural architecture search (NAS) [39, 40, 2, 8, 21] has been attracting increasing attention. Its goal is to find automatic ways of designing neural architectures to replace conventional handcrafted ones. According to the heuristics used to explore the large architecture space, existing NAS approaches can be roughly divided into three categories: evolutionary algorithm-based approaches [22, 28, 35], reinforcement learning-based approaches [39, 40, 27] and gradient-based approaches [23, 36, 7].
Reinforcement learning-based NAS. A reinforcement learning-based approach was proposed by Zoph et al. [39, 40] for neural architecture search. They use a recurrent network as a controller to generate the model description of a child neural network designated for a given task. The resulting architecture (NASNet) improved over the existing hand-crafted network models of its time.
Evolutionary algorithm-based NAS. An alternative search technique was proposed by Real et al., where an evolutionary (genetic) algorithm is used to find a neural architecture tailored for a given task. The evolved neural network (AmoebaNet) further improved performance over NASNet. Although these works achieved state-of-the-art results on various classification tasks, their main disadvantage is the large amount of computational resources they demand.
Gradient-based NAS. Contrary to treating architecture search as a black-box optimization problem, gradient-based neural architecture search methods [23, 36, 7] utilize the gradients obtained during training to optimize the neural architecture. DARTS relaxed the search space to be continuous, so that the architecture can be optimized with respect to its validation-set performance by gradient descent. Gradient-based approaches thus successfully accelerate the architecture search procedure: only several GPU-days are required. Because DARTS optimizes the entire super network during the search process, it may suffer from a discrepancy between the continuous architecture encoding and the derived discrete architecture. GDAS suggested an alternative method to alleviate this discrepancy: it treats the search problem as sampling from a distribution over architectures, where the distribution itself is learned in a continuous way. The distribution is expressed via slack softened one-hot variables that multiply the operations and make the sampling procedure differentiable. SNAS applied a similar technique, constraining the architecture parameters to be one-hot to tackle the inconsistency between the optimization objectives of the search and evaluation scenarios. To bridge the depth gap between search and evaluation, P-DARTS divides the search process into multiple stages and progressively increases the network depth at the end of each stage. In addition, MdeNAS proposes a multinomial distribution learning method for extremely efficient NAS, which treats the search space as a joint multinomial distribution and optimizes the distribution to have a high expected performance.
We first describe our search space and its continuous relaxation in general form in Section 3.1, where the computation procedure of an architecture is represented as a directed acyclic graph. We then propose a new indicator to fully exploit the importance of each operation in Section 3.2. Finally, in Section 3.3 we design a gradual operation pruning strategy that makes the super network exhibit fast convergence and high training accuracy.
3.1 Search Space and Continuous Relaxation
In this work, we leverage GDAS as our baseline framework. Our goal is to search for a robust cell and apply it to a network of L cells. As shown in Figure 3, a cell is defined as a directed acyclic graph (DAG) of N nodes $\{x_1, x_2, \dots, x_N\}$, where each node is a network layer, i.e., it performs a specific mathematical function. We denote the operation space as $\mathcal{O}$, in which each element represents a candidate operation $o(\cdot)$. An edge $(i, j)$ represents the information flow connecting node $i$ and node $j$; it consists of the set of candidate operations weighted by the architecture weights $\alpha^{(i,j)}$, and is thus formulated as:

$$f_{i,j}(x_i) = \sum_{o \in \mathcal{O}} \frac{\exp(\alpha^{(i,j)}_o)}{\sum_{o' \in \mathcal{O}} \exp(\alpha^{(i,j)}_{o'})} \, o(x_i), \qquad (1)$$

where $\alpha^{(i,j)}_o$ is the $o$-th element of an $|\mathcal{O}|$-dimensional learnable vector $\alpha^{(i,j)}$, which encodes the sampling distribution of the function between node $i$ and node $j$, as we will discuss below. Intuitively, a well-learned $\alpha^{(i,j)}$ represents the relative importance of operation $o$ for transforming the feature map $x_i$. Similar to GDAS, between node $i$ and node $j$, we sample one operation from $\mathcal{O}$ according to a discrete probability distribution characterized by

$$p^{(i,j)}_o = \frac{\exp(\alpha^{(i,j)}_o)}{\sum_{o' \in \mathcal{O}} \exp(\alpha^{(i,j)}_{o'})}. \qquad (2)$$

During the search, we calculate each node in a cell as:

$$x_j = \sum_{i < j} \tilde{o}^{(i,j)}(x_i), \qquad (3)$$

where $\tilde{o}^{(i,j)}$ is sampled from $\mathcal{O}$.
Since the operation is sampled from a discrete probability distribution, we cannot back-propagate gradients to optimize $\alpha$. To allow back-propagation, we use the Gumbel-Max trick [10, 25] and the softmax function to re-formulate Eq. (3) as Eqs. (4)-(5), which provide an efficient way to draw samples from a discrete probability distribution in a differentiable way:

$$x_j = \sum_{i<j} \sum_{o \in \mathcal{O}} h^{(i,j)}_o \, o(x_i), \qquad (4)$$

$$h^{(i,j)}_o = \frac{\exp\big((\alpha^{(i,j)}_o + g_o)/T\big)}{\sum_{o' \in \mathcal{O}} \exp\big((\alpha^{(i,j)}_{o'} + g_{o'})/T\big)}. \qquad (5)$$
Here $g_o$ are i.i.d. samples drawn from Gumbel(0, 1); $o(\cdot)$ denotes the $o$-th function in $\mathcal{O}$; $\alpha^{(i,j)}_o$ is the $o$-th element of $\alpha^{(i,j)}$; $h^{(i,j)}_o$ is the weight of $o(\cdot)$ in the transformation function between node $i$ and node $j$; and $T$ is the temperature parameter, which controls the Gumbel-Softmax distribution. As $T$ approaches zero, the Gumbel-Softmax distribution becomes equivalent to the discrete probability distribution. The temperature parameter is annealed from 5.0 to 0.0 during our search.
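The Gumbel-Softmax computation above can be sketched in a few lines of plain Python (an illustrative scalar version; in the actual super network this runs on tensors and the resulting weights multiply the candidate operations):

```python
import math
import random

def gumbel_softmax(alphas, T):
    """Soft one-hot weights: perturb each architecture weight with
    Gumbel(0, 1) noise, then apply a temperature-scaled softmax."""
    eps = 1e-20  # guards against log(0)
    gumbels = [-math.log(-math.log(random.random() + eps) + eps)
               for _ in alphas]
    logits = [(a + g) / T for a, g in zip(alphas, gumbels)]
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# High temperature -> nearly uniform mixing; as T approaches zero the
# weights approach a one-hot sample, i.e. a single operation is
# effectively selected on the edge.
soft = gumbel_softmax([1.0, 0.5, 0.1], T=5.0)
hard = gumbel_softmax([1.0, 0.5, 0.1], T=0.01)
```

Because the weights are produced by a softmax rather than a hard argmax, gradients flow back to the architecture parameters even though one sub-graph is sampled per iteration.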
Our candidate operation set $\mathcal{O}$ contains the following 8 operations: (1) identity, (2) zero, (3) 3×3 separable convolution, (4) 3×3 dilated separable convolution, (5) 5×5 separable convolution, (6) 5×5 dilated separable convolution, (7) 3×3 average pooling, (8) 3×3 max pooling. We search for two kinds of cells, i.e., the normal cell and the reduction cell. When searching the normal cell, each operation in $\mathcal{O}$ has a stride of 1. For the reduction cell, operations on the 2 input nodes have a stride of 2. Once we discover the best normal cell and reduction cell, we stack copies of these cells to make up the final neural network.
3.2 Operation Importance Indicator
Architecture Weights Deviation. In previous algorithms, operation importance is ranked by the architecture weights $\alpha$, which are supposed to represent the relative importance of a candidate operation versus the others. When the search process is over, these methods select the most important operation and prune the other, inferior operations according to the values of the architecture weights.
However, architecture weights cannot accurately reflect the importance of each operation. As shown in Figure 4, the dark dashed curve and the red solid curve indicate the architecture weight distribution in the ideal and real cases, respectively. In the ideal case, the deviation of the architecture weights is large enough that we can clearly judge which operation is more important. However, this requirement may not always hold, which can lead to unexpected results. In Figure 5, statistical information collected from the cells of DARTS and GDAS demonstrates this analysis. The small circles show each observation in the weight distribution, and the solid curves denote the kernel density estimate (KDE), a non-parametric way to estimate the probability density function of a random variable. As Figure 5 shows, a large proportion of operations have architecture weights distributed on a small interval, which makes it difficult to distinguish the important operations from the others. Figure 2 (c) also illustrates this issue: the ranking by architecture weight differs from the true ranking.
The Proposed Indicator. It is well-recognized that operation A is better than operation B if A reaches a higher validation accuracy with fewer training iterations during the search process. Therefore, for each operation, the ratio of validation accuracy to training iterations can be used to measure the operation importance. This ratio is represented by

$$P^{(i,j)}_o = \frac{Acc^{(i,j)}_o}{Iter^{(i,j)}_o}, \qquad (6)$$

where $Acc^{(i,j)}_o$ and $Iter^{(i,j)}_o$ are the validation accuracy and the number of training iterations of operation $o$ on edge $(i, j)$, respectively. The values of these accuracy parameters might also be close to each other, which would affect the importance judgement; in this case, we consider the operation with the higher architecture weight to be more important. Therefore, we combine the accuracy parameters with the architecture weights to obtain an effective indicator $I$, as in Eq. (7), which can fully exploit the importance of each operation:

$$I^{(i,j)}_o = \lambda P^{(i,j)}_o + (1 - \lambda)\, \alpha^{(i,j)}_o, \qquad (7)$$

where $\alpha^{(i,j)}_o$ is the architecture weight of the $o$-th operation between node $i$ and node $j$, and $\lambda$ is a parameter controlling the balance between the two parts, which is set to 0.5 in this work.
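Under these definitions, the indicator can be computed per operation as sketched below. The statistics (accuracy in percent, iteration counts, architecture weights) are hypothetical illustrative values, and the exact scaling of the two terms is an assumption:

```python
def operation_importance(acc, iters, alpha, lam=0.5):
    """Sketch of Eqs. (6)-(7): I = lam * (acc / iters) + (1 - lam) * alpha.
    acc: validation accuracy (%) of the operation, iters: its training
    iterations, alpha: its architecture weight on the edge."""
    p = acc / max(iters, 1)          # Eq. (6): accuracy per iteration
    return lam * p + (1.0 - lam) * alpha

# Hypothetical per-operation statistics on a single edge:
# (validation accuracy %, training iterations, architecture weight)
stats = {
    "sep_conv_3x3": (80.0, 90, 0.30),
    "skip_connect": (75.0, 100, 0.35),
    "max_pool_3x3": (60.0, 100, 0.20),
}
ranked = sorted(stats, key=lambda o: operation_importance(*stats[o]),
                reverse=True)
```

Note how `sep_conv_3x3` outranks `skip_connect` despite its lower architecture weight, because it reached higher accuracy in fewer iterations; ranking by $\alpha$ alone would invert that decision.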
Compared to previous methods [23, 7] that judge operation importance directly by the architecture weights, our proposed indicator effectively reflects the operation importance, which helps to select the optimal operation and thus achieve higher accuracy. Applying this effective indicator can improve architecture search performance significantly.
Based on this new indicator I, a gradual operation pruning strategy is applied during the search process to further improve the search efficiency and accuracy, as we will discuss next.
Table 1: Comparison with state-of-the-art architectures on CIFAR-10 and CIFAR-100.

| Architecture | GPUs | Search Time (days) | Params (M) | CIFAR-10 Err. (%) | CIFAR-100 Err. (%) | Search Method |
|---|---|---|---|---|---|---|
| ResNet + CutOut | - | - | 1.7 | 4.61 | 22.10 | manual |
| NASNet-A + CutOut | 450 | 3-4 | 3.3 | 2.65 | - | RL |
| ENAS + CutOut | 1 | 0.45 | 4.6 | 2.89 | - | RL |
| AmoebaNet-A + CutOut | 450 | 7.0 | 3.2 | 3.34 | 18.93 | evolution |
| AmoebaNet-B + CutOut | 450 | 7.0 | 2.8 | 2.55 | - | evolution |
| Hierarchical NAS | 200 | 1.5 | 61.3 | 3.63 | - | evolution |
| Progressive NAS | 100 | 1.5 | 3.2 | 3.63 | 19.53 | SMBO |
| DARTS (1st) + CutOut | 1 | 0.38 | 3.3 | 3.00 | 17.76 | gradient-based |
| DARTS (2nd) + CutOut | 1 | 1.0 | 3.4 | 2.82 | 17.54 | gradient-based |
| SNAS + CutOut | 1 | 1.5 | 2.9 | 2.98 | - | gradient-based |
| GDAS + CutOut | 1 | 1.0 | 3.4 | 2.93 | 18.38 | gradient-based |
| MdeNAS + CutOut | 1 | 0.16 | 3.61 | 2.55 | - | MDL |
| Random Search + CutOut | 1 | 4.0 | 3.2 | 3.29 | - | random |
| EoiNAS + CutOut | 1 | 0.6 | 3.4 | 2.50 | 17.3 | gradient-based |
3.3 Gradual Operation Pruning Strategy
In existing methods, all candidate operations are kept throughout the search process, and unimportant operations are removed according to the architecture weights only after the search is over, to derive the final architecture. However, for such unimportant operations, we need not waste time and computational resources on sampling and training them.
Therefore, we propose a gradual operation pruning strategy to further improve the search efficiency and accuracy. We denote the training of every k epochs as a step. In each step, we prune the most inferior operation according to the new indicator; in the next step, we identify the most inferior among the remaining operations and prune it. This process continues until only one operation remains, which is regarded as the best operation to derive the final architecture. Owing to this gradual operation pruning strategy, our super network exhibits fast convergence.
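The strategy above reduces to a simple loop over pruning steps, sketched below. The helper names are hypothetical: `train_epoch` stands for one epoch of super-network training restricted to the surviving operations, and `indicator` maps an operation to its current importance value I:

```python
def gradual_prune(ops, indicator, k, train_epoch):
    """Every k epochs, drop the operation with the lowest importance
    indicator until a single operation remains on the edge."""
    remaining = list(ops)
    while len(remaining) > 1:
        for _ in range(k):
            train_epoch(remaining)       # train with surviving ops only
        worst = min(remaining, key=indicator)
        remaining.remove(worst)
    return remaining[0]

# Toy run with fixed indicator values (hypothetical):
scores = {"skip_connect": 0.2, "sep_conv_3x3": 0.9, "max_pool_3x3": 0.5}
best = gradual_prune(list(scores), scores.get, k=1,
                     train_epoch=lambda remaining: None)
```

Each pruning step shrinks the candidate set, so later epochs spend their sampling budget only on operations that are still plausible winners; this is the source of the faster convergence.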
Our complete search algorithm is presented in Algorithm 1. At the initialization of the search process, we perform gradient-descent optimization over only the network parameters w for the first 20 epochs. This helps to obtain balanced architecture weights between parameterized operations (e.g., convolutions) and non-parameterized operations (e.g., skip-connect). Then, we optimize the architecture parameters and network parameters by gradient descent in an alternating manner: we optimize the operation weights w by descending the gradient of the loss on the training set, and optimize the architecture parameters $\alpha$ by descending the gradient of the loss on the validation set. An operation is pruned after 20 epochs if its corresponding operation importance indicator I, which is updated along with the training iterations, is the lowest among the remaining operations. When the search procedure is finished, we decode the discrete cell architecture by first retaining the two strongest predecessor edges of each node, with the strength of the edge from node $i$ to node $j$ measured by its architecture weights, and then choosing the most likely operation on each retained edge by taking the argmax.
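The final decoding step can be sketched as follows. The storage layout (`{(i, j): {op: weight}}`) is an assumption for illustration, and excluding the `zero` op from the argmax follows common DARTS-style practice rather than a detail stated here:

```python
def decode_cell(edge_weights):
    """Keep the two strongest incoming edges per node, then pick the
    best non-zero operation on each retained edge."""
    genotype = {}
    for j in sorted({j for (_, j) in edge_weights}):
        incoming = [(i, ops) for (i, jj), ops in edge_weights.items()
                    if jj == j]
        def strength(entry):
            _, ops = entry
            # Edge strength: weight of its best non-"zero" operation.
            return max(w for op, w in ops.items() if op != "zero")
        top2 = sorted(incoming, key=strength, reverse=True)[:2]
        genotype[j] = [
            (i, max((op for op in ops if op != "zero"), key=ops.get))
            for i, ops in top2
        ]
    return genotype

# Hypothetical weights for three edges entering node 3:
edge_weights = {
    (0, 3): {"zero": 0.0, "skip": 0.5, "conv": 0.2},
    (1, 3): {"zero": 0.0, "skip": 0.1, "conv": 0.7},
    (2, 3): {"zero": 0.0, "skip": 0.2, "conv": 0.3},
}
genotype = decode_cell(edge_weights)
```

Here node 3 keeps edges from nodes 1 and 0 (strengths 0.7 and 0.5), assigning each its strongest non-zero operation.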
We conduct experiments on three popular image classification datasets, including CIFAR-10, CIFAR-100  and ImageNet . Architecture search is performed on CIFAR-10, and the discovered architectures are evaluated on all three datasets.
Both CIFAR-10 and CIFAR-100 contain 50K training and 10K testing RGB images with a fixed spatial resolution of 32×32. These images are equally distributed over 10 classes in CIFAR-10 and 100 classes in CIFAR-100. In the architecture search scenario, the training set is split equally into two subsets, one for updating the network parameters and the other for updating the architecture parameters. In the evaluation scenario, the standard training/testing split is used.
We use ImageNet to test the transferability of the architectures discovered on CIFAR-10. Specifically, we use a subset of ImageNet, namely ILSVRC2012, which contains 1,000 object categories with 1.28M training and 50K validation images. Following convention [40, 23], we apply the mobile setting, where the input image size is 224×224.
Table 2: Comparison with state-of-the-art architectures on ImageNet (mobile setting).

| Architecture | GPUs | Search Time (days) | Params (M) | MAdds (M) | Top-1 Err. (%) | Top-5 Err. (%) | Search Method |
|---|---|---|---|---|---|---|---|
| Progressive NAS | 100 | 1.5 | 5.1 | 588 | 25.8 | 8.1 | SMBO |
| DARTS (2nd) | 1 | 1.0 | 4.9 | 595 | 26.9 | 9.0 | gradient-based |
4.2 Implementation Details
Following the pipeline in GDAS , our experiments consist of three stages. First, EoiNAS is applied to search for the best normal/reduction cells on CIFAR-10. Then, a larger network is constructed by stacking the learned cells and retrained on both CIFAR-10 and CIFAR-100. The performance of EoiNAS is compared with other state-of-the-art NAS methods. Finally, we transfer the cells learned on CIFAR-10 to ImageNet to evaluate their performance on larger datasets.
Network Configurations. The neural cells are searched on CIFAR-10 following [23, 36, 7]. The candidate operation set O has 8 different operations, as introduced in Section 3.1. By default, we train a small network of 8 cells for 160 epochs in total and set the number of initial channels in the first convolution layer, C, to 16. Cells located at 1/3 and 2/3 of the total network depth are reduction cells, in which all operations adjacent to the input nodes have a stride of two.
Parameter Settings. For the network parameters w, we use SGD optimization. We start with a learning rate of 0.025 and anneal it down to 0.001 following a cosine schedule, with a momentum of 0.9 and a weight decay of 0.0003. For the architecture parameters $\alpha$, we use zero initialization, which implies an equal amount of attention over all possible operations, and Adam optimization with a learning rate of 0.0003, momentum (0.5, 0.999) and a weight decay of 0.001. To control the temperature parameter T of the Gumbel-Softmax in Eq. (5), we use an exponentially decaying schedule: T is initialized as 5 and finally reduced to 0. Following prior practice, we run EoiNAS 4 times with different random seeds and pick the best cell based on its validation performance; this procedure reduces the high variance of the search results.
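The two schedules can be written down directly, as in the sketch below. The small positive temperature floor is an assumption: the paper states T is reduced to 0, but a Gumbel-Softmax division by T requires a strictly positive value:

```python
import math

def cosine_lr(epoch, total_epochs, lr_max=0.025, lr_min=0.001):
    """Cosine annealing of the network-parameter learning rate
    from lr_max at epoch 0 down to lr_min at the final epoch."""
    return lr_min + 0.5 * (lr_max - lr_min) * (
        1.0 + math.cos(math.pi * epoch / total_epochs))

def gumbel_temperature(epoch, total_epochs, t0=5.0, t_min=1e-3):
    """Exponential decay of the Gumbel-Softmax temperature T from 5
    toward 0; t_min is an assumed positive floor."""
    decay = (t_min / t0) ** (1.0 / total_epochs)
    return t0 * decay ** epoch
```

Annealing T exponentially keeps sampling near-uniform early on (exploration) while making the edge distributions effectively one-hot by the end of the 160 search epochs.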
Our EoiNAS takes about 0.6 GPU-days to finish the search procedure on a single NVIDIA 1080Ti GPU. The best cells searched by EoiNAS are shown in Figure 6.
4.3 Results on CIFAR-10 and CIFAR-100
For CIFAR, we build a network of 20 cells and 36 initial channels, and train it for 600 epochs with batch size 128. Cutout regularization of length 16, drop-path with probability 0.3 and an auxiliary tower with weight 0.4 are applied. A standard SGD optimizer with a weight decay of 0.0003 and a momentum of 0.9 is used. The initial learning rate is 0.025, decayed to 0 following the cosine rule.
Evaluation results and comparisons with state-of-the-art approaches are summarized in Table 1. EoiNAS achieves test errors of 2.50% and 17.3% on CIFAR-10 and CIFAR-100, respectively, with a search cost of only 0.6 GPU-days. To obtain comparable performance, AmoebaNet spent nearly four orders of magnitude more computational resources (3150 GPU-days vs. 0.6 GPU-days). Our EoiNAS also outperforms GDAS and SNAS by a large margin. Notably, the architectures discovered by EoiNAS outperform those of MdeNAS, the previously most efficient approach, while using fewer parameters. In addition, we compare our method to random search (RS), which is considered a very strong baseline: the accuracy of the model searched by EoiNAS is 0.7% higher than that of RS.
4.4 Results on ImageNet
The ImageNet dataset is used to test the transferability of the architectures discovered on CIFAR-10. We adopt the same network configuration as GDAS, i.e., a network of 14 cells and 48 initial channels. The network is trained for 250 epochs with batch size 128 on a single NVIDIA 1080Ti GPU, which takes 12 days with our PyTorch implementation. The network parameters are optimized using an SGD optimizer with an initial learning rate of 0.1 (decayed linearly after each epoch), a momentum of 0.9 and weight decay. Additional enhancements, including label smoothing and an auxiliary loss tower, are applied during training.
Evaluation results and comparisons with state-of-the-art approaches are summarized in Table 2. The architecture discovered by EoiNAS outperforms that of GDAS by a large margin in terms of both classification accuracy and model size, demonstrating the transferability of the discovered architecture from a small dataset to a large one.
4.5 Ablation Studies
In addition, we have conducted a series of ablation studies that validate the importance and effectiveness of the proposed operation importance indicator as well as gradual operation pruning strategy incorporated in the design of EoiNAS.
In Table 3, we show ablation studies on CIFAR-10. GDAS is our baseline framework; GOP denotes the gradual operation pruning strategy, and OII denotes the proposed operation importance indicator. All architectures are trained for 600 epochs. As the results show, our super network exhibits fast convergence and high training accuracy owing to the gradual operation pruning strategy. The structure of the best cells discovered on CIFAR-10 is shown in Figure 7. By pruning inferior operations gradually during the search process, we achieve a considerable improvement in performance while using less search time.
Table 3 also demonstrates the effectiveness of the proposed operation importance indicator: it better judges the importance of each operation and achieves higher accuracy. These results reveal the necessity of the operation importance indicator.
4.6 Searched Architecture Analysis
In differentiable NAS methods, the architecture weights are not able to accurately reflect the importance of each operation, as discussed in Section 1, because the accuracies of fully trained stand-alone models have low correlation with their corresponding architecture weights. The proposed operation importance indicator can better decide which operation should be kept on each edge and which edges should be the inputs of each node, especially for the selection of skip-connect.
The skip-connect operation plays an important role in the cell structure. As studied in [12, 31], including a reasonable number and placement of skip connections makes gradient flow easier and the optimization of deep neural networks more stable. Comparing the searched results in Figures 6 and 7, the architectures discovered by EoiNAS on CIFAR-10 tend to preserve skip-connect operations in a hierarchical way, which facilitates gradient back-propagation and gives the network better convergence.
Besides, comparing Figure 6 with Figure 7, we can see that EoiNAS encourages connections in a cell to cascade through more levels; in other words, there are more layers in the cell, making the evaluation network deeper and achieving better classification performance.
Finally, the operation importance indicator and the gradual operation pruning strategy enhance each other. The indicator accurately represents the importance of each operation and determines which operations remain and which are pruned; meanwhile, by gradually pruning inferior operations, we obtain a more accurate indicator.
In this paper, we presented EoiNAS, a simple yet efficient architecture search algorithm for convolutional networks, in which a new indicator was proposed to fully exploit the operation importance to guide the model search. A gradual operation pruning strategy was proposed during the search process to further improve the search efficiency. By gradually pruning the inferior operations based on the proposed operation importance indicator, EoiNAS drastically reduced the computation consumption while achieving excellent model accuracies on CIFAR-10/100 and ImageNet, which outperformed the human-designed networks and other state-of-the-art NAS methods.
References

-  Bowen Baker, Otkrist Gupta, Nikhil Naik, and Ramesh Raskar. Designing neural network architectures using reinforcement learning. arXiv preprint arXiv:1611.02167, 2016.
-  Han Cai, Tianyao Chen, Weinan Zhang, Yong Yu, and Jun Wang. Efficient architecture search by network transformation. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
-  Liang-Chieh Chen, Maxwell Collins, Yukun Zhu, George Papandreou, Barret Zoph, Florian Schroff, Hartwig Adam, and Jon Shlens. Searching for efficient multi-scale architectures for dense image prediction. In Advances in Neural Information Processing Systems, pages 8699–8710, 2018.
-  Xin Chen, Lingxi Xie, Jun Wu, and Qi Tian. Progressive differentiable architecture search: Bridging the depth gap between search and evaluation. arXiv preprint arXiv:1904.12760, 2019.
-  Xiangxiang Chu, Bo Zhang, Hailong Ma, Ruijun Xu, Jixiang Li, and Qingyuan Li. Fast, accurate and lightweight super-resolution with neural architecture search. arXiv preprint arXiv:1901.07261, 2019.
-  Terrance DeVries and Graham W Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017.
-  Xuanyi Dong and Yi Yang. Searching for a robust neural architecture in four gpu hours. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1761–1770, 2019.
-  Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Neural architecture search: A survey. arXiv preprint arXiv:1808.05377, 2018.
-  Golnaz Ghiasi, Tsung-Yi Lin, and Quoc V Le. Nas-fpn: Learning scalable feature pyramid architecture for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7036–7045, 2019.
-  Emil Julius Gumbel. Statistical theory of extreme values and some practical applications: a series of lectures, volume 33. US Government Printing Office, 1948.
-  Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
-  Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European conference on computer vision, pages 630–645. Springer, 2016.
-  Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, et al. Searching for mobilenetv3. arXiv preprint arXiv:1905.02244, 2019.
-  Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7132–7141, 2018.
-  Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4700–4708, 2017.
-  Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144, 2016.
-  Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
-  Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
-  Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
-  Chenxi Liu, Liang-Chieh Chen, Florian Schroff, Hartwig Adam, Wei Hua, Alan L Yuille, and Li Fei-Fei. Auto-deeplab: Hierarchical neural architecture search for semantic image segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 82–92, 2019.
-  Chenxi Liu, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Yuille, Jonathan Huang, and Kevin Murphy. Progressive neural architecture search. In Proceedings of the European Conference on Computer Vision (ECCV), pages 19–34, 2018.
-  Hanxiao Liu, Karen Simonyan, Oriol Vinyals, Chrisantha Fernando, and Koray Kavukcuoglu. Hierarchical representations for efficient architecture search. arXiv preprint arXiv:1711.00436, 2017.
-  Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055, 2018.
-  Renqian Luo, Fei Tian, Tao Qin, Enhong Chen, and Tie-Yan Liu. Neural architecture optimization. In Advances in neural information processing systems, pages 7816–7827, 2018.
-  Chris J. Maddison, Daniel Tarlow, and Tom Minka. A* sampling. Advances in Neural Information Processing Systems, pages 1–10, 2014.
-  Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017.
-  Hieu Pham, Melody Y Guan, Barret Zoph, Quoc V Le, and Jeff Dean. Efficient neural architecture search via parameter sharing. arXiv preprint arXiv:1802.03268, 2018.
-  Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V Le. Regularized evolution for image classifier architecture search. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 4780–4789, 2019.
-  Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211–252, 2015.
-  Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4510–4520, 2018.
-  Karthik A Sankararaman, Soham De, Zheng Xu, W Ronny Huang, and Tom Goldstein. The impact of neural network overparameterization on gradient confusion and stochastic gradient descent. arXiv preprint arXiv:1904.06963, 2019.
-  Bernard W Silverman. Density estimation for statistics and data analysis. Routledge, 2018.
-  Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
-  Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1–9, 2015.
-  Lingxi Xie and Alan Yuille. Genetic cnn. In Proceedings of the IEEE International Conference on Computer Vision, pages 1379–1388, 2017.
-  Sirui Xie, Hehui Zheng, Chunxiao Liu, and Liang Lin. Snas: stochastic neural architecture search. arXiv preprint arXiv:1812.09926, 2018.
-  Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6848–6856, 2018.
-  Xiawu Zheng, Rongrong Ji, Lang Tang, Baochang Zhang, Jianzhuang Liu, and Qi Tian. Multinomial distribution learning for effective neural architecture search. arXiv preprint arXiv:1905.07529, 2019.
-  Barret Zoph and Quoc V Le. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578, 2016.
-  Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V Le. Learning transferable architectures for scalable image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8697–8710, 2018.