Given a dataset, neural architecture search (NAS) aims to discover high-performance convolutional architectures via a search algorithm over a huge search space. NAS has achieved much success in automated architecture engineering for various deep learning tasks, such as image classification [18, 31], language modeling [19, 30] and semantic segmentation [17, 6]. As summarized in [9], NAS methods consist of three parts: search space, search strategy, and performance estimation. A conventional NAS algorithm samples a specific convolutional architecture with a search strategy and estimates its performance, which serves as an objective to update the search strategy. Despite the remarkable progress, conventional NAS methods are prohibited by intensive computation and memory costs. For example, the reinforcement learning (RL) method in [31] trains and evaluates more than 20,000 neural networks across 500 GPUs over 4 days. Recent work [19] improves the scalability by formulating the task in a differentiable manner, where the search space is relaxed to a continuous one, so that the architecture can be optimized with respect to its validation performance by gradient descent. However, differentiable NAS still suffers from the issue of high GPU memory consumption, which grows linearly with the size of the candidate search set.
Indeed, most NAS methods [31, 17] perform performance estimation using standard training and validation of each searched architecture; typically, the architecture has to be trained to convergence to obtain the final evaluation on the validation set, which is computationally expensive and limits the search exploration. However, if different architectures can be ranked within a few epochs, why do we need to estimate the performance after the networks converge? Consider the example in Fig. 1: we randomly sample different architectures (LeNet [16], AlexNet [15], ResNet-18 [10] and DenseNet [13]) with different numbers of layers, and the performance ranking in training and testing is consistent (i.e., the ranking ResNet-18 > DenseNet-BC > AlexNet > LeNet holds across different networks and training epochs). Based on this observation, we state the following hypothesis for performance ranking:
Performance Ranking Hypothesis. If Cell A has higher validation performance than Cell B on a specific network at a certain training epoch, Cell A tends to be better than Cell B on different networks after the training of these networks converges.
Here, a cell is a fully convolutional directed acyclic graph (DAG) that maps an input tensor to an output tensor, and the final network is obtained by stacking a number of cells; the details are described in Sec. 3.
The hypothesis reveals a simple yet important rule for neural architecture search: the comparison between different architectures can be finished at an early training stage, since only the ranking of architectures matters, while obtaining the final converged results is unnecessary and time-consuming. Based on this hypothesis, we propose a simple yet effective solution to neural architecture search, termed Multinomial distribution for efficient Neural Architecture Search (MdeNAS), which directly formulates NAS as a distribution learning process. Specifically, the probabilities of the candidate operations between two nodes are initialized equally, which can be considered as a multinomial distribution. During learning, the parameters of the distribution are updated according to the current performance at every epoch, such that probability mass is transferred from bad operations to better ones. With this search strategy, MdeNAS is able to quickly and effectively discover high-performance architectures with complex graph topologies within a rich search space.
In our experiments, the convolutional cells designed by MdeNAS achieve strong quantitative results. The searched model reaches 2.4% test error on CIFAR-10 with fewer parameters. On ImageNet, our model achieves 75.2% top-1 accuracy under the mobile setting (MobileNet V1/V2 [11, 25]), while being 1.2× faster in measured GPU latency. The contributions of this paper are summarized as follows:
We introduce a novel algorithm for network architecture search, which is applicable to various large-scale datasets as the memory and computation costs are similar to common neural network training.
We propose a performance ranking hypothesis, which can be incorporated into existing NAS algorithms to speed up their search.
The proposed method achieves remarkable search efficiency, e.g., 2.4% test error on CIFAR-10 in 4 hours with a single GTX 1080Ti (6.0× faster than state-of-the-art algorithms), which is attributed to our distribution learning, an approach entirely different from RL-based [2, 31] and differentiable [19, 28] methods.
2 Related Work
As first proposed in [30, 31], automatic neural network search in a predefined architecture space has received significant attention in the last few years. To this end, many search algorithms have been proposed to find optimal architectures using specific search strategies. Since most hand-crafted CNNs are built by stacking reduction cells (where the spatial dimension of the input is reduced) and norm cells (where the spatial dimension of the input is preserved) [13, 10, 12], the works in [30, 31] propose to search networks under the same setting to reduce the search space. The works in [30, 31, 2] use reinforcement learning as a meta-controller to explore the architecture search space: a recurrent neural network (RNN) serves as the policy to sequentially sample a string encoding a specific neural architecture, and the policy network can be trained with the policy gradient algorithm or proximal policy optimization. The works in [3, 4, 18] regard the architecture search space as a tree structure for network transformation, i.e., the network is generated from a parent network with some predefined operations, which reduces the search space and speeds up the search. An alternative to RL-based methods is the evolutionary approach, which optimizes the neural architecture with evolutionary algorithms [27, 23].
However, the above architecture search algorithms are still computation-intensive. Therefore, some recent works propose to accelerate NAS with the one-shot setting, where architectures are sampled from a hyper representation graph and the search process is accelerated by parameter sharing [22]. For instance, DARTS [19] jointly optimizes the weights between two nodes in the hyper-graph with a continuous relaxation, so the parameters can be updated via standard gradient descent. However, one-shot methods suffer from large GPU memory consumption. To solve this problem, ProxylessNAS [5] explores the search space without a specific agent via path binarization. However, since the search procedure of ProxylessNAS still falls within the one-shot framework, it may have the same complexity, i.e., the benefit gained in ProxylessNAS is a trade-off between exploration and exploitation, so more epochs are needed in the search procedure. Moreover, the search algorithm in [5] is similar to previous work, being either differentiable or RL-based [19, 31].
Different from the previous methods, we encode the path/operation selection as a distribution sampling, and achieve the optimization of the controller/proxy via distribution learning. Our learning process further integrates the proposed hypothesis to estimate the merit of each operation/path, which achieves an extremely efficient NAS search.
3 Architecture Search Space
In this section, we describe the architecture search space and the method to build the network. We follow the same settings as in previous NAS works [19, 18, 31] to keep the consistency. As illustrated in Fig. 2, the network is defined in different scales: network, cell, and node.
Nodes are the fundamental elements that compose cells. Each node is a specific tensor (e.g., a feature map in convolutional neural networks), and each directed edge denotes an operation sampled from the operation search space to transform one node into another, as illustrated in Fig. 2(c). There are three types of nodes in a cell: input nodes, intermediate nodes, and an output node. Each cell takes the outputs of the previous cells as input nodes, and generates the intermediate nodes by applying sampled operations to the previous nodes (input and earlier intermediate nodes). The concatenation of all intermediate nodes is regarded as the final output node.
Following [19], the set of possible operations, denoted as O, consists of the following 8 operations: (1) 3×3 max pooling; (2) no connection (zero); (3) 3×3 average pooling; (4) skip connection (identity); (5) 3×3 dilated convolution with rate 2; (6) 5×5 dilated convolution with rate 2; (7) 3×3 depth-wise separable convolution; (8) 5×5 depth-wise separable convolution.
We simply employ element-wise addition at the input of a node with multiple incoming operations (edges). For example, in Fig. 2(b), one node has three incoming operations, the results of which are added element-wise to form its input.
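As a concrete toy illustration of this aggregation rule (our own sketch, not the paper's code), the input of a multi-edge node is simply the element-wise sum of its edge outputs:

```python
import numpy as np

def node_input(edge_outputs):
    """Element-wise addition of the outputs of all incoming edges."""
    return np.sum(np.stack(edge_outputs, axis=0), axis=0)

# three toy "feature maps" produced by three sampled edge operations
a = np.ones((4, 4))
b = 2 * np.ones((4, 4))
c = 3 * np.ones((4, 4))
out = node_input([a, b, c])
assert (out == 6).all()   # 1 + 2 + 3 at every position
```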
A cell is defined as a tiny convolutional network mapping an H × W × F tensor to another H' × W' × F' one. There are two types of cells: norm cells and reduction cells. A norm cell uses operations with stride 1, so H' = H and W' = W. A reduction cell uses operations with stride 2, so H' = H/2 and W' = W/2. For the numbers of filters F and F', a common heuristic in most human-designed convolutional neural networks [10, 13, 15, 26] is to double F whenever the spatial feature map is halved. Therefore, F' = F for stride 1 and F' = 2F for stride 2.
As illustrated in Fig. 2(b), the cell is represented by a DAG with 7 nodes: two input nodes c_{k-1} and c_{k-2}, four intermediate nodes that apply sampled operations to the input and earlier intermediate nodes, and an output node that concatenates the intermediate nodes. An edge between two nodes denotes a possible operation chosen according to a multinomial distribution over the search space. In training, the input of an intermediate node with multiple edges (operations) is obtained by element-wise addition. In testing, we select the operations with the top probabilities to generate the final cells. Since each intermediate node may receive an edge from the two input nodes and all earlier intermediate nodes, there are 2 + 3 + 4 + 5 = 14 possible edges per cell, and with 8 candidate operations per edge and two cell types, the total number of cell structures is 2 × 8^14, which is an extremely large space to search, and thus requires efficient optimization methods.
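The edge count and search-space size above can be checked with a few lines (a sketch under the stated connectivity assumption, i.e., each intermediate node may connect to the 2 input nodes and all earlier intermediate nodes):

```python
def num_edges(num_intermediate: int, num_inputs: int = 2) -> int:
    """Number of possible edges in one cell: node i has num_inputs + i candidates."""
    return sum(num_inputs + i for i in range(num_intermediate))

def search_space_size(num_ops: int = 8, num_intermediate: int = 4,
                      num_cell_types: int = 2) -> int:
    """One of `num_ops` operations per edge, for each cell type."""
    return num_cell_types * num_ops ** num_edges(num_intermediate)

print(num_edges(4))          # 2 + 3 + 4 + 5 = 14
print(search_space_size())   # 2 * 8**14
```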
As illustrated in Fig. 2(a), a network consists of a predefined number of stacked cells, either norm cells or reduction cells, each taking the outputs of the two previous cells as input. At the top of the network, global average pooling followed by a softmax layer produces the final output. Based on the Performance Ranking Hypothesis, we train a small (e.g., 6-layer) stacked model on the target dataset to search for the norm and reduction cells, and then generate a deeper network (e.g., 20 layers) for evaluation. The overall CNN construction process and the search space are identical to [19]; note, however, that our search algorithm is different.
4 Methodology

In this section, we present our NAS method. We first describe how to sample the network described in Sec. 3 to reduce GPU memory consumption during training. Then, we present multinomial distribution learning to effectively optimize the distribution parameters under the proposed hypothesis.
4.1 Sampling

As mentioned in Sec. 3, the diversity of network structures comes from different selections among the M possible operations (in this work, M = 8) between every two nodes. We initialize the probabilities of these operations uniformly as p_i = 1/M to encourage exploration. In the sampling stage, we follow the work in [5] and transform the M real-valued probabilities p into binary gates g:

g = [1, 0, ..., 0] with probability p_1, ..., [0, 0, ..., 1] with probability p_M. (1)
The final operation o between two nodes is then obtained by

o = Σ_m g_m · o_m, (2)

i.e., the single candidate operation o_m with g_m = 1.
As illustrated by the previous equations, we sample only one operation at run-time, which effectively reduces the memory cost compared with [19], where all candidate operations are kept in memory.
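A minimal sketch of this single-operation sampling (variable names `p`, `g`, and `M` are ours, not from an official implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 8                        # number of candidate operations per edge
p = np.full(M, 1.0 / M)      # uniform initialization for exploration

def sample_binary_gate(p, rng):
    """Draw a one-hot gate g: exactly one operation is active at run-time."""
    g = np.zeros_like(p)
    g[rng.choice(len(p), p=p)] = 1.0
    return g

g = sample_binary_gate(p, rng)
assert g.sum() == 1.0        # only one path is instantiated in memory
```

Because only the sampled path is instantiated, the memory cost per edge is that of a single operation rather than all M candidates.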
4.2 Multinomial Distribution Learning
Previous NAS methods are time- and memory-consuming. The use of reinforcement learning further handicaps such methods with delayed rewards in network training, i.e., the evaluation of a structure is available only after the network training converges. On the other hand, as mentioned in Sec. 1, according to the Performance Ranking Hypothesis, we can evaluate a cell while training the network. As illustrated in Fig. 3, the training epochs and accuracy for every operation in the search space are recorded. An operation is considered better than another if it achieves higher accuracy with fewer training epochs.
Formally, for a specific edge between two nodes, we define the operation probability as p, the training epoch count as H_e, and the accuracy as H_a, each of which is a real-valued column vector of length M = 8. To clearly illustrate our learning method, we further define the differential of epoch as:

ΔH_e = H_e · 1^T − 1 · H_e^T, (3)

and the differential of accuracy as:

ΔH_a = H_a · 1^T − 1 · H_a^T, (4)

where 1 is a column vector of length 8 with all elements equal to 1, and ΔH_e, ΔH_a are 8 × 8 matrices with ΔH_e(i, j) = H_e(i) − H_e(j) and ΔH_a(i, j) = H_a(i) − H_a(j). After one training epoch, the variables p, H_e, and H_a are updated from the evaluation results. The parameters of the multinomial distribution can then be updated through:

p_i ← p_i + α · ( Σ_j 1(ΔH_e(i, j) < 0, ΔH_a(i, j) > 0) − Σ_j 1(ΔH_e(i, j) > 0, ΔH_a(i, j) < 0) ), (5)

where α is a hyper-parameter (the learning rate of the distribution), and 1(·) denotes the indicator function that equals one if its condition is true.
As can be seen in Eq. 5, the probability of a specific operation is increased when it needs fewer epochs (ΔH_e < 0) and achieves higher performance (ΔH_a > 0), and is decreased in the opposite case (ΔH_e > 0, ΔH_a < 0). Since Eq. 5 is applied after every training epoch, the probabilities in the search space converge and stabilize after a few epochs. Together with the proposed performance ranking hypothesis (demonstrated later in Sec. 5), our multinomial distribution learning algorithm for NAS is extremely efficient, and achieves better performance compared with other state-of-the-art methods under the same settings. Since, according to the hypothesis, the performance ranking is consistent across networks with different numbers of layers, we further improve the search efficiency by replacing the search network in [19] with a shallower one (only 6 layers), which takes only 4 GPU hours of searching on CIFAR-10.
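The update rule of Eq. 5 can be sketched in a few lines of NumPy (a toy illustration under our reconstruction; `alpha`, the record vectors, and the renormalization step are our assumptions, not the paper's code):

```python
import numpy as np

def update_probabilities(p, H_e, H_a, alpha=0.01):
    """One distribution-learning step: reward ops with fewer epochs and
    higher accuracy, penalize ops with more epochs and lower accuracy."""
    ones = np.ones_like(p)
    dHe = np.outer(H_e, ones) - np.outer(ones, H_e)  # dHe[i,j] = H_e[i]-H_e[j]
    dHa = np.outer(H_a, ones) - np.outer(ones, H_a)  # dHa[i,j] = H_a[i]-H_a[j]
    reward  = ((dHe < 0) & (dHa > 0)).sum(axis=1)    # fewer epochs, better acc
    penalty = ((dHe > 0) & (dHa < 0)).sum(axis=1)    # more epochs, worse acc
    p = p + alpha * (reward - penalty)
    p = np.clip(p, 1e-8, None)
    return p / p.sum()                               # keep a valid distribution

p   = np.full(3, 1 / 3)
H_e = np.array([1.0, 2.0, 3.0])   # op 0 needed the fewest epochs
H_a = np.array([0.9, 0.6, 0.5])   # op 0 reached the highest accuracy
p = update_probabilities(p, H_e, H_a)
assert p[0] > p[1] > p[2]         # probability mass moves to the best op
```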
To generate the final network, we first select the operation with the highest probability on each edge. For nodes with multiple inputs, we keep the incoming edges with the top probabilities and combine them by element-wise addition. The final network consists of a predefined number of stacked cells, using either norm or reduction cells. Our multinomial distribution learning algorithm is summarized in Alg. 1.
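Deriving the final cell from the learned distribution can be sketched as follows (the operation names and probability values are illustrative only):

```python
import numpy as np

# illustrative names for the 8 candidate operations of the search space
OPS = ["max_pool", "zero", "avg_pool", "skip",
       "dil_conv_3x3", "dil_conv_5x5", "sep_conv_3x3", "sep_conv_5x5"]

def derive_cell(edge_probs):
    """edge_probs: dict mapping (src, dst) -> learned probability vector.
    Keep the highest-probability operation on each edge."""
    return {edge: OPS[int(np.argmax(p))] for edge, p in edge_probs.items()}

probs = {(0, 2): np.array([.05, .01, .04, .60, .10, .05, .10, .05]),
         (1, 2): np.array([.02, .02, .02, .04, .05, .05, .75, .05])}
print(derive_cell(probs))   # {(0, 2): 'skip', (1, 2): 'sep_conv_3x3'}
```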
5 Experiments

In this section, we first conduct experiments on CIFAR-10 to validate the proposed hypothesis. Then, we compare our method with state-of-the-art methods in both search effectiveness and efficiency on two widely used classification datasets, CIFAR-10 and ImageNet.
5.1 Experiment Settings
5.1.1 Datasets

We follow previous NAS works [19, 31] in the choice of experiment datasets and evaluation metrics. In particular, we conduct most experiments on CIFAR-10, which has 50,000 training images and 10,000 testing images. During architecture search, we randomly select a subset of the training images as the validation set to evaluate architectures. The color images have a size of 32 × 32 and belong to 10 classes, and all color intensities are normalized. To further evaluate the generalization ability, after discovering a good cell on CIFAR-10 we transfer the architecture into a deeper network, and therefore also conduct classification on ILSVRC 2012 ImageNet [24]. This dataset consists of 1,000 classes, with 1.28 million training images and 50,000 validation images. Here we consider the mobile setting, where the input image size is 224 × 224 and the number of multiply-add operations in the model is restricted to be less than 600M.
5.1.2 Implementation Details
In the search process, according to the hypothesis, the number of layers is irrelevant to the evaluation of a cell structure. We therefore use a small network of 6 cells in total, where the reduction cells are inserted in the second and third layers, with 4 intermediate nodes per cell. The network is trained for 100 epochs, with a batch size of 512 (feasible due to the shallow network and single-operation sampling) and an initial channel number of 16. We use SGD with momentum to optimize the network weights, with an initial learning rate of 0.025 (annealed down to zero following a cosine schedule), a momentum of 0.9, and weight decay. The learning rate α of the multinomial parameters is set to 0.01. The search takes only 4 GPU hours on a single NVIDIA GTX 1080Ti on CIFAR-10.
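The cosine schedule mentioned above ("annealed down to zero") can be written explicitly; this is a sketch assuming the standard half-cosine form, with the paper's initial rate of 0.025 and 100 epochs:

```python
import math

def cosine_lr(epoch, total_epochs=100, lr_init=0.025):
    """Half-cosine annealing from lr_init at epoch 0 to zero at total_epochs."""
    return 0.5 * lr_init * (1 + math.cos(math.pi * epoch / total_epochs))

assert abs(cosine_lr(0) - 0.025) < 1e-12    # starts at the initial rate
assert abs(cosine_lr(100)) < 1e-12          # fully annealed to zero
```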
In the architecture evaluation step, the experimental settings are similar to [19, 31, 22]. A large network of 20 cells is trained for 600 epochs with a batch size of 96, with additional regularization such as cutout [8] and path dropout with probability 0.3 [31]. All experiments and models are implemented in PyTorch [21].
On ImageNet, we keep the same search hyper-parameters as on CIFAR-10. In the training procedure, the network is trained for 120 epochs with a batch size of 1024, weight decay, and an initial SGD learning rate of 0.4 (annealed down to zero following a cosine schedule).
We compare our method with both human-designed and NAS-generated networks. The NAS networks are classified according to their search methods: RL (NASNet [31], ENAS [22] and Path-level NAS [4]), evolutionary algorithms (AmoebaNet [23]), sequential model-based optimization (SMBO) (PNAS [18]), and gradient-based (DARTS [19]). We further compare our method under the mobile setting on ImageNet to demonstrate its generalization ability. The best architecture generated by our algorithm on CIFAR-10 is transferred to ImageNet, following the same experimental settings as the works mentioned above. Since our algorithm takes less time and memory, we also search directly on ImageNet, and compare with a similar low-cost baseline, ProxylessNAS [5].
| Architecture | Test Error (%) | Params (M) | Search Cost (GPU days) | Search Method |
| --- | --- | --- | --- | --- |
| Path-level NAS [4] | 2.49 | 5.7 | 8.3 | RL |
| DARTS (first order) [19] | 2.94 | 3.1 | 1.5 | gradient-based |
| DARTS (second order) [19] | 2.83 | 3.4 | 4 | gradient-based |
| Random Sample | 3.49 | 3.1 | - | - |
5.2 Evaluation of the Hypothesis
We first conduct experiments to verify the correctness of the proposed performance ranking hypothesis. To obtain an intuitive sense of the hypothesis, we introduce the Kendall rank correlation coefficient [1], a.k.a. Kendall's τ. Given two different rankings of n items, Kendall's τ is computed as:

τ = (N_c − N_d) / (n(n − 1)/2), (6)

where N_c is the number of concordant pairs (in the same order in both rankings) and N_d is the number of discordant pairs (in reverse order). τ ∈ [−1, 1], with 1 meaning the rankings are identical and −1 meaning one ranking is the reverse of the other. The probability of a pair being in the same order in both rankings is (τ + 1)/2; therefore, τ = 0.5 means that 75% of the pairs are concordant.
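Kendall's τ and the concordance probability can be computed directly with a brute-force pair count (a sketch; function and variable names are ours):

```python
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """Kendall's tau: (concordant - discordant) / total pairs, O(n^2)."""
    n = len(rank_a)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (rank_a[i] - rank_a[j]) * (rank_b[i] - rank_b[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

tau = kendall_tau([1, 2, 3, 4], [1, 2, 4, 3])  # one swapped pair out of six
print(tau, (tau + 1) / 2)                       # tau = 2/3, concordance 5/6
```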
We randomly sample different network architectures in the search space, and report the loss, accuracy, and Kendall's τ at different epochs on the testing set. The performance ranking at every epoch is compared with the final performance ranking of the architectures. As illustrated in Fig. 4, the accuracy and loss curves are hardly distinguishable due to the homogeneity of the sampled networks, i.e., all the networks are generated from the same space. On the other hand, the Kendall coefficient stays high in most epochs and generally approaches 1 as the number of epochs increases. This indicates that the architecture ranking at every epoch is highly informative and generally gets closer to the final ranking. Note that the mean value of Kendall's τ over all epochs is 0.474; the hypothesis therefore holds with a probability of about 0.74. Moreover, we find that the hypothesis and multinomial distribution learning reinforce each other: the hypothesis guarantees a high expectation of selecting a good architecture, while the distribution learning decreases the probability of sampling a bad one.
| Architecture | Top-1 Acc (%) | Top-5 Acc (%) | Params (M) | Search Cost | Search Method |
| --- | --- | --- | --- | --- | --- |
| ShuffleNetV1 2x (V1) [29] | 70.9 | 90.8 | 5 | - | manual |
| ShuffleNetV2 2x (V2) [20] | 73.7 | - | 5 | - | manual |
5.3 Results on CIFAR-10
We start by finding the optimal cell architecture using the proposed method. In particular, we first search for neural architectures on an over-parameterized network, and then evaluate the best architecture with a deeper network. To eliminate random factors, the algorithm is run several times. We find that the resulting architecture performance varies only slightly across runs, and also differs only slightly (within 0.2%) from the final performance in the deeper network, which indicates the stability of the proposed method. The best architecture is illustrated in Fig. 5.
The summarized results for convolutional architectures on CIFAR-10 are presented in Tab. 1. It is worth noting that the proposed method outperforms the state-of-the-art [31, 19] with far less computation (only 0.16 GPU days vs. 1,800 GPU days in [31]). Since performance depends heavily on regularization (e.g., cutout [8]) and network depth, the compared architectures are selected under the same settings. Moreover, previous works search networks using either differentiable or black-box optimization. We attribute our superior results to our novel approach of solving the problem with distribution learning, as well as the fast learning procedure: the network architecture can be obtained directly from the distribution once it converges. On the contrary, previous methods evaluate architectures only after the training process is done, which is highly inefficient. Another notable phenomenon observed in Tab. 1 is that even random sampling in the search space yields a test error of only 3.49%, which is comparable with previous methods in the same search space. We can therefore reasonably conclude that the high performance of previous methods is partially due to the good search space, while the proposed method quickly explores this space and generates a better architecture. We also report the results of hand-crafted networks in Tab. 1. Clearly, our method shows a notable improvement, which indicates its superiority in both resource consumption and test accuracy.
| Model | Top-1 (%) | Search Time (GPU days) | GPU Latency |
| --- | --- | --- | --- |
| Proxyless (GPU) [5] | 74.8 | 4 | 5.1 ms |
| Proxyless (CPU) [5] | 74.1 | 4 | 7.4 ms |
5.4 Results on ImageNet
We also run our algorithm on the ImageNet dataset [24]. Following existing works, we conduct two experiments with different search datasets and test on the same dataset. As reported in Tab. 1, previous works are too time-consuming on CIFAR-10 to search directly on ImageNet. Therefore, we first consider a transfer experiment: the best architecture found on CIFAR-10 is directly transferred to ImageNet, using two initial convolution layers of stride 2 before stacking 14 cells, with scale reduction (reduction cells) at layers 1, 2, 6 and 10. The total number of FLOPs is controlled by the initial number of channels. We follow existing NAS works and compare performance under the mobile setting, where the input image size is 224 × 224 and the model is constrained to less than 600M FLOPs. The other hyper-parameters follow [19, 31], as described in Sec. 5.1.2. The results in Tab. 2 show that the best cell architecture found on CIFAR-10 is transferable to ImageNet: the proposed method achieves accuracy comparable with state-of-the-art methods while using much less computational resources.
The extremely small time and GPU memory consumption makes direct search on ImageNet feasible for our algorithm. Therefore, we further conduct a search experiment on ImageNet. We follow [5] in the network settings and the search space. In particular, we allow a set of mobile convolution layers with various kernel sizes and expansion ratios. To further accelerate the search, we directly use the networks with the CPU and GPU structures obtained in [5]. In this way, the zero and identity layers in the search space are abandoned, and we only search the hyper-parameters related to the convolutional layers. The results are reported in Tab. 3: our MdeNAS achieves superior performance compared to both human-designed and automatically searched architectures, with less computation consumption. The best architecture is illustrated in Fig. 6.
In this paper, we have presented MdeNAS, the first distribution-learning-based architecture search algorithm for convolutional networks. Our algorithm builds on a novel performance ranking hypothesis, which compares architecture performance in the early training stage and thereby further reduces the search time. Benefiting from this hypothesis, MdeNAS drastically reduces computation while achieving excellent model accuracy on CIFAR-10 and ImageNet. Furthermore, MdeNAS can search directly on ImageNet, outperforming both human-designed networks and other NAS methods.
References

-  H. Abdi. The kendall rank correlation coefficient. Encyclopedia of Measurement and Statistics. Sage, Thousand Oaks, CA, pages 508–510, 2007.
-  B. Baker, O. Gupta, N. Naik, and R. Raskar. Designing neural network architectures using reinforcement learning. arXiv preprint arXiv:1611.02167, 2016.
-  H. Cai, T. Chen, W. Zhang, Y. Yu, and J. Wang. Efficient architecture search by network transformation. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
-  H. Cai, J. Yang, W. Zhang, S. Han, and Y. Yu. Path-level network transformation for efficient architecture search. arXiv preprint arXiv:1806.02639, 2018.
-  H. Cai, L. Zhu, and S. Han. Proxylessnas: Direct neural architecture search on target task and hardware. arXiv preprint arXiv:1812.00332, 2018.
-  L.-C. Chen, M. Collins, Y. Zhu, G. Papandreou, B. Zoph, F. Schroff, H. Adam, and J. Shlens. Searching for efficient multi-scale architectures for dense image prediction. In Advances in Neural Information Processing Systems, pages 8713–8724, 2018.
-  M. Courbariaux, Y. Bengio, and J.-P. David. Binaryconnect: Training deep neural networks with binary weights during propagations. In Advances in neural information processing systems, pages 3123–3131, 2015.
-  T. DeVries and G. W. Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017.
-  T. Elsken, J. H. Metzen, and F. Hutter. Neural architecture search: A survey. arXiv preprint arXiv:1808.05377, 2018.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
-  A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
-  J. Hu, L. Shen, and G. Sun. Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7132–7141, 2018.
-  G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4700–4708, 2017.
-  A. Krizhevsky and G. Hinton. Convolutional deep belief networks on cifar-10. Unpublished manuscript, 40(7), 2010.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
-  Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
-  C. Liu, L.-C. Chen, F. Schroff, H. Adam, W. Hua, A. Yuille, and L. Fei-Fei. Auto-deeplab: Hierarchical neural architecture search for semantic image segmentation. arXiv preprint arXiv:1901.02985, 2019.
-  C. Liu, B. Zoph, M. Neumann, J. Shlens, W. Hua, L.-J. Li, L. Fei-Fei, A. Yuille, J. Huang, and K. Murphy. Progressive neural architecture search. In Proceedings of the European Conference on Computer Vision, pages 19–34, 2018.
-  H. Liu, K. Simonyan, and Y. Yang. Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055, 2018.
-  N. Ma, X. Zhang, H.-T. Zheng, and J. Sun. Shufflenet v2: Practical guidelines for efficient cnn architecture design. In Proceedings of the European Conference on Computer Vision, pages 116–131, 2018.
-  A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in pytorch. 2017.
-  H. Pham, M. Y. Guan, B. Zoph, Q. V. Le, and J. Dean. Efficient neural architecture search via parameter sharing. arXiv preprint arXiv:1802.03268, 2018.
-  E. Real, A. Aggarwal, Y. Huang, and Q. V. Le. Regularized evolution for image classifier architecture search. arXiv preprint arXiv:1802.01548, 2018.
-  O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211–252, 2015.
-  M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4510–4520, 2018.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
-  L. Xie and A. Yuille. Genetic cnn. In Proceedings of the IEEE International Conference on Computer Vision, pages 1379–1388, 2017.
-  S. Xie, H. Zheng, C. Liu, and L. Lin. Snas: stochastic neural architecture search. arXiv preprint arXiv:1812.09926, 2018.
-  X. Zhang, X. Zhou, M. Lin, and J. Sun. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6848–6856, 2018.
-  B. Zoph and Q. V. Le. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578, 2016.
-  B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le. Learning transferable architectures for scalable image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8697–8710, 2018.