NAS evaluation is frustratingly hard

12/28/2019
by Antoine Yang et al.
HUAWEI Technologies Co., Ltd.

Neural Architecture Search (NAS) is an exciting new field which promises to be as much of a game-changer as Convolutional Neural Networks were in 2012. Despite many great works leading to substantial improvements on a variety of tasks, comparison between different methods is still very much an open issue. While most algorithms are tested on the same datasets, there is no shared experimental protocol followed by all. As such, and due to the under-use of ablation studies, there is a lack of clarity regarding why certain methods are more effective than others. Our first contribution is a benchmark of 8 NAS methods on 5 datasets. To overcome the hurdle of comparing methods with different search spaces, we propose using a method's relative improvement over the randomly sampled average architecture, which effectively removes advantages arising from expertly engineered search spaces or training protocols. Surprisingly, we find that many NAS techniques struggle to significantly beat the average architecture baseline. We perform further experiments with the commonly used DARTS search space in order to understand the contribution of each component in the NAS pipeline. These experiments highlight that: (i) the use of tricks in the evaluation protocol has a predominant impact on the reported performance of architectures; (ii) the cell-based search space has a very narrow accuracy range, such that the seed has a considerable impact on architecture rankings; (iii) the hand-designed macro-structure (cells) is more important than the searched micro-structure (operations); and (iv) the depth-gap is a real phenomenon, evidenced by the change in rankings between 8 and 20 cell architectures. To conclude, we suggest best practices that we hope will prove useful for the community and help mitigate current NAS pitfalls. The code used is available at https://github.com/antoyang/NAS-Benchmark.


1 Introduction

As the deep learning revolution helped us move away from hand-crafted features (Krizhevsky et al., 2012) and reach new heights (He et al., 2016; Szegedy et al., 2017), so does Neural Architecture Search (NAS) hold the promise of freeing us from hand-crafted architectures, which require tedious and expensive tuning for each new task or dataset. Identifying the optimal architecture is indeed a key pillar of any Automated Machine Learning (AutoML) pipeline. Research in the last two years has proceeded at a rapid pace and many search strategies have been proposed, from Reinforcement Learning (Zoph and Le, 2017; Pham et al., 2018), to Evolutionary Algorithms (Real et al., 2017), to Gradient-based methods (Liu et al., 2019). Still, it remains unclear which approach and search algorithm is preferable. Typically, methods have been evaluated on accuracy alone, even though accuracy is influenced by many other factors besides the search algorithm. Comparison between published search algorithms for NAS is therefore either very difficult (complex training protocols with no code available) or simply impossible (different search spaces), as previously pointed out (Li and Talwalkar, 2019; Sciuto et al., 2019; Lindauer and Hutter, 2019).

NAS methods have been typically decomposed into three components (Elsken et al., 2019; Li and Talwalkar, 2019): search space, search strategy and model evaluation strategy. This division is important to keep in mind, as an improvement in any of these elements will lead to a better final performance. But is a method with a more (manually) tuned search space a better AutoML algorithm? If the key idea behind NAS is to find the optimal architecture, without human intervention, why are we devoting so much energy to infuse expert knowledge into the pipeline? Furthermore, the lack of ablation studies in most works makes it harder to pinpoint which components are instrumental to the final performance, which can easily lead to Hypothesizing After the Results are Known (HARKing; Gencoglu et al., 2019).

Paradoxically, the huge effort invested in finding better search spaces and training protocols has led to a situation in which any randomly sampled architecture performs almost as well as those obtained by the search strategies. Our findings suggest that most of the gains in accuracy in recent contributions to NAS have come from manual improvements in the training protocol, not from the search algorithms.

As a step towards understanding which methods are more effective, we have collected code for reasonably fast NAS algorithms (with search times on the order of GPU-days) and benchmarked them on well-known computer vision datasets. Using a simple metric, the relative improvement over the average architecture of the search space, we find that most NAS methods perform very similarly and rarely substantially above this baseline. The methods used are DARTS, StacNAS, PDARTS, MANAS, CNAS, NSGANET, ENAS and NAO. The datasets used are CIFAR10, CIFAR100, SPORT8, MIT67 and FLOWERS102.

Through a number of additional experiments on the widely used DARTS search space (Liu et al., 2019), we will show that: (a) how you train your model has a much bigger impact than the actual architecture chosen; (b) different architectures from the same search space perform very similarly, so much so that (c) hyperparameters, like the number of cells, or the seed itself, have a very significant effect on the ranking; and (d) the specific operations themselves have less impact on the final accuracy than the hand-designed macro-structure of the network. Notably, we find that the architectures sampled from this search space (available from the link in the abstract) are all within a range of one percentage point (top-1 accuracy) after a standard full training on CIFAR10. Finally, we include some observations on how to foster reproducibility and a discussion on how to potentially avoid some of the encountered pitfalls.

2 Related work

As mentioned, NAS methods have the potential to truly revolutionize the field, but to do so it is crucial that future research avoids common mistakes. Some of these concerns have been recently raised by the community.

For example, Li and Talwalkar (2019) highlight that most NAS methods a) fail to compare against an adequate baseline, such as a properly implemented random search strategy, b) are overly complex, with no ablation to properly assign credit to the important components, and c) fail to provide all details needed for successfully reproducing their results. In our paper we go one step further and argue that the relative improvement over the average (randomly sampled) architecture is a useful tool to quantify the effectiveness of a proposed solution and to compare it with competing methods. To partly answer their second point, and to understand how much the final accuracy depends on the specific architecture, we implement an in-depth study of the widely employed DARTS (Liu et al., 2019) search space and perform an ablation on the commonly used training techniques (e.g. Cutout, DropPath, AutoAugment).

Sciuto et al. (2019) also took the important step of systematically using fair baselines, suggesting random search with early stopping, averaged over multiple seeds, as an extremely competitive baseline. They find that the search spaces of the three methods investigated (DARTS, ENAS, NAO) have been expertly engineered to the extent that any randomly selected architecture performs very well. In contrast, we show that even random sampling (without any search) provides an incredibly competitive baseline. Our relative improvement metric allows us to isolate the contribution of the search strategy from the effects of the search space and training pipeline. Thus, we further confirm the authors' claim, showing that indeed the average architecture performs extremely well and that how a model is trained has more impact than any specific architecture.

3 NAS Benchmark

In this section we present a systematic evaluation of 8 methods on 5 datasets using a strategy that is designed to reveal the quality of each method's search strategy, removing the effect of the manually engineered training protocol and search space. The goal is to find general trends and highlight common features rather than just pinpointing the most accurate algorithm.

Understanding why methods are effective is not an easy task: most introduce variations to previous search spaces, search strategies, and training protocols, with ablations disentangling the contribution of each component often incomplete or missing. In other words, how can we be sure that a new state-of-the-art method is not simply due to a better engineered search space or training protocol? To address this issue we compare a set of 8 methods with randomly sampled architectures from their respective search spaces, trained with the same protocol as the searched architectures.

The ultimate goal behind NAS should be to return the optimal model for any given dataset, at least within the limits of a certain task, and we feel that the current practice of searching almost exclusively on CIFAR10 goes against this principle. Indeed, to avoid the very concrete risk of overfitting to this particular dataset, NAS methods should be tested on a variety of tasks. For this reason we run experiments on five different datasets.

3.1 Methodology

Criteria for dataset selection.

We selected datasets to cover a variety of subtasks within image classification. In addition to the standard CIFAR10, we select CIFAR100 for a more challenging object classification problem (Krizhevsky, 2009); SPORT8 for action classification (Li and Fei-Fei, 2007); MIT67 for scene classification (Quattoni and Torralba, 2009); and FLOWERS102 for fine-grained object classification (Zisserman, 2008). More details are given in the Appendix.

Criteria for method selection.

We selected methods which (a) have open-source code, or provided it upon request, and (b) have a reasonable running time, specifically a search cost of at most a few GPU-days on CIFAR10. The selected methods are: DARTS (Liu et al., 2019), StacNAS (Li et al., 2019), PDARTS (Chen, 2019), MANAS (Carlucci et al., 2019), CNAS (Weng et al., 2019), NSGANET (Lu et al., 2018), ENAS (Pham et al., 2018), and NAO (Luo et al., 2018). With the exception of NAO and NSGANET, all methods are DARTS variants and use weight sharing.

Evaluation protocol.

NAS algorithms usually consist of two phases: (i) search, producing the best architecture according to the search algorithm used; (ii) augmentation, consisting in training from scratch the best model found in the search phase. We evaluate methods as follows: (1) sample 8 architectures from the search space, uniformly at random, and use the method's code to augment these architectures (same augment seed for all); (2) use the method's code to search for 8 architectures and augment them (different search seed, same augment seed); (3) report the mean and standard deviation of the top-1 test accuracy, obtained at the end of the augmentation, for both the randomly sampled and the searched architectures.

Since both learned and randomly sampled architectures share the same search space and training protocol, calculating the relative improvement over this random baseline, RI = 100 × (Acc_m - Acc_r) / Acc_r, offers insight into the quality of the search strategy alone, where Acc_m and Acc_r represent the top-1 accuracy of the search method and of the random sampling strategy, respectively. A good, general-purpose NAS method is expected to yield RI > 0 consistently over different searches and across different subtasks. We emphasize that the comparison is not against random search, but rather against random sampling, i.e., the average architecture of the search space. For example, in the DARTS search space, for each edge in the graph that defines a cell we select one out of eight possible operations (e.g. pooling or convolutions) with uniform probability.
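To make the random-sampling baseline and the relative improvement metric concrete, the sketch below shows one possible way to draw a uniformly random DARTS-style cell and to compute RI from lists of accuracies. It is a minimal Python illustration, not the released benchmark code; the operation list and the two-predecessors-per-node convention are assumptions based on the standard DARTS search space.

import random

# Candidate operations per edge in a DARTS-style search space (assumed list).
OPS = ["none", "skip_connect", "max_pool_3x3", "avg_pool_3x3",
       "sep_conv_3x3", "sep_conv_5x5", "dil_conv_3x3", "dil_conv_5x5"]

def sample_cell(n_nodes=4):
    """Uniformly sample a cell: each intermediate node i picks two distinct
    predecessors among the two cell inputs and earlier intermediate nodes,
    and one operation per incoming edge, all with uniform probability."""
    cell = []
    for i in range(n_nodes):
        predecessors = random.sample(range(i + 2), 2)
        cell.append([(random.choice(OPS), p) for p in predecessors])
    return cell

def relative_improvement(acc_method, acc_random):
    """RI = 100 * (Acc_m - Acc_r) / Acc_r, computed from the mean top-1
    accuracies of searched and randomly sampled architectures."""
    acc_m = sum(acc_method) / len(acc_method)
    acc_r = sum(acc_random) / len(acc_random)
    return 100.0 * (acc_m - acc_r) / acc_r

print(sample_cell())                                  # one random cell
print(relative_improvement([97.1] * 8, [96.9] * 8))   # toy accuracies, RI ~ 0.21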
Hyperparameters are optimized on CIFAR10, according to the values reported by the corresponding authors. Since most methods do not include hyperparameter optimization as part of the search routine, we assumed the hyperparameters to be robust and generalizable to other tasks. As such, aside from scaling down the architecture depending on dataset size, experiments on other datasets use the same hyperparameters. Other training details and references are given in the Appendix.

Figure 1: Comparison of search methods and random sampling from their respective search spaces, together with the performance and computational cost of the search phase on CIFAR10. Methods lying on the diagonal perform the same as the average architecture, while methods above the diagonal outperform it. See also Table 1.
Table 1: Relative improvement metric, RI = 100 × (Acc_m - Acc_r) / Acc_r (in %), where Acc_m and Acc_r are the accuracies of the search method and the random sampling baseline, respectively.
Method C10 C100 S8 M67 F102
DARTS 0.32 0.23 -0.13 0.10 0.25
PDARTS 0.52 1.20 0.51 1.19 0.20
NSGANET -0.48 1.37 0.43 2.00 1.47
ENAS 0.01 -3.44 0.67 0.13 0.47
CNAS 0.74 -0.89 -1.06 -0.66 -2.48
MANAS 0.18 -0.20 0.33 1.48 0.70
StacNAS 0.43 2.87 0.38 0.05 -0.16
NAO 0.44 -0.01 -2.05 -1.53 -0.13

3.2 Results

Figure 1 shows the evaluation results on the five datasets, from which we draw two main conclusions. First, the improvements over random sampling tend to be small. In some cases the average performance of a method is even below that of the average randomly sampled architecture, which suggests that the search method is not converging to desirable architectures. Second, the small range of accuracies obtained hints at narrow search spaces, where even the worst architectures perform reasonably well. See Section 5 for more experiments corroborating this conclusion.

We also observe that, on CIFAR10, the top half of best-performing methods (PDARTS, MANAS, DARTS, StacNAS) all perform similarly and positively relative to their respective search spaces, but more variance is seen on the other datasets. This could be explained by the fact that most methods' hyperparameters have been optimized on CIFAR10 and might not generalize as well to different datasets. As a matter of fact, we found that no NAS method reports the time needed to optimize hyperparameters. In addition, Table 1 shows the relative improvement metric (see Section 3.1) for each method and dataset.

The computational cost of searching for architectures is a limiting factor in their applicability and, therefore, an important variable in the evaluation of NAS algorithms. Figure 1 shows the performance as well as the computational cost of the search phase on CIFAR10.

4 Comparison of training protocols

Figure 2: Comparison of different augmentation protocols for the DARTS search space on CIFAR10. Same-colored dots represent the minimum and maximum accuracies across runs.

In this section we attempt to shed some light on the surprising results of the previous section. We noticed that there was a much larger difference between the random baselines of different methods than the actual increase in performance achieved by each approach. We hypothesized that how a network is trained (the training protocol) has a larger impact on the final accuracy than which architecture is trained, for each search space. To test this, we performed a sensitivity analysis using the most common performance-boosting training protocols.

4.1 Methodology

We decided to evaluate architectures from the commonly used DARTS search space (Liu et al., 2019) on the CIFAR10 dataset. We use the following process: 1) sample random architectures, 2) train them with different training protocols (details below) and 3) report mean, standard deviation and maximum of the top-1 test accuracy at the end of the training process.

Training protocols.

The simplest training protocol, which we will call Base, is similar to the one used in DARTS but with all tricks disabled: the model is simply trained for a fixed number of epochs. On the other extreme, our full protocol uses several tricks which have been used in recent works (Xie et al., 2019b; Nayman et al., 2019): Auxiliary Towers (A), DropPath (D; Shakhnarovich, 2017), Cutout (C; Taylor, 2017), AutoAugment (AA; Cubuk et al., 2018), extended training for 1500 epochs (1500E), and an increased number of initial channels (50C). In between these two extremes, by selectively enabling and disabling each component, we evaluated a number of further intermediate training protocols. When active, the DropPath probability, Cutout length and auxiliary tower weight are kept fixed, and AutoAugment combined with Cutout is applied after the standard data pre-processing techniques previously described, as in Popien (2019).
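To make the protocol grid explicit, a minimal configuration sketch is given below. The flag names are ours, and the base epoch and channel counts are placeholder assumptions; only the values implied by the 1500E and 50C tags come from the text.

from dataclasses import dataclass, replace

@dataclass
class TrainProtocol:
    auxiliary_tower: bool = False   # (A)
    drop_path: bool = False         # (D)
    cutout: bool = False            # (C)
    autoaugment: bool = False       # (AA)
    epochs: int = 600               # base training budget (assumed value)
    init_channels: int = 36         # base network width (assumed value)

BASE = TrainProtocol()              # all tricks disabled
FULL = TrainProtocol(auxiliary_tower=True, drop_path=True, cutout=True,
                     autoaugment=True, epochs=1500, init_channels=50)

# Intermediate protocols selectively enable subsets of tricks, e.g.:
BASE_CUTOUT = replace(BASE, cutout=True)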

4.2 Results

As shown in Figure 2, a large difference of several percentage points (p.p.) exists between the simplest and the most advanced training protocols. Indeed, this is much higher than any improvement over random sampling observed in the previous section: for example, on CIFAR10, the best improvement observed was less than one p.p. (see Table 1). In other words, the training protocol is often far more important than the architecture used. Note that the best accuracy of the random architectures trained with the best protocol is only marginally below the state-of-the-art (Nayman et al., 2019).

To summarize, it seems that most recent state-of-the-art results, though impressive, cannot always be attributed to superior search strategies. Rather, they are often the result of expert knowledge applied to the evaluation protocol. In Figure 9 (Appendix A.3.1) we show similar results when training a ResNet-50 (He et al., 2016) with the same protocols.

5 Study of DARTS’ search space

5.1 Distribution of the Random Sampling within DARTS search space

To better understand the results from the previous section, we sampled a considerable number of architectures from the most commonly used search space (Liu et al., 2019) and fully trained them with the matching training protocol (Cutout + DropPath + Auxiliary Towers). This allows us to get a sense of how much variance exists between the different models (training statistics are made available at the link in the abstract).

As we can observe from Figure 3, architectures sampled from this search space all perform very similarly, with only a small gap between the worst and the best architecture we found.

To put this into perspective, many methods using the same training protocol fall within (or very close to) the standard deviation of the average architecture. Furthermore, as we can observe in Figure 6, the number of cells (a human-picked hyperparameter) has a much larger impact on the final accuracy.

In Figure 4 we used the training statistics of the models to plot the correlation between test accuracies at different epochs and the final accuracy: it grows slowly, in an almost linear fashion. We note that using a moving average of the accuracies yields a stronger correlation, which could be useful for methods relying on early stopping.
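As a sketch of how such an early-versus-final accuracy correlation can be computed, including the moving-average smoothing mentioned above (NumPy and SciPy are assumed, Kendall tau is used purely as an example rank correlation, and the window size is arbitrary):

import numpy as np
from scipy.stats import kendalltau

def epoch_correlations(acc, window=5):
    """acc: array of shape (n_models, n_epochs), test accuracy per epoch.
    Returns, for every epoch, the rank correlation between the accuracy at
    that epoch and the final accuracy, for raw and smoothed curves."""
    final = acc[:, -1]
    kernel = np.ones(window) / window
    smoothed = np.apply_along_axis(
        lambda a: np.convolve(a, kernel, mode="valid"), 1, acc)
    raw = [kendalltau(acc[:, e], final)[0] for e in range(acc.shape[1])]
    smooth = [kendalltau(smoothed[:, e], final)[0]
              for e in range(smoothed.shape[1])]
    return raw, smooth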

Figure 3: Training curves for the randomly sampled architectures. Inset plot shows the histogram of accuracies at different epochs.
Figure 4: Correlation between accuracies at different epochs and the final accuracy, using raw and smoothed accuracies over a sliding window.

5.2 Operations

To test whether the results from the previous section were due to the choice of available operations, we developed an intentionally sub-optimal search space containing four plain convolutions, two max-pooling operators, and the none and skip-connect operations. This proposed search space is clearly less parameter-efficient than the commonly used DARTS one (which uses both dilated and separable convolutions), and we expect it to perform worse.
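For illustration, the two operation pools could be written down as below; the standard DARTS list is well known, while the kernel sizes in the reduced pool are hypothetical placeholders, since the exact values are not reproduced here.

# Standard DARTS-style operation pool (separable and dilated convolutions).
DARTS_OPS = ["none", "skip_connect", "max_pool_3x3", "avg_pool_3x3",
             "sep_conv_3x3", "sep_conv_5x5", "dil_conv_3x3", "dil_conv_5x5"]

# Deliberately sub-optimal pool: plain convolutions and max pooling only.
# Kernel sizes below are illustrative assumptions.
REDUCED_OPS = ["none", "skip_connect",
               "conv_1x1", "conv_3x3", "conv_5x5", "conv_7x7",
               "max_pool_3x3", "max_pool_5x5"]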

We sampled architectures from this new search space and trained them with the DARTS training protocol (Cutout + DropPath + Auxiliary Towers), for fair comparison with the results from the previous section. Figure 5 shows the resulting histogram, together with the one obtained from the classical DARTS space of operations. The two distributions are only slightly shifted in accuracy. Given the minor difference in performance, the specific operations are not a key ingredient in the success of this search space. Very likely, it is the well-engineered cell structure that allows the model to perform as well as it does.

Figure 5: Histograms of the final accuracies for architectures sampled from the DARTS search space and from our modified version.
Figure 6: Performance of randomly sampled architectures with different numbers of cells. Error bars represent standard deviation.

5.3 Does changing seed and number of cells affect ranking?

A necessary practice for many weight-sharing methods (Liu et al., 2019; Pham et al., 2018) is to restart the training from scratch after the search phase, with a different number of cells. Recent works have warned that this procedure might negatively affect ranking; similarly, the role of the seed has previously been recognized as a fundamental element in reproducibility (Li and Talwalkar, 2019; Sciuto et al., 2019).

To test the impact of the seed, we randomly sampled architectures and trained each of them with two different seeds (Figure 7). Ranking is heavily influenced, as reflected by the Kendall tau correlation between the two sets of trainings, and the test accuracy itself changes appreciably between seeds, which is substantial considering the small gap between random architectures and NAS methods.

To test the depth-gap we trained another set of sampled architectures with a different number of cells (Figure 8). The correlation between the two different depths, as measured by Kendall tau, is not very strong, with architectures shifting up and down the rankings by a significant number of positions. Methods employing weight sharing (WS) would see an even more pronounced effect, as the architectures normally chosen at the shallower search depth would have been trained sub-optimally due to the WS itself (Sciuto et al., 2019).
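A minimal sketch of this ranking-stability check, assuming two lists of final accuracies for the same set of architectures trained under two settings (two seeds, or two depths):

import numpy as np
from scipy.stats import kendalltau

def ranking_stability(acc_a, acc_b):
    """Compare the rankings induced by two trainings of the same architectures."""
    acc_a, acc_b = np.asarray(acc_a), np.asarray(acc_b)
    tau, _ = kendalltau(acc_a, acc_b)
    # Rank in each setting (0 = best) and measure how far architectures move.
    rank_a = np.argsort(np.argsort(-acc_a))
    rank_b = np.argsort(np.argsort(-acc_b))
    max_shift = int(np.max(np.abs(rank_a - rank_b)))
    mean_acc_change = float(np.mean(np.abs(acc_a - acc_b)))
    return tau, max_shift, mean_acc_change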

These findings point towards two issues. The first is that, since the seed has such a large effect on ranking, the final accuracy reported should be averaged over multiple seeds. The second is that, if the lottery ticket hypothesis holds (i.e., specific sub-networks are better mainly due to their lucky initialization; Frankle and Carbin, 2018), then, together with our findings, this could be an additional reason why methods searching on a different number of cells than the final model struggle to significantly improve on the average randomly sampled architecture.

Figure 7: Changes in the ranking of different architectures when trained with two different seeds (A and B).
Figure 8: Changes in the ranking of different architectures when trained with different numbers of cells (same seed).

6 Discussion and best practices

In this section we offer some suggestions on how to mitigate the issues in NAS research.

Augmentation tricks: while achieving higher accuracies is clearly a desirable goal, we have shown in Section 4 that well-engineered training protocols can hide the contribution of the search algorithm. We therefore suggest that both results, with and without training tricks, should be reported. An example of best practice is found in Hundt et al. (2019).

Search Space: it is difficult to evaluate the effectiveness of any given proposed method without a measure of how good randomly sampled architectures are. This is not the same thing as performing a random search, which is a search strategy in itself; random sampling is simply used to establish how good the average model is. A simple approach to measuring the variability of any new search space is to randomly sample architectures and report the mean and standard deviation of their accuracies. We hope that future works will attempt to develop more expressive search spaces, capable of producing both good and bad network designs. Restricted search spaces, while guaranteeing good performance and quick results, will inevitably be constrained by the bounds of expert knowledge (local optima) and will be incapable of reaching truly innovative solutions (closer to the global optimum). As our findings in Section 5.2 suggest, the overall wiring (the macro-structure) is an extremely influential component of the final performance. As such, future research could investigate the optimal wiring at a global level: an interesting work in this direction is Xie et al. (2019a).

Multiple datasets: as the true goal of AutoML is to minimize the need for human experts, focusing the research efforts on a single dataset will inevitably lead to algorithmic overfitting and/or methods heavily dependent on hyperparameter tuning. The best solution for this is likely to test NAS algorithms on a battery of datasets, with different characteristics: image sizes, number of samples, class granularity and learning task.

Investigating hidden components: as our experiments in Sections 4 and 5.2 show, the DARTS search space is not only effective due to specific operations that are being chosen, but in greater part due to the overall macro-structure and the training protocol used. We suggest that proper ablation studies can lead to better understanding of the contributions of each element of the pipeline.

The importance of reproducibility: reproducibility is of extreme relevance in all sciences. To this end, it is very important that authors release not only their best found architecture but also the corresponding seed (if they did not average over multiple ones), as well as the code and the detailed training protocol (including hyperparameters). In this regard, NAS-Bench-101 (Ying et al., 2019), a dataset mapping architectures to their accuracy, can be extremely useful, as it allows the quality of search strategies to be assessed in isolation from other NAS components (e.g. search space, training protocol) in a quick and reproducible fashion. The code for this paper is open-source (link in the abstract). We also open-source the trained architectures used in Section 5.

Hyperparameter tuning cost: tuning hyperparameters in NAS is an extremely costly component. Therefore, we argue that either (i) hyperparameters should be general enough that they do not require tuning for further tasks, or (ii) the tuning cost should be included in the search budget.

7 Conclusions

AutoML, and NAS in particular, have the potential to truly democratize the use of machine learning for all, and could bring forth very notable improvements on a variety of tasks. To truly step forward, a principled approach, with a focus on fairness and reproducibility is needed.

In this paper we have shown that, for many NAS methods, the search space has been engineered such that all architectures perform similarly well, and that their relative ranking can easily shift. We have furthermore shown that the training protocol itself has a higher impact on the final accuracy than the actual network. Finally, we have provided some suggestions on how to make future research more robust to these issues.

We hope that our findings will help the community focus their efforts towards a more general approach to automated neural architecture design. Only then can we expect to learn from NAS-generated architectures as opposed to the current paradigm where search spaces are heavily influenced by our current (human) expert knowledge.

References

  • J. Frankle and M. Carbin (2018) The lottery ticket hypothesis: finding sparse, trainable neural networks. arXiv:1803.03635. Cited by: §5.3.
  • F. Carlucci, P. M. Esperança, M. Singh, A. Yang, V. Gabillon, H. Xu, Z. Chen, and J. Wang (2019) MANAS: Multi-Agent Neural Architecture Search. arXiv:1909.01051. Cited by: §3.1.
  • X. Chen (2019) Progressive differentiable architecture search. In International Conference on Computer Vision (ICCV). Cited by: §3.1.
  • E. D. Cubuk, B. Zoph, D. Mane, V. Vasudevan, and Q. V. Le (2018) Autoaugment: learning augmentation policies from data. arXiv:1805.09501. Cited by: §4.1.
  • T. Elsken, J. H. Metzen, and F. Hutter (2019) Neural architecture search: a survey.. Journal of Machine Learning Research 20 (55), pp. 1–21. Cited by: §1.
  • O. Gencoglu, M. van Gils, E. Guldogan, C. Morikawa, M. Süzen, M. Gruber, J. Leinonen, and H. Huttunen (2019) HARK side of deep learning–from grad student descent to automated machine learning. arXiv:1904.07633. Cited by: §1.
  • K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Computer Vision and Pattern Recognition (CVPR), pp. 770–778. Cited by: §1, §4.2.
  • G. Huang, Y. Sun, Z. Liu, D. Sedra, and K. Q. Weinberger (2016) Deep networks with stochastic depth. In European conference on computer vision, pp. 646–661. Cited by: Figure 9.
  • A. Hundt, V. Jain, and G. D. Hager (2019) sharpDARTS: faster and more accurate differentiable architecture search. arXiv:1903.09900. Cited by: §6.
  • A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105. Cited by: §1.
  • A. Krizhevsky (2009) Learning multiple layers of features from tiny images. Technical report University of Toronto. Cited by: §A.2, §A.2, §3.1.
  • G. Li, X. Zhang, Z. Wang, Z. Li, and T. Zhang (2019) StacNAS: Towards stable and consistent optimization for differentiable Neural Architecture Search. arXiv:1909.11926. Cited by: §3.1.
  • L. Li and L. Fei-Fei (2007) What, where and who? Classifying events by scene and object recognition. In International Conference on Computer Vision (ICCV), pp. 1–8. Cited by: §A.2, §3.1.
  • L. Li and A. Talwalkar (2019) Random search and reproducibility for neural architecture search. arXiv:1902.07638. Cited by: §1, §1, §2, §5.3.
  • M. Lindauer and F. Hutter (2019) Best practices for scientific research on neural architecture search. arXiv:1909.02453. Cited by: §1.
  • H. Liu, K. Simonyan, and Y. Yang (2019) DARTS: differentiable architecture search. In International Conference on Learning Representations (ICLR), Cited by: §1, §1, §2, §3.1, §4.1, §5.1, §5.3.
  • I. Loshchilov and F. Hutter (2017) SGDR: stochastic gradient descent with warm restarts. In International Conference on Learning Representations (ICLR). Cited by: §A.1.
  • Z. Lu, I. Whalen, V. Boddeti, Y. Dhebar, K. Deb, E. Goodman, and W. Banzhaf (2018) NSGA-Net: a multi-objective genetic algorithm for neural architecture search. In GECCO 2019. Cited by: §3.1.
  • R. Luo, F. Tian, T. Qin, E. Chen, and T. Liu (2018) Neural architecture optimization. In Advances in Neural Information Processing Systems (NIPS), pp. 7816–7827. Cited by: §3.1.
  • N. Nayman, A. Noy, T. Ridnik, I. Friedman, R. Jin, and L. Zelnik-Manor (2019) XNAS: neural architecture search with expert advice. arXiv:1906.08031. Cited by: §4.1, §4.2.
  • Y. E. Nesterov (1983) A method for solving the convex programming problem with convergence rate O(1/k^2). In Dokl. Akad. Nauk SSSR, Vol. 269, pp. 543–547. Cited by: §A.1.
  • H. Pham, M. Guan, B. Zoph, Q. Le, and J. Dean (2018) Efficient neural architecture search via parameter sharing. In International Conference on Machine Learning (ICML), pp. 4092–4101. Cited by: §1, §3.1, §5.3.
  • P. Popien (2019) AutoAugment. Note: GitHub repository, https://github.com/DeepVoltaire/AutoAugment Cited by: §4.1.
  • A. Quattoni and A. Torralba (2009) Recognizing indoor scenes. In Computer Vision and Pattern Recognition (CVPR), pp. 413–420. Cited by: §A.2, §3.1.
  • E. Real, S. Moore, A. Selle, S. Saxena, Y. L. Suematsu, J. Tan, Q. V. Le, and A. Kurakin (2017) Large-scale evolution of image classifiers. In International Conference on Machine Learning (ICML), pp. 2902–2911. Cited by: §1.
  • C. Sciuto, K. Yu, M. Jaggi, C. Musat, and M. Salzmann (2019) Evaluating the search phase of neural architecture search. arXiv:1902.08142. Cited by: §1, §2, §5.3, §5.3.
  • G. Shakhnarovich (2017) Fractalnet: ultra-deep neural networks without residuals. In International Conference on Learning Representations (ICLR), Cited by: Figure 9, §A.1, §A.1, §A.1, §A.1, §A.1, §4.1.
  • N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov (2014) Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research 15 (1), pp. 1929–1958. Cited by: §A.1.
  • J. Sun (2015) Delving deep into rectifiers: surpassing human level performance on imagenet classification. In Computer Vision and Pattern Recognition (CVPR), Cited by: §A.1.
  • C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi (2017) Inception-v4, Inception-ResNet and the impact of residual connections on learning. In AAAI Conference on Artificial Intelligence. Cited by: §1.
  • G. W. Taylor (2017) Improved regularization of convolutional neural networks with cutout. arXiv:1708.04552. Cited by: §A.1, §A.1, §A.1, §A.1, §A.1, §4.1.
  • Y. Weng, T. Zhou, L. Liu, and C. Xia (2019) Automatic convolutional neural architecture search for image classification under different scenes. IEEE Access 7, pp. 38495–38506. Cited by: §3.1.
  • S. Xie, A. Kirillov, R. Girshick, and K. He (2019a) Exploring randomly wired neural networks for image recognition. arXiv:1904.01569. Cited by: §6.
  • S. Xie, H. Zheng, C. Liu, and L. Lin (2019b) SNAS: stochastic neural architecture search. In International Conference on Learning Representations (ICLR), Cited by: §4.1.
  • C. Ying, A. Klein, E. Christiansen, E. Real, K. Murphy, and F. Hutter (2019) NAS-Bench-101: towards reproducible neural architecture search. In International Conference on Machine Learning (ICML 2019), pp. 7105–7114. Cited by: §6.
  • A. Zisserman (2008) Automated flower classification over a large number of classes. In Sixth Indian Conference on Computer Vision, Graphics & Image Processing, pp. 722–729. Cited by: §A.2, §3.1.
  • B. Zoph and Q. Le (2017) Neural architecture search with reinforcement learning. In International Conference on Learning Representations (ICLR), Cited by: §1.

Appendix A Appendix

This section details the datasets and the hyperparameters used for each method on each dataset. Search spaces were naturally left unchanged. Hyperparameters were chosen as close as possible to the original paper and occasionally updated to more recent implementations. The network size was tuned similarly for all methods for SPORT8, MIT67 and FLOWERS102. All experiments were run on NVIDIA Tesla V100 GPUs.

Method
DARTS StacNAS PDARTS MANAS CNAS
batch size S
batch size A
init channels S
init channels A
epochs S + ++
epochs A
optimizer S/A SGD SGD SGD SGD Adam
learning rates S
learning rates A
weight decay S/A
optimizer arch Adam Adam Adam Adam
learning rates arch
weight decay arch
nb cells S CIFAR
nb cells A CIFAR
nb cells S other datasets
nb cells A other datasets
nb intermediate nodes
Table 2: Hyperparameters for different NAS methods. "S" denotes the search stage and "A" denotes the augmentation stage. For learning rates, cosine annealing from an initial to a final value is used.

A.1 Methods and hyperparameters

During search, models are trained and validated on the training and validation subsets, respectively. During the final evaluation, the model is trained on the training+validation subsets and tested on the test subset.

Common hyperparameters. All methods share a number of common hyperparameters, specified here. When the SGD optimizer is used, momentum is kept at a fixed value; when Adam is used, its momentum parameters are likewise fixed. Gradient clipping is applied throughout.
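As an illustration, a sketch of how such shared settings translate into PyTorch is given below; the momentum, weight decay and clipping values are placeholders, since the exact numbers are not reproduced here.

import torch

def make_optimizer(model, lr, use_adam=False, weight_decay=3e-4):
    # Momentum and weight decay values are illustrative assumptions.
    if use_adam:
        return torch.optim.Adam(model.parameters(), lr=lr,
                                betas=(0.5, 0.999), weight_decay=weight_decay)
    return torch.optim.SGD(model.parameters(), lr=lr,
                           momentum=0.9, weight_decay=weight_decay)

def training_step(model, loss, optimizer, clip_value=5.0):
    # Gradient clipping is applied to all model parameters before the update.
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), clip_value)
    optimizer.step()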

DARTS, StacNAS, PDARTS, MANAS, CNAS common hyperparameters. These methods are inspired by DARTS code-wise and consequently share a number of common hyperparameters, which we list in Table 2.

DARTS. We used the following repository: https://github.com/khanrc/pt.darts. It notably updates the official implementation to a PyTorch version later than 0.4. Additional enhancements include Cutout (Taylor, 2017), DropPath (Shakhnarovich, 2017), and an auxiliary tower.

StacNAS. We used an unofficial implementation provided by the authors. The search process consists of two stages, the details of which are given in Table 2. Additional enhancements are the same as for DARTS.

PDARTS. We used the official implementation: https://github.com/chenxin061/pdarts. The search process consists of three stages, of which general details are given in Table 2. At each successive stage, the number of candidate operations decreases while the dropout probability on skip-connect increases, with a different schedule for CIFAR100 than for CIFAR10, SPORT8, MIT67 and FLOWERS102. Discovered cells are restricted to keep at most a fixed number of skip-connect operations. Additional enhancements include Cutout (Taylor, 2017), DropPath (Shakhnarovich, 2017) and an auxiliary tower.

MANAS. We used an unofficial implementation provided by the authors. The reward baseline is fixed, while gamma and the Boltzmann temperature decay schedule are set per dataset (with distinct values for SPORT8, MIT67 and FLOWERS102). Additional enhancements are the same as for DARTS.

CNAS. We used the official implementation: https://github.com/tianbaochou/CNAS. Label smoothing is used. Other additional enhancements include Cutout (Taylor, 2017) and DropPath (Shakhnarovich, 2017).

NSGANET. We used the official implementation: https://github.com/ianwhale/nsga-net. The search is done in the micro search space (searching cells over a fixed set of operations, with a fixed number of blocks per cell) on networks with a fixed number of layers, using a fixed population size, number of generations and number of offspring created per generation. During the search, candidate networks are trained for a small number of epochs with a reduced number of initial channels. For architecture evaluation, the network is composed of a different number of cells for CIFAR10 and CIFAR100 than for SPORT8, MIT67 and FLOWERS102. The final selected models are trained with momentum SGD, with the initial learning rate annealed down to zero following a cosine schedule, and weight decay. The filter increment is fixed, and squeeze-and-excitation is used. Additional enhancements include Cutout (Taylor, 2017), DropPath (Shakhnarovich, 2017) and an auxiliary tower.

NAO. We used the official PyTorch implementation: https://github.com/renqianluo/NAO_pytorch. The LSTM used to encode architectures has fixed token embedding and hidden state sizes, as does the LSTM used to decode architectures; the encoder and decoder are trained using Adam, with a fixed trade-off parameter and step size for the continuous optimization. The number of nodes per cell is fixed, and the normal cell is stacked a different number of times for SPORT8, MIT67 and FLOWERS102 than for CIFAR to form the CNN architecture. Candidate networks are trained and validated for a small number of epochs during the search. For architecture evaluation, the final CNN architecture is a deeper network trained with momentum SGD, with the initial learning rate annealed down to zero following a cosine schedule, and weight decay. Additional enhancements include Cutout (Taylor, 2017), DropPath (Shakhnarovich, 2017), dropout (Srivastava et al., 2014), and an auxiliary tower.

ENAS. We used the official TensorFlow implementation for experiments on CIFAR10 and CIFAR100 (https://github.com/melodyguan/enas). Experiments on SPORT8, MIT67 and FLOWERS102 use a more recent PyTorch implementation for the search (https://github.com/MengTianjian/enas-pytorch); because only the search process is implemented there, the evaluation code used is the same as for DARTS. The shared parameters are trained with Nesterov momentum (Nesterov, 1983), weight decay, gradient clipping and a cosine learning rate schedule (Loshchilov and Hutter, 2017), and are initialized with He initialization (Sun, 2015). The policy parameters are initialized uniformly in [-0.1, 0.1] and trained with Adam; a tanh constant and a temperature are applied to the controller's logits, and the controller entropy is added to the reward. For the evaluation, the searched architecture is extended to a larger number of cells (a different number for SPORT8, MIT67 and FLOWERS102) and trained with a cosine learning rate schedule.

A.2 Datasets

We present here the datasets used and how they are pre-processed.

CIFAR10. The CIFAR10 dataset (Krizhevsky, 2009) contains 10 classes and consists of 50,000 training images and 10,000 test images of size 32x32.

CIFAR100. The CIFAR100 dataset (Krizhevsky, 2009) contains 100 classes and consists of 50,000 training images and 10,000 test images of size 32x32.

Each of these datasets is split into training, validation and testing subsets. For both these datasets, we use standard data pre-processing and augmentation techniques, i.e. subtracting the channel mean and dividing by the channel standard deviation; centrally padding the training images and randomly cropping them back to 32x32; and randomly flipping them horizontally.

SPORT8. This is an action recognition dataset containing 8 sport event categories (Li and Fei-Fei, 2007). The tiny size of this dataset stresses the generalization capabilities of any NAS method applied to it.

MIT67. This is a dataset of 67 classes representing different indoor scenes; it consists of images of varying sizes (Quattoni and Torralba, 2009).

FLOWERS102. This is a dataset of 102 classes representing different species of flowers; it consists of images of varying sizes (Zisserman, 2008).

Each of these datasets is split into training, validation and testing subsets with fixed proportions. For each one, we use standard data pre-processing and augmentation techniques, i.e. subtracting the channel mean and dividing by the channel standard deviation; cropping the training images to a random size and aspect ratio, resizing them to a fixed resolution, and randomly changing their brightness, contrast and saturation; test images are resized and center-cropped to the same resolution.
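A torchvision sketch of this pipeline is shown below; the target resolution, jitter strengths and normalization statistics are placeholders, since the exact values are not reproduced here.

from torchvision import transforms

RES = 224  # hypothetical target resolution

# Channel statistics should be computed on each dataset's training split.
normalize = transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(RES),   # random size and aspect ratio
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
    transforms.ToTensor(),
    normalize,
])

test_transform = transforms.Compose([
    transforms.Resize(RES),              # resize, then crop the center
    transforms.CenterCrop(RES),
    transforms.ToTensor(),
    normalize,
])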

A.3 Additional results

A.3.1 Different training protocols for ResNet-50

Figure 9: Extension of Figure 2, including results obtained by training a ResNet-50 on CIFAR10. Bars with a darker shade are for ResNet-50 and bars with a lighter shade are for DARTS (same as Figure 2). Results are shown for multiple runs of each training protocol. For ResNet-50, the auxiliary tower was added after layer 2. As DropPath (Shakhnarovich, 2017) would not have been straightforward to apply, we instead implemented Stochastic Depth (Huang et al., 2016), to a similar effect.