
NAS evaluation is frustratingly hard

by Antoine Yang, et al.
HUAWEI Technologies Co., Ltd.

Neural Architecture Search (NAS) is an exciting new field which promises to be as much of a game-changer as Convolutional Neural Networks were in 2012. Despite many great works leading to substantial improvements on a variety of tasks, comparison between different methods is still very much an open issue. While most algorithms are tested on the same datasets, there is no shared experimental protocol followed by all. As such, and due to the under-use of ablation studies, there is a lack of clarity regarding why certain methods are more effective than others. Our first contribution is a benchmark of 8 NAS methods on 5 datasets. To overcome the hurdle of comparing methods with different search spaces, we propose using a method's relative improvement over the randomly sampled average architecture, which effectively removes advantages arising from expertly engineered search spaces or training protocols. Surprisingly, we find that many NAS techniques struggle to significantly beat the average architecture baseline. We perform further experiments with the commonly used DARTS search space in order to understand the contribution of each component in the NAS pipeline. These experiments highlight that: (i) the use of tricks in the evaluation protocol has a predominant impact on the reported performance of architectures; (ii) the cell-based search space has a very narrow accuracy range, such that the seed has a considerable impact on architecture rankings; (iii) the hand-designed macro-structure (cells) is more important than the searched micro-structure (operations); and (iv) the depth-gap is a real phenomenon, evidenced by the change in rankings between 8- and 20-cell architectures. To conclude, we suggest best practices that we hope will prove useful for the community and help mitigate current NAS pitfalls. The code used is available at
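The relative-improvement metric described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, arguments, and the exact formulation (percentage improvement of a searched architecture's accuracy over the mean accuracy of architectures randomly sampled from the same search space) are assumptions for the purpose of the example.

```python
def relative_improvement(method_acc, random_sample_accs):
    """Relative improvement (%) of a searched architecture over the
    average randomly sampled architecture from the same search space.

    Hypothetical helper: names and signature are illustrative, not
    taken from the paper's released code.
    """
    if not random_sample_accs:
        raise ValueError("need at least one randomly sampled architecture")
    baseline = sum(random_sample_accs) / len(random_sample_accs)
    return 100.0 * (method_acc - baseline) / baseline


# Example: a searched architecture at 95.0% accuracy vs. random
# samples averaging 90.0% gives a relative improvement of ~5.56%.
ri = relative_improvement(95.0, [90.0, 94.0, 86.0])
```

Because the metric is normalized by the random baseline of each method's own search space, it allows comparing methods whose search spaces and training protocols differ; a value near zero indicates the search itself adds little over the engineered space.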

