
Are Neural Architecture Search Benchmarks Well Designed? A Deeper Look Into Operation Importance

by   Vasco Lopes, et al.
Universidade da Beira Interior

Neural Architecture Search (NAS) benchmarks have significantly improved the development and comparison of NAS methods while drastically reducing the computational overhead, by providing meta-information about thousands of trained neural networks. However, tabular benchmarks have several drawbacks that can hinder fair comparisons and yield unreliable results. They usually offer only a small pool of operations in heavily constrained search spaces – typically cell-based neural networks with pre-defined outer skeletons. In this work, we conducted an empirical analysis of the widely used NAS-Bench-101, NAS-Bench-201 and TransNAS-Bench-101 benchmarks in terms of their generalizability and of how different operations influence the performance of the generated architectures. We found that only a subset of the operation pool is required to generate architectures close to the upper bound of the performance range. Moreover, the performance distribution is negatively skewed, with a higher density of architectures near the upper bound. We consistently found convolution layers to have the highest impact on an architecture's performance, and that specific combinations of operations favor top-scoring architectures. These findings shed light on the correct evaluation and comparison of NAS methods using NAS benchmarks, showing that directly searching on NAS-Bench-201, ImageNet16-120 and TransNAS-Bench-101 produces more reliable results than searching only on CIFAR-10. Furthermore, we provide suggestions for future benchmark evaluation and design. The code used to conduct the evaluations is available at
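The negative skew claimed above can be checked with a simple statistic. The sketch below is a hypothetical illustration, not the paper's code: it computes the Fisher-Pearson skewness coefficient on a synthetic accuracy sample shaped like the distributions described (most architectures clustered near the upper bound, with a long tail of poor performers). All values here are placeholders, not real benchmark data.

```python
# Hypothetical sketch: measuring the skew of a benchmark's accuracy
# distribution. Accuracies are synthetic, not drawn from any NAS benchmark.
import numpy as np


def skewness(values):
    """Fisher-Pearson coefficient of skewness (biased estimator)."""
    x = np.asarray(values, dtype=float)
    centered = x - x.mean()
    return float((centered ** 3).mean() / x.std() ** 3)


# Simulate a negatively skewed sample: a dense cluster near a ~95%
# upper bound with a long left tail of weaker architectures.
rng = np.random.default_rng(0)
accuracies = 95.0 - rng.gamma(shape=2.0, scale=2.0, size=10_000)

print(skewness(accuracies))  # negative value -> left (negative) skew
```

A negative result indicates the mass of the distribution sits above the mean, matching the observation that random sampling already lands many architectures close to the benchmark's best attainable accuracy.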
