NAS-Bench-x11 and the Power of Learning Curves

11/05/2021
by   Shen Yan, et al.

While early research in neural architecture search (NAS) required extreme computational resources, the recent releases of tabular and surrogate benchmarks have greatly increased the speed and reproducibility of NAS research. However, two of the most popular benchmarks do not provide the full training information for each architecture. As a result, on these benchmarks it is not possible to run many multi-fidelity techniques, such as learning curve extrapolation, that require evaluating architectures at arbitrary epochs. In this work, we present a method using singular value decomposition and noise modeling to create surrogate benchmarks, NAS-Bench-111, NAS-Bench-311, and NAS-Bench-NLP11, that output the full training information for each architecture rather than just the final validation accuracy. We demonstrate the power of the full training information by introducing a learning curve extrapolation framework that modifies single-fidelity algorithms, and we show that it yields improvements over popular single-fidelity algorithms that were state-of-the-art at the time of their release. Our code and pretrained models are available at https://github.com/automl/nas-bench-x11.
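The core compression idea behind the SVD-based surrogates can be illustrated with a small sketch: a matrix of learning curves is factored with a truncated SVD, so each curve is summarized by a handful of coefficients over a shared basis, and a surrogate model only needs to predict those coefficients instead of every epoch's accuracy. The data below is synthetic and all names are illustrative; this is not the paper's implementation, just a minimal demonstration of the low-rank representation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 "architectures", each with a 100-epoch
# validation-accuracy curve (saturating exponential plus small noise).
n_archs, n_epochs = 200, 100
t = np.linspace(0, 1, n_epochs)
rates = rng.uniform(2, 10, size=(n_archs, 1))
final = rng.uniform(0.85, 0.95, size=(n_archs, 1))
curves = final * (1 - np.exp(-rates * t)) + rng.normal(0, 0.005, (n_archs, n_epochs))

# Truncated SVD: each curve becomes k coefficients over shared basis
# vectors. A surrogate would predict these k coefficients from an
# architecture encoding, then reconstruct the full curve.
k = 5
U, S, Vt = np.linalg.svd(curves, full_matrices=False)
coeffs = U[:, :k] * S[:k]        # (n_archs, k) per-curve coefficients
basis = Vt[:k]                   # (k, n_epochs) shared basis
reconstructed = coeffs @ basis   # low-rank approximation of all curves

err = np.abs(curves - reconstructed).mean()
print(f"mean abs reconstruction error with k={k}: {err:.4f}")
```

With only 5 coefficients per curve, the reconstruction error sits at roughly the noise level of the synthetic curves, which is what makes predicting coefficients (rather than 100 per-epoch values) an attractive surrogate design.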

