NAS-HPO-Bench-II: A Benchmark Dataset on Joint Optimization of Convolutional Neural Network Architecture and Training Hyperparameters

10/19/2021
by Yoichi Hirose, et al.

Benchmark datasets for neural architecture search (NAS) have been developed to alleviate the computationally expensive evaluation process and to ensure fair comparisons. Recent NAS benchmarks focus only on architecture optimization, even though training hyperparameters affect the performance of the obtained models. Building a benchmark dataset for the joint optimization of architecture and training hyperparameters is therefore essential to further NAS research. The existing NAS-HPO-Bench is a benchmark for joint optimization, but it does not consider the network connectivity design used in modern NAS algorithms. This paper introduces the first benchmark dataset for the joint optimization of network connections and training hyperparameters, which we call NAS-HPO-Bench-II. We collect performance data for 4K cell-based convolutional neural network architectures trained on the CIFAR-10 dataset under different learning rate and batch size settings, resulting in data for 192K configurations. The dataset includes exact data for 12-epoch training. We further build a surrogate model that predicts accuracy after 200-epoch training, providing performance data for longer training. By analyzing NAS-HPO-Bench-II, we confirm the dependency between architecture and training hyperparameters and the necessity of joint optimization. Finally, we demonstrate the benchmarking of baseline optimization algorithms on NAS-HPO-Bench-II.
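To make the joint search space concrete, below is a minimal sketch in Python of how a tabular benchmark of this kind can drive a baseline optimizer. Everything in it is an illustrative assumption: the operation set, the cell size, the learning-rate and batch-size grids, and the `lookup` function, which stands in for whatever query interface the released dataset actually provides; none of these are the paper's exact values or API.

```python
import random

# Illustrative joint search space for a cell-based CNN benchmark.
# These grids are assumptions for the sketch, not the dataset's real values.
OPS = ["none", "skip_connect", "conv_3x3", "avg_pool_3x3"]       # assumed op set
NUM_EDGES = 6                                                    # assumed cell size
LEARNING_RATES = [0.1, 0.05, 0.025, 0.0125, 0.00625, 0.003125]   # assumed grid
BATCH_SIZES = [16, 32, 64, 128, 256, 512, 1024, 2048]            # assumed grid

def sample_config(rng):
    """Draw a random point from the joint architecture/hyperparameter space."""
    arch = tuple(rng.choice(OPS) for _ in range(NUM_EDGES))
    return arch, rng.choice(LEARNING_RATES), rng.choice(BATCH_SIZES)

def random_search(lookup, budget=100, seed=0):
    """Baseline joint optimization: random search over the full space.

    `lookup(arch, lr, batch_size)` stands in for querying the benchmark
    table (exact 12-epoch results or surrogate 200-epoch predictions).
    """
    rng = random.Random(seed)
    best_config, best_acc = None, -1.0
    for _ in range(budget):
        arch, lr, bs = sample_config(rng)
        acc = lookup(arch, lr, bs)
        if acc > best_acc:
            best_config, best_acc = (arch, lr, bs), acc
    return best_config, best_acc

if __name__ == "__main__":
    # Stand-in lookup returning deterministic pseudo-accuracies for the demo.
    def fake_lookup(arch, lr, bs):
        return random.Random(hash((arch, lr, bs))).random()

    config, acc = random_search(fake_lookup, budget=500)
    print("best config:", config, "accuracy:", acc)
```

The point of a tabular benchmark is that `lookup` is a table read rather than a training run, so baselines like the random search above can be compared over many repetitions at negligible cost.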

Related research:

- NAS-Bench-1Shot1: Benchmarking and Dissecting One-shot Neural Architecture Search (01/28/2020)
- NAS-Bench-101: Towards Reproducible Neural Architecture Search (02/25/2019)
- AgEBO-Tabular: Joint Neural Architecture and Hyperparameter Search with Autotuned Data-Parallel Training for Tabular Data (10/30/2020)
- Light and Accurate: Neural Architecture Search via Two Constant Shared Weights Initialisations (02/09/2023)
- UENAS: A Unified Evolution-based NAS Framework (03/08/2022)
- How to Train Your Super-Net: An Analysis of Training Heuristics in Weight-Sharing NAS (03/09/2020)
- Balanced Mixture of SuperNets for Learning the CNN Pooling Architecture (06/21/2023)
