
Can weight sharing outperform random architecture search? An investigation with TuNAS

by Gabriel Bender, et al.

Efficient Neural Architecture Search methods based on weight sharing have shown promise in democratizing Neural Architecture Search for computer vision models. There is, however, an ongoing debate about whether these efficient methods are significantly better than random search. Here we perform a thorough comparison between efficient and random search methods on a family of progressively larger and more challenging search spaces for image classification and detection on ImageNet and COCO. While the efficacy of both methods is problem-dependent, our experiments demonstrate that there are large, realistic tasks where efficient search methods can provide substantial gains over random search. In addition, we propose and evaluate techniques which improve the quality of searched architectures and reduce the need for manual hyper-parameter tuning. Source code and experiment data are available at
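To make the random-search baseline concrete, here is a minimal illustrative sketch of random architecture search over a toy search space. The space definition, the `toy_evaluate` scoring function, and all names here are hypothetical stand-ins, not the paper's actual TuNAS search spaces or evaluation procedure; a real search would train each candidate (or estimate its quality via shared weights) and measure validation accuracy.

```python
import random

# Hypothetical toy search space: each architecture is a choice of
# kernel size, expansion ratio, and depth (not the paper's real space).
SEARCH_SPACE = {
    "kernel_size": [3, 5, 7],
    "expansion": [3, 6],
    "depth": [2, 3, 4],
}

def sample_architecture(rng):
    """Draw one architecture uniformly at random from the space."""
    return {name: rng.choice(options) for name, options in SEARCH_SPACE.items()}

def random_search(evaluate, num_trials=20, seed=0):
    """Random search baseline: sample candidates, evaluate each, keep the best."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(num_trials):
        arch = sample_architecture(rng)
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

# Stand-in objective; in practice this is the (estimated) accuracy of the
# candidate network, possibly penalized by latency.
def toy_evaluate(arch):
    return arch["kernel_size"] + arch["expansion"] * arch["depth"]

best, score = random_search(toy_evaluate)
```

The weight-sharing methods studied in the paper amortize this loop by training one over-parameterized supernetwork once and scoring candidates with its shared weights, rather than evaluating each sampled architecture independently.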
