PredNAS: A Universal and Sample Efficient Neural Architecture Search Framework

10/26/2022
by Liuchun Yuan et al.

In this paper, we present PredNAS, a general and effective framework for Neural Architecture Search (NAS). The motivation is that, given a differentiable performance estimation function, we can directly optimize an architecture towards higher performance by simple gradient ascent. Specifically, we adopt a neural predictor as the performance estimator. Surprisingly, PredNAS achieves state-of-the-art results on NAS benchmarks with only a handful of training samples (fewer than 100). To validate the universality of our method, we also apply it to large-scale tasks, comparing against RegNet on ImageNet and YOLOX on MSCOCO. The results demonstrate that PredNAS can discover novel architectures with competitive performance under specific computational complexity constraints.
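The core idea is straightforward to sketch. Below is a minimal, illustrative PyTorch version, assuming a continuous architecture encoding: train a small neural predictor on (encoding, score) pairs from a few evaluated architectures, then run gradient ascent on the encoding to maximize the predicted score. All names here (Predictor, ARCH_DIM, search) and the encoding/clamping details are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of predictor-guided gradient-ascent NAS.
# Names and encoding details are illustrative, not from the paper.
import torch
import torch.nn as nn

ARCH_DIM = 32  # assumed size of the continuous architecture encoding

class Predictor(nn.Module):
    """Small MLP mapping an architecture encoding to a predicted score."""
    def __init__(self, dim=ARCH_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def search(predictor, init_arch, steps=100, lr=0.1):
    """Gradient ascent on the encoding w.r.t. the predictor's output."""
    arch = init_arch.clone().requires_grad_(True)
    opt = torch.optim.Adam([arch], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -predictor(arch)     # negate: optimizers minimize
        loss.backward()
        opt.step()
        arch.data.clamp_(0.0, 1.0)  # keep the encoding in a valid range
    return arch.detach()

# Usage: fit `predictor` on evaluated (encoding, accuracy) pairs first,
# then ascend from a random starting encoding.
predictor = Predictor()
best_encoding = search(predictor, torch.rand(ARCH_DIM))
```

In practice the continuous result would then be decoded back to a discrete architecture and checked against the complexity constraint; that step is omitted here.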
