DSNAS: Direct Neural Architecture Search without Parameter Retraining

02/21/2020
by   Shoukang Hu, et al.

If NAS methods are solutions, what is the problem? Most existing NAS methods require two-stage parameter optimization, yet the performance of the same architecture correlates poorly between the two stages. Based on this observation, we propose a new problem definition for NAS: task-specific, end-to-end search. We argue that, given a computer vision task for which a NAS method is expected, this definition reduces the vaguely defined NAS evaluation to (i) accuracy on this task and (ii) the total computation consumed to finally obtain a model with satisfactory accuracy. Since most existing methods do not solve this problem directly, we propose DSNAS, an efficient differentiable NAS framework that simultaneously optimizes architecture and parameters with a low-biased Monte Carlo estimate. Child networks derived from DSNAS can be deployed directly without parameter retraining. Compared with two-stage methods, DSNAS successfully discovers networks with comparable accuracy (74.4%) on ImageNet in 420 GPU hours, reducing the total time by more than 34%.
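The single-stage idea in the abstract, sampling one child network per step and updating its operation parameters and the architecture distribution in the same pass, can be illustrated with a toy sketch. This is not DSNAS itself: it uses a plain score-function (REINFORCE) gradient as a stand-in for DSNAS's low-biased straight-through estimate, and the one-edge "supernet", ops, and learning rates are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-edge "supernet" with two candidate ops:
#   op 0: identity, y = x      (no parameters, fixed loss)
#   op 1: linear,   y = w * x  (trainable weight w)
w = 1.5                  # parameter of op 1
alpha = np.zeros(2)      # architecture logits over the two ops
x, target = 1.0, 2.0     # one training example
lr_w, lr_a = 0.05, 0.5

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for _ in range(400):
    p = softmax(alpha)
    k = rng.choice(2, p=p)           # Monte Carlo: sample one child network
    y = x if k == 0 else w * x
    loss = (y - target) ** 2
    if k == 1:                       # parameter step for the sampled op only
        w -= lr_w * 2 * (y - target) * x
    # Architecture step in the same pass: score-function gradient of the
    # expected loss w.r.t. the logits (grad of log p_k is one_hot(k) - p).
    grad_logp = -p
    grad_logp[k] += 1.0
    alpha -= lr_a * loss * grad_logp

p = softmax(alpha)
print(p.argmax(), round(w, 3))
```

Because op 1 can fit the target while op 0 cannot, the architecture distribution concentrates on op 1 while its weight trains toward 2.0 in the same loop, with no retraining stage afterward.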


Related research

- 11/11/2020: Efficient Neural Architecture Search for End-to-end Speech Recognition via Straight-Through Gradients
- 02/16/2021: EPE-NAS: Efficient Performance Estimation Without Training for Neural Architecture Search
- 11/06/2021: TND-NAS: Towards Non-differentiable Objectives in Progressive Differentiable NAS Framework
- 11/05/2018: You Only Search Once: Single Shot Neural Architecture Search via Direct Sparse Optimization
- 12/15/2022: Colab NAS: Obtaining lightweight task-specific convolutional neural networks following Occam's razor
- 09/22/2020: Deep Learning based NAS Score and Fibrosis Stage Prediction from CT and Pathology Data
- 09/13/2021: DHA: End-to-End Joint Optimization of Data Augmentation Policy, Hyper-parameter and Architecture
