Faster Gradient-based NAS Pipeline Combining Broad Scalable Architecture with Confident Learning Rate

09/18/2020
by Zixiang Ding, et al.

To further improve the search efficiency of Neural Architecture Search (NAS), we propose B-DARTS, a novel pipeline that combines a broad scalable architecture with a Confident Learning Rate (CLR). In B-DARTS, a Broad Convolutional Neural Network (BCNN) is employed as the scalable architecture for DARTS, a popular differentiable NAS approach. On one hand, the broad topology of BCNN offers two advantages over its deep counterpart: faster single-step training and higher memory efficiency (i.e., a larger batch size for architecture search), both of which improve the search efficiency of NAS. On the other hand, DARTS discovers the optimal architecture with a gradient-based optimization algorithm, which benefits from both advantages of BCNN simultaneously. Like vanilla DARTS, B-DARTS suffers from the performance collapse issue, in which weight-free operations tend to be selected by the search strategy. To mitigate this issue, we propose CLR, which treats the confidence of the gradient used to update the architecture weights as increasing with the training time of the over-parameterized model. Experimental results on CIFAR-10 and ImageNet show that 1) B-DARTS delivers state-of-the-art efficiency of 0.09 GPU days using the first-order approximation on CIFAR-10; 2) the architecture learned by B-DARTS achieves competitive performance on ImageNet with state-of-the-art numbers of multiply-accumulate operations and parameters; and 3) the proposed CLR effectively alleviates the performance collapse issue of both B-DARTS and DARTS.
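
The CLR idea described above can be sketched as a schedule that scales the architecture-weight learning rate by a confidence factor that grows with training time, so that early, unreliable gradients move the architecture weights only slightly. The linear ramp below and the names confident_lr, base_lr, and arch_optimizer are illustrative assumptions for a DARTS-style search loop, not the paper's exact formulation.

import torch

def confident_lr(base_lr: float, epoch: int, total_epochs: int) -> float:
    """Confidence-scaled learning rate for architecture weights.

    Assumption: confidence grows linearly from near 0 to 1 over the search,
    so early architecture-gradient steps are damped. The exact schedule in
    the paper may differ.
    """
    confidence = min(1.0, (epoch + 1) / total_epochs)
    return base_lr * confidence

# Hypothetical usage inside a DARTS-style bi-level search loop:
alphas = torch.zeros(14, 8, requires_grad=True)        # architecture weights (example shape)
arch_optimizer = torch.optim.Adam([alphas], lr=3e-4)   # base learning rate

total_epochs = 50
for epoch in range(total_epochs):
    lr = confident_lr(3e-4, epoch, total_epochs)
    for group in arch_optimizer.param_groups:
        group["lr"] = lr                                # apply CLR before each architecture update
    # ... compute the validation loss w.r.t. alphas, then:
    # arch_optimizer.zero_grad(); val_loss.backward(); arch_optimizer.step()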

Related research

04/16/2020 - Geometry-Aware Gradient Algorithms for Neural Architecture Search
Many recent state-of-the-art methods for neural architecture search (NAS...

01/18/2020 - Efficient Neural Architecture Search: A Broad Version
Efficient Neural Architecture Search (ENAS) achieves novel efficiency fo...

11/23/2020 - ROME: Robustifying Memory-Efficient NAS via Topology Disentanglement and Gradients Accumulation
Single-path based differentiable neural architecture search has great st...

08/25/2021 - iDARTS: Improving DARTS by Node Normalization and Decorrelation Discretization
Differentiable ARchiTecture Search (DARTS) uses a continuous relaxation ...

11/15/2021 - Stacked BNAS: Rethinking Broad Convolutional Neural Network for Neural Architecture Search
Different from other deep scalable architecture based NAS approaches, Br...

10/16/2021 - GradSign: Model Performance Inference with Theoretical Insights
A key challenge in neural architecture search (NAS) is quickly inferring...

11/30/2021 - Improving Differentiable Architecture Search with a Generative Model
In differentiable neural architecture search (NAS) algorithms like DARTS...
