Towards Improving the Consistency, Efficiency, and Flexibility of Differentiable Neural Architecture Search

01/27/2021
by Yibo Yang, et al.

Most differentiable neural architecture search methods construct a super-net for search and derive a target-net, a sub-graph of the super-net, for evaluation. This creates a significant gap between the architectures in search and evaluation, so current methods suffer from an inconsistent, inefficient, and inflexible search process. In this paper, we introduce EnTranNAS, which is composed of Engine-cells and Transit-cells. The Engine-cell is differentiable for architecture search, while the Transit-cell only transits a sub-graph obtained by architecture derivation. Consequently, the gap between the architectures in search and evaluation is significantly reduced. Our method also saves much memory and computation cost, which speeds up the search process. A feature-sharing strategy is introduced for more balanced optimization and more efficient search. Furthermore, we develop an architecture derivation method to replace the traditional one based on a hand-crafted rule. Our method enables differentiable sparsification and keeps the derived architecture equivalent to that of the Engine-cell, which further improves the consistency between search and evaluation. It also supports topology search, in which a node can be connected to any number of prior nodes, so the searched architectures can be more flexible. For experiments on CIFAR-10, our search on the standard space requires only 0.06 GPU-day. We further achieve an error rate of 2.22% on the extended space. We can also directly perform the search on ImageNet with learnable topology and achieve a top-1 error rate of 23.8%.
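The abstract contrasts a differentiable Engine-cell with a discrete Transit-cell. The following minimal PyTorch sketch illustrates the general idea; it is our illustration rather than the authors' released code, and the candidate-op set, the single-edge scope, and the argmax derivation rule are all assumptions for exposition.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Candidate operations on an edge; the real search space is richer.
CANDIDATE_OPS = {
    "skip": lambda c: nn.Identity(),
    "conv3x3": lambda c: nn.Conv2d(c, c, 3, padding=1, bias=False),
    "conv5x5": lambda c: nn.Conv2d(c, c, 5, padding=2, bias=False),
}

class EngineEdge(nn.Module):
    # Differentiable edge: a softmax-weighted mixture of all candidate ops,
    # as in DARTS; the architecture parameters alpha are learned by gradient.
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([fn(channels) for fn in CANDIDATE_OPS.values()])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

class TransitEdge(nn.Module):
    # Derived edge: keeps only the strongest candidate, so this cell computes
    # exactly the discrete sub-graph that evaluation will use. (In practice the
    # op weights would be inherited; they are re-created here for brevity.)
    def __init__(self, engine_edge, channels):
        super().__init__()
        name = list(CANDIDATE_OPS)[int(engine_edge.alpha.argmax())]
        self.op = CANDIDATE_OPS[name](channels)

    def forward(self, x):
        return self.op(x)

# Usage: search with the differentiable edge, then derive the discrete one.
engine = EngineEdge(channels=16)
x = torch.randn(2, 16, 8, 8)
y_search = engine(x)                       # mixture output during search
transit = TransitEdge(engine, channels=16)
y_eval = transit(x)                        # single-op output for evaluation

Because only the Engine-cells keep the full mixture while the remaining cells are Transit-cells, the search-time network closely matches the evaluation network and requires far less memory and computation.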

Related research

Explicitly Learning Topology for Differentiable Neural Architecture Search (11/18/2020)
Differentiable neural architecture search (DARTS) has gained much succes...

ISTA-NAS: Efficient and Consistent Neural Architecture Search by Sparse Coding (10/13/2020)
Neural architecture search (NAS) aims to produce the optimal sparse solu...

PC-DARTS: Partial Channel Connections for Memory-Efficient Differentiable Architecture Search (07/12/2019)
Differentiable architecture search (DARTS) provided a fast solution in f...

NASGEM: Neural Architecture Search via Graph Embedding Method (07/08/2020)
Neural Architecture Search (NAS) automates and prospers the design of ne...

Scheduled Differentiable Architecture Search for Visual Recognition (09/23/2019)
Convolutional Neural Networks (CNN) have been regarded as a capable clas...

DOTS: Decoupling Operation and Topology in Differentiable Architecture Search (10/02/2020)
Differentiable Architecture Search (DARTS) has attracted extensive atten...

Mutually-aware Sub-Graphs Differentiable Architecture Search (07/09/2021)
Differentiable architecture search is prevalent in the field of NAS beca...
