DOTS: Decoupling Operation and Topology in Differentiable Architecture Search

10/02/2020
by   Yu-Chao Gu, et al.

Differentiable Architecture Search (DARTS) has attracted extensive attention due to its efficiency in searching for cell structures. However, DARTS mainly focuses on the operation search, leaving the cell topology implicitly dependent on the searched operation weights. This raises a question: can the cell topology be well represented by the operation weights? The answer is negative, because we observe that the operation weights fail to indicate the performance of the cell topology. In this paper, we propose to Decouple the Operation and Topology Search (DOTS), which decouples the cell topology representation from the operation weights to enable an explicit topology search. DOTS is achieved by defining an additional cell topology search space alongside the original operation search space. Within the DOTS framework, we propose group annealing operation search and edge annealing topology search to bridge the optimization gap between the searched over-parameterized network and the derived child network. DOTS is efficient, costing only 0.2 and 1 GPU-days to search state-of-the-art cell architectures on CIFAR and ImageNet, respectively. By further searching for the topology of DARTS' searched cell, we can significantly improve DARTS' performance. The code will be made publicly available.
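The abstract itself contains no code, but the decoupling idea can be illustrated with a minimal PyTorch sketch: each edge keeps its own operation logits (alpha), while the cell owns a separate set of topology logits (beta) over its edges, so that which edges exist is searched explicitly rather than inferred from operation weights. All names here (DecoupledEdge, DecoupledCell, make_candidate_ops) and the single shared annealing temperature are assumptions for illustration; the paper's group annealing operation search and edge annealing topology search are more structured than this sketch.

import torch
import torch.nn as nn
import torch.nn.functional as F

def make_candidate_ops(channels):
    # Hypothetical candidate operations; DARTS uses separable convs,
    # dilated convs, pooling, skip connections, etc.
    return nn.ModuleList([
        nn.Conv2d(channels, channels, 3, padding=1, bias=False),  # 3x3 conv
        nn.Conv2d(channels, channels, 5, padding=2, bias=False),  # 5x5 conv
        nn.Identity(),                                            # skip connection
    ])

class DecoupledEdge(nn.Module):
    """One edge of the cell: alpha selects *what* the edge computes.
    Annealing the softmax temperature toward zero sharpens the relaxed
    mixture toward a discrete operation choice."""

    def __init__(self, channels):
        super().__init__()
        self.ops = make_candidate_ops(channels)
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(self.ops)))  # operation logits

    def forward(self, x, temperature):
        weights = F.softmax(self.alpha / temperature, dim=0)  # annealed operation mix
        return sum(w * op(x) for w, op in zip(weights, self.ops))

class DecoupledCell(nn.Module):
    """Cell with an explicit topology search space: beta is a separate
    set of logits over edges, decoupled from the per-edge alphas."""

    def __init__(self, channels, num_edges=4):
        super().__init__()
        self.edges = nn.ModuleList(DecoupledEdge(channels) for _ in range(num_edges))
        self.beta = nn.Parameter(1e-3 * torch.randn(num_edges))  # topology logits

    def forward(self, x, temperature):
        edge_weights = F.softmax(self.beta / temperature, dim=0)  # annealed edge mix
        return sum(w * edge(x, temperature) for w, edge in zip(edge_weights, self.edges))

# Usage sketch: anneal the temperature over the search so the relaxed
# supernetwork gradually approaches the discrete child architecture.
cell = DecoupledCell(channels=16)
x = torch.randn(2, 16, 32, 32)
for t in torch.linspace(5.0, 0.1, steps=100):
    out = cell(x, temperature=float(t))

Driving the temperature toward zero is what (in this simplified form) shrinks the gap between the searched over-parameterized network and the derived child network, which is the optimization gap the annealing schemes in the paper are designed to bridge.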


Related research

11/18/2020 · Explicitly Learning Topology for Differentiable Neural Architecture Search
Differentiable neural architecture search (DARTS) has gained much succes...

01/11/2021 · Unchain the Search Space with Hierarchical Differentiable Architecture Search
Differentiable architecture search (DAS) has made great progress in sear...

08/26/2019 · Customizable Architecture Search for Semantic Segmentation
In this paper, we propose a Customizable Architecture Search (CAS) appro...

11/21/2019 · AutoShrink: A Topology-aware NAS for Discovering Efficient Neural Architecture
Resource is an important constraint when deploying Deep Neural Networks ...

01/27/2021 · Towards Improving the Consistency, Efficiency, and Flexibility of Differentiable Neural Architecture Search
Most differentiable neural architecture search methods construct a super...

08/30/2019 · Learning Digital Circuits: A Journey Through Weight Invariant Self-Pruning Neural Networks
Recently, in the paper "Weight Agnostic Neural Networks" Gaier & Ha util...

10/16/2020 · G-DARTS-A: Groups of Channel Parallel Sampling with Attention
Differentiable Architecture Search (DARTS) provides a baseline for searc...
