
DOTS: Decoupling Operation and Topology in Differentiable Architecture Search

by Yu-Chao Gu et al.

Differentiable Architecture Search (DARTS) has attracted extensive attention due to its efficiency in searching for cell structures. However, DARTS mainly focuses on the operation search, leaving the cell topology implicitly dependent on the searched operation weights. This raises a question: can the cell topology be well represented by the operation weights? The answer is negative, as we observe that the operation weights fail to indicate the performance of the cell topology. In this paper, we propose to Decouple the Operation and Topology Search (DOTS), which decouples the cell topology representation from the operation weights to enable an explicit topology search. DOTS is achieved by defining an additional cell topology search space besides the original operation search space. Within the DOTS framework, we propose group annealing operation search and edge annealing topology search to bridge the optimization gap between the searched over-parameterized network and the derived child network. DOTS is efficient, costing only 0.2 and 1 GPU-days to search state-of-the-art cell architectures on CIFAR and ImageNet, respectively. By further searching the topology of DARTS' searched cell, we can improve DARTS' performance significantly. The code will be made publicly available.
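To make the decoupling concrete, here is a minimal sketch of the idea in Python/NumPy. The variable names (`alpha` for operation parameters, `beta` for edge/topology parameters) and the edge-selection rules are illustrative assumptions, not the paper's actual parameterization or annealing schedules: DARTS-style methods derive the topology implicitly by ranking edges with the largest operation weight, while a DOTS-style search maintains a separate, explicitly optimized topology parameter per edge.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - np.max(x, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
num_edges, num_ops = 4, 3  # toy cell: 4 candidate edges, 3 candidate ops

# Operation search space (shared by DARTS and DOTS): one weight vector per edge.
alpha = rng.normal(size=(num_edges, num_ops))
op_weights = softmax(alpha)          # which operation each edge prefers

# Topology search space (the DOTS addition): one explicit weight per edge.
beta = rng.normal(size=num_edges)
edge_weights = softmax(beta)         # how important each edge is

# DARTS-style implicit topology: keep the 2 edges whose strongest
# operation weight is largest (topology inferred from alpha).
darts_kept = np.argsort(op_weights.max(axis=1))[::-1][:2]

# DOTS-style explicit topology: keep the 2 edges with the largest
# topology weight (topology read directly from beta).
dots_kept = np.argsort(edge_weights)[::-1][:2]

print("op_weights per edge:\n", op_weights.round(3))
print("implicit (DARTS-style) edges:", sorted(darts_kept.tolist()))
print("explicit (DOTS-style) edges:", sorted(dots_kept.tolist()))
```

Because `beta` is optimized independently of `alpha`, the two selections can disagree, which is exactly the situation the abstract describes: operation weights alone need not indicate which topology performs best.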



