RankNAS: Efficient Neural Architecture Search by Pairwise Ranking

by Chi Hu et al.

This paper addresses the efficiency challenge of Neural Architecture Search (NAS) by formulating the task as a ranking problem. Previous methods require numerous training examples to accurately estimate architecture performance, although the actual goal is only to distinguish "good" candidates from "bad" ones. Here we do not resort to performance predictors. Instead, we propose a performance ranking method (RankNAS) based on pairwise ranking, which enables efficient architecture search with far fewer training examples. Moreover, we develop an architecture selection method that prunes the search space and concentrates the search on more promising candidates. Extensive experiments on machine translation and language modeling tasks show that RankNAS can design high-performance architectures while being orders of magnitude faster than state-of-the-art NAS systems.
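To illustrate the core idea of learning to rank architectures pairwise rather than predicting absolute performance, here is a minimal sketch. It is not the paper's actual RankNAS implementation: the linear scorer, the two-dimensional (depth, width) features, and the synthetic "hidden quality" function are all hypothetical stand-ins for the architecture features and training signal the paper uses.

```python
import math
import random

def score(w, x):
    # Linear scoring function: a higher score means the ranker
    # predicts this architecture is better.
    return sum(wi * xi for wi, xi in zip(w, x))

def train_pairwise_ranker(pairs, dim, lr=0.1, epochs=200):
    """Train on pairs of (better_features, worse_features).

    Minimizes the pairwise logistic loss log(1 + exp(-margin)),
    where margin = score(better) - score(worse). Only relative
    order is supervised, never an absolute performance value.
    """
    w = [0.0] * dim
    for _ in range(epochs):
        for better, worse in pairs:
            margin = score(w, better) - score(w, worse)
            g = -1.0 / (1.0 + math.exp(margin))  # dLoss/dMargin
            for i in range(dim):
                w[i] -= lr * g * (better[i] - worse[i])
    return w

# Toy candidates: hypothetical (depth, width) feature vectors.
random.seed(0)
archs = [(random.random(), random.random()) for _ in range(40)]
true_quality = lambda a: 2 * a[0] + a[1]  # hidden ground truth (synthetic)

# Build pairwise comparisons instead of labeled performance values.
pairs = []
for _ in range(300):
    a, b = random.sample(archs, 2)
    pairs.append((a, b) if true_quality(a) > true_quality(b) else (b, a))

w = train_pairwise_ranker(pairs, dim=2)
best = max(archs, key=lambda a: score(w, a))
# Fraction of training pairs the learned ranker orders correctly.
agree = sum(score(w, a) > score(w, b) for a, b in pairs) / len(pairs)
print(round(agree, 2))
```

The point of the sketch is the supervision signal: each training example is a comparison, so a single evaluated pair of architectures yields a usable label, which is why a ranking formulation can get by with far fewer training examples than a regression-style performance predictor.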


DARTS: Differentiable Architecture Search

This paper addresses the scalability challenge of architecture search by...

Angle-based Search Space Shrinking for Neural Architecture Search

In this work, we present a simple and general search space shrinking met...

Ranking architectures using meta-learning

Neural architecture search has recently attracted lots of research effor...

Efficient Differentiable Neural Architecture Search with Meta Kernels

The searching procedure of neural architecture search (NAS) is notorious...

Learning Architectures from an Extended Search Space for Language Modeling

Neural architecture search (NAS) has advanced significantly in recent ye...

Neural Architecture Search From Task Similarity Measure

In this paper, we propose a neural architecture search framework based o...

AceNAS: Learning to Rank Ace Neural Architectures with Weak Supervision of Weight Sharing

Architecture performance predictors have been widely used in neural arch...
