RelativeNAS: Relative Neural Architecture Search via Slow-Fast Learning

09/14/2020
by   Hao Tan, et al.

Despite the remarkable successes of Convolutional Neural Networks (CNNs) in computer vision, manually designing a CNN is time-consuming and error-prone. Among the various Neural Architecture Search (NAS) methods motivated to automate the design of high-performance CNNs, differentiable NAS and population-based NAS are attracting increasing interest due to their distinct characteristics. To benefit from the merits of both while overcoming their deficiencies, this work proposes a novel NAS method, RelativeNAS. As the key to efficient search, RelativeNAS performs joint learning between fast-learners (i.e., networks with relatively higher accuracy) and slow-learners in a pairwise manner. Moreover, since RelativeNAS only requires low-fidelity performance estimation to distinguish each pair of fast-learner and slow-learner, it saves considerable computation cost when training the candidate architectures. The proposed RelativeNAS brings several unique advantages: (1) it achieves state-of-the-art performance on ImageNet with a top-1 error rate of 24.88%, outperforming DARTS and AmoebaNet-B by 1.82% and 1.12%, respectively; (2) it spends only nine hours with a single 1080Ti GPU to obtain the discovered cells, i.e., 3.75x and 7875x faster than DARTS and AmoebaNet, respectively; (3) it demonstrates that the discovered cells obtained on CIFAR-10 can be directly transferred to object detection, semantic segmentation, and keypoint detection, yielding competitive results of 73.1% mAP on PASCAL VOC, 78.7% mIoU on Cityscapes, and 68.5% AP on MSCOCO, respectively. The implementation of RelativeNAS is available at https://github.com/EMI-Group/RelativeNAS
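For intuition, the pairwise slow-fast learning at the core of the search can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact method: the continuous architecture encoding, the `evaluate` callback, and the random interpolation step are assumptions standing in for the paper's encoding scheme, low-fidelity performance estimator, and pseudo-gradient-style update.

```python
import random

def slow_fast_generation(population, evaluate, lr=0.5):
    """One generation of pairwise slow-fast learning (illustrative sketch).

    `population`: architecture encodings, assumed here to be lists of
    floats in a continuous search space.
    `evaluate`: a low-fidelity estimator (e.g. validation accuracy after a
    few epochs); its scores are only used to rank each pair, so it can be
    cheap and somewhat noisy.
    """
    random.shuffle(population)
    next_population = []
    for a, b in zip(population[::2], population[1::2]):
        # Rank the pair: the better-scoring network is the fast-learner.
        fast, slow = (a, b) if evaluate(a) >= evaluate(b) else (b, a)
        # The slow-learner moves toward the fast-learner, while the
        # fast-learner survives unchanged; a random step size keeps the
        # search stochastic. (Simple interpolation is an assumption.)
        step = lr * random.random()
        updated_slow = [s + step * (f - s) for f, s in zip(fast, slow)]
        next_population.extend([fast, updated_slow])
    return next_population

# Toy usage: "architectures" are 2-D points and the low-fidelity score is
# closeness to a hypothetical optimum at (1.0, 1.0).
if __name__ == "__main__":
    pop = [[random.random(), random.random()] for _ in range(8)]
    score = lambda x: -((x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2)
    for _ in range(20):
        pop = slow_fast_generation(pop, score)
    print(max(score(x) for x in pop))
```

Note that only the relative order within each pair matters here, which is why a coarse, low-fidelity estimate suffices and full training of every candidate can be avoided.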


