ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware

12/02/2018
by   Han Cai, et al.

Neural architecture search (NAS) has had a great impact by automatically designing effective neural network architectures. However, the prohibitive computational demand of conventional NAS algorithms (e.g., 10^4 GPU hours) makes it difficult to directly search architectures on large-scale tasks (e.g., ImageNet). Differentiable NAS can reduce the cost in GPU hours via a continuous representation of the network architecture, but it suffers from high GPU memory consumption, which grows linearly with the candidate set size. As a result, such methods need to utilize proxy tasks, such as training on a smaller dataset, learning with only a few blocks, or training for just a few epochs. Architectures optimized on these proxy tasks are not guaranteed to be optimal on the target task. In this paper, we present ProxylessNAS, which can directly learn architectures for large-scale target tasks and target hardware platforms. We address the high memory consumption of differentiable NAS and reduce the computational cost (GPU hours and GPU memory) to the same level as regular training, while still allowing a large candidate set. Experiments on CIFAR-10 and ImageNet demonstrate the effectiveness of directness and specialization. On CIFAR-10, our model achieves 2.08% test error with only 5.7M parameters, better than the previous state-of-the-art architecture AmoebaNet-B while using 6× fewer parameters. On ImageNet, our model achieves 3.1% better top-1 accuracy than MobileNetV2 while being 1.2× faster in measured GPU latency. We also apply ProxylessNAS to specialize neural architectures for hardware using direct hardware metrics (e.g., latency) and provide insights for efficient CNN architecture design.
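The memory saving described above comes from path-level binarization: at each training step, only one sampled candidate operation is executed, so activation memory matches that of training a single compact model rather than growing with the candidate set. The following is a minimal PyTorch sketch of this idea, not the authors' implementation; the class name BinarizedMixedOp and the toy candidate convolutions are illustrative assumptions, and the sketch omits the paper's gradient estimator for the architecture parameters.

    import torch
    import torch.nn as nn

    class BinarizedMixedOp(nn.Module):
        # Sketch of a mixed operation with binarized paths: only the
        # sampled candidate op runs, so GPU memory stays at the level
        # of one path instead of scaling with the candidate set size.
        def __init__(self, candidate_ops):
            super().__init__()
            self.ops = nn.ModuleList(candidate_ops)
            # One architecture logit per candidate path.
            self.alpha = nn.Parameter(torch.zeros(len(candidate_ops)))

        def forward(self, x):
            probs = torch.softmax(self.alpha, dim=0)
            # Sample a single active path (the "binary gate").
            idx = torch.multinomial(probs, 1).item()
            return self.ops[idx](x)

    # Hypothetical usage with two candidate convolutions:
    ops = [nn.Conv2d(16, 16, 3, padding=1), nn.Conv2d(16, 16, 5, padding=2)]
    mixed = BinarizedMixedOp(ops)
    y = mixed(torch.randn(1, 16, 32, 32))

Note that the multinomial sampling here is not differentiable with respect to alpha; the paper trains the architecture parameters with a separate gradient estimator over the binary gates, which this sketch deliberately leaves out.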

Related research

DA-NAS: Data Adapted Pruning for Efficient Neural Architecture Search (03/27/2020)
Efficient search is a core issue in Neural Architecture Search (NAS). It...

Multinomial Distribution Learning for Effective Neural Architecture Search (05/18/2019)
Architectures obtained by Neural Architecture Search (NAS) have achieved...

FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search (12/09/2018)
Designing accurate and efficient ConvNets for mobile devices is challeng...

Multi-objective Neural Architecture Search with Almost No Training (11/27/2020)
In the recent past, neural architecture search (NAS) has attracted incre...

Hardware Aware Neural Network Architectures using FbNet (06/17/2019)
We implement a differentiable Neural Architecture Search (NAS) method in...

Progressive DARTS: Bridging the Optimization Gap for NAS in the Wild (12/23/2019)
With the rapid development of neural architecture search (NAS), research...

Standing on the Shoulders of Giants: Hardware and Neural Architecture Co-Search with Hot Start (07/17/2020)
Hardware and neural architecture co-search that automatically generates ...
