Searching for Fast Model Families on Datacenter Accelerators

02/10/2021
by Sheng Li, et al.

Neural Architecture Search (NAS), together with model scaling, has shown remarkable progress in designing accurate and fast convolutional architecture families. However, because neither NAS nor model scaling considers sufficient hardware architecture details, they do not take full advantage of emerging datacenter (DC) accelerators. In this paper, we search for fast and accurate CNN model families for efficient inference on DC accelerators. We first analyze DC accelerators and find that existing CNNs suffer from insufficient operational intensity, parallelism, and execution efficiency. These insights let us create a DC-accelerator-optimized search space, with space-to-depth, space-to-batch, hybrid fused convolution structures with vanilla and depthwise convolutions, and block-wise activation functions. On top of this search space, we further propose latency-aware compound scaling (LACS), the first multi-objective compound scaling method optimizing both accuracy and latency. LACS discovers that network depth should grow much faster than image size and network width, which differs markedly from previous compound scaling results. With the new search space and LACS, our search and scaling on datacenter accelerators result in a new model series named EfficientNet-X. EfficientNet-X is up to more than 2X faster than EfficientNet (a model series with a state-of-the-art trade-off between FLOPs and accuracy) on TPUv3 and GPUv100, with comparable accuracy. EfficientNet-X is also up to 7X faster than the recent RegNet and ResNeSt on TPUv3 and GPUv100.
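The abstract's search space includes a space-to-depth operation, which trades spatial resolution for channel count to raise operational intensity on accelerators. Below is a minimal NumPy sketch of that rearrangement in NHWC layout; the function name, layout, and block size are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def space_to_depth(x, block=2):
    """Rearrange non-overlapping `block x block` spatial patches into
    the channel dimension (NHWC layout). Illustrative sketch only;
    names and layout are assumptions, not the paper's code."""
    n, h, w, c = x.shape
    assert h % block == 0 and w % block == 0
    # Split each spatial axis into (outer, block) pairs...
    x = x.reshape(n, h // block, block, w // block, block, c)
    # ...move the block axes next to channels...
    x = x.transpose(0, 1, 3, 2, 4, 5)
    # ...and fold them into the channel dimension.
    return x.reshape(n, h // block, w // block, block * block * c)

x = np.arange(2 * 8 * 8 * 3, dtype=np.float32).reshape(2, 8, 8, 3)
y = space_to_depth(x, block=2)
print(y.shape)  # (2, 4, 4, 12): 2x2 spatial blocks folded into channels
```

A subsequent convolution over the output sees 4X fewer spatial positions but 4X more input channels, which typically maps better onto matrix units such as those in TPUs.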
