Searching for Fast Model Families on Datacenter Accelerators

02/10/2021
by Sheng Li, et al.

Neural Architecture Search (NAS), together with model scaling, has shown remarkable progress in designing high-accuracy, fast convolutional architecture families. However, because neither NAS nor model scaling considers sufficient hardware-architecture detail, they do not take full advantage of emerging datacenter (DC) accelerators. In this paper, we search for fast and accurate CNN model families for efficient inference on DC accelerators. We first analyze DC accelerators and find that existing CNNs suffer from insufficient operational intensity, parallelism, and execution efficiency. These insights lead us to create a DC-accelerator-optimized search space, with space-to-depth and space-to-batch transforms, hybrid fused convolution structures combining vanilla and depthwise convolutions, and block-wise activation functions. On top of this search space, we further propose latency-aware compound scaling (LACS), the first multi-objective compound scaling method optimizing both accuracy and latency. LACS discovers that network depth should grow much faster than image size and network width, quite different from previous compound-scaling results. With the new search space and LACS, our search and scaling on datacenter accelerators yields a new model series named EfficientNet-X. EfficientNet-X is up to more than 2X faster than EfficientNet (a model series with a state-of-the-art trade-off between FLOPs and accuracy) on TPUv3 and GPUv100, with comparable accuracy. EfficientNet-X is also up to 7X faster than the recent RegNet and ResNeSt on TPUv3 and GPUv100.
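The operational-intensity concern the abstract raises can be made concrete. A space-to-depth transform trades spatial resolution for channel depth, which changes how many FLOPs a subsequent convolution performs per byte of data moved. Below is a minimal NumPy sketch, assuming NHWC layout and a coarse roofline-style byte count (the function names and the byte model are illustrative assumptions, not the paper's actual cost model):

```python
import numpy as np

def space_to_depth(x: np.ndarray, block: int = 2) -> np.ndarray:
    """Rearrange spatial blocks into channels:
    (N, H, W, C) -> (N, H/block, W/block, C*block*block)."""
    n, h, w, c = x.shape
    assert h % block == 0 and w % block == 0, "H and W must divide by block"
    x = x.reshape(n, h // block, block, w // block, block, c)
    x = x.transpose(0, 1, 3, 2, 4, 5)          # gather each block's pixels together
    return x.reshape(n, h // block, w // block, c * block * block)

def conv_operational_intensity(h: int, w: int, c_in: int, c_out: int,
                               k: int = 3) -> float:
    """Rough operational intensity (FLOPs per byte) of a same-padded k x k
    convolution, assuming float32 tensors and a single pass over inputs,
    weights, and outputs. A coarse roofline-style estimate, not TPU-exact."""
    flops = 2 * h * w * c_in * c_out * k * k
    bytes_moved = 4 * (h * w * c_in            # input activations
                       + k * k * c_in * c_out  # weights
                       + h * w * c_out)        # output activations
    return flops / bytes_moved
```

For example, a 3x3 convolution on a 56x56x64 input versus the same workload after a 2x2 space-to-depth (28x28x256) performs identical FLOPs, but the smaller output map shifts the estimated FLOPs-per-byte ratio, illustrating why such reshapes appear in an accelerator-oriented search space.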

Related research

- ISyNet: Convolutional Neural Networks design for AI accelerator (09/07/2021). In recent years Deep Learning reached significant results in many practi...
- S3NAS: Fast NPU-aware Neural Architecture Search Methodology (09/04/2020). As the application area of convolutional neural networks (CNN) is growin...
- Best of Both Worlds: AutoML Codesign of a CNN and its Hardware Accelerator (02/11/2020). Neural architecture search (NAS) has been very successful at outperformi...
- Lightweight Monocular Depth with a Novel Neural Architecture Search Method (08/25/2021). This paper presents a novel neural architecture search method, called Li...
- GPUNet: Searching the Deployable Convolution Neural Networks for GPUs (04/26/2022). Customizing Convolution Neural Networks (CNN) for production use has bee...
- Searching for Efficient Neural Architectures for On-Device ML on Edge TPUs (04/09/2022). On-device ML accelerators are becoming a standard in modern mobile syste...
- RT-RCG: Neural Network and Accelerator Search Towards Effective and Real-time ECG Reconstruction from Intracardiac Electrograms (11/04/2021). There exists a gap in terms of the signals provided by pacemakers (i.e.,...
