LETI: Latency Estimation Tool and Investigation of Neural Networks inference on Mobile GPU

10/06/2020
by Evgeny Ponomarev, et al.

Many deep learning applications are intended to run on mobile devices, and for many of them both accuracy and inference time matter. While the number of FLOPs is commonly used as a proxy for neural network latency, it may not be the best choice. To obtain a better approximation of latency, the research community uses look-up tables of all possible layers: the latency of each layer is measured once, and the per-layer values are summed to predict the inference time on mobile CPU. This requires only a small number of experiments. Unfortunately, on mobile GPU this method is not applicable in a straightforward way and shows low precision. In this work, we treat latency approximation on mobile GPU as a data- and hardware-specific problem. Our main goal is to construct a convenient Latency Estimation Tool for Investigation (LETI) of neural network inference and to build robust and accurate latency prediction models for each specific task. To achieve this goal, we build open-source tools that provide a convenient way to conduct massive experiments on different target devices, focusing on mobile GPU. After evaluating the dataset, we fit a regression model on the experimental data and use it for subsequent latency prediction and analysis. We experimentally demonstrate the applicability of this approach on a subset of the popular NAS-Bench-101 dataset and also evaluate the most popular neural network architectures on two mobile GPUs. As a result, we construct a latency prediction model with good precision on the target evaluation subset. We consider LETI a useful tool for neural architecture search or massive latency evaluation. The project is available at https://github.com/leti-ai
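To make the contrast between the two estimation strategies in the abstract concrete, below is a minimal Python sketch, not LETI's actual implementation. The layer representation (op type plus channel count), the toy look-up-table values, the hand-crafted features, and the choice of scikit-learn's RandomForestRegressor are all assumptions made for illustration; LETI's real features and model may differ.

    # Sketch of two latency-estimation strategies (toy example, not LETI's code).
    # Assumption: a network is a list of (op_type, channels) layer descriptions.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Strategy 1: look-up table, as used for mobile CPU prediction.
    # Per-layer latencies (ms), each measured once per (op, config) key.
    lut = {("conv3x3", 64): 1.7, ("conv1x1", 64): 0.4, ("pool", 64): 0.1}

    def lut_latency(layers):
        """Predict network latency as the sum of per-layer table entries."""
        return sum(lut[(op, ch)] for op, ch in layers)

    # Strategy 2: regression on measured data, as the abstract proposes for
    # mobile GPU, where summing per-layer latencies is inaccurate.
    def features(layers):
        """Toy per-network features: op counts plus total channel count."""
        ops = ["conv3x3", "conv1x1", "pool"]
        counts = [sum(1 for op, _ in layers if op == o) for o in ops]
        return counts + [sum(ch for _, ch in layers)]

    # Networks benchmarked end-to-end on the target GPU, with measured
    # latencies in ms (placeholder numbers).
    nets = [[("conv3x3", 64), ("pool", 64)],
            [("conv1x1", 64), ("conv3x3", 64)],
            [("conv3x3", 64), ("conv3x3", 64), ("pool", 64)]]
    y = np.array([2.1, 2.4, 4.0])
    X = np.array([features(n) for n in nets])

    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    new_net = [("conv1x1", 64), ("pool", 64)]
    print("LUT estimate:       ", lut_latency(new_net), "ms")
    print("Regression estimate:", model.predict([features(new_net)])[0], "ms")

The regression model is trained on end-to-end measurements from the target device, so it can absorb GPU-specific effects such as kernel fusion and scheduling overhead that a per-layer sum misses.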


