Neural Network Inference on Mobile SoCs

08/24/2019
by Siqi Wang, et al.

The ever-increasing demand from mobile Machine Learning (ML) applications calls for ever more powerful on-chip computing resources. Mobile devices are therefore equipped with Heterogeneous Multi-Processor Systems on Chips (HMPSoCs) to process ML workloads such as Convolutional Neural Network (CNN) inference. HMPSoCs house several different types of ML-capable components on-die, such as CPUs, GPUs, and accelerators. Each of these components can perform inference independently, but with very different power-performance characteristics. In this article, we provide a quantitative evaluation of the inference capabilities of the different components on HMPSoCs and present insights into their respective power-performance behaviour. Finally, we explore the performance limit of HMPSoCs by synergistically engaging all the components concurrently.
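To make the idea of "synergistically engaging all the components concurrently" concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of throughput-oriented inference on an HMPSoC: each component gets its own worker thread and all workers drain a shared frame queue, so faster components naturally take a larger share of the stream. The device names and the `run_inference(frame, device)` helper are illustrative placeholders for whatever CPU- or GPU-specific CNN backend is actually used.

```python
import queue
import threading

def run_inference(frame, device):
    # Placeholder: dispatch the frame to a CNN backend built for `device`.
    return {"device": device, "frame": frame}

def worker(device, frames, results):
    # Each component pulls work independently from the shared queue.
    while True:
        frame = frames.get()
        if frame is None:          # sentinel: no more work for this worker
            frames.task_done()
            break
        results.put(run_inference(frame, device))
        frames.task_done()

frames, results = queue.Queue(), queue.Queue()
# Hypothetical component set of an HMPSoC: big CPU cluster, LITTLE CPU cluster, GPU.
threads = [threading.Thread(target=worker, args=(dev, frames, results))
           for dev in ("cpu-big", "cpu-little", "gpu")]
for t in threads:
    t.start()

for frame_id in range(32):         # toy stream of 32 input frames
    frames.put(frame_id)
for _ in threads:
    frames.put(None)               # one sentinel per worker

frames.join()
print(f"processed {results.qsize()} frames across CPU clusters and GPU")
```

A pull-based shared queue is only one way to balance load; static partitioning or layer-level pipelining across components are equally plausible schemes under the same idea of keeping every component busy.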

