Understanding Cache Boundness of ML Operators on ARM Processors

02/01/2021
by Bernhard Klein, et al.

Machine learning compilers like TVM allow fast and flexible deployment on embedded CPUs. This enables the use of non-standard operators, which are common in ML compression techniques. However, designing a proper solution requires understanding the limitations of the typical compute-intensive operators in ML workloads. This is the first in-depth analysis of dense and convolution operators, generated with TVM, against the fundamental hardware limits of embedded ARM processors. It explains the gap between computational peak performance, both theoretical and measured, and real-world state-of-the-art results produced with TVM and OpenBLAS: rather than being compute-bound, single-precision general matrix multiply (GEMM) and convolutions are bound by L1-cache-read bandwidth. Explorations of 8-bit and bit-serial quantized operators show that quantization can achieve relevant speedups over cache-bound floating-point operators. However, the performance of quantized operators depends strongly on the interaction between data layout and bit packing.
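To make the cache-boundness argument concrete, here is a minimal roofline-style sketch of how an L1-read-bandwidth ceiling can sit below a core's compute peak. All numbers (clock frequency, SIMD width, FMA issue rate, L1 load bandwidth, and the two micro-kernel intensities) are placeholder assumptions for illustration, not measurements or parameters from the paper.

```python
# Roofline-style estimate: FP32 GEMM micro-kernel on a hypothetical
# embedded ARM core. Every constant below is an assumed placeholder.

FREQ_HZ = 1.5e9                # assumed core clock
SIMD_LANES = 4                 # 128-bit NEON: 4 x fp32 per vector
FLOPS_PER_FMA = 2              # multiply + add
FMA_PER_CYCLE = 1              # assumed vector-FMA issue rate
L1_READ_BYTES_PER_CYCLE = 16   # assumed L1 load bandwidth

peak_flops = FREQ_HZ * SIMD_LANES * FLOPS_PER_FMA * FMA_PER_CYCLE
l1_read_bw = FREQ_HZ * L1_READ_BYTES_PER_CYCLE  # bytes per second

def attainable_gflops(flops_per_l1_byte: float) -> float:
    """Performance is capped either by compute or by L1 read traffic."""
    return min(peak_flops, l1_read_bw * flops_per_l1_byte) / 1e9

# Two hypothetical register-blocking schemes for the inner loop:
# one 4-wide FMA = 8 FLOPs; each 128-bit load from L1 = 16 bytes.
kernels = [
    ("A and B both streamed from L1 (8 FLOP / 32 B)", 8 / 32),
    ("B held in registers, only A streamed (8 FLOP / 16 B)", 8 / 16),
]
for name, intensity in kernels:
    print(f"{name}: {attainable_gflops(intensity):.1f} GFLOP/s "
          f"of {peak_flops / 1e9:.1f} GFLOP/s peak")
```

Under these assumed numbers, the kernel without register reuse is limited by L1 reads to half of the compute peak (6.0 vs. 12.0 GFLOP/s), while better register blocking lifts the ceiling back to the compute bound. This mirrors the paper's observation that single-precision GEMM and convolutions run into the L1-read bandwidth limit rather than the arithmetic limit.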

