MERIT: Tensor Transform for Memory-Efficient Vision Processing on Parallel Architectures

11/07/2019
by Yu-Sheng Lin, et al.

Computationally intensive deep neural networks (DNNs) are well-suited to GPUs, but newly developed algorithms usually require heavily optimized DNN routines to run efficiently, and this problem can be even more difficult on specialized DNN architectures. In this paper, we propose a mathematical formulation that can be useful for transferring algorithm optimization knowledge across computing platforms. We discover that data movement and storage inside parallel processor architectures can be viewed as tensor transforms across memory hierarchies, making it possible to describe many memory optimization techniques mathematically. This transform, which we call the Memory Efficient Ranged Inner-Product Tensor (MERIT) transform, can be applied not only to DNN tasks but also to many traditional machine learning and computer vision computations. Moreover, the tensor transforms can be readily mapped to existing vector processor architectures. We demonstrate that many popular applications can be converted to a succinct MERIT notation on GPUs, speeding up GPU kernels by up to 20 times while using only half as many code tokens. We also use the principle of the proposed transform to design a specialized hardware unit called the MERIT-z processor. This processor can be applied to a variety of DNN tasks as well as other computer vision tasks while providing area and power efficiency comparable to dedicated DNN ASICs.
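
To make the core idea concrete, the sketch below (Python/NumPy, not the paper's actual MERIT notation or toolchain; the function name and shapes are illustrative) shows how a sliding-window operation's data movement can be written as a tensor transform followed by a ranged inner product, which is the decomposition the abstract describes.

import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv2d_as_tensor_transform(image, kernel):
    """2-D sliding-window inner product expressed as transform + contraction.

    The unfold step creates a logical view of shape (H-kh+1, W-kw+1, kh, kw)
    without copying data; the einsum then contracts the window axes against
    the kernel. Conceptually, the unfold corresponds to data movement across
    the memory hierarchy (e.g., tiling into shared memory on a GPU), and the
    contraction corresponds to the ranged inner product on the vector units.
    """
    kh, kw = kernel.shape
    windows = sliding_window_view(image, (kh, kw))     # tensor transform (unfold)
    return np.einsum('ijkl,kl->ij', windows, kernel)   # ranged inner product

# Usage: a 5x5 image with a 3x3 kernel yields a 3x3 output.
img = np.arange(25, dtype=np.float32).reshape(5, 5)
ker = np.ones((3, 3), dtype=np.float32) / 9.0
print(conv2d_as_tensor_transform(img, ker))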


