PANTHER: A Programmable Architecture for Neural Network Training Harnessing Energy-efficient ReRAM

12/24/2019
by Aayush Ankit, et al.

The wide adoption of deep neural networks has been accompanied by ever-increasing energy and performance demands due to the expensive nature of training them. Numerous special-purpose architectures have been proposed to accelerate training, both digital and hybrid digital-analog designs using resistive RAM (ReRAM) crossbars. ReRAM-based accelerators have demonstrated the effectiveness of ReRAM crossbars at performing the matrix-vector multiplication operations that are prevalent in training. However, they still suffer from inefficiency because the weight gradient and update step relies on serial reads and writes. A few works have demonstrated the possibility of performing outer products in crossbars, which can be used to realize the weight gradient and update step without serial reads and writes. However, these works have been limited to low-precision operations that are not sufficient for typical training workloads. Moreover, they have been confined to a limited set of training algorithms and to fully-connected layers only. To address these limitations, we propose a bit-slicing technique for enhancing the precision of ReRAM-based outer products, which is substantially different from the bit-slicing used for matrix-vector multiplication. We incorporate this technique into a crossbar architecture with three variants catered to different training algorithms. To evaluate our design on different types of layers in neural networks (fully-connected, convolutional, etc.) and training algorithms, we develop PANTHER, an ISA-programmable training accelerator with compiler support. Our evaluation shows that PANTHER achieves up to 8.02×, 54.21×, and 103× energy reductions as well as 7.16×, 4.02×, and 16× execution time reductions compared to digital accelerators, ReRAM-based accelerators, and GPUs, respectively.
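To make the role of crossbar outer products concrete, the sketch below (Python/NumPy, purely illustrative and not PANTHER's actual scheme) shows how the weight-gradient step of backpropagation reduces to an outer product, and how a bit-sliced decomposition can reconstruct a higher-precision result from low-precision partial products. The slice width, slice count, and names (SLICE_BITS, NUM_SLICES, sliced_outer_product) are assumptions chosen for illustration only.

import numpy as np

# Illustrative sketch: the weight-gradient step of backpropagation is an outer
# product, delta_W = eta * outer(delta, x), which a ReRAM crossbar can compute in
# place instead of via serial reads and writes. Bit-slicing splits each operand
# into low-precision slices that match limited cell/DAC precision; the shifted
# partial outer products are then accumulated. Signs and the learning rate eta
# are omitted here; unsigned fixed-point operands keep the example minimal.

SLICE_BITS = 2   # assumed precision per slice (hypothetical)
NUM_SLICES = 4   # 4 slices x 2 bits = 8-bit fixed-point operands

def to_slices(v_fixed):
    """Split unsigned fixed-point integers into NUM_SLICES chunks of SLICE_BITS
    bits each, least-significant slice first."""
    mask = (1 << SLICE_BITS) - 1
    return [(v_fixed >> (SLICE_BITS * s)) & mask for s in range(NUM_SLICES)]

def sliced_outer_product(x_fixed, delta_fixed):
    """Accumulate shifted partial outer products of the operand slices."""
    acc = np.zeros((delta_fixed.size, x_fixed.size), dtype=np.int64)
    for i, d_slice in enumerate(to_slices(delta_fixed)):
        for j, x_slice in enumerate(to_slices(x_fixed)):
            acc += np.outer(d_slice, x_slice) << (SLICE_BITS * (i + j))
    return acc

# Check against a direct full-precision outer product.
rng = np.random.default_rng(0)
x = rng.integers(0, 256, size=8)       # 8-bit activations
delta = rng.integers(0, 256, size=4)   # 8-bit error terms
assert np.array_equal(sliced_outer_product(x, delta), np.outer(delta, x))

In an actual ReRAM accelerator, each partial outer product would be produced in analog by a crossbar slice rather than by np.outer; the final shift-and-add loosely stands in for the accumulation that a bit-slicing scheme performs to recover training-grade precision.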


