ESSOP: Efficient and Scalable Stochastic Outer Product Architecture for Deep Learning

03/25/2020
by Vinay Joshi, et al.

Deep neural networks (DNNs) have surpassed human-level accuracy in a variety of cognitive tasks, but at the cost of significant memory and time requirements for DNN training. This limits their deployment in energy- and memory-limited applications that require real-time learning. Matrix-vector multiplication (MVM) and vector-vector outer product (VVOP) are the two most expensive operations associated with the training of DNNs. Strategies to improve the efficiency of MVM computation in hardware have been demonstrated with minimal impact on training accuracy. However, the VVOP computation remains a relatively less explored bottleneck, even with the aforementioned strategies. Stochastic computing (SC) has been proposed to improve the efficiency of VVOP computation, but only on relatively shallow networks with bounded activation functions and floating-point (FP) scaling of activation gradients. In this paper, we propose ESSOP, an efficient and scalable stochastic outer product architecture based on the SC paradigm. We introduce efficient techniques to generalize SC for weight update computation in DNNs with unbounded activation functions (e.g., ReLU), as required by many state-of-the-art networks. Our architecture reduces the computational cost by re-using random numbers and by replacing certain FP multiplication operations with bit-shift scaling. We show that the ResNet-32 network, with 33 convolution layers and a fully-connected layer, can be trained with ESSOP on the CIFAR-10 dataset to achieve accuracy comparable to the baseline. Hardware design of ESSOP at the 14nm technology node shows that, compared to a highly pipelined FP16 multiplier design, ESSOP is 82.2% more energy efficient, and also more area efficient, for outer product computation.
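For intuition on the two ideas highlighted in the abstract, re-using random numbers and bit-shift scaling of unbounded operands, the sketch below models an SC-style approximation of the weight-update outer product ΔW = δ ⊗ x with stochastic bitstreams. This is only an illustrative NumPy sketch under those assumptions, not the ESSOP hardware design; the names `pow2_scale` and `sc_outer_product`, the bitstream length, and the exact scaling scheme are hypothetical choices made for this example.

```python
# Illustrative sketch only: a software model of a stochastic-computing (SC)
# outer product for a weight update Delta_W = outer(delta, x). It is NOT the
# ESSOP hardware design; names, bitstream length and the scaling scheme are
# assumptions made for this example.
import numpy as np


def pow2_scale(v):
    """Power-of-two factor that maps |v| into [0, 1].

    Dividing by 2**k is a bit shift in hardware; this stands in for
    floating-point scaling of unbounded operands such as ReLU activations
    and activation gradients.
    """
    m = np.max(np.abs(v))
    if m == 0.0:
        return 1.0
    return float(2.0 ** int(np.ceil(np.log2(m))))


def sc_outer_product(delta, x, n_bits=256, rng=None):
    """Approximate outer(delta, x) with AND-ed stochastic bitstreams.

    Each vector element gets exactly one random sequence, re-used for every
    (i, j) pair, so an M x N outer product needs O(M + N) random numbers
    instead of O(M * N).
    """
    rng = np.random.default_rng() if rng is None else rng

    s_d, s_x = pow2_scale(delta), pow2_scale(x)           # bit-shift scales
    d_mag, x_mag = np.abs(delta) / s_d, np.abs(x) / s_x   # magnitudes in [0, 1]

    # One random sequence per vector element (re-used across all products).
    bits_d = rng.random((delta.size, n_bits)) < d_mag[:, None]
    bits_x = rng.random((x.size, n_bits)) < x_mag[:, None]

    # AND of two independent bitstreams followed by a popcount/average
    # approximates the product of the two encoded probabilities.
    prod = (bits_d[:, None, :] & bits_x[None, :, :]).mean(axis=-1)

    # Restore signs and undo the power-of-two scaling.
    signs = np.sign(delta)[:, None] * np.sign(x)[None, :]
    return signs * prod * s_d * s_x


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    delta = rng.standard_normal(8)                  # activation gradients
    x = np.maximum(rng.standard_normal(8), 0.0)     # ReLU activations (unbounded)
    approx = sc_outer_product(delta, x, n_bits=4096, rng=rng)
    print("max abs error:", np.max(np.abs(approx - np.outer(delta, x))))
```

The accuracy of such an estimate is governed by the bitstream length; the point of the sketch is only that the per-pair FP multiplication disappears, leaving comparisons, ANDs, popcounts, and shifts, which is the kind of trade-off the paper evaluates in hardware.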
