SparseNN: An Energy-Efficient Neural Network Accelerator Exploiting Input and Output Sparsity

11/03/2017
by Jingyang Zhu, et al.

Contemporary Deep Neural Networks (DNNs) contain millions of synaptic connections across tens to hundreds of layers. The large computation and memory requirements pose a challenge to hardware design. In this work, we leverage the intrinsic activation sparsity of DNNs to substantially reduce both the execution cycles and the energy consumption. An end-to-end training algorithm is proposed to develop a lightweight run-time predictor that estimates the output activation sparsity on the fly. From our experimental results, the computation overhead of the prediction phase can be reduced to less than 5% of the original feedforward phase with negligible accuracy loss. Furthermore, an energy-efficient hardware architecture, SparseNN, is proposed to exploit both the input and output sparsity. SparseNN is a scalable architecture with distributed memories and processing elements connected through a dedicated on-chip network. Compared with state-of-the-art accelerators that exploit only the input sparsity, SparseNN achieves a 10%-70% improvement in throughput with an energy saving of around 50%.
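The abstract does not spell out the predictor, but the underlying idea, predict which output activations a ReLU layer will zero out, then compute exact dot products only for the predicted survivors and only over the nonzero inputs, can be sketched in a few lines of NumPy. The sketch below is an illustration, not the paper's actual method: the low-rank factors U and V, the threshold parameter, and the helper name predicted_sparse_fc are all assumptions made for clarity.

import numpy as np

def predicted_sparse_fc(x, W, b, U, V, threshold=0.0):
    """Fully-connected ReLU layer that skips work via input and output sparsity.

    x: (in_dim,) input activations (post-ReLU, so many entries are zero).
    W: (out_dim, in_dim) weights; b: (out_dim,) bias.
    U: (out_dim, r), V: (r, in_dim) low-rank predictor factors (hypothetical;
       the paper trains its predictor end to end with the network).
    """
    # Input sparsity: only nonzero input activations contribute to any MAC.
    active_in = np.nonzero(x)[0]

    # Prediction phase: a cheap rank-r pass estimates each pre-activation.
    score = U @ (V[:, active_in] @ x[active_in])

    # Output sparsity: outputs predicted non-positive would be zeroed by
    # ReLU anyway, so their exact dot products are skipped entirely.
    active_out = np.nonzero(score > threshold)[0]

    y = np.zeros(W.shape[0])
    # Exact MACs only on the predicted-active rows and nonzero columns,
    # i.e. a (|active_out| x |active_in|) slice of W.
    y[active_out] = np.maximum(
        W[np.ix_(active_out, active_in)] @ x[active_in] + b[active_out], 0.0)
    return y

# Toy usage; a truncated SVD of W stands in for the trained predictor.
rng = np.random.default_rng(0)
in_dim, out_dim, r = 512, 256, 16
W = rng.standard_normal((out_dim, in_dim))
b = rng.standard_normal(out_dim)
Uf, s, Vt = np.linalg.svd(W, full_matrices=False)
U, V = Uf[:, :r] * s[:r], Vt[:r, :]
x = np.maximum(rng.standard_normal(in_dim), 0.0)  # ReLU-sparse input
y = predicted_sparse_fc(x, W, b, U, V)

With rank r much smaller than the layer dimensions, the prediction pass costs roughly r * (|active_in| + out_dim) MACs versus |active_in| * out_dim for the full layer, which is how the prediction overhead can stay below a few percent of the feedforward phase.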
