Design Challenges of Neural Network Acceleration Using Stochastic Computing

06/08/2020
by   Alireza Khadem, et al.

The enormous and ever-increasing complexity of state-of-the-art neural networks (NNs) has impeded the deployment of deep learning on resource-limited devices such as those of the Internet of Things (IoT). Stochastic computing exploits the inherent error tolerance of NNs to reduce their energy and area footprint, two critical requirements of the small embedded devices suited to the IoT. This report evaluates and compares two recently proposed stochastic-computing-based NN designs: BISC (Binary Interfaced Stochastic Computing) by Sim and Lee, 2017, and ESL (Extended Stochastic Logic) by Canals et al., 2016. Using analysis and simulation, we compare three distinct implementations of these designs in terms of performance, power consumption, area, and accuracy. We also discuss the broader challenges of adopting stochastic computing for building NNs. We find that BISC outperforms the other architectures when executing the LeNet-5 NN model on the MNIST digit recognition dataset: our analysis and simulation experiments indicate that it is around 50X faster, occupies 5.7X and 2.9X less area, and consumes 7.8X and 1.8X less power than the two ESL architectures.
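The core idea that both designs build on can be illustrated with a minimal sketch (not taken from the report): in unipolar stochastic computing, a value in [0, 1] is encoded as the fraction of 1s in a random bitstream, and multiplication then reduces to a single AND gate per bit, at the cost of approximation error that shrinks with stream length.

```python
import random

def to_bitstream(p, length, rng):
    # Encode a probability p in [0, 1] as a random bitstream:
    # each bit is 1 with probability p.
    return [1 if rng.random() < p else 0 for _ in range(length)]

def sc_multiply(stream_a, stream_b):
    # For two independent unipolar bitstreams, the probability that
    # both bits are 1 is the product of the encoded values, so
    # multiplication is just a bitwise AND.
    return [x & y for x, y in zip(stream_a, stream_b)]

def from_bitstream(stream):
    # Decode: the fraction of 1s estimates the encoded value.
    return sum(stream) / len(stream)

rng = random.Random(0)
N = 10_000          # longer streams give lower variance but more latency
a, b = 0.8, 0.5
product = from_bitstream(sc_multiply(to_bitstream(a, N, rng),
                                     to_bitstream(b, N, rng)))
print(product)      # close to a * b = 0.4
```

This is why stochastic arithmetic is so cheap in area and power (a multiplier becomes one gate), and also why accuracy and latency trade off directly against each other, the tension the report's comparison revolves around.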


Related research:

06/22/2020 - Fully-parallel Convolutional Neural Network Hardware
11/18/2016 - SC-DCNN: Highly-Scalable Deep Convolutional Neural Network using Stochastic Computing
08/17/2017 - Power Optimizations in MTJ-based Neural Networks through Stochastic Computing
05/15/2019 - Accelerating Deterministic and Stochastic Binarized Neural Networks on FPGAs Using OpenCL
05/30/2019 - Toward Runtime-Throttleable Neural Networks
09/29/2015 - VLSI Implementation of Deep Neural Network Using Integral Stochastic Computing
03/11/2019 - AX-DBN: An Approximate Computing Framework for the Design of Low-Power Discriminative Deep Belief Networks
