SC-DCNN: Highly-Scalable Deep Convolutional Neural Network using Stochastic Computing

11/18/2016
by   Ao Ren, et al.

With the recent advance of the Internet of Things (IoT), it has become very attractive to implement deep convolutional neural networks (DCNNs) on embedded/portable systems. At present, executing software-based DCNNs in practice requires high-performance server clusters, restricting their widespread deployment on mobile devices. To overcome this issue, considerable research effort has gone into developing highly parallel, DCNN-specific hardware using GPGPUs, FPGAs, and ASICs. Stochastic Computing (SC), which uses a bit-stream to represent a number within [-1, 1] by counting the number of ones in the bit-stream, has high potential for implementing DCNNs with high scalability and an ultra-low hardware footprint. Since multiplications and additions can be calculated with AND gates and multiplexers in SC, significant reductions in power/energy and hardware footprint can be achieved compared with conventional binary arithmetic implementations. These tremendous savings in power (energy) and hardware resources open up an immense design space for enhancing the scalability and robustness of hardware DCNNs. This paper presents the first comprehensive design and optimization framework for SC-based DCNNs (SC-DCNNs). We first present optimal designs of the function blocks that perform the basic operations, i.e., inner product, pooling, and activation function. We then propose optimal designs for four types of combinations of basic function blocks, named feature extraction blocks, which are in charge of extracting features from input feature maps. In addition, weight storage methods are investigated to reduce the area and power/energy consumption of storing weights. Finally, the whole SC-DCNN implementation is optimized, with feature extraction blocks carefully selected, to minimize area and power/energy consumption while maintaining a high level of network accuracy.
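To illustrate why SC arithmetic is so cheap in hardware, the following is a minimal Python sketch (not from the paper) of unipolar stochastic computing, where a value in [0, 1] is encoded as the fraction of ones in a bit-stream. In this encoding, multiplication is a bitwise AND of two independent streams, and a 2-to-1 multiplexer with a 0.5-probability select stream computes a scaled sum (a + b)/2; the bipolar [-1, 1] encoding used by SC-DCNN replaces the AND with an XNOR but follows the same principle. All function names here are illustrative.

```python
import random

def to_stream(p, length, rng):
    # Unipolar encoding: each bit is 1 with probability p (value in [0, 1]).
    return [1 if rng.random() < p else 0 for _ in range(length)]

def from_stream(bits):
    # Decode by counting ones: value = fraction of ones in the bit-stream.
    return sum(bits) / len(bits)

def sc_multiply(a_bits, b_bits):
    # Unipolar SC multiplication: a single AND gate per bit position,
    # valid when the two streams are statistically independent.
    return [a & b for a, b in zip(a_bits, b_bits)]

def sc_scaled_add(a_bits, b_bits, sel_bits):
    # SC addition via a 2-to-1 multiplexer; with a p = 0.5 select stream
    # the output stream encodes (a + b) / 2.
    return [a if s else b for a, b, s in zip(a_bits, b_bits, sel_bits)]

if __name__ == "__main__":
    rng = random.Random(0)
    n = 200_000  # longer streams give lower decoding error
    a = to_stream(0.6, n, rng)
    b = to_stream(0.5, n, rng)
    sel = to_stream(0.5, n, rng)
    print(from_stream(sc_multiply(a, b)))       # approx. 0.6 * 0.5 = 0.3
    print(from_stream(sc_scaled_add(a, b, sel)))  # approx. (0.6 + 0.5) / 2 = 0.55
```

The accuracy of the decoded result improves with stream length, which is the scalability knob the paper exploits: each extra bit of precision costs only time (longer streams), not wider datapaths.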


