From DNNs to GANs: Review of efficient hardware architectures for deep learning

06/06/2021
by Gaurab Bhattacharya et al.

In recent times, the trends in the very large scale integration (VLSI) industry have become multi-dimensional: reduced energy consumption, smaller silicon footprint, precise results, lower power dissipation, and faster response. To meet these demands, hardware architectures must be reliable and robust. Recently, neural networks and deep learning have begun to impact the research paradigm significantly; these models involve parameters on the order of millions, nonlinear activation functions, convolutional operations for feature extraction, regression for classification, and generative adversarial networks. Such operations incur enormous computation and memory overhead. Presently available DSP processors are ill-suited to these workloads, suffering from memory overhead, performance drops, and compromised accuracy. Moreover, if a large silicon area is powered up to accelerate operations through parallel computation, the ICs run a significant risk of burning out due to the considerable heat generated. Hence, the dark silicon constraint has been introduced to reduce heat dissipation without sacrificing accuracy. Likewise, different algorithms have been adapted to design DSP processors suited to fast execution of neural networks, activation functions, convolutional neural networks, and generative adversarial networks. In this review, we illustrate recent developments in hardware for accelerating the efficient implementation of deep learning networks with enhanced performance. The techniques investigated in this review are expected to direct future research on hardware optimization for high-performance computation.
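To make the "huge calculation and memory overhead" claim concrete, here is a back-of-the-envelope cost model for a single convolutional layer. The function and the example shapes are illustrative assumptions, not taken from the review; they simply show how quickly multiply-accumulate (MAC) counts reach the billions for realistic layer sizes.

```python
# Rough cost model for one stride-1, 'same'-padded convolutional layer.
# All shapes below are hypothetical examples, not from the reviewed paper.

def conv2d_cost(h, w, c_in, c_out, k, bytes_per_param=4):
    """Return (multiply-accumulates, parameter bytes) for a k x k
    convolution mapping an h x w x c_in input to h x w x c_out output."""
    macs = h * w * c_out * (k * k * c_in)   # one MAC per kernel element per output pixel
    params = (k * k * c_in + 1) * c_out     # weights plus one bias per output filter
    return macs, params * bytes_per_param

# Example: a 224x224 feature map, 64 -> 64 channels, 3x3 kernel.
macs, mem = conv2d_cost(224, 224, 64, 64, 3)
print(f"{macs / 1e9:.2f} GMACs, {mem / 1024:.0f} KiB of weights")
```

Even this single mid-network layer requires roughly 1.85 billion MACs per forward pass, and a full network stacks dozens of such layers, which is why general-purpose DSP processors struggle without dedicated acceleration.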

Related research:

- SC-DCNN: Highly-Scalable Deep Convolutional Neural Network using Stochastic Computing (11/18/2016). With recent advancing of Internet of Things (IoTs), it becomes very attr...
- A Survey on GAN Acceleration Using Memory Compression Technique (08/14/2021). Since its invention, Generative adversarial networks (GANs) have shown o...
- K-TanH: Hardware Efficient Activations For Deep Learning (09/17/2019). We propose K-TanH, a novel, highly accurate, hardware efficient approxim...
- A Transistor Operations Model for Deep Learning Energy Consumption Scaling Law (05/30/2022). Deep Learning (DL) has transformed the automation of a wide range of ind...
- Autonomously and Simultaneously Refining Deep Neural Network Parameters by Generative Adversarial Networks (05/24/2018). The choice of parameters, and the design of the network architecture are...
- Binary Single-dimensional Convolutional Neural Network for Seizure Prediction (06/08/2022). Nowadays, several deep learning methods are proposed to tackle the chall...
- Image-based reconstruction for the impact problems by using DPNNs (04/08/2019). With the improvement of the pattern recognition and feature extraction o...
