AX-DBN: An Approximate Computing Framework for the Design of Low-Power Discriminative Deep Belief Networks

03/11/2019
by Ian Colbert, et al.

The power budget for embedded hardware implementations of Deep Learning algorithms can be extremely tight. To address implementation challenges in such domains, new design paradigms, such as Approximate Computing, have drawn significant attention. Approximate Computing exploits the innate error-resilience of Deep Learning algorithms, a property that makes them amenable to deployment on low-power computing platforms. This paper describes an Approximate Computing design methodology, AX-DBN, for an architecture belonging to the class of stochastic Deep Learning algorithms known as Deep Belief Networks (DBNs). Specifically, we consider procedures for efficiently implementing the Discriminative Deep Belief Network (DDBN), a stochastic neural network used for classification tasks. To optimize the DDBN for hardware implementation, we explore: (a) limited precision of neurons and functional approximations of activation functions; (b) criticality analysis to identify the nodes in the network that can operate at reduced precision while the network maintains target accuracy levels; and (c) a greedy search methodology with incremental retraining to determine the optimal precision reduction for all neurons that maximizes power savings. Using the AX-DBN methodology proposed in this paper, we present experimental results across several network architectures that show significant power savings under a user-specified accuracy loss constraint with respect to ideal full-precision implementations.
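The criticality-guided greedy precision search described above can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: the toy single-hidden-layer model, the criticality proxy (outgoing weight norms), the `quantize` helper, and the random data are all assumptions introduced for illustration, and the incremental retraining step interleaved by the paper is only indicated by a comment.

```python
# Hedged sketch of a criticality-ordered greedy precision search, in the
# spirit of AX-DBN. All model details, helpers, and data here are assumptions
# for illustration only; they are not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

# Toy "pre-trained" discriminative model: one sigmoid hidden layer.
W1 = rng.normal(0, 0.5, (16, 8))    # input -> hidden weights
W2 = rng.normal(0, 0.5, (8, 3))     # hidden -> output weights
X  = rng.normal(0, 1.0, (200, 16))  # toy inputs
y  = rng.integers(0, 3, 200)        # toy labels (random placeholder data)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def quantize(w, bits):
    """Uniform fixed-point quantization of a weight column to `bits` bits."""
    scale = 2 ** (bits - 1)
    return np.clip(np.round(w * scale) / scale, -1.0, 1.0 - 1.0 / scale)

def accuracy(bits_per_hidden):
    """Evaluate the toy model with per-hidden-neuron weight precision."""
    W1_q = np.column_stack(
        [quantize(W1[:, j], b) for j, b in enumerate(bits_per_hidden)]
    )
    hidden = sigmoid(X @ W1_q)
    logits = hidden @ W2
    return np.mean(np.argmax(logits, axis=1) == y)

# Criticality proxy (assumption): hidden neurons with smaller outgoing
# weight norms are treated as less critical and reduced in precision first.
criticality = np.linalg.norm(W2, axis=1)
order = np.argsort(criticality)

FULL_BITS, MIN_BITS = 16, 2
bits = np.full(W1.shape[1], FULL_BITS)
baseline = accuracy(bits)
max_loss = 0.02  # user-specified accuracy loss constraint

# Greedy search: lower each neuron's precision while the constraint holds.
for j in order:
    while bits[j] > MIN_BITS:
        trial = bits.copy()
        trial[j] -= 1
        if baseline - accuracy(trial) <= max_loss:
            bits = trial
            # AX-DBN also retrains incrementally here; omitted in this toy.
        else:
            break

print("Per-neuron bit-widths:", bits)
```

In a realistic setting, the accuracy check would run on a validation set and each accepted precision reduction would be followed by a short retraining pass, which is what allows the search to push precision lower without violating the accuracy loss constraint.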

