Significance Driven Hybrid 8T-6T SRAM for Energy-Efficient Synaptic Storage in Artificial Neural Networks

Multilayered artificial neural networks (ANNs) have found widespread utility in classification and recognition applications. The scale and complexity of such networks, together with the inadequacies of general-purpose computing platforms, have led to significant interest in efficient hardware implementations. In this work, we focus on designing energy-efficient on-chip storage for the synaptic weights. To minimize the power consumption of typical digital CMOS implementations of such large-scale networks, the digital neurons can be operated reliably at scaled voltages by reducing the clock frequency. In contrast, on-chip synaptic storage designed using a conventional 6T SRAM is susceptible to bitcell failures at reduced voltages. However, the intrinsic resiliency of neural networks to small synaptic weight perturbations enables us to scale the operating voltage of the 6T SRAM. Our analysis on a widely used digit recognition dataset indicates that the supply voltage can be scaled by 200mV below the nominal operating voltage (950mV) with practically no loss (less than 0.5%) in accuracy for the simulated technology node. Scaling beyond that causes substantial accuracy degradation owing to the increased probability of failures in the most significant bits (MSBs) of the synaptic weights. We therefore propose a significance-driven hybrid 8T-6T SRAM, wherein the sensitive MSBs are stored in 8T bitcells that remain robust at scaled voltages due to their decoupled read and write paths. To further minimize the area penalty, we present a synaptic-sensitivity-driven hybrid memory architecture consisting of multiple 8T-6T SRAM banks. Our circuit-to-system-level simulation framework shows that the proposed synaptic-sensitivity-driven architecture provides a 30.91% reduction in memory access power at a modest area overhead, for less than 1% loss in classification accuracy.
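To make the significance-driven protection scheme concrete, the sketch below simulates voltage-scaling-induced bit flips in quantized synaptic weights, with the top MSBs protected as if stored in failure-free 8T bitcells and the remaining LSBs exposed to random 6T bitcell failures. This is a minimal illustration under stated assumptions, not the paper's simulation framework: the 8-bit weight width, the number of protected MSBs, and the flip probability p_fail are hypothetical placeholders for the voltage-dependent 6T failure rate.

```python
import numpy as np

def inject_bitcell_failures(weights_q, total_bits=8, protected_msbs=2,
                            p_fail=1e-3, rng=None):
    """Flip each unprotected bit of each quantized weight with prob. p_fail.

    weights_q: integer array of unsigned quantized weights in [0, 2**total_bits).
    protected_msbs: number of MSBs assumed to sit in robust 8T bitcells.
    p_fail: hypothetical per-bit failure probability of a 6T cell at scaled voltage.
    """
    rng = np.random.default_rng() if rng is None else rng
    corrupted = weights_q.copy()
    # Only the low-order bits live in 6T cells and are subject to failures.
    for bit in range(total_bits - protected_msbs):
        flips = rng.random(weights_q.shape) < p_fail
        corrupted = np.where(flips, corrupted ^ (1 << bit), corrupted)
    return corrupted

# Toy usage: compare the mean weight perturbation with 0 vs. 2 protected MSBs.
rng = np.random.default_rng(0)
w = rng.integers(0, 256, size=10_000)  # 8-bit synaptic weights
for k in (0, 2):
    w_bad = inject_bitcell_failures(w, protected_msbs=k, p_fail=0.01, rng=rng)
    print(f"protected MSBs = {k}: mean |error| = {np.abs(w_bad - w).mean():.2f}")
```

Because a flip in bit b perturbs a weight by 2^b, shielding even a couple of MSBs cuts the expected perturbation magnitude by roughly an order of magnitude in this toy setup, which is the intuition behind placing only the MSBs in the larger 8T cells.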
