
Exploiting Inherent Error-Resiliency of Neuromorphic Computing to achieve Extreme Energy-Efficiency through Mixed-Signal Neurons

by Baibhab Chatterjee et al.

Neuromorphic computing, inspired by the brain, promises extreme energy-efficiency for certain classes of learning tasks, such as classification and pattern recognition. The performance and power consumption of a neuromorphic system depend heavily on the choice of neuron architecture. Digital neurons (Dig-N) are conventionally accurate and efficient at high speeds, but suffer from high leakage currents due to the large number of transistors in a large design. Analog/mixed-signal neurons, on the other hand, are prone to noise, variability and mismatch, but can lead to extremely low-power designs. In this work, we analyze, compare and contrast existing neuron architectures with a proposed mixed-signal neuron (MS-N) in terms of performance, power and noise, thereby demonstrating the applicability of the proposed MS-N for achieving extreme energy-efficiency in neuromorphic computing. The proposed MS-N is implemented in 65 nm CMOS technology and exhibits > 100X better energy-efficiency across all frequencies than two traditional digital neurons synthesized in the same technology node. We also demonstrate that the inherent error-resiliency of a fully connected or even a convolutional neural network (CNN) can handle the noise as well as the manufacturing non-idealities of the MS-N up to a certain degree. Notably, a system-level implementation on the MNIST dataset exhibits a worst-case increase in classification error of only 2.1% when the integrated noise power in the bandwidth is ~0.1 uV², along with ±3σ variation and mismatch introduced in the transistor parameters, for the proposed neuron with 8-bit precision.
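The noise-resilience claim above can be illustrated with a minimal NumPy sketch: inject Gaussian noise into the hidden pre-activations of a network and measure how often the classification decision (argmax) changes. This is only a toy stand-in, using a random untrained two-layer network and an assumed additive-Gaussian noise model, not the paper's MS-N circuit, trained CNN, or MNIST setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny random MLP standing in for a trained classifier (illustrative only):
# 64 inputs -> 32 hidden units -> 10 classes.
W1 = rng.normal(0.0, 0.1, (64, 32))
W2 = rng.normal(0.0, 0.1, (32, 10))

def classify(x, noise_std=0.0):
    """Forward pass; Gaussian noise on the hidden pre-activations models
    analog neuron noise (an assumed model, not the paper's circuit)."""
    h = x @ W1
    h = h + rng.normal(0.0, noise_std, h.shape)  # inject neuron noise
    h = np.maximum(h, 0.0)                       # ReLU
    return (h @ W2).argmax(axis=1)               # predicted class labels

# Synthetic inputs; compare clean vs. noisy decisions.
x = rng.normal(0.0, 1.0, (500, 64))
clean = classify(x)
noisy = classify(x, noise_std=0.05)  # noise small vs. signal (std ~0.8)
agreement = (clean == noisy).mean()
print(f"decisions unchanged under noise: {agreement:.1%}")
```

Because the argmax decision only flips when noise overcomes the margin between the top two logits, most predictions survive small perturbations, which is the same error-resiliency mechanism the abstract exploits at the system level.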


An Energy-Efficient Mixed-Signal Neuron for Inherently Error-Resilient Neuromorphic Systems

This work presents the design and analysis of a mixed-signal neuron (MS-...

THOR – A Neuromorphic Processor with 7.29G TSOP^2/mm^2Js Energy-Throughput Efficiency

Neuromorphic computing using biologically inspired Spiking Neural Networ...

A Low-Power Domino Logic Architecture for Memristor-Based Neuromorphic Computing

We propose a domino logic architecture for memristor-based neuromorphic ...

Wave-based extreme deep learning based on non-linear time-Floquet entanglement

Wave-based analog signal processing holds the promise of extremely fast,...

Reliability of Event Timing in Silicon Neurons

Analog, low-voltage electronics show great promise in producing silicon ...

Improving Noise Tolerance of Mixed-Signal Neural Networks

Mixed-signal hardware accelerators for deep learning achieve orders of m...

End-to-End Memristive HTM System for Pattern Recognition and Sequence Prediction

Neuromorphic systems that learn and predict from streaming inputs hold s...