Neuromorphic Deep Learning Machines

12/16/2016
by Emre Neftci, et al.

An ongoing challenge in neuromorphic computing is to devise general, computationally efficient models of inference and learning that are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from the inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient-descent backpropagation (BP) rule, typically relies on the immediate availability of network-wide information stored in high-precision memory, and on precise operations that are difficult to realize in neuromorphic hardware. Remarkably, recent work showed that exact backpropagated weights are not essential for learning deep representations. Random BP replaces the feedback weights with fixed random ones, encouraging the network to adjust its feed-forward weights to learn pseudo-inverses of the (random) feedback weights. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses error-modulated synaptic plasticity to learn deep representations in neuromorphic computing hardware. Using a two-compartment leaky Integrate-and-Fire (I&F) neuron, the rule requires only one addition and two comparisons per synaptic weight update, making it well suited to implementation in digital or mixed-signal neuromorphic hardware. Our results show that eRBP learns deep representations rapidly, achieving classification accuracies nearly identical to those of artificial neural network simulations on GPUs, while remaining robust to quantization of neural and synaptic state during learning.
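The core idea eRBP builds on, random BP (also known as feedback alignment), can be illustrated with a short sketch: the backward pass routes the output error through a fixed random matrix B instead of the transpose of the forward weights, and the forward weights nonetheless adapt so that learning proceeds. The network sizes, weight scales, learning rate, and toy task below are illustrative assumptions, not the paper's actual experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid, n_out = 8, 16, 2
W = rng.normal(0, 0.5, (n_hid, n_in))   # input -> hidden (learned)
V = rng.normal(0, 0.5, (n_out, n_hid))  # hidden -> output (learned)
B = rng.normal(0, 0.5, (n_hid, n_out))  # FIXED random feedback, replaces V.T

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy regression task: random inputs, targets from a random fixed mapping.
X = rng.normal(size=(64, n_in))
T = sigmoid(X @ rng.normal(size=(n_in, n_out)))

lr = 0.5
losses = []
for epoch in range(200):
    h = sigmoid(X @ W.T)              # hidden activity
    y = sigmoid(h @ V.T)              # network output
    e = y - T                         # output error
    losses.append(0.5 * float(np.mean(e ** 2)))

    delta_out = e * y * (1 - y)       # ordinary delta rule at the output
    dV = delta_out.T @ h / len(X)
    # Hidden error is projected through the *random* matrix B, not V.T:
    delta_hid = (delta_out @ B.T) * h * (1 - h)
    dW = delta_hid.T @ X / len(X)

    V -= lr * dV
    W -= lr * dW
```

Over training, the loss drops because the forward weights drift into alignment with the random feedback weights, which is exactly the property eRBP exploits to avoid transporting exact gradients through neuromorphic hardware.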


Related research:

- Neural and Synaptic Array Transceiver: A Brain-Inspired Computing Framework for Embedded Learning (09/29/2017). Embedded, continual learning for autonomous and adaptive behavior is a k...
- Neuromorphic Hebbian learning with magnetic tunnel junction synapses (08/21/2023). Neuromorphic computing aims to mimic both the function and structure of ...
- A New Learning Method for Inference Accuracy, Core Occupation, and Performance Co-optimization on TrueNorth Chip (04/03/2016). IBM TrueNorth chip uses digital spikes to perform neuromorphic computing...
- Towards On-Chip Bayesian Neuromorphic Learning (05/05/2020). If edge devices are to be deployed to critical applications where their ...
- Towards truly local gradients with CLAPP: Contrastive, Local And Predictive Plasticity (10/16/2020). Back-propagation (BP) is costly to implement in hardware and implausible...
- Adaptive Learning Rule for Hardware-based Deep Neural Networks Using Electronic Synapse Devices (07/20/2017). In this paper, we propose a learning rule based on a back-propagation (B...
- Evaluating complexity and resilience trade-offs in emerging memory inference machines (02/25/2020). Neuromorphic-style inference only works well if limited hardware resourc...
