Towards On-Chip Bayesian Neuromorphic Learning

05/05/2020
by Nathan Wycoff, et al.

If edge devices are to be deployed in critical applications where their decisions could have serious financial, political, or public-health consequences, they will need a way to signal when they are unsure how to react to their environment. For instance, a lost delivery drone that is uncertain about how to complete its delivery could return to a distribution center or contact the client, rather than taking the action that is merely "most likely" correct. The stakes are even higher in health-care or military applications. However, the brain-realistic temporal credit assignment problem that neuromorphic computing algorithms must solve is difficult: the double role weights play in backpropagation-based learning, dictating how the network reacts to both input and feedback, needs to be decoupled. e-prop 1 is a promising learning algorithm that tackles this with Broadcast Alignment (a technique in which feedback is carried by fixed random weights in place of the network weights) and accumulated local information. We investigate under what conditions a Bayesian loss term can be expressed in a similar fashion, proposing an algorithm that can likewise be computed using only local information and which is thus no more difficult to implement in hardware. This algorithm is demonstrated on a store-recall problem, suggesting that it can learn well-calibrated uncertainty about decisions made over time.
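To make the local structure of such an update concrete, the sketch below shows an e-prop-style weight update with Broadcast Alignment in plain NumPy: the output error is projected back through a fixed random matrix rather than the transposed readout weights, and each synapse only needs its own accumulated eligibility trace. This is a minimal illustration under assumed simplifications; the variable names, the simplified leaky-integrate-and-fire dynamics, and the surrogate derivative are the author's illustrative choices, not the paper's exact formulation, and the Bayesian loss term discussed in the abstract is omitted.

```python
# Minimal sketch (illustrative, not the paper's exact formulation) of an
# e-prop-style update with Broadcast Alignment: feedback flows through a
# fixed random matrix B, and each synapse uses only local information.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_rec, n_out = 20, 50, 2
T = 100                                      # number of time steps
alpha = 0.9                                  # leak factor of the LIF-like units
lr = 1e-3                                    # learning rate

W_in = rng.normal(0, 0.1, (n_rec, n_in))     # input weights (learned)
W_out = rng.normal(0, 0.1, (n_out, n_rec))   # readout weights
B = rng.normal(0, 0.1, (n_rec, n_out))       # fixed random broadcast weights

x = rng.normal(size=(T, n_in))               # example input trace
y_target = np.zeros((T, n_out)); y_target[:, 0] = 1.0

v = np.zeros(n_rec)                          # membrane potentials
elig = np.zeros((n_rec, n_in))               # one eligibility trace per synapse
grad = np.zeros_like(W_in)

for t in range(T):
    v = alpha * v + W_in @ x[t]              # simplified leaky integration
    z = (v > 1.0).astype(float)              # spike if threshold crossed
    v -= z                                   # soft reset after a spike
    y = W_out @ z                            # readout
    err = y - y_target[t]                    # output error at this step

    # Local eligibility trace: low-pass filtered presynaptic activity,
    # gated by a surrogate derivative of the spike nonlinearity.
    psi = np.maximum(0.0, 1.0 - np.abs(v - 1.0))
    elig = alpha * elig + np.outer(psi, x[t])

    # Broadcast Alignment: project the error through the fixed random B
    # rather than through W_out.T, giving one learning signal per unit.
    learning_signal = B @ err
    grad += learning_signal[:, None] * elig  # purely local product

W_in -= lr * grad                            # apply the accumulated update
```

Because the learning signal for each unit is a fixed random projection of the output error and the eligibility trace depends only on pre- and post-synaptic quantities, every term in the update is available at the synapse, which is the property the proposed Bayesian variant is designed to preserve.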
