Iterative temporal differencing with random synaptic feedback weights support error backpropagation for deep learning

07/15/2019
by Aras R. Dargazany, et al.

This work shows that a differentiable activation function is no longer necessary for error backpropagation. The derivative of the activation function can be replaced by iterative temporal differencing combined with fixed random synaptic feedback alignment. Combining fixed random synaptic feedback alignment with iterative temporal differencing transforms traditional error backpropagation into a more biologically plausible approach to learning deep neural network architectures. This can be a significant step toward integrating STDP-based error backpropagation into deep learning.
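The mechanism described in the abstract can be illustrated with a short sketch. The Python/NumPy snippet below is a minimal, hypothetical illustration (not the authors' code): the output error is sent backward through a fixed random feedback matrix instead of the transposed forward weights, and the activation derivative is replaced by the temporal difference between the current and previous hidden activations. Layer sizes, the learning rate, and the exact differencing rule are illustrative assumptions.

```python
# Sketch: feedback alignment with the activation derivative replaced by an
# iterative temporal difference of activations. Illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid, n_out = 4, 16, 2
W1 = rng.normal(0, 0.1, (n_hid, n_in))   # forward weights, layer 1 (trained)
W2 = rng.normal(0, 0.1, (n_out, n_hid))  # forward weights, layer 2 (trained)
B2 = rng.normal(0, 0.1, (n_hid, n_out))  # fixed random feedback weights (never trained)

lr = 0.05
h_prev = np.zeros(n_hid)                 # hidden activations from the previous iteration

def f(z):
    # sigmoid activation; note its derivative is never computed below
    return 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=n_in)
target = np.array([1.0, 0.0])

for step in range(100):
    # forward pass
    h = f(W1 @ x)
    y = W2 @ h

    # output error
    e = y - target

    # hidden "delta": error projected through the fixed random matrix B2,
    # gated by the temporal difference of activations instead of f'(z)
    delta_h = (B2 @ e) * (h - h_prev)
    h_prev = h

    # simple SGD weight updates
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)
```

In this sketch the only departure from standard feedback alignment is the gating term: `(h - h_prev)` stands in for the analytic derivative of the activation function, which is the substitution the abstract refers to as iterative temporal differencing.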

Related research

06/10/2021 - Convergence and Alignment of Gradient Descent with Random Back Propagation Weights
Stochastic gradient descent with backpropagation is the workhorse of art...

06/03/2022 - A Robust Backpropagation-Free Framework for Images
While current deep learning algorithms have been successful for a wide v...

12/30/2017 - Dendritic error backpropagation in deep cortical microcircuits
Animal behaviour depends on learning to associate sensory stimuli with t...

06/23/2020 - Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures
Despite being the workhorse of deep learning, the backpropagation algori...

06/04/2023 - Random Feedback Alignment Algorithms to train Neural Networks: Why do they Align?
Feedback alignment algorithms are an alternative to backpropagation to t...

12/08/2016 - Learning in the Machine: Random Backpropagation and the Learning Channel
Random backpropagation (RBP) is a variant of the backpropagation algorit...

05/05/2020 - Towards On-Chip Bayesian Neuromorphic Learning
If edge devices are to be deployed to critical applications where their ...
