Error Forward-Propagation: Reusing Feedforward Connections to Propagate Errors in Deep Learning

08/09/2018
by Adam A. Kohan, et al.

We introduce Error Forward-Propagation, a biologically plausible mechanism for propagating error feedback forward through a network. With this mechanism, architectural constraints on connectivity for error feedback are virtually eliminated: no systematic backward connectivity is used or needed to deliver the error signal. Feedback is thought to underlie learning in the brain as a means of assigning credit to neurons earlier in the forward pathway for their contribution to the final output, but how the brain solves this credit assignment problem remains unclear. In machine learning, error backpropagation is a highly successful mechanism for credit assignment in deep multilayered networks, but it requires symmetric reciprocal connectivity for every neuron. From a biological perspective, there is no evidence of such an architectural constraint, which makes backpropagation implausible as a model of learning in the brain. The constraint is relaxed by models that use random feedback weights: these avoid symmetric weights and reciprocal connections, but still require a backward connectivity pattern for every neuron. In this paper, we remove the architectural constraint almost entirely, requiring only a single backward loop connection for effective error feedback. We propose reusing the forward connections to deliver the error feedback by feeding the outputs back into the input-receiving layer. This mechanism, Error Forward-Propagation, is a plausible basis for how error feedback could occur deep in the brain, independent of and yet in support of the functionality underlying intricate network architectures. We show experimentally that recurrent neural networks with two and three hidden layers can be trained with Error Forward-Propagation on the MNIST and Fashion MNIST datasets, achieving generalization errors of 1.90% and 11%, respectively.
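The wiring described in the abstract admits a compact illustration. Below is a minimal NumPy sketch of the idea under simplifying assumptions: a single feedforward pass stands in for the paper's recurrent dynamics, and the loop weight matrix Wloop, the tanh units, the squared-error loss, and the layer sizes are all illustrative choices, not the paper's exact architecture or update rule. The point it demonstrates is the routing of the error: the output error re-enters the network through a loop connection into the input-receiving layer and is carried to each hidden layer by the same forward weights, rather than by transposed weights (backpropagation) or fixed random feedback matrices (feedback alignment).

```python
# Minimal sketch of Error Forward-Propagation (illustrative, not the
# paper's exact method): the output error is fed through a loop
# connection into the input-receiving layer and then propagated FORWARD
# through the same weights that carry the forward computation.
import numpy as np

rng = np.random.default_rng(0)

# Two hidden layers; the output loops back into the input-receiving layer.
n_in, n_h1, n_h2, n_out = 784, 256, 256, 10
W1    = rng.normal(0, 0.05, (n_h1, n_in))    # input    -> hidden 1
W2    = rng.normal(0, 0.05, (n_h2, n_h1))    # hidden 1 -> hidden 2
W3    = rng.normal(0, 0.05, (n_out, n_h2))   # hidden 2 -> output
Wloop = rng.normal(0, 0.05, (n_in, n_out))   # output   -> input layer (loop)

lr = 0.01
x = rng.normal(0, 1, n_in)        # stand-in for one MNIST image vector
y_target = np.eye(n_out)[3]       # stand-in one-hot label

for step in range(100):
    # --- forward pass ---
    h1 = np.tanh(W1 @ x)
    h2 = np.tanh(W2 @ h1)
    y  = W3 @ h2                  # linear readout
    e  = y - y_target             # output error (gradient of squared error)

    # --- error forward-propagation ---
    # The error re-enters through the loop connection and is carried
    # forward by the same weights, giving each layer a local error signal.
    f0 = Wloop @ e                # error arriving at the input layer
    f1 = (W1 @ f0) * (1 - h1**2)  # local error at hidden 1 (tanh derivative)
    f2 = (W2 @ f1) * (1 - h2**2)  # local error at hidden 2

    # --- local, delta-rule-style weight updates ---
    W1 -= lr * np.outer(f1, x)
    W2 -= lr * np.outer(f2, h1)
    W3 -= lr * np.outer(e,  h2)
```

Because the teaching signal travels through the same connections that carry the forward computation, no neuron needs a dedicated backward pathway; the only addition to the architecture is the output-to-input loop.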


