GAIT-prop: A biologically plausible learning rule derived from backpropagation of error

06/11/2020
by Nasir Ahmad, et al.

Traditional backpropagation of error, though a highly successful algorithm for learning in artificial neural network models, relies on features that are biologically implausible for learning in real neural circuits. An alternative, target propagation, addresses this implausibility by using a top-down model of neural activity to convert an error at the output of a neural network into layer-wise, biologically plausible 'targets' for every unit. These targets can then be used to produce weight updates for network training. However, target propagation has so far been proposed heuristically, without a demonstrable equivalence to backpropagation. Here, we derive an exact correspondence between backpropagation and a modified form of target propagation (GAIT-prop), in which the target is a small perturbation of the forward pass. Specifically, backpropagation and GAIT-prop give identical weight updates when the synaptic weight matrices are orthogonal. In a series of simple computer vision experiments, we show near-identical performance between backpropagation and GAIT-prop when a soft orthogonality-inducing regularizer is used.
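
To make the core claim concrete, here is a minimal NumPy sketch, an illustration under simplifying assumptions rather than the paper's implementation. For a linear two-layer network whose top weight matrix is orthogonal, nudging the output toward the label and mapping that target back through the transpose yields a hidden-layer target whose local delta rule matches the backpropagation gradient up to the perturbation size. The layer sizes and the perturbation factor gamma are illustrative choices; GAIT-prop itself is derived for nonlinear, multi-layer networks.

```python
# Minimal sketch of the backprop / perturbed-target-prop correspondence
# for a linear two-layer network with an orthogonal top weight matrix.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 5, 4, 4

x = rng.normal(size=(n_in, 1))            # input
t = rng.normal(size=(n_out, 1))           # desired output
W = rng.normal(size=(n_hid, n_in))        # first-layer weights (to be updated)
V, _ = np.linalg.qr(rng.normal(size=(n_out, n_hid)))  # orthogonal top weights

# Forward pass
h = W @ x
y = V @ h

# Backprop update for W under the loss L = 0.5 * ||y - t||^2
grad_bp = V.T @ (y - t) @ x.T

# Target-prop-style update: perturb the output slightly toward the label,
# map the target back through V.T (the inverse, since V is orthogonal),
# then apply a local delta rule at the hidden layer.
gamma = 0.1                               # size of the target perturbation (assumed)
y_target = y - gamma * (y - t)
h_target = V.T @ y_target
grad_tp = (h - h_target) @ x.T            # gradient of 0.5 * ||h - h_target||^2

# The two updates agree up to the factor gamma.
print(np.allclose(grad_tp, gamma * grad_bp))   # True
```

The soft orthogonality-inducing regularizer mentioned in the abstract (e.g., a penalty of the form ||W^T W - I||^2 added to the training loss) keeps the learned weight matrices close to this orthogonal regime, which is the condition under which the two updates coincide.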


Related research

02/23/2021 · Scaling up learning with GAIT-prop
Backpropagation of error (BP) is a widely used and highly successful lea...

11/30/2020 · A biologically plausible neural network for local supervision in cortical microcircuits
The backpropagation algorithm is an invaluable tool for training artific...

01/31/2022 · Towards Scaling Difference Target Propagation by Learning Backprop Targets
The development of biologically-plausible learning algorithms is importa...

06/15/2020 · Equilibrium Propagation for Complete Directed Neural Networks
Artificial neural networks, one of the most successful approaches to sup...

11/24/2021 · Information Bottleneck-Based Hebbian Learning Rule Naturally Ties Working Memory and Synaptic Updates
Artificial neural networks have successfully tackled a large variety of ...

07/29/2020 · Deriving Differential Target Propagation from Iterating Approximate Inverses
We show that a particular form of target propagation, i.e., relying on l...

09/30/2021 · Biologically Plausible Training Mechanisms for Self-Supervised Learning in Deep Networks
We develop biologically plausible training mechanisms for self-supervise...
