Credit Assignment Through Broadcasting a Global Error Vector

06/08/2021
by David G. Clark, et al.

Backpropagation (BP) uses detailed, unit-specific feedback to train deep neural networks (DNNs) with remarkable success. The fact that biological neural circuits appear to perform credit assignment, yet cannot implement BP, implies the existence of other powerful learning algorithms. Here, we explore the extent to which a globally broadcast learning signal, coupled with local weight updates, enables training of DNNs. We present both a learning rule, called global error-vector broadcasting (GEVB), and a class of DNNs, called vectorized nonnegative networks (VNNs), in which this learning rule operates. VNNs have vector-valued units and nonnegative weights past the first layer. The GEVB learning rule generalizes three-factor Hebbian learning, updating each weight by an amount proportional to the inner product of the presynaptic activation and a globally broadcast error vector when the postsynaptic unit is active. We prove that these weight updates are matched in sign to the gradient, enabling accurate credit assignment. Moreover, at initialization, these updates are exactly proportional to the gradient in the limit of infinite network width. GEVB matches the performance of BP in VNNs, and in some cases outperforms direct feedback alignment (DFA) applied in conventional networks. Unlike DFA, GEVB successfully trains convolutional layers. Altogether, our theoretical and empirical results point to a surprisingly powerful role for a global learning signal in training DNNs.
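To make the rule concrete, here is a minimal NumPy sketch of one GEVB step for a single vectorized layer. The layer sizes, the gating rule for deciding whether a postsynaptic unit is active (here, the sum of its vector components being positive), the choice of error vector, and the clipping used to keep weights nonnegative are illustrative assumptions, not the paper's exact definitions; the core idea it illustrates is that each scalar weight moves in proportion to the inner product of the presynaptic vector activation and the broadcast error vector whenever the postsynaptic unit is active.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pre, n_post, K = 64, 32, 10  # units per layer; K = error-vector dimension

# Vector-valued presynaptic activations (one K-dim vector per unit).
z_pre = rng.standard_normal((n_pre, K))

# Scalar weights, kept nonnegative (the VNN constraint past the first layer).
W = np.abs(rng.standard_normal((n_post, n_pre)))

# Forward pass: each postsynaptic pre-activation is a weighted sum of vectors.
a_post = W @ z_pre                   # shape (n_post, K)

# Assumed gating rule: a unit is "active" if its components sum to > 0.
active = a_post.sum(axis=1) > 0
z_post = a_post * active[:, None]    # vectorized ReLU-style output

# Globally broadcast error vector, e.g. prediction minus one-hot target
# in a classification setting (an assumption for this sketch).
e = rng.standard_normal(K)

# GEVB update: weight W[i, j] changes by <e, z_pre[j]> when unit i is active.
lr = 1e-2
dW = np.outer(active.astype(float), z_pre @ e)   # shape (n_post, n_pre)
W = np.maximum(W - lr * dW, 0.0)     # descent step, projected to stay >= 0
```

Note that the update uses only the presynaptic activation, a local gating signal, and the single broadcast vector e, so no unit-specific backward pass is needed.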


