Spike-based causal inference for weight alignment

10/03/2019
by Jordan Guerguiev, et al.

In artificial neural networks trained with gradient descent, the weights used for processing stimuli are also used during backward passes to calculate gradients. For the real brain to approximate gradients, gradient information would have to be propagated separately, such that one set of synaptic weights is used for processing and another set is used for backward passes. This produces the so-called "weight transport problem" for biological models of learning, where the backward weights used to calculate gradients need to mirror the forward weights used to process stimuli. This weight transport problem has been considered so hard that popular proposals for biological learning assume that the backward weights are simply random, as in the feedback alignment algorithm. However, such random weights do not appear to work well for large networks. Here we show how the discontinuity introduced in a spiking system can lead to a solution to this problem. The resulting algorithm is a special case of an estimator used for causal inference in econometrics, regression discontinuity design. We show empirically that this algorithm rapidly makes the backward weights approximate the forward weights. As the backward weights become correct, this improves learning performance over feedback alignment on tasks such as Fashion-MNIST and CIFAR-10. Our results demonstrate that a simple learning rule in a spiking network can allow neurons to produce the right backward connections and thus solve the weight transport problem.
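To make the estimator concrete, here is a minimal sketch of the regression discontinuity idea in a toy single-neuron setting. The setup (a scalar forward weight `w_fwd`, a fixed spiking threshold, a smooth confound) and all variable names are illustrative assumptions, not the paper's implementation, which applies an online, per-synapse version of this estimate inside a full spiking network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: one hidden neuron with forward weight w_fwd onto
# a downstream unit. The backward weight should come to approximate w_fwd
# without ever reading it directly (no "weight transport" allowed).
w_fwd = 1.5
threshold = 1.0    # spiking threshold on the neuron's input drive
window = 0.25      # only trials with |drive - threshold| < window are used

drives = rng.normal(threshold, 0.5, size=20_000)
spikes = (drives >= threshold).astype(float)
# Downstream activity: the causal effect of the spike (through w_fwd) plus
# a smooth confound that also depends on the drive. A naive correlation
# between spiking and downstream activity would therefore be biased.
downstream = (w_fwd * spikes + 0.8 * drives
              + rng.normal(0.0, 0.1, size=drives.size))

# Regression discontinuity design: restrict to trials near threshold and
# fit y ~ a + c*x + beta*s, where x is the centered drive and s the spike
# indicator. The jump beta at the threshold estimates the causal effect of
# the spike (i.e. w_fwd), because the confound varies smoothly across it.
near = np.abs(drives - threshold) < window
x, s, y = drives[near] - threshold, spikes[near], downstream[near]
X = np.column_stack([np.ones_like(x), x, s])
a, c, beta = np.linalg.lstsq(X, y, rcond=None)[0]

b_backward = beta  # set the backward weight from the causal estimate
print(f"forward weight: {w_fwd:.3f}, RDD estimate: {b_backward:.3f}")
```

Run on this toy data, `beta` lands close to `w_fwd` even though the confound makes the raw spike/downstream correlation misleading; the batch least-squares fit here stands in for the paper's online rule, which nudges each backward weight toward its running causal estimate.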


Related research

Using Weight Mirrors to Improve Feedback Alignment (04/10/2019)
Current algorithms for deep learning probably cannot run in the brain be...

Deep Learning without Weight Transport (04/10/2019)
Current algorithms for deep learning probably cannot run in the brain be...

Learning without feedback: Direct random target projection as a feedback-alignment algorithm with layerwise feedforward training (09/03/2019)
While the backpropagation of error algorithm allowed for a rapid rise in...

Error-driven Input Modulation: Solving the Credit Assignment Problem without a Backward Pass (01/27/2022)
Supervised learning in artificial neural networks typically relies on ba...

Contrastive Hebbian Learning with Random Feedback Weights (06/19/2018)
Neural networks are commonly trained to make predictions through learnin...

Constraints on Hebbian and STDP learned weights of a spiking neuron (12/14/2020)
We analyse mathematically the constraints on weights resulting from Hebb...

Natural-gradient learning for spiking neurons (11/23/2020)
In many normative theories of synaptic plasticity, weight updates implic...
