Error-driven Input Modulation: Solving the Credit Assignment Problem without a Backward Pass

by Giorgia Dellaferrera, et al.

Supervised learning in artificial neural networks typically relies on backpropagation, where the weights are updated based on the error-function gradients and sequentially propagated from the output layer to the input layer. Although this approach has proven effective in a wide range of applications, it lacks biological plausibility in many regards, including the weight symmetry problem, the dependence of learning on non-local signals, the freezing of neural activity during error propagation, and the update locking problem. Alternative training schemes (such as sign symmetry, feedback alignment, and direct feedback alignment) have been introduced, but they invariably rely on a backward pass and therefore cannot solve all of these issues simultaneously. Here, we propose to replace the backward pass with a second forward pass in which the input signal is modulated based on the error of the network. We show that this novel learning rule comprehensively addresses all of the above-mentioned issues and can be applied to both fully connected and convolutional models. We test this learning rule on MNIST, CIFAR-10, and CIFAR-100. These results help incorporate biological principles into machine learning.
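The abstract's core idea can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; it is a toy two-layer network in which the network's dimensions, learning rate, and the fixed random projection `F` (mapping the output error back to the input space) are illustrative assumptions. The error from a first forward pass modulates the input of a second forward pass, and weights are updated from purely local quantities, with no backward pass:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer network: 784 -> 128 -> 10 (dimensions are illustrative)
n_in, n_hid, n_out = 784, 128, 10
W1 = rng.normal(0, 0.05, (n_hid, n_in))
W2 = rng.normal(0, 0.05, (n_out, n_hid))
# Fixed random matrix projecting the output error back to the input space
F = rng.normal(0, 0.05, (n_in, n_out))

def forward(x):
    h = np.maximum(0, W1 @ x)  # ReLU hidden layer
    y = W2 @ h                 # linear readout (softmax omitted for brevity)
    return h, y

def train_step(x, target, lr=0.01):
    global W1, W2
    # First (standard) forward pass
    h1, y1 = forward(x)
    e = y1 - target            # output error
    # Second forward pass on the error-modulated input
    x_mod = x + F @ e
    h2, _ = forward(x_mod)
    # Local updates: each layer uses only its own activations from the
    # two passes and the input it actually received
    W1 -= lr * np.outer(h1 - h2, x_mod)
    W2 -= lr * np.outer(e, h2)
    return e

# One training step on a random example
x = rng.normal(0, 1, n_in)
t = np.zeros(n_out)
t[3] = 1.0
e = train_step(x, t)
```

Note that both passes run in the forward direction only, so no transposed weight copies (weight symmetry) and no frozen activity during error propagation are required.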



