Hebbian Deep Learning Without Feedback

09/23/2022
by Adrien Journé et al.

Recent approximations to backpropagation (BP) have mitigated many of BP's computational inefficiencies and incompatibilities with biology, but important limitations still remain. Moreover, the approximations significantly decrease accuracy in benchmarks, suggesting that an entirely different approach may be more fruitful. Here, grounded on recent theory for Hebbian learning in soft winner-take-all networks, we present multilayer SoftHebb, an algorithm that trains deep neural networks without any feedback, target, or error signals. As a result, it achieves efficiency by avoiding weight transport, non-local plasticity, time-locking of layer updates, iterative equilibria, and (self-)supervisory or other feedback signals, all of which were necessary in other approaches. Its increased efficiency and biological compatibility do not trade off accuracy compared to state-of-the-art bio-plausible learning, but rather improve it. With up to five hidden layers and an added linear classifier, accuracies on MNIST, CIFAR-10, STL-10, and ImageNet reach 99.4%, 80.3%, 76.2%, and 27.3%, respectively. In conclusion, SoftHebb shows with a radically different approach from BP that Deep Learning over few layers may be plausible in the brain and increases the accuracy of bio-plausible machine learning.
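The mechanism the abstract describes is purely local: each layer computes a softmax-based soft winner-take-all activation and adjusts its weights with a Hebbian, Oja-style rule, so no error or target signal ever flows backward. The NumPy sketch below illustrates that idea; it is a minimal illustration of a soft-WTA Hebbian update under assumed hyperparameters (learning rate, temperature) and an illustrative function name, not the authors' reference implementation.

import numpy as np

def softhebb_layer_update(W, x, lr=0.01, temperature=1.0):
    """One local, feedback-free update of a soft winner-take-all layer.

    W : (n_neurons, n_inputs) weight matrix, updated in place
    x : (n_inputs,) input vector
    Returns the layer's activations, which feed the next layer.
    """
    u = W @ x                                # pre-activations, one per neuron
    e = np.exp((u - u.max()) / temperature)  # numerically stable exponentials
    y = e / e.sum()                          # soft winner-take-all via softmax
    # Hebbian update with an Oja-like decay: weights move toward inputs a
    # neuron responds to, while the u * W term keeps weight norms bounded.
    # Only local quantities (x, u, y, W) appear; no error or target signal.
    W += lr * y[:, None] * (x[None, :] - u[:, None] * W)
    return y

Because every update is local, a deep stack can be trained in a single unsupervised pass, each layer feeding the next, with a linear classifier fitted on top afterward. A toy usage sketch, again with illustrative shapes and data:

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 64))          # stand-in for unlabeled inputs
W1 = 0.1 * rng.standard_normal((32, 64))     # first hidden layer
W2 = 0.1 * rng.standard_normal((16, 32))     # second hidden layer
for x in X:
    h = softhebb_layer_update(W1, x)         # layer 1 learns and activates
    softhebb_layer_update(W2, h)             # layer 2 learns from layer 1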


