Biologically plausible deep learning

11/19/2018
by Yali Amit et al.

Building on the model proposed in Lillicrap et al., we show that deep networks can be trained using biologically plausible Hebbian rules, yielding performance similar to ordinary back-propagation. To overcome the unrealistic symmetry in connections between layers that is implicit in back-propagation, the feedback weights are kept separate from the feedforward weights. But, in contrast to Lillicrap et al., they are also updated with a local rule: a weight is updated solely on the basis of the activity of the units it connects. With fixed feedback weights, performance degrades quickly as network depth increases; when the feedback weights are also updated, performance is comparable to regular back-propagation. We also propose a cost function whose derivative can be represented as a local update rule on the last layer. Convolutional layers are normally updated with weights tied across space, which is not biologically plausible. We show that similar performance is achieved with sparse layers that preserve the connectivity implied by the convolutional layers but whose weights are untied and updated separately. In the linear case, we show theoretically that updating the feedback weights accelerates the convergence of the error to zero.
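To make the learning rule concrete, here is a minimal NumPy sketch of the idea, not the authors' code: a two-layer network in which the backward pass routes the error through a separate feedback matrix B instead of the transpose of the forward weights, and every weight, feedforward or feedback, is updated only from the activities of the two units it connects. The dimensions, the random linear "teacher" T, and the learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper): 20 inputs, 32 hidden, 5 outputs.
n_in, n_hid, n_out = 20, 32, 5

# Feedforward weights W1, W2 and a separate feedback matrix B that
# replaces W2.T in the backward pass, removing the symmetry assumption.
W1 = rng.normal(0.0, 0.1, (n_hid, n_in))
W2 = rng.normal(0.0, 0.1, (n_out, n_hid))
B = rng.normal(0.0, 0.1, (n_hid, n_out))

# Fixed random linear "teacher" providing regression targets.
T = rng.normal(0.0, 0.5, (n_out, n_in))

lr = 0.01
for step in range(5000):
    x = rng.normal(size=(n_in, 1))
    target = T @ x

    # Feedforward pass with a ReLU hidden layer.
    a1 = W1 @ x
    h = np.maximum(a1, 0.0)
    y = W2 @ h

    # Output error under squared-error loss.
    e = y - target

    # Backward pass routes the error through B, not W2.T.
    delta_h = (B @ e) * (a1 > 0)

    # Every update is local: an outer product of the activities on the
    # two sides of the connection being changed.
    W2 -= lr * e @ h.T
    W1 -= lr * delta_h @ x.T
    B -= lr * h @ e.T  # same outer product as W2's update, transposed

# B receives the transpose of W2's updates, so the two pathways become
# increasingly aligned without ever copying weights between them.
cos = np.sum(B * W2.T) / (np.linalg.norm(B) * np.linalg.norm(W2))
print(f"final |error| = {np.linalg.norm(e):.3f}, alignment cos = {cos:.3f}")
```

In a deeper network the same pair of local updates would be applied at every layer; the abstract's observation is that keeping B fixed (as in Lillicrap et al.) degrades with depth, while updating it restores back-propagation-level accuracy.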

Related research

Biologically-plausible learning algorithms can scale to large datasets (11/08/2018)
The backpropagation (BP) algorithm is often thought to be biologically i...

Biologically Plausible Training Mechanisms for Self-Supervised Learning in Deep Networks (09/30/2021)
We develop biologically plausible training mechanisms for self-supervise...

Multi-layer Hebbian networks with modern deep learning frameworks (07/04/2021)
Deep learning networks generally use non-biological learning methods. By...

Fixed-Weight Difference Target Propagation (12/19/2022)
Target Propagation (TP) is a biologically more plausible algorithm than ...

Learning Hierarchically Structured Concepts (09/10/2019)
We study the question of how concepts that have structure get represente...

Hyperplane Arrangements of Trained ConvNets Are Biased (03/17/2020)
We investigate the geometric properties of the functions learned by trai...

Information Bottleneck-Based Hebbian Learning Rule Naturally Ties Working Memory and Synaptic Updates (11/24/2021)
Artificial neural networks have successfully tackled a large variety of ...
