Contrastive Hebbian Learning with Random Feedback Weights

06/19/2018
by Georgios Detorakis, et al.

Neural networks are commonly trained to make predictions through learning algorithms. Contrastive Hebbian learning, which is a powerful rule inspired by gradient backpropagation, is based on Hebb's rule and the contrastive divergence algorithm. It operates in two phases: a forward (or free) phase, in which the data are fed to the network, and a backward (or clamped) phase, in which the target signals are clamped to the output layer of the network and the feedback signals are transformed through the transposed synaptic weight matrices. This implies symmetry at the synaptic level, for which there is no evidence in the brain. In this work, we propose a new variant of the algorithm, called random contrastive Hebbian learning, which does not rely on any synaptic weight symmetry. Instead, it uses random matrices to transform the feedback signals during the clamped phase, and the neural dynamics are described by first-order non-linear differential equations. The algorithm is experimentally verified on a Boolean logic task, classification tasks (handwritten digits and letters), and an autoencoding task. This article also shows how the parameters, especially the random matrices, affect learning. We use pseudospectra analysis to investigate further how random matrices impact the learning process. Finally, we discuss the biological plausibility of the proposed algorithm and how it can give rise to better computational models of learning.
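
The abstract describes the two-phase structure of the rule: a free phase in which the output relaxes on its own, and a clamped phase in which the output is fixed to the target and feedback reaches the hidden units through a fixed random matrix rather than a transposed weight matrix, with the states evolving under first-order non-linear dynamics. The following is a minimal sketch of that idea, not the paper's implementation: the single-hidden-layer architecture, the logistic non-linearity, the Euler integration, and all parameter values (gamma, eta, dt, T, layer sizes) are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def sigma(u):
    # logistic non-linearity, chosen here only for illustration
    return 1.0 / (1.0 + np.exp(-u))

# Illustrative network sizes (not taken from the paper)
n_in, n_hid, n_out = 4, 16, 1

W1 = rng.normal(0, 0.5, (n_hid, n_in))    # input -> hidden (learned)
W2 = rng.normal(0, 0.5, (n_out, n_hid))   # hidden -> output (learned)
G  = rng.normal(0, 0.5, (n_hid, n_out))   # fixed random feedback matrix,
                                          # used in place of W2.T

dt, T, gamma, eta = 0.1, 50, 0.3, 0.05    # illustrative hyperparameters

def run_phase(x_in, target=None):
    """Integrate the first-order dynamics; clamp the output if a target is given."""
    h = np.zeros(n_hid)
    y = np.zeros(n_out)
    for _ in range(T):
        if target is None:
            # free phase: no top-down signal, the output relaxes freely
            h += dt * (-h + sigma(W1 @ x_in))
            y += dt * (-y + sigma(W2 @ h))
        else:
            # clamped phase: output fixed to the target, feedback sent
            # through the fixed random matrix G instead of W2.T
            y = target
            h += dt * (-h + sigma(W1 @ x_in + gamma * G @ y))
    return h, y

# One learning step on a single (input, target) pair
x, t = np.array([0.0, 1.0, 0.0, 1.0]), np.array([1.0])

h_free, y_free = run_phase(x)              # forward / free phase
h_clamp, y_clamp = run_phase(x, target=t)  # backward / clamped phase

# Contrastive Hebbian update: clamped co-activations minus free co-activations
W1 += eta * (np.outer(h_clamp, x) - np.outer(h_free, x))
W2 += eta * (np.outer(y_clamp, h_clamp) - np.outer(y_free, h_free))

The only departure from standard contrastive Hebbian learning in this sketch is the feedback path: during the clamped phase the top-down signal passes through the fixed random matrix G rather than the transpose of W2, which is what removes the weight-symmetry requirement discussed in the abstract.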

Related research

04/10/2019  Using Weight Mirrors to Improve Feedback Alignment
Current algorithms for deep learning probably cannot run in the brain be...

03/17/2023  A Two-Step Rule for Backpropagation
We present a simplified computational rule for the back-propagation form...

04/10/2019  Deep Learning without Weight Transport
Current algorithms for deep learning probably cannot run in the brain be...

10/03/2019  Spike-based causal inference for weight alignment
In artificial neural networks trained with gradient descent, the weights...

06/02/2023  Layer-Wise Feedback Alignment is Conserved in Deep Neural Networks
In the quest to enhance the efficiency and bio-plausibility of training ...

05/30/2023  Synaptic Weight Distributions Depend on the Geometry of Plasticity
Most learning algorithms in machine learning rely on gradient descent to...

08/29/2016  About Learning in Recurrent Bistable Gradient Networks
Recurrent Bistable Gradient Networks are attractor based neural networks...
