SoftHebb: Bayesian inference in unsupervised Hebbian soft winner-take-all networks

07/12/2021
by Timoleon Moraitis et al.

State-of-the-art artificial neural networks (ANNs) require labelled data or feedback between layers, are often biologically implausible, and are vulnerable to adversarial attacks that humans are not susceptible to. On the other hand, Hebbian learning in winner-take-all (WTA) networks is unsupervised, feed-forward, and biologically plausible. However, an objective optimization theory for WTA networks has been missing, except under very limiting assumptions. Here we formally derive such a theory, based on biologically plausible but generic ANN elements. Through Hebbian learning, the network parameters maintain a Bayesian generative model of the data. There is no supervisory loss function, but the network does minimize the cross-entropy between its activations and the input distribution. The key is a "soft" WTA, where there is no absolute "hard" winner neuron, together with a specific type of Hebbian-like plasticity of weights and biases. We confirm our theory in practice: in handwritten digit (MNIST) recognition, our Hebbian algorithm, SoftHebb, minimizes cross-entropy without having access to it, and outperforms the more frequently used hard-WTA-based method. Strikingly, it even outperforms supervised end-to-end backpropagation under certain conditions. Specifically, in a two-layered network, SoftHebb outperforms backpropagation when the training dataset is presented only once, when the testing data is noisy, and under gradient-based adversarial attacks. Adversarial attacks that confuse SoftHebb are also confusing to the human eye. Finally, the model can generate interpolations of objects from its input distribution.
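The soft-WTA plasticity described above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' implementation: the weight update of the form Δw = η · y · (x − u·w) and the softmax ("soft" winner) activation follow the abstract's description, while the specific bias rule below, which nudges exp(b) toward each neuron's mean activation so that b tracks a log-prior estimate, is our assumption for illustration.

```python
import numpy as np

def softmax(u):
    """Soft competition: every neuron gets a share, no absolute winner."""
    z = u - u.max()            # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def softhebb_step(W, b, x, lr=0.01):
    """One unsupervised, feed-forward plasticity step for a soft-WTA layer.

    W : (n_neurons, n_inputs) weight matrix
    b : (n_neurons,) biases
    x : (n_inputs,) one input sample
    """
    u = W @ x + b              # pre-activations
    y = softmax(u)             # soft WTA activations
    # Hebbian-like update; the -u*w term keeps weight rows bounded,
    # similar in spirit to Oja's normalization.
    W = W + lr * y[:, None] * (x[None, :] - u[:, None] * W)
    # Hypothetical bias rule: drive exp(b) toward the mean activation,
    # so b behaves like a running log-prior estimate.
    b = b + lr * (y - np.exp(b))
    return W, b, y
```

Iterating this step over unlabeled samples adapts the weights without any loss function being evaluated, which is the sense in which the network "minimizes cross-entropy without having access to it."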
