
A More Biologically Plausible Local Learning Rule for ANNs

by   Shashi Kant Gupta, et al.

The backpropagation algorithm is often debated for its biological plausibility, and various learning methods for neural architectures have been proposed in search of more biologically plausible alternatives. Most of them attempt to solve the "weight transport problem" and propagate errors backward through the architecture via some alternative mechanism. In this work, we investigate a slightly different approach that uses only local information capturing spike timing, with no backward propagation of errors. The proposed learning rule is derived from the concepts of spike-timing-dependent plasticity and neuronal association. A preliminary evaluation on binary classification of the MNIST and IRIS datasets, using networks with two hidden layers, shows performance comparable to backpropagation. The model learned with this method also shows potentially better adversarial robustness against the FGSM attack than a model trained by backpropagation of cross-entropy loss. The local nature of learning opens the possibility of large-scale distributed and parallel learning in the network. Finally, the proposed method is more biologically grounded and may help in understanding how biological neurons learn different abstractions.
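To make the idea of an error-free, purely local update concrete, the following sketch shows a generic STDP-flavoured Hebbian update that uses only a layer's own pre- and post-synaptic activity. This is an illustrative assumption, not the paper's exact rule (which also incorporates neuronal association with the target class); the function name and learning-rate parameter are hypothetical.

```python
import numpy as np

def local_hebbian_update(w, pre, post, lr=0.01):
    """Illustrative STDP-style local update: strengthen a weight in
    proportion to coincident pre- and post-synaptic activity.
    Only information local to this layer is used -- no error signal
    is propagated backward from the loss."""
    return w + lr * np.outer(post, pre)

# Toy usage: one layer with 5 inputs and 3 output units.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(3, 5))
pre = rng.random(5)                     # pre-synaptic activations
post = np.maximum(w @ pre, 0.0)         # post-synaptic (ReLU) activations
w_new = local_hebbian_update(w, pre, post)
```

Because both activity vectors are non-negative here, each weight either grows or stays unchanged; a full rule would also need a depressive (anti-Hebbian) term or normalization to keep weights bounded.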



