
A biologically plausible neural network for local supervision in cortical microcircuits

by Siavash Golkar, et al.

The backpropagation algorithm is an invaluable tool for training artificial neural networks; however, because of its weight sharing requirement, it does not provide a plausible model of brain function. Here, in the context of a two-layer network, we derive an algorithm for training a neural network that avoids this problem by not requiring explicit error computation and backpropagation. Furthermore, our algorithm maps onto a neural network that bears a remarkable resemblance to the connectivity structure and learning rules of the cortex. We find that our algorithm empirically performs comparably to backprop on a number of datasets.
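To make the weight sharing issue concrete: in backprop, the hidden-layer error is computed with the transpose of the forward output weights, so the feedback pathway must mirror the feedforward one. The sketch below is not the algorithm of Golkar et al.; it is a minimal illustration of one generic way to sidestep that requirement (feedback alignment, where a fixed random matrix `B` replaces `W2.T`), assuming a toy two-layer regression network with a tanh hidden layer. All names and hyperparameters here are illustrative choices, not from the paper.

```python
import numpy as np

# Illustrative only: NOT the paper's algorithm. Shows a two-layer network
# trained without using W2.T in the feedback path, so each weight update
# depends only on signals local to its layer plus a fixed feedback matrix.
rng = np.random.default_rng(0)

# Toy regression task: learn y = x @ T for a fixed target map T.
n_in, n_hid, n_out = 8, 16, 4
T = rng.normal(size=(n_in, n_out))
X = rng.normal(size=(256, n_in))
Y = X @ T

W1 = rng.normal(scale=0.1, size=(n_in, n_hid))   # input -> hidden
W2 = rng.normal(scale=0.1, size=(n_hid, n_out))  # hidden -> output
B = rng.normal(scale=0.1, size=(n_out, n_hid))   # fixed random feedback

def loss():
    return float(np.mean((np.tanh(X @ W1) @ W2 - Y) ** 2))

lr = 0.01
before = loss()
for _ in range(500):
    H = np.tanh(X @ W1)          # hidden activity
    E = H @ W2 - Y               # output error, local to the output layer
    dH = (E @ B) * (1 - H ** 2)  # error routed via fixed B, not W2.T
    W2 -= lr * H.T @ E / len(X)
    W1 -= lr * X.T @ dH / len(X)
after = loss()
print(f"loss: {before:.4f} -> {after:.4f}")
```

Because `B` is never updated to match `W2`, no weight transport between the forward and feedback pathways is needed; the training loss still decreases on this toy task. The paper's own scheme differs, deriving local learning rules that avoid explicit error backpropagation altogether.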


GAIT-prop: A biologically plausible learning rule derived from backpropagation of error

Traditional backpropagation of error, though a highly successful algorit...

Physical Deep Learning with Biologically Plausible Training Method

The ever-growing demand for further advances in artificial intelligence ...

Investigating the Scalability and Biological Plausibility of the Activation Relaxation Algorithm

The recently proposed Activation Relaxation (AR) algorithm provides a si...

SoftHebb: Bayesian inference in unsupervised Hebbian soft winner-take-all networks

State-of-the-art artificial neural networks (ANNs) require labelled data...

Backprop Diffusion is Biologically Plausible

The Backpropagation algorithm relies on the abstraction of using a neura...

A Bayesian regularization-backpropagation neural network model for peeling computations

Bayesian regularization-backpropagation neural network (BR-BPNN), a mach...