
A biologically plausible neural network for local supervision in cortical microcircuits

11/30/2020
by Siavash Golkar et al.

The backpropagation algorithm is an invaluable tool for training artificial neural networks; however, its weight-sharing requirement makes it an implausible model of brain function. Here, in the context of a two-layer network, we derive a training algorithm that avoids this problem: it requires neither explicit error computation nor backpropagation. Furthermore, our algorithm maps onto a neural network whose connectivity structure and learning rules bear a remarkable resemblance to those of the cortex. Empirically, our algorithm performs comparably to backprop on a number of datasets.
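The weight-sharing (weight-transport) constraint the abstract alludes to can be made concrete with a small sketch. In backprop, the error signal delivered to the hidden layer is computed with the transpose of the forward weights, so the feedback path must "share" weights with the forward path. The sketch below illustrates the constraint using feedback alignment (a fixed random feedback matrix in place of the transpose) as a stand-in local rule; this is not the algorithm of Golkar et al., only a minimal demonstration of the problem their derivation is designed to avoid. All names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data with a nonlinear target.
X = rng.normal(size=(200, 4))
y = (X @ rng.normal(size=(4, 1))) ** 2

# Two-layer network.
W1 = rng.normal(scale=0.5, size=(4, 16))
W2 = rng.normal(scale=0.5, size=(16, 1))
# Fixed random feedback weights: the feedback path no longer needs
# access to W2, removing the weight-sharing requirement.
B = rng.normal(scale=0.5, size=(16, 1))

loss0 = float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))

lr = 0.01
for _ in range(500):
    h = np.tanh(X @ W1)              # hidden activity
    out = h @ W2                     # network output
    err = out - y                    # error, local to the output layer
    # Backprop would require: delta_h = (err @ W2.T) * (1 - h**2),
    # i.e. the feedback path must know the forward weights W2.
    delta_h = (err @ B.T) * (1 - h ** 2)  # local rule: fixed random B
    W2 -= lr * h.T @ err / len(X)
    W1 -= lr * X.T @ delta_h / len(X)

loss = float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))
```

Despite the feedback weights never matching the forward weights, the loss still decreases, which is what makes such local rules attractive as models of cortical learning.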

