Ghost Units Yield Biologically Plausible Backprop in Deep Neural Networks

11/15/2019
by Thomas Mesnard, et al.

In the past few years, deep learning has transformed artificial intelligence research and achieved impressive performance on a variety of difficult tasks. However, it remains unclear how the brain could perform credit assignment across many areas as efficiently as backpropagation does in deep neural networks. In this paper, we introduce a model that relies on a new role for neuronal inhibitory machinery, referred to as ghost units. By cancelling the feedback coming from the upper layer when no target signal is provided to the top layer, ghost units enable the network to backpropagate errors and perform efficient credit assignment in deep structures. While considering only one-compartment neurons and requiring very few biological assumptions, the model approximates the error gradient and achieves good performance on classification tasks. Error backpropagation occurs through the recurrent dynamics of the network, using biologically plausible local learning rules; in particular, it does not require separate feedforward and feedback circuits. We studied different mechanisms for cancelling the feedback, ranging from complete duplication of the connectivity by long-term processes to online replication of the feedback activity. This reduced system combines the essential elements of a working, biologically abstracted analogue of backpropagation, with a simple formulation and proofs of the associated results. This model is therefore a step towards understanding how learning and memory are implemented in cortical multilayer structures, and it also raises interesting perspectives for neuromorphic hardware.
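The core idea can be illustrated with a toy sketch. The following NumPy snippet is not the paper's model (which uses recurrent dynamics and online learning of the cancellation); it is a minimal static illustration of one of the mechanisms mentioned above, complete duplication of the feedback connectivity. All names and sizes (`W1`, `W2`, `B`, `G`, the layer dimensions) are hypothetical. A ghost pathway `G` that duplicates the feedback weights cancels the top-down signal when no target is given, and when a target nudges the top layer, the uncancelled residual equals the backpropagated output error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes, not from the paper.
n_in, n_hid, n_out = 4, 8, 3

W1 = rng.normal(scale=0.5, size=(n_hid, n_in))   # feedforward: input -> hidden
W2 = rng.normal(scale=0.5, size=(n_out, n_hid))  # feedforward: hidden -> output
B = W2.T.copy()   # feedback: output -> hidden
G = W2.T.copy()   # ghost pathway: duplicates the feedback weights (cancellation)

x = rng.normal(size=n_in)
t = np.array([1.0, 0.0, 0.0])   # target signal for the top layer

h = np.tanh(W1 @ x)   # hidden activity
y = W2 @ h            # output activity (linear top layer)

# No target: the excitatory feedback B @ y is exactly cancelled by the
# inhibitory ghost pathway G @ y, so the hidden layer sees zero top-down input.
no_target_signal = B @ y - G @ y

# Target provided: feedback carries the target while the ghosts still cancel
# the network's own prediction, leaving B @ t - G @ y = W2.T @ (t - y),
# i.e. the output error backpropagated through the feedback weights.
residual = B @ t - G @ y
error_hidden = residual * (1 - h**2)   # gated by the local tanh derivative

# Compare with the true backprop gradient of 0.5 * ||t - y||^2
# with respect to the hidden pre-activation.
true_grad = (W2.T @ (t - y)) * (1 - h**2)
print(np.allclose(no_target_signal, 0.0))   # True: feedback fully cancelled
print(np.allclose(error_hidden, true_grad)) # True: residual matches backprop
```

A local, biologically plausible update then uses only quantities available at the hidden neuron, e.g. `W1 += lr * np.outer(error_hidden, x)`. The paper's contribution is precisely that the cancellation need not be hard-wired as above: ghost units can learn it online from the feedback activity.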


