Cortico-cerebellar networks as decoupling neural interfaces

10/21/2021
by Joseph Pemberton, et al.

The brain solves the credit assignment problem remarkably well. For credit to be assigned across neural networks they must, in principle, wait for specific neural computations to finish. How the brain deals with this inherent locking problem has remained unclear. Deep learning methods suffer from similar locking constraints in both the forward and the feedback phases. Recently, decoupled neural interfaces (DNIs) were introduced as a solution to the forward and feedback locking problems in deep networks. Here we propose that a specialised brain region, the cerebellum, acts akin to a DNI, helping the cerebral cortex solve its locking problems. To demonstrate the potential of this framework we introduce a systems-level model in which a recurrent cortical network receives online temporal feedback predictions from a cerebellar module. We test this cortico-cerebellar recurrent neural network (ccRNN) model on a number of sensorimotor (line and digit drawing) and cognitive (pattern recognition and caption generation) tasks that have been shown to be cerebellar-dependent. In all tasks, we observe that the ccRNN facilitates learning while reducing ataxia-like behaviours, consistent with classical experimental observations. Moreover, our model also explains recent behavioural and neuronal observations while making several testable predictions across multiple levels. Overall, our work offers a novel perspective on the cerebellum as a brain-wide decoupling machine for efficient credit assignment and opens a new avenue between deep learning and neuroscience.
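The decoupling idea above can be sketched in a few lines: a small "cerebellar" module learns to predict the feedback signal (gradient) that a layer would otherwise have to wait for, so the layer can update immediately. Below is a minimal numpy sketch in the spirit of DNI-style synthetic gradients, assuming a toy two-layer regressor; the layer sizes, learning rates, and the linear predictor are illustrative assumptions, not the paper's actual ccRNN architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer regression network. A linear "cerebellar" module maps the
# hidden state to a *predicted* feedback signal dL/dh, unlocking the update
# of W1 before the true backward pass arrives.
n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))   # "cortical" input layer
W2 = rng.normal(0.0, 0.5, (n_out, n_hid))  # readout layer
M = np.zeros((n_hid, n_hid + 1))           # cerebellar predictor: h -> dL/dh
lr, lr_m = 0.05, 0.05

x = rng.normal(size=n_in)
target = rng.normal(size=n_out)
losses = []

for step in range(500):
    h = np.tanh(W1 @ x)
    # Predict the feedback from the hidden state (plus a bias term) and
    # update W1 right away with the predicted gradient.
    g_hat = M @ np.append(h, 1.0)
    W1 -= lr * np.outer(g_hat * (1.0 - h**2), x)

    # The true feedback arrives later from the top of the network...
    y = W2 @ h
    err = y - target                 # dL/dy for squared error
    g_true = W2.T @ err              # true dL/dh
    W2 -= lr * np.outer(err, h)
    # ...and is used to train the predictor toward the true gradient.
    M -= lr_m * np.outer(g_hat - g_true, np.append(h, 1.0))

    losses.append(0.5 * float(err @ err))
```

In the paper's framing, the predictor plays the role of the cerebellar module, trained online toward the cortical feedback it anticipates; here a single fixed input-target pair keeps the sketch self-contained.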

