Modelling continual learning in humans with Hebbian context gating and exponentially decaying task signals

03/22/2022
by Timo Flesch et al.

Humans can learn several tasks in succession with minimal mutual interference but perform more poorly when trained on multiple tasks at once. The opposite is true for standard deep neural networks. Here, we propose novel computational constraints for artificial neural networks, inspired by earlier work on gating in the primate prefrontal cortex, that capture the cost of interleaved training and allow the network to learn two tasks in sequence without forgetting. We augment standard stochastic gradient descent with two algorithmic motifs: so-called "sluggish" task units and a Hebbian training step that strengthens connections between task units and hidden units encoding task-relevant information. We found that the "sluggish" units introduce a switch cost during training, which biases representations under interleaved training towards a joint representation that ignores the contextual cue. The Hebbian step, in contrast, promotes the formation of a gating scheme from task units to the hidden layer that produces orthogonal representations, which are perfectly guarded against interference. Validating the model on previously published human behavioural data revealed that it matches the performance of participants who had been trained on blocked or interleaved curricula, and that these performance differences were driven by misestimation of the true category boundary.
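The two motifs named in the abstract can be summarised in a few lines of code. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: the "sluggish" task units are modelled as an exponential moving average of a one-hot task cue, and the Hebbian step as an outer-product update between the task cue and hidden-unit activity. The decay rate alpha, learning rate eta, and the ReLU stand-in for task-driven hidden activity are illustrative choices.

```python
# Illustrative sketch of "sluggish" task units and a Hebbian gating step.
# Hyperparameters (alpha, eta) and the hidden-activity stand-in are assumptions,
# not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

n_tasks, n_hidden = 2, 100
# Task -> hidden "gating" weights that the Hebbian step will shape.
W_task = rng.normal(scale=0.1, size=(n_hidden, n_tasks))

def sluggish_cues(task_seq, alpha=0.8):
    """Exponentially decaying task signal:
    cue(t) = alpha * cue(t-1) + (1 - alpha) * onehot(task_t)."""
    cue = np.zeros(n_tasks)
    for task in task_seq:
        cue = alpha * cue + (1.0 - alpha) * np.eye(n_tasks)[task]
        yield cue.copy()

def hebbian_step(W, cue, hidden, eta=1e-3):
    """Strengthen connections between active task units and active hidden units."""
    return W + eta * np.outer(hidden, cue)

# Blocked curriculum: long runs of one task let the sluggish cue converge to the
# correct one-hot vector. An interleaved curriculum (e.g. alternating 0,1,0,1,...)
# keeps the cue mixed between tasks, which is what produces the switch cost
# described in the abstract.
blocked = [0] * 50 + [1] * 50
for cue in sluggish_cues(blocked):
    hidden = np.maximum(W_task @ cue, 0.0)  # stand-in for task-driven hidden activity
    W_task = hebbian_step(W_task, cue, hidden)
```

Because the Hebbian update is an outer product of hidden activity with the task cue, hidden units that are reliably co-active with one task unit accumulate strong weights from it and weak weights from the other, which is one simple way a task-specific gating scheme can emerge.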


Related research

Towards continual task learning in artificial neural networks: current approaches and insights from neuroscience (12/28/2021)
The innate capacity of humans and other animals to learn a diverse, and ...

Beneficial Perturbation Network for designing general adaptive artificial intelligence systems (09/27/2020)
The human brain is the gold standard of adaptive learning. It not only c...

Regularised neural networks mimic human insight (02/22/2023)
Humans sometimes show sudden improvements in task performance that have ...

Learning Representations by Stochastic Meta-Gradient Descent in Neural Networks (12/09/2016)
Representations are fundamental to artificial intelligence. The performa...

Usable Information and Evolution of Optimal Representations During Training (10/06/2020)
We introduce a notion of usable information contained in the representat...

Cooperative data-driven modeling (11/23/2022)
Data-driven modeling in mechanics is evolving rapidly based on recent ma...

Improving Performance in Neural Networks by Dendrites-Activated Connections (01/03/2023)
Computational units in artificial neural networks compute a linear combi...
