Continuous Learning of Context-dependent Processing in Neural Networks

09/29/2018
by Guanxiong Zeng, et al.

Deep artificial neural networks (DNNs) are powerful tools for recognition and classification, as they learn sophisticated mapping rules between inputs and outputs. However, the rules learned by most current DNNs used for pattern recognition are largely fixed and do not vary under different conditions. This limits a network's ability to work in more complex and dynamic situations in which the mapping rules themselves are not fixed but constantly change according to context, such as different environments and goals. Inspired by the role of the prefrontal cortex (PFC) in mediating context-dependent processing in the primate brain, here we propose a novel approach, involving a learning algorithm named orthogonal weights modification (OWM) combined with a PFC-like module, that enables networks to continually learn different mapping rules in a context-dependent way. We demonstrate that, with OWM protecting previously acquired knowledge, networks can sequentially learn up to thousands of different mapping rules without interference, needing as few as ∼10 samples to learn each, reaching a human-level ability in online, continual learning. In addition, by using a PFC-like module that allows contextual information to modulate the representation of sensory features, a network can sequentially learn different, context-specific mappings for identical stimuli. Taken together, these approaches allow us to teach a single network numerous context-dependent mapping rules in an online, continual manner. This would enable highly compact systems to gradually learn the myriad regularities of the real world and eventually behave appropriately within it.
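To make the mechanism concrete, below is a minimal NumPy sketch of the orthogonal-projection idea behind OWM (a recursive, RLS-style projector update P ← P − P x xᵀ P / (α + xᵀ P x), with backprop gradients projected through P before each weight step) plus a toy stand-in for the PFC-like contextual gating. All names here (owm_step, modulate, lr, alpha) are illustrative; this is a sketch of the technique under stated assumptions, not the authors' reference implementation.

```python
import numpy as np

def owm_step(W, P, x, grad_W, lr=0.1, alpha=1e-3):
    """One OWM update. W: (out, in) weights; P: (in, in) projector;
    x: (in,) current input; grad_W: ordinary backprop gradient of W."""
    # Project the gradient into directions orthogonal to past inputs,
    # so previously learned input-output mappings are preserved.
    W = W - lr * (grad_W @ P)
    # Recursively fold the current input into the projector:
    #   P <- P - (P x x^T P) / (alpha + x^T P x)
    x = x.reshape(-1, 1)
    Px = P @ x
    P = P - (Px @ Px.T) / (alpha + x.T @ Px)
    return W, P

def modulate(x, gate):
    """Toy stand-in for the PFC-like module: a context-dependent gate
    element-wise rescales sensory features, so identical stimuli can be
    mapped differently under different contexts."""
    return x * gate

# Usage sketch: learn task A, then task B, on the same toy inputs but with
# different context gates; OWM keeps task-B updates from erasing task A.
# (With full-rank random inputs the projector shrinks quickly; real tasks
# occupy lower-dimensional input manifolds, leaving room for new learning.)
rng = np.random.default_rng(0)
n_in, n_out = 8, 4
W = rng.standard_normal((n_out, n_in)) * 0.1
P = np.eye(n_in)
for gate, target_unit in [(np.ones(n_in), 0), (rng.uniform(size=n_in), 1)]:
    for _ in range(200):
        x = modulate(rng.standard_normal(n_in), gate)
        y = np.zeros(n_out)
        y[target_unit] = 1.0
        err = W @ x - y              # linear readout, squared-error loss
        grad_W = np.outer(err, x)    # dL/dW for 0.5 * ||W x - y||^2
        W, P = owm_step(W, P, x, grad_W)
```

In this sketch the gate plays the role the abstract assigns to the PFC-like module (modulating the sensory representation per context), while the projector plays the role of OWM (confining updates to directions orthogonal to inputs already learned).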


