Active Long Term Memory Networks

06/07/2016
by Tommaso Furlanello, et al.

Continual learning in artificial neural networks suffers from interference and forgetting when different tasks are learned sequentially. This paper introduces the Active Long Term Memory Network (A-LTM), a model of sequential multi-task deep learning that is able to maintain previously learned associations between sensory input and behavioral output while acquiring new knowledge. A-LTM exploits the non-convex nature of deep neural networks and actively maintains knowledge of previously learned, inactive tasks using a distillation loss. Distortions of the learned input-output map are penalized, but hidden layers are free to traverse towards new local optima that are more favorable for the multi-task objective. We re-frame McClelland's seminal hippocampal theory with respect to the Catastrophic Interference (CI) behavior exhibited by modern deep architectures trained with back-propagation and inhomogeneous sampling of latent factors across epochs. We present empirical results of non-trivial CI during continual learning in Deep Linear Networks trained on the same task, in Convolutional Neural Networks when the task shifts from predicting semantic to graphical factors, and during domain adaptation from simple to complex environments. We present results on the A-LTM model's ability to maintain viewpoint recognition learned in the highly controlled iLab-20M dataset, with 10 object categories and 88 camera viewpoints, while adapting to the unstructured domain of ImageNet with 1,000 object categories.
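
At a high level, the objective described in the abstract combines a supervised loss on the active (new) task with a distillation term that penalizes drift of the network's input-output map on the inactive (old) task, measured against a frozen copy of the network taken before the task switch. The sketch below is only an illustrative reading of that idea in PyTorch, not the authors' implementation; the function name, the temperature, and the distill_weight coefficient are assumptions introduced for this example.

```python
import torch
import torch.nn.functional as F

def altm_style_loss(student_new_logits, new_labels,
                    student_old_logits, teacher_old_logits,
                    temperature=2.0, distill_weight=1.0):
    """Hypothetical combined objective: learn the active (new) task while a
    distillation term keeps the outputs on the inactive (old) task close to
    those of a frozen teacher copy of the pre-switch network."""
    # Supervised loss on the new, active task.
    new_task_loss = F.cross_entropy(student_new_logits, new_labels)

    # Distillation loss: match the frozen teacher's softened outputs
    # on the old task (standard knowledge-distillation form).
    t = temperature
    soft_teacher = F.softmax(teacher_old_logits / t, dim=1)
    log_soft_student = F.log_softmax(student_old_logits / t, dim=1)
    distill_loss = F.kl_div(log_soft_student, soft_teacher,
                            reduction="batchmean") * (t * t)

    return new_task_loss + distill_weight * distill_loss
```

In this reading, the distillation term constrains only the input-output behavior on the old task, so the hidden representations remain free to move towards parameter settings that serve both tasks, consistent with the non-convexity argument in the abstract.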
