Related research:
- Reinforced Continual Learning
- Gradient Episodic Memory with a Soft Constraint for Continual Learning
- Residual Continual Learning
- Lifelong Learning of Compositional Structures
- Collaborative and continual learning for classification tasks in a society of devices
- CLeaR: An Adaptive Continual Learning Framework for Regression Tasks
- Understanding the Role of Training Regimes in Continual Learning
Continual learning: A comparative study on how to defy forgetting in classification tasks
Artificial neural networks thrive at solving the classification problem for a particular rigid task, where the network resembles a static entity of knowledge acquired through generalized learning behaviour in a distinct training phase. However, endeavours to extend this knowledge without targeting the original task usually result in catastrophic forgetting of that task. Continual learning shifts this paradigm towards a network that can continually accumulate knowledge over different tasks without the need for retraining from scratch, with methods aiming in particular to alleviate forgetting. We focus on task-incremental classification, where tasks arrive in a batch-like fashion and are delineated by clear boundaries. Our main contributions are 1) a taxonomy and extensive overview of the state-of-the-art, 2) a novel framework to continually determine the stability-plasticity trade-off of the continual learner, and 3) a comprehensive experimental comparison of 10 state-of-the-art continual learning methods and 4 baselines. We empirically scrutinize which method performs best, both on the balanced Tiny ImageNet dataset and on the large-scale, unbalanced iNaturalist dataset. We study the influence of model capacity, weight decay and dropout regularization, and the order in which the tasks are presented, and we qualitatively compare methods in terms of required memory, computation time and storage.
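To make the task-incremental setting concrete, here is a minimal sketch, not the paper's framework or code: tasks are seen strictly in order with clear boundaries, and a plain L2 penalty toward the weights snapshotted at the previous task boundary stands in for the regularization-based methods the survey compares. The function name, the reg_strength knob, and the training-loop details are illustrative assumptions.

```python
# Minimal sketch of task-incremental continual learning (illustrative only).
# Tasks arrive one at a time with clear boundaries; after each task the weights
# are snapshotted, and training on the next task penalizes drift from that
# snapshot with a simple L2 regularizer.
import torch
import torch.nn.functional as F


def train_task_incremental(model, task_loaders, reg_strength=100.0,
                           lr=1e-3, epochs=1, device="cpu"):
    """task_loaders: list of DataLoaders, one per task, seen strictly in order."""
    model.to(device)
    anchor = None  # parameters snapshotted at the previous task boundary
    for task_id, loader in enumerate(task_loaders):
        optimizer = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(epochs):
            for x, y in loader:
                x, y = x.to(device), y.to(device)
                optimizer.zero_grad()
                loss = F.cross_entropy(model(x), y)
                if anchor is not None:
                    # Stability term: stay close to weights learned on earlier tasks.
                    penalty = sum((p - a).pow(2).sum()
                                  for p, a in zip(model.parameters(), anchor))
                    loss = loss + reg_strength * penalty
                loss.backward()
                optimizer.step()
        # Task boundary: freeze a copy of the current weights as the new anchor.
        anchor = [p.detach().clone() for p in model.parameters()]
    return model
```

In this toy setup, reg_strength acts as a crude stability-plasticity dial: larger values keep the network close to what it learned on earlier tasks (stability), while smaller values let it adapt more freely to the new task (plasticity) at the risk of forgetting.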