Continual learning: A comparative study on how to defy forgetting in classification tasks

09/18/2019
by Matthias De Lange, et al.

Artificial neural networks thrive at solving the classification problem for a particular, rigid task, where the network resembles a static entity of knowledge acquired through generalized learning behaviour in a distinct training phase. However, endeavours to extend this knowledge without targeting the original task usually result in catastrophic forgetting of that task. Continual learning shifts this paradigm towards a network that can continually accumulate knowledge over different tasks without the need to retrain from scratch, with methods in particular aiming to alleviate forgetting. We focus on task-incremental classification, where tasks arrive in a batch-like fashion and are delineated by clear boundaries. Our main contributions concern 1) a taxonomy and extensive overview of the state-of-the-art, 2) a novel framework to continually determine the stability-plasticity trade-off of the continual learner, and 3) a comprehensive experimental comparison of 10 state-of-the-art continual learning methods and four baselines. We empirically scrutinize which method performs best, on both the balanced Tiny ImageNet dataset and the large-scale unbalanced iNaturalist dataset. We study the influence of model capacity, weight decay and dropout regularization, and the order in which the tasks are presented, and qualitatively compare the methods in terms of required memory, computation time and storage.
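
To make the task-incremental setting concrete, here is a minimal sketch (not code from the paper) of the training regime the abstract describes: tasks arrive one after another with clear boundaries, and a continual-learning method plugs in as a penalty that trades off stability against plasticity. The names `tasks` and `regularizer` are illustrative assumptions, not the paper's API.

```python
import torch
import torch.nn as nn

def train_task_incremental(model, tasks, regularizer=None, epochs=1, lr=0.01):
    """Train `model` on a sequence of tasks with clear boundaries.

    tasks: list of torch.utils.data.DataLoader, one loader per task.
    regularizer: optional callable(model) -> scalar tensor penalizing drift
        away from parameters important to earlier tasks (e.g. an EWC-style
        quadratic penalty); None corresponds to naive finetuning, which
        typically suffers catastrophic forgetting.
    """
    criterion = nn.CrossEntropyLoss()
    for task_id, loader in enumerate(tasks):
        optimizer = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(epochs):
            for x, y in loader:
                optimizer.zero_grad()
                loss = criterion(model(x), y)
                # From the second task on, add the continual-learning penalty
                # balancing plasticity (new task) against stability (old tasks).
                if regularizer is not None and task_id > 0:
                    loss = loss + regularizer(model)
                loss.backward()
                optimizer.step()
    return model
```

Rehearsal-based methods fit the same loop by mixing replayed samples from earlier tasks into each batch instead of (or in addition to) adding a parameter-space penalty.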
