Continual learning: A comparative study on how to defy forgetting in classification tasks

09/18/2019
by Matthias De Lange, et al.

Artificial neural networks excel at classification for a single, fixed task, where the network resembles a static body of knowledge, acquired through generalized learning behaviour in a distinct training phase. However, endeavours to extend this knowledge without revisiting the original task usually result in catastrophic forgetting of that task. Continual learning shifts this paradigm towards a network that continually accumulates knowledge over different tasks without the need for retraining from scratch, with methods aiming in particular to alleviate forgetting. We focus on task-incremental classification, where tasks arrive in a batch-like fashion and are delineated by clear boundaries. Our main contributions are 1) a taxonomy and extensive overview of the state of the art, 2) a novel framework to continually determine the stability-plasticity trade-off of the continual learner, and 3) a comprehensive experimental comparison of 10 state-of-the-art continual learning methods and 4 baselines. We empirically scrutinize which method performs best, on both the balanced Tiny ImageNet dataset and the large-scale unbalanced iNaturalist dataset. We study the influence of model capacity, weight decay and dropout regularization, and the order in which the tasks are presented, and qualitatively compare the methods in terms of required memory, computation time and storage.
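The task-incremental setting described above, where tasks arrive sequentially and a continual learner must balance stability (retaining old tasks) against plasticity (learning new ones), can be sketched with a toy example. The snippet below is an illustrative simplification, not any method from the paper: all function names are hypothetical. It trains a logistic regression on two tasks in sequence, once by naive fine-tuning (which forgets the first task) and once with an L2 penalty pulling the weights toward those learned on the first task, a crude stand-in for the regularization-based family of continual learning methods.

```python
import numpy as np

def train_task(w, X, y, w_prev=None, lam=0.0, lr=0.1, steps=200):
    """Train a logistic regression on one task's data.

    If w_prev is given, an L2 penalty lam * ||w - w_prev||^2 / 2 pulls the
    weights toward those learned on earlier tasks: the "stability" term.
    The plain gradient step on the new task's loss is the "plasticity" part.
    """
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # sigmoid predictions
        grad = X.T @ (p - y) / len(y)         # cross-entropy gradient
        if w_prev is not None:
            grad += lam * (w - w_prev)        # stability: stay near old weights
        w = w - lr * grad                     # plasticity: fit the new task
    return w

def accuracy(w, X, y):
    return float(np.mean(((X @ w) > 0) == y))

rng = np.random.default_rng(0)
# Two toy binary tasks with different decision boundaries.
X1 = rng.normal(size=(200, 2)); y1 = (X1[:, 0] > 0).astype(float)  # task 1: x0 > 0
X2 = rng.normal(size=(200, 2)); y2 = (X2[:, 1] > 0).astype(float)  # task 2: x1 > 0

w = train_task(np.zeros(2), X1, y1)                 # learn task 1
w_naive = train_task(w, X2, y2)                     # fine-tune: drifts away from task 1
w_reg = train_task(w, X2, y2, w_prev=w, lam=1.0)    # regularized: retains task 1 better

print("task 1 acc, naive fine-tuning:", accuracy(w_naive, X1, y1))
print("task 1 acc, regularized:     ", accuracy(w_reg, X1, y1))
```

The regularized run keeps most of its accuracy on task 1 at the cost of slower adaptation to task 2; sweeping `lam` traces out exactly the stability-plasticity trade-off that the paper's framework sets continually.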


