New metrics for analyzing continual learners

09/01/2023
by   Nicolas Michel, et al.

Deep neural networks achieve remarkable performance when trained on independent and identically distributed data from a fixed set of classes. In real-world scenarios, however, it can be desirable to train models on a continuous stream of data in which multiple classification tasks arrive sequentially. This scenario, known as Continual Learning (CL), poses challenges for standard learning algorithms, which struggle to retain knowledge of old tasks while learning new ones. This stability-plasticity dilemma remains central to CL, and multiple metrics have been proposed to measure stability and plasticity separately. However, none accounts for the increasing difficulty of the classification task, which inherently causes a performance loss for any model. We analyze the limitations of current metrics from this perspective and identify the presence of setup-induced forgetting. We therefore propose new metrics that account for the task's increasing difficulty. Through experiments on benchmark datasets, we demonstrate that our proposed metrics provide new insights into the stability-plasticity trade-off achieved by models in the continual learning setting.
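To make the distinction concrete, here is a minimal sketch of the kind of accounting involved. The first function is the standard average-forgetting measure (final accuracy on each task versus its best past accuracy). The second is a purely hypothetical difficulty-adjusted variant — the abstract does not give the authors' actual formula — that subtracts the accuracy drop a reference model (e.g. one trained jointly on all classes seen so far) suffers from the growing class set alone, so that setup-induced forgetting is not counted against the continual learner.

```python
def average_forgetting(acc):
    """Standard CL forgetting measure.

    acc[i][j] = accuracy on task j after training on tasks 0..i.
    For each past task j, take the drop from its best accuracy during
    training to its final accuracy, then average over past tasks.
    """
    T = len(acc)
    drops = [max(acc[i][j] for i in range(T - 1)) - acc[T - 1][j]
             for j in range(T - 1)]
    return sum(drops) / len(drops)


def difficulty_adjusted_forgetting(acc, baseline):
    """Hypothetical difficulty-adjusted variant (illustration only;
    not the paper's exact metric).

    baseline[i][j] = accuracy of a reference model on task j at step i,
    where the reference sees all classes up to step i jointly. Its drop
    reflects only the growing difficulty of the classification problem,
    so subtracting it isolates genuine forgetting from
    setup-induced forgetting.
    """
    T = len(acc)
    drops = []
    for j in range(T - 1):
        observed = max(acc[i][j] for i in range(T - 1)) - acc[T - 1][j]
        setup = max(baseline[i][j] for i in range(T - 1)) - baseline[T - 1][j]
        drops.append(observed - setup)
    return sum(drops) / len(drops)
```

For example, with two tasks, a continual learner dropping from 0.9 to 0.7 on task 0 shows 0.2 forgetting under the standard measure; if the reference model itself drops from 0.9 to 0.85 once task 1's classes are added, the adjusted measure attributes only 0.15 to the learner.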


Related research

10/31/2018 — Don't forget, there is more than forgetting: new metrics for Continual Learning
Continual learning consists of algorithms that learn from a stream of da...

07/27/2021 — Continual Learning with Neuron Activation Importance
Continual learning is a concept of online learning with multiple sequent...

03/16/2023 — Achieving a Better Stability-Plasticity Trade-off via Auxiliary Networks in Continual Learning
In contrast to the natural capabilities of humans to learn new tasks in ...

05/25/2023 — Batch Model Consolidation: A Multi-Task Model Consolidation Framework
In Continual Learning (CL), a model is required to learn a stream of tas...

06/06/2023 — Learning Representations on the Unit Sphere: Application to Online Continual Learning
We use the maximum a posteriori estimation principle for learning repres...

12/16/2021 — Effective prevention of semantic drift as angular distance in memory-less continual deep neural networks
Lifelong machine learning or continual learning models attempt to learn ...

04/20/2020 — CLOPS: Continual Learning of Physiological Signals
Deep learning algorithms are known to experience destructive interferenc...
