From MNIST to ImageNet and Back: Benchmarking Continual Curriculum Learning

03/16/2023
by Kamil Faber, et al.

Continual learning (CL) is one of the most promising trends in recent machine learning research. Its goal is to go beyond classical assumptions in machine learning and develop models and learning strategies that exhibit high robustness in dynamic environments. The landscape of CL research is fragmented into several learning evaluation protocols, comprising different learning tasks, datasets, and evaluation metrics. Additionally, the benchmarks adopted so far are still distant from the complexity of real-world scenarios, and are usually tailored to highlight capabilities specific to certain strategies. In such a landscape, it is hard to objectively assess strategies. In this work, we fill this gap for CL on image data by introducing two novel CL benchmarks that involve multiple heterogeneous tasks from six image datasets, with varying levels of complexity and quality. Our aim is to fairly evaluate current state-of-the-art CL strategies on a common ground that is closer to complex real-world scenarios. We additionally structure our benchmarks so that tasks are presented in increasing and decreasing order of complexity – according to a curriculum – in order to evaluate whether current CL models are able to exploit structure across tasks. We place particular emphasis on providing the CL community with a rigorous and reproducible evaluation protocol for measuring a model's ability to generalize and not to forget while learning. Furthermore, we provide an extensive experimental evaluation showing that popular CL strategies, when challenged with our benchmarks, yield sub-par performance, high levels of forgetting, and exhibit a limited ability to effectively leverage curriculum task ordering. We believe that these results highlight the need for rigorous comparisons in future CL works and pave the way for the design of new CL strategies able to deal with more complex scenarios.
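The abstract mentions curriculum task ordering and an evaluation protocol for generalization and forgetting without spelling them out. The sketch below is a minimal Python illustration of how such an evaluation harness is commonly structured, assuming the widely used average-accuracy and forgetting metrics from the CL literature (in the style of Chaudhry et al.); the dataset names, complexity ordering, and function names are illustrative assumptions, not the paper's actual benchmark composition.

```python
import numpy as np

# Hypothetical task stream: image datasets ordered by an assumed complexity
# score (a curriculum). Names and ordering are illustrative, not the paper's.
TASKS_BY_COMPLEXITY = [
    "MNIST", "FashionMNIST", "SVHN", "CIFAR-10", "CIFAR-100", "ImageNet-subset",
]

def curriculum_order(tasks, increasing=True):
    """Return the task sequence in increasing or decreasing complexity."""
    return list(tasks) if increasing else list(reversed(tasks))

def average_accuracy(acc):
    """Average test accuracy over all tasks after training on the last task.

    acc[i][j] = accuracy on task j measured after training on tasks 0..i.
    """
    T = len(acc)
    return float(np.mean([acc[T - 1][j] for j in range(T)]))

def average_forgetting(acc):
    """Mean drop from each task's best-ever accuracy to its final accuracy."""
    T = len(acc)
    return float(np.mean([
        max(acc[i][j] for i in range(j, T)) - acc[T - 1][j]
        for j in range(T - 1)  # forgetting on the last task is zero by definition
    ]))

if __name__ == "__main__":
    stream = curriculum_order(TASKS_BY_COMPLEXITY, increasing=True)
    print("Task order:", " -> ".join(stream))
    # Toy accuracy matrix for T = 3 tasks, just to exercise the metrics.
    acc = [
        [0.95, 0.00, 0.00],
        [0.80, 0.90, 0.00],
        [0.70, 0.85, 0.88],
    ]
    print(f"ACC: {average_accuracy(acc):.2f}")           # 0.81
    print(f"Forgetting: {average_forgetting(acc):.2f}")  # 0.15
```

For the toy matrix, final accuracies average to 0.81 and forgetting averages 0.15 (task 0 drops from 0.95 to 0.70, task 1 from 0.90 to 0.85); in a real benchmark run, the accuracy matrix would be populated by training and testing an actual model across the curriculum-ordered task stream.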

Related research:

06/29/2022 · Continual Learning for Human State Monitoring
Continual Learning (CL) on time series data represents a promising but u...

05/26/2022 · The Effect of Task Ordering in Continual Learning
We investigate the effect of task ordering on continual learning perform...

03/12/2021 · Continual Learning for Recurrent Neural Networks: a Review and Empirical Evaluation
Learning continuously during all model lifetime is fundamental to deploy...

08/15/2021 · An Investigation of Replay-based Approaches for Continual Learning
Continual learning (CL) is a major challenge of machine learning (ML) an...

10/07/2021 · CLEVA-Compass: A Continual Learning EValuation Assessment Compass to Promote Research Transparency and Comparability
What is the state of the art in continual machine learning? Although a n...

10/21/2021 · On Hard Episodes in Meta-Learning
Existing meta-learners primarily focus on improving the average task acc...

08/21/2023 · Foundation Model-oriented Robustness: Robust Image Model Evaluation with Pretrained Models
Machine learning has demonstrated remarkable performance over finite dat...
