Continual evaluation for lifelong learning: Identifying the stability gap

05/26/2022
by Matthias De Lange, et al.

Introducing a time dependency in the data-generating distribution has proven difficult for gradient-based training of neural networks, as the greedy updates result in catastrophic forgetting of previous timesteps. Continual learning aims to overcome this greedy optimization and enable continuous accumulation of knowledge over time. The data stream is typically divided into locally stationary distributions, called tasks, allowing task-based evaluation on held-out data from the training tasks. Contemporary evaluation protocols and metrics in continual learning are task-based and quantify the trade-off between stability and plasticity only at task transitions. However, our empirical evidence suggests that significant, temporary forgetting can occur between task transitions, remaining unidentified in task-based evaluation. We therefore propose a framework for continual evaluation that establishes per-iteration evaluation, and we define a new set of metrics that quantify the worst-case performance of the learner over its lifetime. Performing continual evaluation, we empirically identify that replay suffers from a stability gap: upon learning a new task, there is a substantial but transient drop in performance on past tasks. Further conceptual and empirical analysis suggests that not only replay-based but also regularization-based continual learning methods are prone to the stability gap.
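To make the protocol concrete, below is a minimal sketch of per-iteration continual evaluation. It is illustrative only: the names `continual_train`, `update`, and `min_acc` are assumptions for this sketch, not the authors' implementation, and the worst-case metric here loosely mirrors the paper's idea of a minimum-accuracy measure rather than reproducing its exact definition.

```python
from collections import defaultdict

def accuracy(model, eval_set):
    # Fraction of correctly classified samples in a held-out evaluation set.
    correct = sum(1 for x, y in eval_set if model(x) == y)
    return correct / len(eval_set)

def continual_train(model, task_streams, eval_sets, update):
    # Train on a sequence of tasks, evaluating every seen task after
    # EVERY gradient update instead of only at task transitions.
    history = defaultdict(list)  # task id -> per-iteration accuracies
    min_acc = {}                 # worst-case accuracy on already-learned tasks
    for task_id, stream in enumerate(task_streams):
        for batch in stream:
            update(model, batch)          # one gradient step on the new task
            for t in range(task_id + 1):  # continual (per-iteration) evaluation
                acc = accuracy(model, eval_sets[t])
                history[t].append(acc)
                if t < task_id:           # only tasks trained earlier count
                    min_acc[t] = min(acc, min_acc.get(t, 1.0))
    return history, min_acc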


