Using Hindsight to Anchor Past Knowledge in Continual Learning

02/19/2020
by Arslan Chaudhry, et al.

In continual learning, the learner faces a stream of data whose distribution changes over time. Modern neural networks are known to suffer in this setting, as they quickly forget previously acquired knowledge. To address such catastrophic forgetting, many continual learning methods implement some variant of experience replay, re-learning on past data stored in a small buffer known as an episodic memory. In this work, we complement experience replay with a new objective that we call anchoring, in which the learner uses bilevel optimization to update its knowledge of the current task while keeping its predictions on some anchor points of past tasks intact. These anchor points are learned by gradient-based optimization so as to maximize forgetting, which is in turn approximated by fine-tuning the currently trained model on the episodic memory of past tasks. Experiments on several supervised continual learning benchmarks demonstrate that our approach improves over standard experience replay on both accuracy and forgetting metrics, across a range of episodic memory sizes.
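To make the two ingredients concrete, here is a minimal PyTorch sketch; it is not the authors' implementation. It replaces the bilevel anchoring objective with a single-level penalty on anchor predictions, and the helper names (`learn_anchor`, `anchored_replay_step`), the penalty weight, and the toy MLP are illustrative assumptions.

```python
# Illustrative sketch only: a single-level simplification of the anchoring
# objective from the abstract, with assumed names and hyperparameters.
import copy

import torch
import torch.nn as nn
import torch.nn.functional as F


def learn_anchor(model, memory, anchor_label, steps=100, lr=0.1,
                 finetune_steps=5, finetune_lr=0.01):
    """Learn one anchor point by gradient ascent on approximate forgetting:
    the increase in loss at the anchor after fine-tuning a copy of the
    model on the episodic memory, as described in the abstract."""
    mem_x, mem_y = memory
    # Fine-tune a throwaway copy on the episodic memory; its predictions
    # approximate a future model that has drifted away from this task.
    future = copy.deepcopy(model)
    opt = torch.optim.SGD(future.parameters(), lr=finetune_lr)
    for _ in range(finetune_steps):
        opt.zero_grad()
        F.cross_entropy(future(mem_x), mem_y).backward()
        opt.step()
    # Freeze both snapshots so gradients flow only into the anchor itself.
    before = copy.deepcopy(model)
    for p in list(before.parameters()) + list(future.parameters()):
        p.requires_grad_(False)
    e = torch.randn(1, mem_x.size(1), requires_grad=True)  # the anchor
    y = torch.tensor([anchor_label])
    anchor_opt = torch.optim.SGD([e], lr=lr)
    for _ in range(steps):
        anchor_opt.zero_grad()
        # Forgetting at e: loss after fine-tuning minus loss before.
        forgetting = (F.cross_entropy(future(e), y)
                      - F.cross_entropy(before(e), y))
        (-forgetting).backward()  # gradient ascent: maximize forgetting
        anchor_opt.step()
    return e.detach()


def anchored_replay_step(model, optimizer, batch, memory, anchors,
                         anchor_targets, penalty=1.0):
    """One update combining the current-task loss, an experience-replay
    loss on the episodic memory, and a penalty keeping the outputs on
    past-task anchors close to the outputs recorded when those tasks
    ended (a single-level simplification of the bilevel objective)."""
    x, y = batch
    mem_x, mem_y = memory
    optimizer.zero_grad()
    loss = (F.cross_entropy(model(x), y)               # current task
            + F.cross_entropy(model(mem_x), mem_y)     # experience replay
            + penalty * F.mse_loss(model(anchors), anchor_targets))
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    batch = (torch.randn(16, 10), torch.randint(0, 2, (16,)))
    memory = (torch.randn(8, 10), torch.randint(0, 2, (8,)))
    anchor = learn_anchor(model, memory, anchor_label=0)
    with torch.no_grad():
        target = model(anchor)  # record predictions to keep intact
    print(anchored_replay_step(model, optimizer, batch, memory,
                               anchor, target))
```

Recording anchor targets once and penalizing drift from them is the simplest way to "keep predictions intact"; the paper's bilevel version instead measures that drift across a tentative one-step update on the current minibatch.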

