Consistency is the key to further mitigating catastrophic forgetting in continual learning

07/11/2022
by Prashant Bhat, et al.

Deep neural networks struggle to continually learn multiple sequential tasks due to catastrophic forgetting of previously learned tasks. Rehearsal-based methods, which explicitly store previous task samples in a buffer and interleave them with current task samples, have proven to be the most effective in mitigating forgetting. However, Experience Replay (ER) does not perform well under low-buffer regimes and longer task sequences, as its performance is commensurate with the buffer size. Consistency in predictions of soft-targets can assist ER in better preserving information pertaining to previous tasks, since soft-targets capture the rich similarity structure of the data. Therefore, we examine the role of consistency regularization in the ER framework under various continual learning scenarios. We also propose to cast consistency regularization as a self-supervised pretext task, thereby enabling the use of a wide variety of self-supervised learning methods as regularizers. While simultaneously enhancing model calibration and robustness to natural corruptions, regularizing consistency in predictions results in less forgetting across all continual learning scenarios. Among the different families of regularizers, we find that stricter consistency constraints preserve previous task information in ER better.
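The abstract's central mechanism, interleaving buffered samples with current ones and penalizing drift from the soft-targets stored alongside them, is compact enough to sketch. Below is a minimal PyTorch sketch of ER with a consistency term, assuming a reservoir-sampled buffer and an MSE penalty on logits; the names ReservoirBuffer and er_consistency_step, and the weight alpha, are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch: Experience Replay (ER) with a consistency-regularization
# term on buffered soft-targets. Names and hyperparameters are illustrative.
import random
import torch
import torch.nn.functional as F

class ReservoirBuffer:
    """Fixed-size buffer filled by reservoir sampling (a common ER choice)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []   # list of (x, y, soft_target) tuples
        self.seen = 0

    def add(self, x, y, soft_target):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y, soft_target))
        else:
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = (x, y, soft_target)

    def sample(self, batch_size):
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs, ys, zs = zip(*batch)
        return torch.stack(xs), torch.stack(ys), torch.stack(zs)

def er_consistency_step(model, optimizer, x, y, buffer, alpha=0.5):
    """One training step: cross-entropy on current and replayed samples,
    plus a consistency penalty that keeps the model's predictions on
    buffered samples close to the soft-targets stored when they were seen."""
    optimizer.zero_grad()
    logits = model(x)
    loss = F.cross_entropy(logits, y)

    if len(buffer.data) > 0:
        bx, by, bz = buffer.sample(x.size(0))
        b_logits = model(bx)
        loss = loss + F.cross_entropy(b_logits, by)
        # MSE on logits is a strict consistency constraint; per the abstract,
        # stricter constraints preserve previous-task information better.
        loss = loss + alpha * F.mse_loss(b_logits, bz)

    loss.backward()
    optimizer.step()

    # Store current samples with their (detached) soft-targets for replay.
    for xi, yi, zi in zip(x, y, logits.detach()):
        buffer.add(xi, yi, zi)
    return loss.item()
```

Replacing the MSE term with a KL divergence between softened probability distributions would give a looser variant, while applying a self-supervised objective to two augmented views of the buffered samples corresponds to the pretext-task casting the abstract proposes.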


Related research

02/21/2022
Learning Bayesian Sparse Networks with Full Experience Replay for Continual Learning
Continual Learning (CL) methods aim to enable machine learning models to...

11/16/2020
Gradient Episodic Memory with a Soft Constraint for Continual Learning
Catastrophic forgetting in continual learning is a common destructive ph...

10/14/2021
Continual Learning on Noisy Data Streams via Self-Purified Replay
Continually learning in the real world must overcome many challenges, am...

12/30/2021
Continually Learning Self-Supervised Representations with Projected Functional Regularization
Recent self-supervised learning methods are able to learn high-quality i...

02/15/2022
Improving Pedestrian Prediction Models with Self-Supervised Continual Learning
Autonomous mobile robots require accurate human motion predictions to sa...

10/04/2020
Remembering for the Right Reasons: Explanations Reduce Catastrophic Forgetting
The goal of continual learning (CL) is to learn a sequence of tasks with...

04/19/2019
Continual Learning with Self-Organizing Maps
Despite remarkable successes achieved by modern neural networks in a wid...
