Remembering for the Right Reasons: Explanations Reduce Catastrophic Forgetting

10/04/2020
by Sayna Ebrahimi, et al.

The goal of continual learning (CL) is to learn a sequence of tasks without suffering from the phenomenon of catastrophic forgetting. Previous work has shown that leveraging memory in the form of a replay buffer can reduce performance degradation on prior tasks. We hypothesize that forgetting can be further reduced when the model is encouraged to remember the evidence for previously made decisions. As a first step towards exploring this hypothesis, we propose a simple, novel training paradigm, called Remembering for the Right Reasons (RRR), that additionally stores visual model explanations for each example in the buffer and ensures the model has "the right reasons" for its predictions by encouraging its explanations to remain consistent with those used to make decisions at training time. Without this constraint, there is a drift in explanations and an increase in forgetting as conventional continual learning algorithms learn new tasks. We demonstrate how RRR can be easily added to any memory- or regularization-based approach, resulting in reduced forgetting and, more importantly, improved model explanations. We evaluate our approach in the standard and few-shot settings and observe a consistent improvement across various CL approaches, architectures, and explanation-generation techniques, demonstrating a promising connection between explainability and continual learning. Our code is available at https://github.com/SaynaEbrahimi/Remembering-for-the-Right-Reasons.
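A minimal sketch of the explanation-consistency idea described in the abstract, assuming a PyTorch classifier and a replay buffer that stores (image, label, saliency) triples; the simple gradient-based saliency used here and all function names are illustrative assumptions, not the paper's official implementation (which supports various explanation techniques, e.g. saliency-map methods).

```python
# Illustrative sketch (not the official RRR code): penalize drift between the
# explanations the model produces now and those stored when the example was
# first learned, in addition to the usual task and replay losses.
import torch
import torch.nn.functional as F

def saliency_map(model, images, labels):
    """Simple gradient-based explanation: |d score_y / d x|, normalized per image."""
    images = images.clone().requires_grad_(True)
    scores = model(images).gather(1, labels.unsqueeze(1)).sum()
    grads, = torch.autograd.grad(scores, images, create_graph=True)
    sal = grads.abs().sum(dim=1)  # collapse channels -> (B, H, W)
    sal = sal / (sal.flatten(1).max(dim=1).values.view(-1, 1, 1) + 1e-8)
    return sal

def rrr_loss(model, buf_images, buf_labels, buf_saliency, lam=1.0):
    """Explanation-consistency term: distance between current and stored saliency."""
    cur_sal = saliency_map(model, buf_images, buf_labels)
    return lam * F.l1_loss(cur_sal, buf_saliency)

# Usage (sketch): when training on a new task,
#   loss = task_loss + replay_loss + rrr_loss(model, x_buf, y_buf, sal_buf)
```

The saliency maps for buffer examples are computed once, when the examples are stored, and then treated as fixed targets; the `lam` weight and the L1 distance are placeholder choices for this sketch.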


