Explain to Not Forget: Defending Against Catastrophic Forgetting with XAI

05/04/2022
by Sami Ede, et al.

The ability to continuously process and retain new information, as humans do naturally, is highly sought after when training neural networks. Unfortunately, traditional optimization algorithms often require large amounts of data to be available at training time, and updates with respect to new data are difficult once training has been completed. In fact, when new data or tasks arise, previous progress may be lost, as neural networks are prone to catastrophic forgetting: the phenomenon in which a network completely forgets previous knowledge when given new information. We propose a novel training algorithm, called training by explaining, which leverages Layer-wise Relevance Propagation to retain the information a neural network has already learned on previous tasks while training on new data. The method is evaluated on a range of benchmark datasets as well as more complex data. Our method not only successfully retains the knowledge of old tasks within the neural network, but also does so more resource-efficiently than other state-of-the-art solutions.
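The abstract does not spell out the training procedure, but the core idea of using explanation-derived relevance scores to protect previously learned knowledge can be illustrated with a hedged sketch. The code below is not the paper's algorithm; it is a minimal illustration, assuming per-weight relevance scores (e.g. as could be aggregated from Layer-wise Relevance Propagation on old-task data) are used to scale down updates to weights deemed important for earlier tasks. All names (`protected_update`, the toy weight matrix) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-layer "model" after training on task A.
W_old = rng.normal(size=(4, 3))

# Hypothetical per-weight relevance scores in [0, 1], assumed to come from
# an LRP-style attribution over task-A data: 1 = crucial for task A,
# 0 = free to be overwritten. Random placeholders here.
relevance = rng.uniform(size=W_old.shape)

def protected_update(W, grad, relevance, lr=0.1):
    """Gradient step whose magnitude is scaled by (1 - relevance):
    highly relevant weights are nearly frozen, irrelevant weights
    learn freely on the new task."""
    return W - lr * (1.0 - relevance) * grad

# One gradient step on a new "task B" (toy random gradient).
grad_B = rng.normal(size=W_old.shape)
W_new = protected_update(W_old, grad_B, relevance)
```

Weights with relevance close to 1 barely move, which is one simple way relevance information could limit forgetting; the paper's actual method should be consulted for the real procedure.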


