
Explain to Not Forget: Defending Against Catastrophic Forgetting with XAI

by Sami Ede, et al.

The ability to continuously process and retain new information, as humans do naturally, is a feat highly sought after when training neural networks. Unfortunately, traditional optimization algorithms often require large amounts of data to be available during training, and updating a model with respect to new data is difficult once training has been completed. In fact, when new data or tasks arise, previous progress may be lost, as neural networks are prone to catastrophic forgetting: the phenomenon in which a network discards previously acquired knowledge when trained on new information. We propose a novel training algorithm called training by explaining, in which we leverage Layer-wise Relevance Propagation to retain the information a neural network has already learned on previous tasks when training on new data. The method is evaluated on a range of benchmark datasets as well as more complex data. Our method not only successfully retains the knowledge of old tasks within the neural network, but does so more resource-efficiently than other state-of-the-art solutions.
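The abstract does not spell out the mechanism, but the core idea, using LRP relevance scores to decide which parts of a network to protect, can be sketched. The snippet below is a minimal illustration, not the authors' implementation: the MLP, lrp_relevance, and freeze_relevant_units names are our own, the relevance propagation uses the standard epsilon-LRP rule through the output layer only, and hard-freezing the most relevant hidden units via a gradient mask is one simple way such scores might shield old-task knowledge.

    import torch
    import torch.nn as nn

    class MLP(nn.Module):
        """Two-layer network; the hidden activation is cached for LRP."""
        def __init__(self, d_in=784, d_hidden=256, d_out=10):
            super().__init__()
            self.fc1 = nn.Linear(d_in, d_hidden)
            self.fc2 = nn.Linear(d_hidden, d_out)

        def forward(self, x):
            self.h = torch.relu(self.fc1(x))  # cached for relevance propagation
            return self.fc2(self.h)

    @torch.no_grad()
    def lrp_relevance(model, x, eps=1e-6):
        """Epsilon-LRP through the output layer: total relevance of each
        hidden unit for the winning class, summed over the batch."""
        out = model(x)                                # fills model.h
        r_out = torch.zeros_like(out)
        idx = out.argmax(dim=1, keepdim=True)
        r_out.scatter_(1, idx, out.gather(1, idx))    # relevance = winning logit
        a, w = model.h, model.fc2.weight              # (B, H), (O, H)
        z = a @ w.t() + model.fc2.bias                # pre-activations (B, O)
        stab = eps * ((z >= 0).float() * 2 - 1)       # sign-matched stabilizer
        r_hidden = a * ((r_out / (z + stab)) @ w)     # redistribute to hidden units
        return r_hidden.abs().sum(dim=0)              # (H,)

    def freeze_relevant_units(model, relevance, keep_frac=0.2):
        """Zero the incoming gradients of the top keep_frac most relevant
        hidden units so new-task training cannot overwrite them."""
        protected = relevance.topk(int(keep_frac * relevance.numel())).indices
        def hook(grad):
            grad = grad.clone()
            grad[protected] = 0.0                     # rows feeding protected units
            return grad
        model.fc1.weight.register_hook(hook)
        model.fc1.bias.register_hook(hook)

    # Usage: score units on old-task data, protect them, then train on the new task.
    model = MLP()
    x_old = torch.randn(32, 784)                      # stand-in for old-task inputs
    freeze_relevant_units(model, lrp_relevance(model, x_old))

Whether relevance is aggregated per unit or per weight, and whether protected parameters are hard-frozen as above or merely regularized toward their old values, are design choices the paper itself resolves; the mask here is only the simplest variant.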


Overcoming Catastrophic Forgetting in Convolutional Neural Networks by Selective Network Augmentation

Lifelong learning aims to develop machine learning systems that can lear...

SupportNet: solving catastrophic forgetting in class incremental learning with support data

A plain well-trained deep learning model often does not have the ability...

BERT WEAVER: Using WEight AVERaging to Enable Lifelong Learning for Transformer-based Models

Recent developments in transfer learning have boosted the advancements i...

Neural Network Retraining for Model Serving

We propose incremental (re)training of a neural network model to cope wi...

Empirical investigations on WVA structural issues

In this paper we want to present the results of empirical verification o...

Learning to Remember from a Multi-Task Teacher

Recent studies on catastrophic forgetting during sequential learning typ...

Iterative Network Pruning with Uncertainty Regularization for Lifelong Sentiment Classification

Lifelong learning capabilities are crucial for sentiment classifiers to ...