Localizing Catastrophic Forgetting in Neural Networks

06/06/2019
by Felix Wiewel, et al.

Artificial neural networks (ANNs) suffer from catastrophic forgetting when trained on a sequence of tasks. While this phenomenon has been studied in the past, recent research on it remains limited. We propose a method for determining the contribution of individual parameters in an ANN to catastrophic forgetting. The method is used to analyze an ANN's response to three different continual learning scenarios.
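The abstract does not spell out how per-parameter contributions are measured, but one plausible way to probe them (a minimal sketch, not necessarily the authors' method) is to train on task A, then task B, and for each parameter measure how much task-A loss recovers when only that parameter is restored to its post-task-A value. The linear models, data, and restoration probe below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w, X, y):
    """Mean squared error of a linear model."""
    return float(np.mean((X @ w - y) ** 2))

def fit(X, y):
    """Closed-form least-squares fit (stand-in for training)."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Two synthetic tasks with different input-output relations (assumed data).
X_a = rng.normal(size=(100, 4))
y_a = X_a @ np.array([1.0, -2.0, 0.0, 0.0])
X_b = rng.normal(size=(100, 4))
y_b = X_b @ np.array([0.0, 0.0, 3.0, 1.0])

w_after_a = fit(X_a, y_a)  # parameters after learning task A
w_after_b = fit(X_b, y_b)  # parameters after task B overwrote them

# Contribution of parameter i to forgetting: drop in task-A loss when
# only parameter i is restored to its post-task-A value.
base = loss(w_after_b, X_a, y_a)
contrib = np.empty(len(w_after_b))
for i in range(len(w_after_b)):
    w_probe = w_after_b.copy()
    w_probe[i] = w_after_a[i]
    contrib[i] = base - loss(w_probe, X_a, y_a)

print(contrib)  # larger value: restoring this parameter helps task A more
```

Parameters whose restoration recovers the most task-A loss are the ones whose drift during task B caused the forgetting; a real analysis would apply the same probe to the weights of a trained network rather than a linear model.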


Related research:

- Continual Reinforcement Learning with Complex Synapses (02/20/2018): Unlike humans, who are capable of continual learning over their lifetime...
- Catastrophic Importance of Catastrophic Forgetting (08/20/2018): This paper describes some of the possibilities of artificial neural netw...
- ATLAS: Universal Function Approximator for Memory Retention (08/10/2022): Artificial neural networks (ANNs), despite their universal function appr...
- Out-of-distribution forgetting: vulnerability of continual learning to intra-class distribution shift (06/01/2023): Continual learning (CL) is an important technique to allow artificial ne...
- Alleviating catastrophic forgetting using context-dependent gating and synaptic stabilization (02/02/2018): Humans and most animals can learn new tasks without forgetting old ones....
- Attention-Based Structural-Plasticity (03/02/2019): Catastrophic forgetting/interference is a critical problem for lifelong ...
- How catastrophic can catastrophic forgetting be in linear regression? (05/19/2022): To better understand catastrophic forgetting, we study fitting an overpa...
