
Overcoming catastrophic forgetting in neural networks

by James Kirkpatrick et al.

The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Neural networks are not, in general, capable of this, and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks which they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate that our approach is scalable and effective by solving a set of classification tasks based on the MNIST handwritten digit dataset and by learning several Atari 2600 games sequentially.
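The core of the approach described above (elastic weight consolidation) is a quadratic penalty that anchors each weight to its old-task value, scaled by that weight's importance. A minimal sketch of that penalty and its gradient is given below; the function names, the use of NumPy, and the toy Fisher values are illustrative assumptions, not the authors' implementation, and estimating the importance terms (e.g., via the Fisher information) is omitted.

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    # Quadratic anchor: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2.
    # Large fisher[i] marks a weight important for the old task, so
    # moving it away from theta_star[i] costs more.
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

def ewc_grad(theta, theta_star, fisher, lam=1.0):
    # Gradient of the penalty. Added to the new task's gradient, it
    # selectively slows learning on the important weights while
    # leaving unimportant ones nearly free to change.
    return lam * fisher * (theta - theta_star)

# Toy example (illustrative numbers): two weights that have both
# drifted by 1.0 from their old-task values, but with very different
# importance estimates.
theta = np.array([1.0, 1.0])
theta_star = np.zeros(2)
fisher = np.array([10.0, 0.1])  # assumed importance values

print(ewc_penalty(theta, theta_star, fisher))  # 0.5 * (10.0 + 0.1) = 5.05
print(ewc_grad(theta, theta_star, fisher))     # [10.0, 0.1]
```

In a combined objective, this gradient is simply added to the new task's loss gradient, so the high-importance weight is pulled back toward its old value roughly 100 times harder than the low-importance one, which is how expertise on the earlier task is preserved.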


Related Research

Alleviating catastrophic forgetting using context-dependent gating and synaptic stabilization

Humans and most animals can learn new tasks without forgetting old ones....

Learn-Prune-Share for Lifelong Learning

In lifelong learning, we wish to maintain and update a model (e.g., a ne...

BooVAE: A scalable framework for continual VAE learning under boosting approach

Variational Auto Encoders (VAE) are capable of generating realistic imag...

A general approach to progressive intelligence

In biological learning, data is used to improve performance on the task ...

Synaptic metaplasticity with multi-level memristive devices

Deep learning has made remarkable progress in various tasks, surpassing ...

Sequential mastery of multiple tasks: Networks naturally learn to learn

We explore the behavior of a standard convolutional neural net in a sett...

Pseudo-Recursal: Solving the Catastrophic Forgetting Problem in Deep Neural Networks

In general, neural networks are not currently capable of learning tasks ...

Code Repositories


Implementation of "Overcoming catastrophic forgetting in neural networks" in Tensorflow