Closed-Loop GAN for Continual Learning

11/03/2018
by Amanda Rios, et al.

Sequential learning of tasks using gradient descent leads to an unremitting decline in the accuracy of tasks for which training data is no longer available, termed catastrophic forgetting. Generative models have been explored as a means to approximate the distribution of old tasks and bypass storage of real data. Here we propose a cumulative closed-loop generator and embedded classifier using an AC-GAN architecture, provided with external regularization by a small buffer. We evaluate incremental learning using a notoriously hard paradigm, single-headed learning, in which each task is a disjoint subset of classes in the overall dataset and performance is evaluated on all previous classes. First, we show that the variability contained in a small percentage of a dataset (memory buffer) accounts for a significant portion of the reported accuracy, both in multi-task and continual learning settings. Second, we show that using a generator to continuously output new images while training provides an up-sampling of the buffer, which prevents catastrophic forgetting and yields superior performance compared to a fixed buffer. We achieve an average accuracy over all classes of 92.26% after 5 tasks using GAN sampling with a buffer of only 0.17% of the dataset size. In comparison, a network with regularization (EWC) shows a deteriorated average performance of 29.19%, while the baseline of no regularization (plain gradient descent) performs at 99.84% (MNIST) and 99.79% across tasks. Our method has a very low long-term memory cost (the buffer) as well as negligible intermediate memory storage.
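The abstract describes the core training mechanism at a high level: while learning a new task, the classifier sees a mix of current-task data, a small buffer of stored past examples, and images sampled from the generator, so the buffer is effectively up-sampled by generative replay. Below is a minimal PyTorch sketch of that idea; it is an illustration of buffer-plus-GAN replay under assumed interfaces, not the authors' implementation, and names such as `generator`, `classifier`, `latent_dim`, and `replay_training_step` are placeholders.

```python
# Minimal sketch: one classifier update that mixes current-task data with
# (a) samples from a small memory buffer of old real images and
# (b) images synthesized by a class-conditional generator trained on past tasks.
# Assumes `generator(z, labels)` returns images and exposes `latent_dim`.

import torch
import torch.nn.functional as F


def replay_training_step(classifier, generator, optimizer,
                         real_x, real_y, buffer_x, buffer_y, n_replay=32):
    """Train the classifier on current data plus buffer and GAN replay."""
    device = real_x.device

    # (a) draw a small batch of old real examples from the memory buffer
    idx = torch.randint(0, buffer_x.size(0), (n_replay,), device=device)
    buf_x, buf_y = buffer_x[idx], buffer_y[idx]

    # (b) "up-sample" the buffer with generated old-class images;
    # class labels are reused from the buffer draw for conditioning
    with torch.no_grad():
        z = torch.randn(n_replay, generator.latent_dim, device=device)
        gen_y = buf_y[torch.randint(0, n_replay, (n_replay,), device=device)]
        gen_x = generator(z, gen_y)

    # (c) update the classifier on the union of current and replayed data
    x = torch.cat([real_x, buf_x, gen_x])
    y = torch.cat([real_y, buf_y, gen_y])
    loss = F.cross_entropy(classifier(x), y)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this reading, the generator is also retrained on the same mixed stream (the "closed loop"), so that old-class images can keep being sampled in later tasks without storing more than the small buffer of real data.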

