FoCL: Feature-Oriented Continual Learning for Generative Models

03/09/2020
by Qicheng Lao et al.

In this paper, we propose a general framework for continual learning in generative models: Feature-Oriented Continual Learning (FoCL). Unlike previous works, which aim to solve the catastrophic forgetting problem by introducing regularization in the parameter space or the image space, FoCL imposes regularization in the feature space. Our experiments show that FoCL adapts faster to distributional changes in sequentially arriving tasks and achieves state-of-the-art performance for generative models in task incremental learning. We discuss choices of combined regularization spaces for different use-case scenarios with boosted performance, e.g., tasks that have high variability in the background. Finally, we introduce a forgetfulness measure that fairly evaluates the degree to which a model suffers from forgetting. Interestingly, the analysis of our proposed forgetfulness score also implies that FoCL tends to mitigate forgetting on future tasks.
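To make the core idea concrete, here is a minimal sketch of what feature-space regularization for a continually trained generator could look like, assuming a PyTorch setup. The names (feature_extractor, g_curr, g_prev, lam) and the MSE distance are illustrative assumptions for one plausible instantiation, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def feature_space_regularizer(feature_extractor, g_curr, g_prev, z):
    """Penalize drift, measured in feature space, between samples from a
    frozen copy of the previous-task generator and samples the current
    generator produces for the same latent codes."""
    with torch.no_grad():
        ref_feats = feature_extractor(g_prev(z))  # frozen reference features
    cur_feats = feature_extractor(g_curr(z))      # current features (gradients flow)
    return F.mse_loss(cur_feats, ref_feats)

# Hypothetical usage inside a training step, where lam trades off
# stability on old tasks against plasticity on the new one:
#   loss = adversarial_loss + lam * feature_space_regularizer(E, G, G_prev, z)
```

The contrast with parameter-space methods (which constrain weights directly) and image-space methods (which match raw pixels) is that the penalty here acts on an intermediate representation, which the abstract credits with faster adaptation to distributional shift.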

Related research

10/29/2017  Variational Continual Learning
This paper develops variational continual learning (VCL), a simple but g...

03/17/2023  Fixed Design Analysis of Regularization-Based Continual Learning
We consider a continual learning (CL) problem with two linear regression...

05/17/2023  Selective Amnesia: A Continual Learning Approach to Forgetting in Deep Generative Models
The recent proliferation of large-scale text-to-image models has led to ...

03/25/2021  Efficient Feature Transformations for Discriminative and Generative Continual Learning
As neural networks are increasingly being applied to real-world applicat...

05/23/2017  Continual Learning in Generative Adversarial Nets
Developments in deep generative models have allowed for tractable learni...

02/11/2022  Continual Learning with Invertible Generative Models
Catastrophic forgetting (CF) happens whenever a neural network overwrite...

07/05/2020  Pseudo-Rehearsal for Continual Learning with Normalizing Flows
Catastrophic forgetting (CF) happens whenever a neural network overwrite...
