Generative Adversarial Network Training is a Continual Learning Problem

11/27/2018
by Kevin J. Liang, et al.

Generative Adversarial Networks (GANs) have proven to be a powerful framework for learning to draw samples from complex distributions. However, GANs are also notoriously difficult to train, with mode collapse and oscillations being common problems. We hypothesize that this is at least in part due to the evolution of the generator distribution and the catastrophic forgetting tendency of neural networks, which leads to the discriminator losing the ability to remember synthesized samples from previous instantiations of the generator. Recognizing this, our contributions are twofold. First, we show that GAN training makes for a more interesting and realistic benchmark for evaluating continual learning methods than some of the more canonical datasets. Second, we propose leveraging continual learning techniques to augment the discriminator, preserving its ability to recognize previous generator samples. We show that the resulting methods add only a light amount of computation, involve minimal changes to the model, and result in better overall performance on the examined image and text generation tasks.
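The proposal to augment the discriminator with a continual learning regularizer can be illustrated with Elastic Weight Consolidation (EWC), one such technique. The sketch below is a minimal NumPy illustration, not the paper's implementation: the function names, the toy parameter vectors, and the Fisher values are all made up for the example. The idea is that the discriminator's parameters are anchored at a snapshot taken against an earlier generator, and a quadratic penalty, weighted by an estimate of the diagonal Fisher information, discourages forgetting what that earlier generator's samples looked like.

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """EWC penalty: penalizes moving parameters away from an anchor
    point theta_star, with per-parameter weights given by the estimated
    diagonal Fisher information. Names are illustrative only."""
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

def discriminator_loss(d_real, d_fake):
    """Standard binary cross-entropy discriminator loss on sigmoid outputs."""
    eps = 1e-8  # numerical safety for log
    return -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))

# Toy example: anchor the discriminator at parameters saved while
# training against an earlier generator, then add the EWC penalty
# to the usual GAN loss for the current generator.
theta = np.array([1.0, 2.0, 3.0])        # current discriminator parameters
theta_star = np.array([1.0, 1.0, 1.0])   # snapshot at the anchor point
fisher = np.array([0.5, 1.0, 2.0])       # estimated diagonal Fisher values

penalty = ewc_penalty(theta, theta_star, fisher, lam=2.0)
# 0.5 * 2.0 * (0.5*0 + 1.0*1 + 2.0*4) = 9.0
print(penalty)

# In a real training loop the total discriminator objective would be
# discriminator_loss(d_real, d_fake) + penalty, with theta_star and
# fisher refreshed periodically as the generator evolves.
```

In practice the Fisher diagonal would be estimated from gradients of the discriminator's log-likelihood, and the anchor would be updated as the generator distribution drifts; the constant `lam` trades off plasticity against retention.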

Related research:

- Overcoming Mode Collapse with Adaptive Multi Adversarial Training (12/29/2021): Generative Adversarial Networks (GANs) are a class of generative models ...
- Few-Shot Continual Learning for Conditional Generative Adversarial Networks (05/19/2023): In few-shot continual learning for generative models, a target mode must...
- Re-using Adversarial Mask Discriminators for Test-time Training under Distribution Shifts (08/26/2021): Thanks to their ability to learn flexible data-driven losses, Generative...
- Efficient Continual Adaptation for Generative Adversarial Networks (03/06/2021): We present a continual learning approach for generative adversarial netw...
- Lipizzaner: A System That Scales Robust Generative Adversarial Network Training (11/30/2018): GANs are difficult to train due to convergence pathologies such as mode ...
- GRIm-RePR: Prioritising Generating Important Features for Pseudo-Rehearsal (11/27/2019): Pseudo-rehearsal allows neural networks to learn a sequence of tasks wit...
- CSG0: Continual Urban Scene Generation with Zero Forgetting (12/06/2021): With the rapid advances in generative adversarial networks (GANs), the v...
