Empirical Analysis of Overfitting and Mode Drop in GAN Training

06/25/2020
by Yasin Yazıcı, et al.

We examine two key questions in GAN training, namely overfitting and mode drop, from an empirical perspective. We show that when stochasticity is removed from the training procedure, GANs can overfit and exhibit almost no mode drop. Our results shed light on important characteristics of the GAN training procedure. They also provide evidence against prevailing intuitions that GANs do not memorize the training set, and that mode dropping is mainly due to properties of the GAN objective rather than how it is optimized during training.
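The abstract's central manipulation is "removing stochasticity" from the GAN training procedure. As a rough illustration only, the following minimal PyTorch sketch shows one plausible reading of that idea: seed all randomness, make deterministic kernels explicit, update on the full (toy) dataset every step instead of random minibatches, and pair the training set with a fixed set of latent codes rather than resampling noise. The two-layer networks, SGD optimizer, and Gaussian "dataset" below are placeholder assumptions for illustration, not the paper's actual experimental protocol.

```python
# Minimal sketch of a de-randomized GAN training loop (assumptions, not the paper's setup).
import torch
import torch.nn as nn

torch.manual_seed(0)
torch.use_deterministic_algorithms(True)  # request deterministic kernels where available

latent_dim, data_dim, n_samples = 8, 2, 128

# Toy "training set": drawn once, never reshuffled or subsampled afterwards.
real_data = torch.randn(n_samples, data_dim)

# Fixed latent codes: one per training example, never resampled during training.
fixed_z = torch.randn(n_samples, latent_dim)

G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.SGD(G.parameters(), lr=0.05)  # full-batch SGD: no minibatch noise
opt_d = torch.optim.SGD(D.parameters(), lr=0.05)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    # Discriminator update on the entire dataset and the entire set of fixed codes.
    opt_d.zero_grad()
    d_real = D(real_data)
    d_fake = D(G(fixed_z).detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    loss_d.backward()
    opt_d.step()

    # Generator update against the same fixed latent codes.
    opt_g.zero_grad()
    d_fake = D(G(fixed_z))
    loss_g = bce(d_fake, torch.ones_like(d_fake))
    loss_g.backward()
    opt_g.step()
```

Under this kind of setup, any memorization of the training set or coverage of its modes can be attributed to the objective and the optimization dynamics themselves, since no sampling noise is injected during training.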

