Investigating Under and Overfitting in Wasserstein Generative Adversarial Networks

10/30/2019
by Ben Adlam et al.

We investigate under- and overfitting in Generative Adversarial Networks (GANs), using discriminators unseen by the generator to measure generalization. We find that the discriminator's model capacity has a significant effect on the quality of the generator, and that poor generator performance coincides with the discriminator underfitting. Contrary to our expectations, generators with large model capacity relative to the discriminator show no evidence of overfitting on CIFAR10, CIFAR100, and CelebA.
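The core measurement idea, training a fresh critic that the generator never saw in order to estimate the Wasserstein distance between real and generated samples, can be sketched as follows. This is a minimal numpy illustration of that idea, not the authors' implementation: it uses a linear critic with weight clipping (the Lipschitz constraint from the original WGAN recipe), and the function name and synthetic data are illustrative assumptions.

```python
import numpy as np

def independent_critic_distance(real, fake, steps=500, lr=0.05, clip=0.1, seed=0):
    """Estimate the Wasserstein-1 distance between two sample sets by
    training a fresh linear critic f(x) = w.x from scratch.

    Because the critic is initialized here and never influenced the
    samples in `fake`, its final objective value serves as an unbiased
    held-out measure of how far the "generated" distribution is from
    the "real" one. Weight clipping approximately enforces the
    1-Lipschitz constraint, as in the original WGAN formulation.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=real.shape[1])
    for _ in range(steps):
        # Critic objective: E[f(real)] - E[f(fake)]; for a linear
        # critic the gradient w.r.t. w is the difference of the means.
        grad_w = real.mean(axis=0) - fake.mean(axis=0)
        w = np.clip(w + lr * grad_w, -clip, clip)
    # The achieved objective is a lower-bound estimate of W1.
    return float((real @ w).mean() - (fake @ w).mean())
```

A well-fit generator's samples should yield a small estimate, while a poorly fit generator's samples yield a larger one; in the paper's setting this held-out critic plays the role of the "discriminator unseen by the generator".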

