PacGAN: The power of two samples in generative adversarial networks

12/12/2017
by Zinan Lin, et al.

Generative adversarial networks (GANs) are innovative techniques for learning generative models of complex data distributions from samples. Despite remarkable recent improvements in generating realistic images, a major shortcoming is that in practice they tend to produce samples with little diversity, even when trained on diverse datasets. This phenomenon, known as mode collapse, has been the main focus of several recent advances in GANs. Yet there is little understanding of why mode collapse happens and why existing approaches are able to mitigate it. We propose a principled approach to handling mode collapse, which we call packing. The main idea is to modify the discriminator to make decisions based on multiple samples from the same class, either all real or all artificially generated. We borrow analysis tools from binary hypothesis testing---in particular the seminal result of Blackwell [Bla53]---to prove a fundamental connection between packing and mode collapse. We show that packing naturally penalizes generators with mode collapse, thereby favoring generator distributions with less mode collapse during training. Numerical experiments on benchmark datasets suggest that packing provides significant improvements in practice as well.
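The packing idea described in the abstract (a discriminator that judges several samples jointly rather than one at a time) can be sketched as a simple batch transformation. The snippet below is an illustrative sketch, not the authors' implementation; the function name `pack` and the choice of NumPy are assumptions for the example.

```python
import numpy as np

def pack(samples, m):
    """Group a batch of samples into packed discriminator inputs.

    A packed discriminator sees m samples at once (all real, or all
    generated), concatenated along the feature axis. A mode-collapsed
    generator is then penalized: its packed samples look conspicuously
    alike, which is easy for the discriminator to detect.
    """
    n, d = samples.shape
    if n % m != 0:
        raise ValueError("batch size must be divisible by the packing degree m")
    # (n, d) -> (n // m, m * d): consecutive groups of m samples become one input
    return samples.reshape(n // m, m * d)

# A batch of 8 three-dimensional samples packed with degree m = 2
batch = np.arange(24, dtype=float).reshape(8, 3)
packed = pack(batch, m=2)
print(packed.shape)  # → (4, 6)
```

The packed tensors (one batch built from real samples, one from generated samples) would then be fed to an otherwise standard discriminator whose input dimension is m times larger.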

