Selectively increasing the diversity of GAN-generated samples

07/04/2022
by Jan Dubiński, et al.

Generative Adversarial Networks (GANs) are powerful models able to synthesize data samples that closely resemble the distribution of real data, yet the diversity of those generated samples is limited due to the so-called mode collapse phenomenon observed in GANs. Conditional GANs are especially prone to mode collapse, as they tend to ignore the input noise vector and focus on the conditional information. Recent methods proposed to mitigate this limitation increase the diversity of generated samples, yet they reduce the performance of the models when similarity of samples is required. To address this shortcoming, we propose a novel method to selectively increase the diversity of GAN-generated samples. By adding a simple yet effective regularization to the training loss function, we encourage the generator to discover new data modes for inputs related to diverse outputs while generating consistent samples for the remaining ones. More precisely, we maximise the ratio of distances between generated images and their input latent vectors, scaling the effect according to the diversity of samples for a given conditional input. We show the superiority of our method on a synthetic benchmark as well as in a real-life scenario: simulating data from the Zero Degree Calorimeter of the ALICE experiment at the LHC, CERN.
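The abstract describes the regularizer only at a high level. The sketch below (PyTorch) shows one way such a term could look, assuming an L1 distance between two images generated from the same conditional input and a precomputed per-condition diversity weight; the names selective_diversity_loss, per_condition_diversity, lambda_div and the generator call are illustrative assumptions, not the paper's actual code, and the exact formulation in the paper may differ.

import torch

def selective_diversity_loss(z1, z2, img1, img2, per_condition_diversity, eps=1e-8):
    # Sketch only: maximise the ratio of the distance between two generated
    # images to the distance between their input latent vectors, weighted per
    # conditional input so that conditions with diverse real outputs are pushed
    # to diversify while consistent conditions are left largely unaffected.
    d_img = torch.mean(torch.abs(img1 - img2), dim=(1, 2, 3))  # per-sample image distance
    d_z = torch.mean(torch.abs(z1 - z2), dim=1)                # per-sample latent distance
    ratio = d_img / (d_z + eps)                                # ratio to maximise
    # per_condition_diversity: assumed precomputed weight (e.g. in [0, 1])
    # reflecting how diverse the real data is for each conditional input.
    return -torch.mean(per_condition_diversity * ratio)        # negated, so it adds to the generator loss

A hypothetical generator update would then add this term to the adversarial objective, e.g. loss_G = adversarial_loss + lambda_div * selective_diversity_loss(z1, z2, generator(z1, cond), generator(z2, cond), diversity_weights), with lambda_div controlling the strength of the regularization.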

