
Improved BiGAN training with marginal likelihood equalization

by Pablo Sánchez-Martín, et al.

We propose a novel training procedure that improves the performance of generative adversarial networks (GANs), especially bidirectional GANs (BiGANs). First, we enforce that the empirical distribution produced by the inverse inference network matches the prior distribution, which improves the generator's ability to reproduce the observed samples. Second, we find that the marginal log-likelihood of the samples reveals a severe overrepresentation of certain types of samples. To address this issue, we propose training the bidirectional GAN with non-uniform sampling for mini-batch selection, which improves the quality and variety of the generated samples, as measured both quantitatively and by visual inspection. We illustrate our procedure on the well-known CIFAR10, Fashion MNIST and CelebA datasets.
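The non-uniform mini-batch selection described above can be sketched as a weighted sampler: samples with a high estimated marginal log-likelihood (the overrepresented ones) are drawn less often, equalizing their influence during training. This is a minimal illustrative sketch, not the paper's implementation; it assumes per-sample log-likelihood estimates are already available, and the function name and `temperature` parameter are hypothetical.

```python
import numpy as np

def equalized_batch_sampler(log_likelihoods, batch_size, temperature=1.0, rng=None):
    """Draw mini-batch indices with probability inversely related to the
    estimated marginal log-likelihood, down-weighting overrepresented samples."""
    rng = np.random.default_rng() if rng is None else rng
    # Higher log-likelihood -> lower sampling weight; temperature sets the strength.
    logits = -temperature * np.asarray(log_likelihoods, dtype=np.float64)
    logits -= logits.max()          # subtract max for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(len(probs), size=batch_size, replace=False, p=probs)

# Toy usage: indices 2 and 3 (high likelihood) are drawn least often.
ll = np.array([0.0, 0.0, 5.0, 5.0, -5.0, -5.0])
idx = equalized_batch_sampler(ll, batch_size=3)
```

With `temperature=0` this reduces to uniform sampling, so the scheme interpolates between standard mini-batch selection and aggressive equalization.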



