
Improved BiGAN training with marginal likelihood equalization

11/04/2019
by Pablo Sánchez-Martín, et al.

We propose a novel training procedure for improving the performance of generative adversarial networks (GANs), especially bidirectional GANs (BiGANs). First, we enforce that the empirical distribution of the inverse inference network's outputs matches the prior distribution, which favors the generator's ability to reproduce the observed samples. Second, we find that the marginal log-likelihood of the samples reveals a severe overrepresentation of certain types of samples. To address this issue, we propose training the bidirectional GAN with non-uniform sampling for mini-batch selection, which yields improved quality and variety in the generated samples, as measured both quantitatively and by visual inspection. We illustrate the new procedure on the well-known CIFAR-10, Fashion-MNIST and CelebA datasets.
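The abstract names two concrete mechanisms: matching the encoder's empirical latent distribution to the prior, and non-uniform mini-batch sampling driven by marginal log-likelihood estimates. Below is a minimal PyTorch sketch of both, under stated assumptions: `prior_matching_penalty` is a simple moment-matching stand-in for the paper's distribution-matching term (the paper's exact formulation is not given here), the temperature `tau` is an assumed knob for the inverse-likelihood reweighting, and `log_px` stands in for the per-sample marginal log-likelihood estimates the method assumes are available.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler


def prior_matching_penalty(z: torch.Tensor) -> torch.Tensor:
    """Penalize deviation of a batch of encoder outputs z (batch x dim)
    from a standard normal prior by matching first and second moments.
    A simple stand-in for the paper's prior-matching term."""
    mean_pen = z.mean(dim=0).pow(2).sum()
    cov = torch.cov(z.T)  # empirical covariance of the latent batch
    cov_pen = (cov - torch.eye(z.shape[1])).pow(2).sum()
    return mean_pen + cov_pen


# Placeholder data and marginal log-likelihood estimates log p(x_i);
# in the paper these estimates would come from the BiGAN itself.
data = torch.randn(10_000, 3 * 32 * 32)  # CIFAR-10-sized vectors
log_px = torch.randn(10_000)             # stand-in likelihood estimates

# Non-uniform mini-batch selection: up-weight low-likelihood
# (underrepresented) samples and down-weight overrepresented ones.
tau = 1.0  # assumed temperature controlling the reweighting strength
weights = torch.softmax(-log_px / tau, dim=0)
sampler = WeightedRandomSampler(weights, num_samples=len(data), replacement=True)
loader = DataLoader(TensorDataset(data), batch_size=64, sampler=sampler)

for (batch,) in loader:
    # One BiGAN training step would go here: encode the batch and add
    # prior_matching_penalty(encoder(batch)) to the usual adversarial loss.
    break
```

With `WeightedRandomSampler` the equalization plugs into an otherwise standard training loop; only the sampling weights and the extra penalty term change, not the BiGAN architecture itself.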

