
Prescribed Generative Adversarial Networks

by Adji B. Dieng et al.

Generative adversarial networks (GANs) are a powerful approach to unsupervised learning. They have achieved state-of-the-art performance in the image domain. However, GANs are limited in two ways. They often learn distributions with low support (a phenomenon known as mode collapse) and they do not guarantee the existence of a probability density, which makes evaluating generalization via predictive log-likelihood impossible. In this paper, we develop the prescribed GAN (PresGAN) to address these shortcomings. PresGANs add noise to the output of a density network and optimize an entropy-regularized adversarial loss. The added noise makes approximations of the predictive log-likelihood tractable and stabilizes the training procedure. The entropy regularizer encourages PresGANs to capture all the modes of the data distribution. Fitting PresGANs involves computing the intractable gradients of the entropy regularization term; PresGANs sidestep this intractability using unbiased stochastic estimates. We evaluate PresGANs on several datasets and find that they mitigate mode collapse and generate samples with high perceptual quality. We further find that PresGANs narrow the gap in predictive log-likelihood between traditional GANs and variational autoencoders (VAEs).
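To make the mechanism in the abstract concrete, below is a minimal PyTorch sketch of a PresGAN-style generator step and log-likelihood estimate, based only on the abstract above. The network sizes, the names (G, D, log_sigma, lambda_entropy), and the conditional-entropy surrogate are illustrative assumptions, not the authors' implementation; in particular, the paper estimates the gradient of the intractable marginal entropy with unbiased stochastic estimates, which the crude surrogate below does not reproduce.

    import math

    import torch
    import torch.nn as nn

    latent_dim, data_dim = 64, 784

    # Generator (a density network once noise is added) and discriminator.
    G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim))
    D = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(), nn.Linear(256, 1))

    # Learned per-dimension noise scale; the added Gaussian noise prescribes a
    # tractable conditional density p(x | z) = N(x; G(z), sigma^2).
    log_sigma = nn.Parameter(torch.zeros(data_dim))
    lambda_entropy = 0.01  # entropy-regularization weight (illustrative value)

    opt_g = torch.optim.Adam(list(G.parameters()) + [log_sigma], lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    def generator_step(batch_size=64):
        z = torch.randn(batch_size, latent_dim)
        eps = torch.randn(batch_size, data_dim)
        x_fake = G(z) + log_sigma.exp() * eps  # noise added to the generator output

        # Non-saturating adversarial loss against the discriminator.
        adv_loss = bce(D(x_fake), torch.ones(batch_size, 1))

        # Crude surrogate for the entropy regularizer: the conditional Gaussian
        # entropy equals sum(log sigma) up to a constant, so subtracting it
        # rewards higher-entropy samples. The paper instead uses unbiased
        # stochastic estimates of the intractable marginal entropy gradient.
        entropy_surrogate = log_sigma.sum()

        loss = adv_loss - lambda_entropy * entropy_surrogate
        opt_g.zero_grad()
        loss.backward()
        opt_g.step()
        return loss.item()

    def log_likelihood(x, n_samples=500):
        # Monte Carlo estimate of log p(x) = log E_z[ N(x; G(z), sigma^2) ],
        # approximable precisely because the added noise prescribes p(x | z).
        with torch.no_grad():
            z = torch.randn(n_samples, latent_dim)
            dist = torch.distributions.Normal(G(z), log_sigma.exp())
            log_px_given_z = dist.log_prob(x).sum(-1)  # sum over data dimensions
            return torch.logsumexp(log_px_given_z, dim=0) - math.log(n_samples)

Note that as sigma goes to zero, p(x | z) degenerates and log_likelihood becomes undefined; that is exactly the implicit-density limitation of standard GANs that the abstract describes the added noise as fixing.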


"Best-of-Many-Samples" Distribution Matching

Generative Adversarial Networks (GANs) can achieve state-of-the-art samp...

Entropic GANs meet VAEs: A Statistical Approach to Compute Sample Likelihoods in GANs

Building on the success of deep learning, two modern approaches to learn...

Improved BiGAN training with marginal likelihood equalization

We propose a novel training procedure for improving the performance of g...

Stopping GAN Violence: Generative Unadversarial Networks

While the costs of human violence have attracted a great deal of attenti...

Maximum Entropy Generators for Energy-Based Models

Unsupervised learning is about capturing dependencies between variables ...

Out-of-Sample Testing for GANs

We propose a new method to evaluate GANs, namely EvalGAN. EvalGAN relies...

GANs with Variational Entropy Regularizers: Applications in Mitigating the Mode-Collapse Issue

Building on the success of deep learning, Generative Adversarial Network...

Code Repositories


Prescribed Generative Adversarial Networks
