
Non-saturating GAN training as divergence minimization

by Matt Shannon, et al.

Non-saturating generative adversarial network (GAN) training is widely used and has continued to obtain groundbreaking results. However, so far this approach has lacked strong theoretical justification, in contrast to alternatives such as f-GANs and Wasserstein GANs, which are motivated in terms of approximate divergence minimization. In this paper we show that non-saturating GAN training does in fact approximately minimize a particular f-divergence. We develop general theoretical tools to compare and classify f-divergences, and use these to show that the new f-divergence is qualitatively similar to reverse KL. These results help to explain the high sample quality but poor diversity often observed empirically when using this training scheme.
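To make the distinction concrete, here is a minimal numeric sketch (function names are illustrative, not from the paper) of the two generator objectives from the original GAN formulation. The saturating loss is log(1 − D(G(z))); the non-saturating alternative replaces it with −log D(G(z)). Comparing their gradients with respect to the discriminator's output shows why the non-saturating variant is preferred in practice when the discriminator confidently rejects fake samples:

```python
import math

def saturating_loss(d):
    # Original minimax generator loss: log(1 - D(G(z)))
    return math.log(1.0 - d)

def non_saturating_loss(d):
    # Non-saturating generator loss: -log D(G(z))
    return -math.log(d)

def grad_wrt_d(f, d, eps=1e-6):
    # Central finite difference, standing in for backprop through D
    return (f(d + eps) - f(d - eps)) / (2.0 * eps)

# Early in training the discriminator easily rejects generated
# samples, so D(G(z)) is close to 0:
d = 0.01
print(grad_wrt_d(saturating_loss, d))      # about -1.01: vanishingly weak signal
print(grad_wrt_d(non_saturating_loss, d))  # about -100: strong gradient signal
```

Both losses push D(G(z)) upward, but the saturating loss is nearly flat exactly where the generator most needs a learning signal, while the non-saturating loss has gradient −1/D that grows as the discriminator becomes more confident. The paper's contribution is to show that training with this non-saturating objective still corresponds to approximately minimizing a specific f-divergence, one that behaves qualitatively like reverse KL.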

