
Non-saturating GAN training as divergence minimization

10/15/2020
by Matt Shannon, et al.

Non-saturating generative adversarial network (GAN) training is widely used and has continued to obtain groundbreaking results. However, so far this approach has lacked strong theoretical justification, in contrast to alternatives such as f-GANs and Wasserstein GANs, which are motivated in terms of approximate divergence minimization. In this paper we show that non-saturating GAN training does in fact approximately minimize a particular f-divergence. We develop general theoretical tools to compare and classify f-divergences, and use these to show that the new f-divergence is qualitatively similar to reverse KL. These results help to explain the high sample quality but poor diversity often observed empirically when using this training scheme.
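As a rough, standalone illustration of the distinction the abstract builds on (not code from the paper), the sketch below contrasts the standard saturating generator loss E[log(1 - D(G(z)))] with the non-saturating loss -E[log D(G(z))] analysed in this work. The discriminator outputs d are hypothetical placeholder values; the point is only that the non-saturating loss keeps a strong gradient when generated samples are confidently rejected (d near 0), which is where the saturating loss flattens out.

# Toy comparison (illustrative only): saturating vs. non-saturating
# generator losses as a function of d = D(G(z)), the discriminator's
# output on a generated sample.
import numpy as np

d = np.linspace(1e-4, 1 - 1e-4, 9)   # hypothetical discriminator outputs on fakes

saturating = np.log(1.0 - d)         # generator minimizes E[log(1 - D(G(z)))]
non_saturating = -np.log(d)          # generator minimizes -E[log D(G(z))]

# Derivatives with respect to d: the saturating loss has gradient
# -1 / (1 - d), which stays near -1 as d -> 0 (early training, when
# fakes are easily rejected), whereas the non-saturating loss has
# gradient -1 / d, which grows large in magnitude in that regime.
grad_saturating = -1.0 / (1.0 - d)
grad_non_saturating = -1.0 / d

for di, gs, gn in zip(d, grad_saturating, grad_non_saturating):
    print(f"D(G(z)) = {di:.4f}   d(sat)/dd = {gs:9.2f}   d(non-sat)/dd = {gn:9.2f}")

Running the snippet simply tabulates the two gradients; it is meant only as a reminder of why the non-saturating update is preferred in practice, which is the training scheme whose divergence-minimization interpretation the paper develops.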


Related research

10/22/2019
Bridging the Gap Between f-GANs and Wasserstein GANs
Generative adversarial networks (GANs) have enjoyed much success in lear...

06/02/2016
f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization
Generative neural samplers are probabilistic models that implement sampl...

06/27/2018
Approximability of Discriminators Implies Diversity in GANs
While Generative Adversarial Networks (GANs) have empirically produced i...

09/02/2020
Properties of f-divergences and f-GAN training
In this technical report we describe some properties of f-divergences an...

10/23/2017
Many Paths to Equilibrium: GANs Do Not Need to Decrease a Divergence At Every Step
Generative adversarial networks (GANs) are a family of generative models...

05/13/2021
Empirical Evaluation of Biased Methods for Alpha Divergence Minimization
In this paper we empirically evaluate biased methods for alpha-divergenc...

05/12/2022
α-GAN: Convergence and Estimation Guarantees
We prove a two-way correspondence between the min-max optimization of ge...