GANs beyond divergence minimization

Generative adversarial networks (GANs) can be interpreted as an adversarial game between two players: a discriminator D, which learns to classify real data from fake data, and a generator G, which learns to generate realistic data by "fooling" D into classifying fake data as real. The currently dominant view is that G learns by minimizing a divergence, since the overall objective function is a divergence when D is optimal. However, this view has been challenged by inconsistencies between theory and practice. In this paper, we discuss the properties of the most common loss functions for G (e.g., saturating and non-saturating f-GAN, LSGAN, WGAN). We show that these loss functions are not divergences and do not have the equilibria expected of divergences. This suggests that G need not minimize the same objective that D maximizes, nor maximize the objective of D after swapping real data with fake data (non-saturating GAN), but can instead use a wide range of loss functions to learn to generate realistic data. We define GANs through two separate and independent steps: a D maximization step and a G minimization step. We generalize the generator step to four new classes of loss functions, most of which are actual divergences (whereas the traditional G loss functions are not). We test a wide variety of loss functions from these four classes on a synthetic dataset and on CIFAR-10. We observe that most loss functions converge well and yield data generation quality comparable to the non-saturating GAN, LSGAN, and WGAN-GP generator loss functions, whether or not they are divergences. These results suggest that GANs do not conform well to the divergence minimization theory and form a much broader class of models than previously assumed.
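
To make the two-step framing concrete, here is a minimal PyTorch-style sketch (illustrative only, not the paper's implementation) of a D maximization step followed by an independent G minimization step whose loss is a separate design choice. The `variant` switch contrasts the saturating generator loss, where G minimizes the same objective D maximizes, with the non-saturating one, where G maximizes D's objective with real and fake labels swapped; all function and variable names here are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def discriminator_step(D, G, x_real, opt_D, latent_dim):
    # Maximize E[log D(x)] + E[log(1 - D(G(z)))] w.r.t. D,
    # written as minimizing the negated objective via binary cross-entropy.
    z = torch.randn(x_real.size(0), latent_dim)
    x_fake = G(z).detach()  # block gradients from flowing into G
    logits_real, logits_fake = D(x_real), D(x_fake)
    loss_D = (F.binary_cross_entropy_with_logits(logits_real, torch.ones_like(logits_real))
              + F.binary_cross_entropy_with_logits(logits_fake, torch.zeros_like(logits_fake)))
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

def generator_step(D, G, opt_G, batch_size, latent_dim, variant="non_saturating"):
    # The generator loss is chosen independently; it need not mirror D's objective.
    z = torch.randn(batch_size, latent_dim)
    logits_fake = D(G(z))
    if variant == "saturating":
        # G minimizes the same objective D maximizes: E[log(1 - D(G(z)))].
        # BCE with target 0 equals -log(1 - sigmoid(logits)), so we negate it.
        loss_G = -F.binary_cross_entropy_with_logits(logits_fake, torch.zeros_like(logits_fake))
    else:
        # Non-saturating: G minimizes -E[log D(G(z))], i.e., fake labels swapped for real.
        loss_G = F.binary_cross_entropy_with_logits(logits_fake, torch.ones_like(logits_fake))
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
```

Under the paper's framing, other generator losses (e.g., LSGAN- or WGAN-style losses, or the four new classes proposed here) would slot into `generator_step` in the same way, without changing the discriminator step.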
