Many Paths to Equilibrium: GANs Do Not Need to Decrease a Divergence At Every Step

10/23/2017
by William Fedus, et al.

Generative adversarial networks (GANs) are a family of generative models that do not minimize a single training criterion. Unlike other generative models, the data distribution is learned via a game between a generator (the generative model) and a discriminator (a teacher providing training signal) that each minimize their own cost. GANs are designed to reach a Nash equilibrium at which each player cannot reduce their cost without changing the other players' parameters. One useful approach for the theory of GANs is to show that a divergence between the training distribution and the model distribution obtains its minimum value at equilibrium. Several recent research directions have been motivated by the idea that this divergence is the primary guide for the learning process and that every step of learning should decrease the divergence. We show that this view is overly restrictive. During GAN training, the discriminator provides learning signal in situations where the gradients of the divergences between distributions would not be useful. We provide empirical counterexamples to the view of GAN training as divergence minimization. Specifically, we demonstrate that GANs are able to learn distributions in situations where the divergence minimization point of view predicts they would fail. We also show that gradient penalties motivated from the divergence minimization perspective are equally helpful when applied in other contexts in which the divergence minimization perspective does not predict they would be helpful. This contributes to a growing body of evidence that GAN training may be more usefully viewed as approaching Nash equilibria via trajectories that do not necessarily minimize a specific divergence at each step.
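The game structure described above — each player taking gradient steps on its own cost rather than jointly minimizing a single divergence — can be illustrated with a minimal 1-D sketch. The example below is not from the paper: the linear generator, logistic discriminator, hyperparameters, and closed-form gradients are all illustrative assumptions. The generator uses the non-saturating loss, which the abstract contrasts with strict divergence minimization.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Toy 1-D GAN: real data ~ N(2, 0.5). Generator g(z) = a*z + b,
# discriminator d(x) = sigmoid(w*x + c). Each player performs a
# gradient step on its OWN cost, so no single divergence is being
# minimized at every step.
a, b = 1.0, 0.0        # generator parameters
w, c = 0.1, 0.0        # discriminator parameters
lr, batch = 0.05, 64

for _ in range(3000):
    x_real = 2.0 + 0.5 * rng.standard_normal(batch)
    z = rng.standard_normal(batch)
    x_fake = a * z + b

    # Discriminator step: minimize -log d(real) - log(1 - d(fake)).
    s_r = sigmoid(w * x_real + c)
    s_f = sigmoid(w * x_fake + c)
    grad_w = np.mean(-(1 - s_r) * x_real + s_f * x_fake)
    grad_c = np.mean(-(1 - s_r) + s_f)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step: minimize the non-saturating loss -log d(fake).
    s_f = sigmoid(w * x_fake + c)
    g_x = -(1 - s_f) * w          # dL_G / dx_fake
    a -= lr * np.mean(g_x * z)
    b -= lr * np.mean(g_x)

# After alternating updates, the generated samples should have drifted
# toward the real data's mean of 2, even though neither player ever
# descended a shared objective.
fake_mean = float(np.mean(a * rng.standard_normal(10000) + b))
print(fake_mean)
```

Because the discriminator here is linear in x, it can only signal a mismatch in means, so the generator mean tracks the data mean while the variance is left largely unconstrained — a small-scale instance of the discriminator providing useful learning signal without a divergence being decreased at each step.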

