On Convergence and Stability of GANs

by Naveen Kodali, et al.

We propose studying GAN training dynamics as regret minimization, in contrast to the popular view that training consistently minimizes a divergence between the real and generated distributions. We analyze the convergence of GAN training from this new point of view to understand why mode collapse happens. We hypothesize that undesirable local equilibria in this non-convex game are responsible for mode collapse, and we observe that these local equilibria often exhibit sharp gradients of the discriminator function around some real data points. We demonstrate that these degenerate local equilibria can be avoided with a gradient penalty scheme called DRAGAN. We show that DRAGAN enables faster training, achieves improved stability with fewer mode collapses, and leads to generator networks with better modeling performance across a variety of architectures and objective functions.
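The penalty described above discourages sharp discriminator gradients in a noise-perturbed neighborhood of the real data. A minimal dependency-free sketch of that idea follows; the toy logistic discriminator `D`, its closed-form gradient, and the hyperparameter names (`lam`, `k`, `c`) are illustrative assumptions, not the authors' implementation (in practice D is a neural network and the gradient comes from autodiff).

```python
import numpy as np

w = np.array([3.0, -2.0])  # toy discriminator weights (illustrative)

def D(x):
    # toy discriminator: logistic score of a linear feature
    return 1.0 / (1.0 + np.exp(-x @ w))

def grad_D(x):
    # closed-form input gradient of the logistic discriminator
    s = D(x)
    return (s * (1.0 - s))[:, None] * w

def dragan_penalty(real, lam=10.0, k=1.0, c=0.5, rng=None):
    """Penalize discriminator gradient norms deviating from k
    around noise-perturbed real samples (a sketch of the scheme)."""
    rng = np.random.default_rng(0) if rng is None else rng
    # perturb real points with noise scaled by the data's spread
    noise = c * real.std() * rng.uniform(0.0, 1.0, size=real.shape)
    x_hat = real + noise
    grad_norms = np.linalg.norm(grad_D(x_hat), axis=1)
    return lam * np.mean((grad_norms - k) ** 2)

real = np.random.default_rng(1).normal(size=(64, 2))
penalty = dragan_penalty(real)
```

In a real training loop this scalar would be added to the discriminator's loss each step, so minimizing the loss also flattens the discriminator's gradients near the data manifold.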




Related Papers

Many Paths to Equilibrium: GANs Do Not Need to Decrease a Divergence At Every Step

Generative adversarial networks (GANs) are a family of generative models...

GANs beyond divergence minimization

Generative adversarial networks (GANs) can be interpreted as an adversar...

On the convergence properties of GAN training

Recent work has shown local convergence of GAN training for absolutely c...

Implicit competitive regularization in GANs

Generative adversarial networks (GANs) are capable of producing high qua...

Coulomb GANs: Provably Optimal Nash Equilibria via Potential Fields

Generative adversarial networks (GANs) evolved into one of the most succ...

Empirical Analysis of Overfitting and Mode Drop in GAN Training

We examine two key questions in GAN training, namely overfitting and mod...

A Novel Framework for Selection of GANs for an Application

Generative Adversarial Network (GAN) is a current focal point of researc...

Code Repositories


A stable algorithm for GAN training

