On the convergence properties of GAN training

01/13/2018
by Lars Mescheder et al.

Recent work has shown local convergence of GAN training for absolutely continuous data and generator distributions. In this note we show that the requirement of absolute continuity is necessary: we describe a simple yet prototypical counterexample showing that in the more realistic case of distributions that are not absolutely continuous, unregularized GAN training is not always convergent. Furthermore, we discuss recent regularization strategies that were proposed to stabilize GAN training. Our analysis shows that GAN training with instance noise or gradient penalties converges, while Wasserstein GANs (WGANs) and WGAN-GP with a finite number of discriminator updates per generator update do not, in general, converge to the equilibrium point. We explain these results and show that both instance noise and gradient penalties constitute solutions to the problem of purely imaginary eigenvalues of the Jacobian of the gradient vector field at the equilibrium. Based on our analysis, we also propose a simplified gradient penalty with the same effects on local convergence as the more complicated penalties.
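To make the counterexample and the penalty concrete, the following is a minimal sketch (not code from the paper) of the prototypical Dirac-GAN dynamics the note analyzes: the generator distribution is a Dirac mass at theta, the true data is a Dirac mass at 0, and the discriminator is linear, D_psi(x) = psi * x. The function name simulate and the parameter gamma are illustrative choices; gamma weights an assumed R1-style simplified gradient penalty (gamma/2) * psi^2 on the discriminator, which vanishes for gamma = 0 (unregularized training).

import numpy as np

def df(t):
    # f(t) = -log(1 + exp(-t)) is the logistic GAN loss; df is its
    # derivative, f'(t) = 1 / (1 + exp(t)).
    return 1.0 / (1.0 + np.exp(t))

def simulate(gamma=0.0, lr=0.1, steps=500):
    """Simultaneous gradient descent/ascent on the Dirac-GAN.

    Generator: Dirac mass at theta; true data: Dirac mass at 0;
    discriminator: D_psi(x) = psi * x.  `gamma` weights an illustrative
    R1-style penalty (gamma/2) * psi**2 on the discriminator
    (the squared gradient norm of D at the true data point).
    """
    theta, psi = 1.0, 1.0          # start away from the equilibrium (0, 0)
    trajectory = [(theta, psi)]
    for _ in range(steps):
        # Gradient vector field v(theta, psi) of the Dirac-GAN objective
        # L(theta, psi) = f(theta * psi) + f(0).
        d_theta = -df(theta * psi) * psi               # generator descends
        d_psi = df(theta * psi) * theta - gamma * psi  # discriminator ascends
        theta, psi = theta + lr * d_theta, psi + lr * d_psi
        trajectory.append((theta, psi))
    return np.array(trajectory)

# Unregularized training circles away from the equilibrium ...
unreg = simulate(gamma=0.0)
# ... while the gradient penalty pulls the iterates in.
reg = simulate(gamma=1.0)
print("final |(theta, psi)| without penalty:", np.linalg.norm(unreg[-1]))
print("final |(theta, psi)| with penalty:   ", np.linalg.norm(reg[-1]))

At the equilibrium (theta, psi) = (0, 0), the Jacobian of the unregularized gradient vector field has purely imaginary eigenvalues, so discretized simultaneous updates spiral outward; the penalty term shifts the eigenvalues into the left half-plane, which is the mechanism the abstract refers to.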

