Gradient descent GAN optimization is locally stable

06/13/2017
by Vaishnavh Nagarajan et al.

Despite the growing prominence of generative adversarial networks (GANs), optimization in GANs is still a poorly understood topic. In this paper, we analyze the "gradient descent" form of GAN optimization, i.e., the natural setting where we simultaneously take small gradient steps in both generator and discriminator parameters. We show that even though GAN optimization does not correspond to a convex-concave game (even for simple parameterizations), under proper conditions, equilibrium points of this optimization procedure are still locally asymptotically stable for the traditional GAN formulation. On the other hand, we show that the recently proposed Wasserstein GAN (WGAN) can have non-convergent limit cycles near equilibrium. Motivated by this stability analysis, we propose an additional regularization term for gradient descent GAN updates, which guarantees local stability for both the WGAN and the traditional GAN, and also shows practical promise in speeding up convergence and addressing mode collapse.
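The failure mode the abstract describes, non-convergent cycling of simultaneous gradient steps near equilibrium, can be illustrated on a toy bilinear min-max game f(x, y) = x * y, which mimics the rotational dynamics of a WGAN-like objective near its equilibrium. This is an illustrative sketch, not the paper's experimental setup: the game, the learning rate, and the regularization weight are all assumptions. Plain simultaneous updates spiral outward, while adding a penalty on the squared discriminator gradient to the generator's update, in the spirit of the regularization term the paper proposes, restores local convergence:

```python
# Toy bilinear min-max game f(x, y) = x * y: the generator variable x
# minimizes f, the discriminator variable y maximizes it. The unique
# equilibrium is (0, 0).

def simulate(steps=200, lr=0.1, reg=0.0, x0=1.0, y0=1.0):
    """Simultaneous gradient descent-ascent, optionally regularized.

    reg > 0 adds a penalty reg * (df/dy)**2 = reg * x**2 to the
    generator's loss, a stand-in for penalizing the norm of the
    discriminator's gradient as in the paper's proposed regularizer.
    Returns the final distance from the equilibrium (0, 0).
    """
    x, y = x0, y0
    for _ in range(steps):
        grad_x = y + 2.0 * reg * x   # d/dx [x*y + reg*x**2]
        grad_y = x                   # d/dy [x*y]
        # simultaneous small gradient steps in both players' parameters
        x, y = x - lr * grad_x, y + lr * grad_y
    return (x**2 + y**2) ** 0.5

# Unregularized updates spiral away from the equilibrium:
print(simulate(reg=0.0))  # norm grows past its initial value of ~1.41
# With the gradient penalty, the same dynamics converge toward it:
print(simulate(reg=0.5))  # norm shrinks toward 0
```

The unregularized iterates multiply their distance from the origin by sqrt(1 + lr^2) each step, which is exactly the limit-cycle/divergence behavior the stability analysis identifies; the regularizer adds damping that pulls the spectral radius of the update below one.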


Related research

02/11/2020 · Smoothness and Stability in GANs
Generative adversarial networks, or GANs, commonly display unstable beha...

08/04/2018 · Global Convergence to the Equilibrium of GANs using Variational Inequalities
In optimization, the negative gradient of a function denotes the directi...

01/27/2019 · Deconstructing Generative Adversarial Networks
We deconstruct the performance of GANs into three components: 1. Formu...

02/02/2019 · Minmax Optimization: Stable Limit Points of Gradient Descent Ascent are Locally Optimal
Minmax optimization, especially in its general nonconvex-nonconcave form...

01/13/2018 · Which Training Methods for GANs do actually Converge?
Recent work has shown local convergence of GAN training for absolutely c...

11/04/2020 · On the Convergence of Gradient Descent in GANs: MMD GAN As a Gradient Flow
We consider the maximum mean discrepancy (MMD) GAN problem and propose a...

04/22/2020 · Stabilizing Training of Generative Adversarial Nets via Langevin Stein Variational Gradient Descent
Generative adversarial networks (GANs), famous for the capability of lea...
