Local Convergence of Gradient Descent-Ascent for Training Generative Adversarial Networks

05/14/2023
by Evan Becker, et al.

Generative Adversarial Networks (GANs) are a popular formulation for training generative models on complex, high-dimensional data. The standard method for training GANs involves a gradient descent-ascent (GDA) procedure on a minimax optimization problem. This procedure is hard to analyze in general due to the nonlinear nature of the dynamics. We study the local dynamics of GDA for training a GAN with a kernel-based discriminator. This convergence analysis is based on a linearization of a nonlinear dynamical system that describes the GDA iterations, under an isolated-points model assumption from [Becker et al. 2022]. Our analysis brings out the effect of the learning rates, the regularization, and the bandwidth of the kernel discriminator on the local convergence rate of GDA. Importantly, we show phase transitions that indicate when the system converges, oscillates, or diverges. We also provide numerical simulations that verify our claims.
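The convergence/divergence phase transition that the abstract describes can be illustrated on a toy bilinear minimax problem. This is a hypothetical stand-in, not the paper's kernel-discriminator model: it sketches simultaneous GDA on f(x, y) = x*y - (lam/2)*y^2, where lam is an assumed regularization strength.

```python
def gda_final_norm(steps: int, eta: float, lam: float) -> float:
    """Run simultaneous gradient descent-ascent from (1, 1) on the toy
    minimax problem f(x, y) = x*y - (lam/2)*y**2 and return the final
    distance to the equilibrium (0, 0)."""
    x, y = 1.0, 1.0
    for _ in range(steps):
        grad_x = y            # df/dx
        grad_y = x - lam * y  # df/dy
        # Descent step for the min player x, ascent step for the max player y.
        x, y = x - eta * grad_x, y + eta * grad_y
    return (x * x + y * y) ** 0.5
```

With lam = 0 the linearized update matrix has eigenvalues 1 +/- i*eta, whose modulus exceeds 1, so the iterates spiral outward; a modest lam > 0 damps the ascent player and pulls the spectrum inside the unit circle. For example, `gda_final_norm(500, 0.1, 0.0)` grows well past 10 (divergence), while `gda_final_norm(500, 0.1, 0.5)` shrinks toward 0 (convergence).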

Related research

- Improving Generalization and Stability of Generative Adversarial Networks (02/11/2019)
- Instability and Local Minima in GAN Training with Kernel Discriminators (08/21/2022)
- ODE Analysis of Stochastic Gradient Methods with Optimism and Anchoring for Minimax Problems and GANs (05/26/2019)
- Interaction Matters: A Note on Non-asymptotic Local Convergence of Generative Adversarial Networks (02/16/2018)
- AdaGAN: Boosting Generative Models (01/09/2017)
- Witnessing Adversarial Training in Reproducing Kernel Hilbert Spaces (01/26/2019)
- Gradient Layer: Enhancing the Convergence of Adversarial Training for Generative Models (01/07/2018)
