Global Convergence to the Equilibrium of GANs using Variational Inequalities

08/04/2018
by Ian Gemp et al.

In optimization, the negative gradient of a function denotes the direction of steepest descent. Furthermore, traveling in any direction orthogonal to the gradient maintains the value of the function. In this work, we show that these orthogonal directions that are ignored by gradient descent can be critical in equilibrium problems. Equilibrium problems have drawn heightened attention in machine learning due to the emergence of the Generative Adversarial Network (GAN). We use the framework of Variational Inequalities to analyze popular training algorithms for a fundamental GAN variant: the Wasserstein Linear-Quadratic GAN. We show that the steepest descent direction causes divergence from the equilibrium, and guaranteed convergence to the equilibrium is achieved through following a particular orthogonal direction. We call this successful technique Crossing-the-Curl, named for its mathematical derivation as well as its intuition: identify the game's axis of rotation and move "across" space in the direction towards smaller "curling".
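The core claim can be illustrated on a toy bilinear min-max game, min_x max_y f(x, y) = x·y, a common simplified stand-in for the Wasserstein LQ-GAN dynamics (the game, step sizes, and function names below are illustrative assumptions, not the paper's exact algorithm). Simultaneous steepest descent/ascent produces the rotational vector field F(x, y) = (y, -x) and spirals away from the equilibrium at the origin, while stepping in a direction orthogonal to F (here computed as JᵀF with J the Jacobian of F, in the spirit of Crossing-the-Curl) moves straight toward it:

```python
import math

# Toy bilinear min-max game: min_x max_y f(x, y) = x * y.
# The simultaneous-gradient vector field is F(x, y) = (y, -x),
# a pure rotation around the equilibrium (0, 0).

def simultaneous_gradient_step(x, y, eta):
    # Steepest descent for x, steepest ascent for y.
    return x - eta * y, y + eta * x

def crossing_the_curl_step(x, y, eta):
    # Direction orthogonal to F, obtained as J^T F with
    # J = [[0, 1], [-1, 0]] (the Jacobian of F); for this game
    # J^T F = (x, y), i.e. a step straight toward the equilibrium.
    return x - eta * x, y - eta * y

def run(step, steps=100, eta=0.1):
    # Iterate from (1, 1) and report the final distance to the equilibrium.
    x, y = 1.0, 1.0
    for _ in range(steps):
        x, y = step(x, y, eta)
    return math.hypot(x, y)

print(run(simultaneous_gradient_step))  # distance grows: divergence
print(run(crossing_the_curl_step))      # distance shrinks: convergence
```

Each simultaneous-gradient step multiplies the distance to the origin by √(1 + η²) > 1, so divergence here is a property of the dynamics, not of the step size; the orthogonal direction contracts that distance at every step.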


Related research:

- Gradient descent GAN optimization is locally stable (06/13/2017)
- First Order Generative Adversarial Networks (02/13/2018)
- Negative Momentum for Improved Game Dynamics (07/12/2018)
- Convergence of gradient descent-ascent analyzed as a Newtonian dynamical system with dissipation (03/05/2019)
- On the Convergence of Gradient Descent in GANs: MMD GAN As a Gradient Flow (11/04/2020)
- Solving Inverse Problems with Conditional-GAN Prior via Fast Network-Projected Gradient Descent (09/02/2021)
- Dissecting adaptive methods in GANs (10/09/2022)
