A Variational Inequality Perspective on Generative Adversarial Nets

02/28/2018
by Gauthier Gidel, et al.

Stability has been a recurrent issue in training generative adversarial networks (GANs). One common way to tackle this issue has been to propose new formulations of the GAN objective. Yet surprisingly few studies have looked at optimization methods designed specifically for this adversarial training. In this work, we review the "variational inequality" framework, which encompasses most formulations of the GAN objective introduced so far. Tapping into the mathematical programming literature, we counter some common misconceptions about the difficulties of saddle point optimization, propose extending standard methods designed for variational inequalities, such as a stochastic version of the extragradient method, to GAN training, and empirically investigate their behavior on GANs.
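The "variational inequality" framing can be made precise with the standard statement from the optimization literature (the notation F, ω, Ω below is the conventional one, sketched here rather than quoted from the paper): stacking the parameters of both players into ω and their loss gradients (negated for the maximizing player) into a vector field F, the problem is to find ω* in a constraint set Ω such that

```latex
F(\omega^*)^\top (\omega - \omega^*) \;\ge\; 0 \qquad \text{for all } \omega \in \Omega .
```

To make the extragradient idea concrete, here is a minimal sketch on the toy bilinear game min_x max_y x·y, where plain simultaneous gradient descent/ascent is known to spiral away from the saddle point; the step size and iteration count are illustrative choices, not values from the paper:

```python
def extragradient(x, y, eta=0.1, steps=200):
    """Extragradient on min_x max_y x*y (toy sketch; eta and steps are illustrative)."""
    for _ in range(steps):
        # Extrapolation ("lookahead") step along the game's vector field:
        # the minimizer's gradient of x*y is y, the maximizer's is x.
        x_half = x - eta * y
        y_half = y + eta * x
        # Update step: re-evaluate the gradients at the lookahead point and apply them.
        x, y = x - eta * y_half, y + eta * x_half
    return x, y

print(extragradient(1.0, 1.0))  # both coordinates shrink toward the saddle point (0, 0)
```

The lookahead step is what distinguishes extragradient from simultaneous gradient updates, which cycle or diverge on this example; the stochastic variant investigated in the paper replaces the exact gradients above with mini-batch estimates.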
