Optimistic mirror descent in saddle-point problems: Going the extra (gradient) mile

07/07/2018
by Panayotis Mertikopoulos et al.

Owing to their connection with generative adversarial networks (GANs), saddle-point problems have recently attracted considerable interest in machine learning and beyond. By necessity, most theoretical guarantees revolve around convex-concave (or even linear) problems; however, making theoretical inroads towards efficient GAN training depends crucially on moving beyond this classic framework. To make piecemeal progress along these lines, we analyze the behavior of mirror descent (MD) in a class of non-monotone problems whose solutions coincide with those of a naturally associated variational inequality, a property which we call coherence. We first show that ordinary, "vanilla" MD converges under a strict version of this condition, but not otherwise; in particular, it may fail to converge even in bilinear models with a unique solution. We then show that this deficiency is mitigated by optimism: by taking an "extra-gradient" step, optimistic mirror descent (OMD) converges in all coherent problems. Our analysis generalizes and extends the results of Daskalakis et al. (2018) for optimistic gradient descent (OGD) in bilinear problems, and makes concrete headway for establishing convergence beyond convex-concave games. We also provide stochastic analogues of these results, and we validate our analysis with numerical experiments in a wide array of GAN models (including Gaussian mixture models, as well as the CelebA and CIFAR-10 datasets).
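To make the bilinear failure mode and the extra-gradient fix concrete, here is a minimal numerical sketch (illustrative, not the paper's code): in the Euclidean setup, OMD's look-ahead step reduces to the classical extra-gradient method, and on the bilinear problem min_x max_y f(x, y) = xy, vanilla gradient descent-ascent spirals away from the unique solution (0, 0) while the extra-gradient update converges to it. The step size eta = 0.1 and the function names are illustrative choices.

import numpy as np

def vector_field(z):
    """v(z) = (grad_x f, -grad_y f) for f(x, y) = x * y."""
    x, y = z
    return np.array([y, -x])

def vanilla_gda(z0, eta=0.1, steps=200):
    """Simultaneous gradient descent-ascent: z <- z - eta * v(z)."""
    z = np.array(z0, dtype=float)
    for _ in range(steps):
        z = z - eta * vector_field(z)
    return z

def extra_gradient(z0, eta=0.1, steps=200):
    """Extra-gradient step: look ahead first, then update from the base point."""
    z = np.array(z0, dtype=float)
    for _ in range(steps):
        z_half = z - eta * vector_field(z)    # exploratory half-step
        z = z - eta * vector_field(z_half)    # update using the look-ahead gradient
    return z

z0 = (1.0, 1.0)
print(np.linalg.norm(vanilla_gda(z0)))     # grows: each step scales the norm by sqrt(1 + eta^2)
print(np.linalg.norm(extra_gradient(z0)))  # shrinks: the look-ahead damps the rotation

Running the sketch shows the vanilla iterates drifting outward while the extra-gradient iterates contract toward (0, 0), matching the dichotomy described in the abstract.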

Related research

07/07/2018  Mirror descent in saddle-point problems: Going the extra (gradient) mile
07/08/2020  Stochastic Hamiltonian Gradient Methods for Smooth Games
10/17/2022  Tight Analysis of Extra-gradient and Optimistic Gradient Methods For Nonconvex Minimax Problems
06/12/2018  The Unusual Effectiveness of Averaging in GAN Training
03/23/2020  Explore Aggressively, Update Conservatively: Stochastic Extragradient Methods with Variable Stepsize Scaling
05/27/2019  Revisiting Stochastic Extragradient
02/24/2019  Training GANs with Centripetal Acceleration
