Optimistic mirror descent in saddle-point problems: Going the extra (gradient) mile

07/07/2018
by Panayotis Mertikopoulos, et al.

Owing to their connection with generative adversarial networks (GANs), saddle-point problems have recently attracted considerable interest in machine learning and beyond. By necessity, most theoretical guarantees revolve around convex-concave problems; however, making theoretical inroads towards efficient GAN training crucially depends on moving beyond this classic framework. To make piecemeal progress along these lines, we analyze the widely used mirror descent (MD) method in a class of non-monotone problems, called coherent, whose solutions coincide with those of a naturally associated variational inequality. Our first result is that, under strict coherence (a condition satisfied by all strictly convex-concave problems), MD methods converge globally; however, they may fail to converge even in simple, bilinear models. To mitigate this deficiency, we add an "extra-gradient" step, which we show stabilizes MD methods by looking ahead and using a "future gradient". These theoretical results are subsequently validated by numerical experiments in GANs.
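To illustrate the failure mode and the fix described above, here is a minimal sketch (not the authors' code) comparing plain gradient descent-ascent with an extra-gradient update on the bilinear saddle point f(x, y) = x * y, whose unique saddle point is (0, 0); the step size and iteration count are illustrative choices.

```python
def gda_step(x, y, eta):
    """Plain gradient descent-ascent: descend in x, ascend in y."""
    return x - eta * y, y + eta * x

def extragradient_step(x, y, eta):
    """Look ahead with an exploratory step, then update from the base
    point using that "future gradient"."""
    x_half, y_half = x - eta * y, y + eta * x   # look-ahead step
    return x - eta * y_half, y + eta * x_half   # corrected update

def run(step, iters=500, eta=0.1):
    """Iterate a step rule from (1, 1); return distance to the saddle (0, 0)."""
    x, y = 1.0, 1.0
    for _ in range(iters):
        x, y = step(x, y, eta)
    return (x * x + y * y) ** 0.5
```

On this bilinear model, plain gradient descent-ascent spirals outward (each step multiplies the distance to the saddle by sqrt(1 + eta^2) > 1), while the extra-gradient correction contracts it, matching the abstract's claim that the look-ahead step stabilizes the dynamics.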
