A Unified Analysis of Extra-gradient and Optimistic Gradient Methods for Saddle Point Problems: Proximal Point Approach

01/24/2019
by Aryan Mokhtari, et al.

We consider solving convex-concave saddle point problems. We focus on two variants of gradient descent-ascent algorithms, the Extra-gradient (EG) and Optimistic Gradient Descent-Ascent (OGDA) methods, and show that they admit a unified analysis as approximations of the classical proximal point method for solving saddle point problems. This viewpoint enables us to generalize EG (in terms of extrapolation steps) and OGDA (in terms of parameters) and to obtain new convergence rate results for these algorithms, both for the bilinear case and for the strongly convex-concave case.
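To make the two update rules concrete, here is a minimal NumPy sketch of the standard EG and OGDA iterations on a toy bilinear problem f(x, y) = x^T A y. The matrix A, the step size eta, and the iteration count are illustrative choices, not values from the paper; the updates shown are the textbook forms of the two methods, not the paper's generalized variants.

```python
import numpy as np

# Toy bilinear saddle-point problem f(x, y) = x^T A y; for a full-rank
# square A, the unique saddle point is (0, 0).
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
eta = 0.1  # step size (illustrative, not a tuned value)

def grad_x(x, y):  # gradient of f with respect to x
    return A @ y

def grad_y(x, y):  # gradient of f with respect to y
    return A.T @ x

def extra_gradient(x, y, steps=2000):
    """EG: one extrapolation (midpoint) step, then an update using the midpoint gradient."""
    for _ in range(steps):
        x_mid = x - eta * grad_x(x, y)
        y_mid = y + eta * grad_y(x, y)
        x = x - eta * grad_x(x_mid, y_mid)
        y = y + eta * grad_y(x_mid, y_mid)
    return x, y

def ogda(x, y, steps=2000):
    """OGDA: a single gradient evaluation per step, corrected by the previous gradient."""
    gx_prev, gy_prev = grad_x(x, y), grad_y(x, y)
    for _ in range(steps):
        gx, gy = grad_x(x, y), grad_y(x, y)
        x = x - 2 * eta * gx + eta * gx_prev
        y = y + 2 * eta * gy - eta * gy_prev
        gx_prev, gy_prev = gx, gy
    return x, y

x0, y0 = rng.standard_normal(n), rng.standard_normal(n)
for name, method in [("EG", extra_gradient), ("OGDA", ogda)]:
    x, y = method(x0.copy(), y0.copy())
    print(f"{name}: distance to saddle point = {np.linalg.norm(np.concatenate([x, y])):.2e}")
```

Plain gradient descent-ascent diverges on this bilinear example; both sketches above converge toward (0, 0) because the extrapolation step (EG) and the previous-gradient correction (OGDA) each approximate an implicit proximal point update.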
