Tight Last-Iterate Convergence of the Extragradient and the Optimistic Gradient Descent-Ascent Algorithm for Constrained Monotone Variational Inequalities

04/20/2022
by Yang Cai, et al.

The monotone variational inequality is a central problem in mathematical programming that unifies and generalizes many important settings, such as smooth convex optimization, two-player zero-sum games, and convex-concave saddle point problems. The extragradient algorithm by Korpelevich [1976] and the optimistic gradient descent-ascent algorithm by Popov [1980] are arguably the two most classical and popular methods for solving monotone variational inequalities. Despite their long histories, the following major question has remained open: what is the last-iterate convergence rate of the extragradient algorithm or the optimistic gradient descent-ascent algorithm for monotone and Lipschitz variational inequalities with constraints? We resolve this open question by showing that both the extragradient algorithm and the optimistic gradient descent-ascent algorithm have a tight O(1/√T) last-iterate convergence rate for arbitrary convex feasible sets, which matches the lower bound by Golowich et al. [2020a,b]. Our rate is measured in terms of the standard gap function. At the core of our results lies a non-standard performance measure, the tangent residual, which can be viewed as an adaptation of the norm of the operator that takes the local constraints into account. We use the tangent residual (or a slight variation of it) as the potential function in our analysis of the extragradient algorithm (or the optimistic gradient descent-ascent algorithm, respectively) and prove that it is non-increasing between two consecutive iterates.
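The abstract's two performance measures can be made precise. The rate is stated in terms of the standard gap function over the feasible set Z, and one natural way to formalize the tangent residual, matching the description above (the paper's exact definition may differ in presentation), is the distance from -F(z) to the normal cone of Z at z:

\[
\mathrm{Gap}_{\mathcal{Z}}(z) := \sup_{z' \in \mathcal{Z}} \langle F(z),\, z - z' \rangle,
\qquad
r_{\mathrm{tan}}(z) := \min_{v \in N_{\mathcal{Z}}(z)} \lVert F(z) + v \rVert.
\]

In the unconstrained case the normal cone is {0}, so the tangent residual reduces to the operator norm ||F(z)||, which is the sense in which it adapts the norm of the operator to the local constraints.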
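The two algorithms themselves are simple to state. Below is a minimal NumPy sketch of one projected extragradient step and one step of Popov's method, assuming access to a Euclidean projection oracle proj onto the feasible set and a monotone, Lipschitz operator F; the function names, the step size, and the bilinear demo are illustrative choices, not taken from the paper.

import numpy as np

def extragradient_step(z, F, proj, eta):
    # Korpelevich [1976]: extrapolate to a midpoint, then update z using
    # the operator evaluated at that midpoint. Two F-evaluations and two
    # projections per iteration.
    z_mid = proj(z - eta * F(z))
    return proj(z - eta * F(z_mid))

def popov_step(z, w, F, proj, eta):
    # Popov [1980] (optimistic gradient descent-ascent): both the update
    # of z and of the leading point w reuse the single operator value
    # F(w), so only one fresh F-evaluation is needed per iteration.
    Fw = F(w)
    z_next = proj(z - eta * Fw)
    w_next = proj(z_next - eta * Fw)
    return z_next, w_next

# Demo (illustrative): F(z) = Az with A skew-symmetric, so F is monotone
# and 1-Lipschitz; the feasible set is the Euclidean unit ball.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda z: A @ z
proj = lambda z: z / max(1.0, float(np.linalg.norm(z)))

z = np.array([0.9, -0.4])
for _ in range(2000):
    z = extragradient_step(z, F, proj, eta=0.1)
print(z)  # the last iterate approaches the unique solution (0, 0)

With a step size below 1/L (here L = 1), the last iterate of either method converges on this instance; the paper's contribution is pinning the worst-case last-iterate rate, in gap-function terms, at a tight O(1/√T) for arbitrary convex feasible sets.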


Related Research

10/08/2021

Extragradient Method: O(1/K) Last-Iterate Convergence for Monotone Variational Inequalities and Connections With Cocoercivity

Extragradient method (EG) Korpelevich [1976] is one of the most popular ...
03/07/2021

Optimistic Dual Extrapolation for Coherent Non-monotone Variational Inequalities

The optimization problems associated with training generative adversaria...
07/05/2021

The Last-Iterate Convergence Rate of Optimistic Mirror Descent in Stochastic Variational Inequalities

In this paper, we analyze the local convergence rate of optimistic mirro...
06/17/2022

Optimal Extragradient-Based Bilinearly-Coupled Saddle-Point Optimization

We consider the smooth convex-concave bilinearly-coupled saddle-point pr...
06/10/2022

Accelerated Algorithms for Monotone Inclusions and Constrained Nonconvex-Nonconcave Min-Max Optimization

We study monotone inclusions and monotone variational inequalities, as w...
10/15/2020

Adaptive and Universal Single-gradient Algorithms for Variational Inequalities

Variational inequalities with monotone operators capture many problems o...
05/06/2022

Perseus: A Simple High-Order Regularization Method for Variational Inequalities

This paper settles an open and challenging question pertaining to the de...