Adaptive and Universal Single-gradient Algorithms for Variational Inequalities

by Alina Ene et al.

Variational inequalities with monotone operators capture many problems of interest, notably convex optimization and convex-concave saddle point problems. Classical methods based on the MirrorProx algorithm require two operator evaluations per iteration, which is the dominant factor in the running time in many settings. Additionally, these algorithms typically require careful setting of the step sizes based on problem parameters such as smoothness. In this work, we develop new algorithms that address both of these shortcomings simultaneously. Our algorithms use a single operator evaluation per iteration and automatically adapt to problem parameters such as smoothness. We show that our algorithms are universal and simultaneously achieve the optimal convergence rates in the non-smooth, smooth, and stochastic settings.
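To illustrate the single-call idea, here is a hedged sketch (not the paper's exact algorithm) of an optimistic/past-gradient method for a monotone operator, with an AdaGrad-style step size built from observed operator differences. It is applied to the bilinear saddle-point problem min_x max_y x·y, whose operator is F(x, y) = (y, −x) and whose unique saddle point is (0, 0). The constants `D` and the accumulator's initial value are illustrative assumptions.

```python
import numpy as np

def F(z):
    """Monotone operator of the saddle-point problem min_x max_y x*y."""
    x, y = z
    return np.array([y, -x])

def optimistic_single_call(z0, iters=2000, D=1.0):
    """Single-call optimistic method with an AdaGrad-style step size.

    Each iteration evaluates F exactly once and reuses the previous
    evaluation, in contrast to MirrorProx's two evaluations.
    This is an illustrative sketch, not the paper's algorithm.
    """
    z = np.asarray(z0, dtype=float)
    g_prev = F(z)                 # one extra evaluation to initialize
    acc = 1.0                     # illustrative initial accumulator value
    avg = np.zeros_like(z)
    weight = 0.0
    for _ in range(iters):
        g = F(z)                  # the single operator evaluation
        diff = g - g_prev
        acc += float(diff @ diff)           # adapt to observed smoothness
        eta = D / np.sqrt(acc)
        z = z - eta * (2.0 * g - g_prev)    # optimistic / past-gradient step
        g_prev = g
        avg += eta * z                      # step-size-weighted averaging
        weight += eta
    return avg / weight

z_bar = optimistic_single_call([1.0, 1.0], iters=5000)
print(z_bar)  # averaged iterate approaches the saddle point (0, 0)
```

The step size shrinks only as fast as the observed operator differences force it to, so the method needs no advance knowledge of a smoothness constant; this is the flavor of adaptivity the abstract refers to.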




