The Last-Iterate Convergence Rate of Optimistic Mirror Descent in Stochastic Variational Inequalities

07/05/2021
by Waïss Azizian, et al.

In this paper, we analyze the local convergence rate of optimistic mirror descent methods in stochastic variational inequalities, a class of optimization problems with important applications to learning theory and machine learning. Our analysis reveals an intricate relation between the algorithm's rate of convergence and the local geometry induced by the method's underlying Bregman function. We quantify this relation by means of the Legendre exponent, a notion we introduce to measure the growth rate of the Bregman divergence relative to the ambient norm near a solution. We show that this exponent determines both the optimal step-size policy of the algorithm and the optimal rates attained, thereby explaining the differences observed for several popular Bregman functions (Euclidean projection, negative entropy, fractional power, etc.).
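To make the setting concrete, below is a minimal sketch (not code from the paper) of a single-call, past-gradient variant of optimistic mirror descent on a small stochastic bilinear game, run with two Bregman setups: Euclidean projection and negative entropy. The game, the noise level, the 1/sqrt(t) step-size schedule, and all function names are illustrative assumptions; the paper's point is precisely that the best schedule and the attainable last-iterate rate depend on the Legendre exponent of the chosen Bregman function.

```python
# Illustrative sketch only: optimistic mirror descent on a 2x2 matrix game,
# with two different Bregman setups (Euclidean projection vs. negative entropy).
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, -1.0], [-1.0, 1.0]])  # matching-pennies payoff matrix


def noisy_operator(x, y, sigma=0.1):
    """Noisy VI operator of the bilinear game min_x max_y x^T A y."""
    gx = A @ y + sigma * rng.standard_normal(2)
    gy = -A.T @ x + sigma * rng.standard_normal(2)
    return gx, gy


def euclidean_prox(x, g, eta):
    """Prox step for the Euclidean setup: gradient step, then simplex projection."""
    z = x - eta * g
    u = np.sort(z)[::-1]
    cssv = np.cumsum(u) - 1.0
    rho = np.nonzero(u - cssv / (np.arange(len(z)) + 1) > 0)[0][-1]
    return np.maximum(z - cssv[rho] / (rho + 1), 0.0)


def entropy_prox(x, g, eta):
    """Prox step for the negative-entropy setup (multiplicative weights)."""
    z = x * np.exp(-eta * g)
    return z / z.sum()


def optimistic_md(prox, step, T=5000):
    """Single-call optimistic mirror descent: extrapolate with the past gradient,
    query the oracle once at the leading state, then update the base state."""
    x = y = np.array([0.5, 0.5])
    gx, gy = np.zeros(2), np.zeros(2)  # previous gradients, reused for optimism
    for t in range(1, T + 1):
        eta = step(t)
        x_lead, y_lead = prox(x, gx, eta), prox(y, gy, eta)  # leading state
        gx, gy = noisy_operator(x_lead, y_lead)              # fresh oracle call
        x, y = prox(x, gx, eta), prox(y, gy, eta)            # base state
    return x, y


def step(t):
    # Assumed 1/sqrt(t) schedule for illustration; the paper ties the optimal
    # schedule and rate to the Legendre exponent of the Bregman function.
    return 1.0 / np.sqrt(t)


print("Euclidean setup :", optimistic_md(euclidean_prox, step))
print("Entropy setup   :", optimistic_md(entropy_prox, step))
```

Both runs drift toward the equilibrium (0.5, 0.5) of the game; the point of the sketch is only to show where the Bregman function enters the method, namely through the prox step that both the leading and the base updates share.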

Related research

12/12/2017 · Convergence Rates for Deterministic and Stochastic Subgradient Methods Without Lipschitz Continuity
We generalize the classic convergence rate theory for subgradient method...

02/02/2022 · Tight Convergence Rate Bounds for Optimization Under Power Law Spectral Conditions
Performance of optimization on quadratic problems sensitively depends on...

08/22/2019 · On the convergence of single-call stochastic extra-gradient methods
Variational inequalities have recently attracted considerable interest i...

02/10/2019 · Deducing Kurdyka-Łojasiewicz exponent via inf-projection
Kurdyka-Łojasiewicz (KL) exponent plays an important role in estimating ...

06/09/2021 · Mixture weights optimisation for Alpha-Divergence Variational Inference
This paper focuses on α-divergence minimisation methods for Variational ...