
On the rate of convergence of Bregman proximal methods in constrained variational inequalities

by   Waïss Azizian, et al.

We examine the last-iterate convergence rate of Bregman proximal methods - from mirror descent to mirror-prox - in constrained variational inequalities. Our analysis shows that the convergence speed of a given method depends sharply on the Legendre exponent of the underlying Bregman regularizer (Euclidean, entropic, or other), a notion that measures the growth rate of said regularizer near a solution. In particular, we show that boundary solutions exhibit a clear separation of regimes between methods with a zero and non-zero Legendre exponent respectively, with linear convergence for the former versus sublinear for the latter. This dichotomy becomes even more pronounced in linearly constrained problems where, specifically, Euclidean methods converge along sharp directions in a finite number of steps, compared to a linear rate for entropic methods.
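To make the dichotomy concrete, here is a minimal, hypothetical sketch (not the paper's own code) of Bregman proximal steps on the probability simplex under the two regularizers the abstract contrasts: the Euclidean regularizer, which yields a projected gradient step, and the entropic regularizer, which yields a multiplicative (exponentiated-gradient) step. On a toy linear problem whose solution lies at a vertex (a boundary solution), the Euclidean method reaches the solution exactly after finitely many steps, while the entropic method only approaches it geometrically.

```python
import numpy as np

def euclidean_project_simplex(y):
    # Euclidean projection onto the probability simplex
    # (standard sort-based algorithm).
    u = np.sort(y)[::-1]
    css = np.cumsum(u)
    k = np.arange(1, len(y) + 1)
    rho = np.nonzero(u + (1.0 - css) / k > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(y + theta, 0.0)

def mirror_descent(grad, x0, step, n_iter, geometry="entropic"):
    """One Bregman proximal step per iteration on the simplex."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        g = grad(x)
        if geometry == "entropic":
            # Entropic regularizer: multiplicative update, then renormalize.
            x = x * np.exp(-step * g)
            x /= x.sum()
        else:
            # Euclidean regularizer: projected gradient step.
            x = euclidean_project_simplex(x - step * g)
    return x

# Toy linear objective <c, x> minimized at the vertex (0, 1):
c = np.array([1.0, 0.0])
x_euc = mirror_descent(lambda x: c, [0.5, 0.5], 0.1, 100, geometry="euclidean")
x_ent = mirror_descent(lambda x: c, [0.5, 0.5], 0.1, 300, geometry="entropic")
```

Under these assumptions, `x_euc` equals the vertex exactly (finite termination along the sharp direction), whereas `x_ent` is only close to it, its distance shrinking at a linear (geometric) rate per iteration, consistent with the regime separation described above.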


