Extragradient SVRG for Variational Inequalities: Error Bounds and Increasing Iterate Averaging

06/01/2023
by Tianlong Nan, et al.

We study variance reduction methods for extragradient (EG) algorithms for a class of variational inequalities satisfying a classical error-bound condition. Previously, linear convergence was known to hold only under strong monotonicity. The error-bound condition is much weaker than strong monotonicity and captures a larger class of problems, including bilinear saddle-point problems such as those arising in two-player zero-sum Nash equilibrium computation. We show that EG algorithms with SVRG-style variance reduction (SVRG-EG) achieve linear convergence under the error-bound condition. In addition, motivated by the empirical success of increasing iterate averaging techniques in solving saddle-point problems, we establish new convergence results for variance-reduced EG with increasing iterate averaging. Finally, we conduct numerical experiments demonstrating the advantage of SVRG-EG, with and without increasing iterate averaging, over deterministic EG.
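For readers unfamiliar with the method, the following Python sketch illustrates the generic SVRG-EG template referred to above: an extragradient loop whose operator evaluations are replaced by SVRG-style variance-reduced estimates anchored at a periodically refreshed snapshot. All names, the step-size rule, and the sampling scheme here are illustrative assumptions; the exact algorithm, parameter choices, and averaging scheme analyzed in the paper differ.

```python
import numpy as np

def svrg_eg(F_components, z0, eta=0.05, epochs=500, inner_steps=None, seed=0):
    """Illustrative SVRG-EG sketch for a finite-sum VI operator F(z) = (1/n) * sum_i F_i(z).

    F_components : list of callables F_i mapping R^d -> R^d
    z0           : initial point
    eta          : step size (assumed small relative to the component Lipschitz constants)
    """
    rng = np.random.default_rng(seed)
    n = len(F_components)
    inner = inner_steps if inner_steps is not None else n
    z = np.asarray(z0, dtype=float)
    for _ in range(epochs):
        w = z.copy()                                 # snapshot point
        mu = sum(Fi(w) for Fi in F_components) / n   # full operator at the snapshot
        for _ in range(inner):
            i = rng.integers(n)
            Fi = F_components[i]
            g_half = Fi(z) - Fi(w) + mu              # variance-reduced estimate at z
            z_half = z - eta * g_half                # extrapolation (half) step
            g_full = Fi(z_half) - Fi(w) + mu         # estimate at the extrapolated point
            z = z - eta * g_full                     # extragradient update
    return z


# Toy usage on an unconstrained bilinear saddle point: F(z) = M z with M skew-symmetric,
# written as an average of n component operators. When M is nonsingular the unique
# solution is z* = 0; the residual should shrink relative to its starting value,
# though convergence can be slow on ill-conditioned random instances.
d, n = 10, 20
rng = np.random.default_rng(1)
mats = [rng.standard_normal((d, d)) for _ in range(n)]
mats = [(M - M.T) / 2 for M in mats]                 # make each component skew-symmetric
F_components = [lambda z, M=M: M @ z for M in mats]
z0 = rng.standard_normal(d)
z_out = svrg_eg(F_components, z0, eta=0.05, epochs=500)
print("initial ||z - z*|| =", np.linalg.norm(z0))
print("final   ||z - z*|| =", np.linalg.norm(z_out))
```

The increasing iterate averaging studied in the paper would, in addition, maintain a running average of the iterates with weights that grow with the iteration index and output that average; the sketch above omits it.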
