Faster Rates for Convex-Concave Games

05/17/2018
by Jacob Abernethy, et al.

We consider the use of no-regret algorithms to compute equilibria for particular classes of convex-concave games. While standard regret bounds would lead to convergence rates on the order of O(T^-1/2), recent work [RS13, SALS15] has established O(1/T) rates by taking advantage of a particular class of optimistic prediction algorithms. In this work we go further, showing that for a particular class of games one achieves an O(1/T^2) rate, and we show how this applies to the Frank-Wolfe method and recovers a similar bound [D15]. We also show that such no-regret techniques can even achieve a linear rate, O(exp(-T)), for equilibrium computation under additional curvature assumptions.
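To illustrate the style of optimistic no-regret dynamics the abstract refers to, here is a minimal sketch of optimistic gradient descent-ascent on the toy bilinear game f(x, y) = x·y, whose unique equilibrium is (0, 0). This is only an illustration of the optimistic-update idea, not the paper's algorithm; the step size `eta`, horizon `T`, and starting point are illustrative choices.

```python
def ogda_xy(eta=0.1, T=2000, x=1.0, y=1.0):
    """Optimistic gradient descent-ascent on f(x, y) = x * y.

    The x-player descends, the y-player ascends; each uses the
    "optimistic" step 2*(current gradient) - (previous gradient),
    which stabilizes the cycling that plain gradient descent-ascent
    exhibits on bilinear games.
    """
    # For f(x, y) = x*y: grad_x f = y, grad_y f = x.
    gx_prev, gy_prev = y, x
    for _ in range(T):
        gx, gy = y, x  # gradients at the current (simultaneous) iterate
        x = x - eta * (2 * gx - gx_prev)  # descent step for the x-player
        y = y + eta * (2 * gy - gy_prev)  # ascent step for the y-player
        gx_prev, gy_prev = gx, gy
    return x, y

x_T, y_T = ogda_xy()
print(abs(x_T), abs(y_T))  # both shrink toward the equilibrium (0, 0)
```

With these settings the iterates contract toward (0, 0) at a linear (geometric) rate, whereas non-optimistic gradient descent-ascent on the same game spirals outward.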
