Optimistic Mirror Descent Either Converges to Nash or to Strong Coarse Correlated Equilibria in Bimatrix Games

03/22/2022
by Ioannis Anagnostides et al.

We show that, for any sufficiently small fixed ϵ > 0, when both players in a general-sum two-player (bimatrix) game employ optimistic mirror descent (OMD) with smooth regularization, learning rate η = O(ϵ^2), and T = Ω(poly(1/ϵ)) repetitions, either the dynamics reach an ϵ-approximate Nash equilibrium (NE), or the average correlated distribution of play is an Ω(poly(ϵ))-strong coarse correlated equilibrium (CCE): any unilateral deviation not only fails to benefit the deviating player, but in fact decreases its utility by Ω(poly(ϵ)). As an immediate consequence, when the iterates of OMD are bounded away from being Nash equilibria in a bimatrix game, we guarantee convergence to an exact CCE after only O(1) iterations. Our results reveal that uncoupled no-regret learning algorithms can converge to a CCE in general-sum games remarkably faster than to a NE in, for example, zero-sum games. To establish this, we show that when OMD does not reach arbitrarily close to a NE, the (cumulative) regret of both players is not only negative, but decreases linearly with time. Given that regret is the canonical measure of performance in online learning, our results suggest that the cycling behavior of no-regret learning algorithms in games can be justified in terms of efficiency.
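
The dynamics above admit a compact illustration. Below is a minimal Python sketch, assuming the squared Euclidean norm as the smooth regularizer, so that the OMD step becomes projected optimistic gradient ascent with the single-call predicted gradient 2*g_t - g_{t-1}, together with a check of each player's best unilateral-deviation gain against the average correlated distribution; strictly negative gaps correspond to the strong-CCE branch of the dichotomy. The step size, horizon, and helper names (project_simplex, omd_bimatrix, cce_gaps) are illustrative choices, not the paper's code.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex
    (the standard sort-based algorithm)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + theta, 0.0)

def omd_bimatrix(A, B, eta=0.05, T=20000):
    """Both players run OMD with the (smooth) squared-Euclidean
    regularizer, i.e. projected optimistic gradient ascent, on the
    bimatrix game (A, B): player 1 maximizes x^T A y, player 2
    maximizes x^T B y. Returns the average correlated distribution
    of play, (1/T) * sum_t x_t y_t^T."""
    n, m = A.shape
    x = np.full(n, 1.0 / n)
    y = np.full(m, 1.0 / m)
    gx_prev, gy_prev = np.zeros(n), np.zeros(m)
    mu = np.zeros((n, m))
    for _ in range(T):
        mu += np.outer(x, y)              # record round-t joint play
        gx, gy = A @ y, B.T @ x           # each player's utility gradient
        # Single-call optimistic step: predicted gradient 2*g_t - g_{t-1}.
        x = project_simplex(x + eta * (2.0 * gx - gx_prev))
        y = project_simplex(y + eta * (2.0 * gy - gy_prev))
        gx_prev, gy_prev = gx, gy
    return mu / T

def cce_gaps(mu, A, B):
    """Best unilateral-deviation gain of each player against mu.
    Gaps <= -delta certify a delta-strong CCE; gaps <= eps, an eps-CCE."""
    base1, base2 = np.sum(mu * A), np.sum(mu * B)   # expected utilities
    x_marg, y_marg = mu.sum(axis=1), mu.sum(axis=0) # marginals of mu
    gap1 = np.max(A @ y_marg) - base1    # player 1's best pure deviation
    gap2 = np.max(x_marg @ B) - base2    # player 2's best pure deviation
    return float(gap1), float(gap2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, B = rng.uniform(size=(3, 3)), rng.uniform(size=(3, 3))  # random general-sum game
    print(cce_gaps(omd_bimatrix(A, B), A, B))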

Related research

04/14/2023 · Coarse Correlated Equilibrium Implies Nash Equilibrium in Two-Player Zero-Sum Games
We give a simple proof of the well-known result that the marginal strate...

03/22/2022 · On Last-Iterate Convergence Beyond Zero-Sum Games
Most existing results about last-iterate convergence of learning dynamic...

10/14/2019 · Learning to Correlate in Multi-Player General-Sum Sequential Games
In the context of multi-player, general-sum games, there is an increasin...

06/08/2020 · Hedging in games: Faster convergence of external and swap regrets
We consider the setting where players run the Hedge algorithm or its opt...

08/16/2021 · Near-Optimal No-Regret Learning in General Games
We show that Optimistic Hedge – a common variant of multiplicative-weigh...

02/24/2022 · No-Regret Learning in Games is Turing Complete
Games are natural models for multi-agent machine learning settings, such...

01/26/2023 · On the Convergence of No-Regret Learning Dynamics in Time-Varying Games
Most of the literature on learning in games has focused on the restricti...
