On Last-Iterate Convergence Beyond Zero-Sum Games

by Ioannis Anagnostides et al.

Most existing results about last-iterate convergence of learning dynamics are limited to two-player zero-sum games, and only apply under rigid assumptions about what dynamics the players follow. In this paper we provide new results and techniques that apply to broader families of games and learning dynamics. First, we use a regret-based analysis to show that in a class of games that includes constant-sum polymatrix and strategically zero-sum games, dynamics such as optimistic mirror descent (OMD) have bounded second-order path lengths, a property which holds even when players employ different algorithms and prediction mechanisms. This enables us to obtain O(1/√T) rates and optimal O(1) regret bounds. Our analysis also reveals a surprising property: OMD either reaches arbitrarily close to a Nash equilibrium, or it outperforms the robust price of anarchy in efficiency. Moreover, for potential games we establish convergence to an ϵ-equilibrium after O(1/ϵ^2) iterations for mirror descent under a broad class of regularizers, as well as optimal O(1) regret bounds for OMD variants. Our framework also extends to near-potential games, and unifies known analyses for distributed learning in Fisher's market model. Finally, we analyze the convergence, efficiency, and robustness of optimistic gradient descent (OGD) in general-sum continuous games.
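To make the optimistic update concrete, here is a minimal sketch of optimistic gradient descent/ascent on an unconstrained bilinear zero-sum game min_x max_y x^T A y. This is a standard textbook form of the dynamic, not the paper's exact algorithm or setting; the payoff matrix, step size, and horizon below are illustrative choices, not taken from the paper.

```python
import numpy as np

def optimistic_gda(A, x0, y0, eta=0.1, T=1000):
    """Optimistic gradient descent/ascent on min_x max_y x^T A y
    (unconstrained). Each player corrects the plain gradient step with
    the previous gradient as a prediction of the next one:
        z_{t+1} = z_t -/+ eta * (2 g_t - g_{t-1}).
    """
    x, y = np.asarray(x0, float), np.asarray(y0, float)
    gx_prev, gy_prev = A @ y, A.T @ x          # gradients at the start
    for _ in range(T):
        gx, gy = A @ y, A.T @ x                # current gradients
        x = x - eta * (2 * gx - gx_prev)       # x is the minimizer
        y = y + eta * (2 * gy - gy_prev)       # y is the maximizer
        gx_prev, gy_prev = gx, gy
    return x, y

# Illustrative game (matching-pennies payoff): plain gradient
# descent/ascent cycles around the equilibrium (0.5, 0.5) for each
# player, while the optimistic correction lets the last iterate spiral in.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x, y = optimistic_gda(A, x0=[0.9, 0.1], y0=[0.2, 0.8])
```

The point of the `2*g_t - g_{t-1}` correction is exactly the prediction mechanism referred to in the abstract: using the previous gradient as a guess for the next one damps the rotational component of the dynamics that makes vanilla gradient play cycle in zero-sum games.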




Related research
Optimistic Mirror Descent Either Converges to Nash or to Strong Coarse Correlated Equilibria in Bimatrix Games

We show that, for any sufficiently small fixed ϵ > 0, when both players ...

Uncoupled Learning Dynamics with O(log T) Swap Regret in Multiplayer Games

In this paper we establish efficient and uncoupled learning dynamics so ...

On the Convergence of No-Regret Learning Dynamics in Time-Varying Games

Most of the literature on learning in games has focused on the restricti...

Training GANs with Optimism

We address the issue of limit cycling behavior in training Generative Ad...

O(T^-1) Convergence of Optimistic-Follow-the-Regularized-Leader in Two-Player Zero-Sum Markov Games

We prove that optimistic-follow-the-regularized-leader (OFTRL), together...

Let's be honest: An optimal no-regret framework for zero-sum games

We revisit the problem of solving two-player zero-sum games in the decen...

Solving Zero-Sum Games through Alternating Projections

In this work, we establish near-linear and strong convergence for a natu...
