
From Poincaré Recurrence to Convergence in Imperfect Information Games: Finding Equilibrium via Regularization

by Julien Perolat et al.

In this paper we investigate Follow the Regularized Leader (FTRL) dynamics in sequential imperfect information games (IIGs). We generalize existing Poincaré recurrence results from normal-form games to two-player zero-sum IIGs and other sequential game settings. We then show how adapting the reward of the game, by adding a regularization term, yields strong convergence guarantees in monotone games, and how this reward-adaptation technique can be leveraged to build algorithms that converge exactly to the Nash equilibrium. Finally, we show how these insights can be used directly to build state-of-the-art model-free algorithms for two-player zero-sum IIGs.
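The contrast the abstract draws — recurrence of plain FTRL versus convergence under reward regularization — can be illustrated on a small normal-form game. The following is a minimal sketch, not the paper's algorithm: discrete-time entropy-regularized FTRL (multiplicative weights) on rock-paper-scissors, where setting `tau > 0` adds an entropic term to the reward that pulls the last iterate toward the regularized equilibrium (here, the uniform strategy), while `tau = 0` leaves the iterates cycling around it. The step size `eta`, the initial scores, and the `tau` value are illustrative choices.

```python
import numpy as np

# Rock-paper-scissors payoff matrix for the row player (zero-sum game).
A = np.array([[0., -1., 1.],
              [1., 0., -1.],
              [-1., 1., 0.]])

def softmax(z):
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def ftrl(eta=0.1, tau=0.0, steps=5000):
    """Discrete-time entropy-regularized FTRL on the matrix game A.

    With tau = 0 this is plain multiplicative weights, whose last iterate
    circles the Nash equilibrium. With tau > 0 the reward is adapted by an
    entropic term -tau * log(pi), and the last iterate converges to the
    regularized equilibrium (uniform for this symmetric game).
    """
    x_score = np.array([1., 0., 0.])   # non-uniform start for the row player
    y_score = np.array([0., 1., 0.])   # non-uniform start for the column player
    for _ in range(steps):
        x = softmax(eta * x_score)
        y = softmax(eta * y_score)
        # Accumulate (possibly regularization-adapted) payoff vectors.
        x_score += A @ y - tau * np.log(x + 1e-12)
        y_score += -A.T @ x - tau * np.log(y + 1e-12)
    return softmax(eta * x_score), softmax(eta * y_score)
```

Running `ftrl(tau=0.2)` drives both players' last iterates toward the uniform equilibrium, while `ftrl(tau=0.0)` leaves the last iterate visibly far from it, which is the qualitative gap between recurrence and last-iterate convergence that the reward-adaptation results formalize.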




Related research:

Successful Nash Equilibrium Agent for a 3-Player Imperfect-Information Game
Adapting to game trees in zero-sum imperfect information games
Joint Policy Search for Multi-agent Collaboration with Imperfect Information
Last-iterate Convergence to Trembling-hand Perfect Equilibria
A Slingshot Approach to Learning in Monotone Games
Sound Search in Imperfect Information Games
Model-Free Learning for Two-Player Zero-Sum Partially Observable Markov Games with Perfect Recall