From Poincaré Recurrence to Convergence in Imperfect Information Games: Finding Equilibrium via Regularization

02/19/2020 ∙ by Julien Perolat et al.

In this paper we investigate the Follow the Regularized Leader (FoReL) dynamics in sequential imperfect information games (IIG). We generalize existing Poincaré recurrence results from normal-form games to zero-sum two-player imperfect information games and other sequential game settings. We then investigate how adapting the game's reward (by adding a regularization term) can give strong convergence guarantees in monotone games. We continue by showing how this reward adaptation technique can be leveraged to build algorithms that converge exactly to the Nash equilibrium. Finally, we show how these insights can be directly used to build state-of-the-art model-free algorithms for zero-sum two-player IIGs.
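For intuition, here is a minimal sketch (not the paper's algorithm) of FoReL dynamics on a two-player zero-sum normal-form game, with an entropy term added to the reward. Without regularization the policies tend to cycle around the equilibrium (the recurrence behaviour); with the added term they settle near the regularized equilibrium. The regularization weight `tau`, learning rate `eta`, and step count are illustrative choices.

```python
# Minimal FoReL sketch on rock-paper-scissors (zero-sum normal-form game),
# with an optional entropy reward-adaptation term of weight tau (assumed values).
import numpy as np

A = np.array([[0., -1., 1.],
              [1., 0., -1.],
              [-1., 1., 0.]])  # row player's payoff matrix; column player gets -A

def softmax(y):
    z = np.exp(y - y.max())
    return z / z.sum()

def forel(tau=0.0, eta=0.05, steps=20000):
    """Run discrete-time FoReL for both players.

    tau > 0 adds -tau * log(pi) to each player's reward (entropy regularization).
    """
    y1 = np.zeros(3)  # cumulative (regularized) payoffs, row player
    y2 = np.zeros(3)  # cumulative (regularized) payoffs, column player
    for _ in range(steps):
        x1, x2 = softmax(y1), softmax(y2)
        # expected payoff of each pure action, plus the entropy reward term
        r1 = A @ x2 - tau * np.log(x1)
        r2 = -A.T @ x1 - tau * np.log(x2)
        y1 += eta * r1
        y2 += eta * r2
    return softmax(y1), softmax(y2)

print("no regularization :", forel(tau=0.0))   # policies keep cycling around (1/3, 1/3, 1/3)
print("with regularization:", forel(tau=0.1))  # policies converge near the uniform equilibrium
```

In this toy setting the uniform strategy is the Nash equilibrium, so the regularized fixed point is close to it; the paper's contribution is making this kind of convergence argument precise in sequential imperfect information games.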


Comments

Vaclav Kosar

You can listen to the paper here: https://youtu.be/4WQkoOsr0DU 👂📰🤓 From Poincaré Recurrence to Convergence in Imperfect Information Games - to build model-free algorithms for zero-sum games, this paper generalizes existing results on Poincaré recurrence.

