Uncoupled and Convergent Learning in Two-Player Zero-Sum Markov Games
We revisit the problem of learning in two-player zero-sum Markov games, focusing on developing an algorithm that is uncoupled, convergent, and rational, with non-asymptotic convergence rates. As a warm-up, we start from the case of stateless matrix games with bandit feedback, showing an 𝒪(t^-1/8) last-iterate convergence rate. To the best of our knowledge, this is the first result that obtains a finite last-iterate convergence rate given access to only bandit feedback. We then extend our result to the case of irreducible Markov games, providing a last-iterate convergence rate of 𝒪(t^-1/(9+ε)) for any ε>0. Finally, we study Markov games without any assumptions on the dynamics, and show a path-convergence rate of 𝒪(t^-1/10), where path convergence is a new notion of convergence that we define. Our algorithm removes the synchronization and prior-knowledge requirements of [Wei et al., 2021], which pursued the same goals as ours for irreducible Markov games. Our algorithm is related to [Chen et al., 2021, Cen et al., 2021] and also builds on the entropy regularization technique. However, we remove their requirement of communicating entropy values, making our algorithm entirely uncoupled. A minimal illustrative sketch of this style of uncoupled, entropy-regularized learning under bandit feedback is given below.
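The following sketch is illustrative only and is not the paper's exact algorithm: it shows two players independently running entropy-regularized exponential-weights updates on a zero-sum matrix game, each observing only the bandit payoff of the sampled action pair. The game matrix `A`, the step size `eta`, the regularization weight `tau`, and the importance-weighted gradient estimator are all assumed choices for illustration.

```python
# Hedged sketch: uncoupled, entropy-regularized learning with bandit feedback
# in a zero-sum matrix game. Not the paper's algorithm; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))      # row player's payoff matrix; column player receives -A
T, eta, tau = 50_000, 0.01, 0.01     # horizon, step size, entropy-regularization weight

x = np.ones(3) / 3                   # row player's mixed strategy
y = np.ones(3) / 3                   # column player's mixed strategy

for t in range(T):
    i = rng.choice(3, p=x)           # each player samples an action...
    j = rng.choice(3, p=y)
    r = A[i, j]                      # ...and observes only this scalar payoff (bandit feedback)

    # Importance-weighted gradient estimates built from the bandit observation alone:
    # neither player sees the opponent's strategy or a full row/column of A.
    g_x = np.zeros(3); g_x[i] = r / x[i]
    g_y = np.zeros(3); g_y[j] = -r / y[j]

    # Entropy-regularized exponential-weights step; the regularization term
    # pulls the iterates toward the uniform distribution.
    x = x * np.exp(eta * (g_x - tau * (np.log(x) + 1.0)))
    y = y * np.exp(eta * (g_y - tau * (np.log(y) + 1.0)))
    x /= x.sum(); y /= y.sum()

print("row strategy:", np.round(x, 3), "col strategy:", np.round(y, 3))
```

Each update uses only locally observed quantities, so the dynamics are uncoupled in the sense described above; the entropy term is what this style of analysis leverages to obtain last-iterate (rather than only average-iterate) guarantees.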