Optimistic Regret Minimization for Extensive-Form Games via Dilated Distance-Generating Functions

10/24/2019
by Gabriele Farina, et al.

We study the performance of optimistic regret-minimization algorithms for both minimizing regret in, and computing Nash equilibria of, zero-sum extensive-form games. Applying these algorithms to extensive-form games requires a distance-generating function; we study the dilated entropy and dilated Euclidean distance functions. For the dilated Euclidean distance function we prove the first explicit bounds on the strong-convexity parameter for general treeplexes. Furthermore, we show that the use of dilated distance-generating functions allows us to decompose the mirror descent algorithm, and its optimistic variant, into local mirror descent algorithms at each information set. This decomposition mirrors the structure of the counterfactual regret minimization framework. We experimentally compare our algorithms to the popular algorithm CFR+, which is known to often converge at a rate of T^-1, or better, in practice. We give an example matrix game where CFR+ experimentally converges at the relatively slow rate of T^-0.74, whereas optimistic methods converge faster than T^-1. We show that this fast rate also holds in Kuhn poker, an extensive-form game. For games with deeper game trees, however, we find that CFR+ is still faster. Finally, we show that when the goal is minimizing regret, rather than computing a Nash equilibrium, optimistic methods can outperform CFR+, even in deep game trees.
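The paper's algorithms operate over treeplexes with dilated distance-generating functions, which requires machinery beyond the scope of this abstract. As a much simpler, hedged illustration of the optimistic idea in the matrix-game setting mentioned above, the sketch below runs optimistic Hedge (online mirror descent with the entropy distance-generating function, using the last observed loss as the prediction of the next one) in self-play on a random zero-sum matrix game and reports the saddle-point gap of the average strategies. The function name `optimistic_hedge_matrix_game`, the step size `eta`, and the random payoff matrix are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def optimistic_hedge_matrix_game(A, eta=0.1, T=1000):
    """Optimistic Hedge (entropy DGF) self-play on a zero-sum matrix game.

    The x-player minimizes x^T A y, the y-player maximizes it.  Each player
    runs optimistic multiplicative weights: the last observed loss vector is
    used as a prediction of the next one.  This is an illustrative sketch,
    not the paper's treeplex construction.
    """
    m, n = A.shape
    Gx = np.zeros(m)          # cumulative loss for the x-player
    Gy = np.zeros(n)          # cumulative loss for the y-player
    gx_prev = np.zeros(m)     # last loss, used as the optimistic prediction
    gy_prev = np.zeros(n)
    x_avg = np.zeros(m)
    y_avg = np.zeros(n)

    def softmax(v):
        w = np.exp(v - v.max())
        return w / w.sum()

    for _ in range(T):
        # Optimistic step: respond to cumulative loss plus the predicted loss.
        x = softmax(-eta * (Gx + gx_prev))
        y = softmax(-eta * (Gy + gy_prev))

        gx = A @ y            # loss vector for the x-player (minimizer)
        gy = -A.T @ x         # loss vector for the y-player (maximizer)

        Gx += gx
        Gy += gy
        gx_prev, gy_prev = gx, gy
        x_avg += x
        y_avg += y

    x_avg /= T
    y_avg /= T
    # Saddle-point (Nash) gap of the average strategies.
    gap = (A.T @ x_avg).max() - (A @ y_avg).min()
    return x_avg, y_avg, gap

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((10, 10))
    _, _, gap = optimistic_hedge_matrix_game(A, eta=0.1, T=5000)
    print(f"saddle-point gap after 5000 iterations: {gap:.2e}")
```

Dropping the `gx_prev`/`gy_prev` prediction terms recovers plain Hedge; comparing the gap decay of the two variants is the kind of convergence-rate comparison the experiments make against CFR+.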
