Online Double Oracle

March 13, 2021 · Le Cong Dinh et al.

Solving strategic games with huge action spaces is a critical yet under-explored topic in economics, operations research, and artificial intelligence. This paper proposes new learning algorithms for solving two-player zero-sum normal-form games in which the number of pure strategies is prohibitively large. Specifically, we combine no-regret analysis from online learning with Double Oracle (DO) methods from game theory. Our method, Online Double Oracle (ODO), is provably convergent to a Nash equilibrium (NE). Most importantly, unlike standard DO methods, ODO is rational in the sense that each agent in ODO can exploit a strategic adversary with a regret bound of 𝒪(√(T k log(k))), where k is not the total number of pure strategies but rather the size of the effective strategy set, which depends linearly on the support size of the NE. On dozens of real-world games, ODO outperforms DO, PSRO methods, and no-regret algorithms such as Multiplicative Weight Update by a significant margin, both in convergence rate to an NE and in average payoff against strategic adversaries.
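The combination described above can be illustrated with a small sketch. The code below is not the authors' exact algorithm; it is a minimal, assumed rendering of the core idea: run Multiplicative Weight Update (the no-regret component) over a growing restricted set of pure strategies, and expand that set with best responses to the opponent's average play (the Double Oracle component). The function names (`odo_sketch`, `best_response`) and all hyperparameters are hypothetical choices for illustration.

```python
import numpy as np

def best_response(payoff, opp_avg):
    # Pure best-response row against the opponent's average column strategy.
    return int(np.argmax(payoff @ opp_avg))

def odo_sketch(payoff, T=3000, eta=0.1):
    """Illustrative sketch (not the paper's exact method): MWU over a
    restricted strategy set that is expanded with best responses."""
    n, m = payoff.shape
    rows, cols = [0], [0]            # restricted pure-strategy sets
    wr, wc = np.ones(1), np.ones(1)  # MWU weights over those sets
    avg_row, avg_col = np.zeros(n), np.zeros(m)
    for t in range(T):
        pr, pc = wr / wr.sum(), wc / wc.sum()
        # Lift the restricted mixed strategies to the full game.
        x = np.zeros(n); x[rows] = pr
        y = np.zeros(m); y[cols] = pc
        avg_row += x; avg_col += y
        # MWU updates on the restricted payoff submatrix:
        # the row player maximizes, the column player minimizes.
        sub = payoff[rows][:, cols]
        wr *= np.exp(eta * (sub @ pc))
        wc *= np.exp(-eta * (sub.T @ pr))
        # Oracle step: add best responses to the opponent's average play.
        br_r = best_response(payoff, avg_col / (t + 1))
        if br_r not in rows:
            rows.append(br_r); wr = np.append(wr, wr.mean())
        br_c = int(np.argmin((avg_row / (t + 1)) @ payoff))
        if br_c not in cols:
            cols.append(br_c); wc = np.append(wc, wc.mean())
    # Average strategies converge toward an NE in zero-sum games.
    return avg_row / T, avg_col / T
```

On rock-paper-scissors the averaged strategies approach the uniform equilibrium, and their exploitability (the gain of a pure best response against them) shrinks as T grows, consistent with a no-regret guarantee.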
