On Reinforcement Learning for Turn-based Zero-sum Markov Games

02/25/2020
by Devavrat Shah, et al.

We consider the problem of finding a Nash equilibrium for two-player turn-based zero-sum games. Inspired by the AlphaGo Zero (AGZ) algorithm, we develop a Reinforcement Learning-based approach. Specifically, we propose the Explore-Improve-Supervise (EIS) method, which combines "exploration", "policy improvement", and "supervised learning" to find the value function and policy associated with a Nash equilibrium. We identify sufficient conditions for the convergence and correctness of such an approach. For a concrete instance of EIS in which a random policy is used for "exploration", Monte-Carlo Tree Search is used for "policy improvement", and Nearest Neighbors is used for "supervised learning", we establish that this method finds an ε-approximate value function of the Nash equilibrium in O(ε^-(d+4)) steps when the underlying state-space of the game is continuous and d-dimensional. This is nearly optimal, as we establish a lower bound of Ω(ε^-(d+2)) for any policy.
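The outer structure of the EIS loop can be sketched in a few lines. The sketch below is a hypothetical illustration, not the paper's implementation: the "improve" operator is a stand-in contraction toward a known target value function (the paper uses Monte-Carlo Tree Search here), the game is replaced by a toy one-dimensional state space, and all names (`eis`, `nearest_neighbor_predict`, `v_star`) are invented for this example. Only the explore/improve/supervise skeleton mirrors the method described above.

```python
import random
import math

def nearest_neighbor_predict(data, x):
    # "Supervised learning" step: 1-nearest-neighbor regression
    # over (state, value) pairs, as in the concrete EIS instance.
    state, value = min(data, key=lambda sv: abs(sv[0] - x))
    return value

def eis(explore, improve, n_samples=200, n_iters=5, seed=0):
    """Hypothetical Explore-Improve-Supervise loop:
    1. Explore: sample states with a random policy.
    2. Improve: compute better value estimates at the sampled states
       (a placeholder for Monte-Carlo Tree Search).
    3. Supervise: fit a nearest-neighbor model to the improved targets.
    """
    rng = random.Random(seed)
    model = lambda x: 0.0  # initial value-function estimate
    for _ in range(n_iters):
        states = [explore(rng) for _ in range(n_samples)]
        data = [(s, improve(s, model)) for s in states]
        model = lambda x, d=data: nearest_neighbor_predict(d, x)
    return model

# Toy instance: the "true" value function is v*(s) = sin(pi * s) on [0, 1];
# the improvement operator moves the current estimate halfway toward v*,
# mimicking a contraction such as the minimax Bellman operator.
v_star = lambda s: math.sin(math.pi * s)
improve = lambda s, m: 0.5 * (m(s) + v_star(s))
model = eis(lambda rng: rng.random(), improve)
err = abs(model(0.5) - v_star(0.5))  # shrinks as the loop contracts toward v*
```

The nearest-neighbor fit is what ties the sample complexity to the dimension d: covering a d-dimensional continuous state space densely enough for the regression step is what drives the ε^-(d+4) rate quoted above.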


