Off-Policy Exploitability-Evaluation and Equilibrium-Learning in Two-Player Zero-Sum Markov Games

07/04/2020
by Kenshi Abe, et al.

Off-policy evaluation (OPE) is the problem of evaluating new policies using historical data generated by a different policy. Off-policy learning (OPL), in turn, is the problem of finding an optimal policy from such historical data. Most existing OPE and OPL studies focus on the single-player setting rather than on games with two or more players. In this study, we propose methods for OPE and OPL in two-player zero-sum Markov games. For OPE, we estimate exploitability, a metric commonly used to measure how close a strategy profile is to a Nash equilibrium in two-player zero-sum games. For OPL, we compute maximin policies, which constitute Nash equilibrium strategies, from the historical data. We prove exploitability estimation error bounds for OPE and regret bounds for OPL based on the doubly robust and double reinforcement learning estimators. Finally, we demonstrate the effectiveness and performance of the proposed methods through experiments.
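
For reference, exploitability in a two-player zero-sum game is typically defined as follows (a standard definition; the paper's exact normalization may differ). Writing v(\pi^1, \pi^2) for player 1's expected return under strategy profile (\pi^1, \pi^2),

    \mathrm{exploit}(\pi^1, \pi^2) = \max_{\tilde{\pi}^1} v(\tilde{\pi}^1, \pi^2) - \min_{\tilde{\pi}^2} v(\pi^1, \tilde{\pi}^2),

which is non-negative and equals zero exactly when (\pi^1, \pi^2) is a Nash equilibrium.

To make the doubly robust idea concrete, below is a minimal Python sketch of the standard per-trajectory backward recursion for doubly robust off-policy evaluation (in the style of Jiang and Li, 2016), not the paper's exact estimator; the trajectory format and the function name dr_value_estimate are illustrative assumptions. In the two-player case, the importance weight is the product of both players' probability ratios pi_e(a_t|s_t) / pi_b(a_t|s_t).

import numpy as np

def dr_value_estimate(trajectories, gamma=0.99):
    """Doubly robust off-policy estimate of a target strategy profile's value.

    Each trajectory is a list of steps (rho, r, q, v), where
      rho : importance weight pi_e(a_t|s_t) / pi_b(a_t|s_t)
            (a product over both players in the two-player case),
      r   : observed reward for player 1,
      q   : fitted action value q_hat(s_t, a_t) under the target profile,
      v   : fitted state value v_hat(s_t) under the target profile.
    """
    estimates = []
    for traj in trajectories:
        v_dr = 0.0
        # Backward recursion:
        # V_DR(t) = v_hat(s_t) + rho_t * (r_t + gamma * V_DR(t+1) - q_hat(s_t, a_t))
        for rho, r, q, v in reversed(traj):
            v_dr = v + rho * (r + gamma * v_dr - q)
        estimates.append(v_dr)
    return float(np.mean(estimates))

# Example: one synthetic two-step trajectory (all numbers made up).
traj = [(1.2, 1.0, 0.9, 0.8), (0.8, 0.0, 0.1, 0.2)]
print(dr_value_estimate([traj], gamma=0.9))

The estimate remains consistent if either the importance weights or the fitted values are correct, which is the "doubly robust" property the bounds in the abstract build on.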

Related research

01/19/2022  Anytime PSRO for Two-Player Zero-Sum Games
Policy space response oracles (PSRO) is a multi-agent reinforcement lear...

02/25/2020  On Reinforcement Learning for Turn-based Zero-sum Markov Games
We consider the problem of finding Nash equilibrium for two-player turn-...

05/12/2021  Identity Concealment Games: How I Learned to Stop Revealing and Love the Coincidences
In an adversarial environment, a hostile player performing a task may be...

03/14/2022  Learning Markov Games with Adversarial Opponents: Efficient Algorithms and Fundamental Limits
An ideal strategy in zero-sum games should not only grant the player an ...

05/26/2022  PixelGame: Infrared small target segmentation as a Nash equilibrium
A key challenge of infrared small target segmentation (ISTS) is to balan...

06/13/2023  On Faking a Nash Equilibrium
We characterize offline data poisoning attacks on Multi-Agent Reinforcem...

02/17/2021  Provably Efficient Policy Gradient Methods for Two-Player Zero-Sum Markov Games
Policy gradient methods are widely used in solving two-player zero-sum g...
