Efficient Competitive Self-Play Policy Optimization

09/13/2020
by Yuanyi Zhong, et al.

Reinforcement learning from self-play has recently reported many successes. In self-play, agents compete against copies of themselves to generate training data for iterative policy improvement. Previous work designs heuristic rules to choose an opponent for the current learner, such as the latest agent, the best agent, or a random historical agent. However, these rules can be inefficient in practice and sometimes fail to guarantee convergence even in the simplest matrix games. In this paper, we propose a new algorithmic framework for competitive self-play reinforcement learning in two-player zero-sum games. We exploit the fact that the Nash equilibrium coincides with the saddle point of the stochastic payoff function, which motivates us to borrow ideas from the classical saddle-point optimization literature. Our method trains several agents simultaneously; each intelligently selects another as its opponent according to simple adversarial rules derived from a principled perturbation-based saddle optimization method. We prove that, under standard assumptions, our algorithm converges to an approximate equilibrium with high probability in convex-concave games. Beyond the theory, we demonstrate the empirical superiority of our method, with neural network policy function approximators, over baselines relying on the aforementioned opponent-selection heuristics in matrix games, grid-world soccer, Gomoku, and simulated robot sumo.
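To make the saddle-point view concrete, here is a minimal, hypothetical sketch (not the paper's actual algorithm) of a classical saddle-point method from the literature the abstract refers to: the extragradient update, driving both players of a zero-sum matrix game toward the Nash equilibrium. The payoff matrix (matching pennies) and the step size are illustrative assumptions.

```python
# Extragradient updates on the zero-sum "matching pennies" game.
# Each player mixes over two actions; p and q are the probabilities
# of playing the first action for the min- and max-player.
# Min-player's expected loss: f(p, q) = 4*p*q - 2*p - 2*q + 1,
# whose unique saddle point (Nash equilibrium) is p = q = 0.5.

def clip01(x):
    """Project a probability back onto [0, 1]."""
    return min(1.0, max(0.0, x))

def extragradient(p, q, lr=0.1, steps=500):
    for _ in range(steps):
        # Gradients of f: df/dp = 4q - 2, df/dq = 4p - 2.
        # Half step: probe the gradient at a look-ahead point.
        p_half = clip01(p - lr * (4 * q - 2))
        q_half = clip01(q + lr * (4 * p - 2))
        # Full step: update using the look-ahead gradients.
        p = clip01(p - lr * (4 * q_half - 2))
        q = clip01(q + lr * (4 * p_half - 2))
    return p, q

p, q = extragradient(0.9, 0.2)
print(p, q)  # both approach the equilibrium strategy 0.5
```

Plain simultaneous gradient descent-ascent spirals away from the saddle point on this bilinear payoff; it is the look-ahead (perturbation-like) evaluation that yields convergence, which is the intuition behind perturbation-based opponent selection.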


Related research:

- 10/10/2021: Reinforcement Learning In Two Player Zero Sum Simultaneous Action Games. Two player zero sum simultaneous action games are common in video games,...
- 09/01/2020: Learning Nash Equilibria in Zero-Sum Stochastic Games via Entropy-Regularized Policy Approximation. We explore the use of policy approximation for reducing the computationa...
- 08/21/2021: Temporal Induced Self-Play for Stochastic Bayesian Games. One practical requirement in solving dynamic games is to ensure that the...
- 11/27/2019: Improving Fictitious Play Reinforcement Learning with Expanding Models. Fictitious play with reinforcement learning is a general and effective f...
- 09/18/2019: Robust Opponent Modeling via Adversarial Ensemble Reinforcement Learning in Asymmetric Imperfect-Information Games. This paper presents an algorithmic framework for learning robust policie...
- 07/25/2022: Provably Efficient Fictitious Play Policy Optimization for Zero-Sum Markov Games with Structured Transitions. While single-agent policy optimization in a fixed environment has attrac...
- 03/15/2012: Automated Planning in Repeated Adversarial Games. Game theory's prescriptive power typically relies on full rationality an...
