Learning to Play Imperfect-Information Games by Imitating an Oracle Planner

12/22/2020
by Rinu Boney, et al.

We consider learning to play multiplayer imperfect-information games with simultaneous moves and large state-action spaces. Previous attempts to tackle such challenging games have largely focused on model-free learning methods, often requiring hundreds of years of experience to produce competitive agents. Our approach is based on model-based planning. We tackle the problem of partial observability by first building an (oracle) planner that has access to the full state of the environment and then distilling the knowledge of the oracle to a (follower) agent which is trained to play the imperfect-information game by imitating the oracle's choices. We experimentally show that planning with naive Monte Carlo tree search does not perform very well in large combinatorial action spaces. We therefore propose planning with a fixed-depth tree search and decoupled Thompson sampling for action selection. We show that the planner is able to discover efficient playing strategies in the games of Clash Royale and Pommerman and the follower policy successfully learns to implement them by training on a few hundred battles.
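The key algorithmic idea in the abstract — decoupled Thompson sampling over a large combinatorial action space — can be illustrated with a minimal sketch. Here the joint action is factored into independent dimensions, each maintaining its own Beta-Bernoulli posterior; the class name, prior, and reward model are illustrative assumptions, not the paper's implementation (which couples this selection rule with a fixed-depth tree search):

```python
import random

class DecoupledThompsonSampler:
    """Decoupled Thompson sampling over a factored action space.

    Each action dimension keeps a Beta posterior per candidate; a joint
    action is formed by sampling every dimension independently, which
    avoids enumerating the full combinatorial action space.
    """

    def __init__(self, dim_sizes):
        # One (alpha, beta) Beta(1, 1) prior per candidate in each dimension.
        self.posteriors = [[(1.0, 1.0) for _ in range(n)] for n in dim_sizes]

    def select_action(self):
        action = []
        for dim in self.posteriors:
            # Sample a success probability for each candidate; pick the argmax.
            samples = [random.betavariate(a, b) for a, b in dim]
            action.append(max(range(len(dim)), key=samples.__getitem__))
        return tuple(action)

    def update(self, action, reward):
        # Bernoulli reward in {0, 1}; update each chosen component's posterior.
        for d, idx in enumerate(action):
            a, b = self.posteriors[d][idx]
            self.posteriors[d][idx] = (a + reward, b + (1 - reward))
```

For example, with two dimensions of sizes 3 and 4 there are 12 joint actions, but each `select_action` call draws only 7 posterior samples — the decoupling that makes selection tractable when dimensions multiply.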

research
08/30/2018

ExIt-OOS: Towards Learning from Planning in Imperfect Information Games

The current state of the art in playing many important perfect informati...
research
10/06/2021

No-Press Diplomacy from Scratch

Prior AI successes in complex games have largely focused on settings wit...
research
03/22/2019

Monte Carlo Neural Fictitious Self-Play: Approach to Approximate Nash equilibrium of Imperfect-Information Games

Researchers on artificial intelligence have achieved human-level intelli...
research
12/05/2019

Combining Q-Learning and Search with Amortized Value Estimates

We introduce "Search with Amortized Value Estimates" (SAVE), an approach...
research
05/30/2020

Manipulating the Distributions of Experience used for Self-Play Learning in Expert Iteration

Expert Iteration (ExIt) is an effective framework for learning game-play...
research
11/07/2022

Are AlphaZero-like Agents Robust to Adversarial Perturbations?

The success of AlphaZero (AZ) has demonstrated that neural-network-based...
