
Modeling Strong and Human-Like Gameplay with KL-Regularized Search

by Athul Paul Jacob et al.

We consider the task of building strong but human-like policies in multi-agent decision-making problems, given examples of human behavior. Imitation learning is effective at predicting human actions but may not match the strength of expert humans, while self-play learning and search techniques (e.g. AlphaZero) lead to strong performance but may produce policies that are difficult for humans to understand and coordinate with. We show in chess and Go that regularizing search based on the KL divergence from an imitation-learned policy results in higher human prediction accuracy and stronger performance than imitation learning alone. We then introduce a novel regret minimization algorithm that is regularized based on the KL divergence from an imitation-learned policy, and show that using this algorithm for search in no-press Diplomacy yields a policy that matches the human prediction accuracy of imitation learning while being substantially stronger.
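The core idea of KL-regularizing search toward an imitation-learned policy has a well-known closed form: maximizing the expected action value minus a KL penalty from an anchor policy yields a distribution proportional to the anchor weighted by exponentiated values. The sketch below illustrates that closed form only; it is not the paper's search algorithm, and the value estimates and anchor probabilities are hypothetical.

```python
import math

def kl_regularized_policy(q_values, anchor_policy, lam):
    """Closed-form maximizer of E_pi[Q] - lam * KL(pi || anchor).

    The solution is pi(a) proportional to anchor(a) * exp(Q(a) / lam):
    small lam trusts the value estimates, large lam stays close to the
    imitation-learned anchor policy.
    """
    weights = [p * math.exp(q / lam)
               for q, p in zip(q_values, anchor_policy)]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical numbers for three candidate moves.
q = [1.0, 0.5, 0.2]    # value estimates from search
tau = [0.2, 0.7, 0.1]  # imitation-learned ("human") policy
print(kl_regularized_policy(q, tau, lam=1.0))
```

Varying `lam` interpolates between the two regimes the abstract contrasts: as `lam` grows the output approaches the human-like anchor, and as it shrinks the policy concentrates on the highest-value move.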



