
No-Press Diplomacy from Scratch

by Anton Bakhtin, et al.

Prior AI successes in complex games have largely focused on settings with at most hundreds of actions at each decision point. In contrast, Diplomacy is a game with more than 10^20 possible actions per turn. Previous attempts to address games with large branching factors, such as Diplomacy, StarCraft, and Dota, used human data to bootstrap the policy or used handcrafted reward shaping. In this paper, we describe an algorithm for action exploration and equilibrium approximation in games with combinatorial action spaces. This algorithm simultaneously performs value iteration while learning a policy proposal network. A double oracle step is used to explore additional actions to add to the policy proposals. At each state, the target state value and policy for the model training are computed via an equilibrium search procedure. Using this algorithm, we train an agent, DORA, completely from scratch for a popular two-player variant of Diplomacy and show that it achieves superhuman performance. Additionally, we extend our methods to full-scale no-press Diplomacy and for the first time train an agent from scratch with no human data. We present evidence that this agent plays a strategy that is incompatible with human-data-bootstrapped agents. This presents the first strong evidence of multiple equilibria in Diplomacy and suggests that self-play alone may be insufficient for achieving superhuman performance in Diplomacy.
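The double-oracle step described in the abstract can be illustrated on a toy zero-sum matrix game. The sketch below is not the paper's implementation (DORA uses neural policy-proposal and value networks at Diplomacy scale); here a simple regret-matching solver stands in for the equilibrium search procedure, and all function names are illustrative:

```python
import numpy as np

def regret_matching(payoff, iters=2000):
    """Approximate an equilibrium of a zero-sum matrix game via
    regret matching; the row player maximizes `payoff`."""
    n_r, n_c = payoff.shape
    regret_r, regret_c = np.zeros(n_r), np.zeros(n_c)
    avg_r, avg_c = np.zeros(n_r), np.zeros(n_c)
    for _ in range(iters):
        pos_r = np.maximum(regret_r, 0.0)
        p = pos_r / pos_r.sum() if pos_r.sum() > 0 else np.full(n_r, 1.0 / n_r)
        pos_c = np.maximum(regret_c, 0.0)
        q = pos_c / pos_c.sum() if pos_c.sum() > 0 else np.full(n_c, 1.0 / n_c)
        avg_r += p
        avg_c += q
        u_r = payoff @ q            # row player's expected utility per action
        u_c = -(p @ payoff)         # column player's (zero-sum) utility per action
        regret_r += u_r - p @ u_r   # regret of each action vs. current mixture
        regret_c += u_c - q @ u_c
    return avg_r / iters, avg_c / iters  # average strategies approximate equilibrium

def double_oracle(payoff, init_rows, init_cols, max_rounds=20):
    """Double oracle: repeatedly solve the restricted game, then add each
    player's best response (over the FULL action set) to the opponent's
    equilibrium strategy, until neither player can improve."""
    rows, cols = list(init_rows), list(init_cols)
    for _ in range(max_rounds):
        p, q = regret_matching(payoff[np.ix_(rows, cols)])
        br_row = int(np.argmax(payoff[:, cols] @ q))   # row best response
        br_col = int(np.argmin(p @ payoff[rows, :]))   # column best response
        if br_row in rows and br_col in cols:
            break                                      # restricted equilibrium is stable
        if br_row not in rows:
            rows.append(br_row)
        if br_col not in cols:
            cols.append(br_col)
    return rows, cols, p, q

# Rock-paper-scissors: starting from one action each, the double-oracle
# loop rediscovers all three actions and a near-uniform equilibrium.
rps = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])
rows, cols, p, q = double_oracle(rps, init_rows=[0], init_cols=[0])
print(sorted(rows), sorted(cols))   # [0, 1, 2] [0, 1, 2]
print(np.round(p, 2))               # roughly [0.33, 0.33, 0.33]
```

In the paper, the restricted action sets come from the learned policy proposal network rather than being enumerated, which is what makes the approach tractable at 10^20 actions per turn.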




Related research

Mastering the Game of No-Press Diplomacy via Human-Regularized Reinforcement Learning and Planning

No-press Diplomacy is a complex strategy game involving both cooperation...

Learning to Play Imperfect-Information Games by Imitating an Oracle Planner

We consider learning to play multiplayer imperfect-information games wit...

Are AlphaZero-like Agents Robust to Adversarial Perturbations?

The success of AlphaZero (AZ) has demonstrated that neural-network-based...

Solving Large-Scale Extensive-Form Network Security Games via Neural Fictitious Self-Play

Securing networked infrastructures is important in the real world. The p...

Improve Agents without Retraining: Parallel Tree Search with Off-Policy Correction

Tree Search (TS) is crucial to some of the most influential successes in...

Human-Level Performance in No-Press Diplomacy via Equilibrium Search

Prior AI breakthroughs in complex games have focused on either the purel...

Policy Based Inference in Trick-Taking Card Games

Trick-taking card games feature a large amount of private information th...