Joint Policy Search for Multi-agent Collaboration with Imperfect Information

08/14/2020
by   Yuandong Tian, et al.

Learning good joint policies for multi-agent collaboration under imperfect information remains a fundamental challenge. While coordinate-ascent approaches (optimizing one agent's policy at a time, e.g., self-play) come with guarantees in two-player zero-sum games, in multi-agent cooperative settings they often converge to sub-optimal Nash equilibria. On the other hand, directly modeling joint policy changes in imperfect-information games is nontrivial due to the complicated interplay of policies (e.g., upstream updates affect downstream state reachability). In this paper, we show that global changes of game values can be decomposed into policy changes localized at each information set, via a novel term named policy-change density. Based on this, we propose Joint Policy Search (JPS), which iteratively improves the joint policies of collaborative agents in imperfect-information games without re-evaluating the entire game. On multi-agent collaborative tabular games, JPS provably never worsens performance and can improve solutions provided by unilateral approaches (e.g., CFR), outperforming algorithms designed for collaborative policy learning (e.g., BAD). Furthermore, for real-world games, JPS has an online form that naturally links with gradient updates. We apply it to Contract Bridge, a 4-player imperfect-information game in which a team of two collaborates to compete against the other team. In its bidding phase, players bid in turn to find a good contract through a limited information channel. Starting from a strong baseline agent that bids competitive bridge purely through domain-agnostic self-play, JPS improves collaboration between team players and outperforms WBridge5, a championship-winning software, by +0.63 IMPs (International Matching Points) per board over 1,000 games under Double-Dummy evaluation, substantially better than the previous state of the art (+0.41 IMPs/b).
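The abstract's contrast between unilateral (coordinate-ascent) search and joint policy changes can be illustrated with a toy cooperative signaling game. Everything below is a hypothetical sketch: the game, the information-set names, and the greedy search are illustrations, not the paper's algorithm, and unlike JPS this sketch re-evaluates the full game for every candidate change rather than scoring it locally with a policy-change density.

```python
import itertools

# Toy 2-step cooperative signaling game (hypothetical): nature deals a card
# in {0, 1} to player 1; player 1 sends a signal in {0, 1}; player 2 sees
# only the signal and guesses the card. The team scores 1 on a correct
# guess. Information sets: player 1's card ("p1:c"), player 2's signal
# ("p2:s"). A deterministic joint policy maps each infoset to an action.
INFOSETS = ["p1:0", "p1:1", "p2:0", "p2:1"]
ACTIONS = [0, 1]

def game_value(policy):
    """Expected team payoff under a deterministic joint policy."""
    total = 0.0
    for card in (0, 1):                      # uniform chance node
        signal = policy[f"p1:{card}"]
        guess = policy[f"p2:{signal}"]
        total += 0.5 * (1.0 if guess == card else 0.0)
    return total

def search(policy, joint):
    """Greedy hill-climbing over policy deviations. With joint=False, only
    one information set may change at a time (coordinate ascent); with
    joint=True, pairs of infosets may change together. Changes are accepted
    only on strict improvement, so the value never worsens."""
    improved = True
    while improved:
        improved = False
        groups = [(i,) for i in INFOSETS]
        if joint:
            groups += list(itertools.combinations(INFOSETS, 2))
        base = game_value(policy)
        for group in groups:
            for actions in itertools.product(ACTIONS, repeat=len(group)):
                cand = dict(policy, **dict(zip(group, actions)))
                if game_value(cand) > base:
                    policy, base, improved = cand, game_value(cand), True
    return policy

# Uninformative start: player 1 always signals 0, player 2 always guesses 0.
start = {"p1:0": 0, "p1:1": 0, "p2:0": 0, "p2:1": 0}
print(game_value(search(dict(start), joint=False)))  # unilateral: stuck at 0.5
print(game_value(search(dict(start), joint=True)))   # joint deviations: 1.0
```

From the uninformative start, no single-infoset change raises the value above 0.5 (a new signal is useless until the guesser changes too, and vice versa), so coordinate ascent is stuck at a sub-optimal equilibrium; changing the signaler and guesser jointly reaches the optimal value 1.0. This mirrors the motivation for searching over joint policy changes.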

