K-level Reasoning for Zero-Shot Coordination in Hanabi

07/14/2022
by Brandon Cui, et al.

The standard problem setting in cooperative multi-agent learning is self-play (SP), where the goal is to train a team of agents that works well together. However, optimal SP policies commonly contain arbitrary conventions ("handshakes") and are not compatible with other, independently trained agents or with humans. This latter desideratum was recently formalized by Hu et al. (2020) as the zero-shot coordination (ZSC) setting and partially addressed with their Other-Play (OP) algorithm, which showed improved ZSC and human-AI performance in the card game Hanabi. OP assumes access to the symmetries of the environment and prevents agents from breaking these symmetries in mutually incompatible ways during training. However, as the authors point out, discovering the symmetries of a given environment is a computationally hard problem. Instead, we show that through a simple adaptation of k-level reasoning (KLR; Costa-Gomes et al., 2006), in which all levels are trained synchronously, we can obtain competitive ZSC and ad-hoc teamplay performance in Hanabi, including when paired with a human-like proxy bot. We also introduce a new method, synchronous k-level reasoning with a best response (SyKLRBR), which further improves on synchronous KLR by co-training a best response.
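
The core idea is easy to see in miniature: level 0 plays uniformly at random, each level k is trained as a best response to level k-1, all levels update in the same iteration rather than sequentially, and SyKLRBR additionally co-trains a best response against a mixture over all levels. Below is a minimal sketch of that training loop on a toy common-payoff matrix game; the game, hyperparameters, and all function names are illustrative assumptions, not the paper's actual Hanabi implementation.

```python
# A minimal, self-contained sketch of synchronous k-level reasoning (KLR)
# with a co-trained best response (the SyKLRBR idea), illustrated on a toy
# common-payoff matrix game rather than Hanabi. The game, step sizes, and
# function names below are illustrative assumptions, not the paper's code.
import numpy as np

# Toy 2-player common-payoff game: both players pick one of 3 actions and
# are rewarded when they match. A self-play pair could settle on any of the
# three "conventions"; KLR instead anchors the hierarchy in a uniform level 0.
PAYOFF = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.1],  # a slightly better focal action
])

N_LEVELS = 3   # learned levels 1..N_LEVELS on top of a fixed level 0
LR = 0.5       # policy-gradient step size
STEPS = 500

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def grad_step(logits, partner_pi):
    """One exact policy-gradient ascent step toward a best response
    against a fixed partner policy partner_pi."""
    pi = softmax(logits)
    q = PAYOFF @ partner_pi          # expected payoff of each own action
    return logits + LR * pi * (q - pi @ q)

# Level 0 is fixed to uniform random (all-zero logits, never updated).
logits = [np.zeros(3) for _ in range(N_LEVELS + 1)]
br_logits = np.zeros(3)              # the co-trained best response

for _ in range(STEPS):
    pis = [softmax(l) for l in logits]
    # Synchronous KLR: every level k best-responds to the *current*
    # level k-1 policy, and all levels update in the same iteration
    # (rather than training level k to convergence before level k+1).
    for k in range(1, N_LEVELS + 1):
        logits[k] = grad_step(logits[k], pis[k - 1])
    # SyKLRBR: additionally train a best response against a uniform
    # mixture over all current levels.
    br_logits = grad_step(br_logits, np.mean(pis, axis=0))

for k, l in enumerate(logits):
    print(f"level {k}:", np.round(softmax(l), 2))
print("BR     :", np.round(softmax(br_logits), 2))
```

Anchoring the hierarchy in a fixed, uniform level 0 is what helps keep the learned policies free of arbitrary handshakes: every level is grounded, directly or indirectly, in a convention-free partner, while the co-trained best response sees a diverse distribution of partners and therefore tends to be more robust in ad-hoc play.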


Related research

- "Other-Play" for Zero-Shot Coordination (03/06/2020): We consider the problem of zero-shot coordination - constructing AI agen...
- Off-Belief Learning (03/06/2021): The standard problem setting in Dec-POMDPs is self-play, where the goal ...
- A New Formalism, Method and Open Issues for Zero-Shot Coordination (06/11/2021): In many coordination problems, independently reasoning humans are able t...
- Quasi-Equivalence Discovery for Zero-Shot Emergent Communication (03/14/2021): Effective communication is an important skill for enabling information e...
- Human-AI Coordination via Human-Regularized Search and Learning (10/11/2022): We consider the problem of making AI agents that collaborate well with h...
- Equivariant Networks for Zero-Shot Coordination (10/21/2022): Successful coordination in Dec-POMDPs requires agents to adopt robust st...
- How to Motivate Your Dragon: Teaching Goal-Driven Agents to Speak and Act in Fantasy Worlds (10/01/2020): We seek to create agents that both act and communicate with other agents...
