Simplified Action Decoder for Deep Multi-Agent Reinforcement Learning

by Hengyuan Hu et al.

In recent years we have seen rapid progress on a number of benchmark problems in AI, with modern methods achieving near- or super-human performance in Go, Poker, and Dota. One common aspect of all of these challenges is that they are by design adversarial or, technically speaking, zero-sum. In contrast, success in the real world commonly requires humans to collaborate and communicate with others in settings that are, at least partially, cooperative. In the last year, the card game Hanabi has been established as a new benchmark environment for AI that fills this gap. In particular, Hanabi is interesting to humans because it is entirely focused on theory of mind, i.e., the ability to effectively reason over the intentions, beliefs, and point of view of other agents when observing their actions.

Learning to be informative when observed by others is an interesting challenge for reinforcement learning (RL): fundamentally, RL requires agents to explore in order to discover good policies. However, when done naively, this randomness inherently makes their actions less informative to others during training. We present a new deep multi-agent RL method, the Simplified Action Decoder (SAD), which resolves this contradiction by exploiting the centralized training phase. During training, SAD lets agents observe not only the (exploratory) actions their teammates actually take, but also their teammates' greedy actions. By combining this simple intuition with best practices for multi-agent learning, SAD establishes a new state of the art (SOTA) among learning methods for 2-5 players on the self-play part of the Hanabi challenge. Our ablations quantify the contribution of SAD relative to these best-practice components. All of our code and trained agents are available at
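The core mechanism described above can be illustrated with a minimal sketch. The function names (`select_actions`, `augment_observation`) and the one-hot encoding of the greedy action are illustrative assumptions, not the paper's actual implementation: the point is only that an epsilon-greedy actor can return both the executed (possibly random) action and its greedy action, and that during centralized training a teammate's observation can be augmented with the latter.

```python
import random

def select_actions(q_values, epsilon, rng=None):
    """Epsilon-greedy selection that also exposes the greedy action.

    Returns (executed, greedy). In SAD-style centralized training,
    teammates condition on `greedy` as well, so exploration noise in
    `executed` does not destroy the informativeness of the action.
    """
    rng = rng or random
    greedy = max(range(len(q_values)), key=q_values.__getitem__)
    if rng.random() < epsilon:
        executed = rng.randrange(len(q_values))  # exploratory action
    else:
        executed = greedy
    return executed, greedy

def augment_observation(obs, greedy_action, num_actions):
    """Hypothetical helper: append a one-hot of the acting agent's
    greedy action to a teammate's observation (training time only)."""
    one_hot = [0.0] * num_actions
    one_hot[greedy_action] = 1.0
    return list(obs) + one_hot
```

At test time the extra input channel is simply fed the executed action, since executed and greedy actions coincide when epsilon is zero.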


