The Partially Observable Asynchronous Multi-Agent Cooperation Challenge

12/07/2021
by Meng Yao, et al.

Multi-agent reinforcement learning (MARL) has received increasing attention for its applications in various domains. Researchers have paid particular attention to its partially observable and cooperative settings, which better reflect real-world requirements. To test the performance of different algorithms, standardized environments have been designed, such as the StarCraft Multi-Agent Challenge, one of the most successful MARL benchmarks. To the best of our knowledge, most current environments are synchronous, where agents execute actions at the same pace. However, heterogeneous agents usually have their own action spaces, and there is no guarantee that actions from different agents share the same execution cycle, which leads to asynchronous multi-agent cooperation. Inspired by the Wargame, a confrontation game between two armies abstracted from real-world environments, we propose the first Partially Observable Asynchronous multi-agent Cooperation challenge (POAC) for the MARL community. Specifically, POAC supports two teams of heterogeneous agents fighting against each other, where each agent selects actions based on its own observations and cooperates asynchronously with its allies. Moreover, POAC is a lightweight, flexible, and easy-to-use environment that users can configure to meet different experimental requirements, such as a self-play mode, a human-AI mode, and so on. Along with our benchmark, we offer six game scenarios of varying difficulty with built-in rule-based AIs as opponents. Finally, since most MARL algorithms are designed for synchronous agents, we revise several representative algorithms to fit the asynchronous setting, and their relatively poor experimental results validate the challenge posed by POAC. Source code is released at <http://turingai.ia.ac.cn/data_center/show>.
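
To make the asynchronous setting concrete, below is a minimal Python sketch, not the released POAC interface: the toy environment (ToyAsyncEnv), the unit types, and the per-unit action durations (ACTION_DURATION) are illustrative assumptions. The point it shows is that step() receives a partial joint action, because at each tick only the agents whose previous action has completed choose a new one.

```python
# A minimal sketch of asynchronous multi-agent cooperation (illustrative only,
# not the released POAC API): heterogeneous agents take actions of different
# durations, so at each tick only the agents that are free select a new action.
import random

# Hypothetical per-unit action durations, measured in environment ticks.
ACTION_DURATION = {"infantry": 1, "tank": 2, "artillery": 3}


class ToyAsyncEnv:
    """Stand-in environment whose step() accepts a partial joint action."""

    def __init__(self, unit_types):
        self.unit_types = unit_types
        self.tick = 0

    def reset(self):
        self.tick = 0
        # Each agent only receives its own (toy) local observation.
        return {name: {"tick": 0} for name in self.unit_types}

    def step(self, joint_action):
        # joint_action may contain only a subset of agents: the ready ones.
        self.tick += 1
        obs = {name: {"tick": self.tick} for name in self.unit_types}
        rewards = {name: 0.0 for name in joint_action}
        done = self.tick >= 20            # toy episode length
        return obs, rewards, done, {}


def run_episode():
    units = {"u1": "infantry", "u2": "tank", "u3": "artillery"}
    env = ToyAsyncEnv(units)
    obs = env.reset()
    busy_until = {name: 0 for name in units}   # tick at which each agent is free

    done, tick = False, 0
    while not done:
        joint_action = {}
        for name, unit_type in units.items():
            if tick >= busy_until[name]:       # only free agents act this tick
                # Random choice stands in for a policy that would use obs[name].
                joint_action[name] = random.choice(["move", "attack", "hold"])
                busy_until[name] = tick + ACTION_DURATION[unit_type]
        obs, rewards, done, _ = env.step(joint_action)
        tick += 1


if __name__ == "__main__":
    run_episode()
```

A learning algorithm built on such an interface has to cope with agents reaching decision points at different ticks, which is exactly why synchronous MARL methods cannot be applied directly and need the kind of revision described above.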

Related research

- More Like Real World Game Challenge for Partially Observable Multi-Agent Cooperation (05/15/2023). Some standardized environments have been designed for partially observab...
- The StarCraft Multi-Agent Challenge (02/11/2019). In the last few years, deep multi-agent reinforcement learning (RL) has ...
- Normative Disagreement as a Challenge for Cooperative AI (11/27/2021). Cooperation in settings where agents have both common and conflicting in...
- Asynchronous Multi-Agent Reinforcement Learning for Efficient Real-Time Multi-Robot Cooperative Exploration (01/09/2023). We consider the problem of cooperative exploration where multiple robots...
- POGEMA: Partially Observable Grid Environment for Multiple Agents (06/22/2022). We introduce POGEMA (https://github.com/AIRI-Institute/pogema) a sandbox...
- Copiloting Autonomous Multi-Robot Missions: A Game-inspired Supervisory Control Interface (04/13/2022). Real-world deployment of new technology and capabilities can be daunting...
- Finding Friend and Foe in Multi-Agent Games (06/05/2019). Recent breakthroughs in AI for multi-agent games like Go, Poker, and Dot...
