Signal Instructed Coordination in Team Competition

by Liheng Chen, et al.

Most existing multi-agent reinforcement learning (MARL) models adopt the centralized training with decentralized execution framework. We demonstrate that the decentralized execution scheme restricts agents' capacity to find a better joint policy in team competition games, where each team of agents shares common rewards and cooperates to compete against other teams. To resolve this problem, we propose Signal Instructed Coordination (SIC), a novel coordination module that can be integrated with most existing models. SIC casts a common signal, sampled from a pre-defined distribution, to all team members, and adopts an information-theoretic regularization to encourage agents to exploit the instruction carried by the centralized signal. Our experiments show that SIC consistently improves team performance over well-recognized MARL models on matrix games and predator-prey games.
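The mechanism described above can be sketched in a few lines: a common signal is drawn from a pre-defined prior, broadcast to every agent on the team, and each agent's policy conditions on it, so the team can produce correlated joint actions even under decentralized execution. All names, sizes, and the linear policies below are hypothetical placeholders, not the paper's architecture; the information-theoretic term is shown only as a standard variational lower bound on the mutual information between the signal and the joint action.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem sizes (not from the paper).
N_AGENTS, N_ACTIONS, SIGNAL_DIM, OBS_DIM = 2, 3, 4, 5

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Stand-ins for the signal-conditioned policy networks: each agent maps
# [observation; shared signal] to action logits.
policy_w = [rng.normal(size=(N_ACTIONS, OBS_DIM + SIGNAL_DIM))
            for _ in range(N_AGENTS)]

def act(obs, z):
    """All agents condition on the SAME signal z, enabling coordination."""
    actions = []
    for w, o in zip(policy_w, obs):
        probs = softmax(w @ np.concatenate([o, z]))
        actions.append(int(rng.choice(N_ACTIONS, p=probs)))
    return actions

# One step: sample a common signal from a pre-defined prior (one-hot
# categorical here), broadcast it to the team, and act.
z = np.eye(SIGNAL_DIM)[rng.integers(SIGNAL_DIM)]
obs = [rng.normal(size=OBS_DIM) for _ in range(N_AGENTS)]
joint_action = act(obs, z)

# Information-theoretic regularizer (sketch): a learned posterior q(z | a)
# gives the per-sample variational lower-bound term
#   log q(z | a) - log p(z)  <=  I(z; a) in expectation.
disc_w = rng.normal(size=(SIGNAL_DIM, N_AGENTS * N_ACTIONS))
a_onehot = np.concatenate([np.eye(N_ACTIONS)[a] for a in joint_action])
q_z = softmax(disc_w @ a_onehot)
mi_bonus = np.log(q_z[z.argmax()]) - np.log(1.0 / SIGNAL_DIM)
```

In training, `mi_bonus` would be added to the team objective so that policies learn to make the joint action informative about (i.e. obedient to) the sampled signal; at execution time no inter-agent communication is needed, only the shared sample `z`.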




Signal Instructed Coordination in Cooperative Multi-agent Reinforcement Learning

In many real-world problems, a team of agents need to collaborate to max...

Competing Adaptive Networks

Adaptive networks have the capability to pursue solutions of global stoc...

Towards Flexible Teamwork

Many AI researchers are today striving to build agent teams for complex,...

Flatland Competition 2020: MAPF and MARL for Efficient Train Coordination on a Grid World

The Flatland competition aimed at finding novel approaches to solve the ...

The Communicative Multiagent Team Decision Problem: Analyzing Teamwork Theories and Models

Despite the significant progress in multiagent teamwork, existing resear...

Decentralized Role Assignment in Multi-Agent Teams via Empirical Game-Theoretic Analysis

We propose a method, based on empirical game theory, for a robot operati...

Cooperation without Coordination: Hierarchical Predictive Planning for Decentralized Multiagent Navigation

Decentralized multiagent planning raises many challenges, such as adapti...