
GCS: Graph-based Coordination Strategy for Multi-Agent Reinforcement Learning

by   Jingqing Ruan, et al.
King's College London

Many real-world scenarios involve a team of agents that must coordinate their policies to achieve a shared goal. Previous studies have mainly focused on decentralized control to maximize a common reward, paying little attention to coordination among control policies, which is critical in dynamic and complicated environments. In this work, we propose factorizing the joint team policy into a graph generator and a graph-based coordinated policy to enable coordinated behaviours among agents. The graph generator adopts an encoder-decoder framework that outputs directed acyclic graphs (DAGs) to capture the underlying dynamic decision structure. We also apply DAGness-constrained and DAG depth-constrained optimization in the graph generator to balance efficiency and performance. The graph-based coordinated policy exploits the generated decision structure. The graph generator and coordinated policy are trained simultaneously to maximize the discounted return. Empirical evaluations on Collaborative Gaussian Squeeze, Cooperative Navigation, and Google Research Football demonstrate the superiority of the proposed method.
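To make the DAGness constraint concrete: a standard way to enforce acyclicity on a learned adjacency matrix differentiably is the polynomial acyclicity measure h(A) = tr((I + A∘A/d)^d) − d, which is zero iff A encodes a DAG. Whether GCS uses exactly this formulation is an assumption here; the sketch below only illustrates the general idea of a DAGness penalty on an agent-to-agent decision graph.

```python
import numpy as np

def dag_penalty(A: np.ndarray) -> float:
    """Differentiable acyclicity measure h(A) = tr((I + A∘A/d)^d) - d.

    h(A) == 0 iff the weighted adjacency matrix A describes a DAG;
    larger values indicate stronger cyclic structure. A penalty of this
    form can be added to the graph generator's loss (assumption: GCS's
    exact constraint may differ).
    """
    d = A.shape[0]
    M = np.eye(d) + (A * A) / d  # elementwise square keeps entries non-negative
    return float(np.trace(np.linalg.matrix_power(M, d)) - d)

# Decision graph over 3 agents forming a chain 0 -> 1 -> 2 (a valid DAG):
dag = np.array([[0., 1., 0.],
                [0., 0., 1.],
                [0., 0., 0.]])

# Graph with a cycle 0 -> 1 -> 0 (should be penalized):
cyc = np.array([[0., 1., 0.],
                [1., 0., 0.],
                [0., 0., 0.]])

print(dag_penalty(dag))  # → 0.0
print(dag_penalty(cyc) > 0)  # → True
```

In a constrained-optimization setting, this penalty is typically driven to zero with an augmented-Lagrangian or fixed-weight term, so the generator is free to choose which agent conditions on which while remaining acyclic.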

