DeepAI

Universally Expressive Communication in Multi-Agent Reinforcement Learning

06/14/2022
by Matthew Morris et al.

Allowing agents to share information through communication is crucial for solving complex tasks in multi-agent reinforcement learning. In this work, we consider the question of whether a given communication protocol can express an arbitrary policy. By observing that many existing protocols can be viewed as instances of graph neural networks (GNNs), we demonstrate the equivalence of joint action selection to node labelling. With standard GNN approaches provably limited in their expressive capacity, we draw on the existing GNN literature and consider augmenting agent observations with: (1) unique agent IDs and (2) random noise. We provide a theoretical analysis of how these approaches yield universally expressive communication, and also prove them capable of targeting arbitrary sets of actions for identical agents. Empirically, these augmentations are found to improve performance on tasks where expressive communication is required, whilst, in general, the optimal communication protocol is found to be task-dependent.
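To make the two augmentations concrete, here is a minimal sketch, assuming agent observations are stacked row-wise in a NumPy array. The function name and dimensions are illustrative, not taken from the paper's implementation: with identical observations, a permutation-equivariant GNN must assign identical node labels (actions) to identical agents, and appending one-hot IDs or per-episode random noise breaks that symmetry.

```python
import numpy as np

def augment_observations(obs, mode="ids", noise_dim=4, rng=None):
    """Append a symmetry-breaking feature to each agent's observation.

    obs: (n_agents, obs_dim) array. If two rows are identical, a standard
    permutation-equivariant GNN communication protocol cannot give those
    agents different actions; the appended features make the rows distinct.
    """
    n = obs.shape[0]
    if mode == "ids":
        extra = np.eye(n)  # unique one-hot agent IDs
    elif mode == "noise":
        rng = rng or np.random.default_rng()
        extra = rng.normal(size=(n, noise_dim))  # fresh noise each episode
    else:
        raise ValueError(f"unknown mode: {mode}")
    return np.concatenate([obs, extra], axis=1)
```

For example, three agents with identical observations (`np.zeros((3, 2))`) yield identical GNN outputs before augmentation, but distinct rows, and hence potentially distinct actions, after either augmentation.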

