Connectivity-driven Communication in Multi-agent Reinforcement Learning through Diffusion Processes on Graphs

02/12/2020
by Emanuele Pesce, et al.

We discuss the problem of learning collaborative behaviour in multi-agent systems using deep reinforcement learning (DRL). A connectivity-driven communication (CDC) algorithm is proposed to address three key aspects: which agents to involve in the communication, what information content to share, and how often to share it. We introduce the notion of a connectivity network, modelled as a weighted graph, where nodes represent agents and edges represent the degree of connectivity between pairs of agents. The optimal graph topology is learned end-to-end, concurrently with the stochastic policy, so as to maximise future expected returns. The communication patterns depend on the graph's topology through a diffusion process on the graph, the heat kernel, which is obtained by exponentiating the Laplacian eigensystem through time and is fully differentiable. Empirical results show that CDC outperforms alternative algorithms on a range of cooperative navigation tasks.
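The heat kernel mentioned above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the toy graph, the number of agents, and the diffusion time `t` are assumptions chosen for demonstration. The kernel exp(-tL) is computed by eigendecomposing the graph Laplacian L and exponentiating its eigenvalues through time, exactly the construction the abstract refers to.

```python
import numpy as np

def heat_kernel(adjacency, t):
    """Return the heat kernel exp(-t L) of a weighted graph.

    L = D - W is the combinatorial graph Laplacian. Since L is symmetric
    positive semi-definite, exp(-t L) = U exp(-t Lambda) U^T, where
    L = U Lambda U^T is its eigendecomposition.
    """
    degrees = adjacency.sum(axis=1)
    laplacian = np.diag(degrees) - adjacency
    eigvals, eigvecs = np.linalg.eigh(laplacian)  # symmetric -> use eigh
    return eigvecs @ np.diag(np.exp(-t * eigvals)) @ eigvecs.T

# Hypothetical connectivity network over 3 agents (symmetric edge weights).
W = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 0.2],
              [0.5, 0.2, 0.0]])
K = heat_kernel(W, t=1.0)
# The constant vector lies in the Laplacian's null space, so each row of
# the heat kernel sums to 1 and can be read as a communication weighting.
print(np.allclose(K.sum(axis=1), 1.0))  # True
```

Because every step (eigendecomposition, elementwise exponential, matrix products) is differentiable, the same construction can sit inside an end-to-end trained policy, as the abstract describes.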
