How to Organize your Deep Reinforcement Learning Agents: The Importance of Communication Topology

by Dhaval Adjodah, et al.

In this empirical paper, we investigate how learning agents can be arranged in more efficient communication topologies for improved learning. This matters because a common technique for improving the speed and robustness of learning in deep reinforcement learning, and in many other machine learning algorithms, is to run multiple learning agents in parallel. The standard communication architecture typically has all agents communicate intermittently either with every other agent (a fully connected topology) or with a centralized server (a star topology). Unfortunately, optimizing the communication topology over the space of all possible graphs is a hard problem, so we borrow results from the networked optimization and collective intelligence literatures, which suggest that certain families of network topologies can yield strong improvements over fully connected networks. We begin by introducing alternative network topologies into DRL benchmark tasks under the Evolution Strategies paradigm, an approach we call Network Evolution Strategies. We compare the performance of the four main graph families and observe that one family (Erdős-Rényi random graphs) empirically outperforms all the others, including the de facto fully connected topology. The use of alternative topologies also has a multiplicative performance effect: we observe that 1000 learning agents arranged in a carefully designed communication topology can compete with 3000 agents arranged in the de facto fully connected topology. Overall, our work suggests that distributed machine learning algorithms would learn more efficiently if the communication topology between learning agents were optimized.


Communication Topologies Between Learning Agents in Deep Reinforcement Learning

A common technique to improve speed and robustness of learning in deep r...

Improved Learning in Evolution Strategies via Sparser Inter-Agent Network Topologies

We draw upon a previously largely untapped literature on human collectiv...

Social Network Structure Shapes Innovation: Experience-sharing in RL with SAPIENS

The human cultural repertoire relies on innovation: our ability to conti...

Hyperparameter Tuning in Echo State Networks

Echo State Networks represent a type of recurrent neural network with a ...

An Empirical Deep Dive into Deep Learning's Driving Dynamics

We present an empirical dataset surveying the deep learning phenomenon o...

A Systematic Approach to Constructing Families of Incremental Topology Control Algorithms Using Graph Transformation

In the communication systems domain, constructing and maintaining networ...

Dynamic communication topologies for distributed heuristics in energy system optimization algorithms

The communication topology is an essential aspect in designing distribut...
