Hoplite: Efficient Collective Communication for Task-Based Distributed Systems

02/13/2020
by Siyuan Zhuang, et al.

Collective communication systems such as MPI offer high-performance group communication primitives at the cost of application flexibility. Today, an increasing number of distributed applications (e.g., reinforcement learning) require flexibility in expressing dynamic and asynchronous communication patterns. To accommodate these applications, task-based distributed computing frameworks (e.g., Ray, Dask, Hydro) have become popular as they allow applications to dynamically specify communication by invoking tasks, or functions, at runtime. This design makes efficient collective communication challenging because (1) the group of communicating processes is chosen at runtime, and (2) processes may not all be ready at the same time. We design and implement Hoplite, a communication layer for task-based distributed systems that achieves high-performance collective communication without compromising application flexibility. The key idea of Hoplite is to use distributed protocols to compute a data transfer schedule on the fly. This enables the same optimizations used in traditional collective communication, but for applications that specify the communication incrementally. We show that Hoplite achieves performance similar to a traditional collective communication library, MPICH. We port a popular distributed computing framework, Ray, atop Hoplite. We show that Hoplite can speed up asynchronous parameter server and distributed reinforcement learning workloads that are difficult to execute efficiently with traditional collective communication by up to 8.1x and 3.9x, respectively.
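For context, the kind of dynamic, asynchronous communication the abstract describes can be expressed in Ray roughly as follows. This is a minimal sketch of an asynchronous parameter-server workload using Ray's public API (ray.remote, ray.get); the names ParameterServer, apply_gradient, and worker are illustrative, not part of Hoplite or Ray, and Hoplite itself sits beneath Ray's object transfer layer rather than appearing in application code.

```python
import numpy as np
import ray

ray.init()

@ray.remote
class ParameterServer:
    """Hypothetical actor holding the model parameters."""
    def __init__(self, dim):
        self.params = np.zeros(dim)

    def apply_gradient(self, grad):
        # Updates arrive whenever a worker finishes; there is no fixed
        # communication group and no global synchronization barrier,
        # which is what makes static MPI-style collectives a poor fit.
        self.params += grad
        return self.params

@ray.remote
def worker(ps, dim):
    # Each worker repeatedly pushes a gradient and pulls fresh parameters
    # at its own pace; the set of communicating processes emerges at runtime.
    params = None
    for _ in range(10):
        grad = np.random.randn(dim)  # stand-in for a real gradient
        params = ray.get(ps.apply_gradient.remote(grad))
    return params

dim = 1000
ps = ParameterServer.remote(dim)
results = ray.get([worker.remote(ps, dim) for _ in range(4)])
```

In this pattern the parameter transfers are specified incrementally, one task invocation at a time; a communication layer like Hoplite would schedule and optimize these transfers on the fly instead of requiring the application to declare a collective up front.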


