Straggler-aware Distributed Learning: Communication-Computation Latency Trade-off

04/10/2020
by Emre Ozfatura et al.

When gradient descent (GD) is scaled to many parallel workers for large-scale machine learning problems, its per-iteration computation time is limited by the straggling workers. Stragglers can be tolerated by assigning redundant computations and coding across data and computations, but in most existing schemes, each non-straggling worker transmits one message per iteration to the parameter server (PS), after completing all of its computations. Imposing such a limitation results in two main drawbacks: over-computation due to inaccurate prediction of the straggling behaviour, and under-utilization due to treating each worker as either a straggler or a non-straggler and discarding the partial computations carried out by stragglers. In this paper, to overcome these drawbacks, we consider multi-message communication (MMC), which allows each worker to convey multiple computations to the PS per iteration, and we design straggler-avoidance techniques accordingly. We then analyze how the proposed designs can be employed to strike a balance between computation latency and communication latency, and thereby minimize the overall latency. Furthermore, through extensive model-based simulations and a real implementation on Amazon EC2 servers, we identify the advantages and disadvantages of these designs in different settings, and demonstrate that MMC can improve upon existing straggler-avoidance schemes.
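A minimal sketch for concreteness (an illustration, not the paper's implementation): the simulation below compares the per-iteration latency of a single-message scheme, where each worker reports only after finishing all of its assigned partitions, against MMC, where every finished partial computation is sent to the PS immediately. The uncoded cyclic assignment, the shifted-exponential computation-time model, and all parameter values (n_workers, n_parts, r, comm) are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(1)
n_workers, n_parts, r = 8, 8, 2   # workers, data partitions, redundancy per worker
comm = 0.1                        # assumed fixed per-message communication delay

def one_iteration():
    # Shifted-exponential computation time per assigned partition,
    # a common model for straggling behaviour.
    comp = 0.5 + rng.exponential(1.0, size=(n_workers, r))
    # Cyclic assignment: worker w computes partitions w, w+1, ..., w+r-1 (mod n_parts).
    assign = [[(w + j) % n_parts for j in range(r)] for w in range(n_workers)]

    # Single-message: one message per worker, sent after all r computations finish.
    single = [(comp[w].sum() + comm, assign[w]) for w in range(n_workers)]
    # MMC: one message per finished partition, so the PS can also use
    # partial work carried out by workers that never finish all their tasks.
    mmc = [(comp[w, :j + 1].sum() + comm, [assign[w][j]])
           for w in range(n_workers) for j in range(r)]

    def latency(events):
        # The PS stops once every partition has been received at least once,
        # i.e. once the full gradient can be recovered.
        covered = set()
        for t, parts in sorted(events):
            covered.update(parts)
            if len(covered) == n_parts:
                return t
        return float("inf")

    return latency(single), latency(mmc)

avg = np.mean([one_iteration() for _ in range(10000)], axis=0)
print(f"average per-iteration latency: single-message {avg[0]:.2f}, MMC {avg[1]:.2f}")

With a small comm, MMC finishes earlier on average because partial work from slow workers still counts; as comm grows, the cost of MMC's extra messages takes over, which is the communication-computation latency trade-off the paper analyzes.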

Related research

03/05/2019  Gradient Coding with Clustering and Multi-message Communication
08/07/2018  Speeding Up Distributed Gradient Descent by Utilizing Non-persistent Stragglers
10/23/2018  Computation Scheduling for Distributed Machine Learning with Straggling Workers
11/22/2018  Distributed Gradient Descent with Coded Partial Gradient Computations
03/01/2021  Gradient Coding with Dynamic Clustering for Straggler-Tolerant Distributed Learning
03/23/2023  Trading Communication for Computation in Byzantine-Resilient Gradient Coding
03/08/2018  TicTac: Accelerating Distributed Deep Learning with Communication Scheduling
