TicTac: Accelerating Distributed Deep Learning with Communication Scheduling

03/08/2018
by Sayed Hadi Hashemi, et al.

State-of-the-art deep learning systems rely on iterative distributed training to tackle the increasing complexity of models and input data. The iteration time in these communication-heavy systems depends on the computation time, the communication time, and the extent to which computation and communication overlap. In this work, we identify a shortcoming of systems that represent computation as a graph, such as TensorFlow and PyTorch, which results in high variance in iteration time: the random order in which parameters are received across workers. We develop a system, TicTac, that improves the iteration time by fixing this issue in distributed deep learning with Parameter Servers while guaranteeing near-optimal overlap of communication and computation. TicTac identifies and enforces an order of network transfers that improves the iteration time through prioritization. Our system is implemented over TensorFlow and requires no changes to the model or developer inputs. TicTac improves throughput by up to 37.7% in inference and 19.2% in training, while also reducing the straggler effect by up to 2.3×. Our code is publicly available.
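
As a rough illustration of the prioritization idea, the Python sketch below orders parameter transfers by the step at which the computation graph first consumes each tensor, so early layers can begin computing while later parameters are still in flight. This is not the TicTac algorithm or the TensorFlow API; the function name schedule_transfers and the "earliest consumer first" heuristic are assumptions made only for this example.

# Hypothetical sketch of priority-based transfer ordering (not the TicTac
# implementation). Parameters consumed earliest in the computation graph
# are transferred first, improving computation/communication overlap.
import heapq

def schedule_transfers(first_use_step):
    """Return parameter names in transfer order, keyed on the step at
    which each parameter is first used by the computation graph."""
    heap = [(step, name) for name, step in first_use_step.items()]
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)
        order.append(name)
    return order

if __name__ == "__main__":
    # Toy model: layer1 weights are consumed at step 0, layer2 at step 1, ...
    first_use = {"layer3/w": 2, "layer1/w": 0, "layer2/w": 1}
    print(schedule_transfers(first_use))  # ['layer1/w', 'layer2/w', 'layer3/w']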

