MG-WFBP: Merging Gradients Wisely for Efficient Communication in Distributed Deep Learning
Distributed synchronous stochastic gradient descent has been widely used to train deep neural networks (DNNs) on computer clusters. With the increase of computational power, network communications generally limit the system scalability. Wait-free backpropagation (WFBP) is a popular solution that overlaps communications with computations during the training process. In this paper, we observe that many DNNs have a large number of layers with only a small amount of data to be communicated at each layer in distributed training, which can make WFBP inefficient. Based on the fact that merging several short communication tasks into a single one reduces the overall communication time, we formulate an optimization problem to minimize the training time when pipelining communications and computations. We derive an optimal solution that can be computed efficiently without affecting the training performance. We then apply this solution in a distributed training algorithm named merged-gradient WFBP (MG-WFBP) and implement it on two platforms, Caffe and PyTorch. Extensive experiments on three GPU clusters verify the effectiveness of MG-WFBP. We further use trace-based simulation of 64 GPUs to explore the potential scaling efficiency of MG-WFBP. Experimental results show that MG-WFBP achieves much better scaling performance than existing methods.
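The merging idea at the heart of MG-WFBP can be illustrated with a short sketch. The PyTorch code below is a minimal, hypothetical illustration, not the authors' implementation: small per-layer gradient tensors are concatenated into one buffer so that a single all-reduce replaces many small ones, amortizing the per-message startup latency. The fixed element threshold and the helper names (MERGE_THRESHOLD, merged_allreduce) are illustrative assumptions; the paper instead derives the optimal merging from a model of the layer-wise computation and communication times.

import torch
import torch.distributed as dist

MERGE_THRESHOLD = 2 ** 20  # elements per merged bucket (illustrative value, not from the paper)

def _allreduce_bucket(bucket):
    """Merge a list of gradient tensors into one buffer and all-reduce it once."""
    flat = torch.cat([g.reshape(-1) for g in bucket])   # one large message instead of many small ones
    dist.all_reduce(flat, op=dist.ReduceOp.SUM)
    flat.div_(dist.get_world_size())                    # average across workers
    offset = 0
    for g in bucket:                                     # copy the averaged values back in place
        n = g.numel()
        g.copy_(flat[offset:offset + n].reshape_as(g))
        offset += n

def merged_allreduce(grads, threshold=MERGE_THRESHOLD):
    """All-reduce gradients, merging small tensors to amortize communication startup latency."""
    bucket, size = [], 0
    for g in grads:                                      # gradients in the order backpropagation produces them
        bucket.append(g)
        size += g.numel()
        if size >= threshold:
            _allreduce_bucket(bucket)
            bucket, size = [], 0
    if bucket:                                           # flush the remaining partial bucket
        _allreduce_bucket(bucket)

In the actual algorithm, merging decisions are made while backpropagation is still producing gradients, so the merged communications remain pipelined with the remaining backward computation rather than deferred to the end of the pass.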