Sparse Communication for Training Deep Networks
Synchronous stochastic gradient descent (SGD) is the most common method used for distributed training of deep learning models. In this algorithm, each worker shares its local gradients with the others and updates the parameters using the gradients averaged over all workers. Although distributed training reduces computation time, the communication overhead of the gradient exchange forms a scalability bottleneck for the algorithm. Many compression techniques have been proposed to reduce the number of gradients that need to be communicated. However, compressing the gradients introduces overhead of its own. In this work, we study several compression schemes and identify how three key parameters affect their performance. We also provide a set of insights on how to increase performance and introduce a simple sparsification scheme, random-block sparsification, that reduces communication while keeping accuracy close to that of standard SGD.
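The abstract does not spell out the mechanics of random-block sparsification. A minimal sketch, assuming each worker communicates one randomly chosen contiguous block of its flattened gradient per iteration and the receivers treat the unsent entries as zero, might look like the following (the function names `random_block_sparsify` and `aggregate_blocks` and the `block_size` parameter are illustrative, not taken from the paper):

```python
import numpy as np

def random_block_sparsify(grad, block_size, rng):
    """Pick one random contiguous block of the flattened gradient.

    Returns the block's start offset and its dense values; every other
    entry is treated as zero by the receivers. Only (offset, block) needs
    to be communicated, so the payload shrinks from grad.size to block_size.
    (Illustrative sketch, not the paper's exact interface.)
    """
    flat = grad.ravel()
    start = int(rng.integers(0, flat.size - block_size + 1))
    return start, flat[start:start + block_size].copy()

def aggregate_blocks(shape, contributions, num_workers):
    """Average the sparse contributions from all workers into a dense gradient."""
    dense = np.zeros(int(np.prod(shape)))
    for start, block in contributions:
        dense[start:start + block.size] += block
    return (dense / num_workers).reshape(shape)

# Toy usage: 4 workers, each sends one random block of its local gradient.
rng = np.random.default_rng(0)
shape, block_size, num_workers = (1000,), 100, 4
local_grads = [rng.normal(size=shape) for _ in range(num_workers)]
contributions = [random_block_sparsify(g, block_size, rng) for g in local_grads]
avg_grad = aggregate_blocks(shape, contributions, num_workers)
```

Because each worker sends a single contiguous block, only one offset plus the block values cross the network, which keeps the compression step itself cheap compared with per-element top-k selection; how closely this matches the paper's scheme and its three key parameters is only confirmed in the full text.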