Variance-based Gradient Compression for Efficient Distributed Deep Learning

02/16/2018
by Yusuke Tsuzuku et al.

Due to the substantial computational cost, training state-of-the-art deep neural networks on large-scale datasets often requires distributed training with multiple computation workers. However, workers inherently need to communicate gradients frequently, causing severe bottlenecks, especially over lower-bandwidth connections. A few methods have been proposed to compress gradients for efficient communication, but they either suffer from a low compression ratio or significantly harm the resulting model accuracy, particularly when applied to convolutional neural networks. To address these issues, we propose a method to reduce the communication overhead of distributed deep learning. Our key observation is that gradient updates can be delayed until an unambiguous (high amplitude, low variance) gradient has been calculated. We also present an efficient algorithm to compute the variance with negligible additional cost. We experimentally show that our method can achieve a very high compression ratio while maintaining the resulting model accuracy. We also analyze the efficiency using computation and communication cost models and provide evidence that this method enables distributed deep learning in many scenarios with commodity environments.
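As a rough illustration of the idea, the sketch below shows one plausible form of variance-based gradient sparsification: for each parameter coordinate it compares the squared mean gradient (including any previously delayed gradient) against its estimated sampling variance, transmits only the coordinates that look unambiguous, and delays the rest in a residual buffer for later rounds. This is a minimal NumPy sketch written from the abstract alone, not the authors' exact algorithm; the function name compress_gradient, the threshold parameter zeta, and the micro-batch layout of grad_samples are assumptions made for illustration.

import numpy as np

def compress_gradient(grad_samples, residual, zeta=1.0):
    # grad_samples: (k, d) array of per-micro-batch gradients for a
    #               d-dimensional parameter (hypothetical layout).
    # residual:     (d,) array of gradient delayed from earlier rounds.
    # zeta:         threshold trading compression ratio against fidelity
    #               (an assumed knob, not the paper's notation).
    k, _ = grad_samples.shape
    mean = grad_samples.mean(axis=0) + residual    # include delayed gradient
    var = grad_samples.var(axis=0, ddof=1)         # sample variance across micro-batches
    # Transmit a coordinate only when its amplitude clearly exceeds its
    # statistical noise, i.e. it is "unambiguous" (high amplitude, low variance).
    send = mean ** 2 > zeta * var / k
    update = np.where(send, mean, 0.0)             # sparse part sent to other workers
    new_residual = np.where(send, 0.0, mean)       # rest is delayed until unambiguous
    return update, new_residual

# Toy usage: ambiguous coordinates accumulate across rounds until they are sent.
rng = np.random.default_rng(0)
residual = np.zeros(8)
for _ in range(3):
    grads = rng.normal(loc=0.1, scale=1.0, size=(4, 8))
    update, residual = compress_gradient(grads, residual)

In this sketch the residual buffer plays the role of "delaying" an update: a coordinate whose mean is still dominated by noise is carried forward and re-tested with more samples, which is what lets the transmitted update stay sparse without discarding gradient information.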


Related research

12/05/2017
Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training
Large-scale distributed training requires significant communication band...

03/16/2021
Learned Gradient Compression for Distributed Deep Learning
Training deep neural networks on large datasets containing high-dimensio...

03/12/2019
Communication-efficient distributed SGD with Sketching
Large-scale distributed training of neural networks is often limited by ...

03/05/2021
Pufferfish: Communication-efficient Models At No Extra Cost
To mitigate communication overheads in distributed model training, sever...

02/05/2021
DeepReduce: A Sparse-tensor Communication Framework for Distributed Deep Learning
Sparse tensors appear frequently in distributed deep learning, either as...

04/03/2023
SparDL: Distributed Deep Learning Training with Efficient Sparse Communication
Top-k sparsification has recently been widely used to reduce the communi...

02/18/2021
Efficient Distributed Auto-Differentiation
Although distributed machine learning has opened up numerous frontiers o...
