SASG: Sparsification with Adaptive Stochastic Gradients for Communication-efficient Distributed Learning

12/08/2021
by Xiaoge Deng, et al.

Stochastic optimization algorithms implemented on distributed computing architectures are increasingly used to tackle large-scale machine learning applications. A key bottleneck in such distributed systems is the communication overhead of exchanging information such as stochastic gradients between workers. Sparse communication with memory and adaptive aggregation are two successful frameworks among the many techniques proposed to address this issue. In this paper, we combine the advantages of Sparse communication and Adaptive aggregated Stochastic Gradients to design a communication-efficient distributed algorithm named SASG. Specifically, we first determine which workers need to communicate based on the adaptive aggregation rule and then sparsify the transmitted information. Our algorithm therefore reduces both the number of communication rounds and the number of communication bits in the distributed system. We define an auxiliary sequence and establish convergence results for the algorithm via a Lyapunov function analysis. Experiments on training deep neural networks show that our algorithm significantly reduces the number of communication rounds and bits compared to previous methods, with little or no impact on training and testing accuracy.
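The abstract does not include pseudocode, but the two ingredients it names can be illustrated with a short sketch: an adaptive rule that lets a worker skip a communication round, followed by sparsification with error memory on the rounds it does communicate. The snippet below is a minimal sketch only; the skipping criterion, the top-k sparsifier, and names such as `Worker`, `step`, and `threshold` are illustrative assumptions, not the paper's actual algorithm or code.

```python
# Illustrative sketch: adaptive skipping + top-k sparsification with memory.
# The skipping rule and all names here are assumptions, not the paper's code.
import numpy as np


def top_k_sparsify(v, k):
    """Keep the k largest-magnitude entries of v; return (sparse, residual)."""
    idx = np.argpartition(np.abs(v), -k)[-k:]
    sparse = np.zeros_like(v)
    sparse[idx] = v[idx]
    return sparse, v - sparse


class Worker:
    def __init__(self, dim, k, threshold):
        self.k = k                      # number of coordinates to transmit
        self.threshold = threshold      # skipping threshold (assumed rule)
        self.memory = np.zeros(dim)     # error-feedback memory
        self.last_sent = np.zeros(dim)  # gradient from the last communicated round

    def step(self, grad):
        """Return a sparse message to send, or None to skip this round."""
        # Adaptive aggregation (simplified stand-in): skip communication if the
        # new gradient is close to the one communicated previously.
        if np.linalg.norm(grad - self.last_sent) ** 2 <= self.threshold:
            return None
        # Sparsify the memory-corrected gradient; keep the residual locally.
        corrected = grad + self.memory
        message, self.memory = top_k_sparsify(corrected, self.k)
        self.last_sent = grad
        return message


# Toy usage: one worker minimizing 0.5 * ||x||^2 with noisy gradients.
rng = np.random.default_rng(0)
worker = Worker(dim=10, k=3, threshold=1e-3)
x = rng.normal(size=10)
for _ in range(5):
    g = x + 0.01 * rng.normal(size=10)   # noisy stochastic gradient
    msg = worker.step(g)
    if msg is not None:
        x -= 0.1 * msg                   # server applies the sparse update
```

A skipped round costs no communication at all, while a communicated round transmits only k coordinates, which is how the two mechanisms jointly reduce both communication rounds and communication bits.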
