Faster Asynchronous SGD

01/15/2016
by Augustus Odena, et al.

Asynchronous distributed stochastic gradient descent methods have trouble converging because of stale gradients. A gradient update sent to a parameter server by a client is stale if the parameters used to calculate that gradient have since been updated on the server. Approaches that quantify staleness in terms of the number of elapsed updates have been proposed to circumvent this problem. In this work, we propose a novel method that quantifies staleness in terms of moving averages of gradient statistics. We show that this method outperforms previous methods with respect to convergence speed and scalability to many clients. We also discuss how an extension to this method can be used to dramatically reduce bandwidth costs in a distributed training context. In particular, our method allows a reduction of total bandwidth usage by a factor of 5 with little impact on cost convergence. We also describe (and link to) a software library that we have used to simulate these algorithms deterministically on a single machine.
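The staleness heuristic at the heart of the method lends itself to a short sketch. The Python snippet below is a minimal illustration and not the paper's library: the class name, the use of the squared gradient norm as the tracked statistic, and the min-based weighting rule are all assumptions made for illustration. It shows how a parameter server might down-weight an incoming client gradient by comparing it against an exponential moving average of recent gradient statistics, rather than by counting elapsed updates.

import numpy as np

class ParameterServer:
    """Minimal sketch of staleness-aware asynchronous SGD (illustrative only).

    Each incoming gradient is compared against an exponential moving
    average of recent gradient statistics (here, the squared gradient
    norm). Gradients that look unusually large relative to that running
    statistic are treated as likely stale and down-weighted.
    """

    def __init__(self, dim, lr=0.01, decay=0.9):
        self.params = np.zeros(dim)
        self.lr = lr
        self.decay = decay
        self.moving_sq_norm = None  # moving average of squared gradient norms

    def apply_gradient(self, grad):
        sq_norm = float(np.dot(grad, grad))
        if self.moving_sq_norm is None:
            # First gradient: nothing to compare against yet.
            self.moving_sq_norm = sq_norm
            weight = 1.0
        else:
            # Down-weight gradients whose magnitude is large relative to the
            # moving average, a crude proxy for staleness (assumed rule,
            # chosen only for illustration).
            weight = min(1.0, self.moving_sq_norm / (sq_norm + 1e-12))
            self.moving_sq_norm = (
                self.decay * self.moving_sq_norm + (1.0 - self.decay) * sq_norm
            )
        self.params -= self.lr * weight * grad
        return self.params

# Hypothetical usage: two simulated clients send gradients asynchronously.
server = ParameterServer(dim=3)
for grad in [np.array([0.1, -0.2, 0.05]), np.array([1.5, 2.0, -1.0])]:
    server.apply_gradient(grad)
print(server.params)

The paper defines the exact statistics, weighting function, and bandwidth-saving extension; the sketch above only conveys the general shape of replacing an update counter with running gradient statistics.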


Related research

06/08/2019 - Making Asynchronous Stochastic Gradient Descent Work for Transformers
Asynchronous stochastic gradient descent (SGD) is attractive from a spee...

09/29/2019 - Distributed SGD Generalizes Well Under Asynchrony
The performance of fully synchronized distributed systems has faced a bo...

05/14/2020 - OD-SGD: One-step Delay Stochastic Gradient Descent for Distributed Training
The training of modern deep learning neural network calls for large amou...

02/20/2022 - Personalized Federated Learning with Exact Stochastic Gradient Descent
In Federated Learning (FL), datasets across clients tend to be heterogen...

04/23/2021 - Decentralized Federated Averaging
Federated averaging (FedAvg) is a communication efficient algorithm for ...

07/26/2019 - Taming Momentum in a Distributed Asynchronous Environment
Although distributed computing can significantly reduce the training tim...

10/04/2020 - Feature Whitening via Gradient Transformation for Improved Convergence
Feature whitening is a known technique for speeding up training of DNN. ...
