Acceleration of stochastic methods on the example of decentralized SGD

11/15/2020
by Ekaterina Trimbach, et al.

In this paper, we present an algorithm for accelerating decentralized stochastic gradient descent. Decentralized stochastic optimization methods have recently attracted considerable attention, mainly due to their low iteration cost, data locality, and communication efficiency; they generalize algorithms such as SGD and Local SGD. A further contribution of this work is an extension of the analysis of accelerated stochastic methods that makes acceleration achievable in the decentralized case.
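The abstract does not spell out the algorithm itself. As a rough illustration of the decentralized SGD baseline that the paper accelerates, the sketch below shows plain D-SGD with gossip averaging over a doubly stochastic mixing matrix `W`: each worker takes a local stochastic gradient step and then averages its iterate with its neighbours. All names, the toy objective, and the parameter choices are assumptions for illustration, not the paper's accelerated method.

```python
import numpy as np

def decentralized_sgd(grad, x0, W, n_steps, lr, rng=None):
    """Minimal decentralized SGD sketch (not the paper's accelerated variant).

    grad(i, x, rng) -> stochastic gradient of worker i's local objective at x
    x0              -> (n_workers, dim) array, one initial iterate per worker
    W               -> (n_workers, n_workers) doubly stochastic mixing matrix
    """
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    n_workers = x.shape[0]
    for _ in range(n_steps):
        # 1) local stochastic gradient step on every worker
        g = np.stack([grad(i, x[i], rng) for i in range(n_workers)])
        x = x - lr * g
        # 2) gossip step: each worker averages iterates with its neighbours via W
        x = W @ x
    return x.mean(axis=0)  # average of the workers' final iterates

# Toy usage: each worker i holds a noisy quadratic f_i(x) = 0.5 * ||x - b_i||^2
n, d = 4, 3
b = np.arange(n * d, dtype=float).reshape(n, d)
grad = lambda i, x, rng: (x - b[i]) + 0.1 * rng.standard_normal(d)
W = np.full((n, n), 1.0 / n)  # fully connected topology, uniform averaging
x_hat = decentralized_sgd(grad, np.zeros((n, d)), W, n_steps=500, lr=0.05)
```

With a fully connected mixing matrix this reduces to parallel SGD with exact averaging; the decentralized setting of interest replaces `W` with a sparser matrix matching the communication graph, which is where the accelerated analysis in the paper comes into play.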


