Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization

02/26/2020
by Zhize Li, et al.

Due to the high communication cost in distributed and federated learning problems, methods relying on compression of communicated messages are becoming increasingly popular. While in other contexts the best performing gradient-type methods invariably rely on some form of acceleration/momentum to reduce the number of iterations, there are no methods which combine the benefits of both gradient compression and acceleration. In this paper, we remedy this situation and propose the first accelerated compressed gradient descent (ACGD) methods. In the single machine regime, we prove that ACGD enjoys the rate O((1+ω)√(L/μ) log(1/ϵ)) for μ-strongly convex problems and O((1+ω)√(L/ϵ)) for convex problems, respectively, where L is the smoothness constant and ω is the compression parameter. Our results improve upon the existing non-accelerated rates O((1+ω)(L/μ) log(1/ϵ)) and O((1+ω)L/ϵ), respectively, and recover the optimal rates of accelerated gradient descent as a special case when no compression (ω=0) is applied. We further propose a distributed variant of ACGD (called ADIANA) and prove the convergence rate O(ω + √(L/μ) + √((ω/n + √(ω/n)) ωL/μ)), where n is the number of devices/workers and O hides the logarithmic factor log(1/ϵ). This improves upon the previous best result O(ω + L/μ + ωL/(nμ)) achieved by the DIANA method of Mishchenko et al. (2019). Finally, we conduct several experiments on real-world datasets which corroborate our theoretical results and confirm the practical superiority of our methods.
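To make the roles of the compression parameter ω and of acceleration concrete, below is a minimal single-machine sketch, not the paper's exact ACGD or ADIANA algorithm. It uses random-k sparsification as the unbiased compressor, for which ω = d/k − 1 when k of the d coordinates are kept, together with a Nesterov-style momentum step; the function names, step-size choice, and momentum parameter are illustrative assumptions rather than the authors' tuning.

```python
import numpy as np

def rand_k_compress(v, k):
    """Unbiased random-k sparsification: keep k of the d coordinates, rescaled by d/k.
    Satisfies E[C(v)] = v and E||C(v) - v||^2 <= omega * ||v||^2 with omega = d/k - 1."""
    d = v.size
    idx = np.random.choice(d, size=k, replace=False)
    out = np.zeros_like(v)
    out[idx] = v[idx] * (d / k)
    return out

def accelerated_compressed_gd(grad, x0, L, mu, k, iters=1000):
    """Illustrative Nesterov-style loop where the gradient is sent through an
    unbiased compressor (a sketch under assumed parameters, not the paper's ACGD)."""
    x = x0.copy()
    y = x0.copy()
    theta = np.sqrt(mu / L)              # momentum parameter for the strongly convex case
    omega = x0.size / k - 1              # variance parameter of the rand-k compressor
    eta = 1.0 / (L * (1 + omega))        # conservative step size absorbing compression noise
    for _ in range(iters):
        g = rand_k_compress(grad(y), k)                        # compressed, cheap-to-communicate gradient
        x_next = y - eta * g                                   # gradient step from the extrapolated point
        y = x_next + (1 - theta) / (1 + theta) * (x_next - x)  # momentum / extrapolation
        x = x_next
    return x

# Usage on a strongly convex quadratic f(x) = 0.5 * ||A x - b||^2 + 0.5 * mu * ||x||^2
rng = np.random.default_rng(0)
A, b, mu = rng.standard_normal((50, 20)), rng.standard_normal(50), 0.1
L = np.linalg.norm(A, 2) ** 2 + mu
grad = lambda x: A.T @ (A @ x - b) + mu * x
x_est = accelerated_compressed_gd(grad, np.zeros(20), L, mu, k=5)
```

With k = d (no compression, ω = 0) the loop reduces to plain accelerated gradient descent, matching the abstract's remark that the optimal accelerated rates are recovered as a special case.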

research · 07/20/2021 · CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression
Due to the high communication cost in distributed and federated learning...

research · 09/14/2023 · Acceleration by Stepsize Hedging I: Multi-Step Descent and the Silver Stepsize Schedule
Can we accelerate convergence of gradient descent without changing the a...

research · 02/25/2020 · Statistically Preconditioned Accelerated Gradient Method for Distributed Optimization
We consider the setting of distributed empirical risk minimization where...

research · 05/09/2023 · Accelerated gradient descent method for functionals of probability measures by new convexity and smoothness based on transport maps
We consider problems of minimizing functionals ℱ of probability measures...

research · 07/26/2021 · Accelerated Gradient Descent Learning over Multiple Access Fading Channels
We consider a distributed learning problem in a wireless network, consis...

research · 05/08/2022 · Federated Random Reshuffling with Compression and Variance Reduction
Random Reshuffling (RR), which is a variant of Stochastic Gradient Desce...
