Federated Learning with Compression: Unified Analysis and Sharp Guarantees

07/02/2020
by Farzin Haddadpour, et al.

In federated learning, communication cost is often a critical bottleneck in scaling distributed optimization algorithms to collaboratively learn a model from millions of devices with potentially unreliable or limited communication links and heterogeneous data distributions. Two notable approaches to reducing the communication overhead of federated algorithms are gradient compression and local computation with periodic communication. Despite many attempts, characterizing the relationship between these two approaches has proven elusive. We address this by proposing a set of algorithms with periodic compressed (quantized or sparsified) communication and analyzing their convergence properties in both homogeneous and heterogeneous local data distribution settings. For the homogeneous setting, our analysis improves upon existing bounds by providing tighter convergence rates for both strongly convex and non-convex objective functions. To mitigate data heterogeneity, we introduce a local gradient tracking scheme and obtain sharp convergence rates that match the best-known communication complexities without compression for convex, strongly convex, and non-convex settings. We complement our theoretical results with several experiments on real-world datasets that demonstrate the effectiveness of the proposed methods.
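As a concrete illustration of the general pattern of local computation with periodically compressed communication, here is a minimal NumPy sketch. It is not the paper's algorithms: it assumes a top-k sparsification operator, synthetic quadratic local objectives, and full local gradients, and it omits the gradient-tracking correction described above. All names (top_k, compressed_local_sgd, etc.) are illustrative.

```python
# Minimal sketch of local updates with periodically compressed communication.
# Assumptions (not from the paper): top-k sparsification of the model delta,
# quadratic local losses, full (non-stochastic) local gradients.
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude entries of v; zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def local_grad(x, A, b):
    """Gradient of the local loss (1/2m) * ||A x - b||^2."""
    return A.T @ (A @ x - b) / len(b)

def compressed_local_sgd(As, bs, rounds=50, local_steps=10, lr=0.2, k=5):
    """Each worker runs `local_steps` gradient steps from the global model,
    compresses its model delta with top-k, and the server averages the
    compressed deltas to update the global model."""
    d = As[0].shape[1]
    x_global = np.zeros(d)
    for _ in range(rounds):
        deltas = []
        for A, b in zip(As, bs):
            x = x_global.copy()
            for _ in range(local_steps):
                x -= lr * local_grad(x, A, b)
            deltas.append(top_k(x - x_global, k))   # compressed upload
        x_global += np.mean(deltas, axis=0)          # server aggregation
    return x_global

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n_workers = 20, 4
    As = [rng.standard_normal((30, d)) for _ in range(n_workers)]
    x_star = rng.standard_normal(d)
    bs = [A @ x_star for A in As]  # heterogeneous data, consistent optimum
    x_hat = compressed_local_sgd(As, bs)
    print("distance to optimum:", np.linalg.norm(x_hat - x_star))
```

The compression here acts on the accumulated model delta rather than on individual gradients, so only k coordinates per worker are communicated each round; quantization could be substituted for top_k without changing the overall structure.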
