Achieving Linear Convergence in Federated Learning under Objective and Systems Heterogeneity

02/25/2021
by George J. Pappas, et al.

We consider a standard federated learning architecture in which a group of clients periodically coordinates with a central server to train a statistical model. We tackle two major challenges in federated learning: (i) objective heterogeneity, which stems from differences in the clients' local loss functions, and (ii) systems heterogeneity, which arises from slow and straggling client devices. Under such client heterogeneity, we show that existing federated learning algorithms suffer from a fundamental speed-accuracy conflict: they either converge linearly, but to an incorrect point, or converge to the global minimum, but at a sub-linear rate; that is, fast convergence comes at the expense of accuracy. To address this limitation, we propose FedLin, a simple new algorithm that exploits past gradients and employs client-specific learning rates. When the clients' local loss functions are smooth and strongly convex, we show that FedLin guarantees linear convergence to the global minimum. We then establish matching upper and lower bounds on the convergence rate of FedLin that highlight the trade-offs associated with infrequent, periodic communication. Notably, FedLin is the only approach that matches centralized convergence rates (up to constants) for smooth strongly convex, convex, and non-convex loss functions, despite arbitrary objective and systems heterogeneity. We further show that FedLin preserves linear convergence rates under aggressive gradient sparsification, and we quantify the effect of the compression level on the convergence rate.
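The paper's pseudocode is not reproduced on this page, so the following is only a minimal sketch of the mechanism the abstract describes: client-specific learning rates and local step counts (to absorb systems heterogeneity), plus a correction term built from gradients cached at the last shared iterate (to absorb objective heterogeneity). The names QuadClient, fedlin_round, top_k, and all parameter choices below are illustrative assumptions, not the authors' implementation.

import numpy as np

# Hypothetical client with a smooth, strongly convex quadratic loss
# f_k(x) = 0.5 x^T A_k x - b_k^T x, so grad f_k(x) = A_k x - b_k.
class QuadClient:
    def __init__(self, A, b):
        self.A, self.b = A, b

    def grad(self, x):
        return self.A @ x - self.b

def fedlin_round(x_bar, clients, etas, local_steps):
    """One communication round of a FedLin-style, gradient-corrected update (sketch)."""
    # Full gradient at the shared iterate x_bar, broadcast by the server.
    g_bar = np.mean([c.grad(x_bar) for c in clients], axis=0)
    finals = []
    for c, eta_k, tau_k in zip(clients, etas, local_steps):
        x = x_bar.copy()
        g_k = c.grad(x_bar)  # the client's "past gradient", cached at x_bar
        for _ in range(tau_k):
            # The correction (g_bar - g_k) steers each local trajectory
            # toward the global descent direction, removing client drift.
            x = x - eta_k * (c.grad(x) - g_k + g_bar)
        finals.append(x)
    # The server averages the clients' final local iterates.
    return np.mean(finals, axis=0)

def top_k(v, k):
    """Keep only the k largest-magnitude entries of v (gradient sparsification)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

# Toy run: two clients with different curvatures (objective heterogeneity)
# and different learning rates / local step counts (systems heterogeneity).
rng = np.random.default_rng(0)
d = 5
clients = [QuadClient(np.diag(rng.uniform(1, 5, d)), rng.standard_normal(d))
           for _ in range(2)]
x = np.zeros(d)
for _ in range(50):
    x = fedlin_round(x, clients, etas=[0.05, 0.2], local_steps=[10, 3])

With the correction term zeroed out, the same loop reduces to FedAvg-style local gradient descent, which is exactly the regime where the speed-accuracy conflict described above appears.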
