Achieving Linear Convergence in Federated Learning under Objective and Systems Heterogeneity

02/25/2021
by George J. Pappas, et al.

We consider a standard federated learning architecture where a group of clients periodically coordinate with a central server to train a statistical model. We tackle two major challenges in federated learning: (i) objective heterogeneity, which stems from differences in the clients' local loss functions, and (ii) systems heterogeneity, which leads to slow and straggling client devices. Due to such client heterogeneity, we show that existing federated learning algorithms suffer from a fundamental speed-accuracy conflict: they either guarantee linear convergence but to an incorrect point, or convergence to the global minimum but at a sub-linear rate, i.e., fast convergence comes at the expense of accuracy. To address the above limitation, we propose FedLin, a simple new algorithm that exploits past gradients and employs client-specific learning rates. When the clients' local loss functions are smooth and strongly convex, we show that FedLin guarantees linear convergence to the global minimum. We then establish matching upper and lower bounds on the convergence rate of FedLin that highlight the trade-offs associated with infrequent, periodic communication. Notably, FedLin is the only approach that is able to match centralized convergence rates (up to constants) for smooth strongly convex, convex, and non-convex loss functions despite arbitrary objective and systems heterogeneity. We further show that FedLin preserves linear convergence rates under aggressive gradient sparsification, and quantify the effect of the compression level on the convergence rate.
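The algorithmic ingredients named in the abstract (reusing the global gradient from the start of each round and scaling each client's step size by its number of local steps) can be made concrete in a few lines. Below is a minimal NumPy sketch of one FedLin-style communication round with optional top-k gradient sparsification; the function names, the base step size eta, and the quadratic toy problem at the end are illustrative assumptions, not the paper's exact specification.

import numpy as np

def top_k_sparsify(v, k):
    # Keep the k largest-magnitude entries of v and zero the rest
    # (the kind of aggressive compression operator the abstract refers to).
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def fedlin_round(x_bar, grads, local_steps, eta=0.1, k=None):
    # One communication round of a FedLin-style update (illustrative sketch).
    #   x_bar       : current server model (numpy array)
    #   grads       : grads[i](x) returns the gradient of client i's loss at x
    #   local_steps : tau_i, the number of local steps client i completes;
    #                 letting these differ models systems heterogeneity
    #   eta         : base step size (assumed value); client i uses eta / tau_i
    #   k           : if set, top-k sparsify the update each client sends back
    m = len(grads)
    # Server forms the global gradient at x_bar and broadcasts it to all clients.
    g_bar = sum(g(x_bar) for g in grads) / m
    updates = []
    for i in range(m):
        tau_i = local_steps[i]
        eta_i = eta / tau_i  # client-specific learning rate
        x = x_bar.copy()
        for _ in range(tau_i):
            # Drift correction using the "past gradient": swap the client's
            # stale local direction at x_bar for the global direction g_bar.
            x = x - eta_i * (grads[i](x) - grads[i](x_bar) + g_bar)
        delta = x - x_bar
        updates.append(top_k_sparsify(delta, k) if k is not None else delta)
    # Server averages the (possibly sparsified) client updates.
    return x_bar + sum(updates) / m

# Toy check on heterogeneous quadratics f_i(x) = 0.5 * ||x - c_i||^2, whose
# global minimum is the mean of the c_i; clients take very different numbers
# of local steps, yet the iterates still contract toward the true minimizer.
rng = np.random.default_rng(0)
cs = [rng.normal(size=5) for _ in range(4)]
grads = [lambda x, c=c: x - c for c in cs]
x = np.zeros(5)
for _ in range(50):
    x = fedlin_round(x, grads, local_steps=[1, 3, 5, 10])
print(np.linalg.norm(x - np.mean(cs, axis=0)))  # close to 0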


Related research

12/07/2020 · Improved Convergence Rates for Non-Convex Federated Learning with Compression
Federated learning is a new distributed learning paradigm that enables e...

12/20/2020 · Toward Understanding the Influence of Individual Clients in Federated Learning
Federated learning allows mobile clients to jointly train a global model...

07/02/2020 · Federated Learning with Compression: Unified Analysis and Sharp Guarantees
In federated learning, communication cost is often a critical bottleneck...

03/08/2021 · Convergence and Accuracy Trade-Offs in Federated Learning and Meta-Learning
We study a family of algorithms, which we refer to as local update metho...

02/12/2021 · Efficient Algorithms for Federated Saddle Point Optimization
We consider strongly convex-concave minimax problems in the federated se...

02/14/2021 · FedU: A Unified Framework for Federated Multi-Task Learning with Laplacian Regularization
Federated multi-task learning (FMTL) has emerged as a natural choice to ...

06/13/2022 · Accelerating Federated Learning via Sampling Anchor Clients with Large Batches
Using large batches in recent federated learning studies has improved co...