TAMUNA: Accelerated Federated Learning with Local Training and Partial Participation

02/20/2023
by Laurent Condat, et al.

In federated learning, a large number of users collaborate on a global learning task by alternating local computations with communication to a distant server. Communication, which can be slow and costly, is the main bottleneck in this setting. To accelerate distributed gradient descent, the popular strategy of local training is to communicate less frequently; that is, to perform several iterations of local computations between communication steps. A recent breakthrough in this direction was made by Mishchenko et al. (2022): their Scaffnew algorithm is the first to provably benefit from local training, with accelerated communication complexity. However, it was an open and challenging question whether the powerful mechanism behind Scaffnew would be compatible with partial participation, the desirable feature that not all clients need to take part in every round of the training process. We answer this question positively and propose a new algorithm that handles both local training and partial participation, with state-of-the-art communication complexity.
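To make the two ingredients in the abstract concrete, here is a minimal Python sketch of the generic template they refer to: each participating client runs several local gradient steps before communicating, and only a random subset of clients is active in each round. This is plain averaging on synthetic least-squares data, not the TAMUNA or Scaffnew algorithm itself (those additionally rely on control variates to obtain their provable communication acceleration); all data, names, and hyperparameters below are illustrative assumptions.

# Sketch only: local training + partial participation in a generic
# federated gradient-descent loop (not the TAMUNA algorithm).
import numpy as np

rng = np.random.default_rng(0)

n_clients, dim = 10, 5
# Synthetic local least-squares problems: client i holds (A[i], b[i]).
A = [rng.normal(size=(20, dim)) for _ in range(n_clients)]
b = [rng.normal(size=20) for _ in range(n_clients)]

def local_grad(i, x):
    # Gradient of the local objective f_i(x) = (1/2m) * ||A_i x - b_i||^2.
    return A[i].T @ (A[i] @ x - b[i]) / len(b[i])

x = np.zeros(dim)        # server model
stepsize = 0.01
local_steps = 5          # local iterations between two communications
participation = 0.5      # fraction of clients active in each round

for round_ in range(200):
    # Partial participation: sample a random subset of clients for this round.
    active = rng.choice(n_clients, size=max(1, int(participation * n_clients)),
                        replace=False)
    updates = []
    for i in active:
        y = x.copy()
        # Local training: several gradient steps before communicating.
        for _ in range(local_steps):
            y -= stepsize * local_grad(i, y)
        updates.append(y)
    # Communication step: the server averages the models of active clients.
    x = np.mean(updates, axis=0)

print("final model:", x)

In this template, increasing local_steps reduces how often clients communicate, and lowering participation reduces how many clients communicate per round; the contribution described in the abstract is an algorithm that combines both while retaining accelerated communication complexity.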


Related research

06/05/2023  Improving Accelerated Federated Learning with Compression and Importance Sampling
10/28/2022  GradSkip: Communication-Accelerated Local Gradient Methods with Better Computational Complexity
08/16/2021  Reducing the Communication Cost of Federated Learning through Multistage Optimization
10/02/2022  SAGDA: Achieving 𝒪(ε^-2) Communication Complexity in Federated Min-Max Learning
02/18/2022  ProxSkip: Yes! Local Gradient Steps Provably Lead to Communication Acceleration! Finally!
02/13/2023  Multi-Carrier NOMA-Empowered Wireless Federated Learning with Optimal Power and Bandwidth Allocation
