Improving Accelerated Federated Learning with Compression and Importance Sampling

06/05/2023
by Michał Grudzień et al.

Federated Learning is a collaborative training framework that leverages heterogeneous data distributed across a vast number of clients. Since it is practically infeasible to request and process all clients during the aggregation step, partial participation must be supported. In this setting, communication between the server and the clients poses a major bottleneck. There are two main approaches to reducing the communication load: compression and local steps. Recent work by Mishchenko et al. [2022] introduced the ProxSkip method, which achieves an accelerated rate using the local steps technique. Follow-up works successfully combined local steps acceleration with partial participation [Grudzień et al., 2023; Condat et al., 2023] and with gradient compression [Condat et al., 2022]. In this paper, we finally present a complete method for Federated Learning that incorporates all necessary ingredients: Local Training, Compression, and Partial Participation. We obtain state-of-the-art convergence guarantees in the considered setting. Moreover, we analyze the general sampling framework for partial participation and derive an importance sampling scheme, which leads to even better performance. We experimentally demonstrate the advantages of the proposed method in practice.
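Two of the ingredients mentioned above, unbiased gradient compression and importance sampling for partial participation, can be illustrated with a minimal sketch. The snippet below is not the paper's algorithm: it assumes a rand-k sparsification compressor and client sampling probabilities proportional to per-client smoothness constants L_i, and the helper names `rand_k`, `importance_probs`, and `fl_round` are illustrative only.

```python
import numpy as np


def rand_k(x, k, rng):
    """Unbiased rand-k sparsifier: keep k random coordinates, rescale by d/k."""
    d = x.size
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(x)
    out[idx] = x[idx] * (d / k)
    return out


def importance_probs(L):
    """Sampling probabilities proportional to per-client smoothness constants L_i."""
    L = np.asarray(L, dtype=float)
    return L / L.sum()


def fl_round(w, client_grads, L, n_sampled, k, lr, rng):
    """One simplified round: importance-sample clients with replacement,
    compress their reweighted gradients, average, and take a server step."""
    n = len(client_grads)
    p = importance_probs(L)
    chosen = rng.choice(n, size=n_sampled, replace=True, p=p)
    agg = np.zeros_like(w)
    for i in chosen:
        g = client_grads[i](w)
        # Reweight by 1 / (n * p_i) so each term is an unbiased estimate
        # of the average gradient (1/n) * sum_i grad f_i(w).
        agg += rand_k(g / (n * p[i]), k, rng)
    agg /= n_sampled
    return w - lr * agg


# Toy usage: quadratic clients f_i(w) = 0.5 * L_i * ||w - b_i||^2.
rng = np.random.default_rng(0)
d, n = 50, 20
L = rng.uniform(1.0, 100.0, size=n)
b = [rng.normal(size=d) for _ in range(n)]
client_grads = [lambda w, Li=Li, bi=bi: Li * (w - bi) for Li, bi in zip(L, b)]

w = np.zeros(d)
for _ in range(200):
    w = fl_round(w, client_grads, L, n_sampled=5, k=10, lr=0.5 / L.max(), rng=rng)
```

Sampling clients in proportion to L_i and reweighting each compressed gradient by 1/(n p_i) keeps the aggregate an unbiased estimator of the full gradient while reducing the variance contributed by "harder" clients; in the full method, local training would replace the single gradient evaluation inside the loop with several local steps.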


Related research

- 02/20/2023: TAMUNA: Accelerated Federated Learning with Local Training and Partial Participation
  In federated learning, a large number of users are involved in a global ...
- 12/29/2022: Can 5th Generation Local Training Methods Support Client Sampling? Yes!
  The celebrated FedAvg algorithm of McMahan et al. (2017) is based on thr...
- 02/15/2021: MARINA: Faster Non-Convex Distributed Learning with Compression
  We develop and analyze MARINA: a new communication efficient method for ...
- 10/28/2022: GradSkip: Communication-Accelerated Local Gradient Methods with Better Computational Complexity
  In this work, we study distributed optimization algorithms that reduce t...
- 04/21/2021: Gradient Masked Federated Optimization
  Federated Averaging (FedAVG) has become the most popular federated learn...
- 12/14/2020: Federated Learning under Importance Sampling
  Federated learning encapsulates distributed learning strategies that are...
- 06/05/2021: FedNL: Making Newton-Type Methods Applicable to Federated Learning
  Inspired by recent work of Islamov et al (2021), we propose a family of ...
