CSMAAFL: Client Scheduling and Model Aggregation in Asynchronous Federated Learning

06/01/2023
by Xiang Ma, et al.

Asynchronous federated learning aims to solve the straggler problem in heterogeneous environments, where clients with limited computational capacity can delay aggregation. The principle of asynchronous federated learning is to allow the server to aggregate the model as soon as it receives an update from any client, rather than waiting for updates from multiple clients or for a specified amount of time as in the synchronous mode. Due to the asynchronous setting, the stale model problem can occur: slow clients may train on their local data with an outdated local model. Consequently, when these locally trained models are uploaded to the server, they may impede the convergence of the global training. Therefore, effective model aggregation strategies play a significant role in updating the global model. Besides, client scheduling is also critical when heterogeneous clients with diverse computing capacities participate in the federated learning process. This work first investigates the impact on convergence when the aggregation coefficient of the synchronous mode is adopted in asynchronous federated learning. Effective aggregation solutions that achieve the same convergence result as the synchronous mode are then proposed, followed by an improved aggregation method with client scheduling. Simulation results in various scenarios demonstrate that the proposed algorithm converges to a similar level of accuracy as the classical synchronous federated learning algorithm while effectively accelerating the learning process, especially in its early stage.
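The asynchronous update rule described above can be illustrated with a minimal Python sketch: the server mixes each arriving client model into the global model immediately, using an aggregation coefficient that decays with the staleness of the client's base model. The decay function, the base coefficient alpha_0, and the dictionary representation of model parameters are illustrative assumptions, not the paper's exact algorithm.

def staleness_weight(alpha_0, staleness, a=0.5):
    # Illustrative polynomial decay: staler updates receive a smaller
    # aggregation coefficient (alpha_0 and a are assumed constants).
    return alpha_0 / ((1 + staleness) ** a)

def async_aggregate(global_model, client_model, client_version, server_version, alpha_0=0.6):
    # Staleness = number of global updates the client's base model has missed.
    staleness = server_version - client_version
    alpha = staleness_weight(alpha_0, staleness)
    # Asynchronous rule: blend the incoming client model into the global
    # model right away instead of waiting for other clients.
    new_global = {name: (1 - alpha) * w + alpha * client_model[name]
                  for name, w in global_model.items()}
    return new_global, server_version + 1

# Example: a slow client trained on version 3 while the server is at version 7,
# so its update is weighted down before being merged.
global_model = {"w": 0.8, "b": -0.1}
client_model = {"w": 1.2, "b": 0.0}
global_model, version = async_aggregate(global_model, client_model,
                                        client_version=3, server_version=7)

Setting the coefficient this way recovers the synchronous-style fixed coefficient when staleness is zero, while damping the influence of outdated models; the paper's proposed aggregation and scheduling refine how this coefficient is chosen.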

