Adaptive Control of Client Selection and Gradient Compression for Efficient Federated Learning

12/19/2022
by Zhida Jiang, et al.

Federated learning (FL) allows multiple clients to cooperatively train models without disclosing their local data. However, existing works fail to jointly address several practical concerns in FL that slow down convergence: limited communication resources, dynamic network conditions, and heterogeneous client properties. To tackle these challenges, we propose a heterogeneity-aware FL framework, called FedCG, with adaptive client selection and gradient compression. Specifically, the parameter server (PS) selects a representative client subset, accounting for statistical heterogeneity, and sends the global model to those clients. After local training, the selected clients upload compressed model updates matched to their capabilities to the PS for aggregation, which significantly alleviates the communication load and mitigates the straggler effect. We theoretically analyze the impact of both client selection and gradient compression on convergence performance. Guided by the derived convergence rate, we develop an iteration-based algorithm that jointly optimizes client selection and compression-ratio decisions via submodular maximization and linear programming. Extensive experiments on both real-world prototypes and simulations show that FedCG achieves up to a 5.3× speedup over other methods.
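To make the per-client compression idea concrete, here is a minimal sketch of top-k gradient sparsification, where each client keeps only a fraction of its update matching an assigned compression ratio. This is a generic illustration, not FedCG's exact operator; the function names, the choice of top-k sparsification, and the example ratios are assumptions for illustration only.

```python
import numpy as np

def topk_compress(update, ratio):
    """Keep only the largest-magnitude `ratio` fraction of entries.

    Returns the kept indices, their values, and the original shape,
    so the server can reconstruct a sparse update for aggregation.
    """
    flat = update.ravel()
    k = max(1, int(ratio * flat.size))
    # Indices of the k entries with the largest absolute value.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx], update.shape

def topk_decompress(idx, values, shape):
    """Rebuild a dense update with zeros everywhere except the kept entries."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = values
    return flat.reshape(shape)

# Example: a slower client could be assigned a smaller ratio (e.g. 0.1)
# to cut its upload cost, mitigating the straggler effect.
rng = np.random.default_rng(0)
update = rng.standard_normal(1000)
idx, vals, shape = topk_compress(update, ratio=0.1)
restored = topk_decompress(idx, vals, shape)
```

With `ratio=0.1`, only 100 of the 1000 values (plus their indices) are transmitted, a roughly 5× reduction in payload for this dense vector.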

