Client Selection in Federated Learning: Convergence Analysis and Power-of-Choice Selection Strategies

10/03/2020
by Yae Jee Cho, et al.

Federated learning is a distributed optimization paradigm that enables a large number of resource-limited client nodes to cooperatively train a model without data sharing. Several works have analyzed the convergence of federated learning by accounting for data heterogeneity, communication and computation limitations, and partial client participation. However, they assume unbiased client participation, where clients are selected at random or in proportion to their data sizes. In this paper, we present the first convergence analysis of federated optimization for biased client selection strategies, and quantify how the selection bias affects convergence speed. We reveal that biasing client selection towards clients with higher local loss achieves faster error convergence. Using this insight, we propose Power-of-Choice, a communication- and computation-efficient client selection framework that can flexibly span the trade-off between convergence speed and solution bias. Our experiments demonstrate that Power-of-Choice strategies converge up to 3× faster and give 10% higher test performance than the baseline random selection.
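The selection rule described in the abstract can be illustrated in code. The following is a minimal sketch, not the paper's reference implementation: it assumes each client's local loss estimate and data size are available to the server, and all function and variable names here are illustrative. It samples a candidate set of `d` clients weighted by data size, then keeps the `m` candidates with the highest local loss.

```python
import random


def power_of_choice_select(clients, losses, data_sizes, d, m, rng=random.Random(0)):
    """Sketch of a power-of-choice style selection (illustrative, not the
    authors' code): draw d candidates weighted by data size, then keep the
    m candidates with the highest estimated local loss."""
    # Weighted sampling without replacement of d candidate clients.
    pool = list(clients)
    weights = [data_sizes[c] for c in pool]
    candidates = []
    for _ in range(d):
        total = sum(weights)
        r = rng.random() * total
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:
                candidates.append(pool.pop(i))
                weights.pop(i)
                break
    # Bias the final choice towards high-loss candidates.
    return sorted(candidates, key=lambda c: losses[c], reverse=True)[:m]
```

Setting `d` equal to `m` recovers (approximately) unbiased, size-weighted sampling, while larger `d` biases selection more strongly towards high-loss clients, matching the convergence-speed/solution-bias trade-off the abstract describes.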


Related research

- Bandit-based Communication-Efficient Client Selection Strategies for Federated Learning (12/14/2020)
  Due to communication constraints and intermittent client availability in...
- FilFL: Accelerating Federated Learning via Client Filtering (02/13/2023)
  Federated learning is an emerging machine learning paradigm that enables...
- Federated Learning Under Intermittent Client Availability and Time-Varying Communication Constraints (05/13/2022)
  Federated learning systems facilitate training of global models in setti...
- Is Shapley Value fair? Improving Client Selection for Mavericks in Federated Learning (06/20/2021)
  Shapley Value is commonly adopted to measure and incentivize client part...
- Client Selection for Federated Policy Optimization with Environment Heterogeneity (05/18/2023)
  The development of Policy Iteration (PI) has inspired many recent algori...
- FedDD: Toward Communication-efficient Federated Learning with Differential Parameter Dropout (08/31/2023)
  Federated Learning (FL) requires frequent exchange of model parameters, ...
- Client Selection for Federated Bayesian Learning (12/11/2022)
  Distributed Stein Variational Gradient Descent (DSVGD) is a non-parametr...
