On The Impact of Client Sampling on Federated Learning Convergence

07/26/2021
by Yann Fraboni, et al.

While client sampling is a central operation of current state-of-the-art federated learning (FL) approaches, the impact of this procedure on the convergence and speed of FL remains under-investigated to date. In this work we introduce a novel decomposition theorem for the convergence of FL, which allows us to clearly quantify the impact of client sampling on the global model update. In contrast to previous convergence analyses, our theorem provides the exact decomposition of a given convergence step, enabling a precise assessment of the role of client sampling and heterogeneity. First, we provide a theoretical ground for previously reported results on the relationship between FL convergence and the variance of the aggregation weights. Second, we prove for the first time that the quality of FL convergence is also impacted by the resulting covariance between aggregation weights. Third, we establish that the sum of the aggregation weights is another source of slow-down and should be equal to 1 to improve FL convergence speed. Our theory is general, and is here applied to Multinomial Distribution (MD) and Uniform sampling, the two default client sampling schemes of FL, and demonstrated through a series of experiments in non-iid and unbalanced scenarios. Our results suggest that MD sampling should be used as the default sampling scheme, due to its resilience to changes in data ratios during the learning process, while Uniform sampling is superior only in the special case where clients hold the same amount of data.
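To make the distinction between the two schemes concrete, below is a minimal simulation sketch, not the paper's implementation. It assumes the FedAvg-style aggregation weights commonly associated with each scheme: with MD sampling, m clients are drawn i.i.d. according to their data ratios p_i and each draw contributes 1/m to the aggregation; with Uniform sampling, m distinct clients are drawn uniformly without replacement and a selected client i contributes (n/m)·p_i. The script empirically estimates, per scheme, the sum, variance, and covariance of the aggregation weights, the three quantities the abstract identifies as drivers of convergence speed. All function and variable names are illustrative only.

```python
import numpy as np

# Illustrative comparison of the two default client sampling schemes
# discussed in the abstract. The aggregation weights below are an
# assumption based on common FedAvg formulations, not the paper's code:
#   - MD sampling: m i.i.d. draws with probabilities p_i (data ratios),
#     each draw contributing weight 1/m.
#   - Uniform sampling: m distinct clients drawn uniformly without
#     replacement, client i contributing weight (n/m) * p_i if selected.

rng = np.random.default_rng(0)

n, m = 20, 5                        # total clients, sampled clients per round
data_sizes = rng.integers(10, 200, size=n)
p = data_sizes / data_sizes.sum()   # data ratios p_i

def md_sampling_weights():
    """One round of Multinomial Distribution (MD) sampling."""
    counts = rng.multinomial(m, p)
    return counts / m               # aggregation weight of each client

def uniform_sampling_weights():
    """One round of Uniform sampling without replacement."""
    selected = rng.choice(n, size=m, replace=False)
    w = np.zeros(n)
    w[selected] = (n / m) * p[selected]
    return w

def weight_statistics(sampler, rounds=50_000):
    """Estimate the quantities highlighted in the abstract."""
    W = np.array([sampler() for _ in range(rounds)])
    off_diag = ~np.eye(n, dtype=bool)
    return {
        "mean sum of weights": W.sum(axis=1).mean(),
        "mean weight variance": W.var(axis=0).mean(),
        "mean weight covariance": np.cov(W, rowvar=False)[off_diag].mean(),
    }

for name, sampler in [("MD", md_sampling_weights),
                      ("Uniform", uniform_sampling_weights)]:
    print(name, weight_statistics(sampler))
```

Under these assumptions, MD sampling keeps the sum of the aggregation weights exactly equal to 1 at every round, whereas Uniform sampling matches it only in expectation unless all clients hold the same amount of data, which mirrors the abstract's conclusion about when each scheme is preferable.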


