More Communication Does Not Result in Smaller Generalization Error in Federated Learning

04/24/2023
by Romain Chor, et al.

We study the generalization error of statistical learning models in a Federated Learning (FL) setting. Specifically, there are K devices or clients, each holding its own independent dataset of size n. Individual models, learned locally via Stochastic Gradient Descent (SGD), are aggregated (averaged) by a central server into a global model, which is then sent back to the devices. We consider multiple rounds (say R ∈ ℕ^*) of such model aggregation and study the effect of R on the generalization error of the final aggregated model. We establish an upper bound on the generalization error that accounts explicitly for the effect of R (in addition to the number of participating devices K and the dataset size n). We observe that, for fixed (n, K), the bound increases with R, suggesting that the generalization of such learning algorithms is negatively affected by more frequent communication with the parameter server. Since the empirical risk, however, generally decreases for larger values of R, this indicates that R may be a parameter to optimize in order to reduce the population risk of FL algorithms. The results of this paper, which extend straightforwardly to the heterogeneous data setting, are also illustrated through numerical examples.
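The protocol described in the abstract is the standard federated averaging loop: each client runs local SGD on its own n samples, the server averages the K resulting models, and the procedure repeats for R rounds; the generalization error under study is the gap between the population risk and the empirical risk of the final aggregated model. The Python sketch below is a minimal toy illustration of this loop, not the authors' implementation; the linear-regression setup, learning rate, and epoch count are illustrative assumptions.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.01, epochs=5):
    """Run a few epochs of SGD on one client's dataset (squared loss).
    Hypothetical hyperparameters, chosen only for this toy example."""
    w = w.copy()
    for _ in range(epochs):
        for i in np.random.permutation(len(X)):
            grad = (X[i] @ w - y[i]) * X[i]  # gradient of 0.5*(x.w - y)^2
            w -= lr * grad
    return w

def federated_averaging(client_data, d, R):
    """R rounds: every client runs local SGD from the current global model,
    then the server averages the K local models into a new global model."""
    w_global = np.zeros(d)
    for _ in range(R):
        local_models = [local_sgd(w_global, X, y) for X, y in client_data]
        w_global = np.mean(local_models, axis=0)  # server-side aggregation
    return w_global

# Toy experiment: K clients, each holding n i.i.d. samples of dimension d.
rng = np.random.default_rng(0)
K, n, d = 10, 50, 5
w_true = rng.normal(size=d)
clients = []
for _ in range(K):
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    clients.append((X, y))

for R in (1, 5, 20):
    w = federated_averaging(clients, d, R)
    print(f"R={R:2d}  ||w - w*|| = {np.linalg.norm(w - w_true):.4f}")
```

In this toy setup larger R drives the training (empirical) error down, which matches the abstract's observation; the paper's point is that the generalization gap can simultaneously grow with R, so the population risk need not improve monotonically.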


Related research

06/09/2023: Federated Learning You May Communicate Less Often!
We investigate the generalization error of statistical learning models i...

02/09/2023: Delay Sensitive Hierarchical Federated Learning with Stochastic Local Updates
The impact of local averaging on the performance of federated learning (...

06/06/2022: Rate-Distortion Theoretic Bounds on Generalization Error for Distributed Learning
In this paper, we use tools from rate-distortion theory to establish new...

04/25/2023: User-Centric Federated Learning: Trading off Wireless Resources for Personalization
Statistical heterogeneity across clients in a Federated Learning (FL) sy...

06/21/2023: An Efficient Virtual Data Generation Method for Reducing Communication in Federated Learning
Communication overhead is one of the major challenges in Federated Learn...

05/16/2023: Learning from Aggregated Data: Curated Bags versus Random Bags
Protecting user privacy is a major concern for many machine learning sys...

10/25/2022: Federated Learning Using Variance Reduced Stochastic Gradient for Probabilistically Activated Agents
This paper proposes an algorithm for Federated Learning (FL) with a two-...
