Federated Learning You May Communicate Less Often!

06/09/2023
by Milad Sefidgaran, et al.

We investigate the generalization error of statistical learning models in a Federated Learning (FL) setting. Specifically, we study the evolution of the generalization error with the number of communication rounds between the clients and the parameter server, i.e., the effect on the generalization error of how often the local models computed by the clients are aggregated at the parameter server. We establish PAC-Bayes and rate-distortion theoretic bounds on the generalization error that account explicitly for the effect of the number of rounds, say R ∈ ℕ, in addition to the number of participating devices K and the size n of each individual dataset. The bounds, which hold in full generality for a large class of loss functions and learning algorithms, appear to be the first of their kind for the FL setting. Furthermore, we apply our bounds to FL-type Support Vector Machines (FSVM) and derive more explicit bounds on the generalization error in this case. In particular, we show that the generalization error of FSVM increases with R, suggesting that more frequent communication with the parameter server diminishes the generalization power of such learning algorithms. Combined with the fact that the empirical risk generally decreases for larger values of R, this indicates that R might be a parameter to optimize in order to minimize the population risk of FL algorithms. Moreover, specialized to the case R=1 (sometimes referred to as "one-shot" FL or distributed learning), our bounds suggest that the generalization error of the FL setting decreases faster than that of centralized learning by a factor of 𝒪(√(log(K)/K)), thereby generalizing recent findings in this direction to arbitrary loss functions and algorithms. The results of this paper are also validated experimentally.
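To make the quantities in the abstract concrete, below is a minimal FedAvg-style sketch in Python with K clients, local datasets of size n, and R communication rounds (R=1 recovers the "one-shot" setting). This is an illustrative toy on linear least squares, not the paper's FSVM algorithm or its bounds; all function names and hyperparameters here are hypothetical.

import numpy as np

def local_update(w, X, y, lr=0.05, steps=10):
    """A few gradient steps on one client's data (squared loss); illustrative only."""
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def federated_average(K=10, n=100, R=5, d=5, seed=0):
    """Train with R communication rounds; R=1 corresponds to one-shot FL."""
    rng = np.random.default_rng(seed)
    w_true = rng.normal(size=d)
    # Each client holds its own dataset of n samples drawn from a common linear model.
    clients = []
    for _ in range(K):
        X = rng.normal(size=(n, d))
        y = X @ w_true + 0.1 * rng.normal(size=n)
        clients.append((X, y))
    w_global = np.zeros(d)
    for _ in range(R):
        # Each client starts from the current global model and trains locally,
        local_models = [local_update(w_global, X, y) for X, y in clients]
        # then the parameter server aggregates the K local models by averaging.
        w_global = np.mean(local_models, axis=0)
    return w_global, w_true

if __name__ == "__main__":
    for R in (1, 5, 20):
        w, w_true = federated_average(R=R)
        print(f"R={R:2d}  ||w - w_true|| = {np.linalg.norm(w - w_true):.4f}")

In this toy, larger R typically drives the training (empirical) error down; the abstract's point is that the paper's bounds show generalization can degrade as R grows, which is why R itself becomes a knob to tune for population risk.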


Related Research

04/24/2023 · More Communication Does Not Result in Smaller Generalization Error in Federated Learning
We study the generalization error of statistical learning models in a Fe...

06/06/2022 · Rate-Distortion Theoretic Bounds on Generalization Error for Distributed Learning
In this paper, we use tools from rate-distortion theory to establish new...

02/09/2023 · Delay Sensitive Hierarchical Federated Learning with Stochastic Local Updates
The impact of local averaging on the performance of federated learning (...

06/06/2023 · Understanding Generalization of Federated Learning via Stability: Heterogeneity Matters
Generalization performance is a key metric in evaluating machine learnin...

01/31/2023 · Truthful Incentive Mechanism for Federated Learning with Crowdsourced Data Labeling
Federated learning (FL) has emerged as a promising paradigm that trains ...

03/06/2021 · An Effective Approach to Minimize Error in Midpoint Ellipse Drawing Algorithm
The present paper deals with the generalization of Midpoint Ellipse Draw...

12/12/2021 · Communication-Efficient Federated Learning for Neural Machine Translation
Training neural machine translation (NMT) models in federated learning (...
