Rate-Distortion Theoretic Bounds on Generalization Error for Distributed Learning

06/06/2022
by Milad Sefidgaran, et al.

In this paper, we use tools from rate-distortion theory to establish new upper bounds on the generalization error of statistical distributed learning algorithms. Specifically, there are K clients whose individually chosen models are aggregated by a central server. The bounds depend on the compressibility of each client's algorithm while keeping the other clients' algorithms uncompressed, and leverage the fact that a small change in any single local model changes the aggregated model by a factor of only 1/K. By adopting a recently proposed approach of Sefidgaran et al. and suitably extending it to the distributed setting, we obtain smaller rate-distortion terms, which are shown to translate into tighter generalization bounds. The bounds are then applied to distributed support vector machines (SVM), suggesting that the generalization error of the distributed setting decays faster than that of the centralized one by a factor of 𝒪(log(K)/√K). This finding is also validated experimentally. A similar conclusion is obtained for a multi-round federated learning setup in which each client uses stochastic gradient Langevin dynamics (SGLD).
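The 1/K sensitivity of the aggregated model that drives the tighter bounds is easy to verify numerically. Below is a minimal sketch, not the paper's method: K hypothetical clients hold weight vectors, the server averages them FedAvg-style, and perturbing a single client's model moves the aggregate by exactly delta/K. The sgld_step helper, also hypothetical, illustrates the kind of noisy local update assumed in the multi-round SGLD setup.

```python
import numpy as np

# Hypothetical setup: K clients each hold a local model (a weight vector),
# and the server aggregates by simple averaging, as in FedAvg.
K = 10
rng = np.random.default_rng(0)
local_models = [rng.normal(size=5) for _ in range(K)]

def aggregate(models):
    # Server-side aggregation: the coordinate-wise average of client models.
    return np.mean(models, axis=0)

def sgld_step(w, grad, lr=0.01):
    # One stochastic gradient Langevin dynamics update (illustrative):
    # a gradient step plus isotropic Gaussian noise scaled by sqrt(2*lr).
    return w - lr * grad + np.sqrt(2 * lr) * rng.normal(size=w.shape)

w_bar = aggregate(local_models)

# Perturb one client's model by delta; the aggregate moves by only delta/K,
# which is the sensitivity the rate-distortion bounds exploit.
delta = np.ones(5)
perturbed = [m.copy() for m in local_models]
perturbed[0] += delta
w_bar_perturbed = aggregate(perturbed)

print(np.allclose(w_bar_perturbed - w_bar, delta / K))  # True
```

Because a distortion of size delta in one local model costs only delta/K at the aggregate, each client's algorithm can be compressed more aggressively for the same aggregate distortion, which is the intuition behind the smaller rate-distortion terms.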


Related research

06/09/2023
Federated Learning You May Communicate Less Often!
We investigate the generalization error of statistical learning models i...

04/24/2023
More Communication Does Not Result in Smaller Generalization Error in Federated Learning
We study the generalization error of statistical learning models in a Fe...

02/04/2022
Improved Information Theoretic Generalization Bounds for Distributed and Federated Learning
We consider information-theoretic bounds on expected generalization erro...

02/10/2021
Learning under Distribution Mismatch and Model Misspecification
We study learning algorithms when there is a mismatch between the distri...

01/25/2023
When to Trust Aggregated Gradients: Addressing Negative Client Sampling in Federated Learning
Federated Learning has become a widely-used framework which allows learn...

10/09/2019
ExpertMatcher: Automating ML Model Selection for Clients using Hidden Representations
Recently, there has been the development of Split Learning, a framework ...

12/03/2007
Pac-Bayesian Supervised Classification: The Thermodynamics of Statistical Learning
This monograph deals with adaptive supervised classification, using tool...
