Unexpectedly Useful: Convergence Bounds And Real-World Distributed Learning

12/05/2022
by Francesco Malandrino, et al.

Convergence bounds are one of the main tools for obtaining information about the performance of a distributed machine learning task before running the task itself. In this work, we perform a set of experiments to assess the extent to which, and the ways in which, such bounds can predict and improve the performance of real-world distributed (namely, federated) learning tasks. We find that, as can be expected given the way they are derived, the bounds are quite loose and their relative magnitude reflects the training loss rather than the testing loss. More unexpectedly, we find that some of the quantities appearing in the bounds turn out to be very useful for identifying the clients most likely to contribute to the learning process, without requiring the disclosure of any information about the quality or size of their datasets. This suggests that further research is warranted on the ways, often counter-intuitive, in which convergence bounds can be exploited to improve the performance of real-world distributed learning tasks.
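As a rough illustration of the idea, a server could rank clients by a quantity that commonly appears in convergence bounds, such as the norm of each client's latest model update, and select the top-ranked ones without ever inspecting dataset sizes or labels. The sketch below is purely hypothetical (the client names and norms are made up) and is not the selection rule used in the paper:

```python
def select_clients(update_norms, k):
    """Rank clients by the norm of their last model update -- a quantity
    appearing in many federated-learning convergence bounds -- and pick
    the top-k, without inspecting dataset size or content."""
    ranked = sorted(update_norms, key=update_norms.get, reverse=True)
    return ranked[:k]

# Hypothetical per-client update norms as observed by the server.
norms = {"client_a": 0.9, "client_b": 0.2, "client_c": 0.7}
print(select_clients(norms, 2))  # ['client_a', 'client_c']
```

In a real federated round, the server already receives the model updates it needs to compute such norms, so this kind of ranking adds no extra communication or privacy cost.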

