Reducing Impacts of System Heterogeneity in Federated Learning using Weight Update Magnitudes

08/30/2022
by Irene Wang, et al.

The widespread adoption of handheld devices has fueled rapid growth in new applications. Several of these applications employ machine learning models that train on user data which is typically private and sensitive. Federated Learning enables machine learning models to train locally on each handheld device while only synchronizing their neuron updates with a server. While this preserves user privacy, technology scaling and software advancements have resulted in handheld devices with widely varying performance capabilities. As a result, the training time of federated learning tasks is dictated by a few low-performance straggler devices, which become a bottleneck for the entire training process. In this work, we aim to mitigate this performance bottleneck by dynamically forming sub-models for stragglers based on their performance and accuracy feedback. To this end, we propose Invariant Dropout, a dynamic technique that forms a sub-model based on a neuron update threshold. Invariant Dropout uses the neuron updates reported by the non-straggler clients to develop a tailored sub-model for each straggler in each training iteration: all weights whose update magnitude falls below the threshold are dropped for that iteration. We evaluate Invariant Dropout using five real-world mobile clients. Our evaluations show that Invariant Dropout obtains a maximum accuracy gain of 1.4% over the state-of-the-art Ordered Dropout while mitigating the performance bottleneck caused by stragglers.
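To make the thresholding step above concrete, the sketch below (Python/NumPy) shows one plausible way to turn non-straggler weight updates into a per-neuron drop mask. The function name, the keep_fraction parameter, and the quantile-based choice of threshold are illustrative assumptions, not the paper's actual implementation.

import numpy as np

def invariant_dropout_mask(weight_updates, keep_fraction):
    # weight_updates: array of shape (num_non_straggler_clients, num_neurons),
    #                 the per-neuron updates reported by the fast clients.
    # keep_fraction:  fraction of neurons the straggler can afford to train,
    #                 derived from its performance feedback (assumption).
    #
    # Aggregate update magnitudes across the non-straggler clients.
    magnitudes = np.abs(weight_updates).mean(axis=0)
    # Pick the threshold so that roughly (1 - keep_fraction) of neurons fall below it.
    threshold = np.quantile(magnitudes, 1.0 - keep_fraction)
    # Neurons with update magnitudes below the threshold are dropped for this iteration.
    mask = magnitudes >= threshold
    return mask, threshold

# Example: 3 fast clients, a layer of 8 neurons, a straggler that keeps ~50% of them.
updates = np.random.randn(3, 8)
mask, thr = invariant_dropout_mask(updates, keep_fraction=0.5)
print("kept neurons:", np.where(mask)[0], "threshold:", thr)

The quantile-based cutoff is only one way to map a straggler's capacity onto a magnitude threshold; the key idea it illustrates is that neurons with small recent updates contribute least to the current round and are the ones removed from the straggler's sub-model.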

