Expanding the Reach of Federated Learning by Reducing Client Resource Requirements

12/18/2018
by Sebastian Caldas, et al.

Communication on heterogeneous edge networks is a fundamental bottleneck in Federated Learning (FL), restricting both model capacity and user participation. To address this issue, we introduce two novel strategies to reduce communication costs: (1) the use of lossy compression on the global model sent server-to-client; and (2) Federated Dropout, which allows users to efficiently train locally on smaller subsets of the global model and also provides a reduction in both client-to-server communication and local computation. We empirically show that these strategies, combined with existing compression approaches for client-to-server communication, collectively provide up to a 14× reduction in server-to-client communication, a 1.7× reduction in local computation, and a 28× reduction in upload communication, all without degrading the quality of the final model. We thus comprehensively reduce FL's impact on client device resources, allowing higher capacity models to be trained, and a more diverse set of users to be reached.
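To make the two strategies concrete, the following Python/NumPy sketch illustrates the general ideas under stated assumptions: a simple uniform quantizer stands in for the lossy server-to-client compression, and a random column sub-selection of a dense layer stands in for Federated Dropout. The function names, the 8-bit setting, and the keep_fraction value are illustrative assumptions, not the paper's actual implementation.

    import numpy as np

    def quantize_weights(weights, num_bits=8):
        # Sketch of strategy (1): lossy uniform quantization of a global-model
        # tensor before the server-to-client download. The server would send the
        # uint8 codes plus (w_min, scale); the client reconstructs approximate floats.
        w_min, w_max = float(weights.min()), float(weights.max())
        levels = 2 ** num_bits - 1
        scale = (w_max - w_min) / levels or 1.0
        codes = np.round((weights - w_min) / scale).astype(np.uint8)
        return codes.astype(np.float32) * scale + w_min

    def federated_dropout_submodel(weight_matrix, keep_fraction=0.75, rng=None):
        # Sketch of strategy (2), Federated Dropout: keep only a random subset of
        # output units of a fully connected layer, so the client trains and uploads
        # a smaller sub-model. The kept indices let the server merge updates back.
        rng = rng if rng is not None else np.random.default_rng(0)
        n_out = weight_matrix.shape[1]
        kept = rng.choice(n_out, size=int(keep_fraction * n_out), replace=False)
        return weight_matrix[:, kept], kept

    # Toy usage on one 256x128 dense layer of a hypothetical global model:
    layer = np.random.randn(256, 128).astype(np.float32)
    downloaded = quantize_weights(layer, num_bits=8)            # ~4x smaller download than float32
    sub_layer, kept_cols = federated_dropout_submodel(layer)    # ~25% fewer units to train and upload

In the paper these ideas are applied to the full model and combined with existing client-to-server compression, which is where the reported 14x download, 1.7x computation, and 28x upload reductions come from; the sketch above only shows the per-layer mechanics.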


Related research

- DAdaQuant: Doubly-adaptive quantization for communication-efficient Federated Learning (10/31/2021)
  Federated Learning (FL) is a powerful technique for training a model on ...

- Adaptive Federated Dropout: Improving Communication Efficiency and Generalization for Federated Learning (11/08/2020)
  With more regulations tackling users' privacy-sensitive data protection ...

- FedNet2Net: Saving Communication and Computations in Federated Learning with Model Growing (07/19/2022)
  Federated learning (FL) is a recently developed area of machine learning...

- Federated Learning for Non-IID Data via Client Variance Reduction and Adaptive Server Update (07/18/2022)
  Federated learning (FL) is an emerging technique used to collaboratively...

- FedSynth: Gradient Compression via Synthetic Data in Federated Learning (04/04/2022)
  Model compression is important in federated learning (FL) with large mod...

- Outsourcing Training without Uploading Data via Efficient Collaborative Open-Source Sampling (10/23/2022)
  As deep learning blooms with growing demand for computation and data res...

- ZeroFL: Efficient On-Device Training for Federated Learning with Local Sparsity (08/04/2022)
  When the available hardware cannot meet the memory and compute requireme...
