DAdaQuant: Doubly-adaptive quantization for communication-efficient Federated Learning

10/31/2021
by Robert Hönig, et al.

Federated Learning (FL) is a powerful technique for training a model on a server with data from several clients in a privacy-preserving manner. In FL, a server sends the model to every client, which then trains the model locally and sends it back to the server. The server aggregates the updated models and repeats the process for several rounds. FL incurs significant communication costs, in particular when transmitting the updated local models from the clients back to the server. Recently proposed algorithms quantize the model parameters to efficiently compress FL communication. These algorithms typically have a quantization level that controls the compression factor. We find that dynamic adaptations of the quantization level can boost compression without sacrificing model quality. First, we introduce a time-adaptive quantization algorithm that increases the quantization level as training progresses. Second, we introduce a client-adaptive quantization algorithm that assigns each individual client the optimal quantization level at every round. Finally, we combine both algorithms into DAdaQuant, the doubly-adaptive quantization algorithm. Our experiments show that DAdaQuant consistently improves client→server compression, outperforming the strongest non-adaptive baselines by up to 2.8×.
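The abstract only sketches the two adaptation rules, so the Python snippet below illustrates what such a doubly-adaptive scheme can look like. It is a minimal sketch under assumptions, not the paper's exact algorithm: the max-norm stochastic quantizer, the plateau-based doubling rule with its patience parameter, and the weight^(2/3) allocation exponent are all illustrative choices.

```python
import numpy as np

def quantize(x, q):
    """Unbiased stochastic fixed-point quantization of a vector x to q levels.

    Coordinates are scaled by max|x|, stochastically rounded to one of q
    uniformly spaced levels, and rescaled, so the expectation equals x.
    """
    norm = np.max(np.abs(x))
    if norm == 0:
        return x
    scaled = np.abs(x) / norm * q            # values in [0, q]
    lower = np.floor(scaled)
    prob_up = scaled - lower                 # stochastic rounding keeps the estimate unbiased
    rounded = lower + (np.random.rand(*x.shape) < prob_up)
    return np.sign(x) * rounded / q * norm


def time_adaptive_level(q, loss_history, patience=3):
    """Time-adaptive rule: increase q when the training loss stops improving.

    Illustrative trigger only; the point is that q grows as training
    progresses, so early rounds are compressed more aggressively.
    """
    if len(loss_history) > patience and \
            min(loss_history[-patience:]) >= min(loss_history[:-patience]):
        return 2 * q
    return q


def client_adaptive_levels(q, client_weights):
    """Client-adaptive rule: per-client levels with the same average level as q.

    Clients with larger aggregation weights (e.g. more data) get finer
    quantization; the 2/3 exponent is an assumed allocation, normalized
    so the mean level stays at q.
    """
    w = np.asarray(client_weights, dtype=float)
    raw = w ** (2.0 / 3.0)
    levels = raw / raw.mean() * q
    return np.maximum(1, np.round(levels)).astype(int)
```

In such a scheme, the server would call time_adaptive_level once per round on the global training loss and then use client_adaptive_levels to turn the resulting level into per-client assignments before the clients quantize and upload their updates.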

Related research

08/19/2022 · Federated Select: A Primitive for Communication- and Memory-Efficient Federated Learning
12/16/2022 · Communication-Efficient Federated Learning for Heterogeneous Edge Devices Based on Adaptive Gradient Quantization
12/18/2018 · Expanding the Reach of Federated Learning by Reducing Client Resource Requirements
10/05/2021 · FedDQ: Communication-Efficient Federated Learning with Descending Quantization
06/21/2022 · sqSGD: Locally Private and Communication Efficient Federated Learning
12/30/2022 · Deep Hierarchy Quantization Compression algorithm based on Dynamic Sampling
12/11/2022 · ResFed: Communication Efficient Federated Learning by Transmitting Deep Compressed Residuals
