Adaptive Quantization of Model Updates for Communication-Efficient Federated Learning

02/08/2021
by Divyansh Jhunjhunwala, et al.

Communication of model updates between client nodes and the central aggregating server is a major bottleneck in federated learning, especially in bandwidth-limited settings and for high-dimensional models. Gradient quantization is an effective way to reduce the number of bits required to communicate each model update, albeit at the cost of a higher error floor due to the increased variance of the stochastic gradients. In this work, we propose an adaptive quantization strategy called AdaQuantFL that aims to achieve communication efficiency as well as a low error floor by changing the number of quantization levels during the course of training. Experiments on training deep neural networks show that our method can converge with far fewer communicated bits than fixed-quantization-level setups, with little or no impact on training and test accuracy.
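
To make the idea concrete, below is a minimal Python sketch of QSGD-style unbiased stochastic quantization combined with an illustrative schedule that grows the number of quantization levels as the training loss shrinks. The function names, the s0 parameter, and the square-root loss-ratio schedule are assumptions for illustration, not necessarily the exact AdaQuantFL rule from the paper.

```python
import numpy as np

def stochastic_quantize(v, num_levels, rng=None):
    """QSGD-style unbiased stochastic quantization of a vector onto a
    uniform grid of `num_levels` magnitude levels (plus sign bits)."""
    if rng is None:
        rng = np.random.default_rng()
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return np.zeros_like(v)
    scaled = np.abs(v) / norm * num_levels   # magnitudes mapped onto the level grid
    lower = np.floor(scaled)
    prob_up = scaled - lower                 # round up w.p. the fractional part
    levels = lower + (rng.random(v.shape) < prob_up)
    # E[output] = v, i.e., the quantizer is unbiased.
    return np.sign(v) * levels * (norm / num_levels)

def adaptive_levels(initial_loss, current_loss, s0=2):
    """Illustrative schedule (an assumption, not the paper's exact rule):
    increase the number of levels as the loss decreases, lowering the
    quantization error floor late in training."""
    return max(s0, int(np.ceil(s0 * np.sqrt(initial_loss / current_loss))))

# Example: quantize a model update with a level count tied to loss progress.
update = np.random.default_rng(0).standard_normal(10)
s_t = adaptive_levels(initial_loss=2.3, current_loss=0.4)
q_update = stochastic_quantize(update, s_t)
```

The design intuition is that early in training, gradient noise dominates, so coarse quantization (fewer bits per coordinate) costs little; as the loss drops, raising the number of levels reduces the quantization variance and hence the error floor.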


Related research

10/05/2021
FedDQ: Communication-Efficient Federated Learning with Descending Quantization
Federated learning (FL) is an emerging privacy-preserving distributed le...

11/12/2019
Hyper-Sphere Quantization: Communication-Efficient SGD for Federated Learning
The high cost of communicating gradients is a major bottleneck for feder...

01/07/2022
Optimizing the Communication-Accuracy Trade-off in Federated Learning with Rate-Distortion Theory
A significant bottleneck in federated learning is the network communicat...

06/19/2020
DEED: A General Quantization Scheme for Communication Efficiency in Bits
In distributed optimization, a popular technique to reduce communication...

04/22/2018
MQGrad: Reinforcement Learning of Gradient Quantization in Parameter Server
One of the most significant bottlenecks in training large scale machine l...

03/22/2020
Tracking an Auto-Regressive Process with Limited Communication per Unit Time
Samples from a high-dimensional AR[1] process are observed by a sender w...

03/15/2023
Communication-Efficient Design for Quantized Decentralized Federated Learning
Decentralized federated learning (DFL) is a variant of federated learnin...
