Performance Optimization for Variable Bitwidth Federated Learning in Wireless Networks

09/21/2022
by   Sihua Wang, et al.

This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization. In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which, in turn, aggregates them into a quantized global model and synchronizes the devices. The goal is to jointly determine the bitwidths employed for local FL model quantization and the set of devices participating in FL training at each iteration. This problem is posed as an optimization problem whose goal is to minimize the training loss of quantized FL under a per-iteration device sampling budget and delay requirement. To derive the solution, an analytical characterization is performed in order to show how the limited wireless resources and induced quantization errors affect the performance of the proposed FL method. The analytical results show that the improvement of FL training loss between two consecutive iterations depends on the device selection and quantization scheme as well as on several parameters inherent to the model being learned. Given linear regression-based estimates of these model properties, it is shown that the FL training process can be described as a Markov decision process (MDP), and, then, a model-based reinforcement learning (RL) method is proposed to optimize action selection over iterations. Compared to model-free RL, this model-based RL approach leverages the derived mathematical characterization of the FL training process to discover an effective device selection and quantization scheme without imposing additional device communication overhead. Simulation results show that the proposed FL algorithm can reduce convergence time by 29% compared to a model-free RL method and the standard FL method.
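The core mechanics described above can be illustrated with a minimal sketch: each device quantizes its local model to a chosen bitwidth before transmission, and the server aggregates the quantized models into a global one. The function names, the uniform stochastic quantizer, and the FedAvg-style weighting are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def quantize(weights, bitwidth, w_max=1.0):
    """Uniform stochastic quantization of model weights to `bitwidth` bits.

    Values are clipped to [-w_max, w_max] and mapped to 2**bitwidth levels;
    stochastic rounding keeps the quantizer unbiased in expectation.
    (Illustrative quantizer, not necessarily the one used in the paper.)
    """
    levels = 2 ** bitwidth - 1
    clipped = np.clip(weights, -w_max, w_max)
    scaled = (clipped + w_max) / (2 * w_max) * levels   # map to [0, levels]
    lower = np.floor(scaled)
    prob_up = scaled - lower                            # stochastic rounding
    rounded = lower + (np.random.rand(*weights.shape) < prob_up)
    return rounded / levels * (2 * w_max) - w_max       # map back to [-w_max, w_max]

def aggregate(local_models, sample_counts):
    """Sample-count-weighted average of quantized local models (FedAvg-style)."""
    total = sum(sample_counts)
    return sum((n / total) * w for w, n in zip(local_models, sample_counts))
```

A lower bitwidth shrinks the uplink payload but enlarges the quantization error; the paper's joint device-selection and bitwidth optimization trades these off against the per-iteration delay budget.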

