Training Time Minimization for Federated Edge Learning with Optimized Gradient Quantization and Bandwidth Allocation

12/29/2021
by   Peixi Liu, et al.

Training a machine learning model with federated edge learning (FEEL) is typically time-consuming due to the constrained computation power of edge devices and the limited wireless resources in edge networks. In this paper, we investigate the training time minimization problem in a quantized FEEL system, where heterogeneous edge devices send quantized gradients to the edge server via orthogonal channels. In particular, a stochastic quantization scheme is adopted to compress the uploaded gradients, which reduces the per-round communication burden but may increase the number of communication rounds. The training time is modeled by taking into account the communication time, the computation time, and the number of communication rounds. Based on the proposed training time model, the intrinsic trade-off between the number of communication rounds and the per-round latency is characterized. Specifically, we analyze the convergence behavior of quantized FEEL in terms of the optimality gap. Furthermore, a joint data-and-model-driven fitting method is proposed to obtain the exact optimality gap, from which closed-form expressions for the number of communication rounds and the total training time are derived. Subject to a total bandwidth constraint, the training time minimization problem is formulated as a joint optimization of the quantization levels and the bandwidth allocation. To solve it, an algorithm based on alternating optimization is proposed, which alternately solves the quantization subproblem via successive convex approximation and the bandwidth allocation subproblem via bisection search. Experimental results on different learning tasks and models validate our analysis and demonstrate the near-optimal performance of the proposed optimization algorithm.
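The abstract does not spell out the quantizer, but the "stochastic quantization scheme" it refers to is commonly realized as an unbiased, QSGD-style quantizer: each gradient coordinate is scaled by the vector norm, mapped onto one of q uniform levels, and rounded up or down at random so the quantized gradient equals the true gradient in expectation. A minimal sketch under that assumption (the function name and the choice of q levels are illustrative, not taken from the paper):

```python
import numpy as np

def stochastic_quantize(v, q, rng=None):
    """QSGD-style stochastic quantization of a gradient vector.

    Each coordinate's magnitude is normalized by the vector norm,
    scaled to [0, q], and rounded to a neighboring integer level with
    probability proportional to its distance, making the quantizer
    unbiased: E[Q(v)] = v. Larger q means finer levels (more bits per
    coordinate) but fewer communication rounds to converge.
    """
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return np.zeros_like(v)
    scaled = np.abs(v) / norm * q            # each entry in [0, q]
    lower = np.floor(scaled)                 # lower quantization level
    prob_up = scaled - lower                 # chance of rounding up
    levels = lower + (rng.random(v.shape) < prob_up)
    return np.sign(v) * norm * levels / q
```

Each device would only need to transmit the norm, the signs, and the integer levels (about log2(q+1) bits per coordinate), which is the per-round communication saving the paper trades off against a larger round count.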

Related research

11/03/2019 · Device Scheduling with Fast Convergence for Wireless Federated Learning
Owing to the increasing need for massive data analysis and model trainin...

07/24/2021 · Accelerating Federated Edge Learning via Optimized Probabilistic Device Scheduling
The popular federated edge learning (FEEL) framework allows privacy-pres...

07/14/2020 · Joint Device Scheduling and Resource Allocation for Latency Constrained Wireless Federated Learning
In federated learning (FL), devices contribute to the global training by...

05/04/2023 · Emulation Learning for Neuromimetic Systems
Building on our recent research on neural heuristic quantization systems...

05/18/2023 · Q-SHED: Distributed Optimization at the Edge via Hessian Eigenvectors Quantization
Edge networks call for communication efficient (low overhead) and robust...

06/13/2022 · Toward Ambient Intelligence: Federated Edge Learning with Task-Oriented Sensing, Computation, and Communication Integration
In this paper, we address the problem of joint sensing, computation, and...

03/10/2020 · Joint Parameter-and-Bandwidth Allocation for Improving the Efficiency of Partitioned Edge Learning
To leverage data and computation capabilities of mobile devices, machine...
