Device Scheduling with Fast Convergence for Wireless Federated Learning

11/03/2019
by   Wenqi Shi, et al.

Owing to the increasing need for massive data analysis and model training at the network edge, as well as rising concerns about data privacy, a new distributed training framework called federated learning (FL) has emerged. In each iteration of FL (called a round), edge devices update local models based on their own data and contribute to the global training by uploading the model updates over wireless channels. Due to limited spectrum resources, only a portion of the devices can be scheduled in each round. While most existing work on scheduling focuses on the convergence of FL with respect to rounds, the convergence performance under a total training time budget has not yet been explored. In this paper, a joint bandwidth allocation and scheduling problem is formulated to capture the long-term convergence performance of FL, and it is solved by decoupling it into two sub-problems. For the bandwidth allocation sub-problem, the derived optimal solution suggests allocating more bandwidth to devices with worse channel conditions or weaker computation capabilities. For the device scheduling sub-problem, by revealing the trade-off between the number of rounds required to attain a certain model accuracy and the latency per round, a greedy policy is proposed that repeatedly selects the device consuming the least time for model updating, until a good trade-off between learning efficiency and latency per round is achieved. Experiments show that the proposed policy outperforms other state-of-the-art scheduling policies, achieving the best model accuracy under training time budgets.
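
The greedy scheduling idea in the abstract can be illustrated with a minimal sketch. This is not the authors' exact algorithm: the function `greedy_schedule`, the per-device latency inputs, and the toy `rounds_needed` convergence model below are illustrative assumptions standing in for the paper's derived bound.

```python
# Minimal sketch of the greedy device-scheduling idea: repeatedly add the device
# with the smallest per-round update latency, and stop at the set size that best
# balances the estimated number of rounds to reach a target accuracy against the
# latency per round. `rounds_needed` is a hypothetical placeholder for the
# paper's convergence estimate, not its actual bound.

from typing import Callable, Dict, List


def greedy_schedule(
    update_latency: Dict[str, float],        # per-device time for one local update + upload (s)
    rounds_needed: Callable[[int], float],   # est. rounds to reach target accuracy with k devices scheduled
    time_budget: float,                      # total training time budget (s)
) -> List[str]:
    """Return the subset of devices to schedule in each round."""
    # Fastest devices first: a round finishes only when the slowest scheduled
    # device has uploaded, so adding a slower device raises the per-round latency.
    ordered = sorted(update_latency, key=update_latency.get)

    best_set: List[str] = []
    best_total_time = float("inf")

    for k in range(1, len(ordered) + 1):
        candidate = ordered[:k]
        per_round_latency = update_latency[candidate[-1]]  # slowest scheduled device
        total_time = rounds_needed(k) * per_round_latency

        # Keep the candidate set only if it improves the estimated time to the
        # target accuracy and still fits the training time budget.
        if total_time < best_total_time and total_time <= time_budget:
            best_total_time = total_time
            best_set = candidate

    return best_set


if __name__ == "__main__":
    latencies = {"dev_a": 1.2, "dev_b": 0.8, "dev_c": 2.5, "dev_d": 1.0}
    # Toy convergence model: scheduling more devices per round reduces the number
    # of rounds required, with diminishing returns.
    est_rounds = lambda k: 200.0 / (1.0 + k)
    print(greedy_schedule(latencies, est_rounds, time_budget=300.0))
```

The sketch captures the trade-off the paper exploits: scheduling more devices shortens training in rounds but lengthens each round, so the greedy search stops once adding another (slower) device no longer reduces the estimated total training time.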
