Training Latency Minimization for Model-Splitting Allowed Federated Edge Learning

07/21/2023
by Yao Wen, et al.

To alleviate the shortage of computing power faced by clients when training deep neural networks (DNNs) with federated learning (FL), we leverage edge computing and split learning to propose a model-splitting allowed FL (SFL) framework, with the aim of minimizing the training latency without loss of test accuracy. Under the synchronized global update setting, the latency to complete a round of global training is determined by the maximum latency among the clients to complete a local training session. Therefore, the training latency minimization problem (TLMP) is modelled as a min-max problem. To solve this mixed-integer nonlinear programming problem, we first propose a regression method to fit the quantitative relationship between the cut layer and the other parameters of an AI model, and thus transform the TLMP into a continuous problem. Considering that the two subproblems involved in the TLMP, namely the cut-layer selection problem for the clients and the computing resource allocation problem for the parameter server, are relatively independent, an alternating-optimization-based algorithm with polynomial time complexity is developed to obtain a high-quality solution to the TLMP. Extensive experiments are performed on the popular DNN model EfficientNetV2 using the MNIST dataset, and the results verify the validity and improved performance of the proposed SFL framework.
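The alternating-optimization idea in the abstract can be illustrated with a minimal sketch. Everything below is assumed for illustration, not taken from the paper: the toy per-client latency model (local compute, activation upload, server compute as simple functions of a relaxed cut point in [0, 1]), the client parameters, and the multiplicative balancing rule for the server's compute budget.

```python
# Hedged sketch of alternating optimization for a min-max training-latency
# problem: alternate between per-client cut-layer selection and server
# compute allocation. All latency formulas here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

K = 4                   # number of clients (assumed)
F_SERVER = 8.0          # total server compute budget (assumed units)

local_speed = rng.uniform(1.0, 3.0, K)    # assumed client compute speeds
uplink_rate = rng.uniform(2.0, 5.0, K)    # assumed client uplink rates

def client_latency(cut, f_server, i):
    """Toy model: a deeper cut means more local compute, a smaller
    activation to upload, and less server-side work."""
    local = cut / local_speed[i]                  # client-side compute
    comm = (1.0 - 0.8 * cut) / uplink_rate[i]     # activation upload
    server = (1.0 - cut) / max(f_server, 1e-9)    # server-side compute
    return local + comm + server

def best_cut(f_server, i, grid=np.linspace(0.05, 0.95, 91)):
    """Subproblem 1: pick the (relaxed, continuous) cut point per client."""
    lats = [client_latency(c, f_server, i) for c in grid]
    return grid[int(np.argmin(lats))]

def allocate_server(cuts, iters=200):
    """Subproblem 2: split the server budget so per-client latencies
    balance out (simple multiplicative update toward the min-max point)."""
    f = np.full(K, F_SERVER / K)
    for _ in range(iters):
        lat = np.array([client_latency(cuts[i], f[i], i) for i in range(K)])
        f *= lat / lat.mean()        # give more compute to slower clients
        f *= F_SERVER / f.sum()      # renormalize to the budget
    return f

# Alternate between the two subproblems.
f = np.full(K, F_SERVER / K)
for _ in range(10):
    cuts = np.array([best_cut(f[i], i) for i in range(K)])
    f = allocate_server(cuts)

# Round latency under synchronized updates = slowest client.
round_latency = max(client_latency(cuts[i], f[i], i) for i in range(K))
```

The min-max structure shows up in the last line: because a global round ends only when the slowest client finishes, the allocation step deliberately pushes compute toward the stragglers rather than minimizing the average.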


Related research

03/19/2023 · Hierarchical Personalized Federated Learning Over Massive Mobile Edge Computing Networks
Personalized Federated Learning (PFL) is a new Federated Learning (FL) p...

01/18/2021 · Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation
Federated learning (FL), as a distributed machine learning paradigm, pro...

05/10/2022 · Client Selection and Bandwidth Allocation for Federated Learning: An Online Optimization Perspective
Federated learning (FL) can train a global model from clients' local dat...

03/26/2023 · Efficient Parallel Split Learning over Resource-constrained Wireless Edge Networks
The increasingly deeper neural networks hinder the democratization of pr...

05/22/2023 · When Computing Power Network Meets Distributed Machine Learning: An Efficient Federated Split Learning Framework
In this paper, we advocate CPN-FedSL, a novel and flexible Federated Spl...

09/02/2022 · Predictive GAN-powered Multi-Objective Optimization for Hybrid Federated Split Learning
As an edge intelligence algorithm for multi-device collaborative trainin...

04/18/2022 · Split Learning over Wireless Networks: Parallel Design and Resource Management
Split learning (SL) is a collaborative learning framework, which can tra...
