Accelerating DNN Training in Wireless Federated Edge Learning System

05/23/2019
by Jinke Ren, et al.

Training classical machine learning models, such as deep neural networks (DNNs), is generally carried out at a remote, computationally adequate cloud center for centralized learning, which is typically time-consuming and resource-hungry. It also raises serious privacy issues and incurs long communication latency, since massive amounts of data must be transmitted to the centralized node. To overcome these shortcomings, we consider a newly emerged framework, namely federated edge learning (FEEL), in which the edge server aggregates the users' local learning updates instead of their raw data. Aiming to accelerate the training process while guaranteeing learning accuracy, we first define a novel performance evaluation criterion, called learning efficiency, and formulate a training acceleration optimization problem for the CPU scenario, where each user device is equipped with a CPU. We derive closed-form expressions for joint batch-size selection and communication resource allocation and highlight several insightful results. We then extend our learning framework to the GPU scenario and propose a novel training function to characterize the learning behavior of general GPU modules. The optimal solution in this case is shown to have a structure similar to that of the CPU scenario, suggesting that our proposed algorithm is applicable to more general systems. Finally, extensive experiments validate our theoretical analysis and demonstrate that our proposal can simultaneously reduce the training time and improve the learning accuracy.
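The abstract's core trade-off, that a larger batch size raises per-round computation time while the upload time of the local update depends on the allocated bandwidth, can be illustrated with a toy per-round time model. The functions below are a minimal sketch under assumed models (compute time linear in batch size, upload time set by Shannon-style bandwidth times spectral efficiency, and "learning efficiency" proxied by samples processed per second of wall-clock time); they are not the paper's actual formulation or closed-form solution.

```python
# Illustrative sketch of the batch-size vs. communication trade-off in FEEL.
# All models here (round_time, learning_efficiency) are assumptions for
# intuition only, not the paper's formulation.

def round_time(batch_size, cpu_cycles_per_sample, cpu_freq_hz,
               model_bits, bandwidth_hz, spectral_eff_bps_per_hz):
    """Per-user round time = local computation time + update upload time."""
    compute_s = batch_size * cpu_cycles_per_sample / cpu_freq_hz
    upload_s = model_bits / (bandwidth_hz * spectral_eff_bps_per_hz)
    return compute_s + upload_s

def learning_efficiency(batch_size, round_time_s):
    """Assumed proxy: training samples processed per second of wall time."""
    return batch_size / round_time_s

# Example: with upload time dominating the round, a larger batch amortizes
# the fixed communication cost and improves this efficiency proxy.
t_small = round_time(32, 1e6, 2e9, 8e6, 1e6, 2.0)    # ~4.02 s
t_large = round_time(256, 1e6, 2e9, 8e6, 1e6, 2.0)   # ~4.13 s
eff_small = learning_efficiency(32, t_small)
eff_large = learning_efficiency(256, t_large)
```

Under these assumed numbers the upload (4 s) dwarfs the computation (tens of milliseconds), so raising the batch size barely lengthens the round while processing far more samples; the paper's joint optimization balances exactly this kind of coupling, but per-user and with bandwidth allocation as a decision variable.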

