Toward efficient resource utilization at edge nodes in federated learning

09/19/2023
by   Sadi Alawadi, et al.

Federated learning (FL) enables edge nodes to collaboratively contribute to constructing a global model without sharing their data. This is accomplished by devices computing local, private model updates that are then aggregated by a server. However, computational resource constraints and network communication can become severe bottlenecks for the larger model sizes typical of deep learning applications. Edge nodes tend to have limited hardware resources (RAM, CPU), and network bandwidth and reliability at the edge are a concern for scaling federated fleet applications. In this paper, we propose and evaluate an FL strategy inspired by transfer learning that reduces resource utilization on devices, as well as the load on the server and network, in each global training round. For each local model update, we randomly select layers to train and freeze the remaining part of the model. In doing so, we reduce both server load and communication cost per round by excluding all untrained layer weights from the transfer to the server. The goal of this study is to empirically explore the potential trade-off between resource utilization on devices and global model convergence under the proposed strategy. We implement the approach using the federated learning framework FEDn. A number of experiments were carried out over different datasets (CIFAR-10, CASA, and IMDB), performing different tasks using different deep-learning model architectures. Our results show that training the model partially accelerates the training process, uses on-device resources efficiently, and reduces data transmission by around 75%, roughly in proportion to the fraction of frozen model layers, without harming the resulting global model accuracy.
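The core mechanism can be illustrated with a minimal, framework-free sketch. Here a model is represented as a plain dict of per-layer weight lists, each client trains a random subset of layers, and the server merges partial updates, keeping the previous global weights for layers no client trained. All names and the toy "training" step are hypothetical illustrations, not the paper's actual FEDn implementation:

```python
import random

def select_trainable_layers(layer_names, train_fraction, seed=None):
    """Randomly choose which layers to train this round; the rest stay frozen."""
    rng = random.Random(seed)
    k = max(1, round(train_fraction * len(layer_names)))
    return set(rng.sample(layer_names, k))

def local_update(model, trainable, lr=0.1):
    """Simulate one local training round on a client.

    Returns a partial update containing ONLY the trained layers, so the
    frozen layer weights are never transmitted to the server.
    """
    update = {}
    for name, weights in model.items():
        if name in trainable:
            # stand-in for real gradient steps: nudge each weight slightly
            update[name] = [w - lr * 0.01 for w in weights]
    return update

def server_aggregate(global_model, partial_updates):
    """FedAvg-style merge over partial updates: average each layer over the
    clients that actually trained it; untouched layers keep the previous
    global weights."""
    new_model = dict(global_model)
    for name in global_model:
        contribs = [u[name] for u in partial_updates if name in u]
        if contribs:
            new_model[name] = [sum(ws) / len(contribs) for ws in zip(*contribs)]
    return new_model

# Toy 4-layer model; each client trains 25% of the layers per round.
model = {f"layer{i}": [1.0, 1.0] for i in range(4)}
trainable = select_trainable_layers(list(model), 0.25, seed=0)
update = local_update(model, trainable)
print(len(update), "of", len(model), "layers transmitted")  # 1 of 4
new_global = server_aggregate(model, [update])
```

With 25% of layers trained, the payload shrinks to one layer out of four, which mirrors the paper's observation that communication savings scale with the fraction of frozen layers.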


