Optimal Task Allocation for Mobile Edge Learning with Global Training Time Constraints

06/12/2020
by Umair Mohammad, et al.

This paper proposes to minimize the loss of training a distributed machine learning (ML) model on nodes, or learners, connected via a resource-constrained wireless edge network by jointly optimizing the number of local and global updates and the task size allocated to each learner. The optimization accounts for the heterogeneous communication and computation capabilities of the learners. It is shown that the problem of interest cannot be solved analytically; however, by leveraging existing bounds on the difference between the optimal loss and the loss at any given iteration, an expression for the objective is derived as a function of the number of local updates. The resulting problem is shown to be convex, so it can be solved by finding the number of local updates that minimizes the loss. This result is then used to determine the batch size of each learner for the next global update step. The merits of the proposed heterogeneity-aware (HA) solution are demonstrated by comparing its performance to a heterogeneity-unaware (HU) approach.
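The abstract describes a per-round allocation loop: choose the number of local updates that minimizes a convex bound on the loss gap, then set each learner's batch size so that its computation and communication fit within the global training-time budget. Since the paper's actual bound and allocation rule are not reproduced here, the Python sketch below only illustrates that loop; the time budget T_total, the per-learner constants compute_per_sample and comm_time, and the surrogate loss_gap_bound are assumptions introduced for this example, not the paper's derived expressions.

```python
import math

def batch_sizes(tau, round_budget, compute_per_sample, comm_time):
    """Largest batch each learner can process in tau local updates plus
    one model exchange within the per-round time budget (illustrative rule)."""
    sizes = []
    for c_k, t_k in zip(compute_per_sample, comm_time):
        usable = round_budget - t_k            # time left for local computation
        sizes.append(max(0, math.floor(usable / (tau * c_k))))
    return sizes

def loss_gap_bound(tau, num_rounds, total_samples, beta=1e-3, gamma=1.0):
    """Toy convex surrogate for the gap between the optimal loss and the
    achieved loss: more local updates per round help (first term) but
    increase local-model divergence (second term); larger batches reduce
    gradient noise (third term). Not the bound derived in the paper."""
    return (1.0 / (num_rounds * tau)
            + beta * (tau - 1)
            + gamma / max(1, total_samples))

def heterogeneity_aware_allocation(T_total, num_rounds,
                                   compute_per_sample, comm_time, tau_max=50):
    """Scan over the number of local updates tau; because the objective is
    convex in tau (as stated in the abstract), a simple 1-D search finds
    the minimizer and the matching per-learner batch sizes."""
    round_budget = T_total / num_rounds
    best = None
    for tau in range(1, tau_max + 1):
        sizes = batch_sizes(tau, round_budget, compute_per_sample, comm_time)
        bound = loss_gap_bound(tau, num_rounds, sum(sizes))
        if best is None or bound < best[0]:
            best = (bound, tau, sizes)
    return best  # (surrogate bound, chosen tau, per-learner batch sizes)

if __name__ == "__main__":
    # Three heterogeneous learners: seconds per sample and per-round comm time
    # (hypothetical values).
    compute = [0.002, 0.005, 0.010]
    comm = [0.5, 1.0, 2.0]
    bound, tau, sizes = heterogeneity_aware_allocation(
        T_total=600.0, num_rounds=20,
        compute_per_sample=compute, comm_time=comm)
    print(f"tau = {tau}, batch sizes = {sizes}, bound = {bound:.4f}")
```

An HU baseline, as contrasted in the abstract, would presumably assign the same task size to every learner regardless of its compute and communication times, so the slowest learner would dictate the round duration; the HA allocation above sizes each batch to the individual learner's budget instead.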

