Bottleneck Time Minimization for Distributed Iterative Processes: Speeding Up Gossip-Based Federated Learning on Networked Computers

06/29/2021
by Mehrdad Kiamari, et al.

We present a novel task scheduling scheme for accelerating computational applications involving distributed iterative processes that are executed on networked computing resources. Such an application consists of multiple tasks, each of which outputs data at every iteration to be processed by neighboring tasks; these dependencies between the tasks can be represented as a directed graph. We first mathematically formulate the problem as a Binary Quadratic Program (BQP), accounting for both computation and communication costs, and show that the problem is NP-hard. We then relax the problem to a Semi-Definite Program (SDP) and apply a randomized rounding technique based on sampling from a suitably-formulated multivariate Gaussian distribution. Furthermore, we derive the expected value of the bottleneck time. Finally, we apply our proposed scheme to gossip-based federated learning as an application of iterative processes. Through numerical evaluations on the MNIST and CIFAR-10 datasets, we show that our proposed approach outperforms well-known scheduling techniques from distributed computing. In particular, for arbitrary settings, it reduces bottleneck time by 91% compared to HEFT and by 84% compared to throughput HEFT.
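To make the pipeline concrete, here is a minimal sketch of the SDP-relax-then-round idea for a toy two-machine instance. Everything in it is illustrative rather than the paper's exact formulation: the compute/communication costs are random, only the cross-machine communication term enters the relaxed objective (the per-machine compute load is handled heuristically during rounding), and the rounding step uses Goemans-Williamson-style hyperplane rounding, i.e. sampling a Gaussian with covariance equal to the SDP solution, in the same spirit as the paper's suitably-formulated Gaussian sampling. It assumes numpy and cvxpy with the SCS solver.

```python
# Hypothetical sketch (not the paper's formulation): assign n tasks, linked
# by a dependency graph, to 2 machines so as to reduce bottleneck time
# (max per-machine compute load plus cross-machine communication).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_tasks = 6
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]        # task dependency graph
compute = rng.uniform(1.0, 4.0, size=(n_tasks, 2))      # c[i, k]: task i on machine k
comm = {e: rng.uniform(0.5, 2.0) for e in edges}        # d[i, j]: cost if i, j are split

# For 2 machines, encode task i's assignment as s_i in {-1, +1}. The
# cross-machine cost d_ij * (1 - s_i s_j) / 2 is quadratic in s (a BQP).
# SDP relaxation: lift S = s s^T, keep diag(S) = 1 and S positive semidefinite.
S = cp.Variable((n_tasks, n_tasks), symmetric=True)
constraints = [S >> 0, cp.diag(S) == 1]
relaxed_cut = sum(comm[(i, j)] * (1 - S[i, j]) / 2 for (i, j) in edges)
cp.Problem(cp.Minimize(relaxed_cut), constraints).solve(solver=cp.SCS)

def bottleneck(s):
    """Max per-machine compute load plus total cross-machine communication."""
    loads = np.zeros(2)
    for i, si in enumerate(s):
        k = 0 if si < 0 else 1
        loads[k] += compute[i, k]
    cut = sum(comm[(i, j)] for (i, j) in edges if s[i] * s[j] < 0)
    return loads.max() + cut

# Randomized rounding: sample g ~ N(0, S*) and assign by sign(g); keep the
# best of several samples. The small diagonal shift guards the Cholesky
# factorization against solver-level numerical error.
L = np.linalg.cholesky(S.value + 1e-6 * np.eye(n_tasks))
best = min(bottleneck(np.where(L @ rng.standard_normal(n_tasks) >= 0, 1, -1))
           for _ in range(200))
print(f"best rounded bottleneck time: {best:.3f}")
```

The application driving the evaluation is gossip-based federated learning, in which each node repeatedly mixes its model parameters with those of its graph neighbors, producing exactly the kind of iterative, neighbor-to-neighbor dataflow the scheduler targets. A minimal illustration of one such iteration, assuming a hypothetical doubly stochastic mixing matrix W (not taken from the paper):

```python
import numpy as np

def gossip_round(params: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One gossip iteration: each node replaces its model parameters with a
    weighted average of its neighbors' parameters. params has shape
    (n_nodes, dim); W is a doubly stochastic mixing matrix aligned with the
    communication graph (W[i, j] > 0 only if nodes i and j are neighbors)."""
    return W @ params
```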

