Partitioning Distributed Compute Jobs with Reinforcement Learning and Graph Neural Networks

From natural language processing to genome sequencing, large-scale machine learning models are bringing advances to a broad range of fields. Many of these models are too large to be trained on a single machine, and instead must be distributed across multiple devices. This has motivated research into new compute and network systems capable of handling such tasks. In particular, recent work has focused on developing management schemes which decide how to allocate distributed resources such that some overall objective, such as minimising the job completion time (JCT), is optimised. However, such studies omit explicit consideration of how much a job should be distributed, usually assuming that maximum distribution is desirable. In this work, we show that maximum parallelisation is sub-optimal in relation to user-critical metrics such as throughput and blocking rate. To address this, we propose PAC-ML (partitioning for asynchronous computing with machine learning). PAC-ML leverages a graph neural network and reinforcement learning to learn how much to partition computation graphs such that the number of jobs which meet arbitrary user-defined JCT requirements is maximised. In experiments with five real deep learning computation graphs on a recently proposed optical architecture across four user-defined JCT requirement distributions, we demonstrate PAC-ML achieving up to 56.2% lower blocking rates in dynamic job arrival settings than the canonical maximum parallelisation strategy used by most prior works.
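The abstract's central claim, that maximum parallelisation can be sub-optimal, can be illustrated with a toy model (this is an illustrative sketch, not PAC-ML itself; the cost model and all parameter values below are assumptions): dividing a job across more workers shrinks per-worker compute but adds communication and synchronisation overhead, so JCT is minimised at an intermediate partition degree.

```python
def jct(total_compute, comm_per_worker, n_workers):
    """Toy JCT model (assumed, for illustration only): compute time is
    divided evenly across workers, while each additional worker adds a
    fixed communication/synchronisation cost."""
    return total_compute / n_workers + comm_per_worker * (n_workers - 1)

# Sweep partition degrees 1..32 and pick the one with the lowest JCT.
best = min(range(1, 33), key=lambda n: jct(100.0, 1.5, n))

print(best)                        # intermediate degree, not 32
print(jct(100.0, 1.5, best))       # lower JCT than...
print(jct(100.0, 1.5, 32))         # ...maximum parallelisation
```

Under these assumed costs the optimum is 8 workers, and partitioning across all 32 workers roughly doubles the JCT relative to that optimum, which is the kind of gap a learned partitioning policy can exploit.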

Related research

- Large-scale Machine Learning Cluster Scheduling via Multi-agent Graph Reinforcement Learning (12/26/2021): Efficient scheduling of distributed deep learning (DL) jobs in large GPU...
- TonY: An Orchestrator for Distributed Machine Learning Jobs (03/24/2019): Training machine learning (ML) models on large datasets requires conside...
- CASSINI: Network-Aware Job Scheduling in Machine Learning Clusters (08/01/2023): We present CASSINI, a network-aware job scheduler for machine learning (...
- Connection Management xAPP for O-RAN RIC: A Graph Neural Network and Reinforcement Learning Approach (10/14/2021): Connection management is an important problem for any wireless network t...
- Resolvable Designs for Speeding up Distributed Computing (08/14/2019): Distributed computing frameworks such as MapReduce are often used to pro...
- When Two is Worse Than One (09/19/2019): This note is concerned with the impact on job latency of splitting a tok...
- Improving Inference Performance of Machine Learning with the Divide-and-Conquer Principle (01/12/2023): Many popular machine learning models scale poorly when deployed on CPUs....
