Towards Optimality in Parallel Scheduling

07/22/2017
by Benjamin Berg et al.

To keep pace with Moore's law, chip designers have focused on increasing the number of cores per chip rather than single-core performance. Consequently, modern jobs are often designed to run on any number of cores. To effectively leverage these multi-core chips, however, one must decide how many cores to assign to each job. Because jobs receive sublinear speedups from additional cores, there is an obvious tradeoff: allocating more cores to an individual job reduces that job's runtime, but decreases the efficiency of the overall system. We ask how the system should schedule jobs across cores so as to minimize the mean response time over a stream of incoming jobs. To answer this question, we develop an analytical model of jobs running on a multi-core machine. We prove that EQUI, a policy which continuously divides cores evenly across jobs, is optimal when all jobs follow a single speedup curve and have exponentially distributed sizes. EQUI requires jobs to change their level of parallelization while they run. Since this is not possible for all workloads, we consider a class of "fixed-width" policies, which choose a single level of parallelization, k, to use for all jobs. We prove that, surprisingly, it is possible to match EQUI's performance without requiring jobs to change their level of parallelization, by using the optimal fixed level of parallelization, k*. We also show how to analytically derive the optimal k* as a function of the system load, the speedup curve, and the job size distribution. When jobs may follow different speedup curves, finding a good scheduling policy is even more challenging. In this setting, policies like EQUI that performed well for a single speedup curve now perform poorly. We propose a very simple policy, GREEDY*, which performs near-optimally when compared to the numerically derived optimal policy.
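To make the tradeoff concrete, here is a minimal simulation sketch (in Python) contrasting EQUI with a fixed-width policy on a single multi-core machine. Everything in it is an illustrative assumption rather than the paper's model or results: the speedup curve s(k) = k^p, the Poisson arrival rate, the exponential job sizes, and all parameter values are placeholders chosen for the example.

```python
import math
import random

def speedup(k, p=0.5):
    # Assumed sublinear speedup curve s(k) = k**p; the paper's curves may differ.
    return k ** p

def simulate(policy, n_cores=16, lam=1.0, mean_size=3.0, k_fixed=4,
             n_jobs=100_000, seed=0):
    """Event-driven simulation of one multi-core machine.

    policy: 'EQUI'  -> cores are continuously divided evenly across all jobs
            'FIXED' -> each job runs on k_fixed cores; excess jobs wait FCFS
    Jobs arrive Poisson(lam) with Exp(mean_size) amounts of inherent work.
    Returns the mean response time over n_jobs completed jobs.
    """
    rng = random.Random(seed)
    t = 0.0
    next_arrival = rng.expovariate(lam)
    arrivals_left = n_jobs
    running, waiting = [], []      # each job is [remaining_work, arrival_time]
    total_resp, completed = 0.0, 0

    def per_job_rate(n_running):
        if n_running == 0:
            return 0.0
        if policy == 'EQUI':
            return speedup(n_cores / n_running)
        return speedup(k_fixed)

    while completed < n_jobs:
        r = per_job_rate(len(running))
        t_finish = min((w / r for w, _ in running), default=math.inf) if r > 0 else math.inf
        t_arrive = (next_arrival - t) if arrivals_left > 0 else math.inf
        dt = min(t_finish, t_arrive)
        for job in running:            # every running job progresses at rate r
            job[0] -= r * dt
        t += dt
        if t_arrive <= t_finish:       # next event is an arrival
            job = [rng.expovariate(1.0 / mean_size), t]
            if policy == 'EQUI' or len(running) < n_cores // k_fixed:
                running.append(job)
            else:
                waiting.append(job)
            arrivals_left -= 1
            next_arrival = t + rng.expovariate(lam)
        else:                          # next event is a completion
            done = min(running, key=lambda j: j[0])
            running.remove(done)
            total_resp += t - done[1]
            completed += 1
            if policy == 'FIXED' and waiting:
                running.append(waiting.pop(0))
    return total_resp / completed

if __name__ == '__main__':
    print('EQUI      :', simulate('EQUI'))
    for k in (1, 2, 4, 8, 16):         # sweep the fixed width
        print(f'FIXED k={k:2d}:', simulate('FIXED', k_fixed=k))
```

Sweeping k_fixed in the driver loop approximates the search for the optimal fixed level of parallelization k* described in the abstract; as the abstract notes, the value of k* depends on the system load, the speedup curve, and the job size distribution.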
