1. Introduction
Nearly all modern data centers serve workloads which are capable of exploiting parallelism. When a job parallelizes across multiple servers it will complete more quickly. However, it is unclear how best to share a limited number of servers between many parallelizable jobs.
In this paper we consider a typical scenario in which a data center composed of $N$ servers is tasked with completing a set of $M$ parallelizable jobs, where typically $M$ is much smaller than $N$. In our scenario, each job has a different inherent size (service requirement) which is known up front to the system. In addition, each job can be run on any number of servers at any moment in time. These assumptions are reasonable for many parallelizable workloads such as training neural networks using TensorFlow
(abadi2016tensorflow; lin2018model). Our goal in this paper is to allocate servers to jobs so as to minimize the mean flow time across all jobs, where the flow time of a job is the time until the job leaves the system. (We will also consider the problem of minimizing makespan, but this turns out to be fairly easy, so we defer that discussion.) What makes this problem difficult is that jobs receive a concave, sublinear speedup from parallelization: jobs have a decreasing marginal benefit from being allocated additional servers (see Figure 1). Hence, in choosing a job to receive each additional server, one must keep the overall efficiency of the system in mind. The goal of this paper is to determine the optimal allocation of servers to jobs when all jobs follow a realistic sublinear speedup function.

It is clear that the optimal allocation policy will depend heavily on the jobs' speedup, that is, on how parallelizable the jobs being run are. To see this, first consider the case where jobs are embarrassingly parallel. In this case, the entire data center can be viewed as a single server that can be perfectly utilized by or shared between jobs. Hence, from the single-server scheduling literature, it is known that the Shortest Remaining Processing Time policy (SRPT) minimizes the mean flow time across jobs (smith1978new). By contrast, if we consider the case where jobs are hardly parallelizable, a single job receives very little benefit from additional servers. In this case, the optimal policy is to divide the system equally between jobs, a policy called EQUI. In practice, a realistic speedup function usually lies somewhere between these two extremes, and thus we must balance a tradeoff between the SRPT and EQUI policies in order to minimize mean flow time. Specifically, since jobs are partially parallelizable, it is still beneficial to allocate more servers to smaller jobs than to large jobs.
The optimal policy with respect to mean flow time must split the difference between these policies, figuring out how to favor short jobs while still respecting the overall efficiency of the system.
1.1. Prior Work
Despite the prevalence of parallelizable data center workloads, it is not known, in general, how to optimally allocate servers across a set of parallelizable jobs. The state-of-the-art in production systems is to let each user decide their job's allocation by reserving the resources they desire (verma2015large), and then to allow the system to pack jobs onto servers (ren2016clairvoyant). This lets users reserve resources greedily, which can lead to low system efficiency. We seek to improve upon the status quo by allowing the system to choose the allocation of servers to each job.
The closest work to the results presented in this paper is (lin2018model), which considers jobs that follow a realistic speedup function and have known, generally distributed sizes. Similarly to our work, (lin2018model) allows server allocations to change over time. However, while (lin2018model) proposes and evaluates several heuristic policies, it makes no theoretical guarantees about their performance.
Other related work from the performance modeling community, (berg2018), assumes that jobs follow a concave speedup function and allows server allocations to change over time. However, unlike our work, (berg2018) assumes that job sizes are unknown and are drawn from an exponential distribution.
(berg2018) concludes that EQUI is the optimal allocation policy. However, assuming unknown, exponentially distributed job sizes is highly pessimistic, since it means job sizes are impossible to predict, even as a job ages.

There has also been work on how to allocate servers to jobs which follow arbitrary speedup functions. Of this work, the closest to our model is (im2016competitively), which also considers jobs of known size. Conversely, (Edmonds1999SchedulingIT; edmonds2009scalably; Agrawal:2016:SPJ:2935764.2935782) all consider jobs of unknown size. This work is all through the lens of competitive analysis, which assumes that job sizes, arrival times, and even speedup functions are adversarially chosen. It concludes that a variant of EQUI is speed competitive with the optimal policy when job sizes are unknown (edmonds2009scalably), and that a combination of SRPT and EQUI is competitive when job sizes are known (im2016competitively).
The SPAA community often models each job as a DAG of interdependent tasks (blumofe1999scheduling; bampis2014note; bodik2014brief; narlikar1999space). This DAG encodes precedence constraints between tasks, and thus implies how parallelizable a job is at every moment in time. Given such a detailed model, it is not even clear how to optimally schedule a single DAG job on many servers in order to minimize the job's completion time (chowdhury2013oblivious). The problem only gets harder if tasks are allowed to run on multiple servers (du1989complexity; chen2018improved). Other prior work considers the scheduling of several DAG jobs onto several servers in order to minimize the mean flow time, makespan, or profit of the set of jobs (chudak1999approximation; hall1997scheduling; chekuri2001approximation; agrawal2018scheduling). All of this work is fundamentally different from our model in that it models parallelism in a much more fine-grained way. Our hope is that by modeling parallelism through the use of speedup functions, we can address problems that would be intractable in the DAG model.
Our model also shares some similarities with the coflow scheduling problem (jahanjou2017asymptotically; chowdhury2014efficient; qiu2015minimizing; shafiee2017brief; khuller2016brief). In coflow scheduling, one must allocate a continuously divisible resource, link bandwidth, to a set of network flows so as to minimize mean flow time. Unlike our model, there is no explicit notion of a flow's speedup function here. Given that this problem is NP-hard, prior work examines the problem via heuristic policies (chowdhury2014efficient) and approximation algorithms (qiu2015minimizing; jahanjou2017asymptotically).
1.2. Our Model
Our model assumes that there are $N$ identical servers which must be allocated across a set of $M$ parallelizable jobs. All jobs are present at time $0$. Job $i$ is assumed to have some inherent size $x_i$ where, WLOG, $x_1 \ge x_2 \ge \cdots \ge x_M$.
We assume that all jobs follow the same speedup function, $s$, which is of the form
$$s(k) = k^p$$
for some $0 < p < 1$. Specifically, if a job of size $x$ is allocated $k$ servers, it will complete at time $x / s(k)$.
In general, the number of servers allocated to a job can change over the course of the job's lifetime. It therefore helps to think of $s(k)$ as a rate of service, where the remaining size of job $i$ after running on $k$ servers for a length of time $t$ is $x_i - s(k) \cdot t$. (WLOG we assume the service rate of a single server to be 1. More generally, we could assume the rate of each server to be $r$, which would simply replace $s(k)$ by $r \cdot s(k)$ in every formula.)
We choose the family of functions $s(k) = k^p$ because these functions (i) are sublinear and concave, (ii) can be fit to a variety of empirically measured speedup functions (see Figure 2), and (iii) simplify the analysis. Note that (Hill:2008:ALM:1449375.1449387) assumes that speedup follows Amdahl's Law, $s(k) = \frac{1}{(1-f) + f/k}$, where $f$ is the parallelizable fraction of the work, and explicitly notes that using speedup functions of another form does not significantly impact their results.
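As a quick numerical illustration of these properties (a minimal sketch; the function name and the example value $p = 0.5$ are ours, not from the paper):

```python
# Speedup function s(k) = k^p with 0 < p < 1, as assumed in the model.
def s(k, p=0.5):
    return k ** p

# Sublinearity: doubling the number of servers less than doubles the speedup.
print(s(100) / s(50))                               # ~1.41, not 2

# Concavity: the marginal benefit of each additional server is decreasing.
print([s(k + 1) - s(k) for k in range(1, 5)])       # strictly decreasing
```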
In general, we assume that there is some policy, $\pi$, which allocates servers to jobs at every time, $t$. When we talk about the system at time $t$, we will use $m^\pi(t)$ to denote the number of remaining jobs in the system, and $x_i^\pi(t)$ to denote the remaining size of job $i$. We also denote the completion time of job $i$ under policy $\pi$ as $T_i^\pi$. When the policy is implied, we will drop the superscript.
In general, a policy will complete jobs in a particular order. We define
$$\sigma = (\sigma_1, \sigma_2, \ldots, \sigma_M)$$
to be a permutation of $(1, 2, \ldots, M)$ which specifies a completion order of jobs. If a policy follows the completion order $\sigma$, then for any $i < j$, job $\sigma_i$ completes after job $\sigma_j$. Specifically, job $\sigma_1$ is the last job to complete and job $\sigma_M$ is the first job to complete under the completion order $\sigma$.
A policy, denoted $\pi^\sigma$, is said to be optimal with respect to the completion order $\sigma$ if it achieves the lowest mean flow time of any policy which follows the completion order $\sigma$.
We will assume that the number of servers allocated to a job need not be discrete. In general, we will think of the $N$ servers as a single, continuously divisible resource. Hence, the policy with completion order $\sigma$ can be defined by an allocation function
$$\theta^\sigma(t) = (\theta_1^\sigma(t), \theta_2^\sigma(t), \ldots, \theta_M^\sigma(t)).$$
Here, $0 \le \theta_i^\sigma(t) \le 1$ for each job $i$, and $\sum_{i=1}^{M} \theta_i^\sigma(t) \le 1$. An allocation of $\theta_i^\sigma(t)$ denotes that under policy $\pi^\sigma$, at time $t$, job $i$ receives a speedup of $s(\theta_i^\sigma(t))$, where WLOG we normalize the capacity of the entire server farm to 1.
We will denote the allocation function of the optimal policy which minimizes mean flow time as $\theta^*$. Similarly, we let $m^*(t)$, $x_i^*(t)$, $T_i^*$, and $T^*$ denote the corresponding quantities under the optimal policy.
1.3. Why Server Allocation is Counterintuitive
Consider a simple system with $N$ servers and $M = 2$ identical jobs of size 1, where $s(k) = \sqrt{k}$, and where we wish to minimize mean flow time. One intuitive argument would be that, since everything in this system is symmetric, the optimal allocation should be symmetric. Hence, one might think to allocate half the servers to job one and half the servers to job two. Interestingly, while this does minimize the makespan of the jobs, it does not minimize their flow time. Alternately, a queueing theorist might look at the same problem and say that to minimize flow time, we should use the SRPT policy, allocating all servers to job one and then all servers to job two. However, this causes the system to be very inefficient. We will show that the optimal policy in this case is to allocate $3/4$ of the servers to job one and $1/4$ of the servers to job two. In our simple, symmetric system, the optimal allocation is very asymmetric! Note that this asymmetry is not an artifact of the form of the speedup function used. If we had instead assumed that $s$ was Amdahl's Law (Hill:2008:ALM:1449375.1449387) with a parallelizable fraction less than 1, the optimal split would again allocate more than half of the system to one of the jobs. If we imagine a set of arbitrarily sized jobs, one suspects that the optimal policy again favors shorter jobs, but calculating the exact allocations for this policy is not trivial.
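The $3/4$ versus $1/4$ split can be checked numerically. The sketch below (the function name is ours; it assumes the normalized model above with $s(k) = \sqrt{k}$) grid-searches over the fraction $\theta$ given to the first-completing job and minimizes total flow time for two unit-size jobs:

```python
# Two jobs of size 1. The job finishing first gets fraction theta of the
# system; the other gets 1 - theta until the first departs, then everything.
def total_flow_time(theta, p=0.5):
    t1 = 1.0 / theta ** p                  # first job's completion time
    rem = 1.0 - (1.0 - theta) ** p * t1    # second job's remaining size at t1
    return t1 + (t1 + rem)                 # sum of the two flow times

# Grid search for the optimal split.
best = min((total_flow_time(i / 10000), i / 10000) for i in range(1, 10000))
print(best[1])   # ~0.75: three quarters of the system to one job
```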
1.4. Contributions
The contributions of this paper are as follows:


We derive the first closed-form expression for the optimal allocation of servers to jobs which minimizes mean flow time across jobs. At any moment in time $t$ we define
$$\theta_i^*(t) = \left(\frac{i}{m^*(t)}\right)^{\frac{1}{1-p}} - \left(\frac{i-1}{m^*(t)}\right)^{\frac{1}{1-p}}, \qquad 1 \le i \le m^*(t),$$
where $m^*(t)$ denotes the number of remaining jobs at time $t$ under the optimal allocation, and where $\theta_i^*(t)$ denotes the fraction of the $N$ servers allocated to job $i$ at time $t$. Our optimal allocation balances the size-awareness of SRPT and the high efficiency of EQUI. We thus refer to our optimal policy as High Efficiency SRPT (heSRPT) (see Theorem 7). We also provide a closed-form expression for the mean flow time under heSRPT (see Theorem 8).

While we can analyze the mean flow time under heSRPT, other policies in the literature, such as HELL (lin2018model) and KNEE (lin2018model), are not analytically tractable. We therefore perform a numerical evaluation comparing heSRPT to the other policies proposed in the literature (see Section 4).

We have thus far focused on minimizing mean flow time; however, one might also ask how to minimize the makespan of the set of jobs, i.e., the completion time of the last job. Minimizing makespan turns out to be easy in our setting. We therefore present makespan minimization as a warmup (see Section 2). We find that minimizing makespan favors long jobs while maintaining high efficiency. Thus, we name the optimal policy for minimizing makespan High Efficiency Longest Remaining Processing Time (heLRPT).
2. A Warmup: Minimizing Makespan
While we have thus far discussed the problem of minimizing the mean flow time of a set of jobs, this section will discuss the simpler problem of instead minimizing the makespan of the jobs, the time until the last job completes. Makespan is important in applications such as MapReduce, where a scheduler tries to minimize the makespan of a set of parallelizable "map tasks" (zhu2014minimizing) which must all be completed before the results of the data analysis can be retrieved. The problem of minimizing makespan, while hard in general, turns out to be fairly simple in our model. We will prove an important property of the optimal policy in Theorem 1, and use this property in Theorem 2 to derive the exact allocation function, $\theta^*$, which minimizes makespan.
2.1. Favoring Long Jobs Using heLRPT
We begin by proving that the optimal policy with respect to makespan must complete all jobs at the same time. This is stated formally in Theorem 1.
Theorem 1.
Let $T_i^*$ be the completion time of job $i$ under the allocation function $\theta^*$ which minimizes makespan. Then,
$$T_1^* = T_2^* = \cdots = T_M^*.$$
Proof.
Assume for contradiction that not all jobs complete at the same time under $\theta^*$. Let $i$ be the first job to complete and let $j$ be the last job to complete. We can now imagine an allocation function, $\theta'$, which reallocates some fraction of the system from job $i$ to job $j$. Specifically, we choose some small $\epsilon > 0$ and say that $\theta'$ gives an $\epsilon$ fraction less of the system to job $i$ than $\theta^*$ does; $\theta'$ then divides the extra $\epsilon$ fraction of the system equally amongst all jobs that finished after job $i$ under the optimal policy, reducing the completion time of all of these jobs while only hurting job $i$ slightly. We can choose $\epsilon$ to be small enough that job $i$ is still not the last job to complete under $\theta'$. Furthermore, since we can choose $\epsilon$ to be arbitrarily small, the increase in job $i$'s completion time can be made arbitrarily small. Since $\theta'$ improves the makespan over $\theta^*$, we have a contradiction. ∎
Theorem 1 tells us that, in finding the optimal policy to minimize makespan, we need only consider policies under which all job completion times are equal. Since all job completion times are convex functions of their allocations, there exists only one policy which equalizes the completion times of the jobs. This policy must therefore be optimal. Theorem 2 presents a closed-form for the optimal allocation function, $\theta^*$.
Theorem 2.
Let
$$V = \sum_{i=1}^{M} x_i^{1/p}$$
and let $T^*$ be the optimal makespan. Then
$$T^* = V^p,$$
and the optimal policy with respect to makespan is given by the allocation function
$$\theta_i^*(t) = \frac{x_i^{1/p}}{V} \qquad \forall\, 0 \le t \le T^*,\ 1 \le i \le M.$$
Proof.
The proof is straightforward; see Appendix A. ∎
Theorem 2 says that the optimal policy allocates a larger fraction of the system to longer jobs at every moment in time. This is similar to the Longest Remaining Processing Time (LRPT) policy, except that LRPT gives strict priority to the (one) largest job in the system. We thus refer to the optimal policy for minimizing makespan as High Efficiency LRPT (heLRPT).
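A minimal sketch of heLRPT (function names and the example sizes are ours), assuming the normalized model in which allocations are fractions of a unit resource and a job with allocation $\theta$ runs at rate $\theta^p$, with allocations proportional to $x_i^{1/p}$ so that all completion times are equal:

```python
def helrpt_allocations(sizes, p):
    """Constant allocations that equalize completion times (minimize makespan)."""
    weights = [x ** (1.0 / p) for x in sizes]
    total = sum(weights)
    return [w / total for w in weights]

def makespan(sizes, p):
    return sum(x ** (1.0 / p) for x in sizes) ** p

sizes, p = [1.0, 2.0, 3.0], 0.5
theta = helrpt_allocations(sizes, p)
# Job i completes at x_i / s(theta_i) = x_i / theta_i^p, the same time for all i:
print([x / th ** p for x, th in zip(sizes, theta)])   # all equal sqrt(14)
```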
3. Minimizing Total Flow Time
The purpose of this section is to determine the optimal allocation of servers to jobs at every time, $t$, in order to minimize the mean flow time of a set of jobs. This is equivalent to minimizing the total flow time of the set of jobs, and thus we consider minimizing total flow time for the remainder of this section. We derive a closed-form for the optimal allocation function $\theta^*$, which defines the allocation for each job at any moment in time, $t$, that minimizes total flow time.
To derive $\theta^*$, it helps to first narrow the search space of potential allocation policies by proving some properties of the optimal policy. Specifically, knowing the order in which jobs are completed by the optimal policy greatly constrains the form of the policy and makes the problem of finding a closed-form much more tractable. It is tempting to assume that the optimal policy completes jobs in Shortest-Job-First (SJF) order. This intuition is based on the case where the "resource" consists of only a single server. In the single-server case, it is known that the optimal policy completes jobs in SJF order. The proof that SJF is the optimal completion order relies on a simple interchange argument: consider any time $t$ when an allocation policy does not allocate the server solely to the smallest job. By instead allocating the whole server to the smallest job at time $t$, the total flow time across jobs is reduced.
Unfortunately, in the case with many servers and parallelizable jobs, it is inefficient and typically undesirable to give the entire system to a single job. Hence, the usual simple interchange argument fails to generalize to the setting considered in this paper. In general, it is not obvious what fraction of the system resources should be given to each job in the system, and it is unclear how this fraction should change as jobs depart the system over time.
Instead of an interchange argument, we present an alternative proof that the optimal completion order in a many-server system with parallelizable jobs is SJF. Our proof will require a sequence of theorems. First, given any completion order, $\sigma$, we will consider the policy $\pi^\sigma$ which is optimal with respect to $\sigma$. We will prove several properties of this policy $\pi^\sigma$, including finding an expression for the total flow time under $\pi^\sigma$. This will allow us to then optimize over the space of potential completion orders and conclude that the optimal completion order is SJF.
Once we know that the optimal completion order is SJF, we can derive an exact form of the optimal allocation function, $\theta^*$. The theorems required in our analysis are outlined in Section 3.1.
3.1. Overview of Our Results
We begin by showing that the optimal allocation does not change between job departures, and hence it will suffice to consider the value of the allocation function only at times just after a job departure occurs. This is stated formally as Theorem 3.
Theorem 3.
Consider any two times $t_1$ and $t_2$ where, WLOG, $t_1 \le t_2$. Let $m^*(t)$ denote the number of jobs in the system at time $t$ under the optimal policy. If $m^*(t_1) = m^*(t_2)$, then
$$\theta^*(t_1) = \theta^*(t_2).$$
Another key property of the optimal allocation, which we refer to as the scale-free property, states that for any job, $\sigma_i$, job $\sigma_i$'s allocation relative to the jobs completed after job $\sigma_i$ remains constant throughout job $\sigma_i$'s lifetime. It turns out that our scale-free property holds for an even more general class of policies. This generalization is stated formally in Theorem 4.
Theorem 4 (Scale-Free Property).
Consider any completion order, $\sigma$. Let $\theta^\sigma$ denote the allocation function of a policy which is optimal with respect to $\sigma$. Let $t_m$ be a time when there are exactly $m$ jobs in the system, and hence the remaining jobs are $\sigma_1, \ldots, \sigma_m$. Consider any $i$ and $m$ such that $i \le m \le M$. Then, the ratio
$$\frac{\theta^\sigma_{\sigma_i}(t_m)}{\sum_{j=1}^{i} \theta^\sigma_{\sigma_j}(t_m)}$$
does not depend on $m$.
The scale-free property is important because it allows us to derive an expression for the total flow time under any policy $\pi^\sigma$ which is optimal with respect to a given completion order $\sigma$ (see Lemma 2). Theorem 4 is proven in Section 3.3.
Our next step is to minimize the expression from Lemma 2 over the space of all completion orders to prove that the optimal completion order, $\sigma^*$, is the SJF order. This is stated in Theorem 5.
Theorem 5 (Optimal Completion Order).
The optimal policy follows the completion order
$$\sigma^* = (1, 2, \ldots, M),$$
and hence jobs are completed in the Shortest-Job-First (SJF) order.
Since jobs are completed in SJF order, we can conclude that, at time $t$, the jobs left in the system are specifically jobs $1, 2, \ldots, m^*(t)$. Theorem 5 is proven in Section 3.4.
We next derive the optimal allocation policy with respect to the optimal completion order $\sigma^*$. The first step to deriving the optimal allocation policy is to prove a size-invariant property which says that the optimal allocation to the jobs at time $t$ depends only on $m^*(t)$, not on the specific remaining sizes of these jobs. This is a counterintuitive result, because one might imagine that the optimal allocation should be different if two very differently sized jobs are in the system instead of two equally sized jobs. The size-invariant property is stated in Theorem 6.
Theorem 6 (Size-Invariant Property).
Consider any time $t$. Imagine two sets of jobs, $A$ and $B$, each consisting of $m$ jobs. If $\theta^A$ and $\theta^B$ are the optimal allocations to the jobs in sets $A$ and $B$, respectively, we have that
$$\theta^A(t) = \theta^B(t).$$
Theorem 6 simplifies the computation of the optimal allocation, since it allows us to ignore the actual remaining sizes of the jobs in the system. We need only derive one optimal allocation for each possible value of $m^*(t)$. We prove Theorem 6 in Section 3.5.
The consequence of the above results is that we can explicitly compute the optimal allocation function. We are thus finally ready to state Theorem 7, which provides the allocation function for the optimal allocation policy which minimizes total flow time.
Theorem 7 (Optimal Allocation Function).
At any time $t$ when $m^*(t) = m$ jobs remain in the system,
$$\theta_i^*(t) = \left(\frac{i}{m}\right)^{\frac{1}{1-p}} - \left(\frac{i-1}{m}\right)^{\frac{1}{1-p}} \qquad \forall\, 1 \le i \le m,$$
and $\theta_i^*(t) = 0$ for all $i > m$.
Theorem 7 is proven in Section 3.5. Given the optimal allocation function $\theta^*$, we can also explicitly compute the optimal total flow time for any set of jobs. This is stated in Theorem 8.
Theorem 8 (Optimal Total Flow Time).
Given a set of jobs of sizes $x_1 \ge x_2 \ge \cdots \ge x_M$, the total flow time, $T^*$, under the optimal allocation policy is given by
$$T^* = \sum_{i=1}^{M} w_i \cdot x_i,$$
where
$$w_i = \left(i^{\frac{1}{1-p}} - (i-1)^{\frac{1}{1-p}}\right)^{1-p}$$
and
$$w_1 = 1 \le w_2 \le \cdots \le w_M.$$
Note that the optimal allocation policy biases towards short jobs, but does not give strict priority to these jobs, in order to maintain the overall efficiency of the system. That is, at any time $t$,
$$\theta_1^*(t) \le \theta_2^*(t) \le \cdots \le \theta_{m^*(t)}^*(t).$$
We thus refer to the optimal policy derived in Theorem 7 as High Efficiency Shortest-Remaining-Processing-Time, or heSRPT.
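The allocation formula of Theorem 7 can be simulated directly by advancing the system between departures. The sketch below (function name ours) assumes the normalized model, where the whole server farm is a unit resource and a job with allocation $\theta$ runs at rate $\theta^p$; for two unit-size jobs with $p = 1/2$ it reproduces the earlier 3/4 versus 1/4 example:

```python
def hesrpt_total_flow_time(sizes, p):
    """Simulate heSRPT: when m jobs remain (sorted largest first), job i
    gets theta_i = (i/m)^(1/(1-p)) - ((i-1)/m)^(1/(1-p))."""
    rem = sorted(sizes, reverse=True)   # x_1 >= x_2 >= ...; job m is smallest
    t, total = 0.0, 0.0
    while rem:
        m, q = len(rem), 1.0 / (1.0 - p)
        theta = [(i / m) ** q - ((i - 1) / m) ** q for i in range(1, m + 1)]
        rates = [th ** p for th in theta]            # speedup s(theta) = theta^p
        dt = min(x / r for x, r in zip(rem, rates))  # time to next departure
        t += dt
        rem = [x - r * dt for x, r in zip(rem, rates)]
        total += t * sum(1 for x in rem if x <= 1e-12)   # record departures
        rem = [x for x in rem if x > 1e-12]
    return total

print(hesrpt_total_flow_time([1.0, 1.0], 0.5))   # 1 + sqrt(3), about 2.732
```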
3.2. A Property of the Optimal Policy
In determining the optimal completion order of jobs, we first show that the optimal allocation function remains constant between job departures. This allows us to think of the optimal allocation function as being composed of decision points where new allocations must be determined. We now prove Theorem 3.
Proof.
Consider any time interval $(t_1, t_2)$ during which no job departs the system, and hence $m^*(t_1) = m^*(t_2)$. Assume for contradiction that the optimal policy is unique, and that $\theta^*$ is not constant on $(t_1, t_2)$. We will show that the mean flow time under this policy can be improved by using a constant allocation during the time interval $(t_1, t_2)$, where the constant allocation is equal to the average value of $\theta^*$ during the interval $(t_1, t_2)$.
Specifically, consider the allocation function $\bar\theta$ where
$$\bar\theta(t) = \frac{1}{t_2 - t_1}\int_{t_1}^{t_2} \theta^*(u)\, du \qquad \forall\, t \in (t_1, t_2),$$
and $\bar\theta(t) = \theta^*(t)$ otherwise.
Note that $\bar\theta$ is constant during the interval $(t_1, t_2)$. Furthermore, because $\sum_i \theta_i^*(t) \le 1$ for any time $t$, $\sum_i \bar\theta_i(t) \le 1$ at every time as well, and $\bar\theta$ is therefore a feasible allocation function. Because the speedup function, $s$, is a concave function, using the allocation function $\bar\theta$ on the time interval $(t_1, t_2)$ provides a greater (or equal) average speedup to every active job during this interval. Hence, the residual size of each job under the allocation function $\bar\theta$ at time $t_2$ is at most the residual size of that job under $\theta^*$ at time $t_2$. There thus exists a policy which achieves an equal or lower total flow time than the unique optimal policy, but changes allocations only at departure times. This is a contradiction. ∎
For the rest of the paper, we will therefore only consider allocation functions which change only at departure times. The result of Theorem 3 also generalizes to policies which are optimal with respect to a given completion order, $\sigma$. The same argument holds in this case because we can always improve such a policy by ensuring that its allocation changes only at departure times.
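The averaging step above relies only on concavity of $s$: replacing a time-varying allocation by its time-average never decreases the service a job receives over the interval. A quick numerical check of this Jensen-type inequality (the allocation values and $p = 0.5$ are arbitrary examples):

```python
# Over an interval, a job's service is the integral of s(theta(t)).
# Averaging the allocation can only help, since s(x) = x^p is concave.
p = 0.5
allocations = [0.1, 0.3, 0.9, 0.5]      # allocation in equal-length sub-intervals
avg = sum(allocations) / len(allocations)

service_varying = sum(a ** p for a in allocations) / len(allocations)
service_constant = avg ** p
print(service_constant >= service_varying)   # True, by Jensen's inequality
```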
3.3. The Scale-Free Property
Consider any given completion order, $\sigma$, and the policy $\pi^\sigma$ which is optimal with respect to $\sigma$. Our goal is to characterize $\pi^\sigma$ strongly enough that we can optimize over the space of all completion orders and determine the optimal completion order $\sigma^*$. Hence, we now prove an interesting invariant of any policy $\pi^\sigma$, which we call the scale-free property. We will first need a preliminary lemma.
Lemma 1.
Consider an allocation function $\theta'$ which, at all times, leaves a $\beta$ fraction of the system unused. That is,
$$\sum_{i=1}^{M} \theta_i'(t) = 1 - \beta \qquad \forall t.$$
The total flow time under $\theta'$ is equivalent to the total flow time under an allocation function $\theta''$ where
$$\theta_i''(t) = \frac{\theta_i'(t)}{1 - \beta} \qquad \forall i, t,$$
in a system that runs at $(1-\beta)^p$ times the speed of the original system (which runs at rate 1).
Proof.
The proof is straightforward; see Appendix B. ∎
Using Lemma 1, we can characterize the policy $\pi^\sigma$ which is optimal with respect to any completion order $\sigma$. Theorem 4 states that a job's allocation relative to the jobs completed after it will remain constant for the job's entire lifetime. We now prove Theorem 4.
Proof.
We will prove this statement by induction on the overall number of jobs, $M$. First, note that the statement is trivially true when $M = 1$. It remains to show that if the theorem holds for $M - 1$ jobs, then it also holds for $M$ jobs.
Let $T_{\sigma_M}$ denote the finishing time of job $\sigma_M$ under the policy $\pi^\sigma$. Recall that $\pi^\sigma$ finishes jobs according to the completion order $\sigma$, so job $\sigma_M$ is the first job to complete. Consider a system which optimally processes $M-1$ jobs, which WLOG are jobs $\sigma_1, \ldots, \sigma_{M-1}$. We will now ask this system to process an additional job, job $\sigma_M$. From the perspective of the original $M-1$ jobs, there will be some constant portion of the system, $\beta$, used to process job $\sigma_M$ on the time interval $[0, T_{\sigma_M}]$. The remaining $1-\beta$ fraction of the system will be available during this time period. Just after time $T_{\sigma_M}$, there will be $M-1$ jobs in the system, and hence by the inductive hypothesis the optimal policy will obey the scale-free property on the interval $[T_{\sigma_M}, \infty)$.
Consider the problem of minimizing the total flow time of jobs $\sigma_1, \ldots, \sigma_{M-1}$ given any fixed value of $\beta$ such that the completion order $\sigma$ is obeyed. We can write the total flow time of the $M$ jobs, $T$, as
$$T = T_{\sigma_M} + \sum_{i=1}^{M-1} T_{\sigma_i},$$
where $T_{\sigma_M}$ is a constant. Clearly, optimizing total flow time in this case is equivalent to optimizing the total flow time for jobs $\sigma_1$ through $\sigma_{M-1}$ with the added constraint that a $\beta$ fraction of the system is unavailable (and hence "unused" from the perspective of jobs $\sigma_1$ through $\sigma_{M-1}$) during the interval $[0, T_{\sigma_M}]$. By Lemma 1, this is equivalent to having a system that runs at a $(1-\beta)^p$ fraction of the speed of a normal system during the interval $[0, T_{\sigma_M}]$.
Thus, for some speed $\gamma \le 1$, we will consider the problem of optimizing the total flow time of a set of $M-1$ jobs in a system that runs $\gamma$ times as fast as a normal system during the interval $[0, T_{\sigma_M}]$.
Let $T^{\pi^\sigma}$ be the total flow time under policy $\pi^\sigma$ of the $M-1$ jobs of size $x_{\sigma_1}, \ldots, x_{\sigma_{M-1}}$. Let $T^{slow}$ be the total flow time of these jobs in a slow system which always runs $\gamma$ times as fast as a normal system.
If we let $T_{\sigma_i}^{slow}$ be the finishing time of job $\sigma_i$ in the slow system, it is easy to see that
$$T_{\sigma_i}^{slow} = \frac{T_{\sigma_i}^{\pi^\sigma}}{\gamma},$$
since we can just factor out a $\gamma$ from the expression for the completion time of every job in the slow system. Furthermore, we see that
$$T^{slow} = \frac{T^{\pi^\sigma}}{\gamma}$$
by the same reasoning. Clearly, then, the allocation function which is optimal with respect to $\sigma$ in the slow system, $\theta^{slow}$, is equal to $\theta^{\pi^\sigma}$ at the respective departure times of each job. That is,
$$\theta^{slow}(T_{\sigma_i}^{slow}) = \theta^{\pi^\sigma}(T_{\sigma_i}^{\pi^\sigma}) \qquad \forall\, 1 \le i \le M-1.$$
We will now consider a mixed system which is "slow" (running $\gamma$ times as fast as a normal system) on some interval $[0, \tau]$ that ends before the first departure, and which runs at normal speed after time $\tau$. Let $T^{mix}$ denote the total flow time in this mixed system and let $\theta^{mix}$ denote the allocation function which is optimal with respect to $\sigma$ in the mixed system. We can decompose $T^{mix}$ into the flow time accrued before time $\tau$ and the flow time accrued after time $\tau$, and we can decompose $T^{slow}$ similarly.
Let $T_{\sigma_i}^{mix}$ be the finishing time of job $\sigma_i$ in the mixed system under $\theta^{mix}$. Since the flow time accrued before time $\tau$ is a constant not dependent on the allocation function, we can see that the optimal allocation function in the mixed system will make the same allocation decisions as the optimal allocation function in the slow system at the corresponding departure times in each system. That is,
$$\theta^{mix}(T_{\sigma_i}^{mix}) = \theta^{slow}(T_{\sigma_i}^{slow}) \qquad \forall\, 1 \le i \le M-1.$$
By the inductive hypothesis, the optimal allocation function in the slow system obeys the scale-free property. Hence, $\theta^{mix}$ also obeys the scale-free property for this set of jobs given any fixed value of $\tau$.
Let $T^{norm}$ denote the total flow time of jobs $\sigma_1, \ldots, \sigma_{M-1}$ in a system that runs at normal speed, where a $\beta$ fraction of the system is used to process job $\sigma_M$ during the interval $[0, T_{\sigma_M}]$. For any value of $\beta$, we can write
$$T = T_{\sigma_M} + T^{norm}$$
for the choice $\tau = T_{\sigma_M}$ and $\gamma = (1-\beta)^p$. Clearly, this expression would be minimized by $\theta^{mix}$ except for two things: the system runs at normal speed, and a $\beta$ fraction of the system is unavailable during the interval $[0, T_{\sigma_M}]$. By Lemma 1, the mixed system and the system with unavailable servers are equivalent, and hence we can simply renormalize all allocations from $\theta^{mix}$ by a factor of $1-\beta$ during this interval to account for this difference. This yields the optimal allocation function in a normal-speed system for a given value of $\beta$ on the interval $[0, T_{\sigma_M}]$. Since $\theta^{mix}$ obeys the scale-free property, the normalized allocation function clearly also obeys the scale-free property. Hence, the optimal allocation function for processing all $M$ jobs obeys the scale-free property. This completes the proof by induction. ∎
Definition 1.
The scale-free property tells us that for any completion order $\sigma$, under the policy $\pi^\sigma$ which is optimal with respect to $\sigma$, a job's allocation relative to the jobs completed after it is constant. Hence, for any job, $\sigma_i$, there exists a scale-free constant $c_i$ where, for any time $t \le T_{\sigma_i}$,
$$c_i = \frac{\theta^\sigma_{\sigma_i}(t)}{\sum_{j=1}^{i} \theta^\sigma_{\sigma_j}(t)}.$$
Note that we define $c_1 = 1$. Let $c = (c_1, c_2, \ldots, c_M)$ denote the scale-free constants corresponding to each job.
3.4. Finding the Optimal Completion Order, $\sigma^*$
We will now make use of the scale-free property to find the optimal completion order, $\sigma^*$. We again consider any given completion order, $\sigma$, and the policy $\pi^\sigma$ which is optimal with respect to $\sigma$. In Lemma 2 below, we derive an expression for the total flow time under the policy $\pi^\sigma$ as a function of the scale-free constants $c$. Finally, in Theorem 5, we minimize this expression over all completion orders, $\sigma$, to find the optimal completion order, $\sigma^*$.
Lemma 2.
Consider a policy $\pi^\sigma$ which is optimal with respect to the completion order $\sigma$, and let $c = (c_1, \ldots, c_M)$ denote its scale-free constants. The total flow time under policy $\pi^\sigma$ can then be written in closed form as a function of $c$, which we denote by $T(c)$.
Proof.
To analyze the total flow time under $\pi^\sigma$, we will relate $\pi^\sigma$ to a simpler policy, $\pi'$, which is much easier to analyze. We define $\pi'$ to be the policy under which each job $\sigma_i$ receives some initial optimal allocation $\theta^{\pi'}_{\sigma_i}$ at time 0 which does not change over time. Since allocations under $\pi'$ are constant, we have that
$$T^{\pi'}_{\sigma_i} = \frac{x_{\sigma_i}}{s(\theta^{\pi'}_{\sigma_i})}.$$
We can now derive equations that relate $T^{\pi^\sigma}_{\sigma_i}$ to $T^{\pi'}_{\sigma_i}$.
By Theorem 4, during the interval $[T_{\sigma_{i+1}}, T_{\sigma_i}]$, the allocations under $\pi^\sigma$ to the remaining jobs are proportional to the corresponding allocations under $\pi'$. Note that a fraction of the system may be unused during this interval, and hence, by Lemma 1, the system during this interval is equivalent to a proportionally slowed system in which each remaining job receives its renormalized allocation. Let $\gamma_i$ denote the corresponding scaling factor during this interval.
If we define $T_{\sigma_{M+1}} = 0$ and let $D_i = T_{\sigma_i} - T_{\sigma_{i+1}}$ denote the length of the $i$-th interval, we can express the total flow time under policy $\pi^\sigma$, $T(c)$, as a weighted sum of the interval lengths $D_i$. We can then further expand this expression in terms of the job sizes, using the fact that $T^{\pi'}_{\sigma_i} = x_{\sigma_i}/s(\theta^{\pi'}_{\sigma_i})$, to write $T(c)$ explicitly as a function of the scale-free constants $c$, as desired. ∎
We now have an expression for the total flow time of a policy $\pi^\sigma$ which is optimal with respect to a given completion order $\sigma$. Next, we use this expression to derive the optimal completion order, $\sigma^*$, in Theorem 5.
We now prove Theorem 5.
Proof.
Consider a policy $\pi^\sigma$ that is optimal with respect to any given completion order $\sigma$. Let $T(c)$ be the expression for total flow time from Lemma 2. Our goal is to find a closed-form expression for the minimizing constants, $c^*$, and then minimize over the space of possible completion orders. A sufficient condition for finding $c^*$ is that jobs are completed according to $\sigma$ and that the following first-order conditions are satisfied:
$$\frac{\partial T(c)}{\partial c_i} = 0 \qquad \forall\, 1 < i \le M.$$
Note that the second-order conditions are satisfied trivially. These first-order conditions are sufficient, but not necessary, since $c^*$ may lie on a boundary of the space which is imposed by the completion order. For now, we will ignore the constraints on $c$ imposed by the completion order. That is, we will consider minimizing the function $T(c)$ without constraining the policy to follow the completion order $\sigma$. We refer to this as the relaxed minimization problem, and we call the solution to this optimization the relaxed minimum of the function $T(c)$. Crucially, note that the value of $T(c)$ under the relaxed minimum is a lower bound on the total flow time under $\pi^\sigma$. (This value may not correspond to an achievable total flow time under $\sigma$; it is just a value of the function $T(c)$.) We will use the solution to the relaxed minimization problem to argue about the optimal completion order, $\sigma^*$.
The first-order conditions give
$$c_i^* = 1 - \left(\frac{i-1}{i}\right)^{\frac{1}{1-p}},$$
and hence a closed-form expression for the relaxed minimum of $T(c)$.
We can show that the resulting allocations are increasing in $i$ (see Appendix C). Furthermore, the coefficient of $x_{\sigma_i}$ in $T(c^*)$ is
$$\left(i^{\frac{1}{1-p}} - (i-1)^{\frac{1}{1-p}}\right)^{1-p},$$
which is increasing in $i$ (see Appendix C). This implies that the completion order which produces the best relaxed minimum is SJF, since SJF matches the smallest job sizes with the largest coefficients in $T(c^*)$. In addition, since the relaxed-minimum allocation is increasing in $i$ for any completion order, under this allocation function smaller jobs always have larger allocations than larger jobs. This implies that the relaxed minimum under SJF respects the SJF completion order. Because the solution to the relaxed minimization problem is feasible, it is also optimal for the constrained minimization problem. Furthermore, because the SJF completion order has the best relaxed minimum, the relaxed minimum for the SJF completion order must be the optimal allocation function. We thus conclude that the SJF completion order is the completion order of the optimal policy. ∎
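The conclusion of Theorem 5 can be sanity-checked numerically for two jobs: for each completion order, optimize the constant split $\theta$ given to the first-completing job and compare the resulting total flow times. A sketch (function name, sizes 1 and 2, and $p = 0.5$ are illustrative choices of ours):

```python
def best_flow_time(first, second, p=0.5):
    """Min total flow time when `first` must complete before `second`,
    with a constant fraction theta given to the first job until it departs."""
    best = float("inf")
    for i in range(1, 10000):
        theta = i / 10000
        t1 = first / theta ** p                 # first job's completion time
        rem = second - (1 - theta) ** p * t1    # second job's remaining size
        if rem < -1e-12:                        # completion order violated
            continue
        best = min(best, t1 + (t1 + max(rem, 0.0)))
    return best

sjf = best_flow_time(1.0, 2.0)   # small job first
ljf = best_flow_time(2.0, 1.0)   # large job first
print(sjf < ljf)                 # True: the SJF order is strictly better
```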
3.5. Finding the Optimal Allocation Function
Now that we know the optimal completion order, we can compute the optimal allocation function. This computation begins with the interesting observation that the optimal allocation function does not directly depend on the sizes of the jobs. This is stated in Theorem 6, restated here.
Proof.
Recall from the proof of Theorem 5 that the optimal allocation function satisfies the following first-order conditions
and always completes jobs in SJF order. Note that these conditions do not explicitly depend on any job sizes. Hence, while the value of the optimal allocation function may depend on how many jobs are in the system, given any two sets consisting of the same number of jobs at a given time, the optimal allocation function for the first set will equal the optimal allocation function for the second set at that time. ∎
We now use our knowledge of the optimal completion order to derive the optimal allocation function in Theorem 7.
See Theorem 7.
Proof.
We can now solve a system of equations to derive the optimal allocation function. Consider a time, , when there are jobs in the system. Since the optimal completion order is SJF, we know that the jobs in the system are specifically jobs . We know that the allocation to jobs is 0, since these jobs have been completed. Hence, we have that
Furthermore we have constraints provided by the expressions for .
These can be written as
And then rearranged as
We can now plug in the known values of and find that
This argument holds for any , and hence we have fully specified the optimal allocation function. ∎
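Since the displayed equations here did not survive extraction, the sketch below assumes a closed form consistent with the limiting behavior discussed in Section 4: with m jobs remaining and speedup parameter p, the k-th smallest remaining job receives the fraction ((m-k+1)/m)^(1/(1-p)) - ((m-k)/m)^(1/(1-p)) of the servers. This is an assumed reconstruction, not a quotation of Theorem 7.

```python
def hesrpt_shares(m, p):
    """Assumed heSRPT server fractions for the m remaining jobs,
    ordered from smallest remaining size (index 0) to largest.
    share_k = ((m-k)/m)**e - ((m-k-1)/m)**e with e = 1/(1-p)."""
    e = 1.0 / (1.0 - p)
    return [((m - k) / m) ** e - ((m - k - 1) / m) ** e for k in range(m)]

shares = hesrpt_shares(4, 0.5)   # e = 2
# Fractions telescope to 1 and favor the smallest job:
# approximately [0.4375, 0.3125, 0.1875, 0.0625]
assert abs(sum(shares) - 1.0) < 1e-12
assert shares == sorted(shares, reverse=True)
```

Note the two extremes: as p approaches 1 the smallest job's share tends to 1 (recovering SRPT), and as p approaches 0 every share tends to 1/m (recovering EQUI), matching the tradeoff described in the introduction. Note also that the shares depend only on m and p, never on the job sizes themselves, in line with Theorem 6.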
4. Discussion and Evaluation
We will now put the results of Section 3 in context by examining the allocation decisions made by the optimal allocation policy, heSRPT, and comparing the performance of heSRPT to existing allocation policies from the literature. In Section 4.1, we examine how heSRPT balances overall system efficiency with a desire to bias towards small jobs. In Section 4.2, we perform a numerical evaluation of heSRPT and several other allocation policies. Finally, in Section 4.3, we discuss some limitations of our results.
4.1. What is heSRPT Really Doing?
To gain some intuition about the allocation decisions made by heSRPT, consider what our system would look like if the speedup parameter, p, were equal to 1. In this case, the many-server system essentially behaves as one perfectly divisible server. We know that the optimal allocation policy in this case is SRPT, which gives strict priority to the shortest job in the system at every moment in time. Any allocation which does not give all resources to the smallest job will increase mean flow time.
When p < 1, however, there is a tradeoff. In this case, devoting all the resources to a single job will greatly decrease the total service rate of the system, since a single job makes highly inefficient use of these resources. We define the system efficiency to be the total service rate of the system, scaled by the number of servers. We see that one can actually decrease the mean flow time by allowing some sharing of resources between jobs in order to increase system efficiency. The heSRPT policy correctly balances this tradeoff between getting short jobs out of the system quickly and maintaining high system efficiency.
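The efficiency loss can be made concrete. Assuming the power-law speedup form s(k) = k^p suggested by the surrounding discussion (the exact notation is an assumption here), giving all N servers to one job yields total service rate N^p, while splitting the servers equally among m jobs yields m * (N/m)^p = m^(1-p) * N^p, a factor of m^(1-p) higher.

```python
def total_rate(allocs, p):
    # Total service rate when job i runs on allocs[i] servers,
    # assuming each job speeds up as s(k) = k**p (sublinear for p < 1).
    return sum(k ** p for k in allocs)

N, m, p = 1000, 10, 0.5
one_job = total_rate([N] + [0.0] * (m - 1), p)   # SRPT-style: all to one job
equal = total_rate([N / m] * m, p)               # EQUI-style: equal split
# Equal splitting is m**(1 - p) times more efficient (about 3.16x here).
assert abs(equal / one_job - m ** (1 - p)) < 1e-9
```

This is the quantity heSRPT trades off against the flow-time benefit of prioritizing short jobs: concentrating servers helps small jobs exit quickly but sacrifices a factor of up to m^(1-p) in aggregate service rate.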
In examining the heSRPT policy, it is surprising that while a job’s allocation depends on the ordering of job sizes, it does not depend on the specific job sizes. One might assume that, given two jobs, the optimal allocation should depend on the size difference between the small job and the large job. However, just as SRPT gives strict priority to small jobs without considering the specific sizes of the jobs, heSRPT’s allocation function does not vary with the job size distribution. The intuition for why this is true is that, given that the optimal completion order is SJF (see Theorem 5), changing the size of a job will not affect the tradeoff between SRPT and system efficiency. As long as the ordering of jobs remains the same, the same allocation will properly balance this tradeoff. Although the SRPT policy is similarly insensitive to the specific sizes of jobs, it is very surprising that the same insensitivity holds for parallelizable jobs in a multiserver system.
4.2. Numerical Evaluation
While heSRPT is the optimal allocation policy, it is still interesting to compare the mean flow time under this policy with the mean flow time under other policies from the literature. Although we have a closed-form expression for the optimal total flow time under heSRPT (see Theorem 8), we wish to compare heSRPT to policies which do not necessarily admit closed-form expressions for total flow time. Hence, we will perform a numerical analysis of these competitor policies on sets of jobs with randomly generated sizes, and for various levels of parallelizability (values of the speedup parameter p).
We compare heSRPT to the following list of competitor policies:


SRPT allocates the entire system to the single job with the shortest remaining processing time. While this is known to be optimal when p = 1 (see Section 4.1), we expect this policy to perform poorly when jobs make inefficient use of servers.

EQUI allocates an equal fraction of the system resources to each job at every moment in time. This policy has been analyzed through the lens of competitive analysis (edmonds2009scalably, ; Edmonds1999SchedulingIT, ) in similar models of parallelizable jobs, and was shown to be optimal in expectation when job sizes are unknown and exponentially distributed (berg2018, ). Other policies such as IntermediateSRPT (im2016competitively, ) reduce to EQUI in our model where the number of jobs, , is assumed to be less than the number of servers, .

HELL is a heuristic policy proposed in (lin2018model, ) which, similarly to heSRPT, tries to balance system efficiency with biasing towards short jobs. HELL defines a job’s efficiency as a function of its allocation. HELL then iteratively allocates servers: in each iteration, HELL identifies the job which can achieve the highest ratio of efficiency to remaining processing time, and allocates to this job the servers required to achieve this maximal ratio. This process is repeated until all servers are allocated. While HELL is consistent with the goal of heSRPT, the specific ratio that HELL uses is just a heuristic.

KNEE is the other heuristic policy proposed in (lin2018model, ). KNEE considers the number of servers each job would require before its marginal reduction in runtime from receiving an additional server falls below some threshold. The number of servers a job requires in order to reach this threshold is called the job’s knee allocation. KNEE then iteratively allocates servers: in each iteration, KNEE identifies the job with the lowest knee allocation and gives this job its knee allocation. This process is repeated until all servers are allocated. Because there is no principled way to choose the threshold, we perform a brute-force search of the parameter space and present the results for the best threshold we found. Hence, results for KNEE should be viewed as an optimistic prediction of the KNEE policy’s performance.
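The policies above can be compared with a small event-driven simulation. The sketch below is a simplification under stated assumptions: all jobs are present at time 0, speedup is k^p, and the heSRPT fractions use the closed form assumed earlier in this discussion (the k-th smallest of m remaining jobs gets ((m-k+1)/m)^(1/(1-p)) - ((m-k)/m)^(1/(1-p))). HELL and KNEE are omitted because their defining formulas did not survive extraction.

```python
def mean_flow_time(sizes, N, p, shares_of):
    """Run jobs to completion; shares_of(m) gives the server fraction
    for each of the m remaining jobs, smallest remaining size first."""
    rem = sorted(sizes)                 # remaining work, ascending
    t, completions = 0.0, []
    while rem:
        rem.sort()
        shares = shares_of(len(rem))
        rates = [(f * N) ** p for f in shares]
        # Advance time to the next completion among jobs with positive rate.
        dt = min(w / r for w, r in zip(rem, rates) if r > 0)
        t += dt
        rem = [w - r * dt for w, r in zip(rem, rates)]
        completions += [t] * sum(1 for w in rem if w <= 1e-9)
        rem = [w for w in rem if w > 1e-9]
    return sum(completions) / len(sizes)

def equi(m):
    return [1.0 / m] * m

def srpt(m):
    return [1.0] + [0.0] * (m - 1)

def hesrpt(m, p=0.5):
    # Assumed heSRPT fractions (smallest job first); telescopes to 1.
    e = 1.0 / (1.0 - p)
    return [((m - k) / m) ** e - ((m - k - 1) / m) ** e for k in range(m)]

sizes, N, p = [1.0, 2.0, 4.0, 8.0], 64, 0.5
for policy in (srpt, equi, lambda m: hesrpt(m, p)):
    print(mean_flow_time(sizes, N, p, policy))
# heSRPT should yield the smallest mean flow time of the three.
```

On this example the heSRPT allocation beats both extremes, illustrating the tradeoff from Section 4.1: SRPT wastes aggregate service rate, EQUI delays short jobs, and heSRPT interpolates between them.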
We evaluate heSRPT and the competitor policies in a system of servers with a set of jobs whose sizes are drawn from a Pareto distribution. We set the speedup parameter p to be 0.05, 0.3, 0.5, 0.9, or 0.99. Each experiment is run with 10 different sets of randomly generated job sizes and we present the median of the mean flow times measured for each case. The results of this analysis are shown in Figure 4.
We see that the optimal policy, heSRPT, outperforms every competitor allocation policy in every case, as expected. When p is low, EQUI is very close to heSRPT, but EQUI is almost a factor of 2 worse than optimal when p is high. Conversely, when p is high, SRPT is nearly optimal; however, SRPT is an order of magnitude worse than heSRPT when p is low. While HELL performs similarly to SRPT in most cases, it is closer to optimal than SRPT when p is low. The KNEE policy is by far the best of the competitor policies that we consider, though in the worst case it is still noticeably worse than heSRPT. Note, however, that these results for KNEE require brute-force tuning of the allocation policy, and are thus optimistic about KNEE’s performance.
4.3. Limitations
This paper derives the first closed-form expression for the optimal policy for allocating servers to parallelizable jobs of known size, yet we also acknowledge some limitations of our model. First, we note that in practice not all jobs will follow a single speedup function. Due to differences between applications or between input parameters, it will often be the case that one job is more parallelizable than another. The complexity of allocating resources to jobs with multiple speedup functions has been noted in the literature (berg2018, ; edmonds2009scalably, ), and even when jobs follow a single speedup function the problem is clearly nontrivial. Next, we note that in our model all jobs are assumed to be present at time 0. While we believe that heSRPT might provide a good heuristic policy for processing a stream of arriving jobs, finding the optimal policy in the case of arrivals remains an open question for future work. Finally, we constrain the speedup functions in our model to be of the form s(k) = k^p, instead of accommodating more general functions. While Theorem 3 can be shown to hold for any concave speedup function, it is unclear whether the other results of Section 3 hold for arbitrary or even concave speedup functions. Although our model makes the above assumptions, we believe that the general implication of our results holds: the optimal allocation policy should bias towards short jobs while maintaining overall system efficiency.
5. Conclusion
Modern data centers largely rely on users to decide how many servers to use to run their jobs. When jobs are parallelizable, but follow a sublinear speedup function, allowing users to make allocation decisions can lead to a highly inefficient use of resources. We propose to instead have the system control server allocations, and we derive the first optimal allocation policy which minimizes mean flow time for a set of parallelizable jobs. Our optimal allocation policy, heSRPT, leads to significant improvement over existing allocation policies suggested in the literature. The key to heSRPT is that it finds the correct balance between overall system efficiency and favoring short jobs. We derive an expression for the optimal allocation, at each moment in time, in closed form.
Appendix A Proof of Theorem 2
See Theorem 2.
Proof.
We wish to find such that, for any jobs and ,
This implies that for any and
Furthermore, we know that
and thus, for any job ,