A Competitive Algorithm for Throughput Maximization on Identical Machines

This paper considers the basic problem of scheduling jobs online with preemption to maximize the number of jobs completed by their deadline on m identical machines. The main result is an O(1)-competitive deterministic algorithm for any number of machines m > 1.


1 Introduction

We consider the basic problem of preemptively scheduling jobs that arrive online with sizes and deadlines on identical machines to maximize the number of jobs that complete by their deadline.

Definition 1 (Throughput Maximization).

Let J be a collection of jobs such that each job j has a release time r_j, a processing time (or size) p_j, and a deadline d_j. The jobs arrive online at their release times; at time r_j the scheduler becomes aware of job j and of p_j and d_j.

At each moment of time, the scheduler can specify up to m released jobs to run at this time, and the remaining processing time of each job that is run decreases at a unit rate (so we assume that the online scheduler is allowed to produce a migratory schedule). A job is completed if its remaining processing time drops to zero by the deadline of that job. The objective is to maximize the number of completed jobs.

A key concept in designing algorithms for this problem is the laxity of a job. The laxity of job j is l_j = (d_j - r_j) - p_j, which is the maximum amount of time we can avoid running j and still possibly complete it.
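To make these definitions concrete, the following is a minimal sketch in Python (the class and field names are ours, not the paper's) of a job record with its laxity and the feasibility test used throughout:

```python
from dataclasses import dataclass

@dataclass
class Job:
    release: float    # r_j: time at which the job arrives online
    size: float       # p_j: total processing time
    deadline: float   # d_j

    @property
    def laxity(self) -> float:
        # l_j = (d_j - r_j) - p_j: the total amount of time the job
        # can be left idle after release and still finish on time.
        return (self.deadline - self.release) - self.size

    def feasible(self, t: float, remaining: float) -> bool:
        # The job can still complete if run continuously from time t.
        return self.release <= t and t + remaining <= self.deadline
```

For example, a job released at time 0 with size 3 and deadline 10 has laxity 7.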

We measure the performance of our algorithm by the competitive ratio, which is the maximum over all instances of the ratio of the objective value of our algorithm to the objective value of the optimal offline schedule that is aware of all jobs in advance.

This problem is well understood in the single-machine case (m = 1). No O(1)-competitive deterministic algorithm is possible [BaruahKMMRRSW92], but there is a randomized algorithm that is O(1)-competitive against an oblivious adversary [KalyanasundaramP03], and there is a scalable ((1+ε)-speed O(1)-competitive) deterministic algorithm [KalyanasundaramP00]. The scalability result in [KalyanasundaramP00] was extended to the case of m > 1 machines in [LucierMNY13].

Whether an O(1)-competitive algorithm exists for m > 1 machines has been open for twenty years. Previous results for the multiple-machines setting require resource augmentation or assume that all jobs have high laxity [LucierMNY13, EberleMS20].

The main issue in removing these assumptions is how to determine which machine to assign a job to. If an online algorithm could determine which machine each job was assigned to in Opt, we could obtain an O(1)-competitive algorithm for m > 1 machines by a relatively straightforward adaptation of the results from [KalyanasundaramP03]. However, if the online algorithm ends up assigning some jobs to different machines than Opt, then comparing the number of completed jobs is challenging. Further, if jobs have small laxity, then the algorithm can be severely penalized for small mistakes in this assignment. One way to view the speed-augmentation (or high-laxity-assumption) analyses in [LucierMNY13, EberleMS20] is that the speed augmentation allows one to avoid having to address this issue in the analyses.

1.1 Our Results

Our main result is an O(1)-competitive deterministic algorithm for Throughput Maximization on m > 1 machines.

Theorem 1.1.

For all m > 1, there exists a deterministic O(1)-competitive algorithm for Throughput Maximization on m machines.

We summarize our results and prior work in Table 1. Interestingly, notice that on a single machine there is no constant-competitive deterministic algorithm, yet a randomized algorithm with a constant competitive ratio exists. Our work shows that once more than one machine is considered, determinism is sufficient to obtain an O(1)-competitive online algorithm.

          Deterministic           Randomized              Speed Augmentation
m = 1     no O(1)-competitive     O(1)-competitive        (1+ε)-speed O(1)-competitive
          [BaruahKMMRRSW92]       [KalyanasundaramP03]    [KalyanasundaramP00]
m > 1     O(1)-competitive        O(1)-competitive        (1+ε)-speed O(1)-competitive
          [This paper]            [This paper]            [LucierMNY13]

Table 1: Competitiveness Results

1.2 Scheduling Policies

We give some basic definitions and notations about scheduling policies.

A job j is feasible at time t (with respect to some schedule) if it can still be feasibly completed, so r_j <= t and t + p_j(t) <= d_j, where p_j(t) is the remaining processing time of job j at time t (with respect to the same schedule).

Then a schedule S of jobs is defined by a map from time/machine pairs (t, i) to a feasible job S(t, i) that is run on machine i at time t, with the constraint that no job can be run on two different machines at the same time. We conflate S with the scheduling policy as well as with the set of jobs completed by the schedule. Thus, the objective value achieved by this schedule is |S|.

A schedule is non-migratory if for every job j there exists a machine i such that whenever j is run, it is run on machine i. Otherwise the schedule is migratory.

If A is a scheduling algorithm, then A(J, m) denotes the schedule that results from running A on instance J with m machines. Similarly, Opt(J, m) denotes the optimal schedule on instance J with m machines. We will sometimes omit the J and/or the m if they are clear from context. Sometimes we will abuse notation and let Opt denote a nearly-optimal schedule that additionally has some desirable structural property.

1.3 Algorithms and Technical Overview

A simple consequence of the results in [KalyanasundaramP01] and [KalyanasundaramP03] is an O(1)-competitive algorithm in the case that m = O(1). Thus we concentrate on the case that m is large. Also note that since there is an O(1)-approximate non-migratory schedule [KalyanasundaramP01], changing the number of machines by a constant factor does not change the optimal objective value by more than a constant factor. This is because we can always take an optimal non-migratory schedule and create a new schedule on fewer machines whose objective value decreases by at most a constant factor, by keeping the machines that complete the most jobs.

These observations about the structure of near-optimal schedules allow us to design an O(1)-competitive algorithm that is a combination of various deterministic algorithms. In particular, on instance J our algorithm FinalAlg will run a deterministic algorithm LMNY on m/3 machines on the subinstance of high-laxity jobs, a deterministic algorithm SRPT on m/3 machines on the subinstance of low-laxity jobs, and a deterministic algorithm MLax on m/3 machines on the subinstance of low-laxity jobs. Note that we run SRPT and MLax on the same jobs. To achieve this, if both algorithms decide to run the same job j, then the algorithm in which j has shorter remaining processing time actually runs job j, and the other simulates running j.
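The tie-breaking between the two copies of the low-laxity jobs can be sketched as follows (a hypothetical helper of ours; the paper gives no pseudocode for this step): when SRPT and MLax pick the same job in the same time step, the copy with the shorter remaining processing time gets the real machine time and the other only simulates the work.

```python
def resolve_shared_job(remaining_in_srpt: float,
                       remaining_in_mlax: float) -> str:
    """Decide which of the two simulations actually runs a job that
    both SRPT and MLax selected in the current time step.

    The copy in which the job has shorter remaining processing time
    runs it for real; the other simulates running it (its simulated
    remaining time still decreases at unit rate).
    """
    return "SRPT" if remaining_in_srpt <= remaining_in_mlax else "MLax"
```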

We will eventually show that for all instances, at least one of these three algorithms is O(1)-competitive, from which our main result will follow. Roughly, each of the three algorithms is responsible for a different part of Opt.

Our main theorem about FinalAlg is the following:

Theorem 1.2.

For any m > 1, FinalAlg is an O(1)-competitive deterministic algorithm for Throughput Maximization on m machines.

We now discuss these three component algorithms of FinalAlg.

1.3.1 LMNY

The algorithm LMNY is the algorithm from [LucierMNY13] with the following guarantee.

Lemma 1.3.

[LucierMNY13] For any number of machines m and any job instance J, LMNY is an O(1)-competitive deterministic algorithm on the subinstance of high-laxity jobs of J.

1.3.2 SRPT

The algorithm SRPT is a variation of the standard shortest remaining processing time algorithm:

Definition 2 (SRPT).

At each time, run the m feasible jobs with the shortest remaining processing times. If there are fewer than m feasible jobs, then all feasible jobs are run.
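One step of this rule can be sketched as follows (a sketch of ours; jobs are kept as (remaining, deadline) pairs, and m is the number of machines available to SRPT):

```python
def srpt_select(jobs: dict, t: float, m: int) -> list:
    """Return (up to) the m feasible jobs with shortest remaining
    processing time at time t.

    `jobs` maps a job id to a (remaining, deadline) pair; a job is
    feasible if running it continuously from time t would still
    complete it by its deadline.
    """
    feasible = [(rem, jid) for jid, (rem, dl) in jobs.items()
                if rem > 0 and t + rem <= dl]
    feasible.sort()  # shortest remaining processing time first
    return [jid for _, jid in feasible[:m]]
```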

We will show that SRPT is competitive with the low-laxity jobs that are not preempted in Opt.

1.3.3 MLax

The final, most challenging, component algorithm of FinalAlg is MLax, which intuitively we want to be competitive on low-laxity jobs in Opt that are preempted.

To better understand the challenge of achieving this, consider an instance of disagreeable jobs, which means that jobs with an earlier release time have a later deadline. Further, suppose all jobs but one in Opt are preempted and completed at a later time.

To be competitive, MLax must preempt almost all the jobs that it completes, but it cannot afford to abandon too many jobs that it preempts. Because the jobs have low laxity, this is challenging: each job can be preempted only for a small amount of time, and it's hard to know which of the many options is the "right" job to preempt for. This issue was resolved in [KalyanasundaramP03] for the single-machine case, but it gets more challenging when m > 1, because we also have to choose the "right" machine to assign a job to.

We now describe the algorithm MLax. Let γ be a sufficiently large constant (chosen later). MLax maintains m stacks (last-in-first-out data structures) of jobs, one per machine: S_1, ..., S_m. The stacks are initially empty. At all times, MLax runs the top job of stack S_i on machine i. We define the frontier to be the set consisting of the top job of each stack (i.e., all currently running jobs). It remains to describe how the stacks are updated.

There are two types of events that cause MLax to update the stacks: reaching a job’s pseudo-release time (defined below) or completing a job.

Definition 3 (Viable Jobs and Pseudo-Release Time).

The pseudo-release time r̃_j of job j (if it exists) is the earliest time in j's feasible window such that a constant fraction of the jobs j′ on the frontier satisfy l_{j′} ≥ γ p_j.

We say a job j is viable if its pseudo-release time r̃_j exists, and non-viable otherwise.

At job j’s pseudo-release time r̃_j (note that r̃_j can be determined online by MLax), MLax does the following:

  1. If there exists a stack whose top job j′ satisfies l_{j′} ≥ γ p_j, then push j onto any such stack.

  2. Else if there exist at least a constant fraction of the stacks whose second-top job j″ satisfies l_{j″} ≥ γ p_j, and further some such stack has top job j′ satisfying l_{j′} < l_j, then on such a stack whose top job has minimum laxity, replace its top job by j.

While the replacement operation in the second step can be implemented as a pop followed by a push, we view it as a separate operation for analysis purposes. To handle corner cases in these descriptions, one can assume that there is a job with infinite size/laxity at the bottom of each stack.

When MLax completes a job that was on stack S_i, MLax does the following:

  1. Pop the completed job off of stack S_i.

  2. Keep popping until the top job of S_i is feasible.
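Only parts of MLax's bookkeeping are unambiguous in the text above; the following is a schematic of ours for those parts (the push rule with the constant γ, and the pops on completion), with laxities treated as static for brevity:

```python
def try_push(stacks, job, gamma):
    """Schematic of MLax's first rule at a pseudo-release time.

    stacks: list of lists; stacks[i][-1] is the job running on machine i.
    job:    a (size, laxity) pair for the arriving job.
    gamma:  the large constant from the paper.

    Pushes `job` onto some stack whose top job's laxity is at least
    gamma times `job`'s size, returning True on success. (The second,
    replacement rule depends on thresholds not reproduced here.)
    """
    size, _lax = job
    for s in stacks:
        if s and s[-1][1] >= gamma * size:
            s.append(job)
            return True
    return False


def pop_after_completion(stack, t, is_feasible):
    """On completing the top job of a stack: pop it, then keep
    popping until the new top job is feasible at time t."""
    stack.pop()
    while stack and not is_feasible(stack[-1], t):
        stack.pop()
```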

1.3.4 Analysis Sketch

There are three main steps in proving Theorem 1.2 to show FinalAlg is O(1)-competitive:

  • In Section 2, we show how to modify the optimal schedule to obtain certain structural properties that facilitate the comparison with SRPT and MLax.

  • In Section 3, we show that SRPT is competitive with the low-laxity, non-viable jobs. Intuitively, the jobs that MLax is running that prevent a job from becoming viable are much smaller than that job, and they provide a witness that SRPT must also be working on jobs much smaller than it.

  • In Section 4, we show that SRPT and MLax together are competitive with the low-laxity, viable jobs. First, we show that SRPT is competitive with the number of non-preempted jobs in Opt. We then essentially show that MLax is competitive with the number of preempted jobs in Opt. The key component in the design of MLax is the condition that a job j won't replace a job on the frontier unless there are at least a constant fraction of stacks whose second-top job j″ satisfies l_{j″} ≥ γ p_j. This is the condition that intuitively most differentiates MLax from m copies of the Lax algorithm in [KalyanasundaramP03]. This is also the condition that allows us to surmount the issue of potentially assigning a job to a "wrong" processor. Jobs that satisfy this condition are highly flexible about where they can go on the frontier. Morally, our analysis shows that a constant fraction of the jobs that Opt preempts and completes must be such flexible jobs.

We combine these results in Section 5 to complete the analysis of FinalAlg.

1.4 Related Work

There is a line of papers that consider a dual version of the problem, where there is a constraint that all jobs must be completed by their deadline, and the objective is to minimize the number of machines used [PhillipsSTW02, CMS18, AzarC18, ImMPS17]. The current best known bound on the competitive ratio for this version is from [ImMPS17].

The speed augmentation results in [KalyanasundaramP00, LucierMNY13] for throughput can be generalized to weighted throughput, where there is a profit for each job, and the objective is to maximize the aggregate profit of jobs completed by their deadline. But without speed augmentation, an O(1)-approximation is not possible for weighted throughput, even allowing randomization [KorenS94].

There is also a line of papers that consider variations on online throughput scheduling in which the online scheduler has to commit to completing jobs at some point in time, with there being different variations of when commitment is required [LucierMNY13, EberleMS20, ChenEMSS20]. For example, [EberleMS20] showed that there is a scalable algorithm for online throughput maximization that commits to finishing every job that it begins executing.

2 Structure of Optimal Schedule

The goal of this section is to introduce the key properties of (near-)optimal scheduling policies that we will use in our analysis.

For completeness, we show that by losing a constant factor in the competitive ratio, we can use a constant factor fewer machines than Opt. This justifies FinalAlg running each of its three algorithms on m/3 machines.

Lemma 2.1.

For any collection of jobs J, number of machines m, and constant c ≥ 1, we have |Opt(J, m)| ≤ O(c) · |Opt(J, m/c)|.

Proof.

It is shown in [KalyanasundaramP01] that for any schedule on m machines, there exists a non-migratory schedule on a constant factor more machines that completes the same jobs. Applied to Opt(J, m), we obtain a non-migratory schedule with the same objective value on a constant factor more machines. Keeping the machines that complete the most jobs gives a non-migratory schedule on the desired smaller number of machines that completes at least a constant fraction of the jobs. ∎
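The machine-pruning step used in this proof (keep the machines that complete the most jobs) is elementary; a small sketch of ours, where an averaging argument shows the kept machines retain at least an m/m′ fraction of the completed jobs on m′ machines:

```python
def best_machines_value(per_machine_counts, m):
    """Given the number of jobs completed on each machine of a
    non-migratory schedule, keep the m machines that complete the
    most jobs and return how many completions survive."""
    kept = sorted(per_machine_counts, reverse=True)[:m]
    return sum(kept)
```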

A non-migratory schedule on m machines can be expressed as m schedules, each on a single machine and a separate set of jobs. To characterize these single-machine schedules, we introduce the concept of forest schedules. Let S be any schedule. For any job j, we let s_j and f_j denote the first and last times that S runs the job j, respectively. Note that S does not necessarily complete j at time f_j.

Definition 4 (Forest Schedule).

We say a single-machine schedule S is a forest schedule if, for all jobs j and k such that s_j < s_k ≤ f_j, S does not run j during the time interval (s_k, f_k) (so the [s_j, f_j]-intervals form a laminar family). Then S naturally defines a forest (in the graph-theoretic sense), where the nodes are the jobs run by S and the descendants of a job j are the jobs that are first run in the time interval (s_j, f_j].

Then a non-migratory m-machine schedule is a forest schedule if all of its single-machine schedules are forest schedules.
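The laminar-family condition can be checked directly from the [s_j, f_j] intervals; a small sketch of ours, with intervals given as closed (start, end) pairs:

```python
def is_laminar(intervals):
    """Check that every pair of intervals is either nested or
    disjoint, as required of the first-run/last-run intervals of a
    single-machine forest schedule."""
    for i, (a1, b1) in enumerate(intervals):
        for a2, b2 in intervals[i + 1:]:
            nested = (a1 <= a2 and b2 <= b1) or (a2 <= a1 and b1 <= b2)
            disjoint = b1 < a2 or b2 < a1
            if not (nested or disjoint):
                return False
    return True
```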

With these definitions, we are ready to construct the near-optimal policies that we will compare SRPT and MLax to:

Lemma 2.2.

Let J be a set of low-laxity jobs. Then, for suitable parameters, there exist two non-migratory forest schedules on the jobs J such that:

  1. Both schedules complete every job they run.

  2. Let J_i be the set of jobs that the schedule runs on machine i. For every machine i and time, if there exists a feasible job in J_i, then the schedule runs such a job.

  3. For all jobs , we have .

  4. If job is a descendant of job in , then

  5. .

Proof.

We modify the optimal schedule to obtain the desired properties. First, we may assume that Opt is non-migratory by losing a constant factor (Lemma 2.1). Thus, it suffices to prove the lemma for a single-machine schedule, because we can apply the lemma to each of the single-machine schedules in the non-migratory schedule. The proof for the single-machine case follows from the modifications given in Lemmas 22 and 23 of [KalyanasundaramP03]. We note that [KalyanasundaramP03] only show how to ensure the property for a particular constant, but it is straightforward to verify that the same proof holds in general. ∎

Morally, the first schedule captures the jobs in Opt that are preempted, and the second captures the jobs in Opt that are not preempted (i.e., the leaves in the forest schedule).

3 SRPT is Competitive with Non-Viable Jobs

The main result of this section is that SRPT is competitive with the number of non-viable, low-laxity jobs of the optimal schedule (Theorem 3.1). We recall that a job j is non-viable if at every time there are many jobs on the frontier of MLax whose laxity is less than γ p_j.

Theorem 3.1.

Let J be a set of low-laxity jobs. Then for γ sufficiently large and any number of machines m, SRPT is O(1)-competitive with the set of non-viable jobs of J (viability taken with respect to the run of MLax).

In the remainder of this section, we prove Theorem 3.1. The main idea of the proof is that for any non-viable job j, MLax is running many jobs that are much smaller than j (by at least a γ-factor). These jobs give a witness that SRPT must be working on these jobs or even smaller ones.

We begin with a lemma stating that SRPT is competitive with the leaves of any forest schedule. Intuitively this follows because whenever some schedule is running a feasible job, SRPT either runs the same job or a job with shorter remaining processing time. We will use this lemma to handle the non-viable jobs that are not preempted.

Lemma 3.2.

Let J be any set of jobs and S be any forest schedule on m machines and jobs J that only runs feasible jobs. Let L be the set of leaves of S. Then |L| ≤ O(1) · |SRPT(J)|.

Proof.

It suffices to show that . The main property of SRPT  gives:

Proposition 3.3.

Consider any leaf . Suppose starts running at time . Then SRPT  completes jobs in the time interval .

Proof.

At time in SRPT (J), job has remaining processing time at most and is feasible by assumption. Because , there must exist a first time where is not run by . At this time, must be running jobs with remaining processing time at most . In particular, must complete jobs by time . ∎

Using the proposition, we give a charging scheme: Each job begins with credit. By the proposition, we can find jobs that completes in the time interval . Then transfers credits each to such jobs in SRPT.

It remains to show that each job gets at most a constant amount of credit. Note that a job can only get credits from leaves whose intervals contain its completion time. There are at most m such intervals (at most one per machine), because we only consider leaves, whose intervals are disjoint if they are on the same machine. ∎

Now we are ready to prove Theorem 3.1.

Proof of Theorem 3.1.

Let the schedules be those guaranteed by Lemma 2.2 on the set of low-laxity jobs J. We re-state the properties of these schedules for convenience:

  1. Both schedules complete every job they run.

  2. Let J_i be the set of jobs that the schedule runs on machine i. For every machine i and time, if there exists a feasible job in J_i, then the schedule runs such a job.

  3. For all jobs , we have .

  4. If job is a descendant of job in , then

  5. .

By Lemma 3.2, SRPT is competitive with the leaves. Thus, it remains to show the following:

Lemma 3.4.

For sufficiently large,

Proof.

We first show that for the majority of jobs in the forest, the schedule runs the job itself on some machine for at least a constant fraction of its interval.

Proposition 3.5.

For at least half of the nodes j in the forest, there exists a closed interval of length at least a constant fraction of j's interval during which j runs on some machine.

Proof.

We say a node is a non-progenitor if it has fewer than a prescribed number of descendants at each depth below it. Because the schedule satisfies (1), at least half of the nodes in its forest are non-progenitors. This follows from Lemma 7 in [KalyanasundaramP03].

Now consider any non-progenitor node j. Because the schedule is a forest, it runs only j or its descendants on some machine during j's interval. Further, because j is a non-progenitor and the schedule satisfies (1) and (4), we can partition this interval into three parts such that the first and third are times when the schedule runs j itself, and the second is times when it runs descendants of j. By taking γ sufficiently large, the middle part is short; this follows from Lemma 6 in [KalyanasundaramP03]. It follows that at least one of the first or third parts has length a constant fraction of the interval, which gives the desired interval. ∎

Let C be the collection of jobs guaranteed by the proposition, so C contains at least half the nodes. It suffices to show that SRPT is competitive with C. Thus, we argue about SRPT in the interval (guaranteed by Proposition 3.5) for each job in C.

Proposition 3.6.

Consider any job j in C. For γ sufficiently large, SRPT starts running many jobs during j's interval such that each such job is much smaller than j.

Proof.

We let I be a prefix of j's interval of the appropriate fixed length. Recall that j is non-viable. Thus, MLax is always running many jobs much smaller than j during I.

We define A to be the set of jobs that MLax runs during I that are much smaller than j. We further partition A into size classes, where a size class consists of the jobs in A whose sizes are within a γ-factor of each other.

For each machine i, we let T_i be the times in I during which MLax is running a job from A on machine i. Note that each T_i is the union of finitely many intervals. Then, because MLax is always running many jobs from A during I, we have:

By averaging, there exists some size class that accounts for a large share of this time.

Fix such a size class. It suffices to show that MLax starts many jobs of this class in I, because every job in the class has bounded size. Taking γ large enough then gives the claim.

Note that the sizes of any two jobs in the class are within a γ-factor of each other, so there can be at most one such job per stack at any time. This implies that there are at most m jobs in the class that do not start in I (i.e., they start before it). These jobs contribute only a bounded amount to the total time. Choosing γ large enough, we can ensure that the jobs in the class that do start in I contribute most of the total. To conclude, we note that each job in the class that starts in I contributes a bounded amount to the same sum, so there must be many such jobs. ∎

Using the above proposition, we define a charging scheme to show that . Each job begins with credit. By the proposition, we can find jobs such that is contained in the time when runs . There are two cases to consider. If all jobs we find are contained in , then we transfer credits from to each of the -many jobs. Note that here we are using . Otherwise, there exists some such that is not in . Then will complete at least jobs in . We transfer credits from to each of the -many jobs. To conclude, we note that each gets credits from at most jobs in . ∎

4 SRPT and MLax are Competitive with Viable Jobs

We have shown that SRPT is competitive with the non-viable, low-laxity jobs. Thus, it remains to account for the viable, low-laxity jobs. We recall that a job j is viable if there exists a time at which many jobs j′ on the frontier satisfy l_{j′} ≥ γ p_j. The first such time is the pseudo-release time r̃_j of job j. For these jobs, we show that SRPT and MLax together are competitive with the viable, low-laxity jobs of the optimal schedule.

Theorem 4.1.

Let J be a set of low-laxity jobs. Then for γ sufficiently large and any number of machines m, SRPT and MLax together are O(1)-competitive with the set of viable jobs of J (viability taken with respect to the run of MLax).

Proof of Theorem 4.1.

Let the schedules be those guaranteed by Lemma 2.2 on the set of low-laxity jobs J. We re-state the properties of these schedules for convenience:

  1. Both schedules complete every job they run.

  2. Let J_i be the set of jobs that the schedule runs on machine i. For every machine i and time, if there exists a feasible job in J_i, then the schedule runs such a job.

  3. For all jobs , we have .

  4. If job is a descendant of job in , then

  5. .

By Lemma 3.2, SRPT is competitive with the leaves. Thus, it suffices to account for the remaining viable jobs. We do this with two lemmas, whose proofs we defer until later. First, we show that MLax pushes (but does not necessarily complete) many jobs. In particular, we show:

Lemma 4.2.

The main idea to prove Lemma 4.2 is to consider sequences of preemptions in Opt. In particular, suppose Opt preempts job j_1 for j_2 and then j_2 for j_3. Roughly, we use viability to show that the only way MLax doesn't push any of these jobs is if, in between their pseudo-release times, MLax pushes many other jobs.

Second, we show that the pushes of MLax give a witness that SRPT and MLax together actually complete many jobs.

Lemma 4.3.

The number of pushes of MLax is at most a constant times the number of jobs completed by SRPT and MLax together.

The main idea to prove Lemma 4.3 is to upper-bound the number of jobs that MLax pops because they are infeasible (all other pushes lead to completed jobs). The reason MLax pops a job for being infeasible is that, while the job was on a stack, MLax spent a long time (at least the job's remaining laxity when it was pushed) running jobs higher than it on its stack. Either those jobs are completed by MLax, or MLax must have done many pushes or replacements instead. We show that the replacements give a witness that SRPT must complete many jobs.

Combining these two lemmas completes the proof of Theorem 4.1. ∎

Now we go back and prove Lemmas 4.2 and 4.3.

4.1 Proof of Lemma 4.2

Recall that the schedule is a forest schedule. We say the first child of a job j is the child of j with the earliest starting time. In other words, if j is not a leaf, then its first child is the first job that preempts j. We first focus on a sequence of first children in the forest.

Lemma 4.4.

Let j_1, j_2, j_3 be jobs such that j_2 is the first child of j_1 and j_3 is the first child of j_2. Then MLax does at least one of the following during the time interval between the pseudo-release times of j_1 and j_3:

  • Push at least a constant fraction of m jobs

  • Push job j_2

  • Push a job on top of j_2 when j_2 is on the frontier

  • Push j_3

Proof.

By the properties of the schedule, the pseudo-release times of j_1, j_2, j_3 occur in order (recall j_2 is the first child of j_1, and j_3 the first child of j_2). It suffices to show that if, during this interval, MLax pushes fewer than the stated number of jobs, does not push j_2, and does not push any job on top of j_2 while j_2 is on the frontier, then MLax pushes j_3.

First, because MLax pushes fewer than the stated number of jobs during the interval, there exist many stacks that receive no push during this interval. We call such stacks stable. The key property of stable stacks is that the laxities of their top and second-top jobs never decrease during this interval, because these stacks are only changed by replacements and pops.

Now consider the pseudo-release time of j_1. By definition of pseudo-release time, at this time there exist many stacks whose top job has laxity at least γ p_{j_1}. Further, for any such stack, consider its second-top job. Then because j_2 is a descendant of j_1 in the schedule, we have:

It follows that there exist many stable stacks whose second-top job has laxity at least γ p_{j_2} for the entirety of the interval. We say such stacks are j_2-stable.

Now consider the pseudo-release time of j_2. We may assume j_2 is not pushed at this time. However, there exist many j_2-stable stacks. Thus, if we do not replace the top of some stack with j_2, it must be the case that the top job of every j_2-stable stack has large laxity. Because these stacks are stable, their laxities only increase by the pseudo-release time of j_3, so MLax will push j_3 on some stack at that time.

Otherwise, suppose we replace the top job of some stack with j_2. In particular, j_2 is on the frontier at this time. We may assume that no job is pushed directly on top of j_2. If j_2 remains on the frontier until the pseudo-release time of j_3, then MLax will push j_3 at that time. The remaining case is that j_2 leaves the frontier at some earlier time. We claim that it cannot be the case that j_2 is popped: by property (2), Opt could not complete j_2 by this time, so MLax cannot either. Thus, it must be the case that j_2 is replaced by some job, say k. At this time, there exist many stacks whose second-top job satisfies the replacement condition for k. It follows that there exist many j_2-stable stacks among them. Note that there is at least one such stack that is not j_2's stack. In particular, because j_2's stack has minimum laxity, it must be the case that the top job of that other stack has larger laxity. Finally, because that stack is stable, MLax will push j_3 at the pseudo-release time of j_3. ∎

Now using the above lemma, we give a charging scheme to prove Lemma 4.2. First note that by Lemma 3.2, SRPT is competitive with the leaves. Thus, it suffices to give a charging scheme in which each job begins with one credit and charges it to leaves of the forest schedule and to pushes of MLax, so that each job is charged at most a constant number of credits. Each job j distributes its credit as follows:

  • (Leaf Transfer) If j is a leaf or the parent of a leaf of the forest, then j charges its credit to that leaf.

Else let j′ be the first child of j and j″ the first child of j′ in the forest.

  • (Push Transfer) If MLax pushes j′ or j″, then j charges 1 unit to j′ or j″, respectively.

  • (Interior Transfer) Else if job j′ is on the frontier but another job, say k, is pushed on top of j′, then j charges one unit to k.

  • (Many-Push Transfer) Otherwise, by Lemma 4.4, MLax must push many jobs during the interval between the relevant pseudo-release times. In this case, j charges a small equal fraction of a credit to each of these jobs.

This completes the description of the charging scheme. It remains to show that each job is charged at most a constant number of credits. Each job receives at most a constant number of credits due to Leaf Transfers and at most a constant number due to Push Transfers and Interior Transfers. As each job lies in at most a bounded number of the relevant intervals, each job is charged only a constant amount from Many-Push Transfers.

4.2 Proof of Lemma 4.3

Recall that in MLax there are two types of pops: a job is popped when it is completed, and then we continue popping until the top job of that stack is feasible. We call the former completion pops and the latter infeasible pops. Note that it suffices to prove the next lemma, which bounds the infeasible pops. This is because the number of pushes equals the number of completion pops plus the number of infeasible pops: every stack is empty at the beginning and end of the algorithm, and the stack size only changes due to pushes and pops.

Lemma 4.5.

For γ sufficiently large, the number of infeasible pops is at most a constant fraction of the number of pushes, plus a constant times the number of jobs completed by SRPT and MLax.

Proof.

We define a charging scheme such that the completions of SRPT and MLax and the pushes executed by MLax pay for the infeasible pops. Each completion of SRPT is given a constant number of credits, each completion of MLax is given one credit, and each job that MLax pushes is given a small amount of credit. Thus each job begins with a bounded number of credits. For any h, we say job j is h-below job k (at time t) if j and k are on the same stack in MLax and j is h positions below k on that stack at time t. We define h-above analogously. A job distributes these initial credits as follows:

  • (SRPT-transfer) If SRPT completes a job k that MLax also ran at some point, then k gives credits to the job that was h-below k in MLax, for every h.

  • (stack-SRPT-transfer) If SRPT completes a job k at time t, then k gives credits to the job that is h-below the top of each stack in MLax at time t, for every h.

  • (MLax-transfer) If MLax completes a job k, then k gives credits to the job that is h-below k at the time k is completed, for every h.

  • (Push-transfer) If MLax pushes a job k, then k gives credits to the job that is h-below k at the time k is pushed, for every h.

It remains to show that, for γ sufficiently large, every infeasible pop gets at least one credit. Consider any job j that is an infeasible pop of MLax. At the time j joined some stack, say stack S, j's remaining laxity was still substantial. However, as j later became an infeasible pop, it must be the case that while j was on stack S, MLax spent at least that much time running jobs higher than j on stack S.

Let T be the union of the intervals of time during which MLax runs a job higher than j on stack S (so j is on the stack for the entirety of T). Then T is long. Further, we partition T based on the height of the job on S that is currently running. In particular, we partition T into sets T_h, where T_h is the union of the intervals of time during which MLax runs a job on S that is exactly h-above j.

By averaging, there exists an h such that T_h is long. Fix such an h. We can write T_h as the union of disjoint sub-intervals. Because during each sub-interval MLax is running jobs on S that are much smaller than j itself, these jobs give a witness that SRPT completes many jobs, as long as these sub-intervals are long enough. We formalize this in the following proposition.

Proposition 4.6.

In each sufficiently long sub-interval, job j earns credits from SRPT-transfers and stack-SRPT-transfers.

Proof.

Because the sub-interval is sufficiently long, we can partition it into sub-sub-intervals such that all but at most one sub-sub-interval has a fixed length. In particular, we have many sub-sub-intervals of exactly this length.

Now consider any such sub-sub-interval. During this time, MLax only runs jobs on S that are h-above j. Every such job is much smaller than j, so each is on stack S for only a short time. In particular, MLax must start a new h-above job, say k, at some time in the first half of the sub-sub-interval.

At that time, k is feasible. There are two cases to consider. If SRPT also completes k at some point, then j gets credits from k in an SRPT-transfer. Otherwise, if SRPT never completes k, then because k is feasible when MLax starts it, it must be the case that SRPT completes other jobs during the sub-sub-interval. Thus, j gets credits from separate stack-SRPT-transfers during this sub-sub-interval. We conclude that job j earns credits from each of the many sub-sub-intervals. ∎

On the other hand, even if the sub-intervals are too short, the job still gets credits from MLax-transfers and Push-transfers when the height of the stack changes. We formalize this in the following proposition.

Proposition 4.7.

For every sub-interval, job j earns credits from MLax-transfers and Push-transfers at the sub-interval's right endpoint.

Proof.

Up until the sub-interval's right endpoint, MLax was running an h-above job on stack S. At the endpoint, the height of the stack must change. If the height decreases, then it must be the case that MLax completes the h-above job, so j gets credits from an MLax-transfer. Otherwise, the height increases, so MLax must push a job that is h-above j, which gives j credits from a Push-transfer. ∎

Now we combine the above two propositions to complete the proof of Lemma 4.5. We say a sub-interval is long if it is long enough to apply Proposition 4.6, and short otherwise. First, suppose the aggregate length of all long intervals is at least half of the total. Then by Proposition 4.6, job j earns enough credits from the long intervals. Otherwise, the aggregate length of all long intervals is less than half. In this case, recall that the long and short intervals partition T_h, which is long, so the aggregate length of the short intervals is at least half of the total. Because each short interval has bounded length, there are many short intervals, and by Proposition 4.7, job j earns enough credits from them. In either case, job j gets at least one credit. ∎

5 Putting it all together

In this section, we prove our main result, Theorem 1.1, which follows from the next meta-theorem:

Theorem 5.1.

Let J be any set of jobs, and let J_H and J_L partition J into high- and low-laxity jobs. Then for any number of machines m > 1, the objective value of Opt(J, m) is at most a constant times the total number of jobs completed by LMNY on J_H, SRPT on J_L, and MLax on J_L.

Proof.

We have by