1 Introduction
The setting herein includes [multiple] service queues and selfish agents that arrive online over time and can be processed on one of $m$ machines. Agents may have a (private) processing time and/or a private weight.
The goal is to improve service as much as possible. Minimizing the sum of [weighted] completion times is one measure of how good (or bad) service really is.
This problem has long been studied, as a pure optimization problem, without strategic considerations Graham et al. [1979]. Given a collection of jobs, lengths, and weights, the shortest weighted processing time order Smith [1956], also known as Smith’s rule, produces a minimal sum of weighted completion times with a nonpreemptive schedule on a single machine.
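Smith's rule is straightforward to state in code. The following sketch (Python; the function name and the two-job sample are illustrative, not from the paper) sorts jobs by the ratio of processing time to weight and accumulates the weighted completion times:

```python
def smith_schedule(jobs):
    """jobs: list of (processing_time, weight) pairs, all available at time zero.
    Returns the sum of weighted completion times under Smith's rule:
    run jobs in nondecreasing order of processing_time / weight."""
    ordered = sorted(jobs, key=lambda pw: pw[0] / pw[1])
    t, total = 0, 0
    for p, w in ordered:
        t += p            # this job completes at time t
        total += w * t    # add its weighted completion time
    return total

# Two jobs, (p=3, w=1) and (p=1, w=2): Smith's rule runs the short,
# heavy job first, giving 2*1 + 1*4 = 6 (versus 1*3 + 2*4 = 11 for FIFO).
print(smith_schedule([(3, 1), (1, 2)]))
```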
Schedules can be preemptive (where jobs may be stopped and restarted over time) or nonpreemptive (where a job, once execution starts, cannot be stopped until the job is done).
To the best of our knowledge, all online algorithms for this problem have the following property: when a job arrives, there are no guarantees as to when it will finish. If preemption is allowed, even if the job starts, there is no guarantee that it will not be preempted, or for how long. If preemption is disallowed, the online algorithm keeps the job “hanging about” for some unknown length of time, until the algorithm finally decides that it is time to start it.
Essentially, this means that when one requests service, the answer is “OK — just hang around and you will get service at some unknown future date”. It is in fact impossible to achieve any bounded ratio for the sum of [weighted] completion times if one has to start processing the job as soon as possible. Some delay is inevitable. However, the issue we address is “does the job know when it will be served?”. All of these issues are fundamental when considering that every such “job” is a strategic agent. It is not only that one avoids uncertainty: knowing the future schedule allows one to make appropriate plans for the interim.
In this paper we present prompt online algorithms that immediately determine when an incoming job will be processed (without preemption). The competitive ratio is the best possible amongst all prompt online algorithms, even if randomization is allowed (the algorithm is in fact deterministic). The competitive ratio compares the sum of completion times of the online algorithm with the [harder to achieve] sum of completion times of an optimal preemptive schedule. Moreover, viewed in the context of strategic agents, these scheduling algorithms are not only DSIC but of a particularly simple form.
Upon arrival, agents are presented with a menu of possible options, where a menu entry is of the form $([t, t'), i, \pi)$. This means that the period from $t$ to $t'$ is available on machine $i$ and will cost the agent $\pi$. These menus are anonymous and do not depend on the agent that arrives. The agent then chooses one of the options.
Rational agents will never choose an interval that is shorter than their processing time (if so, the agent's cost is infinite). It is not hard to show that there is no advantage for an agent to delay her arrival.
The cost to the agent is the sum of two components: (a) the time spent waiting, weighted by the agent's [private] weight; i.e., highly impatient agents will have high weight, less impatient agents will have lower weight; (b) the price, $\pi$, associated with an option on the menu. Agents seek to minimize their cost.
Consider the case of a single queue: a selfish agent will simply join the queue immediately upon arrival; there is no reason to delay. Thus, jobs will be processed in first-in-first-out (FIFO) order. However, this may be quite bad in terms of the sum of completion times. Imagine a job with processing time $n$, arriving at time zero, followed by $n$ jobs of length one, all of which arrive immediately after the first. As the first job will only be done at time $n$, the sum of completion times for these jobs is about $3n^2/2$. Conversely, if the length-one jobs were processed before the length-$n$ job, the sum of completion times would be about $n^2/2$. Obviously it seems a good idea to delay longer jobs and expedite shorter jobs.
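The gap between the two orders can be checked numerically. The sketch below (Python; the choice of $n = 100$ is illustrative) computes the sum of completion times of a fixed processing order on a single machine:

```python
def sum_completion_times(order):
    """Sum of completion times when jobs run back-to-back on one
    machine, in the given order of processing times."""
    t, total = 0, 0
    for p in order:
        t += p          # this job completes at time t
        total += t
    return total

n = 100
fifo = [n] + [1] * n    # the long job arrived first, so FIFO runs it first
spt  = [1] * n + [n]    # shortest processing times first
print(sum_completion_times(fifo))  # 15150, roughly 3n^2/2
print(sum_completion_times(spt))   #  5250, roughly n^2/2
```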
Similarly, consider a first batch of $n$ jobs, each of length 1 and weight 1, immediately followed by a single job of length 1 and weight $w$. For FIFO processing, the weighted sum of completion times is $n(n+1)/2$ (for the weight-1 jobs) plus $w \cdot (n+1)$ (for the job of weight $w$). Optimally, the weight-$w$ job should be processed first, followed by the length-1 jobs. The weighted sum of completion times is then about $w + n^2/2$. For any constant $n$ and sufficiently large $w$, the ratio between the two sums approaches $n+1$.
The main question addressed in this paper is how to produce such dynamic menus so as to incentivize selfish agents towards behavior that achieves some desirable social goal, specifically, minimizing the sum of completion times. The dynamic menu is produced based on the past decisions of the previous agents and the current time. (For clarity we describe the menu as though it were infinite; in fact, one can think of the process as though the menu is presented entry by entry, and the selfish job will provably choose an option early on.)
We measure the quality of the solution achieved by the competitive ratio: the sum of completion times of the selfish agents, when presented with the dynamic menus, divided by the minimal sum of completion times when the future arrivals and their private values are known. In fact, the comparison is with the optimal preemptive schedule (which could definitely be better than the optimal nonpreemptive schedule).
We consider several scenarios:

All agents have weight 1 and arbitrary processing times; nothing is known a priori about the processing times. This models cases where all agents are equally impatient but have different processing requirements. The underlying idea here is to offer menu options that delay longer jobs so that they do not overly delay many shorter jobs that arrive later.

All agents have processing time 1 and arbitrary weight; nothing is known a priori about the weights. The underlying idea here is to set prices so as to delay jobs of small weight and thus to allow later jobs of large weight to finish early.

Jobs with arbitrary processing times and weights bounded by a known bound. This means that we have to delay long jobs and simultaneously have to leave available time slots for jobs with large weights.
The competitive ratios for the different scenarios appear in Table 1. We remark that the lower bounds hold even if one assumes that the machines used are arbitrarily faster than the machines used by the optimal schedule that minimizes the sum of weighted completion times.
1.1 Related Work
For one machine, weighted jobs, all available at time zero, ordering the jobs in nonincreasing order of weight/processing time minimizes the sum of weighted completion times Smith [1956]. For one machine, unweighted jobs with release times, a preemptive schedule that always processes the job with the minimal remaining processing time minimizes the sum of completion times Schrage and Miller [1966]; Schrage [1968]. As an offline problem, where jobs cannot be executed prior to some earliest time, finding an optimal nonpreemptive schedule is computationally hard Hall et al. [1997].
For parallel machines, where jobs arrive over time, a preemptive schedule that always processes the jobs with the highest priority (weight divided by remaining processing time) is a 2-approximation Megow and Schulz [2004]; this algorithm is called weighted shortest remaining processing time (WSRPT). If all weights are one, this preemptive algorithm is called shortest remaining processing time (SRPT). Other online and offline algorithms to minimize the sum of completion times appear in Bruno et al. [1974]; Shmoys et al. [1995]; Hall et al. [1997].
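As a concrete reference point, a single-machine, unit-weight version of SRPT can be simulated directly (the parallel-machine, weighted WSRPT rule cited above is analogous). In this sketch (Python; function name and the sample instance are ours, not from the paper) it suffices to consider preemption at release times only, since the running job's remaining time only decreases between releases:

```python
import heapq

def srpt_sum_completion(jobs):
    """jobs: list of (release_time, processing_time), single machine.
    Preemptive SRPT: always run the available job with least remaining work.
    Returns the sum of completion times."""
    jobs = sorted(jobs)               # by release time
    t, i, total = 0, 0, 0
    heap = []                         # remaining processing times of released jobs
    while i < len(jobs) or heap:
        if not heap and t < jobs[i][0]:
            t = jobs[i][0]            # machine idles until the next release
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(heap, jobs[i][1])
            i += 1
        rem = heapq.heappop(heap)
        next_r = jobs[i][0] if i < len(jobs) else float('inf')
        run = min(rem, next_r - t)    # run until done or the next release
        t += run
        if run < rem:
            heapq.heappush(heap, rem - run)   # preempted, put back
        else:
            total += t                # job completes at time t
    return total

# A size-4 job at time 0 is preempted by a size-1 job released at time 1:
# completion times 2 and 5, so the sum is 7 (FIFO would give 4 + 5 = 9).
print(srpt_sum_completion([(0, 4), (1, 1)]))
```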
Phillips et al. [1998] show how to convert a preemptive online algorithm into a nonpreemptive online algorithm while increasing the completion time of each job by no more than a constant factor. This transformation strongly depends on not determining immediately when the job will be executed, in contrast to a prompt algorithm, which determines when the job is executed immediately upon the job's arrival.
When selfish agents are involved, it is valuable to keep things simple Hartline and Roughgarden [2009]. Offering selfish agents an anonymous menu of options is an example of such a simple process. More complicated mechanisms require trust on the part of the agents.
Recently, Feldman et al. [2017] considered a question similar to ours, where a job with private processing time had to choose between multiple FIFO queues whose servers had different speeds. There, dynamic posted prices were associated with every queue, with the goal of [approximately] minimizing the makespan, the length of time until the last job finishes. Shortly thereafter, Im et al. [2017] used dynamic pricing to minimize the maximal flow time. Dynamic pricing schemes were considered for non-scheduling cost minimization problems in Cohen et al. [2015].
A constant approximation mechanism for minimizing sum of completion times for selfish jobs was considered in Gkatzelis et al. [2017], where the setting was an offline setting, the processing time was known in advance and the weight was private information. In an online setting, Im and Kulkarni [2016] show a constant approximation preemptive mechanism that gives an approximation to the sum of flow times when using machines that are faster by a factor of .
In this paper our goal is pricing schemes that incentivize agents to behave in a manner that [approximately] minimizes the sum of weighted completion times.
There is a vast body of work on machine scheduling problems, in offline and online settings, with and without strategic agents, and in a host of models. It is impossible to do justice to this body of work, but a very short list of additional relevant papers includes Graham [1966]; Lenstra et al. [1977]; Graham et al. [1979]; Lenstra et al. [1990]; Nisan and Ronen [2001]; Christodoulou et al. [2004]; Immorlica et al. [2009].
2 The Model
We consider a job scheduling setting with $m$ machines and jobs that arrive in real time, where $p_j$, $w_j$, and $r_j$ are, respectively, the processing time, weight, and release time of the $j$th job to arrive. It may be that $r_j = r_{j+1}$, i.e., more than one job arrives at the same time. However, job decisions are made sequentially in index order.
A valid input for this problem can be described as a sequence of jobs $\sigma = (r_1, w_1, p_1), (r_2, w_2, p_2), \ldots$, where $r_j$ is the release time, $w_j$ the job weight, and $p_j$ the job processing time of job $j$, and where $r_j \le r_{j+1}$ for all $j$. We refer to the $j$th job in this sequence as job $j$. We use the terms size and processing time interchangeably. Moreover, if $p_j < p_{j'}$ we may say job $j$ is smaller than job $j'$, etc. Let $\sigma_\ell$ be the length-$\ell$ prefix of $\sigma$. The total volume of a set of jobs $A$, denoted $V(A)$, is the sum of the processing times of the jobs in $A$, i.e., $V(A) = \sum_{j \in A} p_j$.
Let $s_j$ be the time at which job $j$ starts processing (on some machine $i_j$). The completion time of job $j$ is $C_j = s_j + p_j$.
The objective considered in this paper is to minimize the sum of [weighted] completion times; i.e., we wish to minimize $\sum_j w_j C_j$.
For jobs $j$, $j'$, with $j < j'$, job $j$ is assigned (or chooses) machine $i_j$ at time $s_j$ before job $j'$ is assigned machine $i_{j'}$ at $s_{j'}$. We say that $j$ and $j'$ overlap if $i_j = i_{j'}$ and ($s_j \le s_{j'} < s_j + p_j$ or $s_{j'} \le s_j < s_{j'} + p_{j'}$).
A valid (nonpreemptive) schedule for an input $\sigma$ is a sequence $(s_1, i_1), (s_2, i_2), \ldots$ where no overlaps occur. An online algorithm determines $(s_j, i_j)$ after seeing jobs $1, \ldots, j$ and before seeing job $j+1$.
We consider online mechanisms where jobs are selfish agents, processing times and weights are private information, and job $j$ is presented with a menu of options upon arrival. Every option on the menu is of the form $(I, i, \pi)$ where (i) $I$ is a time interval $[t, t')$, with integer endpoints, and where $t \ge r_j$, (ii) $i$ is some machine, and (iii) $\pi$ is the price for choosing this entry. The menu of options presented to job $j$ is computed after jobs $1, \ldots, j-1$ have all made their choices and also depends on the release time of job $j$ (because one cannot process a job in the past). We assume no feedback from jobs after they choose their menu options, i.e., if a job of size $p$ chooses an interval of length $\ell > p$, we do not learn that the interval is only partly used, and specifically, cannot offer the remaining $\ell - p$ units of time to future jobs.
For job $j$ that chooses menu entry $(I, i, \pi)$ we use the following notation: (i) $I_j$ for the interval chosen by job $j$, $I_j = I$, (ii) $i_j$ for the machine chosen by job $j$, $i_j = i$, and (iii) $\pi_j$ for the price of the entry chosen by $j$, $\pi_j = \pi$.
Although the menus described above are infinite, one can present the menu items sequentially. With unit weight jobs, a job of processing time will make its choice within the first options presented. With unit length jobs, a job of weight will make its choice within the first options presented. With arbitrary lengths and arbitrary weights, a job of processing time and of weight will make its choice within the first options presented.
The cost to job $j$ with weight $w_j$ and processing time $p_j$ for choosing the menu entry $(I, i, \pi)$ is infinite if the time interval is too short: $|I| < p_j$. If $|I| \ge p_j$ then the cost to job $j$ is a cost of $w_j$ for every unit of time until job $j$ starts processing, plus the extra price from the menu. I.e., the cost to job $j$ with release time $r_j$, processing time $p_j$ and weight $w_j$, for choosing menu entry $(I, i, \pi)$ with $I = [t, t')$ and $t' - t \ge p_j$, is $w_j \cdot (t - r_j) + \pi$.
For the specialized cases of weight one jobs or unit length jobs the general model above is somewhat simpler:
2.1 Modeling weight one jobs with arbitrary Processing times
If jobs have weight one, we give (optimal) menus that do not require pricing menu entries. Any entry on the menu is available for free. Therefore, we can simplify the menu structure as follows: the job chooses a time interval and a machine from a menu with entries of the form $(I, i)$, where the first entry is a time interval and the second entry is a machine. (Although the general setting allows pricing menu items, it turns out that for weight-1 jobs the optimal menu does not need to differentiate entries by price.) The crux of the matter is coming up with the right menu.
Jobs choose one of the entries from the menu immediately upon arrival. As above, we say that job $j$ chooses menu entry $(I_j, i_j)$, where $I_j$ is an interval and $i_j$ a machine.
For job $j$ with arrival time $r_j$ and processing time $p_j$, the cost associated with choosing the menu item $([t, t'), i)$ is infinite if $t' - t < p_j$ and $t - r_j$ otherwise. Jobs always seek to minimize their cost.
2.2 Modeling unit length jobs of arbitrary weight
Every job requires one unit of processing time on one of $m$ different processors. Every job is a selfish agent that has a private weight $w_j$, the cost to the job of one unit of delay.
The job chooses a machine and time slot from a menu with entries of the form $([t, t+1), i, \pi)$, where the first entry is a time slot, the second entry is a machine, and the third entry is the price of this time slot on the machine.
Jobs choose one of the entries from the menu immediately upon arrival. Job $j$ is said to choose menu item $(I_j, i_j, \pi_j)$, where $I_j$ is a length-one interval, $i_j$ a machine, and $\pi_j$ the price to be paid for choosing this option.
For job $j$ with arrival time $r_j$ and weight $w_j$, the cost associated with choosing the menu item $([t, t+1), i, \pi)$ is $w_j \cdot (t - r_j) + \pi$. Jobs always seek to minimize their cost.
3 Dynamic Menu for Selfish Jobs with Heterogeneous Processing Times
In this section we introduce a dynamic menu based mechanism, for jobs of weight one and heterogeneous processing times, with competitive ratio , where is the maximal job processing time among all jobs.
In Section 3.1 we present a couple of natural algorithms that have competitive ratio of . In Section 3.2 we provide integer sequences and corresponding interval sequences that serve as a building block for our dynamic menu mechanism, which is presented in Section 3.3. Finally, in Section 3.4 we provide the analysis showing that the dynamic menu gives a competitive ratio of .
3.1 Warmup: nonworking algorithms
We present two natural algorithms for prompt scheduling on a single machine, which result in poor competitive ratios. Assume , for some constant , and is known in advance. Assume also that all jobs have release time (but arrive sequentially). In this case, the optimal algorithm sorts jobs from short to long processing times, and schedules them based on this order. In an attempt to mimic this optimal (offline) algorithm with an online algorithm — in the case where the input starts with a sequence of long jobs — we would like to introduce delays, keeping some early intervals vacant for short jobs that might come in the future.
Consider an algorithm that sets a static interval sequence (i.e., a sequence that is set once and for all from the outset), and schedules each arriving job on the first interval on which it fits.
One natural algorithm sets (an infinite loop of) the following sequence of intervals: the th interval for is of length .
Consider the following input: for , a job of size arrives (all jobs with release time zero, job follows job ), followed by jobs of length (where is determined later). The cost for the optimal algorithm is:
In the proposed algorithm, the last unit-length jobs will be scheduled after the first jobs, which have a total processing time of . This implies:
For , we get that , while , leading to a competitive ratio of .
The proposed algorithm failed because it did not leave enough space for the unit length jobs. A possible attempt to fix this problem would be to have more short intervals than long ones. One natural such sequence is (an infinite loop of) length 1 intervals, followed by length 2 intervals, etc., ending with a single interval of length .
Consider an input sequence in which jobs of size 2 arrive at time 0, followed by one large job of size . The optimal schedule processes the short jobs first, then the large one, resulting in cost:
In the proposed algorithm, every short job will be scheduled after the first unit length intervals (as they do not fit unit length intervals). The obtained cost is thus
resulting in competitive ratio, as before. Thus, saving too much space for short jobs might result in unnecessary delay, which may lead to a poor competitive ratio.
Motivated by the above two failed attempts, we now present our solution:
3.2 The Integer and Interval Sequences
We define sequences of integers $S_k$, $k \ge 1$, as follows: Let $S_1 = (1)$ and for $k > 1$ let $S_k = S_{k-1} \circ S_{k-1} \circ (2^{k-1})$, where $\circ$ denotes concatenation. Ergo, $S_2 = (1, 1, 2)$, $S_3 = (1, 1, 2, 1, 1, 2, 4)$, etc.
Let $L_k = 2^k - 1$ denote the length of $S_k$ (follows inductively from $L_1 = 1$ and $L_k = 2L_{k-1} + 1$). Let $S_k(n)$, $1 \le n \le L_k$, be the $n$th element of $S_k$. Let $S_\infty$ be an infinite sequence whose length-$L_k$ prefix is $S_k$ (for all $k \ge 1$): $S_\infty = 1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 1, 1, 2, 4, 8, \ldots$
Let $S_\infty(n)$, $n \ge 1$, be the $n$th element of $S_\infty$. Note that $S_k(n) = S_{k'}(n)$ for all $k < k'$ and all $n \le L_k$; ergo, $S_k$ is a prefix of $S_{k'}$ for $k < k'$.
Lemma 1.
For all $k \ge 1$, the sum of all the value items in $S_k$ equals $k \cdot 2^{k-1}$: $\sum_{n=1}^{L_k} S_k(n) = k \cdot 2^{k-1}$.
Proof.
Proof via induction over $k$. The claim is obviously true for $k = 1$. Assume the claim is true for $k - 1$, i.e., $\sum_{n=1}^{L_{k-1}} S_{k-1}(n) = (k-1) \cdot 2^{k-2}$. Since $S_k$ is a concatenation of two copies of $S_{k-1}$ and the singleton sequence $(2^{k-1})$, we get that $\sum_{n=1}^{L_k} S_k(n) = 2 \cdot (k-1) \cdot 2^{k-2} + 2^{k-1} = k \cdot 2^{k-1}$. ∎
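Assuming the recursion $S_1 = (1)$, $S_k = S_{k-1} \circ S_{k-1} \circ (2^{k-1})$ described above, the sequences and the sum from Lemma 1 can be checked with a few lines of Python (the function name is ours):

```python
def S(k):
    """The k-th integer sequence, assuming the recursion
    S_1 = (1) and S_k = S_{k-1} o S_{k-1} o (2^{k-1})."""
    if k == 1:
        return [1]
    prev = S(k - 1)
    return prev + prev + [2 ** (k - 1)]

# Length L_k = 2^k - 1 and sum k * 2^(k-1), as in Lemma 1.
for k in range(1, 6):
    s = S(k)
    assert len(s) == 2 ** k - 1
    assert sum(s) == k * 2 ** (k - 1)
print(S(3))  # [1, 1, 2, 1, 1, 2, 4]
```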
We use the sequences $S_k$ to define interval sequences. Let $T(n)$ be the sum of the first $n$ entries in $S_\infty$ (i.e., $T(0) = 0$, $T(1) = 1$, $T(2) = 2$, $T(3) = 4$, etc.).
We define $\Gamma_k$, $k \ge 1$, to be a sequence of $L_k$ consecutive intervals, the first of which starts at time $0$, and where the length of the $n$th interval equals $S_k(n)$. I.e., the $n$th interval of $\Gamma_k$ is $[T(n-1), T(n))$.
For example,
$\Gamma_3 = [0,1),\ [1,2),\ [2,4),\ [4,5),\ [5,6),\ [6,8),\ [8,12).$
For any interval sequence $\Gamma$ let $start(\Gamma)$ be the start of the first interval in $\Gamma$ and let $end(\Gamma)$ be the end of the last interval in $\Gamma$. For example, $start(\Gamma_3) = 0$ and $end(\Gamma_3) = 12$.
We say that $\Gamma_j$ appears in $\Gamma_k$ if there exists some offset $t$ such that the interval sequence $\Gamma_j$, shifted by $t$, is a contiguous subsequence of $\Gamma_k$. In this case we also say that $S_j$ appears in $S_k$. Note that while $S_j$ is a sequence of integers, both $\Gamma_j$ and $\Gamma_k$ are interval sequences.
By construction, for any $j$ and $k$, if two appearances of $\Gamma_j$ occur in some $\Gamma_k$, then they are disjoint except, possibly, for their endpoints. Let $I$ be an interval of length $2^{j-1}$ that appears in $\Gamma_k$. Then there is a unique appearance of $\Gamma_j$ in $\Gamma_k$ such that $I$ is the last interval of that appearance. It follows from Lemma 1 that
Corollary 1.
For all $k$, for all $j \le k$,

$S_j$ appears in $S_k$ $2^{k-j}$ times.

The sum of the lengths of the intervals in $\Gamma_j$ is $j \cdot 2^{j-1}$.
The interval sequences defined above suggest a new possible static algorithm. Divide the timeline of each machine into intervals as in , and let any job that arrives occupy the first unoccupied interval it fits in. Unfortunately, as proved in Section B, when the competitive ratio is evaluated as a function of alone, this algorithm is competitive, as are the natural algorithms in Section 3.1. (When the competitive ratio may be a function of and , this algorithm is competitive, see Theorem 5).
Definition 1.
A state
is a vector of consecutive interval sequences of the form
for some (which we refer to as the length of ) and integers for , and where for . This means that the interval sequences are disjoint and ordered by their starting times. Note that there might be gaps between two consecutive state entries, i.e., for some .
3.3 Competitive Dynamic Menu
When job arrives the algorithm is in some configuration , where is some state of length , and is the set of intervals occupied by the previous jobs. State represents every machine's division of the timeline into time intervals (the same division for all machines). This division will be kept at any future time. For every , is fixed and will be a part of every future state, while might be subject to change. We refer to as the tentative sequence of state . keeps track of all previously allocated intervals (on all machines): means that some job chose the interval on machine . Note that the size of job , , might be strictly smaller than the length of the interval (), yet the interval is still considered occupied.
Generating the Dynamic Menu
Given a state and a time , we define an interval sequence as follows:
is used to create the menu presented to a job . We present an algorithm for the creation of the menu, based on the previous configuration , and the current time .
Let .
Set to be the length of the first time interval in beginning at time .
Add to the menu for all machines in which is unoccupied (i.e, ).
Set
Repeat until job chooses an interval:
Let be the length of the first interval longer than in that starts at time (it follows that ).
Add to the menu for all machines in which is unoccupied (i.e., ).
Set .
By construction, no job will ever choose a time interval that starts before the job arrival time, nor will it ever choose a slot that has already been chosen.
A selfish job of length $p$ always chooses a menu entry of the form $(I, i)$ where $I$ is the earliest offered interval with $|I| \ge p$.
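This choice rule can be sketched as follows (Python; the single-machine simplification and the sample menu, whose interval lengths follow $S_3$, are ours, not from the paper):

```python
def choose_entry(menu, p):
    """menu: list of (start, length) intervals, ordered by start time.
    A selfish weight-1 job of size p picks the earliest interval it fits in
    (any shorter interval would cost it infinity)."""
    for start, length in menu:
        if length >= p:
            return (start, length)
    return None  # with an infinite menu this never happens

# Interval lengths 1, 1, 2, 1, 1, 2, 4 as in S_3.
menu = [(0, 1), (1, 1), (2, 2), (4, 1), (5, 1), (6, 2), (8, 4)]
print(choose_entry(menu, 2))  # earliest interval of length >= 2
```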
Updating States.
After job makes its choice of menu entry, , we update the configuration from to . Clearly, . In the rest of this section we describe how to compute .
Recall that a state is a vector of consecutive and disjoint interval sequences. Initially, with length and is an empty sequence with . always contains all of ’s interval sequences except possibly the tentative sequence . When job of size chooses an interval, the new tentative sequence can be one of the following:

Unchanged from former: The new tentative sequence in is the same as the former tentative sequence in , i.e., . This happens when , see entry in Table 2.

Disjoint from former: The former tentative sequence, becomes fixed, and the new tentative sequence is disjoint from the former. The tentative sequence in , , is the th element in all future states , for . See entries and in Table 2.

Extension of former: The new tentative sequence is an extension of the former tentative sequence. I.e., if then and . See entry in Table 2.
Let be two consecutive interval sequences in a state . If , we say the interval is a gap.
Figure 1 is an example with 5 jobs that arrive over time, and how the configuration changes over time. The jobs in Figure 1 illustrate cases 1–4 from Table 2 in the following order: case 2 for job 1, case 1 for job 2, case 3 for job 3, case 4 for job 4 and case 2 for job 5.
Based on the definition of and its update rule, we observe the following.
Observation 1.
For every ,

If for job , , then is the last interval of the (new) tentative sequence which is of length .

For every , there exists some job such that is the last interval in , and . This means that job occupies the entire last interval in on machine .
Proof.

must have been updated by one of the entries 2,3 or 4 in Table 2. In all these cases, the last interval in is of size and was chosen by job on some machine (follows from case analysis of the menu presented to job and its possible choices).

For all , was the tentative sequence in some past state (). Let be the minimum such that was the tentative sequence in state . Then, .
∎
3.4 Analysis
3.4.1 Simplifying assumptions on the input sequence
For the purpose of analysis we assume an input sequence with integral release times and processing times that are powers of two. When going from such restricted inputs to the original inputs, the optimal preemptive algorithm's cost improves by no more than a constant factor, whereas the online mechanism does not increase the sum of completion times.
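The rounding step can be made explicit. In this sketch (Python; illustrative) each processing time is rounded up to the next power of two, which at most doubles any job's size and hence changes the cost of any schedule by at most a constant factor:

```python
def round_up_pow2(p):
    """Smallest power of two that is >= p (assumes p >= 1).
    Note round_up_pow2(p) < 2 * p, so rounding at most doubles a job."""
    k = 1
    while k < p:
        k *= 2
    return k

sizes = [1, 2, 3, 5, 8, 9]
print([round_up_pow2(p) for p in sizes])  # [1, 2, 4, 8, 8, 16]
```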
Moreover, we assume that the input sequence never creates gaps, as such gaps leave all machines free in both the online schedule and the optimal preemptive schedule (a gap created upon the arrival of job implies that all earlier jobs were fully processed by the online schedule before job 's arrival; ergo, the optimal preemptive algorithm must also have completed processing those jobs prior to the arrival of job ). Therefore, a gap contributes equally to the sum of completion times of the online schedule and of the optimal preemptive schedule, which can only improve the competitive ratio. Hence, an adversary generating such a sequence will never introduce gaps, i.e., for any state , for every .
3.4.2 Comparison to SRPT
We now turn to analyze the performance of our mechanism. This is done by comparing the completion time of each job in our mechanism and in SRPT. Let be a job in the input sequence. We define to be the set of all jobs that arrived no later than job and that are no bigger than it (note ). These jobs are all completed no later than job both in our mechanism and in SRPT, i.e., (where is the completion time of job in SRPT). Our analysis is based on this set.
We start with a few simple properties. Recall that the last interval in is the only interval of size in the sequence. The following lemma gives a lower bound on the completion time of a job (in the optimal schedule) that chose the last interval in (under the dynamic menu mechanism).
Lemma 2.
Let . Let and be such that some job with chose the last interval of on machine (). Let be the set of jobs, completing no later than job under SRPT, that execute on the same machine as job , and occupy some interval in . I.e., . Note that . Then,
Proof.
If then as the claim is clearly true. Specifically, the claim is true for since all processing times are so . It remains to consider the case where (i.e., ).
Proof via induction over . The claim is true for as stated above.
Let , and assume the claim is true for all .
If then it must be the case that every interval of length in is occupied on machine (by a job ), otherwise job would have preferred such an interval over its choice. Let be an interval of length in . By our construction, is the last interval of some (unique) appearance in , i.e., for and . is occupied by some job (see top row in Figure 4). It follows that . By the induction hypothesis, .
It now follows from Lemma 1, that appears times in (irrespective of ), so we can conclude that , as desired. ∎
Corollary 2.
For , replacing the condition that in Lemma 2 above with the condition , gives a [weaker] guarantee that .
Proof.
Let , , and be as in Lemma 2. If then the claim is true. Otherwise, . Recall that by construction, where and is a length interval. The last interval in is of length and must be occupied on every machine when job arrived (otherwise it would have chosen it on some available machine). Thus, it must be occupied by some job with and . Applying Lemma 2 to job and gives the desired result. ∎
For any job , let be the completion time of job in the SRPT schedule. Our goal is to show that
Consider job , and the final state . Let , be such that
Lemma 3.
Assume . Let be the size of job . Let for some and some . If , then , i.e., the volume of jobs of size and for which is at least .
Proof.
We separate the proof into two cases:

If , then, on all machines, the last interval of every appearance in is occupied by a job that arrived before job (by a job of size ). Otherwise, job would have chosen such an unoccupied interval. By using Lemma 2 on every one of the appearances (on every machine, separately) we get that for every such appearance in (on every machine), , i.e., every such appearance has a volume of at least of jobs in . Taken together with Corollary 1 this implies that .

Otherwise (), consider the minimal index with ; this means that the tentative sequence is disjoint from the tentative sequence . (If it were unchanged, or an extension of the former, this would contradict the assumption that .)
Since , arrived no later than job and . Let . It must be the case that , otherwise, would be an extension of .
Therefore, since can fit in the last interval of and its release time is no later than , it must be the case that this interval is occupied on every machine, by some job of size that arrived before . By applying Lemma 2, we get that in every machine, , i.e., there are jobs of size in that arrived no later than , of volume at least . Since arrived no later than , and , the lemma follows.
∎