Job scheduling is one of the fundamental problems in operations research and computer science: broadly speaking, the goal is to schedule a collection of jobs with different specifications on a set of machines so as to minimize a certain performance metric. Depending on the application, various performance metrics have been proposed and analyzed over the past decades, some of the most important being the weighted completion time $\sum_j w_j C_j$, where $C_j$ denotes the completion time of job $j$, the weighted flow time $\sum_j w_j (C_j - r_j)$, where $r_j$ is the release time of job $j$, or a generalization of both such as the weighted $\ell_k$-norm of flow time [1, 2, 3, 4, 5]. In fact, one can capture all of these performance metrics using the general form $\sum_j g_j(C_j)$, where $g_j$ is a general nonnegative and nondecreasing cost function. For instance, by choosing $g_j(t) = w_j (t - r_j)$ one can recover the weighted flow time cost function. In this paper, we focus on this most general performance metric and develop a greedy online algorithm with a bounded competitive ratio under certain assumptions on the structure of the cost functions $g_j$. Of course, the performance metrics discussed above are written only for a single machine. However, one can naturally extend them to multiple unrelated machines by setting the objective to $\sum_j g_{i(j)j}(C_j)$, where $i(j)$ denotes the machine to which job $j$ is dispatched, and $g_{ij}$ is the cost function associated with job $j$ if it is dispatched to machine $i$.
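To make these metrics concrete, the following is a small self-contained sketch; the instance values (release times, weights, completion times) are hypothetical.

```python
# Toy illustration of the performance metrics discussed above.
# C[j] is the completion time of job j, r[j] its release time, w[j] its weight.

def weighted_completion_time(C, w):
    return sum(w[j] * C[j] for j in range(len(C)))

def weighted_flow_time(C, r, w):
    return sum(w[j] * (C[j] - r[j]) for j in range(len(C)))

def generalized_cost(C, g):
    # general form sum_j g_j(C_j) with nonnegative nondecreasing g_j
    return sum(g[j](C[j]) for j in range(len(C)))

r = [0, 1, 2]          # release times (hypothetical)
w = [2.0, 1.0, 3.0]    # weights (hypothetical)
C = [3, 4, 6]          # completion times under some fixed schedule

print(weighted_completion_time(C, w))   # 2*3 + 1*4 + 3*6 = 28
print(weighted_flow_time(C, r, w))      # 2*3 + 1*3 + 3*4 = 21
# choosing g_j(t) = w_j * (t - r_j) recovers weighted flow time
g = [lambda t, j=j: w[j] * (t - r[j]) for j in range(3)]
print(generalized_cost(C, g))           # again 21
```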
Job scheduling problems have been extensively studied in the literature under both offline and online settings. In the offline setting, it is assumed that all the job specifications (i.e., processing lengths, release times, and cost functions) are known and given to the scheduler a priori. In the online setting, which we consider in this paper, the scheduler only learns a job's specification upon its arrival to the system, at which point the scheduler must make an irrevocable decision. An immediate question, therefore, is whether an online scheduler can still achieve a performance "close" to that of the offline one despite its lack of advance information. This question is addressed using the notion of competitive ratio, which has been used frequently as a standard metric for evaluating the performance of online algorithms. In this paper, we shall also use the competitive ratio to evaluate the performance guarantees of our devised online algorithms.
In this paper we allow preemptive schedules, meaning that the processing of a job on a machine can be interrupted due to the existence or arrival of other jobs. This is much needed in a deterministic setting because it has been shown that, even for a single machine, there are strong lower bounds on the competitive ratio of any nonpreemptive online algorithm. It is worth noting that by relaxing the deterministic job specifications to stochastic ones, one can obtain nonpreemptive competitive algorithms, but only with respect to weaker offline benchmarks. Moreover, we consider nonmigratory schedules, in which a dispatched job must stay on the same machine until its completion and is not allowed to migrate to other machines. In fact, for various reasons, such as increasing the lifetime of the machines or reducing failures in job completions, nonmigratory schedules are quite desirable in practical applications. Furthermore, in this paper, we assume that all the machines have fixed unit processing speed and can process only a unit of work per unit time. Note that this is more restrictive than the setting in which a machine can vary its speed over time [8, 9]. In the latter case, an online scheduler has the extra freedom to adjust its speed at various time instances (possibly by incurring an energy cost) to achieve a better competitive ratio. This extra freedom usually makes the analysis of the varying-speed setting simpler than the more restrictive fixed-speed case that we consider in this paper.
Unfortunately, even for the simple weighted flow time problem on three unrelated machines, it is known that no online algorithm can achieve a bounded competitive ratio. To overcome this obstacle, in this paper we adopt the speed augmentation framework, which was first proposed in earlier work and subsequently used in various online job scheduling problems [12, 8, 4, 3, 13]. More precisely, in the speed augmentation framework one compares the performance of the online scheduler with a weaker optimal offline benchmark, namely one in which each machine has a fixed slower speed of $\frac{1}{1+\epsilon}$. In other words, an online scheduler can achieve a bounded competitive ratio if its machines run $(1+\epsilon)$ times faster than those in the optimal offline benchmark.
In general, there are two different approaches to devising competitive algorithms for online job scheduling. The first is the potential function technique, where one comes up with a clever potential function and shows that the proposed algorithm behaves well compared to the offline benchmark in an amortized sense. Unfortunately, constructing potential functions can be quite tricky and often requires a good "guess". Even when one finds the right potential function, such an analysis provides little insight into the problem, with the choice of potential function being very specific to the problem setup [5, 14, 13, 15]. An alternative and perhaps more powerful technique, which we also use in this paper, is based on linear/convex programming and dual fitting [4, 8, 3]. In this approach, one first models the offline job scheduling problem as a mathematical program and then utilizes this program to develop an online algorithm that preserves the KKT optimality conditions as much as possible over the course of the algorithm. In this manner, one constructs an online feasible primal solution (i.e., the solution generated by the algorithm) together with a properly "fitted" dual solution, and shows that the cost increments of the primal objective due to the arrival of a new job (i.e., the increase in the cost of the algorithm due to its decisions) and those of the dual objective remain within a constant factor of each other; a competitive ratio for the devised algorithm then follows from weak duality. However, one major difficulty here is to carefully select the dual variables, which in general can be highly nontrivial. As one of the contributions of this paper, we provide a principled way of setting the dual variables using elementary results from optimal control and the minimum principle.
As a by-product, we show how one can recover some of the earlier dual fitting results which were obtained in an ad hoc manner and even extend them to more complicated heterogeneous settings. We believe that this optimal control perspective has the potential to be applied to many other similar problems and provides a systematic tool for dual-fitting analysis when the choice of “right” dual variables is highly nontrivial.
I-A Related works
It is known that without speed augmentation there is no competitive online algorithm for minimizing weighted flow time. The first online competitive algorithm with speed augmentation for minimizing flow time on a single machine appeared in early work on this problem. Subsequently, a potential function argument was used to show that a natural online greedy algorithm is speed-augmented competitive for minimizing the $\ell_k$-norm of weighted flow time on unrelated machines. This result was later improved by the first analysis of online job scheduling using the dual fitting technique. In that algorithm, each machine works based on the highest residual density first (HRDF) rule, where the residual density of job $j$ on machine $i$ at time $t$ is given by the ratio of its weight over its remaining length, i.e., $\frac{w_j}{p_{ij}(t)}$. In particular, a newly released job is dispatched to a machine that gives the least increase in the objective of the linear program. Our algorithm for online job scheduling with generalized cost functions is partly inspired by an earlier primal-dual algorithm given for a different objective of minimizing the sum of energy and weighted flow time on unrelated machines. However, unlike that setting, where the optimal dual variables can be precisely determined using natural KKT conditions, the dual variables in our setting do not admit a simple closed-form characterization. Therefore, we follow a different path and infer our desired properties using a dynamic construction of dual variables, which requires completely new ideas.
Online job scheduling on a single machine with general cost functions of the form $g_j(t) = w_j g(t)$, where $g$ is a general nonnegative nondecreasing function, has been studied in the literature. In particular, it was shown that the highest density first (HDF) rule is optimal for minimizing the fractional completion time on a single machine, and the question was left open for multiple machines. Here, a fractional objective means that the contribution of a job to the objective cost is proportional to its remaining length. That analysis is based on a primal-dual technique which updates the optimal dual variables upon arrival of a new job using a fairly complex two-phase process. We obtain the same result here using a very simple process mainly inspired by dynamic programming, which also motivates our optimal control formulation for extending this result to arbitrary nondecreasing cost functions $g_j$. The problem of minimizing the fractional generalized flow time on unrelated machines for a convex and nondecreasing cost function has been studied recently, where it is shown that a greedy dispatching rule together with the HRDF scheduling rule provides a competitive online algorithm in the speed-augmented setting. That analysis is based on nonlinear Lagrangian relaxation and dual fitting in the same spirit as earlier dual fitting analyses. However, the resulting competitive ratio depends on additional assumptions on the cost function, and the problem considered there is a special case of our generalized completion time problem on unrelated machines. In particular, our algorithm is different in nature and is based on a simple primal-dual dispatching scheme. In fact, the competitive ratio that we obtain here follows organically from our analysis and requires less stringent assumptions on the cost functions.
The generalized flow problem on a single machine with cost functions of this special form has also been studied, where it was shown that for a nondecreasing nonnegative function $g$ the HDF rule is speed-augmented competitive, and that this is essentially the best online algorithm one can hope for in the speed-augmented setting; that work uses Lagrangian duality for online scheduling problems beyond linear and convex programming. From a different perspective, many scheduling problems can be viewed as allocating rates to jobs subject to certain constraints on the rates. In other words, at a given time a processor can simultaneously work on multiple jobs by splitting its computational resources at different rates among the pending jobs. The problem of rate allocation on a single machine with the objective of minimizing weighted flow/completion time when jobs of unknown size arrive online (the nonclairvoyant setting) has been studied in [13, 19, 14]. Moreover, a speed-augmented competitive algorithm is known for fair rate allocation over unrelated machines.
The offline version of job scheduling on a single machine or multiple machines has also received much attention in past years [1, 21]. One line of work uses a convex program to give an approximation algorithm for minimizing the $\ell_k$-norm of the loads on unrelated machines. The offline version of the very general scheduling problem on a single machine whose online version is considered in this paper has also been studied: there, a preemptive approximation algorithm is given for minimizing the generalized heterogeneous completion time $\sum_j g_j(C_j)$, with an approximation ratio depending on $P$, the ratio of the maximum to minimum job length. This result has recently been extended to the case of multiple identical machines. Prior work has also considered the online generalized completion time problem on a single machine and provided a rate allocation algorithm that is speed-augmented competitive assuming differentiable and monotone concave cost functions $g_j$. We note that the rate allocation problem is a significant relaxation of the problem we consider here. To the best of our knowledge, this work is the first to study the generalized completion time problem in the speed-augmented setting. In particular, for both single and multiple unrelated machines, we provide online preemptive nonmigratory algorithms whose competitive ratios depend on the curvature of the cost functions $g_j$.
I-B Organization and contributions
We first provide a formal formulation of the heterogeneous generalized fractional completion (HGFC) time problem on a single machine in Section II. In Section III, we consider a special case of HGFC in which the cost functions are of the form $g_j(t) = w_j g(t)$, where $w_j$ is a constant weight and $g$ is an arbitrary nonnegative nondecreasing function. We provide a simple process to update the dual variables upon arrival of each job, which in turn implies that HDF is an optimal online schedule for this special case. To handle the general HGFC problem, in Section IV we provide an alternative optimal control formulation for the offline HGFC problem with identical release times. This formulation allows us to set our dual variables as close as possible to the optimal dual variables. Using this, in Section V we consider the online HGFC problem on a single machine and design an online algorithm as an iterative application of the offline HGFC with identical release times. In that regard, we deduce the desired properties of our choice of dual variables by making a connection to a network flow problem. Together, these results allow us to bound the competitive ratio of our online algorithm for HGFC on a single machine assuming monotonicity of the cost functions $g_j$. In Section VI, we extend this result to online scheduling for HGFC on unrelated machines by assuming convexity of the cost functions $g_{ij}$. We conclude the paper with some discussion and future directions of research in Section VII. Finally, in Appendix I, we present another application of our optimal control framework in analyzing and generalizing some of the existing dual fitting techniques. We relegate other auxiliary lemmas to Appendix II.
II Problem Formulation
In this paper we focus only on devising competitive algorithms for fractional objective functions. This is a common approach to obtaining a competitive, speed-augmented scheduling algorithm for various integral objective functions: one first derives an algorithm that is competitive for the fractional objective [3, 14, 17, 8]. In fact, it is known that any $s$-speed $c$-competitive algorithm for the fractional generalized flow/completion problem can be converted to an $s(1+\epsilon)$-speed $O(\frac{c}{\epsilon})$-competitive algorithm for the integral problem, for any $\epsilon > 0$. Next we introduce a natural LP formulation for the HGFC problem on a single machine and postpone its extension to multiple unrelated machines to Section VI.
Consider a single machine that can work on at most one unfinished job at any time instance $t$. Moreover, assume that the machine has a fixed unit processing speed, meaning that it can process only one unit length of a job per unit time. We consider a clairvoyant setting where each job $j$ has a known length $p_j$ and a job-dependent cost function $g_j$, which are revealed to the machine only at its arrival time $r_j$. Note that in the online setting the machine does not know the job specifications $(p_j, g_j, r_j)$ a priori, and only learns them upon release of job $j$ at time $r_j$. Given a time instance $t$, let us denote the remaining length of job $j$ at time $t$ by $p_j(t)$, so that $p_j(r_j) = p_j$. We define the completion time $C_j$ of job $j$ to be the first time at which the job is fully processed, i.e., $C_j = \min\{t : p_j(t) = 0\}$. Of course, $C_j$ depends on the type of schedule that the machine uses to process the jobs, which we have not yet specified. The heterogeneous integral generalized completion time problem is then to find a schedule which minimizes the objective cost $\sum_j g_j(C_j)$, where each $g_j$ is a nonnegative nondecreasing differentiable function with $g_j(r_j) = 0$.
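The setting above can be sketched in a discrete-time toy model (unit-speed machine, one job per slot); all instance data below is hypothetical.

```python
# A minimal discrete-time sketch of the single-machine setting.
# schedule[t] = index of the job processed in slot t (or None for idling).

def run_schedule(schedule, lengths):
    """Run a preemptive schedule and return the completion time of each job."""
    remaining = list(lengths)
    C = [None] * len(lengths)
    for t, j in enumerate(schedule):
        if j is None:
            continue
        remaining[j] -= 1          # unit speed: one unit of work per slot
        if remaining[j] == 0:
            C[j] = t + 1           # first time the job is fully processed
    return C

lengths = [2, 1]
# process job 0, then job 1, then job 0 again (job 0 is preempted)
C = run_schedule([0, 1, 0], lengths)
print(C)   # [3, 2]

g = [lambda t: t, lambda t: 2 * t]          # nonnegative nondecreasing costs
print(sum(g[j](C[j]) for j in range(2)))    # objective sum_j g_j(C_j) = 7
```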
As we mentioned earlier, we consider the fractional relaxation of this problem, which admits a natural LP formulation. In the fractional problem, only the remaining fraction $\frac{p_j(t)}{p_j}$ of job $j$ at time $t$ contributes to the delay cost of job $j$. Note that the fractional cost gives a lower bound to the integral cost, in which the entire unit fraction of a job incurs the delay cost as long as the job is unfinished. Therefore, the objective cost of the HGFC problem is given by
$$\sum_j \int_{r_j}^{\infty} \frac{p_j(t)}{p_j}\, dg_j(t).$$
Now denoting the rate of processing job $j$ in an infinitesimal interval $[t, t+dt)$ by $u_j(t)$, we have $p_j(t) = p_j - \int_{r_j}^{t} u_j(s)\, ds$. Thus, using integration by parts we can rewrite the above objective function for each job $j$ as
$$\int_{r_j}^{\infty} \frac{p_j(t)}{p_j}\, dg_j(t) = \Big[\frac{p_j(t)\, g_j(t)}{p_j}\Big]_{r_j}^{\infty} + \int_{r_j}^{\infty} \frac{g_j(t)}{p_j}\, u_j(t)\, dt = \int_{r_j}^{\infty} \frac{g_j(t)}{p_j}\, u_j(t)\, dt,$$
where the second equality is by the fact that $p_j(t) = 0$ for $t \ge C_j$ and $g_j(r_j) = 0$. Now, for simplicity and with some abuse of notation, let us redefine $g_j$ to be its scaled version $\frac{g_j}{p_j}$. Then the offline HGFC problem on a single machine is given by the following LP, which is also a fractional relaxation of the integral generalized completion time problem:
$$\min \ \sum_j \int_{r_j}^{\infty} g_j(t)\, u_j(t)\, dt \quad \text{s.t.} \quad \int_{r_j}^{\infty} u_j(t)\, dt \ge p_j \ \ \forall j, \qquad \sum_j u_j(t) \le 1 \ \ \forall t, \qquad u_j(t) \ge 0. \qquad (1)$$
Here the first constraint implies that every job must be fully processed. The second constraint ensures that the machine has unit processing speed at each time instance $t$. Finally, the integral constraints $u_j(t) \in \{0, 1\}$, which would be necessary to ensure that at each time instance at most one job is processed, are replaced by their relaxed versions $u_j(t) \in [0, 1]$. The dual of this LP is given by
$$\max \ \sum_j p_j\, \alpha_j - \int_{0}^{\infty} \beta(t)\, dt \quad \text{s.t.} \quad \alpha_j \le g_j(t) + \beta(t) \ \ \forall j,\ \forall t \ge r_j, \qquad \alpha_j,\ \beta(t) \ge 0. \qquad (5)$$
Finally, our goal in solving the online HGFC problem on a single machine is to devise an online algorithm whose objective cost is competitive with respect to the optimal offline LP cost (1).
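As a sanity check of this primal/dual pair, the following discretized sketch builds one feasible primal solution and one deliberately simple feasible dual solution and verifies weak duality. The instance, the time discretization, and the dual choice $\beta = 0$, $\alpha_j = g_j(r_j)$ are our own illustrative assumptions, not the paper's fitted dual.

```python
# Discrete-time sketch of the primal LP and its dual on a hypothetical instance.
# Primal: rates u[j][t]; dual: (alpha_j, beta_t) is feasible when
# alpha_j <= g_j(t) + beta_t for all t >= r_j.

T = 8
r = [0, 1]
p = [3, 2]
g = [lambda t: 1.0 * t, lambda t: 2.0 * t]   # scaled cost functions g_j(t)

# a feasible primal solution: one job per slot, all jobs fully processed
u = [[0.0] * T for _ in range(2)]
for t, j in zip(range(T), [0, 1, 1, 0, 0]):
    u[j][t] = 1.0
assert all(sum(u[j][t] for j in range(2)) <= 1 for t in range(T))  # unit speed
assert all(sum(u[j]) == p[j] for j in range(2))                    # completion
primal = sum(g[j](t) * u[j][t] for j in range(2) for t in range(T))

# a simple feasible dual: beta = 0 and alpha_j = min_{t >= r_j} g_j(t) = g_j(r_j)
alpha = [g[j](r[j]) for j in range(2)]
beta = [0.0] * T
assert all(alpha[j] <= g[j](t) + beta[t] for j in range(2) for t in range(r[j], T))
dual = sum(p[j] * alpha[j] for j in range(2)) - sum(beta)
assert dual <= primal   # weak duality
print(primal, dual)
```

Dual fitting sharpens exactly this gap: instead of the trivial dual above, one constructs $(\alpha, \beta)$ whose objective stays within a constant factor of the primal cost.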
III A Special Online HGFC Problem on a Single Machine
In this section we consider the HGFC problem on a single machine with simplified cost functions of the form $g_j(t) = w_j g(t)$, where $w_j$ is a constant weight reflecting the importance of job $j$, and $g$ is a general nonnegative nondecreasing function. Again with some abuse of notation, the scaled cost function is given by $g_j(t) = \delta_j g(t)$, where $\delta_j = \frac{w_j}{p_j}$ denotes the density of job $j$. Rewriting (1) and (5) for this special class of cost functions, we have,
Next, in order to obtain an optimal online schedule for this special case of the HGFC problem, we generate an integral feasible solution (i.e., $u_j(t) \in \{0, 1\}$) to the primal LP (8) together with a feasible dual solution of the same objective cost. The integral feasible solution is simply obtained by following the highest density first (HDF) schedule: among all the alive jobs on the machine, process the one with the highest density. More precisely, denoting the set of alive jobs at time $t$ by $A(t) = \{j : r_j \le t,\ p_j(t) > 0\}$,
the HDF rule schedules the job $j^*(t) \in \arg\max_{j \in A(t)} \delta_j$ at time $t$ (ties are broken arbitrarily).
Applying the HDF rule on an original instance of online job scheduling with $n$ jobs, let $I_{j1}, \ldots, I_{jk_j}$ be the disjoint time intervals in which job $j$ is being processed. We define the splitted instance to be the one with $\sum_j k_j$ jobs, where all the $k_j$ jobs (subjobs in the original instance) associated with job $j$ have the same density $\delta_j$, lengths $|I_{j1}|, \ldots, |I_{jk_j}|$, and release times equal to the left endpoints of the corresponding intervals.
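The HDF rule and the construction of the splitted instance can be sketched in discrete time as follows; the instance is hypothetical, and each maximal processing interval of a job becomes one subjob.

```python
# Sketch of the HDF rule and the resulting "splitted" instance.
# delta_j = w_j / p_j is the density of job j.

def hdf_intervals(r, p, w, T):
    """Run HDF slot by slot; return per-job lists of maximal processing intervals."""
    delta = [w[j] / p[j] for j in range(len(p))]
    remaining = list(p)
    intervals = {j: [] for j in range(len(p))}
    for t in range(T):
        alive = [j for j in range(len(p)) if r[j] <= t and remaining[j] > 0]
        if not alive:
            continue
        j = max(alive, key=lambda i: delta[i])   # highest density first
        remaining[j] -= 1
        if intervals[j] and intervals[j][-1][1] == t:
            intervals[j][-1] = (intervals[j][-1][0], t + 1)  # extend interval
        else:
            intervals[j].append((t, t + 1))      # a new subjob begins
    return intervals

r, p, w = [0, 1], [3, 1], [1.0, 4.0]
iv = hdf_intervals(r, p, w, T=6)
print(iv)   # job 0 is preempted at t = 1 by the denser job 1
# each maximal interval becomes one subjob of the splitted instance, released
# at the start of its interval and with length equal to its width
```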
The motivation for introducing the splitted instance is that we do not need to worry about time instances at which a job is interrupted/resumed due to arrival/completion of newly released jobs. Therefore, instead of tracking the preemption times of a job, we can treat each subjob separately as a new job. This allows us to easily generate a dual optimal solution for the splitted instance (Lemma 1) which then can be converted into an optimal dual solution for the original instance (Lemma 2).
Lemma 1: HDF is an optimal schedule for the splitted instance whose cost, denoted by OPT, is equal to the cost of HDF applied on the original online instance.
Proof: In the splitted instance, each new job (subjob in the original instance) is released right after completion of the previous one. The order in which these jobs are released is exactly the one dictated by HDF. Therefore, any work-preserving schedule (and in particular the HDF rule) which uses the full unit processing power of the machine is optimal for the splitted instance. As HDF performs identically on both the splitted and original instances (in fact, this was the way we defined the splitted instance), the cost of HDF on both instances equals OPT. Furthermore, we can fully characterize the optimal dual solution for the splitted instance in closed form. To see this, let us relabel all the jobs in increasing order of their processing intervals by $1, \ldots, N$. Then,
form an optimal dual solution to the splitted instance.
This is because, by the definition of the dual variables in (12), the dual constraint is satisfied with equality for the entire time period during which a job is scheduled. Specifically, for any time $t$ in the processing interval of job $k$, using the definition of $\beta(t)$ in (12) we can write,
Thus the dual constraint is tight whenever the corresponding primal variable is positive, which shows that the dual variables in (12), together with the integral primal solution generated by HDF, produce an optimal pair of primal-dual solutions to the splitted instance.
We refer to the diagrams of the optimal dual solutions (12) in the splitted instance as the $\alpha$-plot and the $\beta$-plot, respectively. More precisely, in both plots the horizontal axis represents the time horizon, partitioned into the time intervals in which HDF processes subjobs. In the $\alpha$-plot we draw a horizontal line segment at height $\alpha_k$ for subjob $k$ within its processing interval. In the $\beta$-plot we simply plot $\beta(t)$ as a function of time. We refer to the line segments of the subjobs associated with job $j$ as $j$-steps (see Example 1).
Next we describe a simple process to convert the optimal dual solutions of the splitted instance into optimal dual solutions for the original instance with the same objective cost. This process is summarized in Algorithm 1.
Consider an original instance of online job scheduling on a single machine with given lengths, release times, and densities. Moreover, assume that $g(t) = t$, so that the HGFC problem reduces to the standard fractional completion time problem:
Now applying HDF on this instance, we obtain a splitted instance with 7 subjobs, each scheduled over its own processing interval. These steps for the splitted instance are illustrated by blue line segments in the $\alpha$-plot in Figure 1. The corresponding optimal $\beta$-plot for the splitted instance is given by the continuous blue curve in Figure 1, which is obtained from (12). Now, moving backward in time over the steps, the last step of each job is set as the reference step for that job; by Algorithm 1 these steps do not need to be lowered. An earlier step of a job, however, may be lowered before being set as the reference height for that job, in which case all the steps before it are lowered by the same amount in both the $\alpha$-plot and the $\beta$-plot. Continuing in this manner and processing all the remaining steps, we eventually obtain the red steps in the $\alpha$-plot and the red piecewise curves in the $\beta$-plot, which correspond to the optimal dual solutions of the original instance. Note that at the end of this process all the steps corresponding to a job are set to the same reference height.
Lemma 2: The reference heights and the values obtained from the $\alpha$- and $\beta$-plots at the end of Algorithm 1 form feasible dual solutions to the original online instance whose dual cost equals the optimal cost OPT of the splitted instance.
Proof: First we note that updating the plots does not change the dual objective value. To see this, assume that at the current step we lower both plots for all times prior to the current time $t$. Then the first term in the dual objective function of the splitted instance (14) decreases by exactly the size of the area shrunk by lowering the height of all the steps prior to time $t$. As we also lower the $\beta$-plot by the same amount before time $t$, the second (subtracted) term in the dual objective decreases by the same amount. Thus the overall effect of the updates on the dual objective (14) at each iteration is zero. This implies that the dual objective value at the end of Algorithm 1 is the same as its initial value, i.e., OPT.
Next we show that Algorithm 1 terminates properly with a feasible dual solution. Otherwise, by contradiction, let $k$ be the first step whose update violates at least one of the dual constraints. Now let $k'$ be the first step of the same job on the right side of $k$, and consider the time at which $k'$ was set to its reference height. Define $h$ to be the height difference between $k$ and $k'$ at that time, and note that the updates prior to that time do not change the relative height difference between $k$ and $k'$. Moreover, during the interval between $k$ and $k'$, whenever a subjob is scheduled, from (12) the first term in the height difference drops at a negative rate while the second term increases at a positive rate. As all the intermediate steps have higher density than that of the job of $k$ (otherwise, by the HDF rule, the subjob $k'$ would have been processed earlier), we can write,
In other words, the height difference $h$ is larger than the total height decrease that the $\beta$-plot incurs over this interval.
To derive a contradiction, it suffices to show that this height difference is no less than the total height decrements incurred by the step updates during the interval. Toward this aim, let us partition the interval into subintervals as we move backward over it. Each subinterval starts with the first subjob outside of the previous subinterval, and it is just long enough to contain all the other steps of the same job. By this partitioning and the HDF rule, it is easy to see that the first step of each subinterval must be set as a reference for its job. Now, by our choice of $k$, we know that all the steps in each subinterval can be properly set to their reference heights at the time of update. Thus, using a similar argument as above, the total height decrement due to step updates in a subinterval (except the first step, which is a reference step) equals the total height decrease that the $\beta$-plot incurs over that subinterval, i.e.,
Finally, we account for the total height reduction due to reference updates. We do this using a charging argument, where we charge height decrements due to reference updates to the subintervals; the total height reduction due to reference updates is then the total charge over all the subintervals. For this purpose, consider the longest chain of subintervals in which each element of the chain is the first subinterval of lower density on the left side of the previous one. We then charge each subinterval in the chain by the total variation of $g$ over it. As is shown in Lemma 7, the height decrement due to reference updates is bounded above by the total charge. Thus the total height reduction can be bounded by
where the second inequality holds because the densities along the chain are decreasing. This contradiction establishes the dual feasibility of the generated solution at the end of Algorithm 1. Finally, let $(\alpha, \beta)$ denote the values of the plots at the end of Algorithm 1. Since at the end of the algorithm all the steps of each job are properly set to the same reference height, the $\alpha$-value of each job is well defined. This shows that $\alpha$ and $\beta$ form feasible dual solutions to the original instance.
HDF is an optimal online algorithm for the HGFC problem on a single machine with special cost functions $g_j(t) = w_j g(t)$, where $g$ is an arbitrary nonnegative nondecreasing function.
Proof: Consider the splitted instance obtained from applying HDF on the original online instance with $n$ jobs. From Lemma 1, HDF is an optimal schedule for the splitted instance, whose optimal cost OPT equals the cost of HDF on the original online instance. Let $(\alpha, \beta)$ be the optimal dual solution to the splitted instance. By Lemma 2, we can convert it to a feasible dual solution for the original instance with the same cost OPT. This shows that the solution generated by HDF, together with the converted dual, forms a feasible primal-dual pair for the original instance with the same objective cost OPT. By LP duality, this implies that HDF is an optimal schedule.
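As a numerical sanity check of this optimality claim (not a substitute for the proof), the following sketch compares HDF against an exhaustive search over all schedules of a tiny discretized instance with scaled costs $g_j(t) = \delta_j g(t)$; all data are hypothetical.

```python
# HDF versus brute-force optimum on a tiny discrete instance with
# nondecreasing g: the two costs should coincide.
from itertools import product

r, p, w = [0, 1, 0], [2, 1, 3], [1.0, 2.0, 3.0]
delta = [w[j] / p[j] for j in range(3)]        # densities 0.5, 2.0, 1.0
g = lambda t: t + 1.0                          # nondecreasing cost function
T = sum(p)                                     # enough slots to finish all jobs

def cost(schedule):
    """Scaled cost sum_t delta_{s(t)} g(t) of a schedule, or None if infeasible."""
    remaining = list(p)
    total = 0.0
    for t, j in enumerate(schedule):
        if j is None:
            continue
        if r[j] > t or remaining[j] == 0:      # not released yet, or done
            return None
        remaining[j] -= 1
        total += delta[j] * g(t)
    return total if all(x == 0 for x in remaining) else None

# build the HDF schedule slot by slot
remaining, hdf = list(p), []
for t in range(T):
    alive = [j for j in range(3) if r[j] <= t and remaining[j] > 0]
    j = max(alive, key=lambda i: delta[i]) if alive else None
    if j is not None:
        remaining[j] -= 1
    hdf.append(j)

best = min(c for s in product([None, 0, 1, 2], repeat=T)
           if (c := cost(list(s))) is not None)
print(cost(hdf), best)   # the two costs coincide
```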
As we saw above, Algorithm 1 provides a simple update rule for generating optimal dual variables in the special case of cost functions $g_j(t) = w_j g(t)$. Unfortunately, this approach quickly becomes intractable when we work with general heterogeneous cost functions $g_j$. However, a closer look at the structure of Algorithm 1 shows that it is merely a dynamic program that starts from a dual solution (namely, the optimal dual solution of the splitted instance) and moves backward in time to fit it to the dual of the original instance. This suggests formulating the HGFC problem as an optimal control problem, in which the Hamilton-Jacobi-Bellman (HJB) equation plays the role of the dynamic program above and tells us how to fit our dual variables as closely as possible to the dual of the HGFC problem. However, a major issue here is that in the online HGFC problem jobs may arrive over time, which would require a hybrid optimal control formulation. As working with hybrid systems makes the analysis quite complicated, we instead overcome this issue using the insights obtained from the structure of the optimal $\beta$-curve generated by Algorithm 1 (see the red discontinuous curve in the $\beta$-plot of Figure 1). It can be seen that the optimal $\beta$-plot has discontinuous jumps whenever a new job arrives in the system. To mimic this behavior, we start to solve the offline optimal control problem, and whenever a new job arrives we simply update the state of the system and solve the new offline optimal control problem to update our variables. In this fashion, one only needs to iteratively solve offline optimal control problems, as described next.
IV An Optimal Control Formulation for the Offline HGFC Problem with Identical Release Times
In this section, we cast the offline HGFC problem on a single machine with identical release times as an optimal control problem. This gives us a powerful tool to characterize the structure of the optimal offline schedule and to set the dual variables. Consider the time $\tau$ at which a new job $k$ is released to the system, and let $A(\tau)$ be the set of currently alive jobs (excluding job $k$). If we assume that no new jobs will be released in the future, then an optimal schedule must solve an offline instance with the set of jobs $A(\tau) \cup \{k\}$ and identical release times $\tau$, where the length of each job $j$ is given by its residual length $p_j(\tau)$ at time $\tau$ (note that for job $k$ we have $p_k(\tau) = p_k$, as it is released at time $\tau$). Since the optimal cost of this offline instance depends on the residual lengths of the alive jobs, we shall refer to them as the states of the system at time $\tau$. More precisely, we define the state of job $j$ at time $t \ge \tau$ to be its residual length $x_j(t) = p_j(t)$, and the state vector to be $x(t) = (x_1(t), \ldots, x_n(t))$, where $n$ is the number of alive jobs at time $\tau$ (including job $k$). Note that since we are looking at the offline HGFC problem at time $\tau$ assuming no future arrivals, the dimension of the state vector does not change and equals the number of alive jobs at time $\tau$.
Next we define the control input at time $t$ to be $u(t) = (u_1(t), \ldots, u_n(t))$, where $u_j(t)$ is the rate of processing job $j$ at time $t$. Therefore, $\dot{x}_j(t) = -u_j(t)$, or equivalently $x_j(t) = x_j(\tau) - \int_{\tau}^{t} u_j(s)\, ds$, with the initial condition $x_j(\tau) = p_j(\tau)$. Rewriting these equations in vector form, we have $\dot{x}(t) = -u(t)$.
Moreover, due to the second primal constraint in (1), at any time $t$ the control vector must belong to the simplex $\Delta = \{u \ge 0 : \sum_j u_j \le 1\}$. Therefore, an equivalent optimal control formulation for (1), given identical release times $\tau$ and initial state $x(\tau)$, is given by
$$\min_{u(\cdot)} \ \int_{\tau}^{\infty} \sum_j g_j(t)\, u_j(t)\, dt \quad \text{s.t.} \quad \dot{x}(t) = -u(t), \quad u(t) \in \Delta, \quad x(\tau) \ \text{given},$$
where, as before, $g_j$ refers to the original cost function scaled by $\frac{1}{p_j}$. Note that for any $u(t) \in \Delta$ the loss function $\sum_j g_j(t)\, u_j(t)$ is nonnegative. As the $g_j$'s can only increase over time, this implies that any optimal control must finish the jobs in the finite time interval $[\tau, T]$, where $T = \tau + \sum_j x_j(\tau)$. So, without loss of generality, we can replace the upper limit in the integral by $T$, which gives us the following equivalent optimal control problem:
$$\min_{u(\cdot)} \ \int_{\tau}^{T} \sum_j g_j(t)\, u_j(t)\, dt \quad \text{s.t.} \quad \dot{x}(t) = -u(t), \quad u(t) \in \Delta, \quad x(\tau) \ \text{given}, \quad x(T) = 0. \qquad (21)$$
It is worth noting that we do not need to add nonnegativity constraints $x_j(t) \ge 0$ to (21), as they implicitly follow from the terminal condition. This is because if $x_j(t) < 0$ for some $j$ and $t$, then, as $\dot{x}_j = -u_j \le 0$, the state can only decrease further and remains negative forever, violating the terminal condition $x_j(T) = 0$. So specifying that $x_j(T) = 0$ already implies that $x_j(t) \ge 0$ for all $t \in [\tau, T]$.
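The state dynamics and the finite-horizon argument above can be illustrated with a small Euler discretization; the step size and instance values are hypothetical.

```python
# Euler-discretized sketch of the dynamics x_dot = -u with u(t) in the simplex:
# under a work-conserving control, all jobs finish by T = tau + sum_j x_j(tau)
# (here tau = 0), matching the finite-horizon reformulation.

dt = 0.01
x = [1.5, 0.5, 1.0]            # residual lengths at the common release time
T = sum(x)                     # latest possible completion time at unit speed
t = 0.0
while any(xj > 1e-9 for xj in x):
    # a valid control: full rate on the first unfinished job, i.e. u is a
    # vertex of the simplex, so sum_j u_j <= 1 holds and no x_j goes negative
    j = next(i for i, xj in enumerate(x) if xj > 1e-9)
    x[j] = max(0.0, x[j] - dt)  # x_j' = -u_j with u_j = 1
    t += dt
print(round(t, 6), T)
```

Any other simplex-valued control that keeps the machine busy finishes at the same time $T$, since the total processing rate is exactly one.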
Using integration by parts, one can write an equivalent expression for the objective of (21) in terms of the state variables as
$$\sum_j g_j(\tau)\, x_j(\tau) + \int_{\tau}^{T} \sum_j \dot{g}_j(t)\, x_j(t)\, dt.$$
IV-A Solving the offline HGFC using the minimum principle
Next we proceed to solve the optimal control problem (21) using the minimum principle. The Hamiltonian for (21) with a costate vector $\lambda(t)$ is given by $H(x, u, \lambda, t) = \sum_j \left(g_j(t) - \lambda_j(t)\right) u_j(t)$. Writing the minimum principle optimality conditions, we obtain,
Therefore, for every $t \in [\tau, T]$, the minimum principle optimality conditions for (21) with free terminal time $T$ and fixed endpoints are given by,
with boundary conditions $x^*(\tau) = x(\tau)$ and $x^*(T) = 0$, and, due to the free terminal time, $H(x^*(T), u^*(T), \lambda^*(T), T) = 0$.
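Under the notation assumed in our reconstruction (state $x_j$, control $u_j$, costate $\lambda_j$, simplex $\Delta$), the minimum-principle conditions referenced above can be sketched as:

```latex
% Sketch of the minimum-principle conditions for (21), with dynamics
% \dot{x} = -u and control constraint u(t) \in \Delta:
\begin{align*}
  H(x,u,\lambda,t) &= \sum_{j} \bigl(g_j(t) - \lambda_j(t)\bigr)\, u_j(t), \\
  \dot{x}^*_j(t)   &= -u^*_j(t), \qquad
  \dot{\lambda}^*_j(t) = -\frac{\partial H}{\partial x_j} = 0
      \quad \text{(the costates are constant in time)}, \\
  u^*(t)           &\in \operatorname*{arg\,min}_{u \in \Delta}\;
      \sum_j \bigl(g_j(t) - \lambda_j(t)\bigr)\, u_j .
\end{align*}
```

In particular, since the Hamiltonian is linear in $u$, the minimizer is attained at a vertex of the simplex: at each time the machine devotes its full unit rate to a job with the smallest value of $g_j(t) - \lambda_j$.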