Scheduling a set of jobs over a collection of machines is a fundamental problem that needs to be solved millions of times a day in various computing platforms: in operating systems, in large data clusters, and in data centers. Along with makespan, flow-time, which measures the length of time a job spends in a system before completing, is arguably the most important metric for measuring the performance of a scheduling algorithm. In recent years, there has been remarkable progress in understanding flow-time related objective functions in diverse settings such as unrelated machines scheduling [22, 27, 5, 31, 7], broadcast scheduling [12, 28, 11], and multi-dimensional scheduling [32, 30], to name a few.
Yet, our understanding of flow-time based objective functions is mostly limited to scenarios where jobs have simple structures; in particular, each job is a single self-contained entity. On the other hand, in almost all real-world applications, jobs have more complex structures. Consider the MapReduce model, for example. Here, each job consists of a set of Map tasks and a set of Reduce tasks. Reduce tasks cannot be processed unless the Map tasks are completely processed (in some MapReduce applications, Reduce tasks can begin after the completion of a subset of Map tasks). A MapReduce job is complete only when all Map and Reduce tasks are completed. Motivated by these considerations, in this paper we consider two classical scheduling models that capture more complex job structures: 1) concurrent open-shop scheduling (COSSP), and 2) precedence constrained scheduling (PCSP). Our main reason to study these problems specifically comes from their relevance to two scheduling problems that have gained importance in the context of data centers: coflow scheduling and DAG scheduling. We discuss how these problems relate to COSSP and PCSP in Section 1.3.
The objective function we consider in this paper is minimizing the sum of general delay costs of jobs, first introduced in an influential paper by Bansal and Pruhs in the context of single machine scheduling. In this objective, for each job $j$ we are given a non-decreasing function $g_j$, where $g_j(t)$ gives the cost of completing the job at time $t$. The goal is to minimize $\sum_j g_j(C_j)$, where $C_j$ is the completion time of job $j$. A desirable aspect of general delay cost functions is that they capture several widely studied flow-time and completion time based objective functions.
Minimizing the sum of weighted flow-times of jobs. This is captured by the function $g_j(t) = w_j \cdot (t - r_j)$, where $w_j$ is the weight and $r_j$ the release time of job $j$.
Minimizing the sum of weighted $k$-th powers of flow-times of jobs. This is captured by the function $g_j(t) = w_j \cdot (t - r_j)^k$.
Minimizing the sum of weighted tardiness. This is captured by the function $g_j(t) = w_j \cdot \max(0, t - d_j)$, where $d_j$ is the deadline of job $j$.
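These three cost functions can be made concrete with a short sketch; all job parameters below (weight `w`, release time `r`, deadline `d`, exponent `k`) are hypothetical example values.

```python
# Sketch: the three classical objectives as general delay cost functions
# g_j(t). All job parameters here are hypothetical example values.

def weighted_flow_time(w, r):
    # g_j(t) = w * (t - r): weighted flow-time
    return lambda t: w * (t - r)

def weighted_flow_time_power(w, r, k):
    # g_j(t) = w * (t - r)**k: weighted k-th power of flow-time
    return lambda t: w * (t - r) ** k

def weighted_tardiness(w, d):
    # g_j(t) = w * max(0, t - d): weighted tardiness
    return lambda t: w * max(0, t - d)

g = weighted_tardiness(w=2, d=5)
print(g(4), g(9))  # 0 8  (no cost before the deadline; 2*(9-5)=8 after)
```

Note that each of these functions is non-decreasing in the completion time, as the general framework requires.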
In this paper, we design approximation algorithms for minimizing the sum of general delay costs of jobs for the concurrent open-shop scheduling and the precedence constrained scheduling problems.
1.1 Concurrent Open-shop Scheduling Problem (COSSP)
In COSSP, we are given a set of $m$ machines and a set of $n$ jobs. Each job $j$ has a release time $r_j$. The main feature of COSSP is that a job $j$ consists of $m$ operations, one for each machine $i$; the operation of job $j$ on machine $i$ needs $p_{ij}$ units of processing on that machine. We allow operations to have zero processing lengths. Throughout the paper, we assume without loss of generality that all our input parameters are integers. A job is complete only when all its operations are complete. That is, if $C_j$ denotes the completion time of job $j$, then $p_{ij}$ units of its operation must be processed in the interval $[r_j, C_j]$ on machine $i$, for every $i$. In the concurrent open-shop scheduling model, multiple operations of the same job can be processed simultaneously across different machines.
The problem has a long history due to its applications in manufacturing, automobile and airplane maintenance and repair, etc., and has been extensively studied in both the operations research and approximation algorithms communities [3, 16, 23, 43, 52, 51, 39, 6]. As minimizing makespan in the COSSP model is trivial, much of the research has focused on the objective of minimizing the total weighted completion times of jobs. The problem was first considered by Ahmadi and Bagchi, who showed its NP-hardness. Later, several groups of authors, Chen and Hall, Garg et al., Leung et al., and Mastrolilli et al., designed 2-approximation algorithms for the problem. Under the Unique Games Conjecture, Bansal and Khot showed that this approximation factor cannot be improved.
Garg, Kumar, and Pandit studied the more difficult objective of minimizing the total flow-time of jobs, and proved a hardness of approximation result via a reduction from the set cover problem. However, they did not give any approximation algorithm for the problem, and left it as an open problem. To the best of our knowledge, the problem has remained open ever since. In this paper, we make progress on this problem.
Let $P$ denote the ratio of the maximum processing length to the minimum non-zero processing length among all the operations; that is, $P = \max_{i,j} p_{ij} / \min_{i,j : p_{ij} > 0} p_{ij}$.
For the objective of minimizing the sum of general delay cost functions of jobs in the concurrent open-shop scheduling model, there exists a polynomial time approximation algorithm.
We obtain the above result by generalizing the algorithm of Bansal and Pruhs. Note that when $m = 1$, our result matches the best known polynomial time result for single machine scheduling with general delay costs. Recently, for the special case of total weighted flow-time on a single machine, a constant factor approximation algorithm was obtained by Batra, Garg, and Kumar. However, the running time of their algorithm is pseudo-polynomial. Since their approach is very different from that of Bansal and Pruhs, our result does not generalize theirs.
As we discussed earlier, the general delay cost functions capture several widely studied performance metrics. Thus, we get:
There is a polynomial time approximation algorithm in the concurrent open-shop scheduling model for the following objective functions: 1) minimizing the sum of weighted flow-times of jobs; 2) minimizing the weighted $\ell_k$-norms of flow-times of jobs (with a correspondingly adjusted approximation factor); 3) minimizing the sum of weighted tardiness of jobs.
1.2 Precedence Constrained Scheduling Problem (PCSP)
More complex forms of job structures are captured by the precedence constrained scheduling problem (PCSP), another problem with a long history dating back to the seminal work of Graham. Here, we have a set of $m$ identical machines and a set of $n$ jobs; each job $j$ has a processing length $p_j$ and a release time $r_j$. Each job must be scheduled on exactly one of the machines. The important feature of the problem is that there are precedence constraints between jobs that capture the computational dependencies across jobs. The precedence constraints are given by a partial order "$\prec$", where a constraint $j \prec j'$ requires that job $j'$ can only start after job $j$ is completed. Our goal is to schedule (preemptively) each job on exactly one machine so as to minimize $\sum_j g_j(C_j)$.
Precedence constrained scheduling on identical machines to minimize the makespan objective is perhaps the most well-known problem in scheduling theory. Already in 1966, Graham showed that list scheduling gives a 2-approximation algorithm for the problem. Since then, several attempts have been made to improve the approximation factor [37, 21]. However, Svensson showed that the problem does not admit a $(2-\epsilon)$-approximation under a strong version of the Unique Games Conjecture introduced by Bansal and Khot. An unconditional hardness of $4/3 - \epsilon$ is also known due to Lenstra and Rinnooy Kan. Recently, Levey and Rothvoss showed that it is possible to overcome these lower bounds when the number of machines $m$ is fixed: an LP-hierarchy lift of the time-index LP with a slightly super poly-logarithmic number of rounds provides a $(1+\epsilon)$-approximation to the problem.
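For intuition, Graham's list scheduling can be sketched in a few lines. The toy simulation below (with hypothetical job data, and assuming the precedence relation is acyclic) greedily assigns any available job, i.e. one whose predecessors are all finished, to an idle machine.

```python
import heapq

def list_schedule(p, prec, m):
    """Graham's list scheduling on m identical machines (non-preemptive).
    p: {job: processing_time}; prec: list of (a, b) pairs, a before b.
    Returns completion times; the makespan is at most 2x the optimum."""
    preds = {j: set() for j in p}
    succs = {j: [] for j in p}
    for a, b in prec:
        preds[b].add(a)
        succs[a].append(b)
    ready = [j for j in sorted(p) if not preds[j]]  # the priority list
    running = []                                    # heap of (finish, job)
    free, now, C = m, 0, {}
    while len(C) < len(p):
        while ready and free > 0:    # fill idle machines with available jobs
            j = ready.pop(0)
            heapq.heappush(running, (now + p[j], j))
            free -= 1
        now, j = heapq.heappop(running)  # advance to the next completion
        C[j] = now
        free += 1
        for k in succs[j]:               # newly available successors
            preds[k].discard(j)
            if not preds[k]:
                ready.append(k)
    return C

C = list_schedule({'a': 3, 'b': 2, 'c': 2}, [('a', 'c')], 2)
print(C['c'])  # 5: c waits for a (done at 3), then runs for 2
```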
Another problem that is extensively studied in the precedence constrained scheduling model is minimizing the total weighted completion times of jobs. Note that this problem strictly generalizes the makespan problem; hence all the lower bounds extend to this problem as well. The current best approximation factor of 3.387 is achieved by a very recent result of Li, which builds on an earlier approximation algorithm due to Munier, Queyranne, and Schulz.
In a recent work, Agrawal et al. initiated the study of the total flow-time objective in the DAG (directed acyclic graph) parallelizability model. In this model, each job is a DAG, and a job completes only when all the nodes in its DAG are completed. For this problem, they gave greedy online algorithms that are constant competitive when given a small amount of speed augmentation. The DAG parallelizability model is a special case of PCSP. However, as there are no dependencies between jobs, and individual nodes of a DAG do not contribute to the total flow-time unlike in PCSP, the complexity of the problem is significantly different from that of PCSP. For example, when there is only one machine, the DAG structure of individual jobs does not change the cost of the optimal solution, and hence the problem reduces to the standard single machine scheduling problem. Therefore, scheduling jobs using Shortest Remaining Processing Time (SRPT), where the processing length of a DAG is its total work across all its nodes, is an optimal algorithm. (Within a DAG, the nodes can be processed in any order respecting the precedence constraints.)
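The single-machine observation above can be sketched directly: treat each DAG as one job whose length is its total work, and run preemptive SRPT. The instance data below is hypothetical.

```python
import heapq

def srpt_total_flow_time(jobs):
    """Preemptive SRPT on one machine. jobs: list of (release, work) pairs,
    where work is a DAG's total work across its nodes. Returns total flow-time."""
    jobs = sorted(jobs)              # by release time
    t, i, total = 0, 0, 0
    active = []                      # heap of [remaining_work, release]
    while i < len(jobs) or active:
        if not active:               # machine idle: jump to the next release
            t = max(t, jobs[i][0])
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(active, [jobs[i][1], jobs[i][0]])
            i += 1
        rem, r = heapq.heappop(active)
        # run the shortest remaining job until it ends or the next release
        horizon = jobs[i][0] if i < len(jobs) else t + rem
        run = min(rem, horizon - t)
        t += run
        if run < rem:
            heapq.heappush(active, [rem - run, r])  # preempted
        else:
            total += t - r           # completed: add its flow-time
    return total

print(srpt_total_flow_time([(0, 3), (1, 1)]))  # 5
```

In the example, the short job released at time 1 preempts the long one and finishes at time 2 (flow-time 1); the long job finishes at time 4 (flow-time 4).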
On the other hand, we show a somewhat surprising result for the PCSP problem: minimizing the total flow-time of jobs does not admit any reasonable approximation factor even on a single machine. This is in sharp contrast to the makespan and sum of weighted completion times objectives, which admit $O(1)$-approximation algorithms even on multiple machines. Our hardness proof is based on a recent breakthrough work of Manurangsi on approximating the Densest-k-Subgraph (DkS) problem.
In the precedence constrained scheduling model, the objective of minimizing the total flow-times of jobs on a single machine does not admit a polynomial time algorithm with any reasonable approximation factor, assuming the Exponential Time Hypothesis (ETH).
To circumvent this hardness result, we study the problem in the speed augmentation model, which can also be thought of as a bi-criteria analysis. In the speed augmentation model, each machine is given some small extra speed compared to the optimal solution. The speed augmentation model was introduced in the seminal work of Kalyanasundaram and Pruhs to analyze the effectiveness of various scheduling heuristics in the context of online algorithms. However, the model has also been used in the offline setting to overcome strong lower bounds on the approximability of various scheduling problems; see, for example, results on non-preemptive flow-time scheduling that use speed augmentation to obtain good approximation ratios [10, 33].
Our second main result is an approximation algorithm for the problem in the speed augmentation model. Previously, no results were known for flow-time related objective functions for PCSP.
For the objective of minimizing the sum of general delay costs of jobs in the precedence constrained scheduling model on identical machines, there exists a polynomial time approximation algorithm using speed augmentation. Furthermore, the speed augmentation required to achieve a small approximation factor has to be at least the best approximation factor of the makespan minimization problem. This lower bound on speed augmentation extends to any machine environment, such as the related and unrelated machine environments.
We give the proofs of the above theorems in Section 3.
1.3 Applications of COSSP and PCSP in Data Center Scheduling
Besides being fundamental optimization problems, COSSP and PCSP models are very closely related to the scheduling problems that arise in the context of data centers. We briefly describe the relevance of COSSP and PCSP to scheduling in data centers.
The COSSP problem is a special case of the coflow scheduling abstraction introduced in a very influential work of Chowdhury and Stoica [17, 19] in the context of scheduling data flows. They defined a coflow as a collection of parallel flows with a common performance goal. Their main motivation for introducing coflows was that in big-data systems, MapReduce systems for example, communication is structured and takes place across machines in successive computational stages. In most cases, the communication stage of a job cannot finish until all of its flows have completed. For example, in MapReduce jobs, a Reduce task cannot begin until all the Map tasks finish. Although the coflow abstraction was introduced to model scheduling flows in big-data networks, it can also be applied to job scheduling in clusters.
Therefore, we describe a slightly more general version of the coflow abstraction. Here, we are given a set of $m$ machines (or resources). Each job $j$ consists of a set of operations. Associated with each operation is a demand vector in $\{0, 1\}^m$, which indicates the subset of machines or resources the operation requires. An operation can be executed only when all the machines in its demand vector are allocated to it. Moreover, for each operation we are also given a processing length. The goal is to schedule all operations such that at any time instant the capacity constraints on machines are not violated: that is, each machine is allocated to at most one operation. A job completes only when all its operations have finished.
The coflow problem studied by Chowdhury and Stoica [17, 19] corresponds to the case where the demand vectors of all operations have exactly two 1s; that is, every operation needs two machines to execute. The machines typically correspond to the input and output ports of a communication link. On the other hand, if the demand vector of each operation has exactly one non-zero entry, then coflow scheduling is equivalent to COSSP.
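A toy model of this abstraction (with hypothetical names and data): an operation holds all machines in its demand vector for its whole execution, and a schedule is feasible only if no machine is demanded by two concurrently running operations.

```python
def feasible(schedule):
    """schedule: list of (start, end, demanded_machines) per operation,
    in integer time. Checks that no machine is demanded by two operations
    running at the same instant."""
    horizon = max(end for _, end, _ in schedule)
    for t in range(horizon):
        used = set()
        for start, end, machines in schedule:
            if start <= t < end:
                if used & set(machines):   # capacity constraint violated
                    return False
                used |= set(machines)
    return True

# Two port-to-port flows (demand vectors with exactly two 1s, as in coflow):
ops = [(0, 2, [0, 1]), (2, 3, [1, 2])]
print(feasible(ops))  # True: the two operations never overlap on a port
```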
In the past few years, coflow scheduling has attracted a lot of research in both the theory and systems communities. In practice, several heuristics are known to perform well for the problem [17, 19, 18, 25]. The theoretical study of coflow scheduling was initiated by Qiu, Stein, and Zhong. By exploiting its connections to COSSP, they designed a constant factor approximation algorithm for the objective of minimizing the total weighted completion times of jobs. Building on this work, better approximation algorithms were designed in [36, 4, 48]. Unfortunately, the techniques developed in these works do not seem to extend to flow-time related objectives.
Another problem that has attracted a lot of research in practice is the DAG scheduling problem. In this problem, we are given a set of machines (or clusters) and a set of jobs. Each job has a weight that captures its priority, and is represented by a directed acyclic graph (DAG). Each node of a DAG represents a task, a single unit of computation, that needs to be executed on a single machine. Each task has a release time and a processing length. An edge $(u, v)$ in the DAG indicates that task $v$ depends on task $u$, and cannot begin its execution unless $u$ finishes. The goal is to schedule the jobs/DAGs on the machines so as to minimize the total weighted flow-time of jobs. Interestingly, this is one of the models of job scheduling that has been adopted by Yarn, the resource manager of Hadoop. (Hadoop is a popular implementation of the MapReduce framework.) Because of this, DAG scheduling has been a very active area of research in practice; see [26, 25] and the references therein for more details.
It is not hard to see that the DAG scheduling problem is a special case of the precedence constrained scheduling problem: the union of the individual DAGs of the jobs can be considered as one DAG, with appropriately defined release times and weights for each node. Furthermore, if jobs have no weights, and the release times of all tasks in the same DAG are equal, then the DAG scheduling model described above is the same as the DAG parallelizability model studied by Agrawal et al., who designed online algorithms for the problem that are constant competitive under speed augmentation. On the other hand, our approximation algorithm for PCSP gives an approximation algorithm for the DAG scheduling problem even with weights and arbitrary release times for individual tasks.
1.4 Overview of the Algorithms and Techniques
Both of our algorithms are based on rounding linear programming relaxations of the problems. However, the individual techniques are quite different, and hence we discuss them separately.
Open-shop Scheduling Problem
Our algorithm for COSSP is based on the geometric view of scheduling developed by Bansal and Pruhs for the problem of minimizing the sum of general delay costs of jobs on a single machine, which is the special case of our problem when $m = 1$. The key observation that leads to this geometric view is that minimizing general delay costs is equivalent to coming up with a set of feasible deadlines for all jobs. Moreover, testing the feasibility of deadlines further boils down to ensuring that, for every interval of time, the total volume of jobs that have (arrival time, deadline) windows within that interval is not large compared to the length of the interval. By a string of nice arguments, the authors show that this deadline scheduling problem can be viewed as a capacitated geometric set cover problem called R2C, a capacitated rectangle covering problem in 2-dimensional space. ("C" stands for the capacitated version, in which rectangles have capacities and points have demands. Later, we shall use "M" for the multi-cover version, where rectangles are uncapacitated, or have capacity 1, and points have different demands; and "U" for the uncapacitated version, where rectangles are uncapacitated and all points have demand 1.) Further, they argue that an approximation algorithm for the R2C problem can be used to obtain a comparable approximation algorithm for the scheduling problem.
In R2C, we are given a set $P$ of points in 2 dimensions, where each point $p$ is specified by its coordinates $(x_p, y_p)$ and is associated with a demand $d_p$. We are also given a set $R$ of axis-parallel rectangles, where each rectangle $r \in R$ has a capacity $c_r$ and a cost $w_r$. The goal is to choose a minimum-cost set of rectangles such that for every point $p$, the total capacity of the selected rectangles covering $p$ is at least $d_p$.
Our problem, which we call PR2C, can be seen as a parallel version of R2C. In PR2C, we have $m$ instances of the R2C problem with a common set $R$ of rectangles. Namely, the $i$-th instance is defined by a point set $P_i$, where each point $p \in P_i$ is associated with a demand $d_p$, and each rectangle $r \in R$ is associated with a capacity $c_{i,r}$ and a cost $w_r$. Notice that a rectangle has the same cost across the instances, but may have different capacities in different instances. The goal is to find a minimum cost set of rectangles that is a valid solution for every R2C instance. Using arguments similar to those of Bansal and Pruhs, we show that an approximation algorithm for the PR2C problem yields a comparable approximation algorithm for the COSSP problem.
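To make the covering condition concrete, here is a brute-force exact solver for a tiny (single) R2C instance; the data is hypothetical and this is only for intuition, as the actual algorithm rounds an LP relaxation.

```python
from itertools import combinations

# Brute-force solver for a tiny R2C (capacitated rectangle cover) instance,
# to make the covering condition concrete. All data is hypothetical.

def covers(rect, point):
    x1, x2, y1, y2 = rect
    px, py = point
    return x1 <= px <= x2 and y1 <= py <= y2

def r2c_opt(points, demands, rects, caps, costs):
    """Minimum-cost subset S of rectangles such that, for every point p,
    the total capacity of the rectangles in S covering p is >= demand(p)."""
    n = len(rects)
    best, best_cost = None, float('inf')
    for k in range(n + 1):
        for S in combinations(range(n), k):
            ok = all(sum(caps[i] for i in S if covers(rects[i], p)) >= d
                     for p, d in zip(points, demands))
            cost = sum(costs[i] for i in S)
            if ok and cost < best_cost:
                best, best_cost = S, cost
    return best, best_cost

# One point of demand 2, covered by a capacity-1 and a capacity-2 rectangle:
print(r2c_opt([(1, 1)], [2], [(0, 2, 0, 2), (0, 2, 0, 2)], [1, 2], [1, 3]))
# ((1,), 3): the capacity-2 rectangle alone meets the demand at cost 3
```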
Thus, much of our work is about designing a good approximation algorithm for the PR2C problem. Our algorithm for PR2C is a natural generalization of the algorithm for R2C of Bansal and Pruhs, who formulated an LP relaxation for R2C based on knapsack cover (KC) inequalities. In the LP, a variable $x_r$ indicates the fraction of rectangle $r$ that is selected. If the LP solution picks a rectangle to some constant fraction, then we can also select the rectangle in our solution without increasing the cost by too much. After selecting these rectangles, some points are covered, and the remaining points still have residual demands. These not-yet-covered points are divided into two categories: light and heavy points. Roughly speaking, a point is heavy if, in the LP solution, it is mostly covered by rectangles whose capacity is large relative to its residual demand; a point is light if it is mostly covered by rectangles whose capacity is small relative to its residual demand. Heavy points and light points are handled separately. The problem of covering heavy points is reduced to the R3U problem, a geometric weighted set-cover problem where elements are points in 3D space and sets are axis-parallel cuboids. On the other hand, the problem of covering light points can be reduced to R2M instances, where each R2M instance is a geometric weighted set multi-cover instance in the 2D plane. By appealing to the geometry of the objects produced by the scheduling instance, Bansal and Pruhs prove that the union complexity of the objects in the R3U and R2M instances is small. Using the technique of quasi-uniform sampling, introduced in [50, 15] for solving geometric weighted set cover instances with small union complexity, they obtain good approximation ratios for the problems of covering heavy and light points, respectively.
In our problem, we have $m$ parallel R2C instances with a common set of rectangles. As in the single-instance case, for each instance we categorize the points into heavy and light points based on the LP solution. The problem of handling heavy points can then be reduced to R3U instances. However, we cannot solve these R3U instances separately, because such a solution cannot be mapped back to a valid schedule for COSSP. Therefore, we combine the R3U instances into a single instance of a 4-dimensional problem. In the combined instance, our geometric objects, which we call hyper-4cuboids, contain (at most) one 4-cuboid (a cuboid in 4 dimensions) from each of the instances. The goal is to choose a minimum cost set of objects to cover all the points. For the light points, Bansal and Pruhs reduced the problem to R2M instances. Again, this approach is not viable in our case, as we need to solve all the instances in parallel. By a simple trick, we first merge the instances, obtaining R2M instances with a new set of rectangles identified, which we then map into a single 3-dimensional geometric multi-set cover problem.
In both cases, we show that the union complexity of the objects in our geometric problems increases by at most a small factor compared to the objects in the single-machine setting. Using the quasi-uniform sampling technique, we then obtain approximation ratios for the heavy and light points, which together yield our overall approximation ratio.
Precedence Constrained Scheduling
Our algorithm for the precedence constrained scheduling problem works in two steps. In the first step, we construct a migratory schedule, in which a job may be processed on multiple machines. For migratory schedules, we can assume that all jobs have unit size by replacing each job of size $p_j$ with a precedence chain of $p_j$ unit-length jobs. Solving a natural LP relaxation for the problem gives us a completion time vector $(C_j)_j$. Then we run the list-scheduling algorithm of Munier, Queyranne, and Schulz, and Queyranne and Schulz, which was designed for the weighted completion time objective. Specifically, for each job $j$ in non-decreasing order of the $C_j$ values, we insert $j$ into the earliest available slot after $C_j$ that does not violate the precedence constraints.
To analyze the completion time of a job $j$, we focus on the schedule constructed by our algorithm after the insertion of $j$. A simple lemma is that at any time slot after $C_j$, we are making progress towards scheduling $j$: either all machines are busy in the time slot, or we are processing a set of jobs whose removal would decrease the "depth" of $j$ in the precedence graph. This lemma was used in [45, 47] to obtain their approximation for the problem of minimizing weighted completion time (for unit-size jobs), and recently by Li to give an improved approximation for the same problem. With speed augmentation, this leads to a schedule that completes every job not much later than its LP completion time; with additional speed augmentation, this leads to an approximation algorithm for the problem with general delay cost functions. In the second step, we convert the migratory schedule into a non-migratory one using known techniques. The conversion does not increase the completion times of jobs, but requires some extra speed augmentation.
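The unit-size reduction used in the first step can be sketched directly; the job names and sizes below are illustrative.

```python
# Sketch of the unit-size reduction for migratory schedules: a job of
# size p becomes a chain of p unit jobs. Names and data are illustrative.

def expand_to_unit_jobs(sizes, prec):
    """sizes: {job: p_j}; prec: list of (a, b) meaning a precedes b.
    Returns (unit_jobs, unit_prec), where every unit job has size 1."""
    unit_jobs = [(j, i) for j, p in sizes.items() for i in range(p)]
    unit_prec = []
    for j, p in sizes.items():
        for i in range(p - 1):              # the chain inside job j
            unit_prec.append(((j, i), (j, i + 1)))
    for a, b in prec:
        # the last unit of a must precede the first unit of b
        unit_prec.append(((a, sizes[a] - 1), (b, 0)))
    return unit_jobs, unit_prec

units, uprec = expand_to_unit_jobs({'a': 3, 'b': 2}, [('a', 'b')])
print(len(units), len(uprec))  # 5 4: five units, three chain edges, one cross edge
```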
2 Concurrent Open-shop Scheduling
In this section, we consider the concurrent open-shop scheduling problem. Recall that in COSSP, we are given a set of $m$ machines and a set of $n$ jobs. Each job $j$ has a release time $r_j$ and consists of $m$ operations, one for each machine $i$; the operation of job $j$ on machine $i$ needs $p_{ij}$ units of processing. A job finishes only when all its operations are completed. The goal is to construct a preemptive schedule that minimizes the sum of costs incurred by the jobs: $\sum_j g_j(C_j)$.
As we mentioned earlier, our algorithm for COSSP is based on the geometric view of scheduling developed in the work of Bansal and Pruhs. Similar to their approach, we first reduce our problem to a geometric covering problem that we call the Parallel Capacitated Rectangle Covering Problem (PR2C), and argue that an approximation algorithm for PR2C yields a comparable approximation algorithm for our problem. We then design an approximation algorithm for PR2C.
2.1 Reduction to the PR2C Problem.
In PR2C, we have $m$ instances of the R2C problem with a common set of rectangles. The input consists of point sets $P_1, \ldots, P_m$, where each $P_i$ is a set of points in 2-dimensional space; each point $p$ is specified by its coordinates $(x_p, y_p)$ and has a demand $d_p$. The input also contains a set $R$ of rectangles. Each rectangle $r \in R$ has a cost $w_r$, and also a capacity $c_{i,r}$ that depends on the point set $P_i$. Notice that a rectangle has the same cost across the instances, but may have different capacities in different instances. The goal is to find a minimum cost set of rectangles that is a valid solution for every R2C instance. Recall that a set of rectangles is a valid solution to an instance of R2C if, for every point, the total capacity of the chosen rectangles covering the point is at least the demand of the point.
We can capture the PR2C problem using an integer program. Let $x_r$ be a binary variable that indicates whether the rectangle $r$ is picked in the solution. Then, the following integer program IP (1 - 3) captures the PR2C problem.
To see the connection between COSSP and PR2C, we need to understand the structure of feasible solutions to COSSP. Consider a feasible schedule $\sigma$ for an instance of COSSP, and suppose the completion time of a job $j$ in $\sigma$ is $C_j$. This implies that on every machine $i$, the job completes its operation in the interval $[r_j, C_j]$. Conversely, if we set a deadline $d_j = C_j$ for each job, then processing jobs using the Earliest Deadline First (EDF) algorithm on each machine ensures that all jobs finish by their deadlines. Thus, one of the main observations behind our reduction is that minimizing the sum of costs incurred by jobs is equivalent to coming up with a deadline $d_j$ for each job $j$, so a natural approach is to formulate COSSP as a deadline scheduling problem. However, this raises the question: is there a way to test whether a set of deadlines is a feasible solution to COSSP? An answer is given by the characterization of when EDF finds a feasible schedule (one where all jobs meet their deadlines) on a single machine. To proceed, we need to set up some notation.
For an interval $I = [t_1, t_2]$, which consists of all the time slots between $t_1$ and $t_2$, let $J(I)$ denote the set of jobs that have release times in $I$. For a set $S$ of jobs and a machine $i$, let $p_i(S) = \sum_{j \in S} p_{ij}$ denote the total processing length on machine $i$ of the operations of the jobs in $S$. Now, we introduce the notion of excess for an interval.
For a given interval $I$ and a machine $i$, the excess of $I$ on machine $i$, denoted by $ex_i(I)$, is defined as $ex_i(I) = \max(0, \, p_i(J(I)) - |I|)$.
The following lemma states the condition under which EDF schedules all the jobs within their deadlines.
Given a set of jobs with release times and deadlines, scheduling operations on each machine according to EDF is feasible if and only if, for every machine $i$ and every interval $I = [t_1, t_2]$, the total processing length on machine $i$ of the operations of jobs in $J(I)$ that have deadlines greater than $t_2$ is at least $ex_i(I)$.
Clearly, if the total processing length of the operations of jobs in $J(I)$ that have deadlines greater than $t_2$ is less than $ex_i(I)$, then no algorithm can meet all the deadlines, as the total length of operations that must be scheduled within the interval $I$ exceeds $|I|$. The sufficiency of the above condition follows from a bipartite graph matching argument, and we refer the reader to the literature for more details.
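The characterization can be checked mechanically on one machine; the following unit-step EDF simulation (with hypothetical integer data) returns whether all deadlines are met.

```python
def edf_feasible(jobs):
    """jobs: list of (release, length, deadline) triples with integer data.
    Simulates preemptive EDF on one machine in unit time steps; returns
    True iff every job finishes by its deadline."""
    rem = [length for _, length, _ in jobs]
    horizon = max(d for _, _, d in jobs)
    for t in range(horizon):
        # among released, unfinished jobs, run the earliest-deadline one
        avail = [i for i, (r, _, _) in enumerate(jobs)
                 if r <= t and rem[i] > 0]
        if avail:
            j = min(avail, key=lambda j: jobs[j][2])
            rem[j] -= 1
    return all(x == 0 for x in rem)

print(edf_feasible([(0, 2, 3), (1, 1, 2)]))  # True
print(edf_feasible([(0, 2, 2), (0, 2, 2)]))  # False: 4 units demanded in [0, 2)
```

The second instance fails exactly because the excess of the interval $[0, 2)$ exceeds the processing available after it, matching the necessity direction of the lemma.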
Lemma 2.2 leads to an integer programming formulation for COSSP, and we shall use this IP to define the PR2C instance. Let $T = \sum_{i,j} p_{ij}$; note that every reasonable schedule finishes by time $T$. For every job $j$ and every integer $k \geq 0$, let $d_{j,k}$ be the largest integer $t \leq T$ such that $g_j(t) \leq 2^k$ (if no such $t$ exists, then $d_{j,k}$ is undefined). For every $j$ and every $k$ for which $d_{j,k}$ is defined, we have a variable $x_{j,k}$ indicating whether $C_j > d_{j,k}$. Therefore, for a job $j$, the total number of variables in our IP is at most logarithmic in the maximum cost.
Our IP for COSSP is as follows:
We shall argue that the value of the above IP is within a constant factor of the optimum cost of the scheduling problem. First, given a feasible schedule $\sigma$ for COSSP with completion time vector $(C_j)_j$, we construct a low-cost solution to the integer program: for each $j$ and each integer $k$ such that $d_{j,k} < C_j$, we set $x_{j,k} = 1$, and we set all other variables to $0$. Since the cost thresholds double geometrically, the cost of this IP solution is at most a constant factor times the cost of $\sigma$. Moreover, Constraint (5) is satisfied, since the feasibility of $\sigma$ together with Lemma 2.2 guarantees the required coverage for every interval.
On the other hand, if we are given an optimum solution to the above IP, we can convert it into a schedule for COSSP of cost at most the cost of the IP solution. For every $j$, we set the deadline of job $j$ to be $d_{j,k}$, where $k$ is the smallest integer such that $x_{j,k} = 0$; such a $k$ must exist in order to satisfy Constraint (5). Constraint (5) together with Lemma 2.2 then guarantees that EDF meets all these deadlines, so the cost of the resulting schedule is at most the cost of the IP solution.
We will not discuss here the representation of arbitrary delay cost functions and how to compute the relevant quantities; we refer the reader to Bansal and Pruhs for more details. The running time of all our algorithms is polynomial in the input size.
The above integer program hints at how we can interpret COSSP geometrically as PR2C; indeed, it is equivalent to the PR2C problem. For every machine $i$ in COSSP, we create a point set $P_i$ in PR2C: for every interval $I = [t_1, t_2]$ with $ex_i(I) > 0$, we associate a point $(t_1, t_2)$ in 2-dimensional space, whose demand equals $ex_i(I)$, the excess of the interval $I$ on machine $i$. This completes the description of the point sets of PR2C. Now we define the rectangle set $R$: we create a rectangle for each job $j$ and each $k$ for which $d_{j,k}$ is defined. Notice that $x_{j,k}$ appears on the left side of Constraint (5) for the interval $[t_1, t_2]$ if $t_1 \leq r_j \leq t_2$ and $t_2 \leq d_{j,k}$. Thus, we let the rectangle for the variable $x_{j,k}$ be $[0, r_j] \times [r_j, d_{j,k}]$; this rectangle has cost $2^k$, and for the instance $P_i$ it has capacity $p_{ij}$. Notice that the rectangle for $x_{j,k}$ covers a point $(t_1, t_2)$ if and only if $t_1 \leq r_j \leq t_2 \leq d_{j,k}$; in other words, the job $j$ is released in the interval $[t_1, t_2]$, and (when $x_{j,k} = 1$) its completion time is greater than $t_2$, which is exactly what we want. Thus, the IP (4 - 6) is equivalent to our PR2C problem. We now forget about the COSSP problem and focus exclusively on designing a good algorithm for the PR2C problem.
2.2 Algorithm for the PR2C Problem
Our algorithm for PR2C is based on rounding a linear programming relaxation of the IP (1 - 3). As pointed out by Bansal and Pruhs, the linear programming relaxation obtained by simply relaxing the integrality of the variables has a large integrality gap, even when there is only one set of points. We therefore strengthen the LP by adding the so-called KC inequalities, first introduced in the work of Carr et al. Toward that, for a set $A \subseteq R$ of rectangles and a point set $P_i$, we define $c_i(A) = \sum_{r \in A} c_{i,r}$, the total capacity of the rectangles in $A$ with respect to $P_i$. We are now ready to write the LP.
Let us focus on the KC inequalities Eq.(8) for a point set . Fix a point and a set of rectangles . Recall that has a demand of . Suppose that, in an integral solution, all the rectangles in are chosen. Then they contribute at most towards satisfying the demand of the point . The remaining rectangles still need to account for . Notice also that in the KC constraints we truncate the capacity of a rectangle to . This ensures that the LP solution cannot cheat by picking a tiny fraction of a rectangle with a large capacity. Clearly, the truncation has no effect on an integral solution.
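As a concrete illustration of the truncation, here is a small sketch of how the coefficients of one KC inequality could be computed; the names (`demand`, `caps`, `S`) are our own placeholders, since the paper's symbols are not reproduced here.

```python
def kc_coefficients(demand, caps, S):
    """Residual demand and truncated capacities for one KC inequality.

    caps: dict rectangle -> capacity; S: the subset assumed fully picked.
    The rectangles in S cover at most the sum of their capacities, so the
    remaining rectangles must cover the residual.  Truncating each
    remaining capacity at the residual stops the LP from satisfying it
    with a tiny fraction of one huge-capacity rectangle.
    """
    residual = max(0, demand - sum(caps[r] for r in S))
    trunc = {r: min(c, residual) for r, c in caps.items() if r not in S}
    return residual, trunc

# A point of demand 8: picking rectangle "a" (capacity 3) leaves residual 5,
# and rectangle "b" (capacity 10) is truncated down to 5.
print(kc_coefficients(8, {"a": 3, "b": 10, "c": 2}, {"a"}))
```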
There are exponentially many KC constraints in our LP. However, using standard arguments, we can solve the LP in polynomial time to get a -approximate solution, for any , which suffices for our purposes to obtain a logarithmic approximation to COSSP; see [8, 14] for more details. The rest of this section is devoted to rounding the LP solution.
Weighted Geometric Set Multi-Cover Problem. The main tool used in our rounding algorithm is the result by  for the weighted geometric set multi-cover problem. As in the standard set cover problem, we are given a set of points and a set consisting of subsets of ; typically, the sets in are defined by geometric objects. Further, each point has a demand , and each set has a weight . The goal is to find a minimum-weight set such that every point is covered by at least subsets in . Note crucially that the sets in do not have capacities; if the sets had capacities, the problem would become similar to PR2C. The interesting aspect of geometric set cover problems is that the sets in are defined by geometric objects, and hence have more structure than in the standard set cover problem. In particular, if the sets in have low union complexity, then the problem admits a better than approximation algorithm. We now introduce the concept of the union complexity of sets.
Given a set of geometric objects, the union complexity of is the number of edges in the arrangement of the boundary of .
We will not get into the details of the definition; we refer readers interested in knowing more to  for an excellent introduction, or to [50, 15, 9]. For our purposes, it suffices to know that the union complexity of 3-dimensional objects is the total number of vertices, edges, and faces on the boundary. It turns out that the geometric objects in our rounding algorithm are 4-dimensional objects. However, by an appropriate projection, we reduce the problem of bounding the union complexity of 4-dimensional objects to that of 3-dimensional ones.
Suppose the union complexity of every of size is at most . Then, there is a polynomial time approximation algorithm for the weighted geometric set multi-cover problem. Further, the approximation factor holds even against a feasible fractional solution.
Our rounding algorithm is based on reducing our problem to several instances of the geometric set cover problem, and then appealing to Theorem 2.5. Consider an optimal solution to the LP (7 - 9). Let be some constant. We first scale the solution by a factor of , and let be this scaled solution. Clearly, scaling increases the cost of the LP solution by at most a factor of . Our rounding consists of three steps.
In the first step, we pick all rectangles for which . Let denote this set. Let denote the set of points that are covered by the rectangles in . We modify the point sets for , by removing the points that are already covered by . For all , let . In the second step, for each , we classify the points in the set into two types, heavy or light, based on the LP solution. For the heavy points, we create an instance of the geometric set cover problem, and then appeal to Theorem 2.5. The main technical difficulty here is to bound the union complexity of the geometric objects in our instance. For the light points, we create a separate instance of the geometric set multi-cover problem, and again apply Theorem 2.5. Finally, we obtain our solution by taking the union of the rectangles picked in all three steps.
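The first rounding step can be sketched as follows; the constant `ALPHA` and the variable names are illustrative placeholders for the paper's scaling factor and LP values, not its actual notation.

```python
ALPHA = 12  # stand-in for the paper's constant scaling factor

def step_one(x):
    """Scale LP values by ALPHA (capped at 1) and pick every rectangle
    whose scaled value reaches 1; these rectangles form the set R1."""
    scaled = {r: min(1.0, ALPHA * v) for r, v in x.items()}
    R1 = {r for r, v in scaled.items() if v >= 1.0}
    return R1, scaled
```

Scaling multiplies the LP cost by at most `ALPHA`, so every rectangle in `R1` is paid for by at most a constant factor times its LP contribution.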
Fix a set of uncovered points. For a point , let denote the set of rectangles in that contain . That is, . For a point , define the residual demand of as . From the definition of set , for every point , we note that . We now apply the KC inequalities on the set for each point . From Eq.(8) we have, for all
which implies that our scaled solution satisfies for all :
Note that for all , ; otherwise, we would have picked those rectangles in . Next, we round the residual demands of points and the capacities of rectangles as follows. For a point , let denote the residual demand of rounded up to the nearest power of 2. On the other hand, we round down the capacities of rectangles to the nearest power of 2; let denote these new rounded-down capacities. Since is scaled by a factor of , we still have that for all
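The power-of-2 rounding of demands (up) and capacities (down) is standard; a minimal sketch, with names of our own choosing:

```python
def round_up_pow2(d):
    """Round a positive demand up to the nearest power of 2."""
    p = 1
    while p < d:
        p *= 2
    return p

def round_down_pow2(c):
    """Round a capacity >= 1 down to the nearest power of 2."""
    p = 1
    while p * 2 <= c:
        p *= 2
    return p

assert round_up_pow2(5) == 8 and round_down_pow2(5) == 4
```

Each rounding loses at most a factor of 2, which the constant-factor scaling of the LP solution absorbs.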
To classify the points into heavy and light, we need the notion of class.
Let . A rectangle is a class rectangle with respect to a point set if . We say that a point is a class point if .
Recall that the capacities of rectangles for different point sets can be different. Therefore, the class of a rectangle depends on the point set . Now we categorize points in into heavy and light as follows.
A point belonging to class is heavy if its demand is satisfied by the rectangles of class at least in the LP solution. Otherwise, we say that the point is light.
From the definition, for all heavy points we have
and from Eq.(10) all light points satisfy
Let denote the set of heavy points and let denote the set of light points. Let . We create two separate instances, a heavy instance and a light instance of the PR2C problem, corresponding to the set of heavy points and the set of light points. The capacities of rectangles in these instances will be their rounded-down values: that is, the capacity of a rectangle for the point sets will be . Similarly, the demand of a point will be its residual demand . Notice that the LP solution restricted to is a feasible solution for both and . We round the solutions for these two instances of the PR2C problem separately, and take their union.
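Under the power-of-2 rounding above, the heavy/light test could look like the following sketch. Here `rects` maps each rectangle to an assumed (class, capacity) pair and `x` to its LP value; these names and the exact threshold are our own illustration of the definition, not the paper's notation.

```python
def is_heavy(p_class, demand, rects, x):
    """A point of class p_class is heavy if rectangles of class at least
    p_class already cover its (rounded) demand fractionally in the LP
    solution; otherwise it is light."""
    covered = sum(cap * x[r] for r, (c, cap) in rects.items() if c >= p_class)
    return covered >= demand

# demand 4 = 2**2, so the point has class 2; one class-3 rectangle of
# capacity 8 picked to extent 1/2 covers the demand fully
assert is_heavy(2, 4, {"a": (3, 8)}, {"a": 0.5})
assert not is_heavy(2, 4, {"a": (1, 8)}, {"a": 0.5})  # class too low
```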
2.2.1 Heavy Instance and Hyper-4cuboid Covering Problem
We round the heavy instance by reducing it to a weighted geometric set cover problem that we call the hyper-4cuboid covering problem (HCCP). HCCP is a generalization of the R3U problem considered in . In HCCP, we are given a set of points in 4-dimensional space. We are also given a set of geometric objects. Each object is a set of disjoint 4-dimensional cuboids. Formally, , where each is a 4-dimensional cuboid (4-cuboid) of the form . We call a hyper-4cuboid. Each hyper-4cuboid has a cost . The goal is to find a minimum-cost set such that covers all the points in . We say that a hyper-4cuboid covers a point if .
Now we show that there is a reduction from the PR2C problem on the heavy instance to the HCCP problem. For a point with coordinates , we create a point with coordinates . Note that the last coordinate of is determined by , the residual demand of the point . The first coordinate of is determined by the index of the set . For every rectangle , we create a hyper-4cuboid , which contains exactly one 4-cuboid for each point set . For a given index and a rectangle , there is a 4-cuboid of the form . Note that the last coordinate is determined by the capacity of the rectangle towards the point set . The cost of the hyper-4cuboid is the same as the cost of the rectangle .
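The lifting to 4 dimensions can be sketched as follows. The coordinate layout (set index first, capacity last) follows the description above, but the concrete tuples and names are our own illustration.

```python
def rect_to_hyper4cuboid(rect, caps_per_set):
    """rect = ((x1, x2), (y1, y2)); caps_per_set: set index -> rounded
    capacity of the rectangle for that point set.  One degenerate
    4-cuboid per point set; they are disjoint because each is pinned to
    its own value in the first coordinate."""
    (x1, x2), (y1, y2) = rect
    return {i: ((i, i), (x1, x2), (y1, y2), (0, cap))
            for i, cap in caps_per_set.items()}

def covers(cuboid, q):
    """A 4-cuboid covers a 4-D point iff it contains it coordinate-wise."""
    return all(lo <= v <= hi for (lo, hi), v in zip(cuboid, q))

h = rect_to_hyper4cuboid(((0, 4), (0, 4)), {1: 8, 2: 2})
assert covers(h[1], (1, 2, 3, 8))       # capacity 8 reaches this demand
assert not covers(h[2], (2, 2, 3, 8))   # capacity 2 does not
```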
Suppose there is a feasible solution of cost to the PR2C problem on the heavy instance, where for each and for each , the demand of the point is completely satisfied by the rectangles of class at least the class of . Then there is a feasible solution of cost at most for the corresponding instance of HCCP. Similarly, a solution of cost to the HCCP problem gives a solution of the same cost for the heavy instance of PR2C.
Suppose is a feasible solution of cost satisfying the condition stated in the lemma. Then, we claim that the set of hyper-4cuboids corresponding to the rectangles is a feasible solution to HCCP. Consider a point in the instance of HCCP produced by our reduction. Suppose has coordinates . Then, there must be a point with coordinates and demand . Suppose covers this point , and has dimensions . Then, we claim that covers the point . This is true, since contains a 4-cuboid with coordinates . It is easy to verify that as , which follows from the condition of the lemma.
The opposite direction also follows from the one-to-one correspondence between the rectangles in and the hyper-4cuboids. Suppose is a feasible solution to the instance of the HCCP problem. Then, it is easy to verify that picking the rectangles corresponding to the hyper-4cuboids defines a feasible solution to . ∎
Now, observe crucially that there is a fractional solution, of cost at most the cost of the LP solution, to the heavy instance satisfying the requirements of Lemma 2.8. This is true because the LP solution satisfies the inequality Eq.(11). Therefore, to apply Theorem 2.5, it remains to bound the union complexity of the hyper-4cuboids in our reduction.
The union complexity of any hyper-4cuboids in is at most , where .
Consider a set of hyper-4cuboids. For , define as the set of all 4-cuboids at level ; formally, , such that every 4-cuboid has its first dimension equal to . In other words, is the set of all 4-cuboids that have the same first dimension . Now, observe from our construction that for any pair , the objects in and do not intersect. This is because 4-cuboids at different levels are separated by a distance of 1 in the first dimension. Therefore, the union complexity of the set is at most times the maximum union complexity of the 4-cuboids in a single . However, the 4-cuboids in the sets all have the same form; thus, it suffices to bound the union complexity for any .
Fix some , and consider the 4-cuboids in the set . Notice that all the 4-cuboids in share the same first dimension . This implies that we can ignore the first dimension, as it does not add any edges to the arrangement of the objects in . Now consider the projection of the 4-cuboids to the remaining 3 dimensions; these are cuboids of the form . For cuboids of this type,  proves that the union complexity is at most , where is the number of distinct values taken by . In our reduction, the values of correspond to the capacities of rectangles. Recall that the capacities of rectangles in PR2C are defined based on the processing lengths of operations. As we round down the capacities to the nearest power of 2, the number of distinct values taken by is at most , where . Putting everything together, we complete the proof. ∎
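The decomposition used in the proof, grouping 4-cuboids by their degenerate first coordinate and then projecting that coordinate away, can be sketched as follows (the tuple representation matches the earlier illustrative layout, not the paper's notation):

```python
from collections import defaultdict

def split_by_level(cuboids4d):
    """Group 4-cuboids by their (degenerate) first coordinate.  Cuboids at
    different levels are disjoint, so the total union complexity is at most
    the number of levels times the worst per-level complexity; dropping the
    shared first coordinate leaves 3-dimensional cuboids."""
    levels = defaultdict(list)
    for c in cuboids4d:
        (i_lo, _), *rest = c
        levels[i_lo].append(tuple(rest))  # project away the first dimension
    return dict(levels)
```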
There is a solution of cost times the cost of LP solution for the HCCP problem, and hence for the PR2C on the heavy instance.
2.3 Covering Light Points and Geometric Multi-Cover by Hyper-Cuboids
Here, we design a approximation algorithm for the PR2C problem restricted to the light instance . There is one main technical difference between our algorithm for the light case and the algorithm of Bansal and Pruhs . Bansal and Pruhs divide the light instance into different levels, where at each level they solve a 2-dimensional geometric multi-cover problem, and show a approximation for each level. This in turn leads to a constant-factor approximation algorithm for the light instance. In our problem, however, we cannot solve the instances separately. Therefore, using a simple trick, we map our problem into a single instance of a 3-dimensional problem, and show a approximation for it. This is precisely the reason we lose the factor for the light case, as the union complexity of our objects becomes times the union complexity of the objects in .
Recall that for every point , the LP solution satisfies the inequality Eq.(12). Again, our idea is to reduce the instance to a geometric uncapacitated set multi-cover problem, and then appeal to Theorem 2.5. The geometric objects produced by our reduction are sets of cuboids; hence we abbreviate this problem as GMCC.
In GMCC, we are given a set of points in 3-dimensional space, and a set of geometric objects. Each geometric object is a collection of disjoint cuboids, which we call hyper-cuboids. Each point has a demand , and each hyper-cuboid has a cost . The individual cuboids constituting a hyper-cuboid have the form . The objective is to find a minimum-cost subset of hyper-cuboids such that for every point , there are at least hyper-cuboids in that contain the point .
Now we give the reduction from an instance of PR2C to an instance of GMCC. Fix some . Let be some large constant, and recall . For every light point with coordinates , we create different points, each of them shifted in the second dimension by a distance of . The different points corresponding to a point have coordinates , for . To complete the description of point set , we need to define the demands of each point, which we will do after describing the cuboids in .
For every rectangle , with dimensions , we create one hyper-cuboid . The cost of the hyper-cuboid is the same as the cost of the rectangle . The hyper-cuboid contains different cuboids, one for each point set . Fix an index . The cuboid corresponding to the set has dimensions , where denotes the class of the rectangle with respect to . Note that there is a one-to-one correspondence between rectangles and hyper-cuboids . Let ; that is, is the rectangle that corresponds to the hyper-cuboid . We say that is picked to an extent of in the LP solution to mean the extent to which the rectangle is picked in the LP solution.
Fix a point . We say that is contained in the hyper-cuboid , if it is contained in any of the cuboids that constitute . Now we are ready to define the demand of a point as .
To understand the above definition, consider a point with coordinates . The cuboids that can contain must have the form , where and . These cuboids correspond to the class rectangles in . Therefore, the demand of a point with coordinates is exactly equal to the number of class rectangles that cover the point in the LP solution.
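The per-class copies of a light point can be sketched as follows. The layout (set index first, class-dependent shift in the second coordinate) mirrors the description above, but the coordinate order and names are illustrative assumptions.

```python
def lift_light_point(i_set, p, num_classes, shift):
    """Create one copy of the 2-D light point p = (a, b) per capacity
    class, shifted in the second coordinate by a large constant so that
    copies of different classes can only meet cuboids of their own class."""
    a, b = p
    return [(i_set, a + k * shift, b) for k in range(num_classes)]

pts = lift_light_point(3, (1, 2), num_classes=2, shift=100)
assert pts == [(3, 1, 2), (3, 101, 2)]
```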
This immediately tells us that the LP solution is a feasible fractional solution to the instance of GMCC problem produced by our reduction . Now, we show the opposite direction.
If there is an integral solution to the instance of GMCC produced by our reduction of cost at most times the LP cost, then there is an integral solution of the same cost to the instance of PR2C on the light instance.
Suppose is a solution to the GMCC problem. We show that there is a corresponding solution to the PR2C problem of exactly the same cost. Our solution is simple: for every hyper-cuboid , we pick the corresponding rectangle in our solution to PR2C. From our reduction , the cost of each hyper-cuboid is the same as the cost of the corresponding rectangle, which implies that the costs of the two solutions are the same. It remains to show that is a feasible solution for the light instance.
Fix a point set , and consider any point . Let . In our reduction, there are different points , where the point has coordinates . The demand of is exactly equal to the total number of class rectangles that cover the point in the LP solution. Recall that in our reduction , each rectangle produces exactly one cuboid at level (that is, with first dimension ) in the corresponding hyper-cuboid . Abusing notation, let us denote this cuboid by .
Let the class of the point be . The total capacity of the rectangles that cover the point in our solution is
From the one-to-one correspondence between the rectangles and the cuboids, the term is exactly equal to the number of cuboids that cover the point , which is at least the demand . Now, in our reduction, the demand of the point (which has coordinates ) is exactly equal to the rounded-down value of the total number of class rectangles that cover the point in the LP solution. Therefore,
Simplifying the right-hand side of the above equation,
where the last inequality follows from Eq.(12) and the fact that is a light point. Therefore, is a feasible solution to the light instance of PR2C. ∎
Thus, to complete our algorithm for the light instance, it remains to design an approximation algorithm for GMCC. For that, we once again rely on Theorem 2.5, which requires us to bound the union complexity of the objects in .
The union complexity of any hyper-cuboids in is at most .
Consider any hyper-cuboids. By definition, each hyper-cuboid is a set of cuboids. For , define as the set of all cuboids with first dimension . Clearly, for any two indices , the cuboids in and do not intersect, as they are separated by a distance of 1 in the first dimension. Moreover, the cuboids in different have the same form, except that they are shifted in the first dimension. Thus, the union complexity of the hyper-cuboids in is at most times the union complexity of the cuboids in , for any .
Fix some and consider the set of cuboids in . Since all of the cuboids have exactly the same side in the first dimension, , the union complexity of the cuboids in is equal to the union complexity of the projection of the cuboids to the last two dimensions. This projection of the cuboids produces a set of axis-parallel rectangles. Furthermore, these axis-parallel rectangles partition into sets , such that no two rectangles from different sets intersect. This is true as these rectangles have coordinates in the second dimension. In each set , the rectangles abut the -axis (the axis of the second dimension). Bansal and Pruhs  showed that the union complexity of such axis-parallel rectangles abutting the -axis is at most . Therefore, the union complexity of all the axis-parallel rectangles is at most . Thus, we conclude that the union complexity of the hyper-cuboids is at most , which completes the proof. ∎
Thus, from Theorem 2.5, we get a approximation algorithm for the light instance of PR2C.
2.3.1 Proof of Theorem 1.1
Our final solution for the PR2C problem is obtained by taking the union of the rectangles picked in our solutions for the heavy and light instances, together with the set . Recall that in , we pick all the rectangles for which , for . The cost of our solution is at most times the cost of the LP (7 - 9), which implies an approximation algorithm. This completes the proof.
3 Precedence Constrained Scheduling
In this problem, each job has a processing length