Many models for scheduling problems assume jobs to be atomic. There are, however, numerous natural situations in which it is more suitable to model jobs as compositions and to consider the problem as an order scheduling formulation: In this case, a job is called an order and is assumed to be composed of a set of operations, which are requests for products. A job is considered finished as soon as all its operations are finished, and a natural objective is the minimization of the sum of completion times of all jobs.
Another important aspect in such scenarios can be the consideration of setup times that may occur due to the change of tools on a machine, the reconfiguration of hardware, cleaning activities or any other preparation work [2, 3, 1]. We model this aspect by assuming the set of operations to be partitioned into several families. The machine needs to perform a setup whenever it switches from processing an operation belonging to one family to an operation of a different family. Between operations of the same family, however, no setup is required.
This kind of order scheduling (with setups) has several applications that have been reported in the literature, and we name a few of them here: It can model situations in a food manufacturing environment . Here, several base ingredients need to be produced on a single machine and then assembled into end products, and setup times occur due to cleaning activities between the production of different base ingredients. Another example is customer orders , where each order requests several products that need to be produced by a single machine, and an order can be shipped to the customer only once all its products have been produced. Finally, our primary motivation for considering multi-operation jobs comes from its applicability within our project on “On-The-Fly Computing” . Here, the main idea is that IT services are (automatically) composed of several small, elementary services that together provide the desired functionality. Setup times occur due to the reconfiguration of hardware or the provisioning of data that needs to be available depending on the type of elementary service to be executed.
1.1 Contribution & Results
We consider the aforementioned problem, which is formally introduced in Section 2 and which in the survey by Leung et al.  was termed the fully flexible case of order scheduling with arbitrary setup times, for the case of a single machine. Because the problem is known to be NP-hard, as mentioned in Section 3 where we summarize relevant related work, we study it with respect to its approximability. Our approach is based on the following idea: We define a simplified variant of the considered problem, in which we only require that, before any operation of a given family is processed, a setup for this family is performed once at some (arbitrary) earlier time. Solutions to this simplified variant already carry a lot of information for solving the original problem. In Section 4, we show that they can be transformed into -approximate solutions for our original problem in polynomial time. In Section 5, we then provide an algorithm that solves the simplified variant optimally, leading to a -approximation for the original problem. The runtime of this approach, however, is , where denotes the number of families. Thus, it is only polynomial for a constant number of families, which turns out to be no coincidence, as we also observe that solving the simplified variant optimally for non-constant is NP-hard. In Section 6, we then show how an algorithm by Hall et al.  can be combined with our approach from Section 4 to obtain a runtime of , at the cost of worsening the approximation factor to . We complement this result with a hardness result ruling out approximation factors less than , assuming a certain variant of the Unique Games Conjecture.
Finally, we present some results of a simulation-based evaluation of our approach in Section 7. We show that, on randomly created instances, our algorithm performs even better than the approximation factor of suggests, and we show how our approach can be tuned to improve its performance in such settings.
We consider a scheduling problem in which a set of jobs needs to be processed by a single machine. Each job has a weight and consists of a set of operations . Each operation is characterized by a processing time and belongs to a family . If the schedule starts with an operation of family , and whenever the machine switches from processing operations of one family to an operation of another family , a setup taking time needs to take place first. Given this setting, the goal is to compute a schedule that minimizes the weighted sum of job completion times, where a job is considered completed as soon as all its operations are completed. More formally, a schedule is implicitly given by a permutation on , and the completion time of an operation is given by the accumulated setup times and processing times of operations preceding operation . That is, for , the completion time of operation is given by , where is an indicator that is zero if and only if its parameters are the same and one otherwise. The completion time of a job is then given by , and the goal is to minimize the total weighted completion time given by .
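As a concrete illustration of this cost model, the following sketch (all identifiers are ours, not from the paper) evaluates the total weighted completion time of a given operation permutation, charging a setup at the start of the schedule and on every switch to a different family:

```python
def total_weighted_completion(perm, proc, fam, setup, job_of, weight):
    """perm: operation ids in schedule order; proc/fam/job_of map per operation;
    setup maps per family; weight maps per job."""
    t = 0.0
    prev = None                      # family of the previously processed operation
    completion = {}                  # job -> completion time of its last operation
    for o in perm:
        if fam[o] != prev:           # setup at the start and on every family switch
            t += setup[fam[o]]
            prev = fam[o]
        t += proc[o]
        completion[job_of[o]] = t    # later operations overwrite earlier ones
    return sum(weight[j] * c for j, c in completion.items())
```

Since `t` only increases, a job's completion time is exactly the completion time of its last operation in the permutation.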
Using the classical three-field notation for scheduling problems and following Gerodimos et al. , we denote the problem by . We study this problem in terms of its approximability. A polynomial-time algorithm has an approximation factor of if, on any instance, , where and denote the total weighted completion time of and of an optimal solution, respectively.
3 Related Work
The problem , and its more general version with multiple machines, are also known as order scheduling. More precisely, it was termed order scheduling in the flexible case with setup times in the survey by Leung et al. As previously mentioned, the problem is known to be NP-hard already for the unweighted case and a single machine, as proven by Ng et al. . Besides this hardness result, only a single positive result is known: Due to Gerodimos et al. , a special case can be solved optimally in time . This special case requires that the jobs can be renamed so that job contains, for each operation , an operation such that and . A related positive result is due to Divakaran and Saks , who designed a -approximation algorithm for our problem in the case that all jobs consist of a single operation. Monma and Potts worked on algorithms for the same model with a variety of objective functions. One of their results is an optimal algorithm for total weighted completion time under the constraint that the number of families is constant . Their approach, however, relies on the fact that there is a trivial order inside each family, so that the problem only arises in interleaving the families. Since we are dealing with multi-operation jobs, we cannot assume such an order.
Taking a broader perspective, the problem can be seen as a generalization of the classical problem of minimizing the total (weighted) completion time when there are no setups and all jobs are atomic (i.e., we only have single-operation jobs). It is well known that sequencing all jobs in order of non-decreasing processing times (shortest processing time ordering, SPT) minimizes the total completion time on a single machine . In case jobs have weights and the objective is to minimize the total weighted completion time, a classical result is due to Smith . He showed that weighted shortest processing time (WSPT), that is, sorting the jobs non-decreasingly with respect to their ratio of processing time to weight, is optimal for this objective. Beyond these two results, the problem has been studied extensively in different variants with respect to the number of machines, potential precedences between jobs, the existence of release times, and more. For a single machine, it was shown by Lenstra and Rinnooy Kan  and independently by Lawler  that adding precedences among jobs to the (unweighted) problem makes it NP-hard. Hall et al.  analyzed algorithms based on different linear programming formulations and obtained constant factor approximations for several variants, including the minimization of the total weighted completion time on a single machine with precedences. In particular, they obtained a 2-approximation for this problem, which we will later make use of in our approximation algorithm for non-constant . The factor they achieve is essentially optimal, as Bansal and Khot  showed that a -approximation is impossible for any , assuming a stronger version of the Unique Games Conjecture.
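Smith's rule mentioned above is simple enough to sketch in a few lines (a hypothetical illustration of the classical result, not code from any of the cited works):

```python
def wspt_order(jobs):
    """jobs: list of (processing_time, weight) pairs.
    Smith's rule: sorting by p/w non-decreasingly minimizes the
    total weighted completion time on a single machine."""
    return sorted(jobs, key=lambda pw: pw[0] / pw[1])

def total_weighted_completion_time(order):
    """Objective value of a given job order on one machine (no setups)."""
    t, total = 0.0, 0.0
    for p, w in order:
        t += p              # job completes once all preceding jobs and itself ran
        total += w * t
    return total
```

For unit weights this specializes to the SPT ordering.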
More loosely related is a model due to Correa et al. , in which jobs can be split into arbitrary parts (that can be processed in parallel) and each part requires a setup time before work on it can start. They proposed a constant factor approximation for the total weighted completion time on parallel machines. Recently, some approximation results for the minimization of the makespan for single-operation jobs have been achieved for different machine environments with setup times [6, 12, 13]. Finally, scheduling with setup times in general is a large field of research, primarily with respect to heuristics and exact algorithms, and the interested reader is referred to the three exhaustive surveys due to Allahverdi et al. [2, 3, 1].
4 The One-Time Setup Problem
In this section, we introduce a relaxation of and show how solutions to this relaxation can be transformed into solutions to the original problem while losing only a small constant factor. The one-time setup problem ( ) is a relaxation of in which setups are not required on each change to operations of a different family. Instead, we only require that, for any family , a setup for is performed once at some time before any operation belonging to is processed. Formally, we introduce a new (setup) operation for each family with , and a precedence relation between and each operation belonging to that ensures that is processed before the respective operations. A schedule is then implicitly given by a permutation on all operations (those belonging to jobs as well as those representing setups). We only consider permutations that adhere to the precedence constraints. The completion time of an operation under schedule is given by . The remaining definitions, such as the completion time of a job and the total weighted completion time, remain unchanged. Note that this problem is indeed a relaxation of our original problem in the sense that the total weighted completion time cannot increase when only one-time setups are required.
Before we turn to our approach to transform solutions to into feasible solutions for , we first give a simple observation. It shows that we can, intuitively speaking, glue all operations of a job together and focus on determining the order of such glued jobs. Formally, given a schedule , a job is glued if all of ’s operations are processed consecutively without other operations in between. We have the following lemma.
Every schedule can be transformed into one in which all jobs are glued without increasing the total weighted completion time.
Consider some schedule with total weighted completion time . Without loss of generality assume the jobs to be finished in the order . Consider job and let be the operation of processed last. We move all operations of so that they are processed consecutively in a block and directly before . This does not change the completion time of and does not increase the completion time of any other job. Also, because we only move operations to later points in time, all precedence constraints are still satisfied. We repeat this process for each job in the schedule in the order . Thereby, we obtain a schedule with completion time in which all jobs are glued. ∎
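The gluing transformation from the proof can be sketched as follows (identifiers are ours; setup operations are omitted for simplicity). Since every job's operations end up directly before that job's last operation, the result is equivalent to grouping operations by job in the order in which the jobs complete:

```python
def glue(schedule, job_of):
    """schedule: list of operation ids in processing order;
    job_of: maps each operation to its job.
    Returns a schedule in which every job's operations are consecutive,
    placed at the position of the job's last operation."""
    last_pos = {}                          # job -> index of its last operation
    for i, o in enumerate(schedule):
        last_pos[job_of[o]] = i
    # jobs in order of completion = order of their last operations
    jobs = sorted(last_pos, key=last_pos.get)
    glued = []
    for j in jobs:
        glued.extend(o for o in schedule if job_of[o] == j)
    return glued
```

Because operations are only moved later, no job completes later than before, matching the lemma.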
Due to the previous result, we assume in the rest of the paper that each job in an instance of only consists of a single operation . This operation has a processing time and the precedence relation is extended so that each setup operation with a precedence to some now has a precedence to .
4.1 Transforming One-Time Setup Solutions
In this section, we present our algorithm Transform, which turns a solution for the one-time setup problem into a solution for our original problem . Initially, is the sequence of operations as implied by the solution after splitting the glued jobs back into their original operations and removing the setup operations. A batch is a (maximal) subsequence of consecutive (non-setup) operations of the same family. Intuitively, the schedule implied by is likely to already have a useful order for the jobs but misses a good batching of operations of the same family, which would lead to far too many setups to obtain a good schedule. Therefore, the main idea of Transform is to keep the general ordering of the schedule but to make sure that each setup is “worth it”, i.e., that each batch is sufficiently large to justify a setup. We achieve this by filling up each batch with operations of the same family scheduled later, until the next operation would increase the length of the batch too much. More precisely, let be the batches in (in this order). We iterate over the batches in order of increasing and, for each batch of family , do the following: Move as many operations of family from the closest batches , to as possible while ensuring that , where (we call it the pull factor) is some fixed constant and . If a batch becomes empty before being considered, it is removed from (and hence not considered in later iterations). We show the following theorem on the quality of Transform.
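A hedged sketch of Transform follows. The exact growth condition on a batch is elided in the text above; as a placeholder assumption, we stop filling a batch of family `f` once its total processing time would exceed `alpha * setup[f]`, where `alpha` plays the role of the pull factor. Batches emptied by pulling are removed, as in the text:

```python
def transform(batches, setup, alpha):
    """batches: list of (family, [processing times]) in schedule order.
    Pulls same-family operations forward into earlier batches; the
    threshold alpha * setup[f] is our assumed stand-in for the paper's
    pull condition."""
    batches = [[f, list(ops)] for f, ops in batches]
    i = 0
    while i < len(batches):
        f, ops = batches[i]
        # pull operations of family f from the closest later batches of f
        for b in batches[i + 1:]:
            if b[0] != f:
                continue
            while b[1] and sum(ops) + b[1][0] <= alpha * setup[f]:
                ops.append(b[1].pop(0))
        # drop later batches that were emptied by pulling
        batches = batches[: i + 1] + [b for b in batches[i + 1:] if b[1]]
        i += 1
    return [(f, ops) for f, ops in batches]
```

The relative order of the surviving batches, and of the operations inside them, is preserved, reflecting the idea of keeping the general ordering of the one-time setup schedule.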
If , then , for any .
We only need to show that . For the analysis we will compare the completion time of each operation in to the one in (in their respective cost model). We denote by the schedule up to and including operation and by that some operation in is of family . We have
We will now analyze the contribution of some family to the completion time of in . We have
due to the following reasoning. The first three summands describe the contribution of family ’s jobs to the completion time of in . Compared to , we move operations of total length at most belonging to family in front of (recall that empty batches are removed in the process of Transform; only the last batch of some family before pulls operations from behind in front of ), and we do not consider the one-time setup operation. The last summand represents the contribution due to setups for family . We need to perform at most many setups for operations of family that contribute to the completion time of in . This is true for the following reason: By our construction, for two batches of the same family with no other batches of that family in between, the combined processing time of those batches has to be at least , as otherwise they would have been merged. If there is an odd number of batches, we cannot say anything about the last batch except that it has nonzero processing time; this is captured by the rounding. Therefore we obtain
where the last inequality holds because a family can only contribute to the completion time of in if it contributed to the completion of in and in this case by definition. (If itself got moved to the front there might be a family that contributed to but does not to ). Since each operation’s completion time in is at most (1+) times as big as in , we know that for each job , . ∎
In fact, one can show that there are instances in which , and therefore that the analysis of Transform is indeed tight (cf. Section 4.1.1). However, it is also worth mentioning that these instances are rather “artificial”, as the jobs’ processing times (and even their sum) are negligible while setup operations essentially dominate the completion times. On less adversarially constructed instances, we would expect that even for moderate values , significantly dominates for most operations, as grows the later is scheduled while stays constant. This would then lead to . This observation is also discussed and supported by our simulations (cf. Section 7).
4.1.1 Tightness of the Analysis of Transform
In Section 4.1, we mentioned that the factor of established in our analysis of the Transform algorithm is tight for . We prove this statement formally by providing an instance achieving this factor.
The analysis of Transform is tight.
As we can see in Figure 1, for arbitrarily small , the analysis of and is tight for the blue operation. Imagine that both orange operations, as well as the blue operation, belong to single-operation jobs. We replace the single blue-operation job with many such operations, each with processing time . We again compare the total completion time of the schedule before and after Transform (in the respective cost models). We get while . Letting tend to infinity and to zero, we get that our analysis of is tight. Note that we use instead of zero in this example to show that the analysis is tight even when processing times of zero are not allowed. ∎
5 Approximations for Constant Number of Families
In this section, we study approximations for the problem with a fixed number of families . The general idea is to first solve the problem optimally and then to use the Transform algorithm as described in the previous section, leading to -approximate solutions for instances of . To solve the problem optimally, we describe a two-step algorithm and two possible approaches for its second step. The first one is a direct application of a known approach by Lawler . We also propose a new, alternative approach, which is much simpler as it is specifically tailored to our problem.
To solve optimally, in the first step we exhaustively enumerate all possible permutations of setups. In the second step, we then find, for each permutation, the optimal schedule under the assumption that the order of setup operations is fixed according to that permutation. After both steps have been performed, we simply select the best result, which is an optimal solution to the problem.
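The two-step scheme has a very short skeleton. In this sketch (our own illustration), `solve_fixed_order` is a placeholder for the second-step subroutine, i.e., either Lawler's algorithm (Section 5.1) or the local search of Section 5.2; the enumeration contributes the factorial dependence on the number of families:

```python
from itertools import permutations

def solve_ots(families, solve_fixed_order):
    """families: the set of setup families.
    solve_fixed_order(perm) -> (cost, schedule): optimal schedule when
    the setup operations are forced to appear in the order perm."""
    best_cost, best_sched = float('inf'), None
    for perm in permutations(families):          # all m! setup orders
        cost, sched = solve_fixed_order(perm)
        if cost < best_cost:
            best_cost, best_sched = cost, sched
    return best_cost, best_sched
```

Since every feasible schedule processes the setup operations in some order, taking the minimum over all permutations yields an optimal solution.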
5.1 Series Parallel Digraph and Lawler’s Algorithm
Lawler  proposed an algorithm that optimally solves in polynomial time under the condition that the precedences can be described by a series parallel digraph. To solve , the general idea is to modify the precedence graph of a given instance so that it becomes series parallel and then to apply Lawler’s algorithm. We create a series parallel digraph that represents both the jobs’ reliance on setups and the predetermined order of setup operations as follows: Given a permutation of setup operations, we create a precedence chain of nodes . Then, for each operation , we add an edge from to such that is the smallest value for which all operations in belong to a family in . Intuitively, since we have fixed the order of setups, for each operation we can easily determine which setup operation is the last one necessary before the operation can be processed. The remaining precedences can be ignored because they become redundant once the setup order is fixed.
Having done this, we obtain a problem with a series parallel precedence digraph that is equivalent to the original problem. At this point, we can use the result by Lawler  to solve it in polynomial time.
5.2 Simple Local Search Algorithm
In this section, we propose a simple algorithm that solves optimally in polynomial time, given that is fixed. Since our algorithm is tailored to this specific problem, it is considerably simpler and incurs less overhead, which justifies introducing it here alongside the aforementioned solution.
We first show that we can assume optimal schedules to fulfill a natural generalization of the weighted SPT-order to the setting with setup times. We define this notion for our original problem as follows: A schedule is in generalized weighted SPT-order if the following is true: For every with , is scheduled before or is scheduled at a position where cannot be scheduled (because precedences would be violated). If , is scheduled before or is scheduled at a position where cannot be scheduled, where and .
Any schedule with total weighted completion time can be transformed into one in generalized weighted SPT-order without increasing the total weighted completion time.
If there are two jobs , with that do not fulfill the desired property, cannot be optimal due to the following reasoning. Let be the set of operations scheduled after and before . Let and denote the total processing time of all jobs and setups in and the total weight of all jobs in , respectively. We show that moving directly behind ( ) or moving directly before ( ) reduces the total weighted completion time of .
The change of the total weighted completion time due to is given by . If , decreases the total weighted completion time and we are done.
Otherwise, if , we show that leads to a decrease. Since , we know that . Therefore, . The change in the total weighted completion time due to is given by . Since , there exist with and . Plugging these in, we get
Therefore, in both cases we get a contradiction to the optimality of and hence, no such jobs and can exist.
It remains to argue about pairs of jobs and such that . For such pairs, an argument analogous to the one above shows that or does not increase the total weighted completion time. Additionally, the move establishes the desired property between and , and one can easily verify that such a move cannot lead to a new violation of the property for any other pair of jobs. Therefore, the number of pairs of jobs violating the desired property strictly decreases, and repeated application of this process leads to a schedule with the desired properties. ∎
Due to the previous lemma, we will restrict ourselves to schedules that are in generalized weighted SPT-order. We call the (possibly empty) sequence of jobs between two consecutive setup operations in a schedule a block. We therefore particularly require that in any schedule we consider, the jobs within a block are ordered according to the weighted SPT-order.
We execute a local search algorithm starting from the initial schedule , given by the setup operations in the input order followed by all jobs in weighted SPT-order (ties are broken in favor of jobs with lower index). An optimal schedule is then computed by iteratively improving this schedule via local search. Given a schedule , a move of job is specified by the block into which is placed, subject to the constraint that the resulting schedule remains feasible. Note that, due to our assumption that we only consider schedules in generalized weighted SPT-order, a schedule and a move of a job uniquely determine a new feasible schedule. A move of job is called a Greedy move if it improves the total weighted completion time and no other move of leads to a larger improvement. Among all Greedy moves for job , we call the one that places closest to the beginning of the schedule a move. Our local search algorithm iteratively applies, in weighted SPT-order, a single move for each job. For ease of presentation, we assume in the following that we have guessed the permutation of setup operations correctly and that the initial schedule in all considerations is always .
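One step of the local search can be sketched as follows (our own illustration with hypothetical helpers). For a job `j`, the sketch tries every feasible block, keeps WSPT-order inside each block, and returns the best placement found, breaking ties toward the front by accepting only strict improvements; `cost` and `feasible` are assumed to be supplied by the caller:

```python
def greedy_move(blocks, j, feasible, cost, wspt_key):
    """blocks: list of job lists, one per gap between consecutive setups.
    feasible(j, i): may job j be placed into block i?
    cost(blocks): total weighted completion time of the implied schedule.
    wspt_key(j): ratio used to keep each block in weighted SPT-order."""
    best_cost, best_blocks = cost(blocks), blocks
    for i in range(len(blocks)):
        if not feasible(j, i):
            continue
        cand = [list(b) for b in blocks]
        for b in cand:                       # remove j from its current block
            if j in b:
                b.remove(j)
        cand[i] = sorted(cand[i] + [j], key=wspt_key)
        if cost(cand) < best_cost:           # strict: ties keep the earliest placement
            best_cost, best_blocks = cost(cand), cand
    return best_blocks
```

Applying such a move once per job, in weighted SPT-order, mirrors the procedure analyzed in the lemmas below.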
Each schedule in generalized weighted SPT-order can be reached by applying, in weighted SPT-order, a single move for each job. Additionally, each intermediate schedule is in generalized weighted SPT-order.
Consider the initial schedule and let be the jobs in weighted SPT-order. Now move for job is performed after the moves for , have been performed and it moves to the respective position (i.e., block) to which it belongs in . Note that after any move , the current schedule is in generalized weighted SPT-order since the jobs form a subschedule of , the jobs form a subschedule of , and for all and . ∎
Due to the previous lemma, from now on we assume the following. A sequence of moves defines the schedule obtained by applying the moves (in this order) to the respective first jobs in weighted SPT-order to the initial schedule . Our next step is to show that an optimal schedule can be found by Greedy moves.
Suppose there is a sequence of moves such that the resulting schedule has total weighted completion time . Then all moves are Greedy moves.
Suppose to the contrary that the total weighted completion time is , but there is a move among that is not a Greedy one. Let be the last move that is not a Greedy one, and let be the block to which is moved by . Consider all blocks that can be the destination of a Greedy move of in . Among them, let be the one closest to if all are behind , and otherwise let be the last one in front of . Let the move be the move of to . Observe that moving to or to any block between and is not a Greedy move, by the definition of and the fact that is not a Greedy move. Therefore, the total weighted completion time of the schedule is larger than that of the schedule . We show that the total weighted completion time of the schedule is also smaller than that of , which is a contradiction. To this end, we distinguish two cases depending on the position of relative to .
We start with the case that is in front of . Let be the set of operations in processed before and after the -th job after the first job of block . We then deduce by the above observations that for every with it holds
We claim that each job with is moved by so that it is within or behind , or in front of . Assuming the claim to be true, this concludes the proof of the first case, as the improvement due to moves is independent of whether they are applied to or , and hence has a smaller total weighted completion time than , contradicting its optimality. It remains to prove the claim. Suppose to the contrary that the claim is not true due to a job (if there are several, take the first one). Let be the last job processed before in . Let be the set of operations in processed after and not later than . Scheduling in instead would increase the total weighted completion time by
where the last inequality follows from Equation 1 together with the fact that for some and the fact that . Therefore, is not a Greedy move, which contradicts the assumption that all moves after are Greedy moves.
In case is behind we can argue as follows. Because is in generalized weighted SPT-order, any job with can only be placed between and if cannot be placed there. This, however, is not true due to the definition of . Therefore, by similar arguments as in the previous case, also has a smaller total weighted completion time than , which contradicts its optimality. ∎
The next corollary follows from the previous three lemmas.
There is an optimal schedule that can be reached by applying, in weighted SPT-order, a single Greedy move per job.
Using arguments similar to those in the proof of the previous lemma, we can finally show that our tie breaker (by which Greedy and moves differ) does no harm when searching for an optimal solution.
Applying, in weighted SPT-order, a move for each job, leads to an optimal schedule.
We know by the previous result that there are Greedy moves such that is an optimal solution, which is in generalized weighted SPT-order.
Assume are all moves. Let be the move for given . Obviously, and have the same total weighted completion time. We claim that the improvement of each with is the same independent of whether it is applied to or . The claim together with the previous lemma leads to the fact that are all Greedy moves. Consequently, is an optimal solution, which is in generalized weighted SPT-order. Applying the argument iteratively, we obtain the lemma.
It remains to argue why the claim indeed holds. Let be the block that is the destination of and let be the job in front of in . Consider the case . By the assumption that is in generalized weighted SPT-order, can move to a block between (inclusive) and only if at the respective position cannot be scheduled. This, however, cannot be true due to the definition of . Therefore, the improvement of is the same independent of whether it is applied in or . The claim follows by inductively applying the argument to all . ∎
By the previous lemma, we have the final theorem of this section.
The local search algorithm computes optimal solutions for the one-time setup problem in time . In combination with the Transform algorithm from Section 4, this yields an approximation algorithm with approximation factor for our original problem.
6 Arbitrary Number of Families
In the previous section, we have seen that can be solved in time , which is polynomial for a fixed number of families. At this point, one might ask whether there are approximation algorithms running in time , and whether the non-polynomial dependence on is inherent to . The latter is indeed true: Woeginger  has shown that different special cases of the model, including one equivalent to our (glued) with the restriction that all job weights are , are equally hard to approximate. Therefore, optimally solving the one-time setup problem is indeed NP-hard for non-constant . On the positive side, in Section 6.1 we show how can be approximated in time . This approach, however, worsens the approximation factor from to . Lastly, we show that is inapproximable within a factor of , assuming a version of the Unique Games Conjecture, by applying results from Woeginger  and Bansal and Khot .
6.1 Approximation Algorithm
The general idea of our approximation algorithm is the same as for the case of a constant : We first solve and then use Transform from Section 4.1 to obtain a feasible schedule for our original problem. Recall that is a special case of . As this problem has been studied a lot, there are different approximation algorithms in the literature and, for example, [10, 5] provide -approximation algorithms. Therefore, we conclude with the following theorem.
can be approximated with an approximation factor of in polynomial time.
6.2 Lower Bound on the Approximability
Assuming a stronger version of the Unique Games Conjecture , is inapproximable within for any .
Woeginger  showed that the general and some special cases of the problem have the same approximability threshold. Bansal and Khot  proved that, assuming a stronger version of the Unique Games Conjecture, , and therefore also the special cases in , are inapproximable within for any . The special case we are interested in was defined by Woeginger as: [the] special case where every job has either and , or and , and where the existence of a precedence constraint implies that and , and that and . It is easy to see that an -approximation for also yields an -approximation for the stated special case by transforming an instance of the special case in the following way: For every job with , add a family with . For every job with , add a job with . For every precedence , add an operation to with and . One can verify that the optimal solutions of both problems have the same cost. In both representations, the difficult part is to decide the order of the weight jobs or setups, respectively; all jobs or operations with processing time can be scheduled as early as possible in an optimal solution.
Therefore, we can conclude that has an approximability threshold at least as high as that of . ∎
7 Simulation Results
To conclude our study of the problem, we performed a simulation-based analysis of our approach to complement the theoretical results. To this end, we examined the conjecture from Section 4.1 on the approximation quality of our algorithm for a constant number of families. Additionally, we propose a way to improve its performance on randomly created instances.
7.1 Approximation Quality for Constant
In Section 4.1 we conjectured that on instances less artificial than the ones constructed to show the tightness of the analysis of Transform, the approximation factor of our approach should, for moderate pull factors, be noticeably smaller than the proven worst-case bound. To give evidence for this conjecture, we simulated our approach on randomly created instances with small constant pull factors and evaluated the approximation quality, given by the ratio of the achieved total completion time to a lower bound on the optimum given by the solution to the one-time setup problem. The randomly created instances are based on processing times for operations that are drawn from a normal distribution; for other distributions such as log-normal, uniform, and Weibull we obtained very similar results. The setup cost of each family was set to the average processing time of that family's operations multiplied by a setup cost factor, which we varied in different experiments. Similarly, each job contains an operation of a family with a fixed probability per family, which we also varied in different experiments. Figure 2 shows the typical behavior we observed in our simulations, here exemplarily for a fixed small pull factor. As one can see, the observed approximation quality always stayed well below the theoretical worst-case bound in our simulations.
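The instance generation described above can be sketched as follows. All names and the concrete defaults (`mean`, `sd`, the clamping of negative draws) are our own illustrative assumptions; the paper does not fix these details.

```python
import random

def random_instance(num_families, num_jobs, setup_cost_factor,
                    membership_prob, mean=10.0, sd=2.0, seed=0):
    """Generate a random order-scheduling instance (illustrative sketch).

    Processing times are drawn from a normal distribution (clamped to stay
    positive). Each family's setup time is the average processing time of
    its operations multiplied by the setup cost factor. Each job contains
    an operation of each family independently with probability
    membership_prob.
    """
    rng = random.Random(seed)
    setup, op_times = {}, {}
    for f in range(num_families):
        times = [max(0.1, rng.gauss(mean, sd)) for _ in range(num_jobs)]
        op_times[f] = times
        setup[f] = setup_cost_factor * sum(times) / len(times)
    jobs = [[(f, op_times[f][j]) for f in range(num_families)
             if rng.random() < membership_prob]
            for j in range(num_jobs)]
    return setup, jobs
```

Swapping `rng.gauss` for `rng.lognormvariate`, `rng.uniform`, or `rng.weibullvariate` reproduces the alternative distributions mentioned above.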
7.2 Heuristic Improvements
In our Transform algorithm, we build batches given a solution to the one-time setup problem by moving operations to the front until a batch becomes “sufficiently large” to justify a setup for the respective family. The term “sufficiently large” is thereby determined based on the pull factor. Our analysis of Transform was shown to be tight for the theoretically best choice of the pull factor. However, we already conjectured in Section 4.1 that in practice other values might lead to superior performance. This parameter therefore gives a natural option for tuning the algorithm. One might expect that the best pull factor depends on various parameters of an instance, such as the processing times or the number of operations per family. Figure 3 shows a typical result of our simulations, demonstrating that on randomly created instances there is indeed much room for improvement over the theoretically best pull factor.
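The batch-building rule just described can be sketched as follows. This is a deliberately simplified, single-pass view of Transform's batching step, not the algorithm itself; `build_batches` and its data layout are illustrative assumptions.

```python
def build_batches(ops, setup, beta):
    """Greedy batch building in the spirit of Transform (simplified sketch).

    ops:   list of (family, processing_time) in the order given by the
           one-time setup solution
    setup: family -> setup time
    beta:  pull factor

    Operations of a family accumulate in an open batch; the batch is
    closed (paying one setup) once its total processing time reaches
    beta times the family's setup time, i.e. it is "sufficiently large".
    """
    open_load = {}  # family -> accumulated processing time in the open batch
    pending = {}    # family -> operations waiting in the open batch
    batches = []    # closed batches: (family, [processing times])
    for fam, t in ops:
        pending.setdefault(fam, []).append(t)
        open_load[fam] = open_load.get(fam, 0.0) + t
        if open_load[fam] >= beta * setup[fam]:  # sufficiently large
            batches.append((fam, pending.pop(fam)))
            open_load[fam] = 0.0
    for fam, rest in pending.items():            # flush leftover open batches
        batches.append((fam, rest))
    return batches
```

Tuning then amounts to sweeping `beta` over a grid of candidate values and keeping the one with the smallest resulting total completion time on the given instance class.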
8 Future Work
For future work it might be interesting to investigate whether there is a better algorithm for transforming solutions of the one-time setup problem into solutions of the original problem. One could also try to improve the approximation factor by designing algorithms that solve the original problem directly, without the detour via the one-time setup problem. Another interesting direction is the question whether our lower bound can be increased. For the special case with a constant number of families, it also remains open whether that problem is already NP-hard.
We thank the anonymous reviewers who helped us improve the quality of this paper with their useful comments and by pointing us towards important reference material.
-  Allahverdi, A.: The third comprehensive survey on scheduling problems with setup times/costs. European Journal of Operational Research 246(2), 345–378 (2015)
-  Allahverdi, A., Gupta, J.N., Aldowaisan, T.: A review of scheduling research involving setup considerations. Omega 27(2), 219–239 (1999)
-  Allahverdi, A., Ng, C.T., Cheng, T.C.E., Kovalyov, M.Y.: A survey of scheduling problems with setup times or costs. European Journal of Operational Research 187(3), 985–1032 (2008)
-  Bansal, N., Khot, S.: Optimal long code test with one free bit. In: Proceedings of the 50th Annual IEEE Symposium on Foundations of Computer Science (FOCS). pp. 453–462. IEEE (2009)
-  Chekuri, C., Motwani, R.: Precedence constrained scheduling to minimize sum of weighted completion times on a single machine. Discrete Applied Mathematics 98(1-2), 29–38 (1999)
-  Correa, J.R., Marchetti-Spaccamela, A., Matuschke, J., Stougie, L., Svensson, O., Verdugo, V., Verschae, J.: Strong LP formulations for scheduling splittable jobs on unrelated machines. Mathematical Programming 154(1-2), 305–328 (2015)
-  Correa, J.R., Verdugo, V., Verschae, J.: Splitting versus setup trade-offs for scheduling to minimize weighted completion time. Operations Research Letters 44(4), 469–473 (2016)
-  Divakaran, S., Saks, M.E.: Approximation algorithms for problems in scheduling with set-ups. Discrete Applied Mathematics 156(5), 719–729 (2008)
-  Gerodimos, A.E., Glass, C.A., Potts, C.N., Tautenhahn, T.: Scheduling multi-operation jobs on a single machine. Annals of Operations Research 92, 87–105 (1999)
-  Hall, L.A., Schulz, A.S., Shmoys, D.B., Wein, J.: Scheduling to minimize average completion time: Off-line and on-line approximation algorithms. Mathematics of Operations Research 22(3), 513–544 (1997)
-  Happe, M., Meyer auf der Heide, F., Kling, P., Platzner, M., Plessl, C.: On-The-Fly Computing: A Novel Paradigm for Individualized IT Services. In: Proceedings of the 16th IEEE International Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing (ISORC). pp. 1–10. IEEE Computer Society (2013)
-  Jansen, K., Klein, K., Maack, M., Rau, M.: Empowering the configuration-ip - new PTAS results for scheduling with setups times. In: Proceedings of the 10th Innovations in Theoretical Computer Science Conference (ITCS). LIPIcs, vol. 124, pp. 44:1–44:19. Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2019)
-  Jansen, K., Maack, M., Mäcker, A.: Scheduling on (un-)related machines with setup times. In: Proceedings of the 2019 IEEE International Parallel and Distributed Processing Symposium (IPDPS). pp. 145–154. IEEE (2019)
-  Lawler, E.L.: Sequencing jobs to minimize total weighted completion time subject to precedence constraints. In: Annals of Discrete Mathematics, vol. 2, pp. 75–90. Elsevier (1978)
-  Lenstra, J.K., Kan, A.H.G.R.: Complexity of scheduling under precedence constraints. Operations Research 26(1), 22–35 (1978)
-  Leung, J.Y., Li, H., Pinedo, M.: Order scheduling models: an overview. In: Multidisciplinary scheduling: theory and applications, pp. 37–53. Springer (2005)
-  Monma, C.L., Potts, C.N.: On the complexity of scheduling with batch setup times. Operations Research 37(5), 798–804 (1989)
-  Ng, C.T., Cheng, T.C.E., Yuan, J.J.: Strong NP-hardness of the single machine multi-operation jobs total completion time scheduling problem. Information Processing Letters 82(4), 187–191 (2002)
-  Smith, W.E.: Various optimizers for single-stage production. Naval Research Logistics Quarterly 3(1-2), 59–66 (1956)
-  Woeginger, G.J.: On the approximability of average completion time scheduling under precedence constraints. Discrete Applied Mathematics 131(1), 237–252 (2003)