Compact enumeration for scheduling one machine

03/17/2021
by Nodari Vakhania, et al.

A strongly NP-hard scheduling problem in which non-simultaneously released jobs with delivery times are to be scheduled on a single machine, with the objective to minimize the maximum job full completion time, is considered. We describe an exact implicit enumeration algorithm (IEA) and a polynomial-time approximation scheme (PTAS) for this single-machine environment. Although the worst-case complexity analysis of IEA yields a factor of ν!, ν<n, large sets of permutations of the critical jobs can be discarded by incorporating a heuristic search strategy in which the permutations of the so-called critical jobs are considered in a special priority order. No less importantly, in practice the number ν turns out to be several times smaller than the total number of jobs n, and it becomes smaller as n increases. The above characteristics also apply to the proposed PTAS, whose worst-case time complexity can be expressed as O(κ!κ k n log n), where κ is the number of the long critical jobs (κ≪ν) and the corresponding approximation factor is 1+1/k, with κ<k. We show that the probability that a considerable number of permutations (far fewer than κ!) is enumerated is close to 0. Hence, with high probability, the running time of PTAS is fully polynomial.




1 Introduction

A classical problem of scheduling jobs with release times and due-dates on a single machine, with the objective to minimize the maximum job lateness, is addressed: job $j$ becomes available from its release time $r_j$ (only an already released job can be assigned to the machine), and its due-date $d_j$ is the desirable time for the completion of job $j$. Job $j$ needs to be processed uninterruptedly on the machine during $p_j$ time units (its processing time). All job parameters are integer numbers. A main restriction of the problem is that the machine can handle at most one job at a time. A feasible schedule is a mapping that assigns every job a starting time on the machine so that the just specified restrictions are satisfied. The penalty for the late completion of job $j$ is measured by its lateness $L_j = c_j - d_j$, the difference between its completion time $c_j$ on the machine and its due-date $d_j$. The objective is to find an optimal schedule, a feasible one with the minimum value of the maximum job lateness $L_{\max} = \max_j L_j$.

For approximation purposes, it is more appropriate to consider an equivalent (less common) version of this problem in which the objective is to minimize the maximum job completion time instead of the maximum job lateness (this version has an evident meaning for approximation, unlike the first one, in which the minimum job lateness can be 0 or negative). In this second setting, job due-dates are replaced by job delivery times. The delivery time $q_j$ of job $j$ is the additional amount of time needed for the full completion of job $j$ once it completes on the machine (it is assumed that a job is delivered by an independent unit immediately after its completion on the machine, hence its delivery takes no machine time; in particular, the deliveries of two or more different jobs might be accomplished in parallel, whereas the machine can process some other job during these deliveries). Thus the delivery time $q_j$ contributes to the full completion time of job $j$.

A feasible schedule for the second setting is defined as in the first setting. In a given schedule $S$, $c_j(S)$ is the completion time of job $j$ on the machine, and $C_j(S) = c_j(S) + q_j$ is the full completion time of job $j$ in schedule $S$. The objective is to find a feasible schedule that minimizes the maximum job full completion time $C_{\max}(S) = \max_j C_j(S)$ (the so-called makespan, commonly denoted by $C_{\max}$).

To see the equivalence between the two settings, given an instance of the second version, take a suitably large constant $D$ and define the due-date of job $j$ as $d_j = D - q_j$; this completely defines a corresponding instance of the first setting. Vice versa, given an instance of the first setting, take a magnitude $D \ge \max_j d_j$ and define job delivery times as $q_j = D - d_j$ (see Bratley et al. [2]). Note that jobs with larger delivery times tend to have larger full completion times; hence, the larger the delivery time of a job is, the more urgent it is. Likewise, in the first setting, the smaller the due-date of a job is, the more urgent it is.
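To make the equivalence concrete, here is a minimal sketch of the two reductions (our own data conventions: a job is an integer triple; this is an illustration, not code from the paper):

    # Equivalence of the L_max and C_max settings (Bratley et al. [2]).
    # A job is (r_j, p_j, d_j) in the first setting, (r_j, p_j, q_j) in the second.

    def duedates_to_deliveries(jobs):
        # q_j = D - d_j for a magnitude D >= max_j d_j
        D = max(d for _, _, d in jobs)
        return [(r, p, D - d) for r, p, d in jobs]

    def deliveries_to_duedates(jobs):
        # d_j = D - q_j for a suitably large D, so all due-dates are non-negative
        D = max(q for _, _, q in jobs)
        return [(r, p, D - q) for r, p, q in jobs]

    # With d_j = D - q_j we get L_j = c_j - d_j = (c_j + q_j) - D = C_j - D,
    # so the two objectives differ by the constant D and the same schedule
    # is optimal for both settings.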

According to the conventional three-field notation introduced by Graham et al. [4], the above two settings are abbreviated $1|r_j|L_{\max}$ and $1|r_j,q_j|C_{\max}$, respectively (the first field indicates the single-machine environment; the second field specifies the distinguished job parameters, except the job processing times, which are always present, and the job due-dates, which are always present with the $L_{\max}$ criterion; in the third field the objective criterion is given). The problem, which is known to be strongly NP-hard (Garey and Johnson [3]), is important in its own right because of its numerous real-life applications, and also because it provides strong lower bounds for the more complex job shop scheduling problem.

An important class of approximation algorithms for NP-hard problems is formed by polynomial-time approximation schemes (PTASs), which provide an approximate solution in polynomial time for any fixed approximation ratio. Because of this powerful property, a PTAS is of essential interest for NP-hard problems in general. Let $k$ be an integer number and $C^*$ be the optimal schedule makespan. A PTAS, for a given $k$, finds a solution with makespan no greater than $(1+1/k)C^*$ (a $(1+1/k)$-approximate solution), and its time complexity is polynomial in $n$, i.e., $n$ is not an argument of an exponential function in the formula expressing its time complexity, though there may be an exponential dependence on $k$. If $k$ does not appear as an exponential argument in that formula either, then the scheme is referred to as fully polynomial.

Brief overview of some related work. The first efficient exact implicit enumeration branch-and-bound algorithm for our scheduling problem was proposed in the 1970s by McMahon & Florian [11]. There exist two polynomial-time approximation schemes for the problem, by Hall and Shmoys [7] and Mastrolilli [10]. As to the polynomially solvable special cases, if all the delivery times (due-dates) are equal, then the problem is easily solvable by a venerable $O(n\log n)$ greedy heuristic proposed by Jackson [9]. The heuristic is straightforwardly adapted for the exact solution of the related version in which all job release times are equal. Jackson's heuristic iteratively includes the next unscheduled job with the largest delivery time (or the smallest due-date). An extension of this heuristic, described by Schrage [14], gives a 2-approximate solution for the problem with job release times (and an exact solution for the preemptive case). Iteratively, at each scheduling time given by a job release or completion time, among the jobs released by that time the extended heuristic schedules a job with the largest delivery time. This extended heuristic, to which we shall refer as the LDT (Largest Delivery Time) heuristic, has the same time complexity as its predecessor. As observed in Horn [8], it delivers an optimal solution in case all the jobs have unit processing time (given that the job parameters are integers), and it is easy to see that the adaptation of the heuristic to the preemptive version of the problem is also optimal. The (non-preemptive) problem becomes more complicated for equal (not necessarily unit) job processing times, intuitively because during the execution of a job another, more urgent job may now be released (this cannot happen in the unit-time setting, as job release times are integers). The setting with equal (non-unit) length jobs can still be solved in polynomial time. Garey et al. [5] used a union-find tree with path compression and achieved an improved time complexity of $O(n\log n)$ (not an easily accomplished achievement). The author in [16] has described an algorithm for a more general setting in which a job processing time can take one of two allowable values, and has recently shown that the problem remains polynomial for mutually divisible job processing times [18], whereas it becomes strongly NP-hard for a slightly more general family of processing times [17].
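For concreteness, the following is a compact sketch of the LDT heuristic under the same job convention as above (an illustration of the rule, not the authors' code); a heap keyed by delivery times yields the $O(n\log n)$ bound:

    import heapq

    def ldt_schedule(jobs):
        # jobs[j] = (r_j, p_j, q_j); returns (sequence, start times, makespan).
        by_release = sorted(range(len(jobs)), key=lambda j: jobs[j][0])
        ready = []                        # max-heap on q_j (negated for heapq)
        starts, sequence = {}, []
        t, i, makespan = 0, 0, 0
        while i < len(by_release) or ready:
            while i < len(by_release) and jobs[by_release[i]][0] <= t:
                j = by_release[i]
                heapq.heappush(ready, (-jobs[j][2], j))
                i += 1
            if not ready:                 # machine idle: jump to next release
                t = jobs[by_release[i]][0]
                continue
            _, j = heapq.heappop(ready)   # released job with the largest q_j
            starts[j] = t
            t += jobs[j][1]
            makespan = max(makespan, t + jobs[j][2])  # full completion c_j + q_j
            sequence.append(j)
        return sequence, starts, makespan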

There are better-than-2-approximation polynomial-time algorithms for the general setting. Potts [13] showed that by repeated application of the LDT-heuristic ($n$ times) the performance ratio can be improved to 3/2, resulting in $O(n^2\log n)$ running time. Nowicki and Smutnicki [12] have proposed another 3/2-approximation algorithm with time complexity $O(n\log n)$. Hall and Shmoys [7] showed that the application of the LDT-heuristic to the original and to a specially defined reversed problem leads to a further improved approximation ratio of 4/3, in time $O(n^2\log n)$.

Our contributions. Our results essentially rely on the observation that only some group of jobs contributes to the complexity status of the problem, in the sense that if there were no such jobs the problem would be solvable in polynomial time. These "critical" jobs are identified, and the solution process is separated into two main phases: in the first phase the non-critical jobs are scheduled in low-degree polynomial time, and in the second phase an enumeration procedure for at most $\nu!$ permutations of the critical jobs is combined with a procedure that inserts these jobs, according to the order in such a permutation, into the earlier constructed partial schedule of the non-critical jobs. Based on this approach, we propose two algorithms for the problem: an implicit enumeration algorithm (IEA) that constructs an optimal solution, and a polynomial-time approximation scheme (PTAS) that guarantees any desired approximation. According to the above observation, the whole set of jobs is partitioned into four types of jobs (one of which consists of the critical jobs). Roughly, the major part of the jobs is constituted by the type (2), type (3) and type (4) jobs (the non-critical ones), which are scheduled at a low-degree polynomial cost. After that, only the critical type (1) jobs remain unscheduled. Non-dominated permutations of these jobs are considered; each such permutation is incorporated into the earlier obtained partial schedule of the type (2), type (3) and type (4) jobs, resulting in a complete solution respecting that permutation (see the sketch below). This kind of framework can be applied to other scheduling problems with release and delivery times, and it permits a deeper structural analysis of these problems. For example, we easily establish ties of the single-machine scheduling problem with the SUBSET SUM problem and arrive at a condition under which the scheduling problem can be solved in pseudo-polynomial time (see Section xxxx).
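The following heavily simplified skeleton conveys the overall shape of this framework; it brute-forces all permutations of the critical jobs and fits the non-critical jobs in by the LDT rule, whereas the actual IEA enumerates only consistent non-dominated permutations, in a priority order, over a structured partial schedule (job conventions as in the sketches above; everything here is our own simplification):

    from itertools import permutations

    def best_over_permutations(noncrit, crit):
        # Try every order of the critical jobs and keep the best makespan.
        best = None
        for perm in permutations(range(len(crit))):
            mk = makespan_respecting(noncrit, crit, perm)
            if best is None or mk < best[0]:
                best = (mk, perm)
        return best

    def makespan_respecting(noncrit, crit, perm):
        # Critical jobs are forced to appear in the order of perm; whenever the
        # next one is not yet released, a released non-critical job with the
        # largest delivery time is scheduled instead (the LDT rule).
        t, nxt, mk = 0, 0, 0
        pending = set(range(len(noncrit)))
        while nxt < len(perm) or pending:
            if nxt < len(perm) and crit[perm[nxt]][0] <= t:
                r, p, q = crit[perm[nxt]]
                nxt += 1
            else:
                avail = [j for j in pending if noncrit[j][0] <= t]
                if avail:
                    j = max(avail, key=lambda j: noncrit[j][2])
                    pending.discard(j)
                    r, p, q = noncrit[j]
                else:                     # idle: jump to the next release time
                    nxt_r = [noncrit[j][0] for j in pending]
                    if nxt < len(perm):
                        nxt_r.append(crit[perm[nxt]][0])
                    t = min(nxt_r)
                    continue
            t += p
            mk = max(mk, t + q)
        return mk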

By construction, the worst-case time complexity of the proposed algorithms is exponential only in the number of the type (1) (critical) jobs. To the best of our knowledge, no earlier existing implicit enumeration algorithm or polynomial approximation scheme has this property. For the purpose of polynomial-time approximation, a partition of the set of jobs into the small (short) and the large (long) jobs is commonly used for job scheduling problems, in particular for our problem. Such a partition is done so that the number of the long jobs is bounded from above by a constant. Then the complete enumeration of all possible distributions of these long jobs takes constant time, for any fixed $k$. In particular, this kind of enumeration yields an exponential (in $k$) factor in the earlier approximation schemes for the problem, because of the possible assignments of the starting times to the long jobs.

The proposed PTAS also uses the partition of jobs into short and long ones. At the same time, both PTAS and IEA are based on the partition of the given set of jobs into the above-mentioned four types of jobs. IEA applies the partitioning to the whole set of jobs, whereas PTAS partitions only the set of the long jobs; the short jobs are scheduled afterwards without violating the desired approximation factor. In both PTAS and IEA, a low-degree polynomial-time procedure schedules the type (2)-(4) jobs. An exponential-time subroutine is required to verify the permutations of only the type (1) jobs that may potentially be consistent with an optimal solution. Throughout, we use $\nu$ for the number of all type (1) jobs (related to IEA), and $\kappa$ for the number of only the long type (1) jobs (related to PTAS). Hence $\kappa \le \nu$, and we also maintain $\kappa < k$. Note that parameter $k$ is part of the input for PTAS, whereas the parameters $\kappa$ and $\nu$ are not part of the inputs of PTAS and IEA, respectively, but they are bounded from above by $k$ and $n$, respectively.

The worst-case time complexity of PTAS is $O(\kappa!\,\kappa k n\log n)$, with which a $(1+1/k)$-approximation to the optimum is obtained, and that of IEA contains the factor $\nu!$ in place of $\kappa!$ (though at first glance the worst-case time complexity of PTAS looks similar to that of IEA, our job partitioning in the case of PTAS applies to the set of the long jobs only, hence $\kappa \le \nu$). The exponential-time subroutine augments the partial schedule of the type (2), type (3) and type (4) jobs created by the earlier mentioned polynomial-time procedure with the type (1) jobs in different (potentially optimal) ways, which are represented by the corresponding permutations of the type (1) jobs. Up to $\nu!$ and $\kappa!$ permutations of the type (1) jobs, respectively, can theoretically be considered in IEA and in PTAS (recall that the enumeration procedure of PTAS considers only the long jobs), and at least one of them is consistent with a complete optimal solution. While both IEA and PTAS incorporate such a permutation into the partial schedule of the type (2)-(4) jobs, PTAS has one additional stage at which a partial schedule of the type (1)-(4) jobs consisting solely of the long jobs is completed with the short jobs, again in low-degree polynomial time. So the total number of the enumerated partial schedules is bounded from above by $\nu!$ in IEA and by $\kappa!$ in PTAS, instead of the considerably larger enumeration counts in the earlier approximation schemes.

The worst-case bounds $\nu!$ and $\kappa!$ of IEA and PTAS do not actually reflect the practical behavior of these algorithms. As to PTAS, the employed algorithmic framework allows a deeper analysis of its running time. In particular, the probability that PTAS enumerates a given number of consecutive permutations decreases rapidly with that number: it approaches 0 already for moderate numbers of enumerated permutations (far below $\kappa!$), and it is practically 0 for numbers exponential in $\kappa$. Thus, with a very high probability, PTAS finds a $(1+1/k)$-approximate solution in fully polynomial time.

Independently of the above probabilistic estimation, which applies only to PTAS, the following observations support the construction ideas of both IEA and PTAS. First, it is worth mentioning that the order in which the permutations of the type (1) jobs are considered matters, and both IEA and PTAS are flexible in this respect. In practice, a properly selected order may drastically speed up the performance, as large subsets of the set of all permutations can be discarded. Both algorithms start with a natural steady permutation of the type (1) jobs and first try to obtain a desired solution based on this single permutation. Explicit conditions are derived for when this is possible.

Since the problem is strongly NP-hard, a pseudo-polynomial time algorithm is unlikely to exist for it; hence an exponential time dependence on $n$ in the time complexity expression of any exact algorithm is unavoidable. Likewise, since there may exist no fully polynomial approximation scheme for a strongly NP-hard problem unless P = NP (see Garey and Johnson [3]), a fully polynomial time approximation scheme is unlikely to exist for our scheduling problem; hence an exponential time dependence on $k$ is unavoidable. Note that the worst-case exponential dependence on $\nu$ and $\kappa$ in the proposed algorithms is expressed explicitly, without the big-$O$ notation, i.e., no constant is hidden in the exponent (such a "clean" computational complexity with respect to the parameters $\nu$ and $\kappa$ is not only beneficial in the complexity analysis, but is also helpful in the analysis of the inherent properties of the problem and in finding approximate solutions in polynomial time). Although an accurate theoretical estimation of the values of the parameters $\nu$ and $\kappa$ is difficult, a recently conducted experimental study on about a thousand randomly generated instances of the problem in [1] has revealed, in particular, the values that the parameter $\nu$ takes in practice. As can be seen from the experimental study reported in [1], for a considerable share of the tested instances, independently of their size, $\nu$ turned out to be a relatively small integer. For small problem instances it was, on average, several times smaller than $n$; for larger instances with up to 10000 jobs its relative share became smaller still (Table 1 in [1]).

A brief overview of the earlier polynomial-time approximation schemes. Here we briefly describe the earlier mentioned polynomial-time approximation schemes to outline the similarities and the differences between those schemes and our PTAS. The first PTAS for the problem was suggested by Hall and Shmoys [7]; they described two PTASs with different time complexities. Mastrolilli [10] proposed another approximation scheme for the more general identical-processor setting and derived its aggregated time complexity.

The above approximation schemes rely on the fact that the set of job release times can be reduced to a subset with a constant number of release times, i.e., a number dependent only on $k$ and not on $n$ (the better the approximation, the larger this number). As shown in [7], this can be done without essentially affecting the approximation ratio. In the first approximation scheme from [7], the job processing times are also rounded to the nearest multiples of the total job processing time divided by a magnitude depending on $k$. Job release and delivery times are scaled respectively, and an instance in which the total number of distinct job processing times is bounded by a constant (instead of $n$) is obtained. This instance is solved by a dynamic programming algorithm in which the whole scheduling horizon is divided into sub-intervals, and all possible distributions of the total processing time of the jobs assigned to each of these intervals are considered (given one such distribution, the jobs in each interval are scheduled by the LDT-heuristic, which includes all the released jobs optimally, in non-increasing order of their delivery times). Since every interval has a bounded length and the number of intervals is a constant, the number of possible distributions to be considered by the dynamic programming algorithm is bounded; because of the accomplished rounding of the job processing times, this magnitude becomes a constant depending only on $k$.

In the second approximation scheme described in [7], as in the one from [10], two types of jobs, short and long (or small and large) ones, are treated differently. The essential difference between these two types of jobs comes from the fact that the total number of the long jobs is bounded by a constant (it does not depend on $n$). The whole scheduling horizon is again divided into (disjoint) subintervals according to the reduced set of job release times. All possible ways in which the long jobs can be accommodated into these subintervals are considered. The set of the short jobs to be included in every subinterval is determined by again "guessing" their total processing time. The LDT-rule is applied to schedule every set of short jobs together with the corresponding long jobs in each subinterval (again, since all these jobs are released within this subinterval, the LDT-heuristic will include them in an optimal order).

In the approximation scheme from [10], short and long jobs are formed in a different way, also yielding a constant number of small jobs. The original problem instance is again converted into one in which the number of possible job release, processing and delivery times is bounded by a constant, and the whole set of jobs is partitioned into (disjoint) subsets in such a way that the jobs in each subset have the same release and delivery times. The number of such subsets becomes bounded by a constant in the modified problem instance. Then jobs from every subset are merged, resulting in longer composed jobs with the same common release and delivery times as their component (merged) jobs. In every subset, at most one small (non-composed) job remains. It is shown that the approximation ratio is kept within the allowable range by considering the so-formed set of jobs. Since the number of the above subsets is constant, the total number of the small jobs is constant, and the number of the large (composed) jobs is also constant. Then complete enumeration already yields a non-exponential time dependence on $n$. The time complexity is adjusted by considering only a constant number of possible starting times solely for the large jobs, similarly to the second approximation scheme from [7] (and this still guarantees the desired approximation ratio). Again, for every such potentially useful configuration, the corresponding feasible schedule is created by the LDT-heuristic.

In summary, the schemes from [7] and [10] carry out a complete enumeration of all possible combinations, yielding a tight exponential-time dependence on $k$. The proposed algorithms are more flexible in this respect and allow subsets of permutations to be discarded; hence the explicit exponential bounds $\nu!$ and $\kappa!$ are generally not attained.

In the next section we give the preliminaries; in Sections 3 and 4 we describe IEA and PTAS, respectively. Section 5 contains the concluding notes.

2 Preliminaries

In this section we outline some previously known concepts, omitting the proofs of the earlier known results (the reader is referred to [15], [16] and [18] for the details and illustrative examples). In an LDT-schedule $S$, consider a longest consecutive job sequence ending with a job $o$ that realizes the maximum full completion time (i.e., $C_o(S) = C_{\max}(S)$) and such that no job from this sequence has a delivery time less than $q_o$. Then this sequence is called a kernel in schedule $S$, and its latest scheduled job $o$ is the corresponding overflow job (a consecutive sequence is a sequence of successively scheduled jobs without idle-time intervals in between them). Abusing the terminology a little, we will refer to a kernel interchangeably as a sequence and as the corresponding job set, and we denote the earliest (the first) kernel in schedule $S$ by $K(S)$.

We observe that: (i) the numbers of kernels and of overflow jobs in schedule $S$ coincide; (ii) no gap (an idle-time interval) exists within a kernel; (iii) the overflow job $o$ is either succeeded by a gap or by a job $i$ with $q_i < q_o$ (hence $C_i(S) \le C_o(S)$).
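These observations suggest a direct way to locate a kernel, sketched below under our own schedule representation (an illustration, not the authors' code): take the latest overflow job $o$, then scan left while the sequence stays gapless and no less urgent than $o$.

    def find_kernel(jobs, order, starts):
        # jobs[j] = (r_j, p_j, q_j); `order` is the processing sequence of the
        # schedule and starts[j] the start times.  Returns (kernel, overflow job).
        full = {j: starts[j] + jobs[j][1] + jobs[j][2] for j in order}
        o_pos = max(range(len(order)), key=lambda i: (full[order[i]], i))
        o = order[o_pos]
        q_o = jobs[o][2]
        k = o_pos
        while k > 0:
            prev, cur = order[k - 1], order[k]
            gapless = starts[prev] + jobs[prev][1] == starts[cur]
            if gapless and jobs[prev][2] >= q_o:
                k -= 1                    # prev is at least as urgent: extend
            else:
                break                     # a gap, or a less urgent job: stop
        # If the job just before the kernel is gapless and less urgent than o,
        # it is the kernel's delaying emerging job (defined below).
        return order[k:o_pos + 1], o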

Suppose job $i$ precedes job $j$ in LDT-schedule $S$. We will say that job $i$ pushes job $j$ in schedule $S$ if the LDT-heuristic could have scheduled job $j$ earlier were job $i$ forced to be scheduled behind job $j$.

A block in an LDT-schedule is a maximal consecutive part of it consisting of successively scheduled jobs without any gap in between, which is preceded and succeeded by a (possibly 0-length) gap (in this sense, intuitively, a block is an independent part of a schedule). Note that every kernel in schedule $S$ is contained within some block, and that a block does not necessarily start with a kernel (the latter will be the case, for example, if the corresponding kernel is immediately preceded and pushed by a job $e$ with $q_e < q_o$; alternatively, there may exist a job $i$ scheduled before the kernel in its block with $q_i \ge q_o$ that is separated from the kernel by a job with a delivery time smaller than $q_o$). Similarly, a kernel does not necessarily end its block (see point (iii) above).

Thus a block may contain one or more kernels. If it contains two or more kernels, it also contains at least one non-kernel job; if it contains a single kernel, then it may or may not contain other non-kernel jobs (in the latter case the block and the kernel coincide).

Suppose kernel $K$ of block $B$ is immediately preceded and pushed by a job $e$ with $q_e < q_o$ (job $e$ pushes the earliest scheduled job of this kernel). In general, more than one job with a delivery time smaller than $q_o$ scheduled in block $B$ may exist. We call such a job an emerging job, and we call the emerging job $e$ above (the one scheduled immediately before the first job of kernel $K$ in schedule $S$) the delaying emerging job for kernel $K$ (so an emerging job may appear, in general, either before or after kernel $K$ in schedule $S$).

We note that if there is a job $i$ with $q_i \ge q_o$ scheduled before kernel $K$ in block $B$ but not belonging to the kernel, then that job is neither a kernel job nor an emerging job. Note also that any emerging job for kernel $K$ is scheduled within block $B$. In general, it is easy to see that only a rearrangement of the jobs of block $B$ may potentially reduce $C_{\max}(S)$.

Given an LDT-schedule $S$ with the delaying emerging job $l$ for kernel $K = K(S)$, let

$\delta(K, S) = c_l(S) - \min_{j \in K} r_j$

be the delay of kernel $K$ in that schedule (i.e., the forced right-shift imposed by the delaying job $l$ on the jobs of kernel $K$), and let $C_{\max}(S)$ be the makespan of schedule $S$. The following known property, which implicitly defines a lower bound on the optimal schedule makespan $C^*$, easily follows from the above formula and the observation that no job of kernel $K$ could have been released by the time job $l$ was started in schedule $S$ (as otherwise the LDT-heuristic would have included the former job instead of job $l$):

Property 1

$\delta(K, S) < p_l$, hence $C_{\max}(S) - C^* < p_l$.
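In our notation, with $s_l(S)$ the starting time of job $l$ and $r(K) = \min_{j \in K} r_j$, the chain behind Property 1 can be written out as follows:

    % No job of K is released by the time l starts, hence r(K) > s_l(S):
    \delta(K,S) \;=\; c_l(S) - r(K) \;<\; c_l(S) - s_l(S) \;=\; p_l ,
    \qquad
    C^{*} \;\ge\; C_{\max}(S) - \delta(K,S) \;>\; C_{\max}(S) - p_l .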

There are known, easily seen lower bounds on $C^*$, e.g.,

$C^* \ge \max\{\,P,\ \max_j (r_j + p_j + q_j)\,\},$

where $P = \sum_j p_j$ stands for the total processing time of all jobs. We may also easily observe that there are certain values of the constant $k$ for which a $(1+1/k)$-approximation is always achieved ($S$ and $l$ are defined as above):

Property 2

$C_{\max}(S) \le (1 + 1/k)\,C^*$ if $p_l \le P/k$.

Proof. By Property 1, $C_{\max}(S) < C^* + p_l \le C^* + P/k \le C^* + C^*/k = (1 + 1/k)\,C^*$.

From the above two properties it is apparent that the approximation factor of solution $S$ depends on $\delta(K, S)$, which, in turn, depends on the length of the delaying emerging job $l$. Based on this observation, we distinguish two groups of jobs, consisting of the short and the long ones. Job $j$ is short if $p_j \le P/k$; otherwise, it is long.

Lemma 1

An LDT-schedule containing a kernel with a short delaying emerging job is a $(1+1/k)$-approximation schedule.

Proof. Similar to the proof of Property 2, with the difference that the inequality $p_l \le P/k$ now immediately follows from the shortness of job $l$.

Lemma 2

$\kappa < k$, i.e., there are fewer than $k$ long jobs.

Proof. If there were $k$ or more long jobs, then their total length would exceed $k(P/k) = P$, a contradiction.

Because of the above properties, we use $k$ for the approximation parameter and $\kappa$ for the number of the long jobs.

2.1 Alternative LDT-schedules

The LDT-heuristic is a useful tool for both the generation and the analysis of feasible schedules. IEA starts with the creation of the first feasible LDT-schedule $\sigma$, obtained by applying the LDT-heuristic to the originally given problem instance. Alternative LDT-schedules are created by the activation of emerging jobs, as described a bit later. During the construction of the next LDT-schedule, the scheduling time $t$ is iteratively determined as either the completion time of the job scheduled last before time $t$ or/and the release time of the earliest released yet unscheduled job, whichever magnitude is larger. The (partial) LDT-schedule constructed so far by time $t$ is denoted by $S_t$, and $j_t$ denotes the job that is scheduled at time $t$ by the LDT-heuristic.

$t$ is said to be a conflict scheduling time in schedule $S$ if within the execution interval of job $j_t$ (including its right endpoint) another job $j$ with $q_j > q_{j_t}$ gets released (job $j_t$ pushes a more urgent job $j$). Among the jobs conflicting with job $j_t$, the earliest released one is considered (ties being broken arbitrarily).

Lemma 3

If during the construction of an LDT-schedule no conflict scheduling time occurs, then this schedule is optimal. In particular, at any scheduling time $t$, no job released within the execution interval of job $j_t$ can initiate a kernel in the schedule unless it conflicts with job $j_t$.

Proof. The first part is a well-known nice property of the LDT-heuristic (see, for example, [15]). The second part follows from the definition of a kernel.

If the above lemma already applies to schedule $\sigma$, then there is no need for any enumerative procedure. Otherwise, a special pre-processing step detecting the potential kernels is carried out. We first specify how alternative LDT-schedules are created.

Given an LDT-schedule $S$ with kernel $K$ and the corresponding delaying emerging job $l$, in an alternative LDT-schedule job $l$ is forced to be scheduled after all the jobs of kernel $K$; likewise, all the emerging jobs included after kernel $K$ in schedule $S$ remain included after that kernel in the newly created LDT-schedule. The alternative LDT-schedule, in which the earliest job of kernel $K$ will be scheduled at its release time, is constructed by applying the LDT-heuristic to a modification of the original problem instance in which the release time of job $l$, and that of any emerging job included after kernel $K$ in schedule $S$, is increased so that it is no smaller than the release time of any job of kernel $K$. By the LDT-heuristic, job $l$ and any emerging job included after kernel $K$ in schedule $S$ will then appear after all the jobs of kernel $K$ in the resultant LDT-schedule. As a result, a job not from kernel $K$ may not again push a job of that kernel.
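In code, the activation of a delaying emerging job amounts to raising a few release times and re-running the LDT rule (a sketch reusing ldt_schedule from the earlier sketch; the data conventions are again our own):

    def activate(jobs, kernel, emerging_after, l):
        # Raise the release time of the delaying emerging job l, and of every
        # emerging job currently placed after the kernel, to the latest release
        # time among the kernel jobs; the LDT rule then schedules all kernel
        # jobs (which are more urgent) before them.
        r_K = max(jobs[j][0] for j in kernel)
        modified = list(jobs)
        for j in set(emerging_after) | {l}:
            r, p, q = modified[j]
            modified[j] = (max(r, r_K), p, q)
        return ldt_schedule(modified)     # from the LDT sketch above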

In the resultant LDT-schedule, the next kernel can similarly be determined. If that kernel possesses a delaying emerging job, this job is activated in the same way, resulting in yet another LDT-schedule; one may proceed similarly, activating the delaying emerging job of the kernel of each newly created LDT-schedule, and so on.

2.2 Decomposition of kernels

In this subsection we expose important inherent structural properties of kernels that we use later in IEA. We will consider the jobs of a kernel independently and construct an independent LDT-schedule solely for these jobs. Intuitively, we aim to extract the "atomic components" of a sequence, consisting of the most urgent jobs of that sequence, based on which we will partition the set of jobs. Below we briefly introduce the relevant terminology. The reader is referred to Section 3 of [18] for a more detailed introduction to the concepts presented in this subsection and to the kernel decomposition procedure, which we briefly describe here.

First, we observe that a kernel may not be an atomic component, in the sense that it may be split further. Indeed, let $l$ be the delaying emerging job of kernel $K = K(\sigma)$ in the initial LDT-schedule $\sigma$, and let $F$ be the fragment of schedule $\sigma$ containing the delaying emerging job $l$ and the kernel $K$ (abusing the notation, we will also use $F$ for the fragment of schedule $\sigma$ without job $l$). Suppose we (temporarily) omit the delaying emerging job $l$ and apply the LDT-heuristic solely to the set of jobs of kernel $K$. Then we obtain a partial LDT-schedule that we denote by $\sigma_K$. We may easily observe that the processing order of the jobs of kernel $K$ in partial schedule $\sigma_K$ does not necessarily coincide with the processing order of these jobs in fragment $F$.

We call a job an anticipated job in schedule $\sigma_K$ if it is rescheduled to an earlier position in that schedule compared to its prior position in fragment $F$; more precisely, job $i$ is anticipated if there is a job $j$, $j \ne i$, such that $j$ precedes $i$ in fragment $F$ (kernel $K$) and $i$ precedes $j$ in partial schedule $\sigma_K$. It immediately follows that if the overflow job in schedule $\sigma_K$ is pushed by an anticipated job, then that anticipated job becomes an emerging job in that partial schedule, and that there may exist no emerging job in $\sigma_K$ which is not an anticipated one.

It is also easy to see that if there occurs an anticipated job in schedule $\sigma_K$, then the jobs of kernel $K$ are redistributed into one or more continuous parts separated by gaps in that schedule; we will refer to these parts as the substructure components of kernel $K$ and will say that kernel $K$ collapses into these components.

If a substructure component contains no anticipated job, then it is called uniform; otherwise it is mixed. It is easy to see that if kernel $K$ collapses into a single substructure component, then this component is mixed. Looking at $\sigma_K$ as an independent schedule, a sequence of substructure components, we can recurrently determine the kernels and the corresponding delaying emerging jobs in it and apply the above definitions to the newly created substructure components. We consider each substructure component also as an independent schedule with its own kernel(s).

Observe that a kernel in a mixed substructure component possesses a delaying emerging job, whereas a kernel in a uniform substructure component, which we call an atomic kernel, possesses no delaying emerging job (as any emerging job is an anticipated job).

Intuitively, we may look at the decomposition procedure for kernel $K$ as a process of "filtering" the jobs of that kernel from all the potential emerging jobs "hidden" within this kernel. The decomposition procedure applied to kernel $K$ obtains, in a recurrent fashion, from every mixed substructure component (initially from kernel $K$ itself, then from the components of partial schedule $\sigma_K$, etc.) a uniform component, by iteratively omitting the (anticipated) delaying emerging job and rescheduling the remaining jobs by the LDT-heuristic; the delaying emerging jobs omitted during the decomposition process are set aside (they will form the type (1.2) jobs defined in the next subsection). The substructure components are considered recursively in their precedence order in partial schedule $\sigma_K$. If the next considered component is already uniform, it remains unchanged and the next (in the recurrence order) substructure component is processed. The decomposition process continues by recursive calls of the procedure for all the arisen mixed substructure components until no mixed substructure component remains. The output of the decomposition procedure invoked for kernel $K$ is a partial schedule denoted by $D(K)$: the sequence of the obtained uniform substructure components merged on the time axis according to the absolute time scale (relative to the starting time of each original substructure component). Every substructure component in schedule $D(K)$ is atomic in the sense that it cannot be split further, and the maximum job full completion time in every such uniform component is a lower bound on the optimal makespan.
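A simplified sketch of the decomposition is given below (it reuses ldt_schedule and find_kernel from the earlier sketches and makes several simplifications; component indices in the recursive output are relative to the reindexed remainder; see [18] for the exact procedure):

    def decompose(jobs):
        # `jobs` holds the kernel's jobs, (r, p, q), indexed in their original
        # processing order.  Returns the uniform components and the delaying
        # emerging jobs omitted along the way (the future type (1.2) jobs).
        n = len(jobs)
        seq, starts, _ = ldt_schedule(jobs)          # reschedule the kernel alone
        pos = {j: k for k, j in enumerate(seq)}
        anticipated = {j for j in range(n)
                       if any(i < j and pos[i] > pos[j] for i in range(n))}
        components, cur = [], [seq[0]]               # split at the gaps
        for a, b in zip(seq, seq[1:]):
            if starts[a] + jobs[a][1] == starts[b]:
                cur.append(b)
            else:
                components.append(cur); cur = [b]
        components.append(cur)
        uniform, omitted = [], []
        for comp in components:
            if not (anticipated & set(comp)):
                uniform.append(comp)                 # already uniform: keep
                continue
            kern, _ = find_kernel(jobs, comp, starts)
            first = comp.index(kern[0])
            if first == 0:                           # no delaying emerging job
                uniform.append(comp)
                continue
            omitted.append(comp[first - 1])          # omit the delaying job ...
            rest = [jobs[j] for j in sorted(comp) if j != comp[first - 1]]
            u, o = decompose(rest)                   # ... and recurse
            uniform += u; omitted += o
        return uniform, omitted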

Lemma 4

(i) For any kernel $K$, the decomposition procedure runs in fewer than $n_K$ recursive steps in time $O(n_K^2 \log n_K)$, where $n_K$ ($n_K \le n$) is the total number of jobs in kernel sequence $K$.
(ii) The maximum job full completion time in schedule $D(K)$ is a lower bound on the optimum schedule makespan.

Proof. The number of recursive calls of the procedure is clearly less than the total number of the occurred anticipated jobs. Part (i) follows, as the LDT-heuristic is applied to every mixed substructure component processed by the procedure (and each such component contains fewer than $n_K$ jobs). Part (ii) follows from the fact that the full completion time of the last scheduled job of every created uniform substructure component is the minimum possible full completion time over all the jobs of that component: indeed, the jobs of that component are scheduled in an optimal LDT-order and the earliest scheduled job of the component starts at its release time. Hence, no rearrangement of the jobs of such a component may yield a smaller maximum full completion time of a job of that component.

2.3 Forming initial configuration

The pre-processing procedure creates the initial configuration, consisting of the initial set of kernels and the initial job partition, at its initial step, which also determines the initial set of permutations. As we describe in the next section, the iterative step of the pre-processing procedure keeps track of the current configuration, which includes the current set of kernels (consisting of the kernels detected so far), the corresponding job partition (with the associated parameter $\nu$) and the yielded set of permutations of the type (1) jobs.

Forming the initial set of kernels. The initial step of the pre-processing procedure detects the initial set of kernels (based on which the initial partition of the jobs into the four types is created) as follows. At the initial iteration 0, schedule $\sigma_0 = \sigma$ is created. Iteratively, at iteration $i > 0$, kernel $K_i$ with the delaying emerging job $l_i$ is detected in the LDT-schedule $\sigma_{i-1}$ of the previous iteration; the LDT-schedule $\sigma_i$ of iteration $i$ is obtained by activating job $l_i$ for kernel $K_i$. The initial step of the pre-processing procedure halts at an iteration at which either there is no kernel with a delaying emerging job in the current schedule, or/and any such kernel contains jobs of a kernel of an earlier iteration. It outputs the formed set of kernels $K_1, \dots, K_\mu$.

The decomposition procedure from the previous subsection is applied to every kernel $K_i$ from this set, and the partial schedules $D(K_i)$, $i = 1, \dots, \mu$, are created. These partial schedules substitute the corresponding kernels in the further generated schedules. When this causes no confusion, from here on we will refer to kernel $K_i$ and partial schedule $D(K_i)$ interchangeably.

The four types of jobs. Based on the decomposition of each kernel $K_i$ and the partial schedule $D(K_i)$, the initial partition of the set of jobs into four disjoint subsets is formed as follows.

Type (1) jobs are the emerging jobs, which are divided into three categories. The first two categories are defined below; the third category of the type (1) jobs is defined later. A type (1.1) job is an emerging job for any of the kernels $K_1, \dots, K_\mu$. For each kernel $K_i$, the type (1.2) jobs associated with that kernel are formed from the delaying emerging (anticipated) jobs omitted during the decomposition of kernel $K_i$; i.e., these are the ("internal" for kernel $K_i$) delaying emerging jobs. Note again that a type (1.2) job associated with kernel $K_i$ does not belong to schedule $D(K_i)$.

The type (2) jobs associated with kernel $K_i$ are the jobs of the atomic kernel of schedule $D(K_i)$, and the type (3) jobs associated with that kernel are formed by the jobs of the remaining uniform substructure components of that schedule. The remaining jobs are the type (4) jobs (they are neither emerging nor kernel jobs).

One may look at a complete LDT-schedule as a sequence of sequences of the above four types of jobs. Note again that every kernel $K_i$ has its own associated type (1.2), type (2) and type (3) jobs, whereas a type (1.1) job can be associated with one or more (successive) kernels (if it is an emerging job for all these kernels). Likewise, a type (4) job is associated with a particular kernel or with a pair of neighboring kernels: if a type (4) job is scheduled between two adjacent kernels $K_i$ and $K_{i+1}$ (before the first kernel $K_1$ or after the last kernel $K_\mu$), then it is not an emerging job for either of these (two) kernels and is to be scheduled in between these kernels (before the first or after the last kernel, respectively).

3 Implicit enumeration algorithm

Now we are ready to start the description of IEA, which constructs an optimal solution to our problem. The complete schedules created by IEA are formed step by step, the jobs of each type being added at a particular step. Hence, the partitioning of the set of jobs is crucial for the functioning of the algorithm.

3.1 Stage 0: Constructing initial partial schedule of the type (2)-(4) jobs

In this subsection we describe the initial stage 0 of IEA which, based on the initial configuration created at the initial step of the pre-processing procedure, creates a partial schedule of the type (2)-(4) jobs (the type (2), type (3) and type (4) jobs) from the initial job partition, as follows. First, the partial schedules $D(K_i)$, $i = 1, \dots, \mu$, are merged on the time axis (the time interval of each partial schedule being determined in the absolute time scale). Since no two adjacent partial schedules may intersect in time, the partial schedule (containing all the type (2) and type (3) jobs) constructed in this way is well-defined. Next, every type (4) job is pasted from schedule $\sigma$ into this schedule respecting its execution interval in schedule $\sigma$ (i.e., it is inserted within the same time interval as in schedule $\sigma$). The resultant partial schedule, which we denote by $\rho$, is again well-defined, since the execution interval of each type (4) job in schedule $\sigma$ is idle in the merged schedule (see point (a) of the following lemma and also the proof of Lemma 6).
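The well-definedness claim is easy to check mechanically; a tiny sketch with our own interval representation:

    def merge_stage0(pieces):
        # `pieces`: the (start, end, payload) intervals of the partial schedules
        # D(K_i) and of the pasted type (4) jobs, in absolute time.  The merged
        # schedule is well-defined iff no two intervals overlap.
        timeline = sorted(pieces, key=lambda iv: (iv[0], iv[1]))
        for (s1, e1, _), (s2, e2, _) in zip(timeline, timeline[1:]):
            assert e1 <= s2, "overlap: merged stage-0 schedule is ill-defined"
        return timeline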

Lemma 5

There is an optimal schedule $S^*$ in which:

(a) Any type (4) job is included between the intervals of two adjacent partial schedules $D(K_i)$ and $D(K_{i+1})$ (or before the interval of the first partial schedule $D(K_1)$, or after that of the last partial schedule $D(K_\mu)$). In particular, there is no intersection of the execution interval of a type (4) job in $S^*$ with the time interval of any uniform substructure component from the schedules $D(K_i)$.

(b) If a type (1.1) job is an emerging job for more than one kernel, it is scheduled either before or after the jobs of each of these kernels in $S^*$ (but it does not appear after the jobs of any kernel for which it is not an emerging job in schedule $\sigma$).

(c) A type (1.2) job associated with kernel $K_i$ is scheduled within the interval of partial schedule $D(K_i)$, or after that interval but before the interval of the succeeding kernel.

(d) Any type (2) or type (3) job associated with kernel $K_i$ is scheduled within the interval of partial schedule $D(K_i)$ or after that time interval. In the latter case, it is pushed by a type (1.2) job associated with kernel $K_i$ or/and by a corresponding type (1.1) job.

Proof. Part (a) holds since no type (4) job can be an emerging job for any substructure component of the corresponding kernel: a type (4) job scheduled between the partial schedules $D(K_i)$ and $D(K_{i+1})$ is less urgent than any job from schedule $D(K_i)$, but it is more urgent than any job from schedule $D(K_{i+1})$ (as otherwise it would be a type (1.1) job for kernel $K_{i+1}$). Hence, it cannot be included after partial schedule $D(K_{i+1})$ in $S^*$, and it is not difficult to see that there is no benefit in including such a job in between the jobs of this partial schedule. Likewise, the type (4) jobs included after the last partial schedule $D(K_\mu)$ can be included after all the jobs of that sequence (since they are less urgent than all the jobs from schedule $D(K_\mu)$, and hence there would be no benefit in rescheduling them earlier). This proves part (a). Part (b), stating that the type (1.1) jobs can be dispersed in between the corresponding kernel sequences, easily follows. As to part (c), note that no type (1.2) job associated with kernel $K_i$ is released before the interval of partial schedule $D(K_i)$, and it cannot be scheduled after the original execution interval of kernel $K_i$ (as otherwise the resultant makespan would surpass an earlier obtained one). Part (d) follows similarly.

Depending on the structure of the initial LDT-schedule $\sigma$, some updates of the current configuration might be necessary. At stage 0, the iterative step of the pre-processing procedure carries out the analysis of schedule $\rho$ and of the subsequently generated partial schedules of the type (2)-(4) jobs, and repeatedly updates the current configuration until the last created such partial schedule possesses the desired properties. In the rest of this subsection we give a detailed description of the iterative step of stage 0. First, we analyze the relevant structural properties of the partial schedules of the type (2)-(4) jobs.

The initial partial schedule $\rho$ may yield the creation of alternative partial LDT-schedules at the iterative step of the pre-processing procedure. Intuitively, potential kernels formed by the type (4) jobs might be "hidden" in this schedule; they are "extracted" and the current configuration is updated accordingly. Although no type (4) job formed part of any kernel in schedule $\sigma$, this may happen in schedule $\rho$ since, in that schedule, the maximum full completion time of a job of each kernel $K_i$ from the initial set of kernels is reduced compared to schedule $\sigma$ (recall that kernel $K_i$ gets "transformed" into the sequence of uniform substructure components of partial schedule $D(K_i)$ in schedule $\rho$). So a sequence of type (4) jobs may form a new kernel in schedule $\rho$, where the corresponding delaying emerging job is also a type (4) job. If such a delaying emerging job is activated, then it may push a uniform substructure component of a kernel from the current set of kernels because of the newly arisen gap; hence it may again become an emerging job, now for the pushed component(s) (at the same time, recall that by Lemma 5 the execution interval of any type (4) job in $S^*$ cannot overlap with any uniform substructure component, i.e., with any of the partial schedules $D(K_i)$).

Before we proceed with the description, we define a "new kernel" formally: a kernel in an LDT-schedule of the type (2)-(4) jobs will be referred to as a new kernel if it contains a job which does not belong to any of the schedules $D(K_i)$, for the kernels $K_i$ from the current set of kernels. Note that a non-new kernel does not necessarily coincide with a kernel from the current set of kernels (the latter set consists of the originally arisen kernels, from which the partial schedules consisting of the substructure components arisen during the decomposition of each kernel were created). We will refer to the former kind of kernel as a secondary kernel, and will denote by $K'_i$ the secondary kernel of kernel $K_i$ at the current step of the computations.

Iterative step invoked for a partial schedule of the type (2)-(4) jobs. The iterative step of the pre-processing procedure is initially invoked for partial schedule $\rho$; iteratively, it is invoked for the last generated LDT-schedule until no new kernel in that schedule is detected. If the last generated LDT-schedule contains a new kernel, then the next partial LDT-schedule is created, which becomes the current one. Since the rise of a new kernel yields new type (1), type (2) and type (3) jobs, the current configuration, including the current job partition, is updated accordingly. This process continues until no new kernel is detected in the last created partial schedule.

If a sequence of the type (4) jobs from the current job partition forms a new kernel $K$ in the current schedule (see the observation below), then this schedule is updated by omitting all the jobs of that kernel and incorporating instead the partial schedule $D(K)$ into the updated schedule (similarly to the update performed at the initial step of the pre-processing procedure). Observe that this operation yields no conflicts, since the interval of the newly created partial schedule $D(K)$ does not overlap with that of schedule $D(K_i)$, for any kernel $K_i$ from the current set of kernels (observe also that if the newly arisen kernel possesses no delaying emerging job, then partial schedule $D(K)$ will coincide with kernel $K$). The current job partition is updated similarly, by "transferring" the former type (4) jobs of kernel $K$ into the type (1.2), type (2) and type (3) jobs correspondingly, and the corresponding type (4) emerging job(s) (which do not belong to kernel $K$) into the type (1) jobs. The iterative step outputs the last updated schedule (with no new kernel), which, for notational simplicity, is again denoted by $\rho$.

The total number of all calls to the iterative step at all stages of IEA is clearly bounded by $n$ since, in total, no more than $n$ different kernels may occur. We summarize our discussion in the next observation.

Observation 1

(i) If a new kernel of the type (2)-(4) jobs arises in a partial LDT-schedule, then it consists of some type (4) jobs from the current job partition. Furthermore, if it possesses a delaying emerging job, then the latter is a type (4) job from the current job partition.
(ii) The last updated schedule $\rho$ returned by stage 0 contains no kernel with a delaying emerging job.

One may look at schedule $\rho$ as a sequence of the uniform substructure components obtained by the decomposition procedure for each of the arisen kernels, with the remaining type (4) jobs included in between these sequences. There may exist a gap in schedule $\rho$ before the earliest scheduled type (2), type (3) or type (4) job, in between these jobs, and after the last scheduled type (2), type (3) or type (4) job; recall that no uniform substructure component may contain a gap, but there is a gap before and after each uniform substructure component in schedule $D(K)$.

Lemma 6

The makespan of schedule $\rho$ delivered at stage 0 is a lower bound on the optimal schedule makespan.

Proof. By Lemma 4, the makespan (the maximum full job completion time) of schedule $D(K)$ is a lower bound on the optimal schedule makespan; by the same lemma, the maximum full job completion time in a substructure component from the decomposition of any of the kernels is also such a lower bound. The latter magnitude cannot be surpassed by any other job from schedule $\rho$. Indeed, any job $j$ of $\rho$ either belongs to a partial schedule $D(K_i)$, for some kernel $K_i$ from the set of kernels delivered by stage 0, or is a type (4) job from the job partition delivered by stage 0. In the latter case our claim follows, since job $j$ could not belong to any new kernel detected by the iterative step. In the former case, we show that no type (4) job may push job $j$ in schedule $\rho$ (hence the full completion time of job $j$ is a lower bound on the optimum makespan). Indeed, job $j$ may theoretically be pushed only by a type (4) job in schedule $\rho$. But any type (4) job originally scheduled before the delaying emerging job of kernel $K_i$ completes before the starting time of that delaying emerging job, and the latter job is omitted in schedule $\rho$; hence such a type (4) job cannot push job $j$.

3.2 Incorporating the type (1) jobs

Schedule $\rho$ delivered at stage 0 does not contain the type (1) jobs from the current job partition. From here on, we deal with the permutations of these type (1) jobs and use $\nu$ for the number of the type (1) jobs in the current job partition (which, in fact, is a non-decreasing function of the number of steps of IEA). A complete feasible schedule contains, besides the type (2)-(4) jobs from schedule $\rho$, all the type (1) jobs, included in the order of some permutation $\pi$ of these jobs. We will say that a complete schedule respects permutation $\pi$ if the type (1) jobs appear in the order of permutation $\pi$ in that schedule. IEA constructs one or more complete feasible schedules respecting permutation $\pi$, as we describe in the following subsections. (Since in PTAS, IEA is used for the construction of partial schedules with only the long jobs, the corresponding permutations are formed by only the long type (1) jobs, whereas the yet unscheduled short jobs are incorporated into the former schedule at a later stage of PTAS, as described in the next section.)

3.2.1 Permutations of the type (1) jobs

IEA avoids a brute-force enumeration of all possible permutations of the type (1) jobs by considering these permutations in a special priority order. Given a permutation $\pi$, Procedure NextPermutation($\pi$) returns the next "promising" permutation from the priority list of the permutations of the type (1) jobs. We give a basic skeleton of that procedure later in Section xxxxx. We will refer to this priority list also as the current (ordered) set of permutations. This set does not contain permutations which cannot be consistent with an optimal solution $S^*$.

Obtaining the first steady permutation. The first steady permutation $\hat\pi$ of the type (1) jobs is obtained by a procedure that constructs the first complete schedule respecting that permutation (intuitively, $\hat\pi$ is a most natural and promising permutation). The procedure augments schedule $\rho$ with the type (1) jobs released over time, as follows. Starting with schedule $\rho$, iteratively, the current partial schedule is extended with the next yet unscheduled type (1) job that is already released by the current scheduling time and has the maximum delivery time, ties being broken by selecting a shortest such job (intuitively, this tie-breaking rule increases the probability that the forced right-shift of the following type (2)-(4) jobs imposed by the next included type (1) job will not violate a required approximation factor; this applies to the long jobs in PTAS, see Lemma 1). The scheduling time $t$, at which the next type (1) job is scheduled, is iteratively determined as the earliest idle-time moment in the current augmented schedule by which some yet unscheduled type (1) job is released; in case the included job overlaps with the following (non-type (1)) job of the current augmented schedule, the following jobs are right-shifted correspondingly.
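A self-contained sketch of this construction, with our own representation of the partial schedule as sorted busy intervals (the right-shift is done naively):

    def earliest_idle(sched, r):
        # First idle instant >= r; `sched` is a sorted list of busy (s, e).
        t = r
        for s, e in sched:
            if e <= t:
                continue
            if s > t:
                break                     # t falls in a gap
            t = e                         # t is busy: move to the interval end
        return t

    def insert_rightshift(sched, t, p):
        # Insert a job [t, t+p) and right-shift the following jobs, keeping
        # their order (existing gaps absorb part of the shift).
        out, end, placed = [], 0, False
        for s, e in sched:
            if not placed and s >= t:
                out.append((t, t + p)); end = t + p; placed = True
            s2 = max(s, end)
            out.append((s2, s2 + (e - s))); end = s2 + (e - s)
        if not placed:
            s2 = max(t, end); out.append((s2, s2 + p))
        return out

    def steady_permutation(rho, type1):
        # rho: busy intervals of the type (2)-(4) jobs; type1[j] = (r, p, q).
        sched, pending, perm = list(rho), set(range(len(type1))), []
        while pending:
            # Earliest idle moment by which some pending job is released.
            t = min(earliest_idle(sched, type1[j][0]) for j in pending)
            ready = [j for j in pending if type1[j][0] <= t]
            j = min(ready, key=lambda j: (-type1[j][2], type1[j][1]))
            sched = insert_rightshift(sched, t, type1[j][1])
            pending.remove(j); perm.append(j)
        return perm, sched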

Consistent permutations. IEA incorporates an additional step for the filtration of the set of permutations. Recall that the type (1.2) jobs associated with a particular kernel $K_i$ are to be scheduled either immediately before, or within, or immediately after the time interval of schedule $D(K_i)$. These jobs cannot be scheduled before any type (1.2) job associated with a kernel preceding kernel $K_i$, nor after any type (1.2) job associated with a kernel succeeding kernel $K_i$; moreover, no other type (1) job is to be included in between these type (1.2) jobs (see point (c) of Lemma 5). Hence, in any permutation that is consistent with an optimal solution, these precedence relations are respected. We will refer to a permutation of the type (1) jobs in which these restrictions are respected as a consistent permutation.

The order of the type (1) jobs imposed by a consistent permutation $\pi$ may yield the creation of a dominated (so-called non-active) complete schedule, in which case permutation $\pi$ is also discarded, as we describe in the following subsection.

3.2.2 Description of stage 1

Stage 1 generates a complete feasible schedule respecting the next permutation $\pi$ from the current set of permutations. Schedule $\rho$ delivered by stage 0 is completed with the type (1) jobs according to the order in permutation $\pi$, which is incorporated into partial schedule $\rho$ using the list scheduling strategy: the idle-time intervals of schedule $\rho$ are filled in by the type (1.1) and (1.2) jobs according to the order in permutation $\pi$.

Dominated permutations. Let $\pi = (\pi_1, \dots, \pi_\nu)$ be a permutation of the type (1) jobs from the current job partition. Stage 1 works in at most $\nu$ iterations, so that job $\pi_h$ is included at the $h$th iteration, for $h = 1, \dots, \nu$ (unless permutation $\pi$ gets discarded at an earlier iteration). The partial schedule of iteration $h$ is denoted by $\rho_h(\pi)$.

The order of the type (1) jobs imposed by a consistent permutation may impose the creation of an avoidable gap. Such a gap may potentially occur at iteration $h$ if job $\pi_h$ is released earlier than the previously included job $\pi_{h-1}$, so that job $\pi_h$ could potentially be included before job $\pi_{h-1}$ without causing any non-permissible delay. Then a permutation in which job $\pi_h$ comes after job $\pi_{h-1}$ is neglected. Below is the related definition.

Suppose at iteration $h$ there is a gap in schedule $\rho_{h-1}(\pi)$ before the starting time of job $\pi_{h-1}$ within which job $\pi_h$ may feasibly be included. If there are several such gaps, consider the earliest occurring one, say $g$. Let the alternative schedule be an extension of schedule $\rho_{h-1}(\pi)$ in which job $\pi_h$ is included at the starting time of gap $g$ or at its release time $r_{\pi_h}$, whichever magnitude is larger, with the following jobs from schedule $\rho_{h-1}(\pi)$ correspondingly right-shifted (so job $\pi_h$ appears before job $\pi_{h-1}$ in that schedule). If the makespan of this alternative schedule is no larger than that of schedule $\rho_h(\pi)$, then the former schedule (the corresponding permutation) dominates the latter one (permutation $\pi$), and job $\pi_{h-1}$ is damped by job $\pi_h$. The next lemma easily follows from the above discussion.

Lemma 7

There is an optimal solution which respects a consistent non-dominated permutation of the type (1) jobs.

Based on this lemma, IEA evaluates only consistent non-dominated permutations of the type (1) jobs. Job $\pi_h$ is incorporated into the current partial schedule if the consistency restrictions are not violated and job $\pi_{h-1}$ is not damped by job $\pi_h$; otherwise, permutation $\pi$ is discarded at iteration $h$. Initially, at iteration 0, $\rho_0(\pi) = \rho$; iteratively (if job $\pi_{h-1}$ is not damped by job $\pi_h$ and the consistency restrictions are not violated), schedule $\rho_h(\pi)$ is obtained from schedule $\rho_{h-1}(\pi)$ by including job $\pi_h$ in the earliest possible idle-time interval at or after the release time of job $\pi_h$ in schedule $\rho_{h-1}(\pi)$; if this forces a right-shift of the following jobs, the latter jobs are right-shifted correspondingly (their processing order in schedule $\rho_{h-1}(\pi)$ being respected). Job $\pi_h$ is scheduled at the completion time of the last scheduled job of schedule $\rho_{h-1}(\pi)$ if there is no such idle-time interval. If at some iteration $h$ it turns out that either permutation $\pi$ is not consistent or job $\pi_{h-1}$ is damped by job $\pi_h$, then permutation $\pi$ is discarded. If no such iteration occurs, then the complete schedule $\rho_\nu(\pi)$ respecting permutation $\pi$ is returned.
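A condensed sketch of this iteration, reusing earliest_idle and insert_rightshift from the previous sketch (the damping test here is a crude proxy — the paper compares the makespans of the two extensions — and the consistency check is left out):

    def stage1(rho, type1, perm):
        # Insert the type (1) jobs into the stage-0 schedule in the order of
        # perm; discard the permutation when the current job would fit strictly
        # before its predecessor (a dominating permutation then exists).
        sched, prev_start = list(rho), None
        for j in perm:
            r, p, q = type1[j]
            t = earliest_idle(sched, r)
            if prev_start is not None and t < prev_start:
                return None               # perm dominated ("damped"): discard
            sched = insert_rightshift(sched, t, p)
            prev_start = t
        return sched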

It can be readily verified that schedule $\rho_\nu(\hat\pi)$ and the complete schedule delivered by the earlier described procedure that generates the steady permutation $\hat\pi$ are the same. In particular, there is no need to run the just described procedure of stage 1 for permutation $\hat\pi$: IEA proceeds with the complete schedule built alongside $\hat\pi$ without generating the permutation a priori (observe that the stage 1 procedure incorporates a permutation into partial schedule $\rho$ according to the order imposed a priori by that permutation, whereas the steady-permutation procedure explicitly generates a complete schedule respecting permutation $\hat\pi$ without having that permutation as an input).

Lemma 8

Suppose there is a kernel in schedule with no delaying emerging job. Then (i) if kernel includes no type (1) job from the current job partition (i.e., it consists of only the type (2)-(3) jobs), then schedule is optimal; (ii) otherwise, has the minimum makespan among all feasible schedules respecting permutation .

Proof. By the conditions of part (i), kernel is a uniform substructure component obtained as a result of the decomposition of some kernel from the current set of kernels. Hence, its makespan is a lower bound on the optimum schedule makespan by Lemma 4, and part (i) follows. For part (ii), let be a type (1) job from kernel . Note that the first job of kernel starts at its release time (since there exists no delaying emerging job for that kernel), whereas job and the jobs from the corresponding uniform part are included in the LDT-sequence. Suppose first that kernel contains no other type (1) job. Using an interchange argument, we can easily see that the LDT-sequence provides the minimum full job completion time for the jobs of kernel , and part (ii) follows (as job cannot be omitted, and by rescheduling it at a later time the makespan can only increase). If kernel contains other type (1) jobs, then these jobs have to be included according to the order in permutation . Hence, no reordering of these jobs is possible, and claim (ii) similarly follows.
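The LDT-sequence referred to in the proof is the classical largest-delivery-time-first rule: at each decision point, among the released jobs, schedule one with the largest delivery time. A self-contained sketch (with the same hypothetical fields r, p, q), which also matches the O(n log n) cost of the LDT-heuristic used in the complexity analysis below:

import heapq

def ldt_schedule(jobs):
    """LDT-heuristic: at each decision point, among the released jobs,
    schedule one with the largest delivery time. Returns the job order and
    the maximum full completion time."""
    pending = sorted(jobs, key=lambda j: j["r"])    # by release time
    ready, order, t, i, best = [], [], 0, 0, 0
    while i < len(pending) or ready:
        while i < len(pending) and pending[i]["r"] <= t:
            heapq.heappush(ready, (-pending[i]["q"], i, pending[i]))
            i += 1
        if not ready:
            t = pending[i]["r"]          # machine idles until next release
            continue
        _, _, j = heapq.heappop(ready)
        order.append(j)
        t += j["p"]
        best = max(best, t + j["q"])     # full completion time C_j + q_j
    return order, best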

Now we continue the description of stage 1, which verifies the following conditions.

If there is a kernel in schedule with no delaying emerging job and this kernel contains no type (1) job, then IEA outputs schedule and halts (Lemma 8, part (i)).

If there is a kernel in schedule with no delaying emerging job and this kernel contains a type (1) job, then is the only enumerated complete schedule for permutation , stage 1 returns this single schedule for that permutation (no update of the current configuration is required) and IEA calls procedure NEXTPERMUTATION() to process the next permutation (Lemma 8, part (ii)).

If all kernels in schedule possess the delaying emerging job and no new kernel in that schedule arises, then stage 1 invokes stage 2 (which creates one or more additional complete schedules respecting permutation , as described in the next subsection).

Before we complete the description of stage 1, it is helpful to observe that, similarly to stage 0, a new kernel in schedule may arise since in schedule the partial schedule is completed with the type (1) jobs from permutation . In particular, a newly included type (1) job from permutation may push a type (4) job from the current job partition. In general, note that the delaying emerging job for kernel can be either a type (1) or a type (4) job from the current job partition (recall also that there is no kernel with the delaying emerging job in partial schedule , see Observation 1). Now we can specify the last remaining condition:

If all kernels in schedule possess the delaying emerging job and there arises a new kernel in that schedule, then stage 1 invokes the iterative step for schedule (which updates the current configuration). The iterative step is described in the next subsection.

3.2.3 The iterative step for a complete schedule

Given, in general, an already created complete schedule respecting permutation (initially, schedule ), the iterative step for that schedule is invoked if a new kernel in it arises (given that all kernels in that schedule possess the delaying emerging job). The iterative step invoked for schedule updates the current configuration. Suppose is a newly arisen kernel in schedule . Since this kernel consists of the type (4) jobs from the current job partition, these type (4) jobs may convert to the type (1.2), type (2) and type (3) jobs. Hence, the current configuration (including the current set of permutations) needs to be updated.

Besides the type (1.2) jobs, there may exist a type (4) job included before the jobs of a kernel and pushing the jobs of that kernel (job , in turn, may be pushed by a newly included type (1) job). We easily observe that in schedule , job (like any type (1.2) job) is included in between the corresponding pair of kernels, the kernel immediately preceding kernel and the kernel immediately succeeding kernel (given that they exist) in schedule . Like any type (1) job, job cannot be included in between the jobs of the corresponding kernel (in particular, in between the type (1.2) jobs associated with kernel ) in schedule . From here on, we will refer to job as a type (1.3) job associated with kernel . The set of the type (1.3) jobs associated with kernel is formed by the delaying emerging job of kernel (a former type (4) job) and any other (former) type (4) job which is also an emerging job for kernel . The following lemma (a complement to Lemma 5) follows.

Lemma 9

A type (1.3) job associated with kernel is included between kernels and in schedule , and it is not included in between the jobs of kernel (in particular, in between the type (1.2) jobs associated with kernel ) in that schedule.

According to the introduced type (1.3) jobs, we extend the notion of a consistent permutation straightforwardly, requiring, in addition, any type (1.3) job to satisfy the restrictions from the above lemma (Lemma 7 remains true for the extended notion of a consistent permutation).

Updating the current set of permutations. As we saw, a new kernel implies new type (1) jobs. This, in turn, yields new permutations that may potentially be consistent with schedule . Accordingly, permutation gives rise to new permutations, one or more of its offsprings. Due to the positioning restrictions for the type (1.2) and the type (1.3) jobs from Lemmas 5 and 9 (particularly, the condition that no type (1.3) job associated with kernel is to be scheduled in between the type (1.2) jobs associated with the same kernel), the total number of the offsprings of permutation is easily seen to be , where (, respectively) is the number of the (newly declared) type (1.2) (type (1.3), respectively) jobs associated with kernel .

The iterative step completes the current set of permutations with the offsprings of permutation , and IEA repeatedly invokes stage 1 with the updated set of permutations.
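The offsprings of a permutation can be generated as interleavings of the newly declared type (1) jobs into the parent permutation, filtered by the positioning restrictions of Lemmas 5 and 9. A sketch, with the restrictions abstracted into a consistency predicate supplied by the caller (so the count of surviving interleavings matches the formula above only through that predicate):

from itertools import combinations, permutations

def offsprings(parent, new_jobs, consistent):
    """Generate the offsprings of `parent`: interleavings of the newly
    declared type (1) jobs that keep the parent's relative order and pass
    the consistency predicate (standing in for Lemmas 5 and 9)."""
    n = len(parent) + len(new_jobs)
    for positions in combinations(range(n), len(new_jobs)):
        for placed in permutations(new_jobs):
            seq, it_p, it_n = [], iter(parent), iter(placed)
            for k in range(n):
                seq.append(next(it_n) if k in positions else next(it_p))
            if consistent(seq):
                yield seq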

3.2.4 Description of stage 2

As specified above, stage 1 invokes stage 2 if no new kernel in schedule arises and all kernels in that schedule possess the delaying emerging job (hence, the iterative step for schedule is not performed). Recall that the fragment containing the jobs of kernel from the current set of kernels is substituted by partial schedule , which does not contain the type (1.2) and type (1.3) jobs associated with kernel . By the above condition, the jobs from at least one uniform substructure component from schedule are pushed by the corresponding delaying emerging job. The corresponding secondary kernel from schedule contains all jobs of that substructure component (in general, it may contain the jobs of one or more substructure components from partial schedule ).

Stage 2 invoked for schedule performs one or more iterations: it repeatedly activates the delaying emerging job for each next arisen secondary kernel, creating one or more alternative LDT-schedules respecting permutation (at each iteration one such LDT-schedule is generated); if a new kernel arises in the last created alternative LDT-schedule, stage 2 invokes the iterative step from the previous subsection.

Whenever stage 2 reaches iteration , it has already created alternative LDT-schedules at the previous iterations, , where is the delaying emerging job of the kernel of iteration , for . For , the LDT-schedule of iteration 0 is , hence is the delaying emerging job of the (earliest) kernel in that schedule (at every iteration , ties are broken by selecting the earliest kernel in the LDT-schedule of iteration ). Note that is the current secondary kernel of kernel from the current set of kernels, for ; we note that kernel can also be a new kernel (in which case the iterative step is invoked), but, for notational simplicity, we use, as an exception, the same notation as for a secondary kernel. At iteration , the schedule of iteration is obtained from the schedule of iteration by activating the delaying emerging job for kernel ; i.e., job and any type (1) job included after the jobs of kernel in the schedule are forced to be scheduled after all jobs of that kernel, whereas job is included before the next type (1) job of permutation .

Repeatedly, at iteration :

(i) If there is a kernel in LDT-schedule with no delaying emerging job, then a schedule with the smallest makespan among schedules is returned.

(ii) If all kernels in LDT-schedule possess the type (1) delaying emerging job, then:

(ii,1) If is a secondary kernel, then schedule is generated, and stage 2 is repeated from step (i).

(ii,2) If is a new kernel, then the iterative step with the LDT-schedule of iteration is invoked {the iterative step will update the current configuration including the current set of permutations, and stage 1 will again be invoked for the first (in the lexicographic order) offspring of permutation }.

This completes the description of stage 2. The following observation, intuitively, “justifies the purpose” of stage 2.
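The activation step at the heart of stage 2 can be sketched as a pure reordering: the delaying emerging job is moved behind the last kernel job, so that the first job of the kernel can start at its release time. A structural sketch only; the forced repositioning of the type (1) jobs scheduled after the kernel and the subsequent LDT recomputation are elided.

def activate(order, kernel, e):
    """Move the delaying emerging job e behind the last job of `kernel`
    (a list of jobs); `order` is the current job sequence. Identity-based
    membership keeps equal-looking job records distinct."""
    last = max(i for i, j in enumerate(order) if any(j is k for k in kernel))
    marker = order[last]                          # last kernel job
    rest = [j for j in order if j is not e]       # remove e from its position
    pos = next(i for i, j in enumerate(rest) if j is marker) + 1
    return rest[:pos] + [e] + rest[pos:]          # reinsert e after the kernel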

Observation 2

At stage 2, in each generated schedule the maximum full completion time of a job from partial schedule is a lower bound on the optimal schedule makespan.

Proof. First note that the first job of the secondary kernel starts at its release time in schedule . As a result, the jobs of kernel will (again) be redistributed (“decomposed”) into the corresponding substructure components from partial schedule . Then the maximum full completion time of a job from schedule is a lower bound on the optimal schedule makespan (see Lemmas 4 and 6).

Lemma 10

Stage 2 invoked for a permutation of the type (1) jobs performs fewer than iterations.

Proof. We need to show that an LDT-schedule in which there is a kernel with no delaying emerging job will be created in fewer than iterations (step (i)). There are fewer than kernels in the LDT-schedule of each iteration, whereas the same delaying emerging job may be activated at most once for the same kernel . Since the number of the delaying emerging jobs is bounded from above by the total number of the type (1) jobs, , the total number of the activations at stage 2 cannot exceed . Then after fewer than iterations, a complete schedule satisfying the halting condition at step (i) will be created.

3.3 The correctness and the time complexity of IEA

We complete this section, dedicated to IEA, with a proof of its correctness and an analysis of its time complexity.

Theorem 1

At least one of the complete schedules generated by IEA for permutation in time has the minimum makespan among all feasible schedules respecting permutation . Hence, IEA generates an optimal solution to problem in time .

Proof. First we show that IEA creates an optimal solution . Recall that the type (1) jobs are to be distributed between and within the time intervals of the partial schedules (Lemmas 5 and 9). Hence, roughly, schedule can be obtained by a procedure that extends schedule (whose makespan is a lower bound on the optimal schedule makespan by Lemma 6) by the type (1) jobs, including them in between and within the idle time intervals of the partial schedules . While completing schedule with a consistent non-dominated permutation of the type (1) jobs, a non-consistent permutation can be discarded (see point (c) in Lemma 5 and Lemma 9), and a dominated schedule (the corresponding permutation) can also be neglected, since the schedule corresponding to the dominant permutation will be created, unless it gets dominated by another permutation; in the latter case, the first permutation is also dominated by the third one, since our dominance relation is easily seen to be transitive. At the same time, it is straightforward to see that no complete schedule for a (non-dominated consistent) permutation that yields offsprings needs to be generated given that its offsprings are considered.

For a given permutation , either (a) there arises a new kernel in schedule or (b) no new kernel in that schedule arises. In case (b), if there is a kernel from the current set of kernels with no delaying emerging job in schedule , then the halting condition from Lemma 8 applies. Otherwise (all kernels from the current set of kernels possess the delaying emerging job in schedule ), in case (b) stage 2 for permutation is invoked. Let be the earliest secondary kernel from schedule and let be the corresponding delaying emerging job. Recall that job cannot be scheduled in between the jobs of partial schedule . Note that the makespan of any feasible schedule respecting permutation in which job is scheduled before the jobs of partial schedule cannot be less than that of schedule . Hence, if the latter makespan is not optimal, then job is included after the jobs of partial schedule in solution . Stage 2 creates an alternative LDT-schedule with this property. Applying the same reasoning recurrently to the schedule of the next iteration, and similarly to the schedules of the following iterations of stage 2, we obtain that either schedule or one of the schedules generated at stage 2 for permutation is an optimal schedule respecting that permutation.

It remains to consider case (a) above. Recall that an update of the current configuration is required only in case a new kernel occurs: a new kernel yields new type (1), type (2) and type (3) jobs, whereas some type (4) jobs from the current job partition disappear. The iterative step updates the current configuration accordingly, and the above reasoning can recurrently be applied to the updated configuration with the newly generated permutations, the offsprings of permutation . It follows that one of the generated schedules for one of the tested permutations is optimal.

Now we show the time complexity. Initially, IEA invokes the initial step of the pre-processing procedure to form the initial set of kernels and the job partition, and then the iterative step of the pre-processing procedure is invoked at stages 0, 1 and 2. Since the pre-processing procedure is invoked only if a new kernel in the current LDT-schedule arises, in total, there may occur at most calls to the pre-processing procedure (as there may arise at most different kernels). Hence, the total cost of the pre-processing procedure is , since at each call the LDT-heuristic with cost is applied for the activation of the corresponding delaying emerging job. For each kernel, the decomposition procedure with cost is invoked, where is the total number of jobs in that kernel (Lemma 4). Assuming, for the purpose of this estimation, that each of the at most kernels has the same number of jobs (for which the maximum overall cost is attained), we easily obtain that the total cost of all the calls of the decomposition procedure is bounded from above by (maintaining the jobs of each type in a separate binary search tree, the total cost of all job partition updates will be ).

From the above observations, the overall cost of stage 0 for creating the initial job partition and schedule is . For each (consistent non-dominated) permutation of the type (1) jobs, stage 1 performs at most iterations, and the cost of the insertion of each next type (1) job at each iteration is bounded from above by the number of the corresponding right-shifted jobs. Hence, the total cost of the generation of schedule is (note that the verification of the dominance and consistency conditions implies no extra cost). Stage 2 generates fewer than additional LDT-schedules for each permutation (Lemma 10), whereas the cost of the activation of the delaying emerging job at each iteration is that of the LDT-heuristic, i.e., it is ; hence, for permutation , stage 2 runs in time , which can be simplified to . Since there are no more than permutations, the cost for all the permutations is , and the overall cost is .
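To make the accounting explicit, the bound composes schematically as

total cost = (pre-processing, decomposition and job-partition updates) + (number of enumerated permutations) × (cost of stage 1 + cost of stage 2 per permutation),

where the per-permutation term is dominated by the stage 2 iterations, each amounting to one O(n log n) run of the LDT-heuristic. The elided symbols above prevent restoring the exact factors, so this is a structural summary rather than a restatement of the bound.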

4 Polynomial-time approximation scheme

IEA of the previous section considers up to possible permutations of the type (1) jobs. PTAS described in this section considers possible permutations of only the long type (1) jobs and delivers an -approximation solution with a worst-case time complexity estimation of , for a given . As in IEA, the order in which the permutations are considered matters. PTAS first tries to obtain the desired approximation based on the complete schedule(s) respecting the steady permutation of the long type (1) jobs: explicit approximability conditions under which a complete schedule, respecting permutation and then each next permutation of the long type (1) jobs, already gives the desired approximation are essential to speed up the performance (see Section 5.1).

PTAS uses the truncated version of stage 1 of IEA, i.e., the version without the halting condition from Lemma 8, to generate LDT-schedules consisting of only the long jobs respecting permutation of the long type (1) jobs (though the halting condition from Lemma 8 cannot be used for a partial schedule, a similar condition will be applied in PTAS to complete schedules with short jobs, see Lemma 12 later in this section). Initially, PTAS runs stage 0 of IEA for the long type (2)-(4) jobs, obtaining in this way a partial schedule of the long type (2)-(4) jobs; then it runs (the truncated version of) stage 1 of IEA with schedule for the steady permutation , obtaining in this way the initial schedule with all the long jobs (for the sake of simplicity, we use the same schedule notation as in IEA). In the next subsection, we describe how short jobs are incorporated into a partial schedule of the long jobs respecting permutation of the long type (1) jobs. If for a created complete schedule respecting permutation the approximability conditions are not satisfied, the next permutation from the priority list is similarly considered, otherwise PTAS halts.
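The control flow just described might be rendered as the following structural sketch; all four arguments are placeholders for the paper's subroutines (the priority list of permutations, the truncated stage 1 of IEA, the short-job incorporation of Section 4.1, and the approximability certificate), so this is an outline rather than the scheme itself.

def ptas(priority_list, build_long_schedule, incorporate_short, approximable):
    """Outer loop of the scheme: try the permutations of the long type (1)
    jobs in priority order (the steady permutation first), halting as soon
    as a complete schedule certifies the desired approximation."""
    best = None
    for perm in priority_list:
        partial = build_long_schedule(perm)        # truncated stage 1 of IEA
        for schedule in incorporate_short(partial):
            if best is None or schedule.makespan < best.makespan:
                best = schedule
            if approximable(schedule):             # sufficient condition met
                return schedule
    return best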

4.1 Incorporating short jobs

Given a permutation of the type (2)-(4) long jobs, the main procedure of PTAS incorporates the short jobs into schedule . This procedure works on a number of iterations, creating a solution tree in which, with every node/iteration , a (partial) schedule is associated. It consists of two basic stages and carries out breadth-first search in tree . At stage 1, a single complete extension of schedule , represented by a branch of tree , is created. At stage 2, backtracking to some node(s) created at stage 1 is accomplished and some additional complete extensions of schedule are generated. The number of the created complete schedules in tree depends on the structural properties of these schedules and is bounded from above by a low-degree polynomial in . In each generated complete schedule, certain sufficient conditions are verified, and if one of them is satisfied, PTAS halts with an -approximation solution. Otherwise, the next complete schedule for permutation is generated, and so on, until a certain bottleneck condition for the last created complete schedule is satisfied. Then the main procedure may again be called for the next permutation from the priority list, which differs from permutation in the order of some jobs.
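The solution tree can be represented by nodes of the following shape; the field names are illustrative (not the paper's notation), and the partial schedule is kept as a list of (job, start) pairs.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    """A node of the solution tree: the partial schedule of its iteration,
    the incoming job on the edge from its parent, and the pending long job
    (if any) carried along this branch."""
    schedule: list                              # (job, start) pairs
    incoming: Optional[dict] = None             # job on the edge to the parent
    pending: Optional[dict] = None              # omitted long job, if any
    children: List["Node"] = field(default_factory=list)   # at most two sons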

The main procedure is initially invoked for schedule . Iteratively, at iteration , partial schedule is augmented with the incoming job , resulting in the extended schedule of iteration ; the incoming job is normally a short job, but it can also be a long job, since an initially included long job might be omitted (and then newly included). For convenience, we associate job with the edge , where is the parent-node of node . A single immediate successor-node of each created node in tree (except the leaves) is generated at stage 1; one additional son of such a node may be created during the backtracking at stage 2.

At iteration , let be the release time of an earliest released job not in schedule , and let be the release time of an earliest released yet unscheduled short job (as we will see, can be smaller than ). Among all short jobs released at time , the short incoming job is one with the largest delivery time ( is any longest one if further ties need to be broken).
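The selection rule for the short incoming job, as a sketch (hypothetical fields r, q, p as before):

def incoming_short(unscheduled_short):
    """Among the short jobs with the earliest release time, pick one with
    the largest delivery time, breaking further ties by the longest
    processing time."""
    r_min = min(j["r"] for j in unscheduled_short)
    tied = [j for j in unscheduled_short if j["r"] == r_min]
    return max(tied, key=lambda j: (j["q"], j["p"]))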

Suppose the short incoming job is such that time moment falls within the execution interval of a long job , i.e., the latter long job starts before time in schedule . Then we will say that the long job covers the short job if and the portion of job from time is long. If (a) the execution interval of long job is retained, then this job will impose a forced “non-allowable” delay of a more urgent short job ; alternatively, (b) if long job is removed from its current execution interval, then short job can be started without delay, which would create a gap in schedule . These two possibilities are considered at stages 1 and 2, respectively: two alternative branches/sons from node may be created in solution tree if long job covers short job . The first branch is always created at stage 1 for case (a) above, whereas the second branch may be created at stage 2 by Procedure BACKTRACK for case (b). In the branch created in the main procedure at stage 1, the long job becomes the incoming job, and this job is processed in the newly created extension as it is processed in schedule (note that job does not literally become included; rather, it remains scheduled as it is scheduled in ).

If the alternative branch headed by the second son of node is created during the backtracking at stage 2, then in partial schedule long job is omitted and is declared as a pending long job in that branch of solution tree . In schedule short job becomes the incoming job and is started at its release time, whereas the pending long job will become the incoming job at the following iteration of stage 2.
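The two alternative sons of a node can be sketched as follows, reusing the hypothetical Node shape from the earlier sketch: case (a) keeps the covering long job in place, while case (b) omits it, starts the short job at its release time, and carries the long job as pending.

def branch_on_cover(node, long_job, short_job):
    """Create the two alternative sons of `node` when `long_job` covers
    `short_job`; right-shifts of the succeeding long jobs are elided."""
    keep = Node(schedule=list(node.schedule), incoming=long_job,
                pending=node.pending)                        # case (a)
    without = [(j, s) for (j, s) in node.schedule if j is not long_job]
    omit = Node(schedule=without + [(short_job, short_job["r"])],
                incoming=short_job, pending=long_job)        # case (b)
    node.children = [keep, omit]
    return keep, omit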

If the incoming job overlaps with the next scheduled long job (one from schedule ), then job and the succeeding long jobs from schedule are right-shifted correspondingly.

We will write when all the long jobs, including job , are right-shifted once job gets included in schedule , and we will write in case job is not included in schedule as it becomes a pending long job.


Below we give a detailed description of the main procedure invoked for schedule . We let be the scheduling time of iteration , the time moment at which the incoming job is scheduled at that iteration (as we will see from the description, it is either the completion time of a specially determined job from schedule or the release time of job , whichever magnitude is larger). We also write in case there is no pending job at the current iteration.

(0) ; ; ; call Procedure MAIN

Procedure MAIN:

If at iteration a complete schedule is generated

then call Procedure BACKTRACK else {proceed as follows}

(1) If time moment falls in a gap in schedule then
(1.1) If or then {schedule short job at time }
; ;
(1.2) If and then {schedule long job at time }
;
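For orientation, the fragment of Procedure MAIN shown so far might be rendered as the following skeleton; the elided conditions of steps (1.1) and (1.2) are replaced by placeholder predicates on a hypothetical state object, and the steps beyond (1.2) are not shown here.

def main(state):
    """Skeleton of the shown fragment of Procedure MAIN only."""
    if state.is_complete():                 # a complete schedule is generated
        return state.backtrack()            # call Procedure BACKTRACK
    t = state.scheduling_time()
    if state.in_gap(t):                     # step (1)
        if state.short_condition(t):        # step (1.1), condition elided
            state.schedule(state.incoming_short(), t)
        elif state.long_condition(t):       # step (1.2), condition elided
            state.schedule(state.pending_long(), t)
    # the remaining steps of the procedure continue past this fragment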