## 1 Introduction

In general, a major challenge of scheduling problems is the determination of a job sequence for each machine involved. Especially in non-preemptive single-machine settings without idle times, this is usually the only task to be performed. In this context, scheduling problems appear without restrictions on this sequence (e.g., minimize total tardiness) or with restrictions on the sequence (e.g., minimize total completion time). Restrictions on the sequence are commonly either time dependent or linked to job pairs. Examples of time dependent restrictions are release dates, deadlines, or time dependent maintenance activities. Precedence constraints are a typical restriction based on job pairs. In this paper, we elaborate on a different restriction, based on the position of a job in the sequence. To be more precise, we force one job to have a fixed position within the sequence of jobs.

The practical and theoretical motivation for such a scheduling problem is twofold. Firstly, a job that has a fixed position in the sequence could be considered as a maintenance operation. Classically, maintenance is also considered to be time dependent, e.g. by modeling predetermined machine unavailability intervals ([15, 1, 16]), by allowing a maximum time between two maintenances, which is often referred to as “tool changes” ([6, 7]), or maintenances may be inserted arbitrarily in order to reduce the processing times of the following jobs ([14, 18]). However, only recently, position dependent maintenance operations have been introduced by Drozdowski, Jaehn and Paszkowski [8]. Amongst others, they motivate position dependent maintenance activities with the wear and tear of jet engines or aircraft wheels, which is caused by the number of flights (because of the climb flight and thrust reversal for the engines) rather than by their duration. So the problem considered here can be seen as the special case in which exactly one position dependent maintenance activity is necessary.

Secondly, our problem is a special case of scheduling with non-negative inventory constraints, as introduced by Briskorn et al. [2]. Here, each job either delivers items of a homogeneous good to an inventory or removes items from it. Jobs that remove items can only be processed if the required number of items is available, i.e. only if the inventory level does not become negative. This problem relates to ours, in which a job is fixed to a given position, as follows. The job fixed to that position can be considered as the only job removing items from the inventory, and a certain number of jobs are required to deliver items before this job can be scheduled. If the parameter settings of the fixed job are chosen such that this job is to be scheduled as early as possible, it is forced onto that position. Analogously, the fixed job can be modeled as the only one delivering to the inventory, so that it must be scheduled at the latest at that position. The parameter settings then need to ensure that it is not scheduled earlier.

In this paper we continue the work of [2] on this problem. We consider one machine with the above mentioned inventory constraint and the objective of minimizing the total weighted completion time. Briskorn et al. [2] show that this problem is strongly NP-hard in the general case, and they propose various special cases, some of which are easily solvable and some of which remain open. In particular, they distinguish between the set of jobs that deliver to the inventory and the set of jobs that remove goods from it. As mentioned before, we consider a special case in which one of the two sets consists of only one job. For this problem setting, we propose a fully polynomial time approximation scheme (FPTAS).

Several special cases and generalizations of this problem have been analyzed in the literature. Briskorn and Pesch [5] consider the generalization with a maximum inventory level. They show that even finding a feasible solution is NP-hard, and they propose heuristics. Another generalization is analyzed by Kononov and Lin [13]. Here, each job consumes some items at the beginning of its processing and adds a number of items to the inventory at its completion time. They show NP-hardness of several special cases and present approximation algorithms for further special cases. Morsy and Pesch [17] consider a special case in which all jobs delivering to the inventory must be identical (concerning processing time, weight, and inventory modification) and the remaining jobs must also share some characteristics. For this setting, a 2-approximation is presented. Optimality criteria and an exact branch-and-bound algorithm for the standard problem are proposed by Briskorn et al. [3].

There are some problems discussed in the literature which are closely related to ours. First of all, Briskorn and Leung [4] consider the problem with the maximum lateness objective function. They propose optimality criteria, lower bounds, and heuristics, which are then used in a branch-and-bound framework. Various papers ([9, 10, 11, 12]) analyze a “non-renewable resource constraint”, which means that each job removes goods from the inventory, but the inventory is replenished automatically at predetermined points in time. So contrary to the above mentioned inventory constraint, which is exclusively based on the job sequence, here the constraint is partially time based. The papers on this problem mostly focus on minimizing the makespan. Only Kis [12] considers the same objective function as ours and presents a strong NP-hardness proof and an FPTAS for a special case.

We formulate the problem with the constraint that a fixed number of jobs must be finished when the special job (referred to as the pivot job) starts, and we present an FPTAS. First we propose a dynamic program whose running time depends on the job processing times. To obtain an FPTAS, we use a rounding technique: we round the job processing times so that they are polynomially bounded, then we obtain the optimal schedule for the rounded jobs via dynamic programming and apply that schedule to the original jobs. However, this rounding approach alone does not guarantee a (1+ε)-approximation. To make it work, we identify an important property that holds whenever the rounding technique fails: the job with the largest weight cannot be scheduled after (or be the same job as) the job with the largest processing time. The reason is that when this property is violated, the objective value of the optimal schedule is large enough to make the dynamic programming solution good enough. With this property, on the one hand we apply the rounding technique, and on the other hand we place these two special jobs before or after the pivot job accordingly and solve the resulting subproblem.

The remainder of the paper is organized as follows. The problem formulation is given in Section 2. In Section 3 we propose two dynamic programs to solve the problem, with running times polynomial in the job processing times and the job weights, respectively. We then use these dynamic programs to design an FPTAS in Section 4. In Section 5, we present another FPTAS as a comparison. In Section 6 we conclude our work.

## 2 Formulation

The instance of the problem consists of a set of jobs, a specified pivot job, and an integer.
Each job is defined by its weight and its processing time (sometimes also referred to as its workload or size).
A schedule over an instance is an order of the jobs; we write (resp. ) to mean that one job precedes (resp. succeeds) another, or that the two jobs coincide, in the schedule.
The completion time of a job in a feasible schedule is the time at which the job finishes.
Assuming that the machine is never idle while jobs remain to be processed, the *completion time* of each job in a schedule is well defined.
We also refer to the ratio of a job's weight to its processing time as its density.
The objective is to minimize the total weighted completion time on a single machine such that exactly a given number of jobs are scheduled before the pivot job, where this number is part of the input.

In the optimal solution, jobs that are scheduled before (or after) the pivot job must follow *Smith’s order*.
Under the classical *Smith’s Rule* [19] (or Smith’s order), jobs are executed in non-increasing order of their densities.
Smith’s Rule has been proven to be optimal when there is no position constraint on the pivot job. However, Smith’s Rule alone does not solve our problem: in the special case where the pivot job’s weight grows very large, the jobs placed before the pivot job in the optimal solution should be those with the smallest processing times.
In the sequel, we assume that the pivot job is indexed first and that the remaining jobs are sorted in Smith’s order.
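As a small illustration of the objective and of Smith's order (the tuple encoding of jobs below is our own, purely hypothetical, and not the paper's notation):

```python
# Sketch: total weighted completion time of a schedule, and Smith's order.
# A job is a (weight, processing_time) pair; all names are illustrative only.

def total_weighted_completion_time(schedule):
    """Sum of w_j * C_j for jobs processed back to back without idle time."""
    t = 0      # running completion time
    total = 0
    for w, p in schedule:
        t += p          # completion time of this job
        total += w * t
    return total

def smith_order(jobs):
    """Sort jobs in non-increasing density w_j / p_j (Smith's Rule)."""
    return sorted(jobs, key=lambda job: job[0] / job[1], reverse=True)

jobs = [(3, 2), (1, 4), (5, 1)]           # (weight, processing time)
ordered = smith_order(jobs)               # densities: 1.5, 0.25, 5
assert ordered == [(5, 1), (3, 2), (1, 4)]
# Completion times 1, 3, 7  ->  objective = 5*1 + 3*3 + 1*7 = 21
assert total_weighted_completion_time(ordered) == 21
```

Without a position constraint, this order is optimal; the position constraint on the pivot job is what makes the problem harder.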

## 3 Dynamic Programming with Side Constraints

In this section, we propose pseudo-polynomial dynamic programs to solve this problem. Given an integer, the pivot job, and two sets of jobs, we aim to find the optimal schedule such that (i) jobs from the first set are scheduled before the pivot job, (ii) jobs from the second set are scheduled after the pivot job, and (iii) exactly the given number of jobs are scheduled before the pivot job.

We say that a job is assigned when its order relative to the pivot job is determined, and unassigned otherwise. Therefore, the jobs in the two given sets are assigned; let the remaining jobs be the unassigned jobs. Let the rounded processing time of a job be its processing time rounded with a given parameter. We will see later that the rounded processing times are polynomially bounded and linear in the maximum processing time among the unassigned jobs. In other words, we make sure that for each unassigned job the rounded processing time is polynomially bounded. Similarly, we can round the job weights with a different parameter. In the remainder of this section, we give two dynamic programs for the rounded jobs, based on job processing times in Section 3.1 and based on job weights in Section 3.2, and denote the optimal schedules (job orders) returned by the corresponding dynamic programs for the rounded jobs accordingly.

### 3.1 Based On Job Processing Time

We propose a dynamic program with pseudo-polynomial running time. That is, we assume that the processing time of each job has already been rounded to an integer, and the running time of the dynamic program is polynomial in the number of jobs and the maximum job processing time.

Given a set of jobs , we denote as the total processing time of jobs . For a feasible schedule , let be the subset of jobs in which are scheduled before job in schedule .

A partial schedule of a subset of the jobs assigns to each of these jobs a valid completion time, ensuring that each job can be finished by that completion time (i.e. jobs do not overlap). First, we try every possible completion time of the pivot job, i.e. we aim to find the optimal schedule for each candidate value of this completion time; hence we treat the completion time of the pivot job as fixed. Afterwards, we consider the remaining jobs and focus on two parameters of the optimal schedule: the number of jobs scheduled before the pivot job and their total processing time. We test every possibility of these parameters in the dynamic programming. Let a table entry denote the total weighted completion time of the jobs in an optimal partial schedule in which a given number of jobs, with a given total processing time, are scheduled before the pivot job. The entry is taken to be infinity if no such partial schedule exists.

To find the optimal schedule of jobs , we fix the schedule of job and then solve the subproblem. We show that the completion time of job could be calculated once job is determined to be scheduled before or after job . For , we have if and otherwise. For , we have

###### Proof.

Let be an optimal schedule. Without loss of generality, we assume that , because we try every possible value from . We now prove that Lemma 3.1 gives the correct optimal solution.

For the base case, we have , i.e. . The lemma holds because no feasible schedule exists when or . For the case , we prove the lemma by showing that we have tried every possibility for the schedule of job . More specifically, we show that when job is scheduled before (or after) job , the completion time of job can be computed directly. Therefore, we only need to try two possibilities (before or after job ) for the schedule of job .

In the following we show that the completion time of job is correct. Without loss of generality, we assume and because we try every possibility of parameter and . Given , we denote as the set of jobs that are scheduled no later than (inclusive) in the final optimal schedule.

If job is scheduled before job in the optimal schedule, i.e. , then by Smith’s Rule, we have , i.e. jobs from will not be scheduled before job in the optimal schedule because is scheduled before job . Therefore, , which implies that the completion time of job is . It corresponds to the first (resp. second) case of the equation, if (resp. ).

Otherwise, job is scheduled after job in the optimal schedule. By Smith’s Rule, we have , i.e. jobs from must be scheduled before job in the optimal schedule because job is scheduled after job . Moreover, we have , i.e. jobs from will not be scheduled between and . Therefore . As a result, the completion time of job is . It corresponds to the third case of the equation, if or . ∎

Time complexity: Note that the values , and can be precomputed, and this does not change the overall running time. In other words, the running time depends on the unassigned jobs. The overall time complexity is where . Indeed, the dynamic programming table has size , the computation of each entry takes operations, and the dynamic programming needs time for , so in total the time complexity is . It is important that this quantity depends only on the unassigned jobs .
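Since the recurrences themselves are not reproduced here, the following is only a minimal sketch of a pseudo-polynomial dynamic program in the spirit of this section: it guesses the total processing time placed before the pivot job, scans the jobs in Smith's order, and keeps states indexed by (number of jobs before the pivot, their total processing time). All names and the exact recurrence are our own reconstruction under an assumed (weight, processing time) encoding, not necessarily the paper's.

```python
# Sketch of a pseudo-polynomial DP for "exactly k jobs before the pivot".
# State (m, t): m jobs scheduled before the pivot with total processing time t.

def dp_fixed_position(jobs, pivot, k):
    """jobs: (weight, processing_time) pairs; pivot = (w0, p0); k: position."""
    # Jobs on each side of the pivot follow Smith's order, so scanning in
    # non-increasing density, we only decide before/after for each job.
    jobs = sorted(jobs, key=lambda j: j[0] / j[1], reverse=True)
    w0, p0 = pivot
    total_p = sum(p for _, p in jobs)
    best = float("inf")
    # Guess P_before = total processing time of the k jobs before the pivot.
    for p_before in range(total_p + 1):
        c0 = p_before + p0                  # pivot completion time
        states = {(0, 0): 0}                # (m, t) -> min weighted cost so far
        prefix = 0                          # total p of jobs scanned so far
        for w, p in jobs:
            prefix += p
            nxt = {}
            for (m, t), cost in states.items():
                # option 1: schedule the job before the pivot
                if m < k and t + p <= p_before:
                    key, val = (m + 1, t + p), cost + w * (t + p)
                    if val < nxt.get(key, float("inf")):
                        nxt[key] = val
                # option 2: schedule it after the pivot
                key, val = (m, t), cost + w * (c0 + prefix - t)
                if val < nxt.get(key, float("inf")):
                    nxt[key] = val
            states = nxt
        cost = states.get((k, p_before), float("inf"))
        best = min(best, cost + w0 * c0)
    return best
```

For instance, with jobs (5, 1), (3, 2), (1, 4), a heavy pivot (100, 1) and k = 1, the sketch places the densest job (5, 1) before the pivot. The running time is pseudo-polynomial because the state space grows with the total (rounded) processing time, matching the discussion above.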

### 3.2 Based On Job Weight

In this section, the unassigned jobs are required to have integer weights, as the running time of the dynamic programming depends on the weights of the unassigned jobs. We assume that the weight of each job has already been rounded to an integer. As this problem is highly symmetric, we show via Theorem 3.2 that the dynamic programming of Section 3.1 can be applied. For each job , we create a corresponding job whose processing time equals the original job's weight and whose weight equals the original job's processing time, and we obtain a new instance , i.e. .

The reverse order of is .

###### Proof.

We denote as the objective value of schedule for the jobs with parameter (position constraint parameter). Let be any feasible schedule for jobs with parameter , and let be the reverse of , i.e. if and only if . We prove that

Firstly, in schedule there are jobs which are scheduled before job , since is feasible for with parameter . Therefore, in schedule for there are jobs which are scheduled before job , by the definition of . Moreover, the jobs scheduled before job in schedule are scheduled after job in schedule . A similar analysis applies to the remaining jobs. Consequently, schedule is a feasible schedule for with parameter . We then prove the theorem by the following chain of equalities: . In the first equality, we write out the objective of schedule for . In the second equality, we reorganize the summation. In the third equality, for each job we substitute its weight and processing time by those of the corresponding job, as they have equal values. In the fourth equality, we replace by .

Consequently, the reverse order of the optimal solution for jobs with parameter is optimal for jobs with parameter . ∎
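The identity underlying the proof, that reversing a schedule while swapping each job's weight and processing time preserves the objective value, can be checked numerically. The encoding below is our own illustrative sketch, not the paper's notation:

```python
# Check: reversing a schedule and swapping (w, p) per job keeps the objective.

def objective(schedule):
    """Total weighted completion time, jobs processed without idle time."""
    t, total = 0, 0
    for w, p in schedule:
        t += p
        total += w * t
    return total

def swap_and_reverse(schedule):
    """Each job (w, p) becomes (p, w); the order of jobs is reversed."""
    return [(p, w) for w, p in reversed(schedule)]

sched = [(3, 2), (100, 1), (1, 4), (5, 1)]
assert objective(sched) == objective(swap_and_reverse(sched)) == 353
```

This is why the processing-time-based dynamic program of Section 3.1, run on the transformed instance, also handles the weight-based variant.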

## 4 Fully Polynomial-Time Approximation Scheme (FPTAS)

In this section, we present the FPTAS. Recall that the dynamic programs of the previous section give the optimal solution, but their running times depend on the job processing times (or job weights). The straightforward idea is to round the job processing times so that they are polynomially bounded and then to solve the rounded jobs via dynamic programming. However, this technique alone does not guarantee a (1+ε)-approximation, as we will show with an example. We then extract information from this failure and design an FPTAS.

#### Rounding Technique

For each job , we round its processing time with parameter , i.e. with , where is the maximum processing time over all jobs and is a polynomially bounded function. We obtain the optimal schedule (denoted by ) for the rounded jobs via dynamic programming and analyze the performance of this schedule for the original jobs . Let be the optimal schedule for the jobs . The objective of can be bounded:

(1)

where in the first and third inequalities we use , and in the second inequality we apply the fact that is optimal for the rounded jobs. Similarly, when we round the job weights with a different parameter , i.e. , we have

(2)

The error in Equation (1) may not be bounded by the desired factor, as the following example shows. In the example, we have jobs where , and . After rounding, since , the optimal schedule for the rounded jobs will be , while the optimal schedule for the original jobs is . Therefore, the approximation ratio is , which is a constant.

Note that the error in Equation (1) is . This error may not be bounded by the desired factor if the objective value of the optimal solution is small compared with the product of the maximum job processing time and the maximum job weight. Therefore, we focus on two special jobs: the job of largest processing time and the job of largest weight. Note that if the job of largest weight is scheduled after the job of largest processing time (or is the same job) in the optimal solution, the optimal objective value is large, and then the error in Equation (1) can be bounded when we take :

Therefore, when the rounding technique fails, the job of largest weight must be scheduled before the job of largest processing time. This property, derived from the failure of the rounding technique, plays an important role in the design of the FPTAS.
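As a concrete, purely hypothetical instantiation of the rounding step: a common choice is to round each processing time up to the next multiple of δ = ε·p_max/f(n) for some polynomial f. The sketch below uses f(n) = n for illustration only; it is not necessarily the paper's exact choice:

```python
import math

def round_processing_times(jobs, eps, f_of_n):
    """Scale processing times so the DP's pseudo-polynomial factor is small.

    delta = eps * p_max / f(n); each rounded size is at most about
    f(n)/eps, and delta * rounded_p overestimates p by less than delta.
    """
    p_max = max(p for _, p in jobs)
    delta = eps * p_max / f_of_n
    return delta, [(w, math.ceil(p / delta)) for w, p in jobs]

jobs = [(3, 17), (1, 40), (5, 9)]        # (weight, processing time)
delta, rounded = round_processing_times(jobs, eps=0.5, f_of_n=len(jobs))
for (w, p), (_, rp) in zip(jobs, rounded):
    assert p <= rp * delta < p + delta   # per-job rounding error below delta
```

Running the dynamic program on the small rounded sizes and applying the resulting order to the original jobs is exactly the step whose error Equation (1) bounds.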

#### FPTAS Algorithm

From the above analysis, the rounding technique may fail to return a good solution, and we cannot tell in advance whether it has. In case of failure, we assign some jobs based on the property derived from the failure, namely that the objective value of the optimal schedule is small (i.e. the job of largest weight must be scheduled before the job of largest processing time). Afterwards we run the rounding technique again, and a good solution may still not be returned. Indeed, we can recursively assign jobs and apply the rounding technique. However, as more and more jobs are assigned, the unassigned jobs have small weights and processing times, which no longer reflect the objective value of the optimal schedule. In other words, the above property no longer holds. Instead, we fix the position of one job when this happens, i.e. when the weights or processing times of the unassigned jobs are small enough.

The FPTAS proceeds in rounds. In each round, we aim to fix the position of one job. More precisely, we make this job the first job (or the last job), then take the remaining jobs as a new instance (updating the constraint parameter accordingly) and start over. We guarantee that the solution loses at most a factor of each time we fix one job. Let be the remaining jobs in the current round, i.e. the other jobs are already fixed. We take these remaining jobs as an instance, and let be the optimal schedule for them. In order to find and fix one job, the algorithm goes through a number of iterations and assigns jobs into sets such that either or , where the jobs in the first (resp. second) set are determined to be scheduled before (resp. after) the pivot job. Let be the unassigned jobs, and let , and , . The algorithm handles the problem separately according to the following inequalities.

(3)

(4)

Assuming that the optimal schedule assigns jobs (resp. jobs ) before (resp. after) job , if inequality (3) or (4) holds, we are able to either

i) obtain a feasible schedule with -approximation, or

ii) fix one job from as the first job or last job while losing at most a factor of compared with the optimal schedule of .

###### Proof.

In the following cases, we show that i) can be achieved if Case 1) or 2) occurs and ii) can be achieved if Case 3) or 4) occurs (refer to Algorithm 1, procedure FixJob).

Case 1) If and , we assign jobs before job and jobs after job and terminate the algorithm, where are the first jobs from in non-increasing order of job weight. Let be the corresponding schedule. In the optimal schedule , let (resp. ) be the set of jobs scheduled before (resp. after) job . In schedule , we use the corresponding notation . Schedule is obtained from schedule by advancing and rearranging jobs into the order of the optimal schedule, hence the completion time of each job in schedule is at most its completion time in the optimal schedule, i.e. . Since the jobs (resp. ) in schedule are ordered by Smith’s Rule, the objective value of is at most that of . Note that since , we have , because these jobs are scheduled before job , and . Thus the objective value of is at most:

where in the first inequality we apply , since the jobs are selected by weight from the jobs , and in the second inequality we apply . As for (one can enumerate all possible solutions for ). The claim follows.

Case 2) If and , we assign jobs before job and jobs after job and terminate the algorithm, where are the first jobs from in non-decreasing order of job processing time. An argument similar to Case 1) applies.

Case 3) If and , we place job as the last job among and reduce to a subproblem by taking the remaining jobs as a new instance. Let be the schedule obtained from by placing job after the jobs . Schedule is feasible because job must be scheduled after job in the optimal schedule, as . Hence, after the transformation, the completion time of any job of in schedule is at most that in schedule . By assumption, in the optimal schedule , job must be scheduled after all jobs in , so we have and . Therefore,

Case 4) If and , we place job as the first job among . An argument similar to Case 3) applies. ∎

Assume that the rounding technique fails to return a (1+ε)-approximate solution every time; then either inequality (3) or (4) will hold after at most iterations.

###### Proof.

Initially, we have . Suppose and . Let job be the job of largest processing time among the unassigned jobs, and let . In each iteration, we first apply the rounding technique (rounding the job processing times) with parameter . Note that the time complexity of the dynamic programming in Section 3 depends only on the unassigned jobs.

If , we claim that the rounding technique returns a (1+ε)-approximate solution because , as the error in Equation (1) can be bounded:

Otherwise, , and we proceed by three cases.

Case 1) If . We assign job into set (as the optimal schedule does) and remove job from , i.e. . Then we apply the rounding technique (rounding the job processing times) again with , where . Note that job is still the job of largest weight among the unassigned jobs . We claim that we would have if the rounding technique fails again. Similarly, if , we have

That is, the rounding technique returns a (1+ε)-approximate solution and the claim follows. Therefore, we have if the rounding technique fails again, which implies that the algorithm stays in Case 1). Hence, we continue assigning jobs into set until at some moment (refer to Algorithm 1, procedure RepeatSize). Consequently, at least one application of the rounding technique succeeds.

Case 2) If . We apply the rounding technique of rounding the job weights by taking (refer to Algorithm 1, procedure RepeatWeight). The error in Equation (2) can be bounded because :

A similar argument shows that once Case 2) is triggered, the algorithm stays in Case 2) and at least one application of the rounding technique succeeds.

Case 3) If . We assign job into set and job into set , and continue to the next iteration (refer to Algorithm 1, line 10).

One can see that at most iterations are needed to assign all jobs, i.e. the procedures RepeatSize and RepeatWeight in Algorithm 1 are called at most times. ∎

Algorithm 1 is a (1+ε)-approximation with time complexity .

###### Proof.

We first prove that the algorithm gives a (1+ε)-approximate solution. The solution returned by the algorithm comes either from a successful application of the rounding technique or from the termination case in the procedure FixJob (lines 29 and 32), and before that the algorithm may have already fixed some jobs (lines 35 and 38). By Lemma 4, the termination case returns a -approximate solution, and a successful rounding technique also returns a -approximate solution, compared with . When fixing one job, we lose a factor of by Lemma 4, compared with . One can think of transforming the optimal solution of the jobs into our solution by fixing one job at a time; each time we lose a factor of compared with . Therefore, the total approximation factor from fixing jobs grows exponentially, which is . Finally, the overall approximation factor is .

We now analyze the time complexity of the algorithm. After rounding, the largest processing time or weight of an unassigned job is at most , since we take when rounding job processing times or when rounding job weights. Therefore, each dynamic programming run takes time . In each iteration, the procedures RepeatSize and RepeatWeight are each called once, and each procedure executes the dynamic programming at most times. We need at most iterations to fix one job (lines 35 and 38) or to terminate the algorithm (lines 29 and 32), and we need to fix at most jobs. Finally, the time complexity is . ∎

## 5 Different Approach

In this section, we show that a different FPTAS can be constructed based on the approach of Woeginger [20], who proposed conditions for identifying whether a dynamic program can be transformed into an FPTAS using the technique of trimming the state space.

First, we present a different dynamic programming formulation, and then show that the conditions are satisfied. We assume that the jobs have integer weights and integer processing times. Recall that the pivot job is indexed first and that the remaining jobs are sorted in Smith’s order. We start from the schedule which contains only the pivot job, and then add the remaining jobs into the schedule one by one. The dynamic programming algorithm works with vector sets , where in phase () job is considered and the new set is computed from the previous one. A state vector encodes a partial schedule without idle time: its components are the number of jobs that are scheduled before the pivot job, the total processing time of the jobs before the pivot job, the total weight of the jobs after (and including) the pivot job, and the objective value of the partial schedule.

Initialization. Set .

Phase j. For every vector in , put the vectors and into .

Output. Return the vector that minimizes the objective value such that .

Note that in phase , by Smith’s rule, job can only be scheduled at the very end or immediately before the pivot job. If job is scheduled at the end, we simply append it to the schedule, and the objective value increases only by the weighted completion time of job . Otherwise, job is scheduled immediately before the pivot job. In this case, besides the weighted completion time of job itself, the completion times of the jobs scheduled after it (including the pivot job) increase by its processing time; therefore the objective value increases by
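Under our own encoding assumptions (jobs as (weight, processing time) tuples; states keyed by the three counters with the objective value as the dictionary value), the phase step just described can be sketched as follows:

```python
# Woeginger-style state-vector DP sketch: states (m, P_before, W_after) -> F,
# where m = jobs before the pivot, P_before = their total processing time,
# W_after = total weight after and including the pivot, F = objective so far.

def dp_state_vectors(jobs, pivot, k):
    """Add jobs in Smith's order; each goes at the end or right before pivot."""
    jobs = sorted(jobs, key=lambda j: j[0] / j[1], reverse=True)
    w0, p0 = pivot
    # Initialization: the schedule contains only the pivot job.
    states = {(0, 0, w0): w0 * p0}
    prefix = 0                          # total p of non-pivot jobs added so far
    for w, p in jobs:
        prefix += p
        nxt = {}
        for (m, pb, wa), f in states.items():
            # append at the very end: completion time = prefix + p0
            key, val = (m, pb, wa + w), f + w * (prefix + p0)
            if val < nxt.get(key, float("inf")):
                nxt[key] = val
            # insert right before the pivot: shifts pivot and later jobs by p
            key, val = (m + 1, pb + p, wa), f + w * (pb + p) + p * wa
            if val < nxt.get(key, float("inf")):
                nxt[key] = val
        states = nxt
    # Output: minimum objective among states with exactly k jobs before pivot.
    return min(f for (m, _, _), f in states.items() if m == k)
```

Trimming this state space (merging states whose counters are within a factor of each other) is what turns the DP into an FPTAS in Woeginger's framework.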
