Approximate Schedules for Non-Migratory Parallel Jobs in Speed-Scaled Multiprocessor Systems

11/28/2018 · Alexander Kononov et al. · Novosibirsk State University

We consider the problem of scheduling rigid parallel jobs on variable-speed processors so as to minimize the total energy consumption. Each job is specified by its processing volume and by the required number of processors. We propose new constant-factor approximation algorithms for the non-migratory cases in which all jobs have a common release time and/or a common deadline.


1 Introduction

Modern processors can dynamically vary their speed while executing jobs. The instantaneous power required to run a job at speed $s$ is $P(s)=s^{\alpha}$, where $\alpha>1$ is a constant. The energy consumption is the power integrated over time. We assume that release times and deadlines are given for the jobs, and the aim is to compute the execution speeds of the jobs and to construct a feasible schedule so as to minimize the total energy consumption. Various variants of speed scaling scheduling exist, depending on the types of jobs and processors and on the system characteristics. One line of the algorithmic and complexity study of this area is devoted to revisiting classical scheduling problems with dynamic speed scaling (see, e.g., [2, 4, 8, 10, 12, 20, 21] and others).

In this paper we study particular cases of speed scaling scheduling of parallel jobs, each of which may require several processors simultaneously [11]. The motivation for considering parallel jobs is that some jobs cannot be performed asynchronously on modern computers. Such situations arise in testing and reliable computing, parallel applications on graphics cards, computer control systems, and other areas.

Energy-efficient scheduling has been widely investigated for single-processor jobs (see, e.g., the surveys [1, 12]). The preemptive single-processor setting is polynomially solvable; efficient algorithms are developed in [18, 20]. However, the same problem without preemptions is NP-hard [5], even in the case of tree-structured jobs. Antoniadis et al. [5] proposed constant-factor approximation algorithms (for any fixed $\alpha$) for this case and for the general non-preemptive case on one processor. Later, Bampis et al. [7] presented an algorithm for the latter problem whose approximation ratio is expressed in terms of $\tilde{B}_{\alpha}$, the generalized Bell number.

For the migratory problem, where $m$ parallel processors are available and all jobs are of single-processor type, polynomial time algorithms were given in [2, 4, 8, 20]. To the best of our knowledge, the algorithm from [20] has the best time complexity among them.

The preemptive multiprocessor setting without migration is NP-hard in the strong sense, as proved in [3]. Albers et al. [3] provided constant-factor approximation algorithms (for any fixed $\alpha$) for jobs with unit works and arbitrary deadlines, for jobs with arbitrary works and agreeable deadlines, and for jobs with common release times or common deadlines. Moreover, they showed that the problem with unit works is polynomially solvable for agreeable deadlines. Chen et al. [9] proposed a greedy algorithm with a proven approximation guarantee for the case when all jobs must be executed in one time interval. For non-preemptive instances, an approximation algorithm was presented in [6]; it exploits the idea of transforming an optimal preemptive schedule into a non-preemptive one.

Scheduling of multiprocessor jobs has been extensively investigated for regular time criteria (see, e.g., the book of Drozdowski [11]), but it is poorly studied for the criterion of energy minimization. Recently, an approximation algorithm was proposed in [15] for the speed scaling scheduling of rigid parallel jobs with migration. The algorithm returns a solution within a given additive error and runs in time polynomial in the input size and in the accuracy parameter. Note that this algorithm is pseudopolynomial and is based on solving a linear configuration program with the Ellipsoid method. In [16], we developed a strongly polynomial algorithm that achieves a constant-factor approximation for the same problem. We also showed that most of the NP-hardness proofs for scheduling problems with the maximum lateness criterion can easily be transformed to their speed scaling counterparts [15].

2 Problem Statement and Our Results

We assume that a computer system consists of $m$ parallel identical processors, which can dynamically change their speeds. A set $\mathcal{J}=\{1,\dots,n\}$ of parallel jobs is given. Each job $j$ has a release time $r_j$, a deadline $d_j$ and a processing volume (work) $W_j$. The number of processors simultaneously required by job $j$ is called its size and is denoted by $size_j$. Any subset of processors of this cardinality can be used to execute job $j$. Jobs with this property are called rigid jobs [11]. Migration of a job among different subsets of processors is disallowed. Depending on the problem variant considered, job preemption is either allowed or forbidden.

The speed of a job is the rate at which its work is completed; a continuous spectrum of processor speeds is available. The power consumed when running at speed $s$ is $P(s)=s^{\alpha}$, where $\alpha>1$ is a constant close to 3 [12]. The energy used is power integrated over time. Each of the $m$ processors may operate at a variable speed, but all processors executing the same job simultaneously must run at the same speed.
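For concreteness, the following small Python sketch (our own illustration; the function name and the convention that a rigid job's work is processed at the common speed of its processors, while each occupied processor draws power $s^{\alpha}$, are assumptions of the sketch) computes the energy of a job executed at a constant speed. By convexity of $P(s)=s^{\alpha}$, a constant speed is energy-optimal once the execution time of a job is fixed.

```python
def job_energy(work: float, size: int, duration: float, alpha: float = 3.0) -> float:
    """Energy of a rigid job run non-preemptively at constant speed.

    The job has `work` units of processing volume and occupies `size`
    processors for `duration` time units.  Each processor runs at speed
    s = work / duration and draws power s**alpha, so the total energy is
        size * s**alpha * duration = size * work**alpha / duration**(alpha - 1).
    """
    speed = work / duration
    return size * speed ** alpha * duration


# Halving the duration (doubling the speed) multiplies the energy by 2**(alpha - 1):
print(job_energy(10.0, 2, 2.5) / job_energy(10.0, 2, 5.0))  # 4.0 for alpha = 3
```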

A schedule is feasible if each processor executes at most one job at a time and each job completes its required work between its release time and deadline. The problem is to find a feasible schedule that minimizes the total energy consumed on all the processors. We study both the preemptive and the non-preemptive variant of non-migratory energy-efficient scheduling of rigid jobs [7, 11].

We assume that all jobs have a common release time and/or a common deadline. We present strongly polynomial time algorithms achieving constant factor approximation guarantees for these particular cases. Our algorithms consist of two stages. At the first stage we compute a lower bound on the minimal energy consumption and intermediate execution times of the jobs. At the second stage, we determine the final speeds of the jobs and schedule them.

A lower bound on the objective function and intermediate processing times of the jobs can be found in polynomial time using the method developed in [16], which is based on a reduction of the speed scaling problem to a special min-cost max-flow problem [20]. Here we propose more efficient approaches for the problem instances under consideration.

Both the preemptive and the non-preemptive non-migratory problems are strongly NP-hard even in the case of single-processor jobs with arbitrary processing volumes, as proved in [3]. Simple reductions from 2-PARTITION and 3-PARTITION show that the special cases considered here remain NP-hard (one ordinarily, via 2-PARTITION, and one in the strong sense, via 3-PARTITION) even if all jobs have unit processing volumes. Using the approach from [15] (Section 3, NP-hardness results), we can show strong NP-hardness even in the case of two processors; the proof is almost a step-by-step reproduction of the NP-hardness proof of Theorem A1 in [17].

The paper is organized as follows. In Section 3 we propose a constant-factor approximation algorithm for non-preemptive scheduling when all jobs must be executed in one common time interval. In Section 4, approximate schedules are constructed for the preemptive and non-preemptive problems with jobs sharing a common release time (or, symmetrically, a common deadline). The last section contains concluding remarks.

3 Common Release Date and Deadline

In this section we consider the non-preemptive case of the problem in which all jobs arrive at time $0$ and share a global deadline $D$, i.e., all jobs must be executed within the common interval $[0, D]$.

The first stage. We first compute auxiliary durations of the jobs and a lower bound on the objective function. Let $h$ denote the current number of unoccupied processors and let $\mathcal{J}'$ be the set of currently considered jobs. Initially $h = m$ and $\mathcal{J}' = \mathcal{J}$.

We consider the jobs one by one in order of non-increasing works. If the current job $j$ satisfies $W_j \ge \frac{1}{h}\sum_{i\in\mathcal{J}'} size_i\, W_i$, then we assign the duration $p_j = D$ to this job and set $h := h - size_j$ and $\mathcal{J}' := \mathcal{J}'\setminus\{j\}$; after that we pass to the next job. Otherwise, all remaining jobs satisfy the opposite inequality, and we assign the durations $p_i = \frac{h\,D\,W_i}{\sum_{k\in\mathcal{J}'} size_k\, W_k}$, $i \in \mathcal{J}'$, to them.

The presented approach guarantees that $p_j \le D$ for every job $j$ and that $\sum_{j\in\mathcal{J}} size_j\, p_j \le mD$, and it yields the lower bound $\sum_{j\in\mathcal{J}} size_j\, W_j^{\alpha} / p_j^{\alpha-1}$ on the objective function.
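A minimal Python sketch of this first stage is given below. It follows our reading of the rule above (the threshold $W_j \ge \frac{1}{h}\sum_{i\in\mathcal{J}'} size_i W_i$ and the proportional durations are our reconstruction of the water-filling argument, and the function names are ours), so it is an illustration under these assumptions rather than the authors' exact pseudocode.

```python
def assign_durations(works, sizes, m, D):
    """Tentative durations p_j for rigid jobs sharing the interval [0, D].

    Jobs are scanned in non-increasing order of work.  A job whose
    proportional share of processor time would exceed D receives the
    whole interval; the remaining jobs split the leftover processor
    time h*D in proportion to their works.  The output satisfies
    p[j] <= D and sum_j sizes[j] * p[j] <= m * D.
    """
    n = len(works)
    p = [0.0] * n
    h = m                                   # currently unoccupied processors
    remaining = sorted(range(n), key=lambda j: -works[j])
    total = sum(sizes[j] * works[j] for j in remaining)

    while remaining:
        j = remaining[0]
        if works[j] * h >= total:           # "large" job: give it the whole interval
            p[j] = D
            h -= sizes[j]
            total -= sizes[j] * works[j]
            remaining.pop(0)
        else:                               # all remaining jobs are "small"
            for i in remaining:
                p[i] = h * D * works[i] / total
            break
    return p


def energy_lower_bound(works, sizes, p, alpha=3.0):
    """Lower bound sum_j size_j * W_j**alpha / p_j**(alpha - 1)."""
    return sum(s * w ** alpha / t ** (alpha - 1)
               for w, s, t in zip(works, sizes, p))
```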

The second stage. At the second stage we use the “non-preemptive list-scheduling” algorithm [19] to construct a feasible schedule in the interval $[0, D]$. Whenever a subset of processors falls idle, this algorithm starts a not yet scheduled job that does not require more processors than are currently available, until all jobs in $\mathcal{J}$ are assigned. The algorithm runs in strongly polynomial time.
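The following event-driven Python sketch (the function name and the data layout are ours) illustrates the greedy rule: whenever processors become idle, any not yet scheduled job that fits into the currently free processors is started. It assumes $size_j \le m$ for every job.

```python
import heapq

def list_schedule(p, sizes, m):
    """Non-preemptive list scheduling of rigid jobs on m processors.

    p[j] is the processing time and sizes[j] the number of processors
    of job j.  Returns (start_times, makespan).
    """
    n = len(p)
    start = [0.0] * n
    pending = list(range(n))       # jobs not yet started, in list order
    running = []                   # heap of (finish_time, size)
    free, t = m, 0.0

    while pending or running:
        # start every pending job (in list order) that currently fits
        still_pending = []
        for j in pending:
            if sizes[j] <= free:
                start[j] = t
                free -= sizes[j]
                heapq.heappush(running, (t + p[j], sizes[j]))
            else:
                still_pending.append(j)
        pending = still_pending
        if not running:
            break                  # cannot happen when sizes[j] <= m for all j
        # jump to the next completion time and release its processors
        t, s = heapq.heappop(running)
        free += s
        while running and running[0][0] == t:
            _, s = heapq.heappop(running)
            free += s

    makespan = max((start[j] + p[j] for j in range(n)), default=0.0)
    return start, makespan
```

For instance, with m = 2, p = [3, 2, 2] and sizes = [2, 1, 1], the first job occupies both processors in [0, 3], the two single-processor jobs start together at time 3, and the makespan is 5.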

We claim that the length of the constructed schedule exceeds $D$ by at most a constant factor (see Lemma 1 below). By increasing the speed of each job by this factor we obtain a schedule of length at most $D$. The total energy consumption is thereby increased by at most a constant factor, depending only on $\alpha$, in comparison with the lower bound. As a result, we have

Theorem 3.1

A constant-factor approximate schedule can be found in strongly polynomial time for both the preemptive and the non-preemptive problem with a common release time and a common deadline.

Using the results from [14], we conclude that the bound of Lemma 1 for the “non-preemptive list-scheduling” algorithm is tight even if all jobs are of single-processor type. Consequently, the factor by which the energy consumption grows when the resulting schedule is compressed into the interval $[0, D]$ cannot be improved, and the approximation ratio of our algorithm is tight as well.

Lemma 1

Given $m$ processors, an interval $[0, D]$, and a set of jobs with processing times $p_j \le D$ and sizes $size_j \le m$ such that $\sum_{j} size_j\, p_j \le mD$, the length of the schedule $\sigma$ constructed by the “non-preemptive list-scheduling” algorithm is at most a constant multiple of $D$.

Proof (sketch). Let $C_{\max}$ denote the length of schedule $\sigma$. If sufficiently many processors are busy at every time instant of $[0, C_{\max}]$, the bound follows directly from the total load $\sum_{j} size_j\, p_j \le mD$. Otherwise, let $[t_1, t_2]$ be the last time interval of $\sigma$ during which only few processors are busy. By the construction of $\sigma$ there is a job $j'$ that is executed during the whole interval $[t_1, t_2]$; let $C_{j'}$ be its completion time. At every time instant before the start of $j'$ at least $m - size_{j'} + 1$ processors are busy, since otherwise $j'$ would have been started earlier; at least $size_{j'}$ processors are busy while $j'$ is executed; and every job that starts after $t_2$ requires more processors than were idle during $[t_1, t_2]$, since otherwise it would have been started there. Combining these bounds on the processor load with $\sum_{j} size_j\, p_j \le mD$ and $p_{j'} \le D$ yields the claimed bound on $C_{\max}$. ∎

4 Common Release Date or Deadline

In this section we study the problem without migration in which all jobs are released at time $0$ but have individual deadlines $d_j$.

4.1 Preemptive Problem

Here we consider the case when the preemption of jobs is allowed.

The first stage. It can easily be checked that the optimal energy consumption of rigid jobs with sizes $size_j$ and works $W_j$ on $m$ processors is at least a certain factor times the optimal energy consumption of an auxiliary instance of single-processor jobs with appropriately scaled works on one processor (the result is proved similarly to Lemma 1 in [3]). So, if we find an optimal solution of the auxiliary problem and decrease the speeds of the jobs by the corresponding factor, then a lower bound on the energy consumption of the rigid jobs is obtained. However, such an approach may lead to an execution time of a job $j$ that exceeds its deadline $d_j$.

In [21], Yao et al. showed that the preemptive problem on a single processor is solvable in polynomial time. They proposed an efficient algorithm, called YSD, that repeatedly identifies time intervals of highest density. The density of an interval $I$ is the total work that is released and must be completed within $I$, divided by the length of $I$. The algorithm repeatedly schedules the jobs of a highest-density interval at a speed equal to that density and then recurses on the reduced subproblem.
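For intuition, the sketch below (our own illustration of the classical density notion, not the modified algorithm described next) computes the highest-density interval when all jobs are released at time 0; in that case every candidate interval is a prefix $[0, d]$ for some deadline $d$.

```python
def highest_density_prefix(jobs):
    """Find the highest-density interval [0, d] for jobs released at time 0.

    `jobs` is a list of (work, deadline) pairs.  The density of [0, d] is
    the total work of jobs with deadline <= d divided by d.  Returns the
    pair (best_deadline, best_density).
    """
    best_d, best_density = None, -1.0
    for d in sorted({dl for _, dl in jobs}):
        total_work = sum(w for w, dl in jobs if dl <= d)
        density = total_work / d
        if density > best_density:
            best_d, best_density = d, density
    return best_d, best_density


# Example: two jobs with works 4 and 2 and deadlines 2 and 6.
# Interval [0, 2] has density 4/2 = 2, interval [0, 6] has density 6/6 = 1,
# so [0, 2] is the highest-density interval.
print(highest_density_prefix([(4.0, 2.0), (2.0, 6.0)]))   # (2.0, 2.0)
```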

We propose a modification of the YSD algorithm [21] for obtaining a lower bound on the minimal energy consumption of the considered problem. At the same time, for each rigid job $j$ we find an intermediate duration $p_j$ such that the following two conditions hold:

(1) $p_j \le d_j$ for every job $j \in \mathcal{J}$;
(2) $\sum_{i:\, d_i \le d_j} size_i\, p_i \le m\, d_j$ for every job $j \in \mathcal{J}$.

Now we construct a special schedule of the auxiliary jobs on one processor, which will ensure conditions (1) and (2) for the corresponding rigid jobs. Let $\mathcal{J}(I)$ denote the set of jobs that have to be processed within a time interval $I$. Initially all intervals have the form $[0, d_j]$, since all jobs are released at time $0$.

Modified YSD Algorithm

Step 1. Repeat Steps 1.1 and 1.2 while unassigned jobs remain:

Step 1.1. Let $I^*$ be an interval of maximum density, i.e., an interval that maximizes the total work of the jobs that must be processed within it divided by its length.

Step 1.2. If the inequality

(3)

holds for all jobs $j \in \mathcal{J}(I^*)$, then process these jobs in the interval $I^*$ at a speed equal to the maximum density, i.e., set the processing time of each job $j \in \mathcal{J}(I^*)$ to its work divided by this density. Then remove these jobs, and adjust the remaining jobs as if the time interval $I^*$ did not exist. Endpoints and densities of the remaining intervals are updated before the next execution of Step 1.1.

Otherwise, enumerate the jobs of $\mathcal{J}(I^*)$ for which inequality (3) is violated. Each such job $j$ receives the largest duration allowed by condition (1), and job $j$ together with its interval is removed from further consideration; the remaining jobs and intervals are adjusted accordingly.

Step 2. Return the resulting durations of the jobs.

At least one job is removed at each call of Step 1.2, and removing a job requires additional operations to update the information on the remaining jobs and intervals. Therefore, the running time of the algorithm is polynomial in the number of jobs.

The intermediate processing times $p_j$ of the rigid jobs are obtained by scaling the durations computed for the auxiliary single-processor instance, and the corresponding energy gives the lower bound on the energy consumption. Conditions (1) and (2) hold for the computed processing times.

At the second stage we use the “preemptive earliest deadline list-scheduling” algorithm to construct a feasible schedule of the preemptive problem.

The second stage. The “preemptive earliest deadline list-scheduling” algorithm schedules the jobs in order of non-decreasing deadlines as follows. Depending on its size, job $j$ is either appended at the end of the current schedule, or started at the earliest time instant at which $size_j$ processors are idle and processed for $p_j$ time units, with preemptions where necessary. The algorithm runs in strongly polynomial time.

We claim that the completion time $C_j$ of each job $j$ in the constructed schedule is at most a constant multiple of its deadline $d_j$ (see Lemma 2 below). Hence, increasing the speeds by this constant factor yields a feasible schedule, and the total energy consumption is increased by at most a constant factor depending only on $\alpha$.

Obviously, by interchanging release times and deadlines, the presented algorithm also handles the case of jobs with individual release times but a common deadline. As a result, we have

Theorem 4.1

A constant-factor approximate schedule can be found in strongly polynomial time for the preemptive problems with a common release time or, symmetrically, with a common deadline.

Lemma 2

Given $m$ processors and a set of jobs with deadlines $d_j$, processing times $p_j$ and sizes $size_j$ that satisfy conditions (1) and (2), the completion time $C_j$ of each job $j$ in the schedule $\sigma$ constructed by the “preemptive earliest deadline list-scheduling” algorithm is at most a constant multiple of $d_j$.

Proof (sketch). Consider an arbitrary deadline $d$, and let $j$ be a job with $d_j = d$ whose completion time $C_j$ is maximal in schedule $\sigma$ among all jobs with this deadline.

Note that every job scheduled before $j$ by the algorithm has a deadline of at most $d$. Let $\sigma_d$ denote the part of schedule $\sigma$ that contains only these jobs and job $j$. We show that $C_j$ is at most a constant multiple of $d$.

If sufficiently many processors are busy at every time instant of $\sigma_d$, the bound follows directly from condition (2), which bounds the total load of the jobs of $\sigma_d$ by $m\,d$. Otherwise, let $j'$ be the last job of $\sigma_d$ whose size is small. At every time instant before the start of $j'$ at least $m - size_{j'} + 1$ processors are busy (otherwise $j'$ would have been started earlier), at least $size_{j'}$ processors are busy while $j'$ is executed, and every job of $\sigma_d$ that starts after the completion of $j'$ requires many processors. Combining these bounds on the processor load with condition (2) and with $p_{j'} \le d_{j'} \le d$ yields the claimed bound on $C_j$. ∎

4.2 Non-Preemptive Problem

In this subsection we consider the case when preemptions are disallowed and each job requires at most a prescribed fraction of the $m$ processors.

A lower bound on the objective function and intermediate processing times of the jobs are calculated as in Subsection 4.1. At the second stage, however, we use the “non-preemptive earliest deadline list-scheduling” algorithm to construct a schedule. This algorithm assigns the jobs to the schedule as early as possible in order of non-decreasing deadlines; its running time is also strongly polynomial.
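One natural implementation of this rule is sketched below in Python (the function and helper names, and the cubic-time bookkeeping, are our own choices): jobs are taken in non-decreasing deadline order and each one is started at the earliest moment at which enough processors are simultaneously idle for its whole processing time.

```python
def edl_schedule(jobs, m):
    """Non-preemptive earliest-deadline list scheduling of rigid jobs.

    `jobs` is a list of (deadline, processing_time, size) tuples with
    size <= m.  Returns a list of (start, finish, size) triples in the
    order in which the jobs were placed.
    """
    placed = []                                   # already scheduled jobs

    def fits(t, p, size):
        # check that at most m - size processors are busy throughout [t, t + p)
        checkpoints = sorted({t} | {s for s, f, _ in placed if t < s < t + p})
        for e in checkpoints:
            busy = sum(sz for s, f, sz in placed if s <= e < f)
            if busy + size > m:
                return False
        return True

    for deadline, p, size in sorted(jobs):        # non-decreasing deadlines
        # the earliest feasible start is time 0 or some job's completion time
        candidates = sorted({0.0} | {f for _, f, _ in placed})
        start = next(t for t in candidates if fits(t, p, size))
        placed.append((start, start + p, size))
    return placed
```

Scanning only time 0 and the completion times of already placed jobs as candidate start times is sufficient, because the number of busy processors changes only at start and completion events of the placed jobs.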

The property of Lemma 2 also holds for the “non-preemptive earliest deadline list-scheduling” algorithm under the size restriction of this subsection. So, increasing the job speeds in the constructed schedule by a constant factor leads to a feasible solution. Thus the following theorem holds.

Theorem 4.2

A constant-factor approximate schedule can be found in strongly polynomial time for the non-preemptive problems with a common release time or a common deadline, under the job size restriction of this subsection.

5 Conclusion

We have studied energy minimization under a global release time and/or a global deadline constraint. Strongly polynomial time approximation algorithms are developed for rigid jobs with no migration, and our algorithms have constant factor approximation guarantees.

Further research might address problems with a more complex structure, in which processors are heterogeneous and jobs have alternative execution modes with different characteristics.


References

  • [1] S. Albers, Energy-efficient algorithms, Communications of the ACM, 53:5 (2010), 86–96. DOI: 10.1145/1735223.1735245
  • [2] S. Albers, A. Antoniadis, G. Greiner, On multi-processor speed scaling with migration, Journal of Computer and System Sciences, 81 (2015), 1194–1209. DOI: 10.1016/j.jcss.2015.03.001
  • [3] S. Albers, F. Müller, S. Schmelzer, Speed scaling on parallel processors, Algorithmica, 68:2 (2014), 404–425. DOI: 10.1007/s00453-012-9678-7
  • [4] E. Angel, E. Bampis, F. Kacem, D. Letsios, Speed scaling on parallel processors with migration, 18th International European Conference on Parallel and Distributed Computing, Lecture Notes in Computer Science, 7484 (2012), 128–140. DOI: 10.1007/978-3-642-32820-6_15
  • [5] A. Antoniadis, C.C. Huang, Non-preemptive speed scaling, J. Sched., 16:4 (2013), 385–394. DOI: 10.1007/s10951-013-0312-6
  • [6] E. Bampis, A. Kononov, D. Letsios, G. Lucarelli, I. Nemparis, From preemptive to non-preemptive speed-scaling scheduling, Discrete Applied Mathematics, 181 (2015), 11–20. DOI: 10.1016/j.dam.2014.10.007
  • [7] E. Bampis, A. Kononov, D. Letsios, G. Lucarelli, M. Sviridenko, Energy efficient scheduling and routing via randomized rounding, J. Sched., 21:1 (2018), 35–51. DOI: 10.1007/s10951-016-0500-2
  • [8] B.D. Bingham, M.R. Greenstreet, Energy optimal scheduling on multiprocessors with migration, International Symposium on Parallel and Distributed Processing with Applications (ISPA'08), 2008, IEEE, 153–161. DOI: 10.1109/ISPA.2008.128
  • [9] J. Chen, H. Hsu, K. Chuang, C. Yang, A. Pang, T. Kuo, Multiprocessor energy-efficient scheduling with task migration considerations, 16th Euromicro Conference on Real-Time Systems (ECRTS2004), 2004, IEEE, 101–108. DOI: 10.1109/EMRTS.2004.1311011
  • [10] V. Cohen-Addad, Z. Li, C. Mathieu, I. Milis, Energy-efficient algorithms for non-preemptive speed-scaling, International Workshop on Approximation and Online Algorithms (WAOA 2014), Lecture Notes in Computer Science, 8952 (2015), 107–118. DOI: 10.1007/978-3-319-18263-6_10
  • [11] M. Drozdowski, Scheduling for Parallel Processing, Springer-Verlag, London, 2009.
  • [12] M.E.T Gerards, J.L. Hurink, P.K.F. Hölzenspies, A survey of offline algorithms for energy minimization under deadline constraints, J. Sched., 19 (2016), 3–19. DOI: 10.1007/s10951-015-0463-8
  • [13] G. Greiner, T. Nonner, A. Souza, The bell is ringing in speed-scaled multiprocessor scheduling, Theory of Computing Systems, 54:1 (2014), 24–44. DOI: 10.1007/s00224-013-9477-9
  • [14] B. Johannes, Scheduling parallel jobs to minimize the makespan, J. Sched., 9 (2006), 433–452. DOI: 10.1007/s10951-006-8497-6
  • [15] A. Kononov, Y. Kovalenko, On speed scaling scheduling of parallel jobs with preemption. International Conference on Discrete Optimization and Operations Research (DOOR-2016), Lecture Notes in Computer Science, 9869 (2016), 309–321. DOI: 10.1007/978-3-319-44914-2_25
  • [16] A. Kononov, Y. Kovalenko, An approximation algorithm for preemptive speed scaling scheduling of parallel jobs with migration, International Conference on Learning and Intelligent Optimization (LION-11), Lecture Notes in Computer Science, 10556 (2017), 351–357. DOI: 10.1007/978-3-319-69404-7_30
  • [17] C-Y. Lee, X. Cai, Scheduling one and two-processor tasks on two parallel processors, IIE Transactions, 31:5 (1999), 445–455. DOI: 10.1023/A:1007501324572
  • [18] M. Li, F. Yao, H. Yuan, An algorithm for computing optimal continuous voltage schedules, International Conference on Theory and Applications of Models of Computation (TAMC 2017), Lecture Notes in Computer Science, 10185 (2017), 389–400. DOI: 10.1007/978-3-319-55911-7_28
  • [19] E. Naroska, U. Schwiegelshohn, On an on-line scheduling problem for parallel jobs, Information Processing Letters, 81:6 (2002), 297–304. DOI: 10.1016/S0020-0190(01)00241-1
  • [20] A. Shioura, N. Shakhlevich, V. Strusevich, Energy saving computational models with speed scaling via submodular optimization, Third International Conference on Green Computing, Technology and Innovation (ICGCTI2015), 2015, Serdang, 7–18.
  • [21] F. Yao, A. Demers, S. Shenker, A scheduling model for reduced CPU energy, 36th Annual Symposium on Foundation of Computer Science (FOCS 1995), 1995, IEEE, 374–382. DOI: 10.1109/SFCS.1995.492493