1 Introduction
The sporadic task model has been widely adopted to model recurring executions of tasks in real-time systems [28]. A sporadic real-time task τ_i is defined by its minimum inter-arrival time T_i, its timing constraint or relative deadline D_i, and its (worst-case) execution time C_i. A sporadic task represents an infinite sequence of task instances, also called jobs, that arrive subject to the minimum inter-arrival time constraint. That is, any two consecutive jobs of task τ_i must be temporally separated by at least T_i. When a job of task τ_i arrives at time t, the job must finish no later than its absolute deadline t + D_i. According to the Liu and Layland task model [27], the minimum inter-arrival time of a task can also be interpreted as the period of the task.
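The task model above can be made concrete with a small data structure. This is an illustrative sketch only; the class and field names (`C`, `D`, `T`) are our own choice, following the symbols C_i, D_i, and T_i:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SporadicTask:
    C: float  # worst-case execution time
    D: float  # relative deadline
    T: float  # minimum inter-arrival time (period)

    def absolute_deadline(self, arrival: float) -> float:
        """A job arriving at time `arrival` must finish by arrival + D."""
        return arrival + self.D

    def earliest_next_arrival(self, arrival: float) -> float:
        """Consecutive jobs of the same task are separated by at least T."""
        return arrival + self.T
```

For an implicit-deadline task, `D == T`; for a constrained-deadline task, `D <= T`.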
To schedule real-time tasks on multiprocessor platforms, three paradigms have been widely adopted: partitioned, global, and semi-partitioned scheduling. A comprehensive survey of multiprocessor scheduling in real-time systems can be found in [15]. In this paper, we consider partitioned scheduling, in which tasks are statically partitioned onto processors. This means that all the jobs of a task are executed on a specific processor, which reduces the online scheduling overhead, since each processor can schedule the sporadic tasks assigned to it without considering the tasks on the other processors. Moreover, we consider preemptive scheduling on each processor, i.e., a job may be preempted by another job on the processor. For scheduling sporadic tasks on one processor, the (preemptive) earliest-deadline-first (EDF) policy is optimal [27] in terms of meeting timing constraints, in the sense that if a task set can be feasibly scheduled by any algorithm, then it is also schedulable under EDF. In EDF, the job (in the ready queue) with the earliest absolute deadline has the highest priority for execution. Alternatively, another widely adopted scheduling paradigm is (preemptive) fixed-priority (FP) scheduling, where all jobs released by a sporadic task have the same priority level.
The complexity of testing whether a task set can be feasibly scheduled on a uniprocessor depends on the relations between the relative deadlines and the minimum inter-arrival times of the tasks. An input task set is said to have (1) implicit deadlines if the relative deadlines of the sporadic tasks are equal to their minimum inter-arrival times, (2) constrained deadlines if the minimum inter-arrival times are no less than the relative deadlines, and (3) arbitrary deadlines otherwise.
On a uniprocessor, checking the feasibility of an implicit-deadline task set is simple and well-known: the timing constraints are met by EDF if and only if the total utilization Σ_i C_i/T_i is at most 1 [27]. Moreover, if every task on the processor has D_i ≥ T_i, it is not difficult to see that testing whether the total utilization is at most 1 is also a necessary and sufficient schedulability test. This can be achieved by considering the more stringent case that sets D_i to T_i for every task τ_i. Hence, this special case of arbitrary-deadline task sets can be reformulated as a task set with implicit deadlines without any loss of precision. However, determining the schedulability of task sets with constrained or arbitrary deadlines is in general much harder, due to the complex interactions between the deadlines and the periods; in particular, the problem is known to be coNP-hard or strongly coNP-complete [17, 19, 18].
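The utilization-based test described above can be sketched as follows; the function names are illustrative:

```python
def utilization(tasks):
    """Total utilization sum(C_i / T_i) of a set of (C, D, T) task triples."""
    return sum(C / T for (C, D, T) in tasks)

def edf_feasible_implicit(tasks):
    """Liu-and-Layland utilization test: if D_i >= T_i for every task, EDF
    meets all deadlines on one processor iff the total utilization is at
    most 1 (each task can first be tightened to D_i = T_i)."""
    assert all(D >= T for (C, D, T) in tasks), "test requires D_i >= T_i"
    return utilization(tasks) <= 1.0
```

For constrained deadlines (D_i < T_i), this test is necessary but no longer sufficient, which is exactly where the demand bound function of Section 2.3 comes in.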
In this paper, we consider partitioned scheduling in homogeneous multiprocessor systems. Deciding whether an implicit-deadline task set is schedulable on multiple processors under partitioned scheduling is already NP-complete in the strong sense. To cope with these hardness issues, one natural approach is to focus on approximation algorithms, i.e., polynomial-time algorithms that produce an approximate solution instead of an exact one. In our setting, this translates to designing algorithms that can find a feasible schedule using either (i) faster processors or (ii) additional processors. The goal, of course, is to design an algorithm that requires the least speedup or as few additional processors as possible. In general, this approach is referred to as resource augmentation and is used extensively to analyze and compare scheduling algorithms. See, for example, [29] for a survey and a motivation of why this is a useful measure for evaluating the quality of scheduling algorithms in practice. However, such a measure also has potential pitfalls, as recently studied and reported by Chen et al. [12]. Interestingly, it turns out that there is a huge difference in the achievable approximation factors depending on whether it is possible to increase the processor speed or the number of processors. As already discussed in [11], approximation by speeding up is known as the multiprocessor partitioned scheduling problem, and approximation by allocating more processors is known as the multiprocessor partitioned packing problem. We study the latter in this paper.
Formally, an algorithm A for the multiprocessor partitioned packing problem is said to have an approximation factor α if, given any task set Γ, it can find a feasible partition of Γ on at most α·OPT(Γ) processors, where OPT(Γ) is the minimum (optimal) number of processors required to schedule Γ. However, it turns out that the approximation factor is not the best measure in our setting (it is not fine-grained enough). For example, it is NP-complete to decide whether an implicit-deadline task set is schedulable on 2 processors or whether 3 processors are necessary. Assuming P ≠ NP, this rules out the possibility of any efficient algorithm with an approximation factor better than 3/2, as shown in [11]. (This lower bound is further lifted to 2 for sporadic tasks in Section 5.) The problem with this example is that it does not rule out the possibility of an algorithm that only needs OPT(Γ) + 1 processors. Clearly, such an algorithm is almost as good as the optimum when OPT(Γ) is large and would be very desirable. (Indeed, there are very ingenious algorithms known for the implicit-deadline partitioning problem that use only OPT + O(log² OPT) processors [25], based on the connection to the bin-packing problem.) To get around this issue, a more refined measure is the so-called asymptotic approximation factor. An algorithm has an asymptotic approximation factor α if it finds a schedule using at most α·OPT(Γ) + β processors, where β is a constant that does not depend on Γ. An algorithm is called an asymptotic polynomial-time approximation scheme (APTAS) if, given an arbitrary accuracy parameter ε > 0 as input, it finds a schedule using at most (1 + ε)·OPT(Γ) + O(1) processors and its running time is polynomial assuming ε is a fixed constant.
For implicit-deadline task sets, the multiprocessor partitioned scheduling problem (by speeding up) is equivalent to the makespan problem [21], and the multiprocessor partitioned packing problem (by allocating more processors) is equivalent to the bin packing problem [20]. The makespan problem admits a polynomial-time approximation scheme (PTAS), by Hochbaum and Shmoys [22], and the bin packing problem admits an asymptotic polynomial-time approximation scheme (APTAS), by de la Vega and Lueker [16, 25].
When considering sporadic task sets with constrained or arbitrary deadlines, the problem becomes more complicated. When adopting speeding up for resource augmentation, the deadline-monotonic partitioning proposed by Baruah and Fisher [3, 4] has been shown in [10] to have a speedup factor of 3 − 1/m, where m is the given number of identical processors. The studies in [2, 11, 1] provide polynomial-time approximation schemes for some special cases when speeding up is possible. The PTAS by Baruah [2] requires that D^max/D^min, C^max/C^min, and T^max/T^min are constants, where D^max (C^max and T^max, respectively) is the maximum relative deadline (worst-case execution time and period, respectively) in the task set and D^min (C^min and T^min, respectively) is the minimum relative deadline (worst-case execution time and period, respectively) in the task set. It was later shown in [11, 1] that the complexity only depends on D^max/D^min. If D^max/D^min is a constant, there exists a PTAS developed by Chen and Chakraborty [11], which admits feasible task partitioning by speeding up the processors by a factor of 1 + ε. The approach in [11] deals with the multiprocessor partitioned scheduling problem as a vector scheduling problem [7] by constructing (roughly) O(log(D^max/D^min)/ε) dimensions and then applies the PTAS for the vector scheduling problem developed by Chekuri and Khanna [7] in a black-box manner. Bansal et al. [1] exploit the special structure of the vectors and give a faster vector scheduling algorithm that is a quasi-polynomial-time approximation scheme (qPTAS) even if D^max/D^min is polynomially bounded.

However, augmentation by allocating additional processors, i.e., the multiprocessor partitioned packing problem, had not been explored in real-time systems until recently. Our previous work [11] initiated the study of minimizing the number of processors for real-time tasks. While [11] mostly focuses on approximation algorithms for resource augmentation via speeding up, it also showed that, for the multiprocessor partitioned packing problem, there does not exist any APTAS for arbitrary-deadline task sets unless P = NP. However, the proof in [11] for the non-existence of an APTAS only works when the input task set has exactly two types of tasks, in which one type consists of tasks with relative deadline less than or equal to the period (i.e., D_i ≤ T_i) and the other type consists of tasks with relative deadline larger than the period (i.e., D_i > T_i). Therefore, it cannot be directly applied to constrained-deadline task sets.
Table 1 provides a short summary of the results related to the multiprocessor partitioned scheduling and packing problems, both from the literature and from this paper.
|                            | implicit deadlines | constrained deadlines               | arbitrary deadlines                 | arbitrary deadlines (dependent on D^max/D^min) |
| partitioned EDF scheduling | PTAS [22]          | speedup [10]                        | speedup [10]                        | PTAS [11] for constant D^max/D^min; qPTAS [1] for polynomially bounded D^max/D^min |
| partitioned FP scheduling  | [6], [26]          | speedup [8]                         | speedup [8] (extended from packing) | |
| partitioned packing        | APTAS [16]         | non-existence of APTAS (this paper) | non-existence of APTAS [11]         | |

Table 1: Results for the multiprocessor partitioned scheduling and packing problems (approximation, asymptotic approximation, and non-existence of approximation results).
Our Contributions. This paper studies the multiprocessor partitioned packing problem in much more detail. On the positive side, when the ratio of the period of each constrained-deadline task to its relative deadline is at most a constant γ, we provide in Section 3 a simple polynomial-time algorithm whose approximation factor is linear in γ. In Section 4, we show that the deadline-monotonic partitioning algorithm in [3, 4] has an asymptotic approximation factor for the packing problem that depends on the maximum ratio of the period to the relative deadline among the tasks. In particular, when this ratio is not a constant, adopting the worst-fit or best-fit strategy in the deadline-monotonic partitioning algorithm is shown to have an approximation factor of Ω(n), where n is the number of tasks. In contrast, it is known from [10] that both strategies have a speedup factor of 3 − 1/m when the resource augmentation is to speed up the processors. We also show that speeding up processors can be much more powerful than allocating more processors. Specifically, in Section 5, we provide input instances in which the only feasible schedule is to run each task on an individual processor, while a single processor suffices if it is sped up by a constant factor.
On the negative side, in Section 6, we show that there does not exist any asymptotic polynomial-time approximation scheme (APTAS) for the multiprocessor partitioned packing problem for task sets with constrained deadlines, unless P = NP. As there is already an APTAS for the implicit-deadline case, this result together with the result in [11] gives a complete picture of the approximability of multiprocessor partitioned packing for the different types of task sets, as shown in Table 1.
2 System Model
2.1 Task and Platform Model
We consider a set Γ = {τ_1, τ_2, …, τ_n} of n independent sporadic real-time tasks. Each of these tasks releases an infinite number of task instances, called jobs. A task is defined by τ_i = (D_i, T_i, C_i), where D_i is its relative deadline, T_i is its minimum inter-arrival time (period), and C_i is its (worst-case) execution time. For a job of τ_i released at time t, the next job of τ_i must be released no earlier than t + T_i, and the job must finish (up to) C_i amount of execution before the job's absolute deadline at t + D_i. The utilization of task τ_i is denoted by U_i = C_i/T_i. We consider platforms with M identical processors, i.e., the execution and timing properties remain the same no matter which processor a task is assigned to. According to the relations between the relative deadlines and the minimum inter-arrival times of the tasks in Γ, the task set is said to have (1) implicit deadlines if D_i = T_i for every task, (2) constrained deadlines if D_i ≤ T_i for every task, or (3) arbitrary deadlines otherwise. The cardinality of a set X is denoted by |X|.
In this paper we focus on partitioned scheduling, i.e., each task is statically assigned to a fixed processor and all jobs of the task are executed on the assigned processor. On each processor, the jobs of the tasks allocated to that processor are scheduled using preemptive earliest-deadline-first (EDF) scheduling. This means that at each point in time the job with the earliest absolute deadline is executed, and if a new job with an earlier absolute deadline arrives, the currently executed job is preempted and the newly arrived job starts executing. A task set can be feasibly scheduled by EDF (or EDF is a feasible schedule) on a processor if the timing constraints are fulfilled when EDF is used.
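As an illustration of preemptive EDF on one processor (not the analytical test used later in the paper), the following sketch simulates synchronous periodic releases, which are the worst case for sporadic tasks, and reports whether any deadline is missed. The names and the event-stepping scheme are our own:

```python
import heapq

def edf_misses_deadline(tasks, horizon):
    """Simulate preemptive EDF on one processor for synchronous periodic
    releases of (C, D, T) tasks up to `horizon`. Returns True iff some
    job misses its absolute deadline."""
    releases = []                       # (release_time, task_index)
    for i, (C, D, T) in enumerate(tasks):
        t = 0.0
        while t < horizon:
            releases.append((t, i))
            t += T
    releases.sort()
    ready = []                          # heap of [absolute_deadline, remaining]
    time, k = 0.0, 0
    while k < len(releases) or ready:
        if not ready:                   # idle until the next release
            time = max(time, releases[k][0])
        while k < len(releases) and releases[k][0] <= time:
            r, i = releases[k]
            C, D, T = tasks[i]
            heapq.heappush(ready, [r + D, float(C)])
            k += 1
        d, rem = ready[0]
        next_rel = releases[k][0] if k < len(releases) else float("inf")
        run = min(rem, next_rel - time) # run the earliest-deadline job
        time += run
        ready[0][1] -= run
        if ready[0][1] <= 1e-12:        # job finished
            heapq.heappop(ready)
            if time > d + 1e-12:
                return True
        elif time > d + 1e-12:          # deadline passed with work remaining
            return True
    return False
```

The job with the earliest absolute deadline is always at the top of the heap, so a missed deadline is always detected on the currently running job first.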
2.2 Problem Definition
Given a task set Γ, a feasible task partition on M identical processors is a collection of M subsets of Γ, denoted Γ_1, Γ_2, …, Γ_M, such that

- Γ_m ∩ Γ_{m'} = ∅ for all m ≠ m',
- ⋃_{m=1}^{M} Γ_m is equal to the input task set Γ, and
- each set Γ_m can meet the timing constraints by EDF scheduling on a processor m.

The multiprocessor partitioned packing problem: the objective is to find a feasible task partition on M identical processors with the minimum M.

We assume that C_i ≤ D_i and C_i ≤ T_i for any task τ_i, since otherwise there cannot be any feasible partition.
2.3 Demand Bound Function
This paper focuses on the case where the arrival times of the sporadic tasks are not specified, i.e., the jobs arrive subject to the inter-arrival constraint and not according to a predefined pattern. Baruah et al. [5] have shown that in this case the worst-case arrival pattern is to release the first jobs of all tasks synchronously (say, at time 0 for notational brevity) and all subsequent jobs as early as possible. Therefore, as shown in [5], the demand bound function dbf(τ_i, t), which specifies the maximum demand of task τ_i that must be released and finished within any time interval of length t, is defined as

dbf(τ_i, t) = max(0, ⌊(t − D_i)/T_i⌋ + 1) · C_i.   (1)

The exact schedulability test of EDF, verifying whether EDF can feasibly schedule a given task set on a processor, checks whether the sum of the demand bound functions of all the tasks is at most t for every t ≥ 0 [5].
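The exact test can be sketched as follows. Since the total demand bound only changes at time points of the form D_i + k·T_i, it suffices to check those points up to a suitable bound, which the caller must supply (deriving a safe bound, e.g., from the hyperperiod, is outside this sketch):

```python
from math import floor

def dbf(task, t):
    """Demand bound function of Eq. (1) for a (C, D, T) task triple."""
    C, D, T = task
    if t < D:
        return 0
    return (floor((t - D) / T) + 1) * C

def edf_schedulable(tasks, bound):
    """Exact EDF test (Baruah et al.): the summed dbf must not exceed t.
    Only points t = D_i + k*T_i up to `bound` need to be checked."""
    points = sorted({D + k * T
                     for (C, D, T) in tasks if D <= bound
                     for k in range(int((bound - D) // T) + 1)})
    return all(sum(dbf(tsk, t) for tsk in tasks) <= t for t in points)
```

For implicit-deadline tasks this degenerates to the utilization test; for constrained deadlines the checkpoints at the absolute deadlines carry the extra demand information.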
3 Reduction to Bin Packing
When considering tasks with implicit deadlines, the multiprocessor partitioned packing problem is equivalent to the bin packing problem [20]. Therefore, even though the packing becomes more complicated when considering tasks with arbitrary or constrained deadlines, it is rather straightforward to handle the problem by using existing algorithms for the bin packing problem if the maximum ratio of the period to the relative deadline among the tasks, i.e., γ := max_{τ_i ∈ Γ} max(1, T_i/D_i), is not too large.
For a given task set Γ, we can transform the input instance into a related task instance Γ' by creating a task τ'_i based on each task τ_i in Γ such that

- C'_i is C_i, D'_i is D_i, and T'_i is D_i when D_i ≤ T_i for τ_i in Γ, and
- C'_i is C_i, D'_i is T_i, and T'_i is T_i when D_i > T_i for τ_i in Γ.
Now, we can adopt any greedy fitting algorithm (i.e., a task is assigned to "one" already allocated processor on which it fits feasibly; otherwise, a new processor is allocated and the task is assigned to the newly allocated processor) for the bin packing problem, considering only the utilizations of the transformed tasks in Γ' for the multiprocessor partitioned packing problem, as presented in [30, Chapter 8]. The construction of Γ' takes time linear in the number of tasks, and the greedy fitting algorithms run in polynomial time.
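A minimal sketch of this reduction combined with the first-fit strategy, assuming tasks are given as (C, D, T) triples, could look as follows:

```python
def transform(tasks):
    """Collapse each task (C, D, T) to an implicit-deadline task whose
    deadline and period are min(D, T). Deadlines and periods only shrink,
    so feasibility of the transformed set implies feasibility of Γ."""
    return [(C, min(D, T), min(D, T)) for (C, D, T) in tasks]

def first_fit_pack(tasks):
    """Greedy first-fit on the utilizations of the transformed tasks:
    a processor is feasible as long as its total utilization stays <= 1."""
    bins = []    # total utilization per allocated processor
    assign = []  # processor index per task, in input order
    for (C, D, T) in transform(tasks):
        u = C / T
        for m, load in enumerate(bins):
            if load + u <= 1.0 + 1e-12:
                bins[m] = load + u
                assign.append(m)
                break
        else:
            bins.append(u)
            assign.append(len(bins) - 1)
    return len(bins), assign
```

Any other fitting rule (best-fit, worst-fit) only changes which feasible processor is chosen, not the feasibility condition.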
Any greedy fitting algorithm that performs the task assignment by considering Γ' is a (2γ + 1)-approximation algorithm for the multiprocessor partitioned packing problem, where γ = max_{τ_i ∈ Γ} max(1, T_i/D_i).
Proof.
Clearly, as we only reduce the relative deadlines and the periods, the timing parameters in Γ' are more stringent than in Γ. Hence, a feasible task partition for Γ' on M processors also yields a corresponding feasible task partition for Γ on M processors. As Γ' has implicit deadlines, any task subset of Γ' with total utilization at most 1 can be feasibly scheduled by EDF on a processor, and therefore so can the corresponding original tasks in Γ. For any greedy fitting algorithm that uses m processors, using the same proof as in [30, Chapter 8], we get m ≤ 2·U(Γ') + 1, where U(Γ') is the total utilization of Γ'.

By the definition of the transformation, we know that U(Γ') ≤ γ·U(Γ). Moreover, any feasible solution for Γ uses at least U(Γ) processors, so m ≤ 2γ·U(Γ) + 1 ≤ (2γ + 1)·OPT(Γ), and the approximation factor is hence proved. ∎
4 Deadline-Monotonic Partitioning under EDF Scheduling
This section presents the worst-case analysis of the deadline-monotonic partitioning strategy, proposed by Baruah and Fisher [4, 3], for the multiprocessor partitioned packing problem. Note that the underlying scheduling algorithm is EDF, but the tasks are considered in deadline-monotonic (DM) order. Hence, in this section, we index the tasks from the shortest relative deadline to the longest, i.e., D_i ≤ D_j if i < j. Specifically, in the DM partitioning, the approximate demand bound function dbf*(τ_i, t) is used to approximate Eq. (1), where

dbf*(τ_i, t) = C_i + (t − D_i)·C_i/T_i  if t ≥ D_i, and dbf*(τ_i, t) = 0 otherwise.   (2)
Even though the DM partitioning algorithm in [4, 3] is designed for the multiprocessor partitioned scheduling problem, it can easily be adapted to the multiprocessor partitioned packing problem. For completeness, we revise the algorithm in [4, 3] for the multiprocessor partitioned packing problem and present the pseudocode in Algorithm 1. As discussed in [4, 3], when a task τ_k is considered, a processor m among the allocated processors where both of the following conditions hold,

C_k + Σ_{τ_i ∈ Γ_m} dbf*(τ_i, D_k) ≤ D_k   (3)

U_k + Σ_{τ_i ∈ Γ_m} U_i ≤ 1   (4)

is selected to assign task τ_k, where Γ_m is the set of the tasks (a subset of Γ) that have been assigned to processor m before τ_k is considered. If there is no processor m where both Eq. (3) and Eq. (4) hold, a new processor is allocated and task τ_k is assigned to the new processor. The order in which the already allocated processors are considered depends on the fitting strategy:

- first-fit (FF) strategy: choose the feasible processor m with the minimum index;
- best-fit (BF) strategy: choose, among the feasible processors, the processor m with the maximum approximate demand bound Σ_{τ_i ∈ Γ_m} dbf*(τ_i, D_k) at time D_k;
- worst-fit (WF) strategy: choose the feasible processor m with the minimum approximate demand bound at time D_k.
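The DM partitioning scheme with the three fitting strategies can be sketched as follows; this is an illustrative reimplementation of the idea behind Algorithm 1 and conditions (3) and (4), not its verbatim pseudocode:

```python
def dm_partition(tasks, strategy="first"):
    """Deadline-monotonic partitioning sketch for (C, D, T) tasks: tasks are
    sorted by relative deadline; tau_k goes on a processor where both the
    approximate-dbf condition (3) and the utilization condition (4) hold."""
    def approx_dbf(task, t):  # Eq. (2)
        C, D, T = task
        return 0.0 if t < D else C + (t - D) * C / T

    procs = []  # one task list per allocated processor
    for task in sorted(tasks, key=lambda x: x[1]):  # DM order
        C, D, T = task
        feasible = []
        for m, assigned in enumerate(procs):
            load = sum(approx_dbf(x, D) for x in assigned)   # Eq. (3) term
            util = sum(x[0] / x[2] for x in assigned)        # Eq. (4) term
            if C + load <= D and C / T + util <= 1.0:
                feasible.append((m, load))
        if not feasible:
            procs.append([task])          # allocate a new processor
        elif strategy == "first":
            procs[feasible[0][0]].append(task)
        elif strategy == "best":          # max approximate demand at D_k
            procs[max(feasible, key=lambda p: p[1])[0]].append(task)
        else:                             # worst-fit: min approximate demand
            procs[min(feasible, key=lambda p: p[1])[0]].append(task)
    return procs
```

Since Eq. (2) over-approximates Eq. (1), any partition accepted here is feasible under EDF, at the cost of possibly allocating more processors than necessary.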
For a given number of processors, it has been proved in [10] that the speedup factor of the DM partitioning is at most 3 − 1/m, independent of the fitting strategy. However, if the objective is to minimize the number of allocated processors, we will show that DM partitioning has an approximation factor of at least Ω(n) in the worst case when the best-fit or worst-fit strategy is adopted, where n is the number of tasks. We will prove this by explicitly constructing two concrete task sets with this property. Afterwards, we show that the asymptotic approximation factor of DM partitioning for the packing problem is bounded in terms of the maximum ratio of the period to the relative deadline among the tasks.
The approximation factor of the deadline-monotonic partitioning algorithm with the best-fit strategy is at least Ω(n), where n is the number of tasks, when the schedulability test is based on Eq. (3) and Eq. (4).
Proof.
The theorem is proven by providing a task set that can be scheduled on two processors, but for which Algorithm 1 with the best-fit strategy allocates one processor per task. The task set consists of N tasks τ_1, τ_2, …, τ_N, with N sufficiently large, whose execution times, relative deadlines, and periods are constructed as detailed in the Appendix. The task set can be scheduled on two processors under EDF if all tasks with an odd index are assigned to processor 1 and all tasks with an even index are assigned to processor 2. On the other hand, the best-fit strategy assigns τ_i to processor i, so the resulting solution uses N processors. Details are in the Appendix. ∎

The approximation factor of the deadline-monotonic partitioning algorithm with the worst-fit strategy is at least Ω(n) when the schedulability test is based on Eq. (3) and Eq. (4).
Proof.
The proof is very similar to the proof of Theorem 4, considering a task set of N tasks whose execution times, relative deadlines, and periods are chosen as detailed in the Appendix. When the odd-indexed tasks are assigned to processor 1 and the even-indexed tasks to processor 2, the task set is schedulable, whereas task τ_i is assigned to processor i when the worst-fit strategy is used. Details are in the Appendix. ∎
The DM partitioning algorithm is an asymptotic approximation algorithm for the multiprocessor partitioned packing problem, with an asymptotic approximation factor that depends only on the maximum ratio of the period to the relative deadline among the tasks.
Proof.
We consider the task τ_k that is responsible for the last processor allocated by Algorithm 1. The other processors are categorized into two disjoint sets, depending on whether Eq. (3) or Eq. (4) is violated when Algorithm 1 tries to assign τ_k (if both conditions are violated, the processor is put in the first set). The two sets are considered individually, and the maximum number of processors in each set is bounded based on the minimum utilization on each of those processors. Afterwards, a necessary condition on the number of processors needed in any feasible solution is provided, and the relation between the two values proves the theorem. Details can be found in the Appendix. ∎
5 Hardness of Approximations
It has been shown in [11, 2] that a PTAS exists when the resources are augmented by speeding up. A straightforward question is whether such PTASes are helpful for deriving lower or upper bounds for multiprocessor partitioned packing. Unfortunately, the following theorem shows that using speeding up to obtain a lower bound on the number of required processors is not useful.
There exists a set of input instances in which the number of allocated processors must be as large as the number of tasks, while the whole task set can be feasibly scheduled by EDF on a single processor with a constant speedup factor.
Proof.
We provide a set of input instances with the property described in the statement, consisting of N tasks whose execution times, relative deadlines, and periods are constructed as detailed in the Appendix. Since any two of these tasks together overload a processor at some absolute deadline, assigning any two tasks to the same processor is infeasible without speeding up. Therefore, the only feasible processor allocation uses N processors and assigns each task individually to one processor. However, by speeding up the system by a constant factor, the tasks can be feasibly scheduled on one processor, since the sum of the demand bound functions then never exceeds the sped-up capacity at any time. A proof is in the Appendix. Hence, the gap between these two types of resource augmentation is up to a factor of N. ∎
Moreover, the following theorem shows that no approximation factor below 2 can be achieved when asymptotic approximation is not adopted.
For any ε > 0, there is no polynomial-time approximation algorithm with an approximation factor of 2 − ε for the multiprocessor partitioned packing problem, unless P = NP.
Proof.
Suppose that there exists such a polynomial-time algorithm A with approximation factor 2 − ε. Then A can be used to decide whether a task set is schedulable on a uniprocessor, which would contradict the coNP-hardness [17] of this problem. Indeed, we simply run A on the input instance. If A returns a feasible schedule using one processor, we already have a uniprocessor schedule. On the other hand, if A requires at least two processors, then, since (2 − ε)·1 < 2, we know that any optimum solution needs at least two processors, implying that the task set is not schedulable on a uniprocessor. ∎
6 Non-Existence of APTAS for Constrained Deadlines
We now show that there is no APTAS when considering constrained-deadline task sets, unless P = NP. The proof is based on an L-reduction (informally, an approximation-preserving reduction) from a special case of the vector packing problem, namely the two-dimensional dominated vector packing problem.
6.1 The 2D Dominated Vector Packing Problem
The vector packing problem is defined as follows: Given a set V of vectors with d dimensions, in which v_{i,j} is the value of vector v_i in the j-th dimension, the problem is to partition V into parts V_1, V_2, …, V_k such that k is minimized and each part V_m is feasible in the sense that Σ_{v_i ∈ V_m} v_{i,j} ≤ 1 for all j = 1, 2, …, d. That is, for each dimension j, the sum of the j-th coordinates of the vectors in V_m is at most 1.
We say that a subset of V can be feasibly packed in a bin if Σ v_{i,j} ≤ 1 for every dimension j. Note that for d = 1 this is precisely the bin-packing problem. The vector packing problem does not admit any polynomial-time asymptotic approximation scheme even in the case of d = 2 dimensions, unless P = NP [31].
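For d = 2, the bin-feasibility condition and a simple first-fit heuristic can be sketched as follows (illustrative only; first-fit is of course not the asymptotic scheme discussed here):

```python
def fits(group, vec):
    """A set of vectors fits in one bin iff, for every dimension j,
    the sum of the j-th coordinates stays at most 1."""
    return all(sum(v[j] for v in group) + vec[j] <= 1.0 + 1e-12
               for j in range(len(vec)))

def first_fit_vector_pack(vectors):
    """Greedy first-fit vector packing: put each vector into the first
    bin where every coordinate sum stays within the unit capacity."""
    bins = []
    for v in vectors:
        for b in bins:
            if fits(b, v):
                b.append(v)
                break
        else:
            bins.append([v])
    return bins
```

The reduction in Section 6.2 maps the two coordinates of each vector to the two resource dimensions of a processor: utilization and demand at the common deadline.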
Specifically, the proof in [11] for the non-existence of an APTAS for task sets with arbitrary deadlines is based on an L-reduction from the two-dimensional vector packing problem that creates, for each vector v_i in V, a task τ_i whose parameters encode the two coordinates of v_i. However, a trivial extension of [11] to constrained deadlines does not work: enforcing D_i ≤ T_i for every reduced task would mean that one dimension of the vectors in such input instances for the two-dimensional vector packing problem can be totally ignored, and the input instance becomes a special case equivalent to the traditional bin-packing problem, which admits an APTAS. We will show in Section 6.2 that the hardness is instead captured by a special case of the two-dimensional vector packing problem, called the two-dimensional dominated vector packing (2D-DVP) problem. The 2D-DVP problem is the special case of the two-dimensional vector packing problem with the following conditions for each vector v_i = (v_{i,1}, v_{i,2}):

- v_{i,1} > 0, and
- if v_{i,2} ≠ 0, then v_{i,1} is dominated by v_{i,2}, i.e., v_{i,1} ≤ v_{i,2}.

Moreover, we further assume that v_{i,1} and v_{i,2} are rational numbers for every vector v_i.

In the reduction, some tasks are created with implicit deadlines (based on vector v_i if v_{i,2} is 0) and some tasks with strictly constrained deadlines (based on vector v_i if v_{i,2} is not 0). Since the 2D-DVP problem is a special case of the two-dimensional vector packing problem, the hardness of the general problem does not directly carry over, and the corresponding implication in the proof in [31] does not hold for this special case. We note that the proof for the non-existence of an APTAS for the two-dimensional vector packing problem in [31] is erroneous; however, the result still holds. Details are in the Appendix. Therefore, we will provide a proper reduction in Section 6.3 to show the non-existence of an APTAS for the multiprocessor partitioned packing problem for tasks with constrained deadlines.
6.2 The 2D-DVP Problem and the Packing Problem
We now show that the packing problem is at least as hard as the 2D-DVP problem from a complexity point of view. Let D > 0 be a rational constant. For each vector v_i with v_{i,2} ≠ 0, we create a corresponding task τ_i with

C_i = v_{i,2}·D,  D_i = D,  T_i = (v_{i,2}/v_{i,1})·D,

so that U_i = v_{i,1} and C_i/D_i = v_{i,2}. Clearly, D_i ≤ T_i for such tasks. Let H be a common multiple, not necessarily the least one, of the periods of the tasks constructed above. By the assumption that all the values in the 2D-DVP problem are rational numbers and v_{i,1} > 0 for every vector v_i, we know that H exists and can be calculated in polynomial time. For each vector v_i with v_{i,2} = 0, we create a corresponding implicit-deadline task τ_i with

C_i = v_{i,1}·H,  D_i = T_i = H.
The following lemma shows the related schedulability condition.

Suppose that the set of tasks assigned to a processor consists of (1) strictly constrained-deadline tasks, denoted by Γ^D, with a common relative deadline D, and (2) implicit-deadline tasks, i.e., D_i = T_i, in which the period H is a common integer multiple of the periods of the strictly constrained-deadline tasks. EDF is feasible for this set of tasks on a processor if and only if

Σ_{τ_i ∈ Γ^D} C_i/D ≤ 1  and  Σ_{τ_i} U_i ≤ 1.
Proof.
Only if: This is straightforward, as the task set cannot meet the timing constraints when Σ_{τ_i ∈ Γ^D} C_i/D > 1 or Σ_{τ_i} U_i > 1.

If: Suppose that Σ_{τ_i ∈ Γ^D} C_i/D ≤ 1 and Σ_{τ_i} U_i ≤ 1. For 0 ≤ t < D, the total demand bound is 0. For D ≤ t < H, the implicit-deadline tasks contribute no demand, and

Σ_{τ_i ∈ Γ^D} dbf(τ_i, t) ≤ Σ_{τ_i ∈ Γ^D} (C_i + (t − D)·U_i) ≤ D·Σ_{τ_i ∈ Γ^D} C_i/D + (t − D)·Σ_{τ_i ∈ Γ^D} U_i ≤ D + (t − D) = t.   (5)

Moreover, for t ≥ H, we can write t = kH + t' with a positive integer k and 0 ≤ t' < H. Since H/T_i is an integer for every τ_i in Γ^D, we have dbf(τ_i, t) = kH·U_i + dbf(τ_i, t') for every τ_i in Γ^D, and dbf(τ_i, t) = kH·U_i for every implicit-deadline task. Therefore,

Σ_{τ_i} dbf(τ_i, t) = kH·Σ_{τ_i} U_i + Σ_{τ_i ∈ Γ^D} dbf(τ_i, t') ≤ kH + t' = t,

where the inequality follows from Σ_{τ_i} U_i ≤ 1 and Eq. (5). Therefore, if Σ_{τ_i ∈ Γ^D} C_i/D ≤ 1 and Σ_{τ_i} U_i ≤ 1, the task set can be feasibly scheduled by EDF.

∎
If there does not exist any APTAS for the 2D-DVP problem unless P = NP, then there also does not exist any APTAS for the multiprocessor partitioned packing problem with constrained-deadline task sets, unless P = NP.
Proof.
Clearly, the reduction in this section from the 2D-DVP problem to the multiprocessor partitioned packing problem with constrained deadlines runs in polynomial time.

For a task subset Γ_m of Γ, suppose that V_m is the set of the corresponding vectors that are used to create the task subset Γ_m. By Lemma 6.2, the subset Γ_m of the constructed tasks can be feasibly scheduled by EDF on a processor if and only if Σ_{v_i ∈ V_m} v_{i,1} ≤ 1 and Σ_{v_i ∈ V_m} v_{i,2} ≤ 1.

Therefore, the above reduction is a perfect approximation-preserving reduction. That is, an algorithm with an (asymptotic) approximation factor α for the multiprocessor partitioned packing problem directly yields an (asymptotic) approximation factor α for the 2D-DVP problem. ∎
6.3 Hardness of the 2D-DVP Problem
Based on Theorem 6.2, we now show that there does not exist an APTAS for the 2D-DVP problem, which also proves the non-existence of an APTAS for the multiprocessor partitioned packing problem with constrained deadlines.
There does not exist any APTAS for the 2D-DVP problem, unless P = NP.
Proof.
This is proved by an L-reduction, following a strategy similar to [31], from the Maximum Bounded 3-Dimensional Matching problem (MAX-3DM), which is MAX SNP-complete [24]. Details are in the Appendix, where a short comment regarding an erroneous observation in [31] is also provided. ∎
There does not exist any APTAS for the multiprocessor partitioned packing problem for constrained-deadline task sets, unless P = NP.
7 Concluding Remarks
This paper studies the multiprocessor partitioned packing problem, i.e., minimizing the number of processors needed for multiprocessor partitioned scheduling. Interestingly, there turns out to be a huge (technical) difference depending on whether one is allowed faster processors or additional processors. Our results are summarized in Table 1. For the general case, the upper bound and the lower bound for the first-fit strategy in the deadline-monotonic partitioning algorithm are both open. The focus of this paper is the multiprocessor partitioned packing problem. If global scheduling is allowed, in which a job can be executed on different processors, the problem of minimizing the number of processors has also recently been studied in a more general setting by Chen et al. [14, 13] and Im et al. [23]; these works do not exploit any periodicity of the job arrival patterns. Among them, the state-of-the-art online algorithm, by Im et al. [23], has an approximation factor (more precisely, competitive factor) of O(log log m). These results are unfortunately not applicable to the multiprocessor partitioned packing problem, since in their setting the jobs of a sporadic task may be executed on different processors.
References
 [1] Nikhil Bansal, Cyriel Rutten, Suzanne van der Ster, Tjark Vredeveld, and Ruben van der Zwaan. Approximating real-time scheduling on identical machines. In LATIN: Theoretical Informatics – 11th Latin American Symposium, pages 550–561, 2014.
 [2] Sanjoy Baruah. The partitioned EDF scheduling of sporadic task systems. In Real-Time Systems Symposium (RTSS), pages 116–125, 2011.
 [3] Sanjoy K. Baruah and Nathan Fisher. The partitioned multiprocessor scheduling of sporadic task systems. In Real-Time Systems Symposium (RTSS), pages 321–329, 2005.
 [4] Sanjoy K. Baruah and Nathan Fisher. The partitioned multiprocessor scheduling of deadline-constrained sporadic task systems. IEEE Trans. Computers, 55(7):918–923, 2006.
 [5] Sanjoy K. Baruah, Aloysius K. Mok, and Louis E. Rosier. Preemptively scheduling hard-real-time sporadic tasks on one processor. In Real-Time Systems Symposium (RTSS), pages 182–190, 1990.
 [6] Almut Burchard, Jörg Liebeherr, Yingfeng Oh, and Sang Hyuk Son. New strategies for assigning real-time tasks to multiprocessor systems. IEEE Trans. Computers, 44(12):1429–1442, 1995.
 [7] Chandra Chekuri and Sanjeev Khanna. On multidimensional packing problems. SIAM J. Comput., 33(4):837–851, 2004.
 [8] Jian-Jia Chen. Partitioned multiprocessor fixed-priority scheduling of sporadic real-time tasks. In Euromicro Conference on Real-Time Systems (ECRTS), pages 251–261, 2016.
 [9] Jian-Jia Chen, Nikhil Bansal, Samarjit Chakraborty, and Georg von der Brüggen. Packing sporadic real-time tasks on identical multiprocessor systems. Computing Research Repository (CoRR), 2018. http://arxiv.org/abs/XXX.YYY.
 [10] Jian-Jia Chen and Samarjit Chakraborty. Resource augmentation bounds for approximate demand bound functions. In IEEE Real-Time Systems Symposium (RTSS), pages 272–281, 2011.
 [11] Jian-Jia Chen and Samarjit Chakraborty. Partitioned packing and scheduling for sporadic real-time tasks in identical multiprocessor systems. In ECRTS, pages 24–33, 2012.
 [12] Jian-Jia Chen, Georg von der Brüggen, Wen-Hung Huang, and Robert I. Davis. On the pitfalls of resource augmentation factors and utilization bounds in real-time scheduling. In Euromicro Conference on Real-Time Systems (ECRTS), pages 9:1–9:25, 2017.
 [13] Lin Chen, Nicole Megow, and Kevin Schewior. An O(log m)-competitive algorithm for online machine minimization. In Symposium on Discrete Algorithms (SODA), pages 155–163, 2016.
 [14] Lin Chen, Nicole Megow, and Kevin Schewior. The power of migration in online machine minimization. In Symposium on Parallelism in Algorithms and Architectures (SPAA), pages 175–184, 2016.
 [15] Robert I. Davis and Alan Burns. A survey of hard real-time scheduling for multiprocessor systems. ACM Comput. Surv., 43(4):35, 2011.
 [16] Wenceslas Fernandez de la Vega and George S. Lueker. Bin packing can be solved within 1+ε in linear time. Combinatorica, 1(4):349–355, 1981.
 [17] Friedrich Eisenbrand and Thomas Rothvoß. EDF-schedulability of synchronous periodic task systems is coNP-hard. In Symposium on Discrete Algorithms (SODA), pages 1029–1034, 2010.
 [18] Pontus Ekberg and Wang Yi. Uniprocessor feasibility of sporadic tasks remains coNP-complete under bounded utilization. In IEEE Real-Time Systems Symposium (RTSS), pages 87–95, 2015.
 [19] Pontus Ekberg and Wang Yi. Uniprocessor feasibility of sporadic tasks with constrained deadlines is strongly coNP-complete. In Euromicro Conference on Real-Time Systems (ECRTS), pages 281–286, 2015.
 [20] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman and Co., 1979.
 [21] Ronald L. Graham. Bounds on multiprocessing timing anomalies. SIAM Journal on Applied Mathematics, 17(2):416–429, 1969.
 [22] Dorit S. Hochbaum and David B. Shmoys. Using dual approximation algorithms for scheduling problems: theoretical and practical results. J. ACM, 34(1):144–162, 1987.
 [23] Sungjin Im, Benjamin Moseley, Kirk Pruhs, and Clifford Stein. An O(log log m)-competitive algorithm for online machine minimization. In Real-Time Systems Symposium (RTSS), pages 343–350, 2017.
 [24] Viggo Kann. Maximum bounded 3-dimensional matching is MAX SNP-complete. Inf. Process. Lett., 37(1):27–35, January 1991.
 [25] N. Karmarkar and R. M. Karp. An efficient approximation scheme for the one-dimensional bin-packing problem. In Symp. on Foundations of Computer Science (FOCS), pages 312–320, 1982.
 [26] Andreas Karrenbauer and Thomas Rothvoß. A 3/2-approximation algorithm for rate-monotonic multiprocessor scheduling of implicit-deadline tasks. In International Workshop on Approximation and Online Algorithms (WAOA), pages 166–177, 2010.
 [27] C. L. Liu and James W. Layland. Scheduling algorithms for multiprogramming in a hard-real-time environment. Journal of the ACM, 20(1):46–61, 1973.
 [28] A. K. Mok. Fundamental design problems of distributed systems for the hard-real-time environment. Technical report, Massachusetts Institute of Technology, Cambridge, MA, USA, 1983.
 [29] K. Pruhs, E. Torng, and J. Sgall. Online scheduling. In Joseph Y. T. Leung, editor, Handbook of Scheduling: Algorithms, Models, and Performance Analysis, chapter 15, pages 15:1–15:41. 2004.
 [30] Vijay V. Vazirani. Approximation Algorithms. Springer, 2001.
 [31] Gerhard J. Woeginger. There is no asymptotic PTAS for two-dimensional vector packing. Inf. Process. Lett., 64(6):293–297, 1997.
Appendix
Proofs related to Section 4
Proof of Theorem 4. We provide a task set that can be scheduled on two processors, but for which Algorithm 1 with the best-fit strategy uses processors. Let be an integer, be , and be sufficiently large, i.e., .

Let , , and .

For , let , , and .

For , let , , and .
Hence, in this input instance, , , , . For simplicity of presentation, we will omit any term multiplied with , assuming that it is positive and arbitrarily small. When applying DM partitioning, tasks and are both assigned on processor . Then, we know that at time , . Clearly, are not eligible for processor , because for we have
(6) 
Therefore, is assigned on processor . When considering , both processors are feasible, and processor has a higher approximate demand at time , i.e., and . Therefore, is assigned on processor under the best-fit strategy. Similarly, are not eligible for processor , because for we have
(7) 
When considering , all three allocated processors are feasible, but processor has a higher approximate demand at time . One can formally prove that task is assigned to processor because for any . Moreover, since for any and due to the assumption , we know that processor has the highest approximate demand at time among the first (allocated) processors. Thus, task is assigned to processor under the best-fit strategy. Therefore, we conclude that the best-fit strategy assigns to processor . The resulting solution uses processors.
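The best-fit selection used above, which assigns a task to the feasible processor with the highest approximate demand at the task's relative deadline, can be sketched as follows. The linear demand approximation, the feasibility test, and all task parameters are illustrative assumptions, not the paper's exact definitions.

```python
# Hedged sketch of the best-fit tie-breaking rule: among all processors on
# which the task passes a (sufficient) schedulability test, pick the one
# with the highest approximate demand at the task's relative deadline.
from typing import List, NamedTuple, Optional

class Task(NamedTuple):
    C: float  # worst-case execution time
    D: float  # relative deadline
    T: float  # minimum inter-arrival time (period)

def demand(tasks: List[Task], t: float) -> float:
    """Approximate demand of a task set at time t (linear dbf bound)."""
    return sum(tau.C + (tau.C / tau.T) * (t - tau.D)
               for tau in tasks if t >= tau.D)

def feasible(task: Task, assigned: List[Task]) -> bool:
    """Sufficient test: approximate demand at each relative deadline,
    plus the total-utilization condition."""
    tasks = assigned + [task]
    return (all(demand(tasks, tau.D) <= tau.D for tau in tasks)
            and sum(tau.C / tau.T for tau in tasks) <= 1.0)

def best_fit(task: Task, processors: List[List[Task]]) -> Optional[int]:
    """Index of the feasible processor with the highest approximate
    demand at the task's relative deadline, or None if none fits."""
    candidates = [i for i, p in enumerate(processors) if feasible(task, p)]
    if not candidates:
        return None
    return max(candidates, key=lambda i: demand(processors[i], task.D))
```

Note the contrast with first-fit: best-fit deliberately steers a task toward the most loaded feasible processor, which is exactly the behavior the lower-bound instance exploits.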
Now, consider the following task assignment, in which is assigned on processor (resp., ) if is an odd (resp., even) number. Let be the set of tasks that are assigned on processor . The assignment is feasible on processor , as all the tasks have implicit deadlines and the total utilization is . The assignment is also feasible on processor , which can be verified using , i.e., the demand bound function without approximation. Since all tasks in have the same period, we only have to verify at time , in which for .
We will now show that when , the of at time will still be no more than , i.e., showing that . Since the tasks in have the same period, for simplicity of presentation, let be with . We can divide the time interval into . Suppose that is a nonnegative integer and is an index , where is in interval . Here, is an auxiliary parameter set to and is an auxiliary parameter set to for brevity.
Then, due to the parameters of task and , for task , we have and . As a result, when and , we have
Moreover, when , we have . When , we have . When , we have . Therefore, we reach the conclusion that .
Hence, there exists a feasible solution by using only processors, but the DM partitioning algorithm under BF uses processors.
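The feasibility argument above verifies schedulability with the exact demand bound function rather than its approximation. A minimal sketch of such a check, with a bounded verification horizon and hypothetical task parameters (the original symbols are not recoverable here), might look like:

```python
# Hedged sketch of the exact demand-bound-function test: a set of sporadic
# tasks is EDF-feasible on one processor iff the summed exact dbf never
# exceeds t.  The dbf only changes at absolute deadlines D + k*T, so those
# points (up to a chosen horizon) are the only checkpoints needed.
from math import floor

def dbf_exact(C: float, D: float, T: float, t: float) -> float:
    """Exact demand bound function of a sporadic task (C, D, T) at time t."""
    if t < D:
        return 0.0
    return (floor((t - D) / T) + 1) * C

def edf_feasible(tasks, horizon: float) -> bool:
    """Check sum of exact dbf at every absolute deadline up to the horizon.
    tasks is a list of (C, D, T) triples; horizon is illustrative."""
    checkpoints = sorted({D + k * T
                          for (C, D, T) in tasks
                          for k in range(int(horizon // T) + 1)
                          if D + k * T <= horizon})
    return all(sum(dbf_exact(C, D, T, t) for (C, D, T) in tasks) <= t
               for t in checkpoints)
```

When all tasks in the set share the same period, the checkpoints collapse to multiples of that period, mirroring the observation in the proof that a single time point per period suffices.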
Proof of Theorem 4. Suppose that is an integer, is , and is sufficiently large, i.e., . Consider the following input task set:

Let , , and .

For , let , , and .

For , let , , and .
We know that , , , . The proof is very similar to that of Theorem 4. For simplicity of presentation, we will omit any term multiplied with , assuming that it is positive and arbitrarily small.
When applying DM partitioning, tasks and are both assigned on processor . One can formally prove that task is assigned to processor because for any . Moreover, since