Reservation-Based Federated Scheduling for Parallel Real-Time Tasks

December 13, 2017, by Niklas Ueter et al.

This paper considers the scheduling of parallel real-time tasks with arbitrary deadlines. Each job of a parallel task is described as a directed acyclic graph (DAG). In contrast to prior work in this area, where decomposition-based scheduling algorithms are proposed based on the DAG structure and inter-task interference is analyzed as self-suspending behavior, this paper generalizes the federated scheduling approach. We propose a reservation-based algorithm, called reservation-based federated scheduling, that dominates federated scheduling. We provide general constraints for the design of such systems and prove that reservation-based federated scheduling has a constant speedup factor with respect to any optimal DAG task scheduler. Furthermore, the presented algorithm can be used in conjunction with any scheduler and scheduling analysis suitable for ordinary arbitrary-deadline sporadic task sets, i.e., without parallelism.


1 Introduction

A frequently used way to describe real-time systems is as a collection of independent tasks that release an infinite sequence of jobs according to some parameterizable release pattern. The sporadic task model, where a task $\tau_i$ is characterized by its relative deadline $D_i$, its minimum inter-arrival time $T_i$, and its worst-case execution time (WCET) $C_i$, has been widely adopted for real-time systems. A sporadic task is an infinite sequence of task instances, referred to as jobs, where the arrivals of two consecutive jobs of a task are separated by at least its minimum inter-arrival time. In real-time systems, tasks must fulfill timing requirements, i.e., each job must finish its at most $C_i$ units of computation between the arrival of the job at time $t$ and the job's absolute deadline at $t + D_i$. A sporadic task system $\mathbb{T}$ is called an implicit-deadline system if $D_i = T_i$ holds for each $\tau_i$ in $\mathbb{T}$, and is called a constrained-deadline system if $D_i \le T_i$ holds for each $\tau_i$ in $\mathbb{T}$. Otherwise, such a sporadic task system is an arbitrary-deadline system.
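To make the model concrete, the following minimal sketch (ours, not part of the paper) represents a sporadic task as the tuple $(C_i, D_i, T_i)$ and classifies a task system according to the definitions above; all names are illustrative.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SporadicTask:
        C: float  # worst-case execution time (WCET)
        D: float  # relative deadline
        T: float  # minimum inter-arrival time

    def classify(tasks):
        """Classify a task system as implicit-, constrained-, or arbitrary-deadline."""
        if all(t.D == t.T for t in tasks):
            return "implicit-deadline"
        if all(t.D <= t.T for t in tasks):
            return "constrained-deadline"
        return "arbitrary-deadline"

    # The second task has D > T, so the system is an arbitrary-deadline system.
    tasks = [SporadicTask(C=2.0, D=5.0, T=7.0), SporadicTask(C=1.0, D=9.0, T=6.0)]
    print(classify(tasks))  # arbitrary-deadline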

Traditionally, each task is only associated with its worst-case execution time (WCET) $C_i$, since on uniprocessor platforms the processor executes only one job at each point in time and there is no need to express potential parallel execution paths. However, modern real-time systems increasingly employ multiprocessor platforms to satisfy increasing performance demands and the need for energy efficiency. Multiprocessor platforms allow both inter-task parallelism, i.e., executing sequential programs concurrently, and intra-task parallelism, i.e., a job of a parallelized task can be executed on multiple processors at the same time. To enable intra-task parallelism, parts of a program must potentially be executable in parallel, which must be enabled by the software design. An established model for parallelized tasks is the Directed-Acyclic-Graph (DAG) model. Throughout this paper, we consider how to schedule a sporadic DAG task set on a multiprocessor system with $M$ homogeneous processors.

Task Model

The Directed-Acyclic-Graph (DAG) model for a parallel task expresses intra-task dependencies and which subtasks can potentially be executed in parallel [21, 17, 6]. In particular, the execution of a task can be divided into subtasks, and the precedence constraints of these subtasks are defined by a DAG structure. An example is presented in Figure 1. Each node represents a subtask, and the directed arrows indicate the precedence constraints. Each node is characterized by the worst-case execution time of the corresponding subtask.

Fig. 1: A sporadic, constrained-deadline DAG task with annotated subtask worst-case execution times.

For a DAG, two parameters are of importance:

  • total execution time (or work) $C_i$ of task $\tau_i$: the summation of the worst-case execution times of all the subtasks of task $\tau_i$.

  • critical-path length $L_i$ of task $\tau_i$: the length of the longest (critical) path in the given DAG, i.e., the worst-case execution time of the task on an infinite number of processors.

By definition, $L_i \le C_i$ for every task $\tau_i$. The utilization of task $\tau_i$ is denoted by $U_i = C_i / T_i$.
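As a small illustration (our sketch, not from the paper), both parameters can be computed from a DAG given as adjacency lists with per-subtask WCETs; the node names and the fork-join example below are made up.

    from functools import lru_cache

    def dag_parameters(wcet, succ):
        """wcet: node -> WCET; succ: node -> list of successors.
        Returns (C_i, L_i): total work and critical-path length."""
        total_work = sum(wcet.values())  # C_i: sum of all subtask WCETs

        @lru_cache(maxsize=None)
        def longest_from(v):
            # length of the longest WCET-weighted path starting at node v
            return wcet[v] + max((longest_from(u) for u in succ.get(v, [])), default=0.0)

        critical_path = max(longest_from(v) for v in wcet)  # L_i
        return total_work, critical_path

    # Fork-join DAG: s -> {a, b} -> t
    wcet = {"s": 1.0, "a": 3.0, "b": 2.0, "t": 1.0}
    succ = {"s": ["a", "b"], "a": ["t"], "b": ["t"]}
    C, L = dag_parameters(wcet, succ)  # C = 7.0, L = 5.0 (path s -> a -> t)

The utilization then follows as $U_i = C_i / T_i$ for a given minimum inter-arrival time.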

This way of parametrization has the advantage of being completely agnostic of the internal parallelization structure, i.e., how many subtasks exist and what the precedence constraints amongst them are. Scheduling algorithms that can feasibly schedule DAG task sets solely based on these two parameters also allow the DAG structure to change during runtime, as long as these parameter constraints are met. The apparent downside of this abstraction is its pessimism: the worst possible structure has to be considered regardless of the actual structure, and the scheduling algorithm has to meet the task's deadline for all possible structures under the given parameter constraints.

Related work

The scheduling of parallel real-time DAG tasks has been widely researched in various directions. To the best of our knowledge, three general scheduling approaches exist:

  • No treatment: The DAG structure and parameters of a task are not used at all for scheduling decisions. Whenever a subtask of a task is ready to be executed, standard global or partitioned multiprocessor scheduling is used to schedule the subtasks, e.g., [1, 6, 17, 18].

  • Decomposition-based strategies: A DAG task is decomposed into a set of sequential tasks with specified relative deadlines and offsets of their release times. These sequential tasks are then scheduled accordingly without considering the DAG structure anymore, e.g., [15, 20, 14, 19, 13, 21]. Decomposition-based strategies utilize the DAG structure off-line in order to apply the decomposition.

  • Federated scheduling: The task set is partitioned into light and heavy tasks. Light tasks are those that can be completely sequentialized and still fit on one processor. On the other hand, a task that needs more than one processor to meet its deadline is a heavy task. In the original design of federated scheduling for implicit-deadline task systems proposed by Li et al. [18], a light task is solely executed sequentially without exploiting the parallelized structure, and a heavy task is assigned to its designated processors, which exclusively execute only that heavy task. Baruah [2, 3, 4] adopted the concept of federated scheduling for scheduling constrained-deadline and arbitrary-deadline task systems. Chen [7] later showed that federated scheduling does not admit any constant speedup factor with respect to the optimal scheduling algorithm. Jiang et al. [12] extended the federated scheduling approach to semi-federated scheduling, in which one or two processors used by a heavy task can be shared with other tasks.

Contributions

A downside of federated scheduling is that processors are granted to heavy tasks exclusively, thus forfeiting the potential to map light tasks onto the same processors. To address these limitations, this paper provides the following results:

  • We propose a reservation-based federated scheduling for DAG tasks that provides a provably sufficient amount of service for each DAG task to meet its relative deadline and provides a simple, timing-isolated interface for analysis. That means the DAG task can analytically be treated like an arbitrary- or constrained-deadline sporadic real-time task. Hence, we show how to reduce the problem of scheduling sporadic, arbitrary-deadline DAG tasks to the problem of scheduling sequential sporadic, arbitrary-deadline tasks.

  • Specifically, we provide algorithms to transform a set of sporadic, arbitrary-deadline DAG tasks into a set of sequential sporadic, arbitrary-deadline real-time tasks that can be scheduled by any scheduling algorithm that supports the aforementioned task model.

  • Moreover, we provide general design rules and constraints for providing provably sufficient and heuristically good reservations for use in partitioned (and global) scheduling algorithms.

  • We further resolve the problem of non-constant speedup factors of federated scheduling for arbitrary-deadline DAG task sets with respect to any optimal scheduling algorithm, which was pointed out by Chen [7]. We show that a constant speedup factor is achieved by a specific setting of the workload inflation.

2 Issues of Federated Scheduling for Constrained-Deadline Systems

Here, we reuse the example presented by Chen [7] to explain the main issue of applying federated scheduling to constrained-deadline task systems. Suppose that $M$ is a positive integer. Moreover, let $\epsilon$ be an arbitrarily small positive number. We create $n$ constrained-deadline sporadic tasks with the following setting:

  • task $\tau_1$ with parameters $C_1$, $D_1$, and $T_1$, and

  • tasks $\tau_k$ with parameters $C_k$, $D_k$, and $T_k$ for $k = 2, \ldots, n$,

where the concrete values follow the construction in [7].

Table I provides a concrete example with $n = 9$ tasks. Each task $\tau_k$ has $M$ subtasks, there is no precedence constraint among these subtasks (which is a special case of a DAG), and each subtask of task $\tau_k$ has a worst-case execution time of $C_k / M$.

An obviously feasible schedule is to assign each subtask of task $\tau_k$ to one of the $M$ processors. However, as task $\tau_1$ can only be feasibly scheduled by running on all the $M$ processors in parallel, federated scheduling exclusively allocates all $M$ processors to task $\tau_1$. Similarly, the semi-federated scheduling in [12] also suffers from such exclusive allocation.

  $T_k$:  10  10  20  40  80  160  320  640  1280
  $D_k$:   1   2   4   8  16   32   64  128   256

TABLE I: An example of the task set for $k = 1, \ldots, 9$, from [7]

From this example, we can see that the main issue of applying federated scheduling to constrained-deadline task systems is the exclusive allocation of processors to heavy tasks. Such a heavy task may need a lot of processors due to its short relative deadline, but may have very low utilization in the long run if its minimum inter-arrival time is very long. Allocating many processors to such a heavy task results in a significant waste of resources.

Our proposed approach in this paper is to use reservation-based allocation instead of exclusive allocation for heavy tasks. That is, instead of dedicating a few processors to a heavy task, we assign a few reservation servers to it. The timing properties of a DAG task are guaranteed as long as the corresponding reservations can be guaranteed to be feasibly provided. We detail this concept in the next section.

3 Reservation-Based Federated Scheduling

An inherent difficulty when analyzing the schedulability of DAG task systems is the intra-task dependency in conjunction with the inter-task interference. Federated scheduling avoids this problem by granting a subset of the available processors to heavy tasks exclusively, thereby avoiding inter-task interference. A natural generalization of the federated scheduling approach is to exclusively reserve sufficient resources for heavy tasks. This approach combines the advantage of avoiding inter-task interference and self-suspension with the possibility of fitting the amount of required resources more precisely. The reservation-based federated approach requires quantifying the maximum computation demand a DAG task can generate over any interval, and the amount of resources that is sufficient during that interval.

3.1 Basic Concepts

In this paper, we enforce the reservations to be provided synchronously with the releases of a DAG task's jobs. This means that whenever a DAG task $\tau_i$ releases a job at time $t$, the associated service is provided during the interval $[t, t + D_i]$ between release and deadline. In order to provide a well-known interface, the service-providing reservations are modeled as ordinary sporadic, arbitrary-deadline tasks, formally described in the following definition.

Definition 1.

A reservation-generating sporadic task serving a DAG task $\tau_i$ is defined by the tuple $(E_{i,j}, D_i, T_i)$, such that $E_{i,j}$ is the amount of computation reserved over the interval $[t, t + D_i]$ after a release at time $t$, with a minimum inter-arrival time of $T_i$.

Over an interval $[t, t + D_i]$, where $t$ denotes the release of a job of the DAG task $\tau_i$, we create $m_i$ instances (jobs) of sporadic real-time reservation servers, released at time $t$ with execution budgets $E_{i,1}, E_{i,2}, \ldots, E_{i,m_i}$ and relative deadline $D_i$, that are scheduled according to some scheduling algorithm on a homogeneous multiprocessor system with $M$ processors. Moreover, the jobs that are released at time $t$ by the reservation servers are only used to serve the DAG job of task $\tau_i$ that arrived at time $t$. In particular, they are not used to serve any other jobs of task $\tau_i$ that arrived after $t$. The operating system can apply any scheduling strategy to execute these instances. If an instance of a reservation server reserved for task $\tau_i$ is executed at time $t'$, we say that the system provides (or, alternatively, the reservation servers provide) service to run the job of task $\tau_i$ that arrived at time $t$. On the other hand, the reservation servers do not provide any service at time $t'$ if none of them is executed at time $t'$ by the scheduler.

The scheduling algorithm for a DAG job is list scheduling, which is workload-conserving with respect to the service provided by the reservation servers. Namely, at every point in time at which the DAG task has pending workload and the system provides service (i.e., runs a reservation server), the workload is executed.

In conclusion, the problem of scheduling DAG task sets and its analysis is divided into the following two subproblems:

  1. Scheduling of sporadic, arbitrary-deadline task sets.

  2. Providing provably sufficient reservations to serve a set of arbitrary DAG tasks.

Theorem 1.

Suppose that $m_i$ sequential instances (jobs) of real-time reservation servers are created and released for serving a DAG task $\tau_i$ with execution budgets $E_{i,1}, E_{i,2}, \ldots, E_{i,m_i}$ when a job of task $\tau_i$ is released at time $t$. The job of task $\tau_i$ arrived at time $t$ can be finished no later than its absolute deadline if

  • [Schedulability Condition]: the $m_i$ sequential jobs of the reservation servers can be guaranteed to finish no later than their absolute deadline at $t + D_i$, and

  • [Reservation Condition]: $\sum_{j=1}^{m_i} E_{i,j} \ge C_i + (m_i - 1) \cdot L_i$.

Proof.

We consider an arbitrary execution schedule $S$ of the $m_i$ sequential jobs executed from $t$ to $t + D_i$. Suppose, for contradiction, that the reservation condition holds but there is an unfinished subjob of the DAG job of task $\tau_i$ at time $t + D_i$ in $S$. Since the list scheduling algorithm is applied, the schedule for a DAG job follows a certain topological order and is workload-conserving. That is, unless the DAG job has finished by time $t'$, whenever the system provides service to the DAG job at time $t'$, one of its subjobs is executed at $t'$.

We define the following terms based on the execution of the DAG job of task $\tau_i$ arrived at time $t$ in the schedule $S$. Let $t_1$ be the last moment prior to $t + D_i$ when the system provides service to the DAG job in the schedule $S$. Moreover, let $\theta_1$ be a subjob of task $\tau_i$ executed at $t_1$ in $S$, and let $f_1$ be $t_1$. Let $s_1$ be the earliest time in $S$ when the subjob $\theta_1$ is executed. After $s_\ell$ is determined, among the predecessors of $\theta_\ell$, let the one finished last in the schedule $S$ be $\theta_{\ell+1}$. Moreover, we determine $f_{\ell+1}$ as the finishing time of $\theta_{\ell+1}$ and $s_{\ell+1}$ as the starting time of $\theta_{\ell+1}$ in the schedule $S$. By repeating the above procedure, we can define $\theta_1, \theta_2, \ldots, \theta_k$, where there is no predecessor of $\theta_k$ any more in $S$. For notational brevity, let $f_{k+1}$ be $t$.

According to the above construction, the sequence $\theta_k, \theta_{k-1}, \ldots, \theta_1$ is a path in the DAG structure of $\tau_i$. Let $c(\theta_\ell)$ be the execution time of $\theta_\ell$. By definition, we know that $\sum_{\ell=1}^{k} c(\theta_\ell) \le L_i$. In the schedule $S$, whenever $\theta_{\ell+1}$ finishes, we know that $\theta_\ell$ can be executed, but there may be a gap between $f_{\ell+1}$ and $s_\ell$.

Suppose that $\text{serv}(a, b)$ is the accumulated amount of service provided by the $m_i$ sequential jobs in an interval $[a, b]$ in $S$. Since the list scheduling algorithm is workload-conserving, if $\theta_\ell$ is not executed at a time $t'$ at which the system provides service, then all the service provided at $t'$ is used for processing other subjobs of the DAG job of task $\tau_i$. Therefore, for $\ell = 1, 2, \ldots, k$, the maximum amount of service that is provided to the DAG job but not used in the time interval $[s_\ell, f_\ell]$ in $S$ is at most $(m_i - 1) \cdot c(\theta_\ell)$, since each of the $m_i$ reservation servers can only provide its service sequentially and service can only remain unused while $\theta_\ell$ itself is executing. That is, in the interval $[s_\ell, f_\ell]$ at least $\text{serv}(s_\ell, f_\ell) - (m_i - 1) \cdot c(\theta_\ell)$ units of execution time of the DAG job are executed.

Similarly, for $\ell = 1, 2, \ldots, k$, the amount of service that is provided to the DAG job but not used in the time interval $[f_{\ell+1}, s_\ell]$ in $S$ is $0$; otherwise $\theta_\ell$ would have been started before $s_\ell$. Therefore, in the interval $[f_{\ell+1}, s_\ell]$ at least $\text{serv}(f_{\ell+1}, s_\ell)$ units of execution time of the DAG job are executed.

Under the assumption that the job misses its deadline at time $t + D_i$ and that the $m_i$ sequential jobs of the reservation servers finish no later than their absolute deadline at $t + D_i$ in the schedule $S$, we know that

$C_i > \text{serv}(t, t + D_i) - (m_i - 1) \sum_{\ell=1}^{k} c(\theta_\ell) \ge \sum_{j=1}^{m_i} E_{i,j} - (m_i - 1) \cdot L_i \ge C_i.$

Therefore, we reach the contradiction. ∎
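The two conditions of Theorem 1 are easy to check mechanically. The following helper (our illustration; the task parameters are made up) verifies the reservation condition together with the sanity constraint that no sequential budget exceeds the relative deadline.

    def reservation_condition_holds(E, C, L):
        """Reservation Condition: sum(E) >= C + (len(E) - 1) * L."""
        return sum(E) >= C + (len(E) - 1) * L

    def budgets_fit_deadline(E, D):
        """Each sequential budget must fit into the deadline interval."""
        return all(e <= D for e in E)

    # Hypothetical heavy DAG task with C_i = 12, L_i = 3, D_i = 8:
    E = [7.5, 7.5]
    print(reservation_condition_holds(E, C=12.0, L=3.0))  # True: 15 >= 12 + 1*3
    print(budgets_fit_deadline(E, D=8.0))                 # True: 7.5 <= 8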

3.2 Reservation Constraints

According to Theorem 1, we should focus on providing the reservations such that $\sum_{j=1}^{m_i} E_{i,j} \ge C_i + (m_i - 1) \cdot L_i$. The following lemma shows that any reservation with $E_{i,j} \le L_i$ has no benefit for meeting such a condition.

Lemma 1.

If there exists an $E_{i,j}$ with $E_{i,j} \le L_i$, such a reservation has a negative impact on the condition $\sum_{j=1}^{m_i} E_{i,j} \ge C_i + (m_i - 1) \cdot L_i$.

Proof.

This comes from simple arithmetic: removing such a reservation leads to $m_i - 1$ reservation servers with a better reservation condition, since the required bound decreases by $L_i$ while the provided sum decreases by only $E_{i,j} \le L_i$. ∎

Therefore, we will implicitly assume the property of Lemma 1, i.e., $E_{i,j} > L_i$ for all $j$, whenever the reservation condition in Theorem 1 is used. For the further analysis, any reservation system $E_{i,1}, E_{i,2}, \ldots, E_{i,m_i}$ for task $\tau_i$ that suffices the following constraints

(1a) $E_{i,j} \le D_i$ for $j = 1, 2, \ldots, m_i$
(1b) $E_{i,j} > L_i$ for $j = 1, 2, \ldots, m_i$
(1c) $\sum_{j=1}^{m_i} E_{i,j} \ge C_i + (m_i - 1) \cdot L_i$

is feasible for satisfying the reservation condition in Theorem 1.

The cumulative reservation budget to serve a DAG task $\tau_i$ is given by

(2) $E_i = \sum_{j=1}^{m_i} E_{i,j}.$

In the special case of equal reservations, i.e., $E_{i,j} = E_i^{eq}$ for all $j$, a lower bound on the required amount of reservations can be derived analytically from

(3) $m_i \cdot E_i^{eq} \ge C_i + (m_i - 1) \cdot L_i,$

which yields

(4) $m_i \ge \frac{C_i - L_i}{E_i^{eq} - L_i}.$

Note that the notation changed from $E_{i,j}$ to $E_i^{eq}$, due to the equal size for all $j$. Since the amount of reservations must be a natural number, we know that

(5) $m_i = \left\lceil \frac{C_i - L_i}{E_i^{eq} - L_i} \right\rceil$

and that this is the smallest amount of reservations required if all reservation budgets are equal in size. Additionally, due to the fact that there are instances in which multiple settings of $E_i^{eq}$ yield the same minimal amount of reservations, we define

(6) $E_i^{min}(m_i) = L_i + \frac{C_i - L_i}{m_i}.$
Observation 1.

The left-hand side of the above equation (5) is minimised if $E_i^{eq}$ is maximised, i.e.,

$m_i^* = \left\lceil \frac{C_i - L_i}{D_i - L_i} \right\rceil,$

and the corresponding smallest budget that achieves an equally minimal amount of reservations is given by $E_i^{min}(m_i^*) = L_i + \frac{C_i - L_i}{m_i^*}$.

This observation motivates the idea behind the transformation algorithm R-MIN, whose properties are described in the following theorem.

Theorem 2.

The R-MIN algorithm (cf. Alg. 1) transforms a set of sporadic, arbitrary-deadline DAG tasks into a set of sporadic, arbitrary-deadline sequential tasks that provide sufficient resources to schedule their associated DAG tasks.

Intuitively, R-MIN classifies tasks into light and heavy tasks. For each heavy task, it assigns the minimum number of reservation servers to the task and calculates the minimum equal reservations for these servers based on Observation 1.

Fig. 2: An arbitrary schedule of two equal reservations, as computed by the R-MIN algorithm. The DAG task shown in Fig. 1 is scheduled according to the list-scheduling algorithm by any reservation server that does not serve an unfinished subjob at that time. 7.5 units of time are provided over the interval by each reservation, scheduled on two processors. The hatched areas denote a spinning reservation, whereas the white areas imply that the reservation is either preempted or inactive.
Example 1.

To illustrate the proposed concept, an arbitrary schedule of two identical reservations is shown in Figure 2, serving the DAG task in Figure 1. The schedule provides the minimal amount of identical reservations and associated budgets that are required to serve the given DAG task under any preemption pattern, as determined by the R-MIN algorithm. Over the interval, 7.5 units of time are provided by each reservation to serve the DAG task using list scheduling. The hatched areas denote that a reservation spins due to the lack of pending subjobs, whereas the white gaps denote that a reservation is either preempted or inactive. The amount of time the reservations spend spinning may seem overly pessimistic, but note that it depends on the preemption patterns and on the structure of the DAG task itself. Thus, this approach trades resources for robustness with respect to preemption and structure uncertainty.

Algorithm 1: The R-MIN algorithm.
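Since the statements of Algorithm 1 did not survive extraction, the following sketch reconstructs R-MIN from the textual description and Observation 1. The light/heavy classification used here, $C_i \le \min(D_i, T_i)$, is our assumption of when a task can be fully sequentialized; heavy tasks additionally require $L_i < D_i$ to be feasible at all.

    import math

    def r_min(dag_tasks):
        """dag_tasks: iterable of (C, L, D, T) DAG task parameters.
        Returns sporadic reservation servers as (E, D, T) tuples."""
        servers = []
        for (C, L, D, T) in dag_tasks:
            if C <= min(D, T):
                servers.append((C, D, T))          # light: run sequentially
            else:
                m = math.ceil((C - L) / (D - L))   # minimal count m_i* (Observation 1)
                E = L + (C - L) / m                # minimal equal budget E_i^min(m_i*)
                servers.extend([(E, D, T)] * m)    # m_i* equal reservations
        return servers

    # A heavy task with C=12, L=3, D=T=8: m_i* = ceil(9/5) = 2 equal servers,
    # each with budget 7.5, and 2 * 7.5 = 15 >= 12 + 1 * 3 as required.
    print(r_min([(12.0, 3.0, 8.0, 8.0)]))  # [(7.5, 8.0, 8.0), (7.5, 8.0, 8.0)]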

Note that there are more feasible configurations to serve a DAG task as long as the conditions in Eq. (1) are met. Non-equal reservation budgets, i.e., configurations in which at least one reservation budget differs from the others, can potentially improve schedulability in partitioned or semi-partitioned scheduling. This is due to the fact that variability in reservation budgets can be helpful when packing them onto the available processors whilst satisfying the capacity constraints.

In order to retrieve those non-equal reservation budgets, two different approaches can be identified:

  1. Free distribution of the individual reservation budgets for a fixed cumulative reservation budget.

  2. Fixed reservation budget distribution, whilst increasing the amount of reservations and thus decreasing the individual budgets.

The first approach is illustrated in the following example.

Example 2.

Let $\tau_i$ be an implicit-deadline, sporadic DAG task with worst-case execution time $C_i$, critical-path length $L_i$, and period $T_i$ equal to its relative deadline $D_i$. In order to minimize the cumulative reservation budget as given by Eq. (4), it is mandatory to minimize the number of reservation servers $m_i$. The smallest $m_i$ that satisfies Eq. (4) is given by

(7) $m_i^* = \left\lceil \frac{C_i - L_i}{D_i - L_i} \right\rceil$

and implies that the largest possible budget, i.e., the task's relative deadline, is selected. Therefore, the smallest cumulative service that the reservation servers need to provide is given by $C_i + (m_i^* - 1) \cdot L_i$. Using the budget constraints $L_i < E_{i,j} \le D_i$, any combination of the reservation budgets that sums up to this cumulative service suffices the necessary conditions, whilst using the same number of reservation servers.

The benefit of such a non-equal combination is that one reservation has a smaller execution-time budget at the price of another having a larger one. Such a combination may be easier to schedule, but there is no simple greedy criterion to find the most suitable combination from a global perspective over all tasks.

The second approach is illustrated in the following example.

Example 3.

Let the task be the same as in Example 2 and let $m_i > m_i^*$; then the reservation budgets are set to

(8) $E_{i,j} = L_i + \frac{C_i - L_i}{m_i}$

for all $j = 1, 2, \ldots, m_i$.

The benefit of this approach is that, if $E_i^{min}(m_i^*)$ is too large to fit onto any processor, reservations with decreased budgets may be easier to schedule.
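The trade-off of the second approach can be seen numerically: keeping budgets equal while increasing the number of reservations beyond $m_i^*$ shrinks each individual budget according to Eq. (8). A tiny sketch (ours, with made-up parameters):

    def equal_budget(C, L, m):
        """Per-server budget for m equal reservations, Eq. (8)."""
        return L + (C - L) / m

    # C = 12, L = 3: m = 2 -> 7.5, m = 3 -> 6.0, m = 4 -> 5.25, m = 5 -> 4.8
    for m in range(2, 6):
        print(m, equal_budget(12.0, 3.0, m))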

4 Scheduling Reservation Servers

4.1 Partitioned Scheduling

When considering arbitrary-deadline task systems, the exact schedulability test evaluates the worst-case response time using time-demand analysis and a busy-window concept [16]. The finishing time $R_{k,h}$ of the $h$-th job of task $\tau_k$ in the busy window can be calculated by finding the minimum $t > 0$ satisfying

(9) $t = h \cdot C_k + \sum_{\tau_j \in hp(\tau_k)} \left\lceil \frac{t}{T_j} \right\rceil \cdot C_j,$

where $hp(\tau_k)$ denotes the tasks with higher priority than $\tau_k$ on the same processor. This means that the response time of the $h$-th job is $R_{k,h} - (h - 1) \cdot T_k$. If $R_{k,h} \le h \cdot T_k$, the busy window of task $\tau_k$ finishes with the $h$-th job. Therefore, the worst-case response time of $\tau_k$ is the maximum response time among the jobs in the busy window [16]. While this provides an exact schedulability test, it has exponential time complexity, since the length of the busy window can be up to the task set's hyper-period, which is exponential with respect to the input size.
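The busy-window iteration of Eq. (9) translates directly into code. The sketch below (ours; standard time-demand analysis in the style of [16]) computes the worst-case response time of a task under fixed-priority scheduling on one processor, with tasks given as (C, D, T) tuples.

    import math

    def worst_case_response_time(task, hp, max_jobs=100000):
        """Exact TDA for arbitrary deadlines; hp = higher-priority tasks."""
        C, D, T = task
        wcrt = 0.0
        for h in range(1, max_jobs + 1):
            t = h * C  # fixed-point iteration for Eq. (9)
            while True:
                demand = h * C + sum(math.ceil(t / Tj) * Cj for (Cj, Dj, Tj) in hp)
                if demand <= t:
                    break
                t = demand
            wcrt = max(wcrt, t - (h - 1) * T)
            if t <= h * T:  # the busy window ends with the h-th job
                return wcrt
        raise RuntimeError("busy window did not close (utilization too high)")

    tau = (2.0, 9.0, 4.0)  # arbitrary deadline: D > T
    print(worst_case_response_time(tau, hp=[(1.0, 3.0, 3.0)]))  # 3.0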

Fisher, Baruah, and Baker [11] provided the following approximated test, which deems task $\tau_k$ schedulable on a processor with higher-priority tasks $hp(\tau_k)$ if

(10a) $C_k + \sum_{\tau_j \in hp(\tau_k)} \left( C_j + U_j \cdot D_k \right) \le D_k$

(10b) $U_k + \sum_{\tau_j \in hp(\tau_k)} U_j \le 1$

Eq. (10b) ensures that the workload after $D_k$ is not underestimated when arbitrary-deadline task systems are considered, which could happen in Eq. (10a).

Bini et al. [5] improved the analysis in [11] by providing a tighter analysis than Eq. (10a), showing that the worst-case response time of task $\tau_k$ is at most

$\frac{C_k + \sum_{\tau_j \in hp(\tau_k)} C_j \cdot (1 - U_j)}{1 - \sum_{\tau_j \in hp(\tau_k)} U_j}.$

Therefore, the schedulability condition in Eqs. (10a) and (10b) can be rewritten as

(11a) $\frac{C_k + \sum_{\tau_j \in hp(\tau_k)} C_j \cdot (1 - U_j)}{1 - \sum_{\tau_j \in hp(\tau_k)} U_j} \le D_k$

(11b) $U_k + \sum_{\tau_j \in hp(\tau_k)} U_j \le 1$
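Both polynomial-time tests can be stated compactly. The sketch below is our rendering of Eqs. (10) and (11) under the reconstruction above; it checks whether a task fits on a processor next to already-assigned higher-priority tasks.

    def fbb_test(task, hp):
        """Approximate test of [11], Eqs. (10a) and (10b)."""
        C, D, T = task
        rbf = C + sum(Cj + (Cj / Tj) * D for (Cj, Dj, Tj) in hp)  # Eq. (10a)
        util = C / T + sum(Cj / Tj for (Cj, Dj, Tj) in hp)        # Eq. (10b)
        return rbf <= D and util <= 1.0

    def bini_test(task, hp):
        """Tighter response-time bound of [5], Eqs. (11a) and (11b)."""
        C, D, T = task
        u_hp = sum(Cj / Tj for (Cj, Dj, Tj) in hp)
        if u_hp >= 1.0:
            return False
        bound = (C + sum(Cj * (1 - Cj / Tj) for (Cj, Dj, Tj) in hp)) / (1 - u_hp)
        return bound <= D and C / T + u_hp <= 1.0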

4.2 Competitiveness

This section analyzes the theoretical properties when scheduling the reservation servers based on the deadline-monotonic (DM) partitioning strategy. It has been proved by Chen [8] that such a strategy has a constant speedup factor against the optimal schedule for ordinary constrained-deadline (respectively, arbitrary-deadline) task systems when the fixed-priority deadline-monotonic scheduling algorithm is used. Moreover, Chen and Chakraborty [9, 10] showed that such a strategy also has a constant speedup factor against the optimal schedule for ordinary constrained-deadline (respectively, arbitrary-deadline) task systems when the dynamic-priority earliest-deadline-first (EDF) scheduling algorithm is used.

Theorem 3.

Suppose that $\rho$ with $0 < \rho \le 1$ is given, $L_i < \rho D_i$, and there are exactly $m_i$ reservation servers for task $\tau_i$, where $m_i = \left\lceil \frac{C_i - L_i}{\rho D_i - L_i} \right\rceil$ with $E_{i,j} = L_i + \frac{C_i - L_i}{m_i}$. If $C_i > \rho D_i$, then $E_{i,j} \le \rho D_i$.

Proof.

By the assumptions $L_i < \rho D_i$ and $C_i > \rho D_i$, the setting of $m_i$ implies that

(12) $m_i \ge \frac{C_i - L_i}{\rho D_i - L_i}$
(13) $m_i < \frac{C_i - L_i}{\rho D_i - L_i} + 1$
(14) $\frac{C_i - L_i}{\rho D_i - L_i} > 1$

The condition in Eq. (14) implies $m_i \ge 2$ since $m_i$ is an integer. Since $E_{i,j} = L_i + \frac{C_i - L_i}{m_i}$ by definition, we know

$E_{i,j} = L_i + \frac{C_i - L_i}{m_i} \le L_i + (C_i - L_i) \cdot \frac{\rho D_i - L_i}{C_i - L_i} = \rho D_i,$

where the inequality is due to $\frac{1}{m_i} \le \frac{\rho D_i - L_i}{C_i - L_i}$ by reorganizing the condition in Eq. (12), and the final equality is due to cancellation and simple algebra. ∎

Lemma 2.

Under the same setting as in Theorem 3,

(15) $\sum_{j=1}^{m_i} E_{i,j} < C_i + \frac{L_i \cdot (C_i - L_i)}{\rho D_i - L_i}.$

Proof.

$\sum_{j=1}^{m_i} E_{i,j} = m_i \cdot L_i + (C_i - L_i) < C_i + \frac{L_i \cdot (C_i - L_i)}{\rho D_i - L_i},$

where the inequality is due to Eq. (13) and $L_i > 0$. ∎

Algorithm 2: The R-EQUAL algorithm.

The result in Theorem 3 can be used to specify an algorithm that transforms a collection of sporadic, arbitrary-deadline DAG tasks into a collection of light sporadic reservation tasks for a constant $\rho$, illustrated in Algorithm 2. The algorithm simply classifies a task $\tau_i$ as a heavy task if $C_i > \rho D_i$ and as a light task if $C_i \le \rho D_i$, respectively. If task $\tau_i$ is a heavy task, $m_i$ reservation servers will be provided, each with an execution time budget of $L_i + \frac{C_i - L_i}{m_i}$.

We implicitly assume $L_i < \rho D_i$ in Algorithm 2. After the transformation, we can apply any existing scheduling algorithm for ordinary sporadic real-time task systems to partition or schedule the reservation servers.
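For instance, a deadline-monotonic first-fit partitioner for the transformed reservation servers can be sketched as follows (our illustration; the paper leaves the concrete partitioning heuristic open). Any uniprocessor test, e.g., bini_test from the sketch above, can be plugged in as fits.

    def dm_first_fit(servers, M, fits):
        """servers: list of (C, D, T) reservation tasks; M: processor count.
        Returns the per-processor assignment, or None on failure."""
        processors = [[] for _ in range(M)]
        # deadline-monotonic order: smaller relative deadline = higher priority
        for srv in sorted(servers, key=lambda s: s[1]):
            for proc in processors:
                if fits(srv, proc):  # tasks already on proc have higher priority
                    proc.append(srv)
                    break
            else:
                return None          # srv fits on no processor
        return processors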

Lemma 3.

By adopting Algorithm 2, for a given $\rho$,

  • if a task $\tau_i$ is in the heavy subset, then $C_i > \rho D_i$, Theorem 3 holds, and $E_{i,j} \le \rho D_i$ for $j = 1, 2, \ldots, m_i$;

  • if a task $\tau_i$ is in the light subset, then $C_i \le \rho D_i$, and $\tau_i$ is executed sequentially without any inflation of execution time, i.e., $m_i = 1$ and $E_{i,1} = C_i$.

Furthermore, for any task $\tau_i$, $E_{i,j} \le D_i$ holds, and the cumulative reservation budget is bounded as in Lemma 2, for both light and heavy tasks.

Proof.

This holds according to the above discussions in Theorem 3 and Lemma 2. ∎

Theorem 4.

A system of arbitrary-deadline DAG tasks scheduled by reservation-based federated scheduling under partitioned DM admits a constant speedup factor with respect to any optimal scheduler for a suitable setting of $\rho$.

Proof.

We first adopt Algorithm 2 with a suitable setting of $\rho$. If there exists a DAG task $\tau_i$ with $L_i > \rho D_i$, then we know that the speedup factor for this task set is at most $\frac{1}{\rho}$, since $L_i \le D_i$ is necessary for feasibility. We focus on the case that $L_i \le \rho D_i$ holds for every task.

Suppose that $\tau_k$ is a reservation task that is not able to be partitioned onto any of the $M$ given processors in deadline-monotonic order. Let $\mathbf{P}_1$ be the set of processors on which Eq. (10a) fails. Let $\mathbf{P}_2$ be the set of processors on which Eq. (10a) succeeds but Eq. (10b) fails. Since $\tau_k$ cannot be assigned to any of the $M$ processors, $|\mathbf{P}_1| + |\mathbf{P}_2| = M$. By the violation of Eq. (10a), we know that for every processor $p \in \mathbf{P}_1$ with assigned reservation tasks $\mathbf{T}_p$,

(16) $C_k + \sum_{\tau_j \in \mathbf{T}_p} \left( C_j + U_j \cdot D_k \right) > D_k.$

By the violation of Eq. (10b), we know that for every processor $p \in \mathbf{P}_2$,

(17) $U_k + \sum_{\tau_j \in \mathbf{T}_p} U_j > 1.$

By Eqs. (16) and (17), the definition of $\mathbf{P}_1$ and $\mathbf{P}_2$, and the fact that each $\tau_j$ is assigned either on a processor of $\mathbf{P}_1$ or on a processor of $\mathbf{P}_2$ if $\tau_j$ is assigned successfully prior to $\tau_k$, we know that

(18) $\sum_{p \in \mathbf{P}_1} \Big( C_k + \sum_{\tau_j \in \mathbf{T}_p} (C_j + U_j \cdot D_k) \Big) + \sum_{p \in \mathbf{P}_2} \Big( U_k + \sum_{\tau_j \in \mathbf{T}_p} U_j \Big) \cdot D_k > M \cdot D_k.$

By Lemma 3, i.e., the bounds on the individual budgets $E_{i,j}$ and on the cumulative budgets $\sum_{j} E_{i,j}$, the above inequality also implies

(19)

Let $\delta$ be a suitably chosen constant.¹ Therefore, we know that

(20)
(21)

¹The setting of $\delta$ is in fact chosen to maximize the resulting bound.

Since this argument applies to the deadline-monotonic partitioning of the reservation servers, we know that the task system is not schedulable at the correspondingly reduced speed. Therefore, the speedup factor of reservation-based federated scheduling is bounded by a constant. ∎

Theorem 5.

A system of arbitrary-deadline DAG tasks scheduled by reservation-based federated scheduling under partitioned EDF admits a constant speedup factor with respect to any optimal scheduler for a suitable setting of $\rho$.

Proof.

Since EDF is an optimal uniprocessor scheduling policy with respect to schedulability, the same task partitioning algorithm and analysis used in Theorem 4 yield the result directly. ∎

Future research

We will design concrete algorithms that create non-equal reservation budgets and compare their competitiveness against the R-EQUAL algorithm. Furthermore, we want to analyse the performance of the proposed reservation-based DAG task scheduling under global scheduling algorithms. Finally, incorporating self-suspending behaviour of the reservation servers may yield analytic and practical benefits, since in our current approach the worst-case DAG task structure has to be assumed in order to provide provably sufficient resources. This is often too pessimistic, and self-suspending behaviour can potentially help to serve the actual demand more precisely, without spinning and blocking resources that remain unused.

References

  • [1] Björn Andersson and Dionisio de Niz. Analyzing global EDF for multiprocessor scheduling of parallel tasks. In Principles of Distributed Systems, 16th International Conference, OPODIS, pages 16–30, 2012.
  • [2] Sanjoy Baruah. The federated scheduling of constrained-deadline sporadic DAG task systems. In Proceedings of the Design, Automation & Test in Europe Conference & Exhibition, DATE, pages 1323–1328, 2015.
  • [3] Sanjoy Baruah. Federated scheduling of sporadic DAG task systems. In IEEE International Parallel and Distributed Processing Symposium, IPDPS, pages 179–186, 2015.
  • [4] Sanjoy Baruah. The federated scheduling of systems of conditional sporadic DAG tasks. In Proceedings of the 15th International Conference on Embedded Software (EMSOFT), 2015.
  • [5] Enrico Bini, Thi Huyen Chau Nguyen, Pascal Richard, and Sanjoy K. Baruah. A response-time bound in fixed-priority scheduling with arbitrary deadlines. IEEE Trans. Computers, 58(2):279–286, 2009.
  • [6] Vincenzo Bonifaci, Alberto Marchetti-Spaccamela, Sebastian Stiller, and Andreas Wiese. Feasibility analysis in the sporadic DAG task model. In ECRTS, pages 225–233, 2013.
  • [7] Jian-Jia Chen. Federated scheduling admits no constant speedup factors for constrained-deadline DAG task systems. Real-Time Syst., 52(6):833–838, November 2016.
  • [8] Jian-Jia Chen. Partitioned multiprocessor fixed-priority scheduling of sporadic real-time tasks. In Euromicro Conference on Real-Time Systems (ECRTS), pages 251–261, 2016.
  • [9] Jian-Jia Chen and Samarjit Chakraborty. Resource augmentation bounds for approximate demand bound functions. In IEEE Real-Time Systems Symposium, pages 272 – 281, 2011.
  • [10] Jian-Jia Chen and Samarjit Chakraborty. Resource augmentation for uniprocessor and multiprocessor partitioned scheduling of sporadic real-time tasks. Real-Time Systems, 49(4):475–516, 2013.
  • [11] Nathan Fisher, Sanjoy K. Baruah, and Theodore P. Baker. The partitioned scheduling of sporadic tasks according to static-priorities. In ECRTS, pages 118–127, 2006.
  • [12] Xu Jiang, Nan Guan, Xiang Long, and Wang Yi. Semi-federated scheduling of parallel real-time tasks on multiprocessors. In Proceedings of the 38th IEEE Real-Time Systems Symposium, RTSS, 2017.
  • [13] Xu Jiang, Xiang Long, Nan Guan, and Han Wan. On the decomposition-based global EDF scheduling of parallel real-time tasks. In Real-Time Systems Symposium (RTSS), pages 237–246, 2016.
  • [14] Junsung Kim, Hyoseung Kim, Karthik Lakshmanan, and Ragunathan Rajkumar. Parallel scheduling for cyber-physical systems: analysis and case study on a self-driving car. In ACM/IEEE 4th International Conference on Cyber-Physical Systems (with CPS Week 2013), ICCPS, pages 31–40, 2013.
  • [15] Karthik Lakshmanan, Shinpei Kato, and Ragunathan (Raj) Rajkumar. Scheduling parallel real-time tasks on multi-core processors. In Proceedings of the 2010 31st IEEE Real-Time Systems Symposium, RTSS ’10, pages 259–268, 2010.
  • [16] John P. Lehoczky. Fixed priority scheduling of periodic task sets with arbitrary deadlines. In RTSS, pages 201–209, 1990.
  • [17] Jing Li, Kunal Agrawal, Chenyang Lu, and Christopher D. Gill. Analysis of global EDF for parallel tasks. In Euromicro Conference on Real-Time Systems (ECRTS), pages 3–13, 2013.
  • [18] Jing Li, Jian-Jia Chen, Kunal Agrawal, Chenyang Lu, Christopher D. Gill, and Abusayeed Saifullah. Analysis of federated and global scheduling for parallel real-time tasks. In 26th Euromicro Conference on Real-Time Systems, ECRTS, pages 85–96, 2014.
  • [19] Geoffrey Nelissen, Vandy Berten, Joël Goossens, and Dragomir Milojevic. Techniques optimizing the number of processors to schedule multi-threaded tasks. In 24th Euromicro Conference on Real-Time Systems, ECRTS, pages 321–330, 2012.
  • [20] Abusayeed Saifullah, Kunal Agrawal, Chenyang Lu, and Christopher D. Gill. Multi-core real-time scheduling for generalized parallel task models. In Proceedings of the 32nd IEEE Real-Time Systems Symposium, RTSS, pages 217–226, 2011.
  • [21] Abusayeed Saifullah, David Ferry, Jing Li, Kunal Agrawal, Chenyang Lu, and Christopher D. Gill. Parallel real-time scheduling of DAGs. IEEE Transactions on Parallel and Distributed Systems, 25(12):3242–3252, 2014.