Flow Network Models for Online Scheduling Real-time Tasks on Multiprocessors

10/19/2018, by Hyeonjoong Cho, et al.

We consider the flow network model for solving multiprocessor real-time task scheduling problems. Using the flow network model, or its generic form, the linear programming (LP) formulation, for these problems is not new. However, previous works have limitations; for example, they are classified as offline scheduling techniques, since they establish a flow network model or an LP problem over a very long time interval. In this study, we propose a method of constructing the flow network model for online scheduling of periodic real-time tasks on multiprocessors. Our key idea is to construct the flow network only for the active instances of tasks at the current scheduling time, while guaranteeing the existence of an optimal schedule for the future instances of the tasks. Optimal scheduling is defined here as ensuring that all real-time tasks meet their deadlines whenever the total utilization demand of the given tasks does not exceed the total processing capacity. We then propose flow network model-based polynomial-time scheduling algorithms. Advantageously, the flow network model allows the task workload to be collected unfairly within a certain time interval without losing optimality. This leads us to design three unfair-but-optimal scheduling algorithms on both continuous and discrete-time models. In particular, our unfair-but-optimal scheduling algorithm on a discrete-time model is, to the best of our knowledge, the first in this problem domain. We experimentally demonstrate that it significantly alleviates scheduling overheads, i.e., it reduces the number of preemptions while keeping the number of task migrations across processors comparable.







1 Introduction

Multicore or multiprocessor platforms are becoming prevalent in numerous digital devices, and this advance has been accelerated by the increasing computational demands of various emerging high-quality services. Alongside this trend, there has been a vast amount of research into multiprocessor real-time scheduling theories Davis & Burns (Oct. 2011); Baruah et al. (2015). Real-time systems are computing systems whose correct behavior depends not only on the value of the computation but also on when the results are produced Buttazzo (2011). Most research problems related to multiprocessors involve far more than a simple theoretical extension from uniprocessors to multiprocessors; thus, the real-time scheduling problems on multiprocessors are challenging.

Liu stated Liu (1969): “Few of the results obtained for a single processor generalize directly to the multiple processor case; bringing in additional processors adds a new dimension to the scheduling problem. The simple fact that a task can use only one processor even when several processors are free at the same time adds a surprising amount of difficulty to the scheduling of multiple processors”. This statement can be interpreted as saying that in uniprocessors, the constraint that each task is forbidden from executing simultaneously on more than one processor is implicit, because a single processor is the only processing capacity present in the system. By contrast, in multiprocessors, the constraint of no intra-task parallelism becomes not only explicit but also interrelated with the other constraints, which significantly increases the problem’s complexity.

1.1 Motivational examples

We provide several examples in this section to specifically illustrate our research motivation. First, we introduce some of the basic notions and terminology used throughout this study.

We assume that each task τ_i is characterized by (C_i, D_i, T_i), where C_i, D_i, and T_i are its worst-case execution time, deadline, and period, respectively. When D_i is identical to T_i, the deadline is called implicit and τ_i is then characterized by (C_i, T_i). The hyper-period H of all tasks is defined as the least common multiple of all T_i. The j-th instance (or job) of the periodic task τ_i is denoted by τ_{i,j} and the arrival time of τ_{i,j} is denoted by a_{i,j}. The running rate is the ratio of the execution time of a job (or a part of the job) to the time interval for the execution. For example, when τ_{i,j} has its execution time C_i within its deadline D_i, its running rate is C_i/D_i.

A set B containing the absolute deadlines of all of the given jobs is called a set of boundaries. For convenience, each element (or boundary) of B is denoted by b_k, where an earlier boundary has a lower index k. The individual utilization of τ_i is defined as u_i = C_i/T_i, and the total utilization U of the given task set is the sum of all u_i. We assume that n tasks run on m processors.
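As a quick sketch of these definitions (the (C_i, T_i) values mirror the Table 1 example; the variable names are ours, not the paper's), the hyper-period and utilizations can be computed as follows:

```python
from math import gcd
from functools import reduce
from fractions import Fraction

# Task set in (C_i, T_i) form with implicit deadlines (D_i = T_i).
tasks = [(2, 3), (2, 6), (2, 6), (3, 9), (3, 9)]

def lcm(a, b):
    return a * b // gcd(a, b)

# Hyper-period H: least common multiple of all periods T_i.
hyper_period = reduce(lcm, (T for _, T in tasks))

# Individual utilizations u_i = C_i / T_i and total utilization U
# (exact rational arithmetic avoids floating-point drift).
u = [Fraction(C, T) for C, T in tasks]
U = sum(u)

print(hyper_period)  # 18
print(U)             # 2: equals m = 2 processors, a fully loaded set
```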

Definition 1

(RT-optimality) An optimal real-time schedule meets all the task deadlines when the total utilization demand U of a given task set does not exceed the total processing capacity m, which we call RT-optimal in this study.

Several classes of RT-optimal task scheduling algorithms on multiprocessors have been developed for periodic implicit-deadline tasks. One of the well-known classes is fluid schedule-based algorithms, where each task execution attempts to track the fluid schedule that is known to be RT-optimal Baruah et al. (1993); Srinivasan et al. (2003). Here, the fluid schedule is defined as follows.

Definition 2

(Fluid schedule) A schedule is said to be fluid if and only if at any time t, every instance of task τ_i that arrived at time a has been executed for exactly u_i · (t − a) time units Nelissen et al. (Jul. 2014).

Thus, to achieve RT-optimality, the fluid schedule assigns each task τ_i a uniform running rate r_i for all times, where r_i is set to C_i/T_i (or u_i). The fluid schedule is known to be RT-optimal but unrealistic, since it allocates a fraction of the computational resource to each task during each time unit, e.g., 1/3 of the processing capacity is allocated to a task per unit time.

The first example, shown in Table 1, includes five tasks running on two processors. Its fluid schedule is illustrated in Figure 1. The total utilization demand is U = 2/3 + 4 × (1/3) = 2, which is equal to the total processing capacity of 2; thus, the RT-optimality of this fluid schedule holds.

Task  C_i  T_i  u_i
τ_1   2    3    2/3
τ_2   2    6    1/3
τ_3   2    6    1/3
τ_4   3    9    1/3
τ_5   3    9    1/3
Table 1: Example task set for fluid and boundary-fair schedules
Figure 1: Fluid schedule for the tasks in Table 1

One advantage of this fluid schedule is that it satisfies all of the interrelated constraints of the multiprocessor real-time scheduling problem, including the constraint of no intra-task parallelism, by using a single parameter r_i per task. Provided that each r_i is not greater than one and the sum of all r_i is not greater than the total processing capacity, all tasks satisfy their deadlines without violating the constraint of no intra-task parallelism within the permitted total processing capacity.
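This point can be illustrated with a small sketch (the task values are assumed to match Table 1): in any interval, the single rate parameter simultaneously bounds each task's allocation by the interval length and the total allocation by the capacity.

```python
from fractions import Fraction

# Table-1-style task set (C_i, T_i); under implicit deadlines r_i = u_i.
tasks = [(2, 3), (2, 6), (2, 6), (3, 9), (3, 9)]
m = 2
rates = [Fraction(C, T) for C, T in tasks]

def fluid_alloc(rate, t1, t2):
    """Fluid schedule: a task with running rate r receives exactly
    r * (t2 - t1) units of execution in any interval [t1, t2)."""
    return rate * (t2 - t1)

# Allocations over [0, 3): each one is <= the interval length (no
# intra-task parallelism) and their sum is <= m * 3 (capacity).
alloc = [fluid_alloc(r, 0, 3) for r in rates]
assert all(a <= 3 for a in alloc)
assert sum(alloc) <= m * 3
print(sum(alloc))  # 6
```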

Figure 2: Boundary-fair schedule for the tasks in Table 1

Several inheritors of the fluid schedule, which we refer to as fluid schedule-based algorithms, attempt to track the fluid schedule in order to obtain RT-optimality. To track the fluid schedule, they restrict the difference between the actual computational resource allocation and the fluid-schedule-based resource allocation for each task. The first algorithm of this type, proportionate-fair (Pfair), strictly maintains the restriction at every time quantum Baruah et al. (1993, June 1996); its several descendants relax the restriction to being maintained at every boundary Zhu et al. (2003); Cho et al. (2006); Funk et al. (Jul. 2011); Funaoka et al. (2008); Zhu et al. (2011). Pfair and some of its descendants rely on a discrete-time model, since they allocate integral units of the computational resource to each task, e.g., integer multiples of the system time unit for execution. In addition, they are known to support fairness in the sense that the computational resource allocated to each task is always proportionate to its utilization u_i during the time interval from zero to any time quantum for Pfair, or to any boundary for its descendants.

Figure 2 shows the task set in Table 1 scheduled by one of Pfair's descendants, boundary-fair scheduling (BF). It can be observed that each task receives the same amount of the computational resource within every time interval between two adjacent boundaries, i.e., fairness is supported. For example, τ_1 executes for 2 time units within every time interval [b_k, b_{k+1}].

Recently, it was observed that the scheduling overheads, including the number of preemptions and task migrations across processors, decrease as fairness is relaxed Nelissen et al. (2011, 2012). Based on this observation, an unfair-but-optimal algorithm, U-EDF, was proposed. The unfairness of U-EDF implies that some of the task workload is allowed to be advanced or delayed across boundaries beyond fairness. Unlike the fluid schedule-based approaches, all tasks thus do not even need to run in every time interval between two adjacent boundaries, and the number of preemptions can therefore be reduced. U-EDF relies on a continuous-time model, since it may allocate a fractional computational resource to a task, e.g., 1/3 of the processing capacity allocated to a task for a certain time interval.

In addition to the reduced scheduling overheads, we believe that the unfairness has several other significant advantages. For example, unfairness allows the total workload to be collected within certain time intervals, and the processors within the other, idle intervals can turn into a different state to minimize their energy consumption. Figure 3 shows an unfair schedule for the five tasks in Table 2. In this schedule, the workloads are aggregated within some time intervals; thus, the two processors have the chance to become slow or idle in the remaining intervals.

Task  C_i  T_i  u_i
τ_1   1    3    1/3
τ_2   2    6    1/3
τ_3   2    6    1/3
τ_4   1    9    1/9
τ_5   1    9    1/9
Table 2: Example task set for slowing or idling processors
Figure 3: Idling processors for the tasks in Table 2

1.2 Contribution

In order to support unfair-but-optimal scheduling, we build a framework that allows us to manipulate the task workload efficiently across boundaries while retaining RT-optimality, which is the primary focus of this study. Our contributions include the following:

(1) We formulate the problem of online scheduling periodic implicit-deadline tasks on multiprocessors by specifying its constraints, and we propose a flow network model to solve the formulated problem. Again, using the flow network model or the LP formulation for multiprocessor real-time task scheduling is not new. However, the previous works have limitations: some are used as offline scheduling techniques, since they require a single flow network model or a single LP problem to be constructed over a very long time interval, from time 0 to the hyper-period H of the given tasks Lawler & Martel (Feb. 1981); Megel et al. (2010); Legout et al. (2013), while others are applicable only to aperiodic tasks Horn (Mar. 1974). To overcome these limitations, we propose a method of constructing the flow network model only for the active instances of tasks at the current scheduling time, while guaranteeing the existence of an RT-optimal schedule for the future instances of the tasks.

(2) Based on the framework, we introduce an unfair-but-optimal multiprocessor scheduling algorithm, called flow network-based Earliest-Deadline-First (fn-EDF), for periodic tasks on both continuous and discrete-time models. In particular, to the best of our knowledge, fn-EDF on a discrete-time model is the first online unfair-but-optimal scheduling algorithm in the given problem domain. We experimentally show that fn-EDF on the discrete-time model significantly reduces the number of preemptions with a comparable number of migrations relative to an existing BF algorithm.

Table 3 compares fn-EDF with existing algorithms in terms of their problem domains. BF is fair-and-optimal on a discrete-time model, and both U-EDF and RUN Regnier et al. (2011) are unfair-but-optimal on a continuous-time model. The original LP-based scheduling focused on constrained-deadline tasks, whose deadlines are less than their periods Lawler & Martel (Feb. 1981); Megel et al. (2010). In Table 3, it is assumed that LP-based scheduling can easily solve the implicit-deadline task scheduling problem by setting D_i = T_i. As discussed, the traditional LP-based scheduling is classified as an offline approach.

The remainder of this paper is organized as follows. Section 2 presents the problem formulation and the corresponding flow network model for scheduling implicit-deadline periodic tasks; the unfair-but-optimal scheduling algorithms on both continuous and discrete-time models are also discussed. Section 3 experimentally evaluates the performance of the proposed algorithm compared with an existing method. In Section 4, we discuss two issues about the proposed approach: complexity and extensibility. Section 5 summarizes the related work. In Section 6 we give our conclusion.

         Algorithm  Continuous-time model  Discrete-time model  Reference
online   fn-EDF     RT-optimal             -                    Section 2.5
         fn-EDF     -                      RT-optimal           Section 2.6
         BF         -                      RT-optimal           Zhu et al. (2011)
         U-EDF      RT-optimal             -                    Nelissen et al. (2012)
         RUN        RT-optimal             -                    Regnier et al. (2011)
offline  LP-based   RT-optimal             RT-optimal           Lawler & Martel (Feb. 1981); Megel et al. (2010)
Table 3: Comparison

2 Scheduling Algorithms

2.1 System model

We consider the implicit-deadline task τ_i characterized by (C_i, T_i). We assume that the task parameters, C_i and T_i, are multiples of the system time unit. The active job J_i of τ_i at time t has its arrival time a_i subject to a_i ≤ t < a_i + T_i. At time t, J_i has its remaining execution time c_i, where 0 ≤ c_i ≤ C_i. The active job set is denoted by J(t). Both a_i + T_i and a_{i,j} + T_i correspond to d_i and d_{i,j}, respectively, and are called the absolute deadlines of the job.

b_k is the k-th boundary of B. The earlier boundary is assumed to have the lower index k. When we consider the active jobs only, B contains the absolute deadlines (boundaries) of the active jobs, where the number of boundaries is less than or equal to the number of active jobs. The window W_k is the time interval ranging over [b_{k−1}, b_k), where b_0 is taken as the current time t. The length of W_k is L_k = b_k − b_{k−1}, and the permitted processing capacity of W_k is upper-bounded by m · L_k. The allocated execution time for J_i within W_k is denoted by e_{i,k}.

2.2 Problem formulation with an example

Assume that the 5 tasks in Table 1 are running on 2 processors. To consider the active jobs at the current time t = 0, we focus on the time interval from the current time to the latest deadline of all active jobs. In this interval, the boundary set is B = {3, 6, 9}. For the given example, all active jobs at current time 0 are illustrated in white in Figure 4, and the latest deadline is 9. For the active jobs, the allocated execution times e_{i,k} are assigned as shown in Figure 4.

Figure 4: Active jobs at time 0

Next, we formulate a linear programming problem for the active jobs at time t = 0.

Equation 1 defines the constraint that each active job completes its remaining execution within the permitted time interval, i.e., before its absolute deadline. We call this the job completion constraint (JCC).

    Σ_{k : W_k ⊆ [0, d_i)} e_{i,k} = c_i,  for every active job J_i.    (1)
Inequality 2 is the constraint that the sum of the allocated execution times of the active jobs within a given window does not exceed the permitted processing capacity of the window. We call this the processing capacity constraint (PCC).

    Σ_i e_{i,k} ≤ c(W_k),  for every window W_k.    (2)
Inequality 3 is the constraint that each active job within a given window does not simultaneously occupy more than one processor. We call this the constraint of no intra-task parallelism (NIP).

    e_{i,k} ≤ L_k,  for every active job J_i and window W_k.    (3)
To facilitate the discussion, the white region ranging over [0, 9) in Figure 4 is called the active job area at time 0, denoted by AJA(0). In other words, AJA(t) is a collection of the maximum capacity per window that can be utilized to execute the active jobs at t. For each AJA(t), three types of constraints, JCC, PCC, and NIP, are defined. No objective function is required, and a feasible solution that satisfies all the constraints is sufficient for our purpose.

In the constraints above, all right-hand side values except c(W_k) can be determined easily when the set of active jobs is fixed. The rationale behind the determination of c(W_k) is explained using the following example. Figure 4 indicates that some capacity of W_2 is used for the active jobs, i.e., {τ_{2,1}, τ_{3,1}, τ_{4,1}, τ_{5,1}}, and the remainder is reserved for a future job τ_{1,2}. Thus, each c(W_k) should be determined in order to support the schedulability of the current active jobs and future jobs as well. Therefore, we determine c(W_k) using the fluid schedule. For example, c(W_3) is calculated by reserving the fluid execution time of {τ_{1,3}, τ_{2,2}, τ_{3,2}} within W_3 from the permitted processing capacity of W_3, e.g., c(W_3) = 2 · 3 − 2 − 1 − 1 = 2. When a schedulable implicit-deadline task set is given, since we reserve a suitable amount of processing capacity for future jobs based on the fluid schedule, the schedulability of future jobs remains valid regardless of our scheduling decision within the current AJA.
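The reservation rule can be sketched in code. This is an illustrative reconstruction, not the paper's implementation: it computes c(W_k) at t = 0 for a Table-1-style task set by subtracting the fluid reservation of every future job overlapping each window from the full capacity m · L_k.

```python
from fractions import Fraction

m = 2
tasks = [(2, 3), (2, 6), (2, 6), (3, 9), (3, 9)]   # (C_i, T_i)
horizon = 9                                         # latest active-job deadline
windows = [(0, 3), (3, 6), (6, 9)]                  # W_1, W_2, W_3 at t = 0

def reserved(task, w):
    """Fluid-schedule reservation of the future jobs of `task` inside window w
    (assumes all first jobs arrive at time 0, as in the example)."""
    C, T = task
    u = Fraction(C, T)
    total = Fraction(0)
    arrival = T                   # first *future* job arrives one period later
    while arrival < horizon:
        lo = max(arrival, w[0])   # overlap of [arrival, arrival+T) with w
        hi = min(arrival + T, w[1])
        if hi > lo:
            total += u * (hi - lo)  # reserve u_i * overlap length
        arrival += T
    return total

caps = [m * (hi - lo) - sum(reserved(tk, (lo, hi)) for tk in tasks)
        for lo, hi in windows]
print(caps)  # [Fraction(6, 1), Fraction(4, 1), Fraction(2, 1)]
```

The resulting capacities 6, 4, and 2 match the c(W_3) = 2 derivation in the text.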

To schedule real-time tasks up to their hyper-period, we assume that an AJA is constructed repeatedly at each boundary (scheduling event). Whenever an AJA is constructed at time t, the right-hand sides of the JCC equations are filled with the remaining execution time c_i of each active job J_i. PCCs and NIPs are formed in the same manner as shown for time 0. Since a new AJA is constructed at each boundary, some e_{i,k} in the previous AJA are recalculated. In this example, at time 0, only the allocations e_{i,1} in the first window W_1 are used for scheduling. The other allocated execution times are recalculated in the next AJA.

Figure 5 shows both AJA(3) and AJA(6). We assume that the indices of the boundaries and windows are updated for each AJA(t). Once all e_{i,1} in W_1 are obtained, the actual schedule within W_1 can easily be determined, e.g., using McNaughton's wrap-around algorithm McNaughton (1959).

Figure 5: Active jobs at time 3 and 6
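The per-window dispatching step mentioned above (McNaughton's wrap-around) is simple enough to sketch; the allocation values below are assumed for illustration and are not taken from the paper's figures.

```python
def mcnaughton(allocs, L, m):
    """McNaughton's wrap-around rule: pack (job, time) allocations onto m
    processors over a window of length L.  Returns, per processor, a list of
    (job, start, end) segments.  A wrapped job is split into two segments
    that never overlap in time, so no intra-task parallelism occurs as long
    as each allocation <= L and sum of allocations <= m * L."""
    schedule = [[] for _ in range(m)]
    p, t = 0, 0
    for job, e in allocs:
        while e > 0:
            run = min(e, L - t)
            schedule[p].append((job, t, t + run))
            e -= run
            t += run
            if t == L:            # processor timeline full: wrap to the next
                p, t = p + 1, 0
    return schedule

# Window of length 3 on m = 2 processors with allocations e_1 = e_2 = e_3 = 2:
schedule = mcnaughton([(1, 2), (2, 2), (3, 2)], L=3, m=2)
print(schedule)  # [[(1, 0, 2), (2, 2, 3)], [(2, 0, 1), (3, 1, 3)]]
```

Note that job 2 is split across the two processors ([2,3) on the first, [0,1) on the second) without ever running in parallel with itself.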

We emphasize that a solution satisfying the three types of constraints may yield different running rates for a job in two adjacent windows, e.g., e_{i,1}/L_1 ≠ e_{i,2}/L_2, which implies that this approach can generate an unfair schedule.

2.3 Problem formulation with the general task model

Figure 6 shows a set of general tasks with implicit deadlines. At the current time t, AJA(t) is constructed with three types of constraints. We set the current time to the boundary b_0. The first window W_1 then ranges from b_0 (= t) to b_1. The active job J_i is assumed to have its remaining execution time c_i at t. Within the time interval [t, max_i d_i), a series of windows, {W_1, …, W_{N_W}}, is built based on the absolute deadlines of the active jobs. The number of windows N_W is less than or equal to the number of active jobs.

Figure 6: General task model with implicit deadlines and AJAs

For convenience, we define two sets for the current AJA as follows:

    W(i) = {k : W_k ⊆ [t, d_i]},    (4)
    J(k) = {i : J_i ∈ J(t) and W_k ⊆ [t, d_i]}.    (5)

W(i) contains all indices of the windows which are placed in the time interval [t, d_i]. J(k) contains all indices of the active jobs at time t which are still active in W_k. Using these two sets, the constraints for AJA(t) are defined as follows:

    JCC:  Σ_{k ∈ W(i)} e_{i,k} = c_i,  for every active job J_i,    (6)
    PCC:  Σ_{i ∈ J(k)} e_{i,k} ≤ c(W_k),  for every window W_k,    (7)
    NIP:  e_{i,k} ≤ L_k,  for every J_i and every k ∈ W(i).    (8)
For PCC, c(W_k) is set as follows:

    c(W_k) = m · L_k − Σ_{future jobs τ_{i,j}} u_i · |[a_{i,j}, d_{i,j}) ∩ W_k|,    (9)

i.e., the fluid-schedule execution time of every future job that overlaps W_k is reserved and subtracted from the full capacity m · L_k.
At a scheduling event (or boundary), the corresponding AJA is established and the three types of constraints are defined. After a feasible solution is found for the AJA, the part of the feasible solution for the first window W_1 is used to allocate the computational resource to each task for execution during W_1. At the next scheduling event, the next AJA is established and a similar procedure follows. This iteration continues until the hyper-period H, which provides the schedulability for the given tasks. Note that the LP formulation includes no objective function, and a feasible solution that satisfies all constraints is sufficient for our purpose.

The RT-optimality of the proposed approach can be proved by showing that, as long as the fluid schedule can be defined for a given task set, the proposed approach finds at least one feasible schedule.

Lemma 1

If a fluid schedule exists for the given periodic implicit-deadline task set, the fluid schedule satisfies the three types of constraints in the first AJA(0).


The fluid schedule guarantees schedulability by providing each task with processing capacity based on its running rate r_i, which is equivalent to its individual utilization u_i. For AJA(0), the fluid schedule ensures that every e_{i,k} is u_i · L_k. First, the JCCs for AJA(0) are satisfied as follows:

    Σ_{k ∈ W(i)} e_{i,k} = Σ_{k ∈ W(i)} u_i · L_k = u_i · (d_i − t) = c_i.
Second, the NIPs for AJA(0) are satisfied since e_{i,k} = u_i · L_k ≤ L_k. Third, the PCCs for AJA(0) are satisfied as follows. Since the fluid schedule is assumed, the following holds for every window W_k:

    Σ_{i ∈ J(k)} u_i + Σ_{j ∉ J(k)} u_j = U.

By the assumption of the fluid schedule, U ≤ m and every u_i ≤ 1. Thus,

    Σ_{i ∈ J(k)} e_{i,k} = Σ_{i ∈ J(k)} u_i · L_k ≤ m · L_k − Σ_{j ∉ J(k)} u_j · L_k.

From Equation 9, the right-hand side is at most c(W_k); thus Σ_{i ∈ J(k)} e_{i,k} ≤ c(W_k).

Theorem 2.1

If a fluid schedule exists for the given periodic implicit-deadline task set, the proposed approach provides a feasible solution satisfying all constraints established by the task set.


The proof is obtained by induction on the increasing boundary. We assume that AJA(0) has a feasible solution that satisfies all constraints. Then, we show that if AJA(b_l) has a feasible solution, AJA(b_{l+1}) has at least one feasible solution, where b_{l+1} is the boundary next to b_l, as shown in Figure 6. Note that the index is not updated at the next scheduling event point, simply for convenience in the proof.
Basis. From Lemma 1, AJA(0) in Figure 6 has at least one feasible solution when t = 0. Here, assume that the active jobs at t are sorted in increasing deadline order.
Induction step. Assume that E = {e_{i,k}} is a feasible solution for AJA(b_l). After the allocations e_{i,1} are consumed in the first window of AJA(b_l), E is still a feasible solution for the remaining area of AJA(b_l). Therefore, E satisfies the following equations.


At b_{l+1}, when a new job arrives, AJA(b_{l+1}) is established using the fluid schedule. For AJA(b_{l+1}), which covers the array of windows {W_{l+1}, W_{l+2}, …}, we define the three types of constraints as follows.


For AJA(b_{l+1}), we select the remaining part of E as a candidate solution. Now, we need to check whether the candidate solution satisfies all constraints.

For JCCs, let Equation 16 be split as follows.


Since the candidate solution coincides with E on the windows inherited from AJA(b_l), Equation 19 becomes the following.


The candidate solution satisfies the first case of Equation 20 since Equation 13 holds. It also satisfies the second case of Equation 20 as follows.


Consequently, the candidate solution satisfies JCCs for AJA().

For PCCs, the left-hand side of Inequality 17 is modified as follows.


If the variables above are substituted with the candidate solution, it becomes the following due to Equation 14.


Therefore, the candidate solution satisfies PCCs.

In addition, since E satisfies the NIPs and the window lengths are unchanged across the two AJAs, the candidate solution satisfies the NIPs.

In summary, the candidate solution satisfies all constraints of AJA(b_{l+1}). This implies that if a feasible solution exists for AJA(b_l), then at least one feasible solution for AJA(b_{l+1}) can be found. A feasible solution for every AJA from time 0 to the hyper-period H is ensured by the induction steps.

Corollary 1

The proposed approach is an RT-optimal real-time scheduling algorithm.


According to Theorem 2.1, if the fluid schedule exists for a given task set, our technique provides feasible solutions for the repeated AJAs. It is also known that if the total utilization U of the task set is less than or equal to m, an RT-optimal fluid schedule exists. Therefore, if U of a task set is less than or equal to m, our technique provides feasible solutions for the repeated AJAs, which implies its RT-optimality.

2.4 Flow network model

The objective of the maximum flow problem is to find the maximal flow from a single source to a single sink in a given flow network. To efficiently solve the LP problem for an AJA, we transform it into the maximum flow problem by slightly relaxing the JCCs into inequalities as follows:

    Σ_{k ∈ W(i)} e_{i,k} ≤ c_i,  for every active job J_i.    (25)
The maximum flow problem for AJA(t) consists of the constraints, including Inequalities 25, 7, and 8, and the objective function as follows.

Definition 3

(Maximum flow problem for AJA(t)) Find e_{i,k} maximizing Σ_i Σ_{k ∈ W(i)} e_{i,k}, subject to Inequalities 25, 7, and 8.
Note that when the maximum flow occurs in the given problem, the left-hand sides of all JCCs, i.e., of Inequality 25, are equal to their right-hand sides c_i, which satisfies the original JCC equations. We call this the complete maximum flow.

Definition 3 turns our problem into a maximum flow problem. Since all constraints become the upper bounded inequalities, each upper bound acts as the capacity of each edge in the flow network. Using these edges, the flow network is built as follows.

To construct the flow network, we add two additional nodes, the source s and the sink d. Between s and d in the network, two intermediate layers of nodes are placed: the first layer contains the nodes of all active jobs and the second layer contains the nodes of all windows. We name each node based on its corresponding job or window. Each edge is denoted by e(x, y), where x and y are the source and destination nodes, respectively. e(x, y, c) denotes that the edge has flow capacity c.

From s to node J_i, a directional edge is inserted, and its flow capacity is determined as c_i. From node W_k to d, a directional edge is inserted with capacity c(W_k). Between node J_i and node W_k, an edge whose flow is constrained by L_k is inserted.

Formally, when AJA(t) is given, it is transformed into a capacitated network G = (V, E), where the network contains a set of nodes V and a set of edges E. We call G the flow network for real-time scheduling (FNRT).

    V = {s, d} ∪ {J_i : J_i ∈ J(t)} ∪ {W_k},
    E = {e(s, J_i, c_i)} ∪ {e(J_i, W_k, L_k) : k ∈ W(i)} ∪ {e(W_k, d, c(W_k))}.
The actual flow on an edge from node x to node y is denoted by f(x, y). In FNRT, f(J_i, W_k) corresponds to e_{i,k} for the given AJA. In addition, the set of capacitated edges {e(s, J_i, c_i)} represents the JCCs. The sets of capacitated edges {e(J_i, W_k, L_k)} and {e(W_k, d, c(W_k))} represent the NIPs and PCCs, respectively. The complete maximum flow for FNRT satisfies all constraints, even including the original JCC equations of the linear programming problem. A flow network example for Figure 4 is shown in Figure 7.
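To make the construction concrete, the following sketch builds the FNRT for the active jobs of Table 1 at time 0 (with the window capacities 6, 4, 2 derived in Section 2.2) and solves it with a plain Edmonds-Karp routine. The node names and the solver are ours, not the paper's.

```python
from collections import deque

def max_flow(cap, s, d):
    """Edmonds-Karp on an adjacency-dict capacity map cap[u][v]."""
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and d not in parent:
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if d not in parent:
            return flow
        # Find the bottleneck and augment, creating residual arcs.
        path, v = [], d
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= b
            cap.setdefault(v, {})[u] = cap.get(v, {}).get(u, 0) + b
        flow += b

# FNRT at t = 0: s -> J_i with capacity c_i (JCC), J_i -> W_k with capacity
# L_k = 3 for the windows before J_i's deadline (NIP), W_k -> d with
# capacity c(W_k) (PCC).
cap = {
    's': {'J1': 2, 'J2': 2, 'J3': 2, 'J4': 3, 'J5': 3},
    'J1': {'W1': 3},
    'J2': {'W1': 3, 'W2': 3},
    'J3': {'W1': 3, 'W2': 3},
    'J4': {'W1': 3, 'W2': 3, 'W3': 3},
    'J5': {'W1': 3, 'W2': 3, 'W3': 3},
    'W1': {'d': 6}, 'W2': {'d': 4}, 'W3': {'d': 2},
}
total = max_flow(cap, 's', 'd')
print(total)  # 12
```

The returned value equals the sum of all c_i (2+2+2+3+3 = 12), i.e., the complete maximum flow exists and every JCC is met with equality.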

Figure 7: Flow network for the example
input :  the current time t and the task set Γ
Data: F is the flow network to be constructed
Data: e_{i,k} is the allocated execution time for all J_i and W_k in the AJA
Data: e(x, y, c) is an edge from node x to node y with capacity c
1   B = ObtainAllBoundaries(t, Γ)
2   C = ComputeWindowCapacities(t, B)
3   Add the source s and the sink d to F
4   Add e(W_k, d, c(W_k)) to F for every window W_k
5   for each active job J_i in J(t) do
6       for each window W_k do
7           if b_k ≤ d_i then
8               Add e(J_i, W_k, L_k) to F
9           if b_k = d_i then
10              break
11      end
12      Add e(s, J_i, c_i) to F
13  solve(F) return {e_{i,1}}
Algorithm 1 Schedule(t, Γ) based on the flow network

A high-level description of the algorithm that is invoked at each scheduling event (boundary) is shown in Algorithm 1. Lines 1-12 show how to construct the FNRT. The maximum flow problem is solved at line 13. The computational complexity of constructing the FNRT is proportional to the nested loops in lines 5-12; the complexity of solving the maximum flow problem at line 13 is dominant, though.

The computational complexity of the algorithm depends primarily on both the number of nodes |V| and the number of edges |E|. The number of nodes in the flow network is |V| = N_J + N_W + 2, where N_J is the number of active jobs and N_W is the number of windows. The number of windows is less than or equal to the number of tasks. Thus, |V| = O(n) in the worst case. The number of edges is |E| = N_J + N_W + Σ_i |W(i)|. In the worst case, N_J = n and Σ_i |W(i)| = n^2. Thus, |E| = O(n^2).

The maximum flow problem has been intensively studied in the graph theory research community and several polynomial-time algorithms have been found Goldberg & Tarjan (Aug. 2014). In terms of complexity, the algorithms are categorized into two groups, i.e., weakly polynomial and strongly polynomial algorithms Goldberg & Tarjan (Aug. 2014). The complexity of the weakly polynomial algorithms is upper-bounded by a combination of |V|, |E|, and the largest capacity among all edges. On the other hand, the complexity of the strongly polynomial algorithms is upper-bounded by a combination of |V| and |E| alone. A strongly polynomial algorithm with complexity O(|V||E|) Orlin (2013) was introduced recently and further improvement continues. We use the O(|V||E|)-complexity algorithm for line 13 in Algorithm 1. Since |V| and |E| are proportional to n and n^2 in the worst case, respectively, the complexity of Algorithm 1 is O(n^3). Several other maximum flow algorithms with complexity O(|V|^2 |E|) are possible alternatives Goldberg & Tarjan (1986).

2.5 fn-EDF on the continuous-time model

The objective of maximum flow algorithms is to send the maximal flow from the source to the sink. In general, multiple sets of flows that achieve this goal may exist, which implies that the formulated problem can have multiple solutions. Since maximum flow algorithms simply find one of the solutions, depending on the preference of the selected algorithm, the amount of flow on each individual edge is not controlled, but only upper-bounded. In a real-time scheduling context, this implies that a task schedule determined by a maximum flow algorithm may be, for example, non-work-conserving or work-conserving from time to time. To control the flow over the FNRT more carefully, additional features of the flow network are required.

To control the flow over the network, we consider the minimum cost flow problem (MCFP). Each edge of the flow network for MCFP contains one more parameter, called the cost, w(x, y). When the actual flow f(x, y) is sent on an edge, the cost of the flow on the edge is calculated by w(x, y) · f(x, y). The objective of MCFP is to find the set of actual flows on all edges that minimizes the total cost of the flow, when a total flow value is given. Since the total flow value is known to be the sum of the right-hand sides of the JCCs for the given AJA, MCFP is easily applicable. Assigning a cost to each edge allows us to control the actual flows in the network.
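A minimal sketch of this idea follows. The successive-shortest-path solver and the tiny two-job, two-window network are ours; the cost slopes (2 per window index for the earlier-deadline job J1, 1 for the later one J2) are assumed purely for illustration, not the paper's exact assignment.

```python
from collections import deque

def min_cost_flow(n, edges, s, t):
    """Successive shortest paths (SPFA variant) for the minimum cost flow
    problem.  `edges` is a list of (u, v, capacity, cost); the function
    returns (maximum flow, minimum total cost)."""
    g = [[] for _ in range(n)]
    for u, v, cap, w in edges:
        g[u].append([v, cap, w, len(g[v])])       # forward arc
        g[v].append([u, 0, -w, len(g[u]) - 1])    # residual arc
    flow = cost = 0
    while True:
        dist = [float('inf')] * n
        prev = [None] * n
        inq = [False] * n
        dist[s] = 0
        q = deque([s]); inq[s] = True
        while q:                                   # shortest path by cost
            u = q.popleft(); inq[u] = False
            for i, (v, cap, w, _) in enumerate(g[u]):
                if cap > 0 and dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
                    prev[v] = (u, i)
                    if not inq[v]:
                        q.append(v); inq[v] = True
        if prev[t] is None:
            return flow, cost
        b, v = float('inf'), t                     # bottleneck capacity
        while v != s:
            u, i = prev[v]; b = min(b, g[u][i][1]); v = u
        v = t                                      # augment along the path
        while v != s:
            u, i = prev[v]
            g[u][i][1] -= b
            g[g[u][i][0]][g[u][i][3]][1] += b
            cost += b * g[u][i][2]
            v = u
        flow += b

# Toy FNRT-style network (nodes: 0=s, 1=J1, 2=J2, 3=W1, 4=W2, 5=d).
edges = [
    (0, 1, 1, 0), (0, 2, 1, 0),    # s -> jobs, capacity c_i = 1
    (1, 3, 1, 2), (1, 4, 1, 4),    # J1 -> W1/W2: costs 2*k (steep slope)
    (2, 3, 1, 1), (2, 4, 1, 2),    # J2 -> W1/W2: costs 1*k (gentle slope)
    (3, 5, 1, 0), (4, 5, 1, 0),    # windows -> d, capacity c(W_k) = 1
]
flow, cost = min_cost_flow(6, edges, 0, 5)
print(flow, cost)  # 2 4
```

The minimum-cost solution routes J1 through W1 and J2 through W2 (total cost 2 + 2 = 4), whereas the swapped assignment would cost 4 + 1 = 5: the steeper cost slope wins the earlier window.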

For example, assume that we try to generate an EDF-like schedule for the given AJA(t). Among all edges directed to node W_1, to prioritize the task execution in EDF order, we assign a lower cost to the edges whose source node J_i has the earlier deadline; e.g., the costs 1, …, N_J are assigned to all e(J_i, W_1) in EDF order:

    e(J_i, W_1, L_1, i),  for the i-th active job in EDF order,    (29)

where e(x, y, c, w) denotes that the edge has its capacity c and cost w.

In the following windows, to make the schedule work-conserving, increasingly larger costs are assigned to the edges, i.e., the cost N_J + k is assigned to e(J_i, W_k) where k ≥ 2:

    e(J_i, W_k, L_k, N_J + k),  for k ≥ 2.    (30)
To implement this scheduling algorithm, line 8 in Algorithm 1 should be updated with the previous two Equations 29 and 30, which yields Algorithm 2. We call this flow network-based EDF (fn-EDF).

Figure 8: Flow control

Figure 8 shows the flow network for fn-EDF that is extended from Figure 7.

To analytically describe the cost assignment, we introduce the notion of a cost-slope of J_i at W_k, defined as the cost increase incurred when one unit of flow of J_i is delayed from W_k to W_{k+1}, i.e., s_i(k) = w(J_i, W_{k+1}) − w(J_i, W_k). Assume that the costs 1, …, N_J are assigned to all e(J_i, W_1) in EDF order and that the cost N_J + k is assigned to all e(J_i, W_k) with k ≥ 2. Then, the earliest-deadline task node has the highest cost-slope at W_1. We prove that a steeper cost-slope imposes a higher priority.

Theorem 2.2

When two job nodes J_i and J_j are connected to W_k and W_{k+1} with edges e(J_i, W_k), e(J_i, W_{k+1}), e(J_j, W_k), e(J_j, W_{k+1}) in the flow network, if s_i(k) > s_j(k) at W_k, then the flow network gives higher priority to the flow from J_i to be sent to W_k than to that from J_j.


Assume that one time unit (or flow) is available within W_k and that J_i and J_j compete for it. In the case where J_j occupies the one time unit in W_k and J_i is delayed to take one time unit in W_{k+1}, we assume that the total flow cost is c(F*), where F* is the set of actual flows on all edges. If the units of J_i and J_j are swapped between W_k and W_{k+1}, the new cost becomes c(F*) − s_i(k) + s_j(k). Since s_i(k) > s_j(k), the new cost is smaller than c(F*). Therefore, MCFP algorithms prefer to assign the available single time unit in W_k to J_i rather than to J_j in order to minimize the total cost. All available time units are allocated in the same manner; thus, the flow from J_i achieves a higher priority of being sent to W_k than that from J_j.
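The exchange argument can be checked numerically; the edge costs below are assumed values giving s_i(k) = 3 > s_j(k) = 2.

```python
# Assumed costs w(J, W_k) for two competing jobs over windows k = 1, 2.
w_i = {1: 1, 2: 4}   # cost-slope s_i(1) = 4 - 1 = 3 (earlier deadline)
w_j = {1: 1, 2: 3}   # cost-slope s_j(1) = 3 - 1 = 2

# j takes the unit in W_1, i is delayed to W_2:
cost_j_first = w_j[1] + w_i[2]   # 1 + 4 = 5
# i takes the unit in W_1, j is delayed to W_2 (the swap):
cost_i_first = w_i[1] + w_j[2]   # 1 + 3 = 4

# The swap lowers the total cost by s_i(1) - s_j(1) = 1,
# so a minimum-cost flow prefers the steeper-slope job in the early window.
assert cost_i_first < cost_j_first
```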

input :  the current time t and the task set Γ
Data: J is the active job set sorted by EDF
Data: the i-th job J_i in J satisfies d_i ≤ d_{i+1}
1   B = ObtainAllBoundaries(t, Γ)
2   C = ComputeWindowCapacities(t, B)
3   J = sortByEDF(J(t)) /* the i-th job J_i in J satisfies d_i ≤ d_{i+1} */
4   Add the source s and the sink d to F
5   Add e(W_k, d, c(W_k)) to F for every window W_k
6   for each active job J_i in J do
7       for each window W_k do
8           if b_k ≤ d_i then
9               if k = 1 then
10                  Add e(J_i, W_1, L_1, i) to F
11              else
12                  Add e(J_i, W_k, L_k, N_J + k) to F
13          if b_k = d_i then
14              break
15      end
16      Add e(s, J_i, c_i) to F
17  solve(F) return {e_{i,1}}
Algorithm 2 Schedule(t, Γ) for flow control

Note that prioritization based on the cost-slopes differs from traditional prioritization in the real-time scheduling context. The assignment of traditional priorities to tasks may result in deadline misses, whereas prioritization using the cost-slopes never attenuates the schedulability ensured by the flow network model. Thus, we can call this a weak priority if a distinction is needed.

We also need to consider the maximum magnitude of the costs, because the computational complexity of a certain class of MCFP algorithms depends on the costs. To generate the EDF-like schedule, costs up to N_J + N_W are used. Since N_J + N_W is proportional to n, the maximum magnitude of the costs is proportional to n. We emphasize that different cost assignments are also possible for various scheduling purposes.

For line 17 in Algorithm 2, any minimum cost flow algorithm can be used. In terms of complexity, the algorithms are also categorized into two groups, i.e., weakly polynomial and strongly polynomial algorithms Goldberg & Tarjan (Feb. 2015). The complexity of the weakly polynomial algorithms is upper-bounded by a combination of |V|, |E|, the largest cost, and (or) the largest capacity of all edges. We restrict the largest cost to O(n) for our scheduling purposes, e.g., for prioritizing task execution. Therefore, not only the strongly polynomial algorithms but also the polynomial algorithms bounded by a combination of |V|, |E|, and the largest cost C are of interest to us. Goldberg et al. introduced a cost-scaling algorithm with dynamic trees having complexity O(|V||E| log(|V|^2/|E|) log(|V|C)) Goldberg & Tarjan (1987). Orlin introduced an enhanced capacity-scaling algorithm having complexity O(|E| log |V| · S(|V|, |E|)), where S denotes the time complexity of solving the single-source shortest path problem Orlin (1993). Dijkstra's algorithm with Fibonacci heaps is known to provide an O(|E| + |V| log |V|) bound for S M. L. Fredman (Jul. 1987). In our context, the complexity of the former algorithm is O(n^3 log n) and that of the latter is O(n^4 log n).

2.6 fn-EDF on the discrete-time model

One of our assumptions in this study is that the task parameters, C_i and T_i, are multiples of the system time unit, i.e., integers. Despite this assumption, manipulating the tasks for scheduling can generate non-integral numbers, which leads to two issues that we must consider.

First, several maximum flow and minimum cost flow algorithms assume that the parameters of the flow networks are integers. However, our flow network models permit non-integer network parameters. For example, in Algorithm 1, the edges may have non-integral c(W_k), which is calculated using the utilizations u_i of several tasks. Nevertheless, since the u_i's are rational numbers, this issue can be easily solved by linearly scaling all of the rational numbers in the flow network up to appropriate integers. It does not raise any concern about increased complexity if a strongly polynomial algorithm is used. This simple technique is sufficient for scheduling tasks on the continuous-time model, where non-integral units of time are allowed to be allocated for task execution.
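The scaling step can be sketched as follows (the fractional window capacities are assumed values for illustration): multiplying every rational parameter by the least common multiple of the denominators yields an equivalent all-integer network.

```python
from fractions import Fraction
from math import lcm  # math.lcm accepts multiple arguments (Python >= 3.9)

# Assumed fractional window capacities c(W_k) produced by fluid reservations.
caps = [Fraction(6, 1), Fraction(4, 1), Fraction(7, 3)]

# Scale by the lcm of all denominators so integral flow algorithms apply.
scale = lcm(*(c.denominator for c in caps))
ints = [int(c * scale) for c in caps]
print(scale, ints)  # 3 [18, 12, 7]
```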

Second, on the discrete-time model, allocating non-integral units of time for task execution is not allowed. To run tasks on the discrete-time model, all e_{i,k}'s of the flow network solution should be integers.

RT-optimal scheduling on the discrete-time model has been studied by several researchers, and BF algorithms were proposed recently Nelissen et al. (Jul. 2014). To ensure RT-optimality, BF algorithms are designed to keep their deviation from the fluid schedule to less than one time unit at every boundary. Informally, when the fluid schedule yields a non-integral execution time between two adjacent boundaries for a task, the algorithm divides the non-integral execution time into mandatory and optional executions, where the mandatory execution time is the integral part (floor) of the fluid execution time and the optional execution time is the remaining fractional part. The optional execution times for all tasks are accumulated and additionally allocated to the tasks based on specific criteria.
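The mandatory/optional split described above can be sketched as follows (the fluid execution times are illustrative values, not taken from the paper):

```python
from math import floor
from fractions import Fraction

def bf_split(fluid_times):
    """Boundary-fair-style split of each fluid execution time into an
    integral mandatory part and a fractional optional part."""
    mandatory = [floor(f) for f in fluid_times]
    optional = [f - m for f, m in zip(fluid_times, mandatory)]
    return mandatory, optional

# Fluid execution times u_i * L between two boundaries (assumed values).
fluid = [Fraction(2, 3) * 3, Fraction(1, 3) * 3, Fraction(1, 3) * 4]
mandatory, optional = bf_split(fluid)
print(mandatory)   # [2, 1, 1]  -- the integral, directly schedulable parts
# The optional parts (here 0, 0, 1/3) are pooled and redistributed
# by the BF algorithm's tie-breaking criteria.
assert sum(fluid) == sum(mandatory) + sum(optional)
```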

In Algorithms 1 and 2, is calculated based on the fluid schedule of each task. For the discrete-time model, we instead use the BF schedule to determine