1 Introduction
When real-time tasks suspend themselves (due to blocking I/O, lock contention, etc.), they defer a part of their execution to be processed at a later time. A consequence of such deferred execution is a potential interference penalty for lower-priority tasks [LSS:87, LSST:91, Ra:90, ABRTW:93, SLS:95, WC16suspendDATE, ecrts15nelissen]. This penalty, which is maximized when a task defers the completion of one job just until the release of the next job, can manifest as response-time increases and thus may lead to deadline misses.
To avoid such detrimental effects, Rajkumar [Raj:suspension1991] proposed the period enforcer algorithm, a technique to control (or shape) the processor demand of self-suspending tasks on uniprocessors and partitioned multiprocessors under preemptive fixed-priority scheduling. In a nutshell, the period enforcer algorithm artificially increases the length of certain suspensions whenever a task’s activation pattern carries the risk of inducing undue interference in lower-priority tasks.
The period enforcer algorithm is worth a second look for a number of reasons. First, in the words of Rajkumar, it “forces tasks to behave like ideal periodic tasks from the scheduling point of view with no associated scheduling penalties” [Raj:suspension1991], which is obviously highly desirable in many practical applications in which self-suspensions are inevitable (e.g., when offloading computations to coprocessors such as GPUs or DSPs). Second, the later-proposed, but more widely known release guard algorithm [SL:96] uses a technique quite similar to period enforcement to control scheduling penalties due to release jitter in distributed systems. The period enforcer algorithm has also attracted renewed attention in recent years and has been discussed in several current works (e.g., [DBLP:conf/rtss/ChenL14, LNR:09, LR:10, Lak:11, LC:14, KANR:13, HY:11, CA:09, CA:10, CA:10b]), at times controversially [BA:08a]. And last but not least, the period enforcer algorithm plays a significant role in Rajkumar’s seminal book on real-time synchronization [Raj:91].
In this note, we revisit the period enforcer [Raj:suspension1991] to carefully reexamine and explain its underlying assumptions and limitations, and to point out potential misconceptions. The main contributions are three observations that, to the best of our knowledge, have not been previously reported in the literature on real-time systems:

period enforcement can be a cause of deadline misses in self-suspending task sets that are otherwise schedulable (Section 3);

to match the assumptions underlying the analysis of the period enforcer, a schedulability analysis of self-suspending tasks subject to period enforcement requires a task set transformation for which no solution is known in the general case, and which is subject to exponential time complexity (with current techniques) in the limited case of a single self-suspending task (Section 4); and

the period enforcer algorithm is incompatible with all existing analyses of suspension-based locking protocols, and can in fact cause ever-increasing suspension times until a deadline is missed (Section 5).
2 Preliminaries
The period enforcer algorithm [Raj:suspension1991] applies to self-suspending tasks on uniprocessors under fixed-priority scheduling, and hence by extension also to multiprocessors under partitioned fixed-priority scheduling (where tasks are statically assigned to processors and each processor is scheduled as a uniprocessor). In this section, we review the underlying task model (Section 2.1), introduce the period enforcer algorithm (Section 2.2), summarize its analysis (Section 2.3), and finally restate our observations in more precise terms (Section 2.4).
2.1 Task Models
Since the analysis of the period enforcer requires reasoning about different task models and their relationships, we carefully introduce and precisely define the relevant models in this section.
2.1.1 Periodic Tasks
The most basic and best understood task model is the periodic task model due to Liu and Layland [LL:73]. In this model, each task $\tau_i$ is characterized as a tuple $(C_i, T_i)$, where $C_i$ denotes an upper bound on the total execution time of any job of $\tau_i$ and $T_i$ denotes the (exact) inter-arrival time (or period) of $\tau_i$. Each such periodic task releases a job at time 0, and periodically every $T_i$ time units thereafter. Each job must finish by the time the next one arrives. Importantly, Liu and Layland assume both that the $j$-th job of $\tau_i$ arrives exactly at time $(j-1) \cdot T_i$, and that an incomplete job is always available for execution (i.e., jobs never block on I/O or locks).
A straightforward generalization of the periodic task model is to introduce an explicit relative deadline parameter $D_i$. In this case, each task is represented by a three-tuple $(C_i, D_i, T_i)$, with the interpretation that every job of $\tau_i$ must finish within $D_i$ time units after its release. Task $\tau_i$ is said to have an implicit deadline if $D_i = T_i$, a constrained deadline if $D_i \le T_i$, and an arbitrary deadline otherwise. We primarily consider implicit deadlines in this note.
2.1.2 Sporadic Tasks
Mok [Mo:83] introduced the sporadic task model, a widely used generalization of the periodic task model in which each task $\tau_i$ is still specified by a tuple $(C_i, T_i)$. However, the sporadic task model relaxes the inter-arrival constraint: $T_i$ specifies a minimum (rather than an exact) separation between jobs. Under this interpretation, the first job is not necessarily released at time 0, and the exact release times of future jobs cannot be predicted, which is an appropriate modeling assumption for event-triggered tasks.
On uniprocessors, the relaxation from periodic to sporadic job arrivals does not introduce additional pessimism (assuming that all periodic tasks synchronously release a job at time zero): since any two jobs of a sporadic task are known to be released at least $T_i$ time units apart, the sporadic task model [Mo:83] still allows for schedulability analysis that is as accurate as Liu and Layland’s analysis of periodic tasks [LL:73].
Mok retained the assumption that incomplete jobs are always ready for execution (i.e., no suspensions), and that jobs, once released, are immediately available for execution.
2.1.3 Release Jitter
The latter assumption — immediate availability for execution — is inappropriate in many practical systems (especially in networked systems) if events (e.g., messages) that trigger job releases can incur non-negligible delays (e.g., network congestion). Such delays in task activation can be accounted for by introducing a notion of release jitter. To this end, each task $\tau_i$ is represented by a four-tuple $(C_i, D_i, T_i, J_i)$, where the parameter $J_i$ is a bound on the maximum time that a job remains unavailable for execution after it should have started to run. Release jitter can be incorporated in both the periodic and the sporadic task models.
In the presence of release jitter, the terms “job arrival” and “job release,” which are often used interchangeably, take on distinct meanings: a job’s arrival time denotes the point in time when it actually becomes available for execution, whereas a job’s release time is the instant that is relevant for the (minimum) inter-arrival time constraint. Any job of task $\tau_i$ arrives at most $J_i$ time units after it is released.
Notably, nonzero release jitter does cause additional pessimism: in the worst case, two consecutive jobs of a task $\tau_i$ can be separated by as little as $T_i - J_i$ time units (if the earlier job incurs maximum release jitter and the successor job incurs none). As a result, a task may “carry in” some additional work into a given interval. Taking this effect into account, Audsley et al. [ABRTW:93] developed a response-time analysis for sporadic and periodic constrained-deadline tasks subject to release jitter under preemptive fixed-priority scheduling.
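The jitter-aware response-time recurrence can be sketched as a simple fixed-point iteration (a sketch with our own task representation and function name; we assume tasks are indexed by decreasing priority and that the task set is schedulable enough for the iteration to converge):

```python
import math

def response_time(i, tasks):
    """Worst-case response time of tasks[i] under preemptive fixed-priority
    scheduling with release jitter, following the recurrence
        w = C_i + sum over higher-priority j of ceil((w + J_j) / T_j) * C_j,
    with the response time given by R_i = w + J_i.
    Tasks are dicts with keys C (WCET), T (period), J (release jitter),
    ordered by decreasing priority."""
    C, J = tasks[i]['C'], tasks[i]['J']
    w = C  # initial guess: the task's own execution time
    while True:
        demand = C + sum(math.ceil((w + hp['J']) / hp['T']) * hp['C']
                         for hp in tasks[:i])
        if demand == w:    # fixed point reached
            return w + J   # jitter delays completion relative to the release
        w = demand

# Illustrative three-task example (parameters are our own):
tasks = [
    {'C': 1, 'T': 4,  'J': 0},
    {'C': 1, 'T': 5,  'J': 1},
    {'C': 2, 'T': 10, 'J': 0},
]
```

For these hypothetical parameters, the lowest-priority task has a busy-window length of 4 and, having no jitter of its own, a response time of 4.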
However, even in the presence of release jitter, a key assumption remains that jobs do not self-suspend (e.g., wait for I/O). (Audsley et al. [ABRTW:93] do present a response-time analysis that takes into account a limited form of suspensions due to semaphores, i.e., “blocking”; however, that analysis does not apply to general self-suspensions — the kind of self-suspensions targeted by the period enforcer algorithm — and is not relevant in the context of this paper.) That is, Audsley et al. [ABRTW:93] assume that, once a job has arrived, it continuously remains available for dispatching until it completes. This restriction is removed next.
2.1.4 Self-Suspending Tasks
When a job self-suspends, it becomes unavailable for execution until some external event occurs (e.g., a disk I/O operation completes, a network packet arrives, a coprocessor signals completion, etc.). This has the effect of deferring (a part of) the job’s processing requirement until the time that it resumes from its suspension, which causes massive analytical difficulties [LSS:87, LSST:91, Ra:90, ABRTW:93, SLS:95, WC16suspendDATE, ecrts15nelissen, Ri:04, Raj:suspension1991, Chen2016].
To date, the real-time literature on self-suspensions has focused on two task models: the dynamic self-suspension model, which we discuss first, and the (multi-)segmented suspension model, which we discuss next in Section 2.1.5. Self-suspensions can arise in both periodic and sporadic tasks (i.e., both interpretations of the $T_i$ parameter are possible). The observations that we make in this note apply equally to both periodic and sporadic tasks; for convenience, we focus primarily on periodic tasks.
The dynamic self-suspending task model characterizes each task $\tau_i$ as a four-tuple $(C_i, S_i, D_i, T_i)$: the parameters $C_i$, $D_i$, and $T_i$ have their usual meaning (i.e., as in the periodic and sporadic task models), and $S_i$ denotes an upper bound on the total self-suspension time of any job of $\tau_i$. The dynamic self-suspension model does not impose a bound on the maximum number of self-suspensions, nor does it make any assumptions as to where during a job’s execution self-suspensions occur. That is, how often a job defers its execution, when it does so, and how much of its execution it defers may vary unpredictably from job to job.
Allowing tasks to self-suspend can impose substantial scheduling penalties (an example is provided shortly in Section 2.2) and greatly complicates schedulability analysis (e.g., see [ecrts15nelissen, Ri:04, Chen2016]). In particular, release jitter and self-suspensions are not interchangeable concepts, and it is not safe [Chen2016, ecrts15nelissen] to simply substitute $J_i$ with $S_i$ in Audsley et al.’s analysis [ABRTW:93]. (Nonetheless, under the dynamic suspension model, it is possible for jobs of self-suspending tasks to defer their entire execution requirement, so self-suspensions can be seen as a generalization of release jitter.)
The period enforcer algorithm aims to mitigate the negative effects of self-suspensions. However, for reasons that will be explained in Section 2.2.4, the period enforcer algorithm cannot be meaningfully combined with the dynamic suspension model. Instead, it requires the segmented suspension model, which we discuss next.
2.1.5 Segmented Self-Suspending Tasks
The (multi-)segmented self-suspending sporadic task model extends the four-tuple model by characterizing each self-suspending task $\tau_i$ as a fixed, finite linear sequence of computation and suspension intervals. These intervals are represented as a tuple $(C_i^1, S_i^1, C_i^2, S_i^2, \ldots, S_i^{m_i - 1}, C_i^{m_i})$, which is composed of $m_i$ computation segments separated by $m_i - 1$ suspension intervals.
The first self-suspension segment $S_i^0$, prior to the first execution segment, is equivalent to release jitter (i.e., the $J_i$ parameter in Section 2.1.3). However, in much of the literature on the segmented self-suspending task model, the segment $S_i^0$ is assumed to be absent (i.e., $S_i^0 = 0$), such that there are only $m_i - 1$ suspension intervals (and jobs arrive jitter-free). Unless noted otherwise, we adopt this convention.
We say that a segment arrives when it becomes available for execution. The first computation segment arrives immediately when the job is released (unless $S_i^0 > 0$); the second computation segment (if any) arrives when the job resumes from its first self-suspension, and so on.
The advantage of the dynamic model (Section 2.1.4) is its flexibility: it does not impose any assumptions on a task’s control flow. The advantage of the segmented model is that it allows for more accurate analysis. The period enforcer algorithm and its analysis [Raj:suspension1991] apply (only) to the segmented model, as explained in Sections 2.2.4 and 2.3.
A note on terminology: for the sake of consistency with the recent literature on self-suspensions in real-time systems, we favor the term “segmented self-suspending tasks” to refer to tasks under the just-introduced model. However, Rajkumar’s original description of the period enforcer [Raj:suspension1991] refers to such tasks as deferrable tasks, as it predates the widespread adoption of the former term. We use both terms interchangeably in this paper.
2.1.6 Single-Segment Self-Suspending (aka Deferrable) Tasks
An important special case is that of segmented self-suspending tasks with exactly one self-suspension interval followed by exactly one computation segment (i.e., $m_i = 1$ and $S_i^0 \ge 0$), which we refer to as single-segment self-suspending tasks. This special case is central to Rajkumar’s original analysis of the period enforcer [Raj:suspension1991], as we will explain in Section 2.3. Regarding terminology, Rajkumar [Raj:suspension1991] does not use a special term for single-segment self-suspending tasks, simply referring to them as deferrable tasks. To avoid ambiguity, we instead explicitly mention the “single-segment” qualifier.
Note also that single-segment self-suspending sporadic tasks, which are “suspended” only prior to commencing execution, are analytically fully equivalent to sporadic tasks subject to release jitter (i.e., the model described in Section 2.1.3). We nonetheless use the term “single-segment self-suspending task,” or interchangeably “single-segment deferrable task,” to remain close to Rajkumar’s original description [Raj:suspension1991], and to highlight the connection to the (multi-)segmented self-suspending task model (Section 2.1.5).
This concludes our review of relevant task models. Before reviewing the period enforcer and its original analysis, we briefly introduce some essential concepts.
2.1.7 Assumptions, Busy Periods, and Task Set Transformations
We focus exclusively on preemptive fixed-priority scheduling in this note, as the period enforcer is explicitly designed for this setting. For simplicity, we assume that tasks are indexed in order of decreasing priority (i.e., $\tau_1$ is the highest-priority task).
A key concept in the period enforcer’s runtime rules (discussed next) is the notion of a level-$i$ busy interval, which is a maximal interval during which the processor executes only segments of tasks with priority $i$ or higher.
Finally, Rajkumar’s original analysis [Raj:suspension1991] of the period enforcer is rooted in the concept of a task set transformation. In general, such a task set transformation is simply a function $f$ that maps a given task set $\tau$ to a transformed task set $\tau' = f(\tau)$ such that $\tau'$ is schedulable only if the original task set $\tau$ is schedulable, too. The basic idea is that such a transformation allows schedulability analysis by reduction: given a suitable transformation $f$, $\tau$ can be indirectly shown to be schedulable by computing $\tau' = f(\tau)$ and establishing that $\tau'$ is schedulable.
Importantly, the tasks in $\tau$ and $\tau'$ do not have to belong to the same task model, nor does the number of tasks have to remain the same (i.e., $|\tau| \neq |\tau'|$ is possible). Specifically, the task set transformation underlying the analysis of the period enforcer maps each multi-segmented self-suspending task $\tau_i \in \tau$ to $m_i$ single-segment self-suspending tasks in $\tau'$ (i.e., $|\tau'| = \sum_{\tau_i \in \tau} m_i$).
With these definitions in place, we can now introduce the period enforcer.
2.2 The Period Enforcer Algorithm
The period enforcer consists of two parts: a runtime rule that governs when each segment of a self-suspending task may be scheduled, and an (offline) analysis that may be used to assess the temporal correctness of a set of self-suspending tasks (Section 2.1.5) subject to period enforcement. Initially, we focus on the runtime rule (i.e., the actual period enforcer algorithm) and then review the corresponding original analysis in Section 2.3. We begin with a simple example that highlights the effect that the period enforcer is designed to control.
2.2.1 The Problem: Back-to-Back Execution
The scheduling penalty associated with self-suspensions is maximized when a task defers the completion of one job just until the release of the next job. This effect is illustrated in Figure 1, which shows a case in which the self-suspension of the higher-priority task $\tau_2$ from time 1 until time 5 results in a deadline miss of the lower-priority task at time 15.
The root cause is increased interference due to the “back-to-back” execution effect [LSS:87, LSST:91, Ra:90, ABRTW:93, SLS:95]. In the example shown in Figure 1, two jobs of $\tau_2$ execute in close succession (i.e., separated by less than a period) because the second job, released at time 10, self-suspended for a (much) shorter duration than the first job. Consequently, the lower-priority task suffers from increased interference when $\tau_2$’s second job resumes “too soon” at time 12, after having been suspended for only one time unit, rather than four time units like the first job of $\tau_2$.
2.2.2 The Period Enforcement Rule
The key idea underlying the period enforcer algorithm is to artificially delay the execution of computation segments if a job resumes “too soon.” To this end, the period enforcer determines for each computation segment an eligibility time. If a segment resumes before its eligibility time, the execution of the segment is delayed until the eligibility time is reached.
A segment’s eligibility time is determined according to the following rule. Let $ET_{i,j}^k$ denote the eligibility time of the $k$-th computation segment of the $j$-th job of task $\tau_i$. Further, let $a_{i,j}^k$ denote the segment’s arrival time. Finally, let $bp(t)$ denote the last time that a level-$i$ busy interval begins on or prior to time $t$ (i.e., the processor executes only $\tau_i$ or higher-priority tasks throughout the interval $[bp(t), t]$). The period enforcer algorithm defines the segment eligibility time of the $k$-th segment as

$ET_{i,j}^k = \max\bigl(ET_{i,j-1}^k + T_i,\; bp(a_{i,j}^k)\bigr) \qquad (1)$

where $ET_{i,0}^k = -\infty$ [Raj:suspension1991, Section 3.1]. This simple and elegant rule has the desirable effect of avoiding all back-to-back execution, which can be easily observed with an example.
2.2.3 Example: Avoiding Back-to-Back Execution
Figure 2 illustrates how the definition of eligibility time in Equation (1) restores the schedulability of the task set depicted in Figure 1. Consider the eligibility times of the second segment of task $\tau_2$.
By definition, $ET_{2,0}^2 = -\infty$. At time 5, when the second computation segment of the first job resumes (i.e., $a_{2,1}^2 = 5$), we thus have

$ET_{2,1}^2 = \max\bigl(ET_{2,0}^2 + T_2,\; bp(5)\bigr) = bp(5) = 5$

since the arrival of $\tau_2$’s second segment (and the release of a higher-priority job) starts a new level-2 busy interval at time $bp(5) = 5$. The second segment of $\tau_2$’s first job is hence immediately eligible to execute; however, due to the presence of a pending higher-priority job, it is not actually scheduled until time 8 (just as without period enforcement, as depicted in Figure 1).
The second segment of the second job of $\tau_2$ arrives at time $a_{2,2}^2 = 12$. In this case, the segment is not immediately eligible to execute since

$ET_{2,2}^2 = \max\bigl(ET_{2,1}^2 + T_2,\; bp(12)\bigr) = \max(5 + 10, 12) = 15.$

Hence, the execution of $\tau_2$’s second computation segment does not start until time 15, which gives the lower-priority task sufficient time to finish before its deadline at time 15.
The examples in Figures 1 and 2 suggest an intuition for the benefits provided by period enforcement: the computation segments of a self-suspending task $\tau_i$ are forced to execute at least $T_i$ time units apart (hence the name), which ensures that $\tau_i$ causes no more interference than a regular (non-self-suspending) sporadic task.
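The per-segment recurrence of Equation (1) can be sketched in a few lines (a sketch with our own naming; for simplicity, the start of the relevant busy interval is supplied as a function, and in the scenario of Figure 2 the processor is idle immediately before both arrivals of the second segment, so $bp(a) = a$):

```python
def eligibility_times(arrivals, T, bp):
    """Eligibility times of the successive instances of one computation
    segment, per Equation (1): ET_j = max(ET_{j-1} + T, bp(a_j)),
    with ET_0 = -infinity."""
    times, prev = [], float('-inf')
    for a in arrivals:
        prev = max(prev + T, bp(a))
        times.append(prev)
    return times

# Figure 2: the second segment of tau_2 (T_2 = 10) resumes at times 5 and 12;
# a new level-2 busy interval starts at each arrival, so bp(a) = a.
print(eligibility_times([5, 12], 10, bp=lambda a: a))  # [5, 15]
```

The second instance is shifted from its arrival at time 12 to time 5 + 10 = 15, exactly the enforcement delay shown in the example.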
2.2.4 Incompatibility with the Dynamic Self-Suspension Model
Before reviewing the classic analysis based on this intuition, we briefly comment on the difficulty of combining period enforcement with the dynamic self-suspension model (Section 2.1.4).
In short, to be effective, the period enforcer fundamentally requires the segmented self-suspension model (Section 2.1.5) because it cannot cope with the unpredictable execution times between (the unpredictably many) self-suspensions that jobs may exhibit under the dynamic self-suspension model.
A simple example can explain why the period enforcer algorithm is not compatible with the dynamic self-suspending task model. Consider a trivial system that has only one task $\tau_1$ with a total execution time $C_1 = 2$, a total self-suspension length $S_1 = 1$, and a period and relative deadline of $T_1 = D_1 = 3$. Suppose the first job of task $\tau_1$ arrives at time 0, suspends itself for one time unit, and then executes for one time unit. Further suppose the second job of task $\tau_1$ arrives at time 3, first executes for one time unit, then suspends for one time unit, and finally executes for one time unit. With the period enforcer algorithm in place, the second job of task $\tau_1$ starts its execution at time 4, at which point it will clearly miss its deadline at time 6.
In this example, the problem is that the eligibility time of the first computation “segment” of the second job is determined by the self-suspension pattern of the first job, even though the first job deferred all of its execution, whereas the second job deferred only a part of its execution. Under the more restrictive segmented self-suspension model (Section 2.1.5), the pattern of self-suspension and computation times is statically fixed; such a mismatch is hence not possible.
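The failure can be traced directly through the eligibility rule (a small worked check; the concrete parameters $C = 2$, $S = 1$, $T = D = 3$ are an illustrative assumption on our part, and the busy-interval values are simplified to what they take in this scenario):

```python
# Eligibility of the first computation "segment" of each job in the
# single-task example. Parameters (C = 2, S = 1, T = D = 3) are assumed.
T = 3
NEG_INF = float('-inf')

# Job 1 defers all of its execution: it suspends during [0, 1) and its only
# computation "segment" arrives at time 1, starting a busy interval (bp = 1).
et_job1 = max(NEG_INF + T, 1)    # = 1

# Job 2 is released at time T = 3 and wants to execute immediately, but the
# processor is idle just before time 3, so bp(3) = 3, and the enforcer yields:
et_job2 = max(et_job1 + T, 3)    # = 4: execution cannot start before time 4

# Job 2 still needs 2 units of execution and 1 unit of suspension, so it
# cannot finish before time 4 + 3 = 7, past its deadline at time 2 * T = 6.
finish_lower_bound = et_job2 + 2 + 1
assert et_job2 == 4
assert finish_lower_bound > 2 * T   # deadline miss
```

The mismatch is visible in the first line: `et_job2` is driven by job 1's suspension pattern, even though job 2 defers only part of its execution.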
Next, we revisit the original analysis of the period enforcer algorithm.
2.3 Classic Analysis of the Period Enforcer Algorithm
The central notion in Rajkumar’s analysis [Raj:suspension1991] is that of a deferrable task, which matches our notion of segmented tasks, as already discussed in Section 2.1.5. Specifically, Rajkumar states that:
“With deferred execution, a task $\tau_i$ can execute its $C_i$ units of execution in discrete amounts $C_i^1, C_i^2, \ldots$, with suspension in between $C_i^j$ and $C_i^{j+1}$.” [Raj:suspension1991, Section 3] (The notation has been altered here for the sake of consistency.)
Central to Rajkumar’s analysis [Raj:suspension1991] is a task set transformation (recall Section 2.1.7) that splits each deferrable task with multiple segments (Section 2.1.5) into a corresponding number of single-segment deferrable tasks (Section 2.1.6). In the words of Rajkumar [Raj:suspension1991, Section 3]:
“Without any loss of generality, we shall assume that a task can defer its entire execution time but not parts of it. That is, a task executes for $C_i$ units with no suspensions once it begins execution. Any task that does suspend after it executes for a while can be considered to be two or more tasks, each with its own worst-case execution time. The only difference is that if a task $\tau_i$ is split into two tasks $\tau_{i_1}$ followed by $\tau_{i_2}$, then $\tau_{i_2}$ has the same deadlines as $\tau_i$.”
In other words, the transformation can be understood as splitting each self-suspending task into a matching number of single-segment deferrable tasks (Section 2.1.6), which are equivalent to non-self-suspending sporadic tasks subject to release jitter (Section 2.1.3), and which can thus be easily analyzed with classic fixed-priority response-time analysis [ABRTW:93]. To constitute an effective schedulability analysis, the transformation must ensure that, if the transformed set of single-segment deferrable tasks can be shown to be schedulable (e.g., with response-time analysis [ABRTW:93]), then the original set of multi-segment deferrable tasks is also schedulable under period enforcement.
To summarize, as illustrated in Figure 1, uncontrolled deferred execution can impose increased interference on lower-priority tasks because of the potential for “back-to-back” execution [LSS:87, LSST:91, Ra:90, ABRTW:93, SLS:95]. The purpose of the period enforcer algorithm is to reduce such penalties for lower-priority tasks without detrimentally affecting the schedulability of self-suspending, higher-priority tasks. The latter aspect — no detrimental effects for self-suspending tasks — is captured concisely by Theorem 5 in the original analysis of the period enforcer algorithm [Raj:suspension1991].
Theorem 5: A [single-segment] deferrable task that is schedulable under its worst-case conditions is also schedulable under the period enforcer algorithm [Raj:suspension1991].
The “worst-case conditions” mentioned in the theorem simply correspond to the case in which (i) a job of a single-segment deferrable task defers its execution for the maximally allowed time (i.e., it incurs maximal release jitter) and (ii) it incurs maximum higher-priority interference (i.e., its start of execution coincides with a critical instant [LL:73]).
2.4 Questions Answered in This Paper
Theorem 5 (in [Raj:suspension1991]) is a strong result: it implies that the period enforcer does not induce any deadline misses. This seemingly enables a powerful analysis approach: if the corresponding transformed set of single-segment deferrable tasks can be shown to be schedulable without period enforcement under fixed-priority scheduling using any applicable analysis (e.g., [ABRTW:93]), then the period enforcer algorithm also yields a correct schedule.
However, recall that, in the original analysis [Raj:suspension1991], deferrable tasks are assumed to defer their execution either completely or not at all (but never parts of it). It is hence important to realize that Theorem 5 in [Raj:suspension1991] applies only to the transformed set of single-segment deferrable tasks, and that it does not apply to the original set of multi-segmented self-suspending tasks.
This leads to the first question: if the original set of segmented self-suspending tasks is schedulable without period enforcement, is it then also schedulable under period enforcement? That is, can Theorem 5 (in [Raj:suspension1991]) be generalized to multi-segmented self-suspending tasks? In Section 3, we answer this question in the negative.

There exist sets of segmented self-suspending tasks that are schedulable under fixed-priority scheduling without any enforcement, but that are infeasible under period enforcement. This shows that Theorem 5 in [Raj:suspension1991] has to be used with care — it may be applied only in the context of the transformed single-segment deferrable task set, but not in the context of the original multi-segmented self-suspending task set.
Therefore, to apply Theorem 5 to conclude that a set of segmented self-suspending tasks remains schedulable despite period enforcement, we first have to answer the task-set transformation question: given a set of segmented self-suspending tasks $\tau$, how do we obtain a corresponding set of single-segment deferrable tasks $\tau'$ such that $\tau'$ is schedulable (without period enforcement) only if $\tau$ is schedulable (with period enforcement)? That is, as discussed in Section 2.3, the classic analysis of the period enforcer [Raj:suspension1991] presumes that it is possible to convert a set of multi-segmented self-suspending tasks into a corresponding set of single-segment deferrable tasks, but it is left undefined in [Raj:suspension1991] how this central step should be accomplished. In Section 4, we make a pertinent observation.

How to derive a single-segment deferrable task set corresponding to a given set of multi-segmented self-suspending tasks is an open problem. Recent findings by Nelissen et al. [ecrts15nelissen] can be applied in a special case, but their method takes exponential time (even in that special case).
Finally, we consider the use of the period enforcer in conjunction with suspension-based multiprocessor locking protocols for partitioned fixed-priority scheduling (such as the MPCP [LNR:09, Ra:90] or the FMLP [BLBA:07, BA:08]). While it is certainly tempting to apply period enforcement with the intention of avoiding the negative effects of deferred execution due to lock contention (as previously suggested elsewhere [Raj:91, Lak:11, LNR:09]), we ask: does existing blocking analysis remain safe when combined with the period enforcer algorithm? In Section 5, we show that this is not the case.

The period enforcer algorithm invalidates all existing blocking analyses for real-time semaphore protocols, as there exist non-trivial feedback cycles between the period enforcer rules and blocking durations.
3 Period Enforcement Can Induce Deadline Misses
In this section, we demonstrate with an example that there exist sets of sporadic segmented self-suspending tasks that both (i) are schedulable without period enforcement and (ii) are not schedulable with period enforcement.
To this end, consider a task system consisting of two tasks. Let $\tau_1$ denote a sporadic task without self-suspensions and parameters $C_1$ and $T_1 = D_1$, and let $\tau_2$ denote a self-suspending task consisting of two segments with parameters $C_2^1$, $S_2^1$, $C_2^2$, and $T_2 = D_2$. Suppose that we use the rate-monotonic priority assignment, i.e., $\tau_1$ has higher priority than $\tau_2$. This task set is schedulable without any enforcement since at most one computation segment of a job of $\tau_2$ can be delayed by $\tau_1$:

if the first segment of a job of $\tau_2$ is interfered with by $\tau_1$, then the second segment resumes at most $C_1 + C_2^1 + S_2^1$ time units after the release of the job, and the response time of task $\tau_2$ is hence at most $C_1 + C_2^1 + S_2^1 + C_2^2 \le D_2$; otherwise,

if the first segment of a job of $\tau_2$ is not interfered with by $\tau_1$, then the second segment resumes at most $C_2^1 + S_2^1$ time units after the release of the job, and hence the response time of task $\tau_2$ is at most $C_2^1 + S_2^1 + C_1 + C_2^2 \le D_2$ even if the second segment is interfered with by $\tau_1$.
Figure 3 depicts an example schedule of the task set assuming periodic job arrivals.
Next, let us consider the same task set under control of the period enforcer algorithm, as defined in Section 2.2. Figure 4 shows the resulting schedule for a periodic release pattern. The first job of task $\tau_2$ (which arrives at time 0) is executed as if there were no period enforcement, since the definition $ET_{2,0}^k = -\infty$ ensures that both segments are immediately eligible. Note that the first segment of $\tau_2$’s first job is delayed due to interference from $\tau_1$. As a result, the second segment of $\tau_2$’s first job does not resume until time $a_{2,1}^2$. Thus, we have

$ET_{2,1}^2 = \max\bigl(ET_{2,0}^2 + T_2,\; bp(a_{2,1}^2)\bigr) = bp(a_{2,1}^2).$
In contrast to the first job, the second job of task $\tau_2$ (which is released at time $T_2$) is affected by period enforcement. The first segment of the second job arrives at time $T_2$, incurs interference from $\tau_1$ for one time unit, and then suspends. The second segment of the second job hence resumes only at time $a_{2,2}^2$. Thus, we have

$ET_{2,2}^2 = \max\bigl(ET_{2,1}^2 + T_2,\; bp(a_{2,2}^2)\bigr) = ET_{2,1}^2 + T_2 > a_{2,2}^2.$
According to the rules of the period enforcer algorithm, the processor therefore remains idle after the first segment suspends, because the second segment is not eligible to execute until time $ET_{2,2}^2$. However, before the second segment becomes eligible, the third job of $\tau_1$ is released. As a result, the second job of $\tau_2$ suffers from additional interference and misses its deadline at time $2 \cdot T_2$.
This example shows that there exist sporadic segmented self-suspending task sets that (i) are schedulable under fixed-priority scheduling without any enforcement, but (ii) are not schedulable under the period enforcer algorithm.
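The effect can also be reproduced mechanically with a small unit-slot simulation. The following sketch is self-contained but does not use the task set of Figures 3 and 4: the parameters below ($\tau_1$ with $C_1 = 1$ released sporadically at times 0 and 22, and $\tau_2$ with segments $(1, S, 2)$ and $T_2 = D_2 = 12$, whose first job suspends for 8 time units and whose second job for only 1) are our own illustrative choice, and only this one concrete arrival sequence is simulated.

```python
# Unit-slot simulation of preemptive fixed-priority scheduling of one
# non-suspending task tau1 and one two-segment self-suspending task tau2,
# with and without period enforcement. All parameters are illustrative.
T2 = 12                       # period and implicit deadline of tau2
TAU1_RELEASES = [0, 22]       # sporadic releases of tau1 (C1 = 1)
TAU2_RELEASES = [0, 12]       # periodic releases of tau2
SUSPENSIONS = [8, 1]          # actual suspension length of each tau2 job
WCET = {1: 1, 2: 2}           # execution times of tau2's two segments
HORIZON = 26

def simulate(enforced):
    """Returns the completion time of each tau2 job."""
    schedule = []                                  # schedule[t]: what ran in slot t
    tau1_left = {r: 1 for r in TAU1_RELEASES}      # remaining demand per tau1 job
    pending = [(r, job, 1) for job, r in enumerate(TAU2_RELEASES)]
    active = {}                                    # (job, k) -> [remaining, eligibility]
    last_et = {}                                   # ET of segment k's previous instance
    completion = {}

    def bp(t):
        # Start of the busy interval containing t (t itself if idle before t).
        s = t
        while s > 0 and schedule[s - 1] is not None:
            s -= 1
        return s

    for t in range(HORIZON):
        for arr in [p for p in pending if p[0] == t]:   # segment arrivals at t
            pending.remove(arr)
            _, job, k = arr
            et = max(last_et.get(k, float('-inf')) + T2, bp(t)) if enforced else t
            last_et[k] = et
            active[(job, k)] = [WCET[k], et]
        ready1 = [r for r, c in tau1_left.items() if r <= t and c > 0]
        if ready1:                                      # tau1 has higher priority
            tau1_left[min(ready1)] -= 1
            schedule.append('tau1')
            continue
        ready2 = sorted(jk for jk, (c, et) in active.items() if c > 0 and t >= et)
        if ready2:
            job, k = ready2[0]
            active[(job, k)][0] -= 1
            schedule.append('tau2')
            if active[(job, k)][0] == 0 and k == 1:     # suspend; segment 2 arrives later
                pending.append((t + 1 + SUSPENSIONS[job], job, 2))
            elif active[(job, k)][0] == 0:
                completion[job] = t + 1
        else:
            schedule.append(None)
    return [completion[j] for j in sorted(completion)]
```

Without enforcement, the two jobs of $\tau_2$ complete by times 12 and 16, meeting their deadlines at 12 and 24. With enforcement, the second job's second segment becomes eligible only at time $10 + T_2 = 22$, where it additionally collides with $\tau_1$'s second job, and completes at time 25, past the deadline at 24.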
One may consider enriching the period enforcer with the following scheduling rule: when the processor becomes idle, a task immediately becomes eligible to execute regardless of its eligibility time. However, even with this extension, the above example remains valid by introducing one additional lower-priority task $\tau_3$ whose execution time $C_3$ and period $T_3 = D_3$ are chosen such that $\tau_3$ occupies exactly the intervals in which the processor would otherwise idle. With task $\tau_3$ in place, the processor is always busy during the relevant interval, and $\tau_2$ consequently still misses its deadline at time $2 \cdot T_2$.
Furthermore, the example also demonstrates that the conversion to single-segment deferrable tasks does incur a loss of generality, since it introduces pessimism. In the context of the above example, if we convert the multi-segmented self-suspending task $\tau_2$ into two single-segment deferrable tasks, called $\tau_2'$ and $\tau_2''$, where task $\tau_2'$ never defers its execution and task $\tau_2''$ defers its execution by at most the latest-possible arrival time of the second segment, the resulting single-segment deferrable task set is in fact not schedulable under the given priority assignment: if a job of $\tau_1$ coincides with the arrival of a job of $\tau_2''$ after it has maximally deferred its execution, the job of $\tau_2''$ has a response time that exceeds its relative deadline of 11 time units. This shows that any restriction to single-segment deferrable tasks — that is, assuming that “[w]ithout any loss of generality […] a task can defer its entire execution time but not parts of it” [Raj:suspension1991] (recall Section 2.3) — does in fact come with a loss of generality.
4 Deriving a Corresponding Deferrable Task Set
To apply an analysis of the period enforcer based on Theorem 5 in [Raj:suspension1991], we first need to convert a given set of multi-segment self-suspending tasks into a corresponding set of single-segment deferrable tasks. This raises the question: how can we efficiently derive the corresponding set of single-segment deferrable tasks?
The original period enforcer proposal [Raj:suspension1991] is silent on this issue and does not spell out a procedure for converting a multi-segmented self-suspending task into a corresponding set of single-segment deferrable tasks. However, in our opinion, performing such a transformation without introducing additional pessimism is not at all easy in the general case.
In the following, we illustrate the inherent difficulty of the problem by focusing on a special case to which we can apply a recent result of Nelissen et al. [ecrts15nelissen], which allows analyzing the exact worst-case response time of multi-segmented self-suspending sporadic tasks, albeit with exponential time complexity. Nelissen et al.’s worst-case response-time analysis [ecrts15nelissen] is exact under the following conditions^{4} (^{4}we refer to the characteristics of the worst-case release pattern provided in Lemma 2 in [ecrts15nelissen]; the exact worst-case response time can be obtained by exploring all release patterns that satisfy these conditions):

- the task set contains only one self-suspending task,
- the self-suspending task is the lowest-priority task,
- the scheduling policy is preemptive fixed-priority scheduling, and
- all tasks have constrained deadlines (i.e., for all ).
For an arbitrary number of tasks , suppose that the system has regular sporadic tasks and only one segmented self-suspending task , and that all tasks have implicit deadlines (i.e., for all ). Further suppose that task has segments with .
To convert a computation segment of into a single-segment deferrable task, we need to derive the segment’s latest-possible arrival time, relative to the release of a job. Formally, for the computation segment of task , we let denote its latest-possible arrival time, with the interpretation that, if a job of task arrives at time , then it is guaranteed that the computation segment of this job will not arrive later than at time .
How can we compute ? Suppose that the worst-case response time of the computation segment of task is , and recall that denotes the maximum self-suspension length before the computation segment of . Then can be expressed in terms of :
where . Therefore, if we can derive the exact segment worst-case response time for , we can easily compute for . And conversely, if we can somehow obtain for , we can trivially infer for . Based on these considerations, it appears that the transformation problem is — at least in the considered special case — equivalent to the worst-case response-time analysis of a multi-segmented self-suspending task.
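Under the stated assumptions, the transformation step itself is mechanical once the per-segment response times are known. The following is a minimal Python sketch (with hypothetical function and parameter names) of deriving the latest-possible arrival offsets from exact per-segment worst-case response times and maximum suspension lengths:

```python
def latest_arrival_offsets(seg_wcrt, susp_max):
    """Latest-possible arrival offset of each computation segment,
    relative to the release of the enclosing job.

    seg_wcrt[j]  -- exact worst-case response time (relative to the job's
                    release) of the j-th computation segment
    susp_max[j]  -- maximum self-suspension length preceding segment j
                    (susp_max[0] is 0: the first segment never suspends)

    The names are hypothetical; the recurrence mirrors the relation in
    the text: segment j arrives no later than the worst-case completion
    of segment j-1 plus the maximum suspension before segment j.
    """
    offsets = [0]  # the first segment arrives with the job itself
    for j in range(1, len(susp_max)):
        offsets.append(seg_wcrt[j - 1] + susp_max[j])
    return offsets
```

As the text argues, this reduces the transformation problem to computing the exact per-segment worst-case response times, which is precisely the hard part.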
However, deriving an exact bound for for task is not easy: even for the above “simple” case, Nelissen et al.’s solution [ecrts15nelissen] for calculating the exact worst-case response time requires exponential time if . Furthermore, Nelissen et al. [ecrts15nelissen] identified several misconceptions in prior analyses and, after correcting them, observed that deriving the worst-case response time of a computation segment in pseudo-polynomial time appears to be very challenging indeed.^{5} (^{5}In fact, in ongoing work, it has recently been shown that verifying the schedulability of task is coNP-hard in the strong sense even in the considered simplified case [Chen2016b].)
Nelissen et al. [ecrts15nelissen] did not study the period enforcer; rather, they considered unrestricted self-suspensions. However, given that the period enforcer has no effect on tasks that do not self-suspend [Raj:suspension1991], and given that in the considered special case only the lowest-priority task self-suspends, we believe that these observations transfer to the period-enforcement case.
To summarize, to analyze the period enforcer based on Theorem 5 in [Raj:suspension1991], a procedure for transforming multi-segmented self-suspending tasks into sets of single-segment deferrable tasks is needed, but no such procedure is given in the original proposal [Raj:suspension1991]. Based on the presented considerations, we conclude that filling in this missing step is non-trivial and observe that the closest known solution by Nelissen et al. [ecrts15nelissen] requires exponential time even in the greatly simplified special case of a single self-suspending task. It thus remains unclear how Theorem 5 in [Raj:suspension1991] can be used for schedulability analysis of sets of multi-segmented self-suspending tasks. We also searched for alternative analysis approaches that do not rely on Theorem 5, but found no simple or efficient schedulability test for the period enforcer that does not introduce substantial additional pessimism. The problem remains open.
Next, we take a look at the period enforcer in the context of synchronization protocols.
5 Incompatibility with Suspension-Based Locking Protocols
Binary semaphores, i.e., suspension-based locks used to realize mutually exclusive access to shared resources, are a common source of self-suspensions in multiprocessor real-time systems. When a task tries to use a resource that has already been locked, it self-suspends until the resource becomes available. Such self-suspensions due to lock contention, just like any other self-suspension, result in deferred execution and thus can detrimentally affect a task’s interference on lower-priority tasks. It may thus seem natural to apply the period enforcer to control the negative effects of blocking-induced self-suspensions.^{6} (^{6}The use of period enforcement in combination with suspension-based locks has indeed been assumed in prior work [Raj:91], stated as a motivation and possible use case in the original period enforcer proposal [Raj:suspension1991], and suggested as a potential improvement elsewhere [Lak:11, LNR:09].) However, as we demonstrate with two examples, it is actually unsafe to apply period enforcement to lock-induced self-suspensions.
5.1 Combining Period Enforcement and Suspension-Based Locks
Whenever a task attempts to lock a shared resource, it may block and self-suspend. In the context of the multi-segmented self-suspending task model, each lock request hence marks the beginning of a new segment.
The period enforcer algorithm may therefore be applied to determine the eligibility time of each such segment (which, again, all start with a critical section). There is, however, one complication: when does a task actually acquire a lock? That is, if a task’s execution is postponed due to the period enforcement rules, at which point is the lock request processed, with the consequence that the resource becomes unavailable to other tasks?
There are two possible interpretations of how period enforcement and locking rules may interact. Under the first interpretation, when a task requires a shared resource, which implies the beginning of a new segment, its lock request is processed only when its new segment is eligible for execution, as determined by the period enforcer algorithm. Alternatively, under the second interpretation, a task’s request is processed immediately when it requires a shared resource.
As a consequence of the first interpretation, a task may find a required shared resource unavailable when its new segment becomes eligible for execution, even though the resource was available when the prior segment finished. As a consequence of the second interpretation, a shared resource may be locked by a task that cannot currently use the resource because the task is still ineligible to execute.
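The difference between the two interpretations can be pinned down as the time at which a lock request takes effect. A minimal sketch (hypothetical names, assuming eligibility times as computed by the period enforcer):

```python
def lock_request_effective_time(request_time, eligibility_time, interpretation):
    """Time at which a segment's lock request is processed.

    Interpretation 1: the request is processed only once the requesting
    segment is eligible to execute under the period enforcer.
    Interpretation 2: the request is processed immediately, even if the
    requesting segment is not yet eligible to execute.
    """
    if interpretation == 1:
        return max(request_time, eligibility_time)
    return request_time
```

Either choice couples the locking protocol to the enforcer's eligibility times, which is the root of the problems demonstrated next.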
We believe that the first interpretation is the more natural one, as it does not make much sense to allocate resources to tasks that cannot yet use them. However, for the sake of completeness, we show that either interpretation can lead to deadline misses even if the task set is trivially schedulable without any enforcement.
5.2 Case 1: Locking Takes Effect at Earliest Segment Eligibility Time
In the following example, we assume the first interpretation, i.e., that the processing of lock requests is delayed until the point when a resuming segment would no longer be subject to any delay due to period enforcement. We show that this interpretation leads to a deadline miss in a task set that would otherwise be trivially schedulable.
Consider the following simple task set consisting of two tasks on two processors that share one resource. Task , on processor 1, has a total execution cost of and a period and deadline of . After one time unit of execution, jobs of require the shared resource for two time units. thus consists of two segments with costs and . Task , on processor 2, has the same overall WCET (), a slightly shorter period (), and requires the shared resource for one time unit after two time units of execution ( and ). Without period enforcement (and under any reasonable locking protocol), the task set is trivially schedulable because, by construction, any job of incurs at most one time unit of blocking, and any job of incurs at most two time units of blocking.
In contrast, with period enforcement, deadline misses are possible. Figure 5 depicts a schedule of the two tasks assuming periodic job arrivals and use of the period enforcer algorithm. We focus on the eligibility times of the second segment of .
Since ’s first job requests the shared resource only after two time units of execution, it is blocked by ’s critical section, which commenced at time . At time , releases the shared resource and consequently resumes (i.e., ). According to the period enforcer rules [Raj:suspension1991], the second segment is immediately eligible because, according to Equation 1 (in Section 3),
(Recall that , and interpret with respect to ’s processor.)
At time , the second job of is released. Its first segment ends at time . However, its second segment is not eligible to be scheduled before time since . At time , the second job of , released at time , can thus lock the shared resource without contention. Consequently, when ’s request for the shared resource takes effect at time , the resource is no longer available and must wait until time before it can proceed to execute. We thus have
The third job of is released at time . Its first segment ends at time , but since , the second segment may not commence execution until time and the shared resource remains available to other tasks in the meantime. The third job of is released at time and acquires the uncontested shared resource at time . Thus, the segment of cannot resume execution before time . Therefore
The same pattern repeats for the fourth job of , released at time : when its first segment ends at time , the second segment is not eligible to commence execution before time since . By then, however, has already locked the shared semaphore again, and the second segment of the fourth job of cannot resume before time , at which point
However, this leaves insufficient time to meet the job’s deadline: as the second segment of requires time units to complete, the job’s deadline at time is missed.
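The drift in this example stems from the eligibility-time recurrence itself: as we read Equation 1 (in Section 3), each instance of a segment becomes eligible no earlier than one period after the previous instance's eligibility time. A minimal sketch with hypothetical arrival times illustrates how a single late segment arrival pushes back all later eligibility times:

```python
def eligibility_times(arrivals, period):
    # Period-enforcer recurrence (our reading of Equation 1): a segment
    # instance is eligible at the later of its own arrival time and the
    # previous instance's eligibility time plus one period.
    et = []
    for a in arrivals:
        et.append(a if not et else max(a, et[-1] + period))
    return et

# A single late arrival (12 instead of 10) delays every later instance:
eligibility_times([0, 12, 20, 30], 10)  # [0, 12, 22, 32]
```

With locks in the picture, each such delay can in turn lengthen the next suspension, feeding the recurrence again.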
By construction, this example does not depend on a specific locking protocol; for instance, the effect occurs with both the MPCP [Ra:90] (based on priority queues) and the FMLP [BLBA:07, BA:08] (based on FIFO queues). The corresponding response-time analyses for both protocols [Br:13, LNR:09] predict a worst-case response time of for task (i.e., four time units of execution, and at most two time units of blocking due to the critical section of ). This demonstrates that, under the first interpretation, adding period enforcement to suspension-based locks invalidates existing blocking analyses. Furthermore, it is clear that the devised repeating pattern can be used to construct schedules in which the response time of grows beyond any given implicit or constrained deadline.
Next, we show that the second interpretation can also lead to deadline misses in otherwise trivially schedulable task sets.
5.3 Case 2: Locking Takes Effect Immediately
From now on, we assume the second interpretation: all lock requests are processed immediately when they are made, even if this causes the shared resource to be locked by a task that is not yet eligible to execute according to the rules of the period enforcer algorithm. We construct an example in which a task’s response time grows with each job until a deadline is missed.
To this end, consider two tasks with identical parameters hosted on two processors. Task is hosted on processor 1; task is hosted on processor 2. Both tasks have the same period and relative deadline and the same WCET of . They both access a single shared resource for two time units each per job. Both tasks request the shared resource after executing for at most one time unit. They both thus have two segments each with parameters and .
The example exploits that a job may require less service than its task’s specified WCET. To ensure that the shared resource is acquired in a certain order, we assume the following deterministic pattern of the actual execution times. Let be an arbitrarily small, positive real number with .

- The first segment of even-numbered jobs of executes for only time units.
- All other segments execute for their specified worst-case costs.
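The assumed execution-time pattern can be stated compactly. In the following sketch, the names are hypothetical and `eps` stands for the small positive constant introduced above:

```python
def first_segment_demand(job_index, c1, eps):
    # Even-numbered jobs of the second task underrun their first segment
    # by eps time units; all other jobs take the full worst-case cost c1.
    return c1 - eps if job_index % 2 == 0 else c1
```

This deterministic underrun is what steers the order in which the two tasks acquire the shared resource.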
Figure 6 shows an example schedule assuming periodic job arrivals.
At time , the first job of acquires the shared resource because does not issue its request until time . Consequently, is blocked until time , and we have
and  
The roles of the second jobs of both tasks are reversed: since the second job of locks the shared resource already at time , is blocked when it attempts to lock the resource at time . However, according to the rules of the period enforcer algorithm, the second segment of the second job of is not actually eligible to execute before time since
Consequently, even though the lock is granted to already at time , the critical section is executed only starting at time , and is thus delayed until time . At time , is immediately eligible to execute since
The third jobs of both tasks are released at time . The roles are swapped again: because ’s first segment requires only time units of service, it acquires the lock at time , before issues its request at time . However, according to the period enforcer algorithm’s eligibility criterion, cannot actually continue its execution before time since
This, however, means that cannot use the shared resource before time , which leaves insufficient time to complete the second segment of ’s third job before its deadline at time . Furthermore, if both tasks continue the illustrated execution pattern, the period enforcer continues to increase their response times. As a result, the pattern may be repeated to construct schedules in which any arbitrarily large implicit or constrained deadline is violated.
As in the previous example, the response-time analyses for both the MPCP [Br:13, LNR:09] and the FMLP [Br:13] predict a worst-case response time of for both tasks (i.e., four time units of execution, and at most two time units of blocking). The example thus demonstrates that, if lock requests take effect immediately, then the period enforcer is incompatible with existing blocking analyses because, under the second interpretation, it increases the effective lock-holding times.
5.4 Other Protocols and Interpretations
The examples in Sections 5.2 and 5.3 assume a shared-memory locking protocol: once a lock is granted, tasks execute their own critical sections on their assigned processors. One may wonder whether effects similar to those described in Sections 5.2 and 5.3 can also occur under distributed real-time locking protocols such as the Distributed Priority Ceiling Protocol (DPCP) [RSL:88, Raj:91] or the Distributed FIFO Locking Protocol (DFLP) [Br:13, Br:14], where critical sections may be executed on dedicated synchronization processors. In this case, the self-suspension occurs on the task’s application processor, which is different from the (remote) synchronization processor on which the critical section is executed.
This separation allows employing period enforcement only on application processors (while avoiding it on synchronization processors) without incurring the feedback cycle between blocking times and self-suspension times highlighted in Sections 5.2 and 5.3.
However, period enforcement still invalidates all existing blocking analyses for distributed real-time semaphore protocols [RSL:88, Raj:91, Br:13] because it artificially increases blocking times if tasks contain multiple accesses to shared resources. An example demonstrating this effect is shown in Figure 7. Two segmented self-suspending tasks and share a resource using a distributed real-time locking protocol. The choice of protocol is irrelevant; the example works with both the DPCP and the DFLP. The tasks have parameters , , and , , , and . The computation segments are separated by self-suspensions that arise while the tasks wait for the completion of critical sections that are executed remotely on a dedicated synchronization processor ; the corresponding suspension segment parameters , , and will be defined shortly.
The first jobs of and are both released at time 0 and attempt to access the shared resource at time 1. Task ’s request is serviced first; as a result resumes only at time after having been suspended for four time units:
Task then executes its second computation segment for time units until time 11, when the job accesses the shared resource for a second time. Since there is no contention from at this time, resumes after only two time units at time 13. This leaves the job sufficient time to complete at time 14, one time unit before its deadline at time 15.
The second job of is released at time 15 and issues a request for the shared resource at time 16. Since there is no contention from at the time, the second computation segment arrives already at time , after having been self-suspended for only two time units. However, since the second segment of the first job arrived at time , the second segment of the second job is not eligible to start execution until time since
As a result, faces contention from when it issues its second request for the shared resource at time 26, which ultimately leads to a deadline miss at time 30.
In contrast, without period enforcement, does not miss its deadline at time 30 because, across its two requests, a job of is delayed by at most one request of , for a total self-suspension time of at most six time units. That is, even though the individual self-suspension segments of the two tasks are each up to four time units long (i.e., ), the fact that the self-suspensions arise due to the same cause (resource contention) means that the total self-suspension time is actually less than the sum of the individual per-segment bounds.
Existing analyses for the DPCP [RSL:88, Raj:91, Br:13] and the DFLP [Br:13] exploit this knowledge and therefore predict task to be schedulable with a worst-case response time of 14.^{7} (^{7}The analyses in [RSL:88, Raj:91] do assume a segmented task model, but bound the total blocking across all segments. The analysis in [Br:13] also bounds the total blocking across all segments and can be applied to both the segmented and the dynamic self-suspension model.) The example in Figure 7 thus demonstrates that the period enforcer invalidates existing blocking bounds for distributed semaphore protocols. As an aside, the example in Figure 7 further highlights limitations of the segmented self-suspension model in the context of synchronization protocols, where the lengths of self-suspensions encountered at runtime are inherently not independent.
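The arithmetic behind the aggregate bound in this example can be checked directly. The numbers are those given above; the decomposition into "own" request lengths plus at most one blocking request is our reading of the cited analyses:

```python
# Each job of the depicted task issues two remote requests of two time
# units each, and across both accesses it is blocked by at most one
# two-time-unit request of the other task.
own_requests = [2, 2]
max_remote_blocking = 2
aggregate_suspension_bound = sum(own_requests) + max_remote_blocking  # 6

# Summing the individual per-segment suspension bounds (up to four time
# units each) instead yields a pessimistic total of 8 time units.
per_segment_bounds = [4, 4]
assert aggregate_suspension_bound < sum(per_segment_bounds)
```

Period enforcement destroys exactly this aggregation argument, since it can make both requests of a job contend.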
Returning to the shared-memory case, as a third possible interpretation, one could also exclude critical sections from period enforcement such that only the rest of the computation segment after a critical section is subject to period enforcement (i.e., making critical sections immediately eligible to execute).^{8} (^{8}This interpretation does not fit the assumptions stated in [Raj:suspension1991, Raj:91].) This can be understood as making each critical section an individual computation segment (exempt from period enforcement) that is separated from the following computation by a “virtual” self-suspension of maximum length zero. As in the case of distributed semaphore protocols, this interpretation breaks the feedback cycle highlighted in Sections 5.2 and 5.3, but still invalidates all existing blocking analyses as it artificially inflates the synchronization delay.
An example of this effect is shown in Figure 8, which depicts the same scenario as in Figure 7 under the assumption that a shared-memory semaphore protocol is used (i.e., critical sections are executed locally by each job) and that critical sections are exempt from period enforcement. As in the distributed case, period enforcement induces a deadline miss, whereas existing blocking analyses [Br:13, LNR:09] exploit the fact that a remote critical section can block only once, thus arriving at a worst-case response-time bound of 14 for .
5.5 Discussion
While it is intuitively appealing to combine period enforcement with suspension-based locking protocols [Raj:91, Lak:11, LNR:09], we observe that this causes nontrivial difficulties. In particular, our examples show that the addition of period enforcement invalidates all existing blocking analyses.
If critical sections are subject to period enforcement, our examples also suggest that devising a correct blocking analysis would be a substantial challenge due to the demonstrated feedback cycle between the period enforcer rules and blocking durations. Fundamentally, the design of the period enforcer algorithm implicitly rests on the assumption that a segment can execute as soon as it is eligible to do so. In the presence of locks, however, this assumption is invalidated. As demonstrated, the result can be a successive growth of self-suspension times that proceeds until a deadline is missed. The period enforcer algorithm, at least as defined and used in the literature to date [Raj:suspension1991, Raj:91], is therefore incompatible with the existing literature on suspension-based real-time locking protocols (e.g., [Raj:91, Lak:11, LNR:09, BLBA:07, Br:13]).
Finally, it is worth noting that our examples can be trivially extended with lower-priority tasks to ensure that no processor idles before the described deadline misses occur. It is also not difficult to extend the examples in Figures 6 and 8 with a task on a third processor such that all critical sections of and are separated from their predecessor segments by a non-zero self-suspension.
6 Concluding Remarks
We have revisited the underlying assumptions and limitations of the period enforcer algorithm, which Rajkumar [Raj:suspension1991] introduced to handle segmented self-suspending real-time tasks.
One key assumption in the original proposal [Raj:suspension1991] is that a deferrable task can defer its entire execution time, but not parts of it. This creates mismatches between the original segmented self-suspending task set and the corresponding single-segment deferrable task set, as we have demonstrated with an example showing that Theorem 5 in [Raj:suspension1991] does not reflect the schedulability of the original segmented self-suspending task system.
The original proposal [Raj:suspension1991] further left open the question of how to convert a segmented self-suspending task set into a corresponding set of single-segment deferrable tasks. Taking into account recent developments [Chen2016b, ecrts15nelissen], we have observed that such a transformation is non-trivial in the general case; this problem remains open.
Finally, we have demonstrated that substantial difficulties arise if one attempts to combine suspension-based locks with period enforcement. These difficulties stem from the fact that period enforcement can increase contention or lock-holding times, which increases the lengths of self-suspension intervals, which in turn feeds back into the period enforcer’s minimum suspension lengths. As a consequence, period enforcement invalidates all existing blocking analyses.
Nevertheless, the period enforcer algorithm per se, and Theorem 5 in [Raj:suspension1991], could still prove useful for handling self-suspending tasks (that do not use suspension-based locks) if efficient schedulability tests or methods for constructing corresponding sets of single-segment deferrable tasks can be found. However, no such tests or transformations have yet been obtained, and the development of a precise and efficient schedulability test for self-suspending tasks remains an open problem.
Acknowledgements
We thank James H. Anderson and Raj Rajkumar for their comments on early drafts of this paper. This work has been supported by DFG, as part of the Collaborative Research Center SFB876.