REORDER: Securing Dynamic-Priority Real-Time Systems Using Schedule Obfuscation

06/04/2018 ∙ by Chien-Ying Chen, et al. ∙ University of Illinois at Urbana-Champaign

Modern real-time systems (RTS) are increasingly the focus of security threats. The design of such systems often aids attackers, since RTS are engineered to be predictable. This predictability can be used to mount side-channel attacks, destabilize the system (by denying access to critical resources at important times), etc. In this paper, we propose methods to obfuscate the predictable (scheduling) behavior of RTS that use dynamic-priority real-time scheduling algorithms (e.g., EDF). We developed the REORDER protocol for this purpose. Such obfuscation will make it difficult for attackers to target RTS. We also developed a metric (called "schedule entropy") to measure the amount of obfuscation. We integrated our REORDER protocol into the Linux real-time EDF scheduler and evaluated our scheme using both a realistic embedded platform (based on Raspberry Pi) and synthetic workloads.


I Introduction

Systems with real-time properties are often engineered to be very predictable [1]. This is necessary for their correct operation and for ensuring safety guarantees. Most real-time systems (RTS) are designed to execute repeating jobs (either periodic or sporadic ones, i.e., jobs with bounded inter-arrival times) that have explicit "deadline" requirements. Hence, the schedule repeats. Any deviation in the timing behavior of the real-time schedule can result in the system becoming unstable – thus adversely affecting the safety of the system. Adversaries can take advantage of this inherent determinism by focusing their attacks on the schedulers in real-time systems [2, 3]. Traditionally, security has been an afterthought in the design of RTS, but that is changing with the advent of high-profile attacks (e.g., denial-of-service attacks using Internet-of-Things devices [4], Stuxnet [5], BlackEnergy [6], etc.). The increased usage of commodity-off-the-shelf (COTS) components along with emerging technologies (e.g., IoT) only exacerbates security problems in RTS.

Hence, the scheduler in RTS is a critical component for maintaining the integrity of the system. In fact, its predictable behavior can even be used to improve the security of such systems [7, 8, 9, 10]. On the other hand, there are significant vulnerabilities that adversaries could exploit due to the repeating nature of real-time task schedules. Consider the spectrum analysis of a four-task real-time system (from Example 1, introduced in Section III-A) using discrete Fourier transforms (DFT) (Fig. 1(a)). An adversary can easily reconstruct the execution frequencies (and hence, periods) of all four real-time tasks (annotated by the red arrows) from this information! Such information can be used to launch other attacks with greater accuracy/success, since the attacker can predict exactly when the victim tasks are released – e.g., scheduler side-channel attacks that leak critical information [2, 3] or deny critical services, exploiting channels such as power consumption [11], schedule preemptions [12], electromagnetic (EM) emanations [13] and temperature [14]. In fact, the set of defensive techniques for RTS is fairly limited [15, 16, 17, 7, 8, 9, 18, 10].

Obfuscating the schedules, i.e., introducing randomness into the execution patterns of real-time tasks, could be one way to improve the security of RTS. This must be done carefully, so as not to interfere with the timing guarantees that the system provides, while still introducing diversity into the schedule. Fig. 1(b) shows the result of applying our randomization protocol (introduced next) to the same four-task example mentioned earlier. As the figure shows, DFT analysis applied to an obfuscated schedule reveals less regular execution patterns – it is harder to identify the frequencies of two of the tasks (second and third red arrows), thus thwarting some of the potential attacks mentioned earlier.
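As an illustration of the kind of analysis an observer might perform, the following sketch (using a hypothetical four-task set; the function and parameter names are ours, not from the paper) builds a simplified per-slot busy trace and locates task frequencies with a DFT:

```python
import numpy as np

def busy_trace(periods, wcets, horizon):
    """Mark each job's first `wcet` slots after release as busy
    (a simplification that ignores preemption and queueing)."""
    trace = np.zeros(horizon)
    for T, C in zip(periods, wcets):
        for release in range(0, horizon, T):
            trace[release:release + C] = 1.0
    return trace

# Hypothetical four-task set (periods/WCETs in integral time units)
periods, wcets = [8, 10, 16, 20], [1, 1, 2, 2]
trace = busy_trace(periods, wcets, horizon=4000)

# Magnitude spectrum of the (mean-removed) busy trace
spectrum = np.abs(np.fft.rfft(trace - trace.mean()))
freqs = np.fft.rfftfreq(len(trace), d=1.0)  # cycles per time unit
# Strong peaks appear at 1/T_i for each task (and at harmonics),
# which is exactly what the red arrows in Fig. 1(a) mark.
```

On a deterministic schedule such a spectrum exhibits sharp peaks at each task's rate; randomizing the schedule spreads that energy and blurs the peaks.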

(a) Frequency Spectrum of the EDF Schedule
(b) Frequency Spectrum of the Obfuscated Schedule
Fig. 1: Results of the frequency spectrum analysis for execution sequences scheduled by: (a) a deterministic scheduler (e.g., vanilla EDF) and (b) the randomized scheduler (REORDER). A taskset consisting of four periodic tasks is considered, and the red arrows in each plot indicate the tasks' true frequencies (i.e., 1/T_i Hz, where T_i is the inter-arrival time of task τ_i).

We propose a schedule randomization protocol (Section III) that we named REORDER (REal-time ObfuscateR for Dynamic SchedulER). We achieve the randomization by using bounded priority inversions at runtime (see Section III-A for details). REORDER obfuscates the earliest deadline first (EDF) scheduling policy [19]; EDF is a dynamic task scheduler that can, theoretically, utilize a CPU to its fullest. It is widely supported by many real-world RTS and operating systems, e.g., Erika Enterprise [20], RTEMS [21] and even Linux [22]. Existing work on protecting real-time schedulers [23, 24] is (a) focused on static scheduling algorithms and (b) inadequate for measuring the effects of obfuscation. Obfuscating the schedules of dynamic-priority algorithms such as EDF, while achieving a high level of randomization, is a much harder proposition than for static algorithms. One important problem is how to bound the time allocated for priority inversions, since job deadlines change dynamically as execution proceeds (in static algorithms, these bounds can easily be computed offline and stored in lookup tables). REORDER guarantees that if a given real-time system was schedulable (i.e., meets all of its timing and deadline constraints) under the vanilla EDF scheduler, then the obfuscated schedule will also meet the same guarantees.

A challenge in any security framework is to measure the effectiveness of the solution. In this case, designers of RTS need to estimate the amount of randomness introduced into the real-time schedule by REORDER. Hence, we developed a metric, named "schedule entropy" (Section IV-A), that measures the amount of obfuscation for a given real-time task set/schedule. Schedule entropy can be used not just to capture the amount of randomness introduced into the system but also to compare different obfuscation schemes.

REORDER is implemented in a (real-time) Linux kernel (see repository [25] for the source code of our implementation). It was evaluated (Section VI) (a) on a realistic embedded platform (Raspberry Pi) and (b) using an automotive benchmark suite (MiBench) [26]. In addition, we also carry out a design-space exploration using synthetic real-time task sets, a common practice in the real-time community (Section IV-C). This paper makes the following contributions:

  • a randomization algorithm that shuffles EDF schedules (Section III-B),

  • “schedule entropy” – a new metric to calculate the amount of randomness in the task schedules (Section IV) and

  • an implementation of the REORDER algorithm in the Linux kernel (Section VI).

We first present some background information as well as the system and adversary models.

II System and Adversary Model

II-A Background

Standard real-time scheduling theory generally considers periodically executing tasks (each of which maps trivially to the concept of a process in a general-purpose OS) [27, 19, 28] that model typical real-time control systems. Each task τ_i generates a potentially infinite sequence of jobs and is modeled by its worst-case computation time (WCET) C_i and a minimum inter-arrival time (i.e., period) T_i. Tasks also have a strict relative deadline D_i by which each job's computation must be finished. Task priorities can be static or dynamic [19]. The optimal static scheme is the rate-monotonic (RM) priority assignment, where a shorter period implies higher priority. RM can guarantee schedulability of a given set of n tasks as long as the total utilization is below n(2^(1/n) − 1) (about 69.3% for large n). The overall optimal scheme is EDF – a dynamic-priority algorithm that always picks the job whose absolute deadline is closest. EDF can schedule any set of tasks if the total system utilization does not exceed 100% (i.e., the sum of the WCET-to-period ratios over all tasks is at most unity: Σ_i C_i/T_i ≤ 1).
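The two utilization tests above can be sketched as follows (a minimal illustration; the function names are ours):

```python
def rm_utilization_bound(n):
    """Liu & Layland utilization bound for rate-monotonic scheduling
    of n tasks: n * (2^(1/n) - 1), tending to ln 2 (~69.3%)."""
    return n * (2 ** (1.0 / n) - 1)

def edf_schedulable(wcets, periods):
    """EDF schedulability test for implicit-deadline periodic tasks:
    the total utilization must not exceed 1."""
    return sum(c / t for c, t in zip(wcets, periods)) <= 1.0
```

For example, a two-task set with utilizations 0.25 + 0.25 passes the EDF test, while 0.75 + 0.6 does not.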

II-B System Model

Let us consider the problem of scheduling a set of periodic tasks on a single processor (since most RTS still use single-core platforms), using the EDF scheduling policy. For simplicity of notation, we use the same symbol to denote a task and its jobs, and use the terms task and job interchangeably. We denote by d_i the absolute deadline of τ_i (i.e., the deadline of any given job of τ_i). We assume that cache-related preemption delay is negligible compared to the WCET of the tasks, and we do not consider any precedence or synchronization constraints among tasks. We further assume that the tasks have constrained deadlines, i.e., D_i ≤ T_i, and that the taskset is schedulable under the EDF scheduling policy, that is, the worst-case response time (WCRT) of each task (the calculation of WCRT is presented in Section III-A) is less than its deadline – since REORDER would be trivially ineffective for an unschedulable taskset.

Under the periodic task model, the schedule produced by any preemptive scheduling policy for a periodic taskset is cyclic, i.e., the system repeats the task arrival pattern after an interval that coincides with the taskset's hyperperiod (the least common multiple (LCM) of the periods of the tasks) [29], denoted by H. Furthermore, we consider a discrete time model (e.g., integral time units [30]) where system and task parameters are multiples of a time unit; we write [a, b) for an interval starting at time point a and ending at time point b, with length b − a.
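The hyperperiod H can be computed as the LCM of the task periods, for instance:

```python
from math import gcd
from functools import reduce

def hyperperiod(periods):
    """Least common multiple of the task periods (integer time units)."""
    return reduce(lambda a, b: a * b // gcd(a, b), periods, 1)
```

The schedule of a periodic taskset repeats with this period, which is why the entropy analysis in Section IV compares intervals across hyperperiods.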

II-C Adversary Model

We assume that attackers have access to the timing parameters of the tasksets and also know which scheduling policy is in use. The adversary's objective is to obtain detailed information about the execution patterns of the real-time tasks and to cause greater damage to the system by exploiting the precise schedule information [3, 2].

As introduced in Section I, the attacker may exploit side-channels (e.g., power consumption, schedule preemptions, electromagnetic (EM) emanations and temperature) to observe and reconstruct the system schedule [3]. A smart attacker possessing sufficient system information can carry out more advanced attacks under the right conditions to move the system to an unsafe state. For example, in the now-famous Stuxnet attack [5], the malware remained resident in the system for months, collecting sensitive information, before the main attack. A denial-of-service attack can target only a specific service handled by a critical task when the precise schedule information is obtainable. Side-channel attacks [31, 32] are another typical class of attacks that benefit from such schedule reconstruction; for example, it was shown that precise schedule information can be exploited to determine the prime and probe [33] instants in a cache side-channel attack, increasing its chance of success [2].

We further assume that the scheduler itself is not compromised and that the attacker does not have access to it. Without this assumption, the attacker could undermine the scheduler or directly obtain the schedule information. Our objective, then, is to reduce the inferability of the schedule for real-time tasksets (and also reduce the possibility of other attacks that depend on predictable schedules) while meeting real-time guarantees. The randomness introduced into the schedule increases variation in the system and hence makes attacks that rely on the determinism of the real-time schedule harder to mount.

III Schedule Randomization Protocol

In this section we describe the REORDER protocol. The focus of our design is that, even if an observer is able to capture the exact schedule for a period of time (for instance, a few hyperperiods), REORDER will schedule tasks in such a way that succeeding execution traces show different orders (and timings) of execution. The main idea is that at each scheduling point we pick a random task from the ready queue and schedule it for execution. However, such random selection may lead to priority inversions [34], and arbitrary selection may result in missed deadlines – putting the safety of the system at risk. REORDER solves this problem by allowing only bounded priority inversions: it restricts how the scheduler may use priority inversions without violating the real-time constraints (e.g., deadlines) of the tasks. To ensure this, REORDER calculates an "acceptable" priority inversion budget for each task. If a task's budget is exhausted during execution, we stop allowing lower-priority tasks to execute ahead of it. The following sections present the details of the REORDER protocol.

III-A Randomization with Priority Inversion

A key step necessary for randomization is to calculate the maximum amount of time that lower-priority jobs can execute ahead of a higher-priority job. This is much harder in EDF than in the fixed-priority systems that prior work (TaskShuffler [24]) focused on, due to the dynamic nature of EDF (i.e., task priorities vary at run-time). We therefore define the worst-case inversion budget (WCIB), denoted V_i, which represents the maximum amount of time for which a job of a task τ_i may be blocked by jobs of tasks with larger relative deadlines (and hence lower priority than τ_i). In the following we illustrate how we calculate the WCIB for each task by utilizing the response time analysis [35, 36] for EDF.

III-A1 Bounding Priority Inversions

The WCRT of τ_i is the maximum time between the arrival of a job of τ_i and its completion. Our idea for bounding priority inversions is to calculate the slack time for each task (i.e., the difference between its deadline and its response time) and allow lower-priority tasks to execute for up to that amount of time. We therefore define the WCIB of τ_i as follows:

V_i = D_i − R_i,    (1)

where R_i represents an upper bound on the WCRT of τ_i (see Appendix -A for the calculation of R_i). The WCIB represents the maximum amount of time for which all lower-priority jobs are allowed to execute while an instance of τ_i is still unfinished, without τ_i missing its deadline even in the worst-case scenario. The REORDER protocol guarantees that the real-time constraints are satisfied by bounding priority inversions using the WCIB. Note that the WCIB can be negative for some tasks – although a non-positive WCIB does not imply that the taskset is unschedulable. At each scheduling point t, our idea is to execute some lower-priority job τ_j for up to min(c_j(t), min_i v_i(t)) additional time units before it yields the processor to the highest-priority job, where c_j(t) represents the remaining execution time of τ_j at time t and v_i(t) denotes the remaining inversion budget (introduced next) of each higher-priority task τ_i.

We enforce the WCIB at run-time by maintaining a per-job counter, the remaining inversion budget (RIB). The RIB of τ_i is initialized to the task's WCIB upon each job activation and decremented for each time unit during which τ_i is blocked by a lower-priority job. When the RIB reaches zero, no job with an absolute deadline greater than that of τ_i is allowed to run until τ_i completes. Note that not all jobs of τ_i may need the full C_i time units of computation (recall that C_i is a worst-case bound on the execution time). If some lower-priority job that blocks τ_i finishes earlier than its WCET, the RIB is decremented only by the (smaller) actual blocking time.
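The budget computation (Eq. (1)) and RIB bookkeeping above can be sketched as follows, assuming the WCRT upper bounds are given as inputs (the `Job` class and function names are illustrative, not from the paper):

```python
def wcib(deadlines, wcrt_ub):
    """Worst-case inversion budgets V_i = D_i - R_i (Eq. 1);
    values may be negative for some tasks."""
    return [d - r for d, r in zip(deadlines, wcrt_ub)]

class Job:
    def __init__(self, task_id, abs_deadline, budget):
        self.task_id = task_id
        self.abs_deadline = abs_deadline
        self.rib = max(budget, 0)  # remaining inversion budget

def charge_blocking(ready, running, dt):
    """Decrement the RIB of every ready job that is blocked by a
    lower-priority (later-deadline) running job for dt time units."""
    for j in ready:
        if j is not running and j.abs_deadline < running.abs_deadline:
            j.rib = max(j.rib - dt, 0)
```

Charging only the actual blocking time (rather than the blocker's WCET) reflects the early-finish behavior described above.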

For a given non-negative WCIB, jobs of τ_i can be delayed by priority inversions for up to V_i time units. The WCRT of τ_i under REORDER is then bounded by R_i + V_i = D_i. Hence, τ_i remains schedulable with the REORDER protocol and we can assert the following:

Proposition 1.

If τ_i is schedulable under EDF, its WCIB is non-negative, and lower-priority jobs do not delay τ_i by more than its WCIB, then REORDER will not violate the real-time constraints of τ_i.

III-A2 Selection of Candidate Jobs for Randomization

As we mentioned earlier, when the run-time counter RIB of τ_i reaches zero, no jobs with a deadline greater than that of τ_i can run while τ_i has an outstanding job. However, lower-priority jobs could still cause some other job to miss its deadline by inducing worst-case interference from the higher-priority jobs, due to a chain reaction. Therefore, to preserve the schedulability of such jobs, we must prevent them from experiencing such additional delays. We achieve this with the following inversion policy:

Randomization Priority Inversion Policy (RPIP): if the RIB of some task τ_i is exhausted (i.e., has reached zero), no job with an absolute deadline greater than that of τ_i is allowed to run while any higher-priority job (with a deadline at most that of τ_i) has an unfinished job.

In order to enforce RPIP at run-time, at each scheduling decision point we define the minimum inversion deadline for the jobs of τ_i as the earliest absolute deadline among the jobs in the ready queue whose inversion budget is exhausted. When there is no such job, the minimum inversion deadline is set to an arbitrarily large (e.g., infinite) value. This variable allows us to determine which jobs to exclude from priority inversions: no job with a deadline greater than the minimum inversion deadline can be scheduled as long as τ_i has an unfinished job. Otherwise, the job attaining that minimum deadline (not τ_i itself) could miss its deadline.

Example 1.

Consider a taskset of four periodic tasks. At some scheduling point, suppose two of the ready jobs have exhausted their inversion budgets (RIB = 0). The minimum inversion deadline of the highest-priority job is then the earliest deadline among those exhausted jobs, and every ready job whose deadline exceeds this value is not allowed to participate in priority inversion – since the jobs with exhausted budgets have not yet completed.

It can be shown that at any scheduling point we can enforce RPIP by examining only the inversion deadline of the highest-priority (i.e., shortest-deadline) job [24]. Hence, at each scheduling decision, REORDER excludes from selection all ready jobs whose deadlines are greater than the highest-priority job's minimum inversion deadline.

III-B Overview of the Randomization Protocol

The REORDER protocol selects a new job using the following sequence of steps (refer to Algorithm 1 for a formal description) at every scheduling decision point.

  • Step 1 (Candidate Selection): At each scheduling point t, the REORDER protocol searches for possible candidate jobs (that can be used for priority inversion) in the ready queue. Let Q(t) denote the set of ready jobs, τ_h the highest-priority (i.e., shortest-deadline) job in the ready queue, and C(t) the set of candidate jobs at scheduling point t.

    • We first check the RIB of the highest-priority job τ_h. If its RIB is zero, then τ_h becomes the only candidate and REORDER moves to Step 2, since priority inversion is not possible once the inversion budget is exhausted.

    • When the RIB is positive, we iterate through the ready queue and add a job to the candidate list if its deadline is less than or equal to the minimum inversion deadline of the highest-priority job at scheduling point t.

  • Step 2 (Randomizing the Schedule): This step selects a random job for execution; the selected job runs until the next scheduling decision point. We randomly pick a job from the candidate set C(t) and set the next scheduling decision point as follows:

    • If the selected job is the highest-priority job in the ready queue, the next decision point will be when either the job finishes or a new job of another task arrives.

    • Otherwise, the next decision (Section III-E presents another approach to trigger scheduling decisions) will be made when the selected job completes or the inversion budget expires, that is,

      t_next = t + min( c_sel(t), min_{d_i < d_sel} v_i(t) ),    (2)

      unless a new job arrives before time t_next, where c_sel(t) represents the remaining execution time of the selected job at time t and v_i(t) the remaining inversion budget of each higher-priority task τ_i. Note that the second term of the minimum is always positive, since every job with a higher priority than the selected job has some remaining inversion budget – otherwise, the selected job would not have been added to the candidate list in Step 1.
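The candidate-selection and random-pick steps above can be sketched as follows (an illustrative simplification that ignores new job arrivals; `RJob` and `reorder_pick` are our names, not from the paper):

```python
from dataclasses import dataclass
import random

@dataclass
class RJob:
    abs_deadline: int  # absolute deadline d_i
    rib: int           # remaining inversion budget v_i(t)
    rem: int           # remaining execution time c_i(t)

def reorder_pick(ready, t):
    """One REORDER scheduling decision (sketch of Steps 1-2)."""
    hp = min(ready, key=lambda j: j.abs_deadline)  # highest-priority job
    if hp.rib == 0:
        candidates = [hp]  # inversion budget exhausted: no inversion
    else:
        # Minimum inversion deadline: earliest deadline among ready
        # jobs whose budget is exhausted (infinite if none exists)
        m = min((j.abs_deadline for j in ready if j.rib == 0),
                default=float('inf'))
        candidates = [j for j in ready if j.abs_deadline <= m]
    sel = random.choice(candidates)
    if sel is hp:
        return sel, t + sel.rem  # or the next arrival, if earlier
    # Eq. (2): run until completion or until some higher-priority
    # job's inversion budget would expire
    budget = min(j.rib for j in ready if j.abs_deadline < sel.abs_deadline)
    return sel, t + min(sel.rem, budget)
```

Note that `budget` is always positive here: any job with an exhausted budget has a deadline no smaller than the minimum inversion deadline, so it cannot have a higher priority than a selected candidate.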

We now use a simple example to illustrate our randomization protocol.

Fig. 2: An instance of the schedule randomization protocol for the taskset presented in Example 2, shown over the taskset's hyperperiod; idle slots indicate times when no tasks are active. The upward and downward arrows represent job activations and deadlines, respectively.
Example 2.

Let us consider a taskset of three periodic tasks whose periods, execution times and deadlines admit a feasible EDF schedule.

The taskset is schedulable by EDF since its total utilization does not exceed 1. The vanilla EDF schedule and an instance of the randomization protocol are illustrated in Fig. 2. At time 0, all three jobs are in the ready queue and have positive inversion budgets, so all three jobs are in the candidate list. The scheduler randomly picks one of them; from Eq. (2), the next scheduling decision is taken when that job completes or a higher-priority job's inversion budget expires. The process then repeats at each decision point: when only one job is active it is scheduled directly, and when several jobs are active a candidate is again drawn at random (see Fig. 2 for the resulting schedule).

III-C Unused Time Reclamation

As mentioned earlier, not all jobs of a task may require the worst-case C_i units of time for their computation. We propose to reclaim this unused time (i.e., the difference between the WCET and the actual execution time) to increase the inversion budget of lower-priority jobs. When the (randomly) selected job finishes early (i.e., its actual execution time is smaller than its WCET), the unused time reserved for this job can be transferred to its lower-priority jobs (i.e., those ready jobs with larger deadlines at that moment) as extra inversion budget. When this feature is enabled, the RIBs of the lower-priority jobs are updated at the scheduling point at which the selected job finishes: v_j ← v_j + ε for every such ready job τ_j, where ε represents the unused portion of the WCET. Note that the real-time constraints (i.e., deadlines) are still respected, since Eq. (1) continues to hold for every ready job after the unused time is transferred.
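The reclamation rule can be sketched as follows (illustrative names; assumes per-job RIB counters as in Section III-A):

```python
from dataclasses import dataclass

@dataclass
class ReadyJob:
    abs_deadline: int
    rib: int  # remaining inversion budget

def reclaim_unused(ready, finished_deadline, unused):
    """Transfer a finished job's unused WCET time to the RIBs of the
    lower-priority (later-deadline) jobs still in the ready queue."""
    for j in ready:
        if j.abs_deadline > finished_deadline:
            j.rib += unused
```

Only jobs with deadlines later than the finished job receive the extra budget; higher-priority jobs keep their original bounds, so Eq. (1) is preserved.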

When there are no tasks in the ready queue (i.e., during slack time), the processor is idle – nothing is executing in the system. Although REORDER introduces variation between hyperperiods compared to vanilla EDF, randomizing only the real-time tasks leaves the schedule somewhat predictable, since the idle times (i.e., slack) appear in nearly the same slots. We address this problem by scrambling the idle times along with the real-time tasks in the next section.

III-D Idle Time Scheduling

One of the limitations of randomizing only the tasks is that the task executions are squeezed between the idle time slots, and the latter remain predictable. The work-conserving nature of EDF causes this separation between task executions and idle times; hence some tasks appear in similar places over multiple hyperperiods. One way to address this problem and improve schedule randomness is to idle the processor intentionally, at random times [24]. We achieve this by treating idle times as instances of an additional task, referred to as the idle task. The randomization protocol can then be applied to the augmented taskset that includes the idle task.

It can be noted that the idle task has infinite period, deadline and execution time, and hence always executes with the lowest priority. It can therefore force all other tasks to maximally consume their inversion budgets. During randomization, the idle task converts a work-conserving schedule into a non-work-conserving one, but it does not cause starvation for the other tasks: Step 2 of the REORDER protocol (see Section III-B) selects candidate tasks such that the real-time constraints of all tasks are always respected. Randomizing the idle task effectively makes task executions appear across wider ranges and thus reduces predictability. As a result, the schedule is less susceptible to attacks that depend on the predictability of RTS.

III-E Fine-Grained Switching

In prior work [24], researchers proposed decreasing the inferability of a fixed-priority scheduler by randomly yielding a job early during its execution. As a result, the schedule is fragmented at different time points, bringing more variation across execution windows. Our proposed REORDER protocol can also be modified to incorporate such a feature. Recall that the scheduling decisions in our scheme are made when: (i) a new job arrives, (ii) a job completes, or (iii) the inversion budget expires (refer to Step 2 in Section III-B). We can therefore achieve fine-grained switching by modifying the next scheduling decision point in Eq. (2) to t_next = t + rand(1, min(c_sel(t), min_{d_i < d_sel} v_i(t))), where rand(1, x) outputs a random number between 1 and x.

III-F Algorithm

Algorithm 1 formally presents the proposed schedule randomization protocol. This event-driven algorithm executes at the scheduler-level and takes the taskset (with idle time) as an input. At each scheduling decision point , a ready job is (randomly) selected for scheduling and the next scheduling decision point is determined.

0:  Input: augmented taskset (including the idle task) and current scheduling point t
0:  Output: the scheduled job J and the next scheduling point t_next
1:  Q(t) ← set of ready jobs
2:  Add the highest priority job to the candidate list, i.e., C(t) ← {τ_h}
3:  /* Search candidate jobs if the highest priority job has non-zero inversion budget */
4:  if v_h(t) > 0 then
5:      for each τ_j ∈ Q(t) \ {τ_h} do
6:          if d_j ≤ minimum inversion deadline of τ_h then
7:              C(t) ← C(t) ∪ {τ_j} /* add to candidate list */
8:          end if
9:      end for
10:  end if
11:  if v_h(t) ≤ 0 then
12:      /* schedule the highest priority (shortest deadline) job */
13:      J ← τ_h
14:      Set next scheduling point t_next when a new job arrives or the current job completes
15:  else
16:      /* randomly select a job from C(t) */
17:      J ← a job drawn uniformly at random from C(t)
18:      if J = τ_h then
19:          Set next scheduling point t_next when a new job arrives or the current job completes
20:      else
21:          /* set the next random scheduling point as a function of current job completion or budget expiration time (unless a new job arrives before t_next) */
22:          t_next ← t + min( c_J(t), min_{d_i < d_J} v_i(t) )
23:          Set next scheduling point t_next
24:      end if
25:  end if
26:  /* return the scheduled job and the next scheduling point */
27:  return (J, t_next)
Algorithm 1 Schedule Randomization Protocol

In Lines 3-10, the algorithm first selects the set of candidate jobs using the procedure described in Section III-B (see Step 1). If the highest-priority job has an exhausted inversion budget, it is scheduled for execution (Line 13). Otherwise, the algorithm schedules a random job from the candidate list (Line 17). If the selected job is the highest-priority job, the next scheduling point is set to when the job completes or a new job of another task arrives (Lines 14 and 19). If the selected job is not the highest-priority one, the algorithm selects the point when the current inversion budget expires, unless the job completes or a new job arrives before then (Line 23).

The algorithm iterates over the jobs in the current ready queue once and makes a single draw from the candidate list. Assuming that a single draw from a uniform distribution (Lines 17 and 22) takes constant time, the complexity of each instance of the algorithm is linear in the number of ready jobs (Section VI-2 presents empirical evaluations of the scheduling overhead).

IV Schedule Entropy: A Measure of Randomness

While the mechanism presented in Algorithm 1 obfuscates the inherent determinism in conventional dynamic-priority schedules, we still need to quantify the randomness that has been introduced into the schedule. We address this by analyzing the schedule entropy, which measures the randomness (or unpredictability) of the real-time schedule. Since prior entropy calculations do not correctly capture the randomness of a schedule (refer to Appendix -B for details), we now introduce a better approach to measure schedule entropy.

Iv-a Entropy of a Reorder Schedule

The proposed concept is based on a statistical model – approximate entropy (ApEn) [37] – that is used to evaluate the amount of regularity in time-series data. Let us consider K hyperperiods of a taskset (with hyperperiod length H), represented as K vectors of length H. Each vector contains H − ℓ + 1 intervals of length ℓ, and hence we have K(H − ℓ + 1) intervals of length ℓ in total. Let I_{k,s} denote the interval of size ℓ starting from slot s in the k-th hyperperiod, where 1 ≤ k ≤ K and 0 ≤ s ≤ H − ℓ. For all intervals, define C_{k,s} = |{k' : dist(I_{k,s}, I_{k',s}) ≤ r}| / K, where dist(·,·) denotes the dissimilarity between two intervals of different hyperperiods, r is a given dissimilarity threshold and |·| represents set cardinality. We use the Hamming distance [38] to evaluate the dissimilarity between intervals, since it is a relatively simple and widely used dissimilarity measure. For two vectors u and v of size ℓ, the Hamming distance is dist(u, v) = Σ_{x=1}^{ℓ} 1[u(x) ≠ v(x)], where 1[·] is the indicator function that equals 1 if the condition is satisfied and 0 otherwise. Notice that C_{k,s} represents the number of intervals of length ℓ starting from slot s with dissimilarity (in terms of Hamming distance) at most r from I_{k,s}, normalized by the number of observed hyperperiods K.

Let us now define Φ_s = (1/K) Σ_{k=1}^{K} ln C_{k,s} as an estimation of the (negative) entropy of the intervals that start from slot s with length ℓ. Therefore, for a given interval length ℓ and dissimilarity threshold r, the randomness (entropy) of a schedule observed over K hyperperiods is given by E(ℓ, r) = − Σ_{s=0}^{H−ℓ} Φ_s. Notice that for a deterministic scheduler (e.g., vanilla EDF when all jobs of the tasks take their WCET for computation) the schedule entropy is zero (i.e., there is no randomness, as expected).
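The entropy measure described above can be sketched as follows (our own minimal implementation of the ApEn-style estimator with Hamming distance; variable names are illustrative):

```python
import math

def hamming(u, v):
    """Hamming distance between two equal-length slot vectors."""
    return sum(x != y for x, y in zip(u, v))

def schedule_entropy(schedule, H, ell, r):
    """ApEn-style schedule entropy (sketch). `schedule` is a flat
    list with one task id per time slot, spanning K hyperperiods of
    length H each; ell is the interval length and r the threshold."""
    K = len(schedule) // H
    hypers = [schedule[k * H:(k + 1) * H] for k in range(K)]
    total = 0.0
    for s in range(H - ell + 1):
        intervals = [h[s:s + ell] for h in hypers]
        phi = 0.0
        for I in intervals:
            # fraction of hyperperiods whose interval at slot s is
            # within dissimilarity r of I (always >= 1/K, so log is safe)
            c = sum(hamming(I, J) <= r for J in intervals) / K
            phi += math.log(c) / K
        total -= phi
    return total
```

A schedule that repeats identically every hyperperiod yields zero entropy, while differing hyperperiods yield a strictly positive value.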

IV-B Interpretation of Entropy

The schedule entropy depicts the randomness of a given schedule. When comparing the entropies of two schedule sequences of equal length, a higher value implies that more variation is introduced in each time slot, and the chance of a task appearing in the same time slot in every hyperperiod is smaller. Consider the taskset presented in Example 1: its schedule entropy is substantially higher when scheduled by REORDER than by vanilla EDF. As we can see from the frequency spectra of the two schedule sequences (Fig. 1), the higher randomness reduces the determinism in the schedule, and some periods become unidentifiable in the spectral analysis.

Other attack methods, such as side-channel attacks, also suffer, since the victim tasks can potentially appear across larger ranges of execution. Such attacks typically require "prepping" the system (e.g., prime and probe [33]), and the closer this is done to the actual execution of the victim task, the better for the adversary. With increasing entropy values, the attacker has less precision in narrowing down the exact arrival times of the victim task(s) and hence experiences more noise in measurements. Similarly, covert channels [39] also suffer, since the expected execution order of tasks is broken by the randomization – higher entropy values result in larger deviations from the "expected" covert channel.

IV-C Evaluation of Schedule Entropy

We now evaluate the REORDER protocol with synthetic workloads, to understand the degree of randomness introduced into the schedule, using the schedule entropy calculations from Section IV-A. The evaluation of scheduling overhead on a real platform is presented in Section VI.

IV-C1 Simulation Setup

We used parameters similar to those in earlier research [8, 24, 40, 41]. The tasksets were grouped into buckets by base utilization (i.e., the total sum of the task utilizations). Each base-utilization group contained a fixed number of tasksets, each with a fixed number of tasks, and we only considered tasksets that were schedulable by EDF.

For a given base-utilization bucket, the utilization of individual tasks was generated from a uniform distribution using the UUniFast [42] algorithm. Task periods were chosen so that all tasksets share a common hyperperiod. We assumed that the deadlines are implicit, i.e., D_i = T_i. The execution time of each task was computed from the generated period and utilization: C_i = U_i · T_i. The actual execution time of each job was randomly drawn from a range bounded above by the WCET. The interval window size ℓ and the dissimilarity threshold r were fixed across experiments (Appendix -C), and for each schedulable taskset we observed the schedule over multiple hyperperiods.
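The UUniFast algorithm referenced above can be sketched as follows (a standard formulation of the algorithm, not the authors' exact code):

```python
import random

def uunifast(n, total_util):
    """UUniFast: draw n task utilizations that are uniformly
    distributed over the simplex summing to total_util."""
    utils, remaining = [], total_util
    for i in range(1, n):
        # Split off one utilization; the exponent keeps the split
        # uniform over the remaining (n - i)-dimensional simplex.
        next_rem = remaining * random.random() ** (1.0 / (n - i))
        utils.append(remaining - next_rem)
        remaining = next_rem
    utils.append(remaining)
    return utils
```

Each generated utilization U_i then yields a WCET via C_i = U_i · T_i once a period T_i has been drawn.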

IV-C2 Results

Fig. 3: The average entropy of the system with varying total utilization for different randomization schemes. The REORDER schedule shows maximum randomness (i.e., entropy) in the low-to-medium base-utilization range (e.g., ).

We now evaluate how much randomness (viz., unpredictability) the REORDER protocol introduces relative to vanilla EDF using the following schemes:

  • REORDER (Base): only the real-time tasks are randomized;

  • REORDER (IT): randomization with augmented tasksets (i.e., including idle time randomization);

  • REORDER (FT): fine-grained switching for augmented tasksets (i.e., yielding tasks at random points); and

  • REORDER (UTR): randomization with fine-grained switching and unused time reclamation.

In these experiments, we focus on the average behavior of the randomization schemes. In Fig. 3 we present the average schedule entropy of vanilla EDF (i.e., no randomization) along with the different randomization schemes.

The X-axis of Fig. 3 shows the total system utilization. The Y-axis represents the mean schedule entropy (normalized to ), i.e., , where represents the number of schedulable tasksets for a given base-utilization group and is the entropy of taskset . For higher utilizations, the entropy of vanilla EDF increases since the schedule varies across hyperperiods because of less slack (i.e., idle time). As the figure shows, the randomization protocol significantly increases schedule entropy. Idle time randomization with fine-grained scheduling and unused time reclamation (i.e., REORDER (UTR)) significantly improves the entropy over base randomization. Note that for higher utilizations the improvement is marginal. This is due to the fact that at higher utilizations the system does not have enough slack (i.e., idle time) to randomize much – hence all three schemes show similar results (in terms of schedule entropy). As the utilization increases (i.e., less slack), there are very few candidate jobs for priority inversion because of the higher load. Hence, the entropy (i.e., randomness) drops – although the schedule is still less predictable than vanilla EDF (since the mean entropy remains greater than the entropy of EDF).

Another way to observe schedule randomness is to measure the ranges within which each task can appear. A wider range implies that it is harder to predict when a task executes. In this experiment we measured the first and the last time slots in which a job of each task appears and used the difference between them as the range of execution for (denoted as ). In Fig. 4 we show the ratio of execution range to deadline of the tasks. The X-axis of the figure shows the total utilization and the Y-axis represents the geometric mean of the task execution-range-to-deadline ratios in each taskset.

For low utilizations, tasks appear within narrow ranges because of the work-conserving nature of the EDF algorithm. With increasing utilization, the ranges become wider because the worst-case response times of tasks (particularly lower priority ones) increase under higher loads. At lower utilizations, the system is dominated by slack time and hence randomizing tasks does not widen the execution range compared to vanilla EDF. This is because some (low-priority) jobs finish earlier due to priority inversions, so their response times are actually lower than under the EDF scheme. As a result, the mean ratio for REORDER decreases. As the figure shows, for higher utilizations (e.g., utilization greater than ) tasks appear in wider ranges (i.e., higher mean ratio) when REORDER is enabled. This is because priority inversions with REORDER (FT/UTR) increase task response times (especially for higher priority tasks). Besides, inverting priorities can also move lower priority jobs closer to their release times, thus widening the execution range. Since tasks under REORDER appear in wider ranges, this also hinders an attacker from mounting side-channel attacks, as mentioned in Section IV-B.
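The range measurement above can be sketched as follows, assuming the recorded schedule is available as a simple array of task ids per time slot (a simplification of our trace format; the function name is ours):

```c
/* Given a recorded schedule (one task id per time slot, -1 = idle),
 * compute the execution range of one task: the difference between the
 * last and the first slot in which it appears. Returns 0 if the task
 * appears at most once. */
int execution_range(const int *schedule, int num_slots, int task_id) {
    int first = -1, last = -1;
    for (int t = 0; t < num_slots; t++) {
        if (schedule[t] == task_id) {
            if (first < 0) first = t;
            last = t;
        }
    }
    return (first < 0) ? 0 : (last - first);
}
```

Dividing this range by the task's relative deadline (in slots) gives the per-task ratio whose geometric mean is plotted in Fig. 4.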

Fig. 4: The geometric mean of the execution-range-to-deadline ratio. For REORDER the mean ratio is higher when utilization is greater than – this implies that tasks appear in wider ranges and hence it is harder to infer the actual execution times.

V Implementation

We implemented REORDER in a real-time Linux kernel running on a realistic embedded platform to validate its usability and to evaluate its overhead relative to the existing vanilla EDF scheduler. In this section we provide platform information and a high-level overview of the implementation. We have open-sourced our implementation and made it available in an anonymized public repository [25]. The platform information and configurations are summarized in Table I.

Artifact               Parameters
Platform               ARM Cortex-A53 (Raspberry Pi 3)
System Configuration   1.2 GHz 64-bit processor, 1 GB RAM
Operating System       Debian Linux (Raspbian)
Kernel Version         Linux Kernel 4.9.48
Real-time Patch        PREEMPT_RT 4.9.47-rt37
Kernel Configuration   disabled; enabled
Boot Commands          =1
Run-time Variables     =1; =
MiBench Applications   Security: ; Consumer: ; Automotive:
Table I: Summary of the Implementation Platform

V-A Platform and Operating System

We used a Raspberry Pi 3 (RPi3) Model B (https://www.raspberrypi.org/products/raspberry-pi-3-model-b/) development board as the base platform for our implementation. The RPi3 is equipped with a 1.2 GHz 64-bit quad-core ARM Cortex-A53 CPU built on the Broadcom BCM2837 SoC (System-on-Chip). The RPi3 runs a vendor-supported open-source operating system, Raspbian (a variant of Debian Linux). We forked the Raspbian kernel and modified it (see the following sections) to implement the REORDER protocol. Since we focus on a single-core EDF scheduler in this paper, the multi-core functionality of the RPi3 was deactivated by disabling the corresponding flag during the Linux kernel compilation phase. The boot command file was also configured to further ensure single-core usage.

V-B Real-time Environment

The mainline Linux kernel does not provide hard real-time guarantees, even with its real-time scheduling policies. However, the Real-Time Linux (RTL) Collaborative Project [43] maintains a kernel (based on the mainline Linux kernel) for real-time purposes. This patched kernel (known as PREEMPT_RT) ensures real-time behavior by making the scheduler fully preemptible. We applied the PREEMPT_RT patch on top of vanilla Raspbian (kernel version 4.9.48) to enable real-time functionality. To further enable full preemption from the PREEMPT_RT patch, the corresponding flag was enabled during the kernel compilation phase. Furthermore, the system variable /proc/sys/kernel/sched_rt_runtime_us was set to -1 to disable throttling of the real-time scheduler. This setting allowed the real-time tasks to use the entire CPU utilization if required. (This system-variable change was made mainly for experimenting with REORDER; for most real use-cases, users can leave this variable untouched for more flexibility.) Also, the active core's scaling_governor was set to “performance” mode to disable dynamic frequency scaling during the experiments.

V-C Vanilla EDF Scheduler

Since Linux kernel version 3.14, an EDF implementation (SCHED_DEADLINE) has been available in the kernel [22]. Since our PREEMPT_RT-patched kernel supports SCHED_DEADLINE, we used it as the baseline EDF implementation and extended the scheduler to implement the REORDER protocol.

In Linux, the sched_setattr() system call is invoked to configure the scheduling policy for a given process. (Since there is no distinction between processes and threads in the Linux kernel's scheduler, for simplicity we use the terms process, thread and task interchangeably in what follows.) By design, SCHED_DEADLINE has the highest priority among all the scheduling policies supported by the kernel. It is also worth noting that the Linux kernel maintains a separate run queue (i.e., dl_rq) for SCHED_DEADLINE. Therefore, it is possible to extend SCHED_DEADLINE while keeping the other scheduling policies untouched. Note that this vanilla EDF scheduler is also used as the baseline for comparison with the REORDER protocol; the experimental results are presented in Section VI.
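For reference, a userspace process is admitted to SCHED_DEADLINE by passing a sched_attr structure to sched_setattr() (documented in sched(7)); glibc does not export the structure, so programs commonly declare it themselves. The helper below is our own illustrative sketch, not part of the REORDER patch:

```c
#include <stdint.h>
#include <string.h>

/* Userspace view of the kernel's sched_attr (see sched(7)). */
struct sched_attr {
    uint32_t size;
    uint32_t sched_policy;
    uint64_t sched_flags;
    int32_t  sched_nice;
    uint32_t sched_priority;
    /* SCHED_DEADLINE parameters, in nanoseconds */
    uint64_t sched_runtime;
    uint64_t sched_deadline;
    uint64_t sched_period;
};

#define SCHED_DEADLINE_POLICY 6  /* SCHED_DEADLINE in linux/sched.h */

/* Fill a sched_attr for an implicit-deadline task (WCET and period in ns). */
struct sched_attr make_dl_attr(uint64_t wcet_ns, uint64_t period_ns) {
    struct sched_attr a;
    memset(&a, 0, sizeof(a));
    a.size = sizeof(a);
    a.sched_policy = SCHED_DEADLINE_POLICY;
    a.sched_runtime = wcet_ns;
    a.sched_deadline = period_ns;  /* implicit deadline: D = T */
    a.sched_period = period_ns;
    return a;
}
```

A privileged process would then call syscall(SYS_sched_setattr, 0, &attr, 0) on itself to enter the SCHED_DEADLINE class.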

V-D Implementation of REORDER

V-D1 Task/Job-specific Variables

The Linux kernel defines a structure, sched_dl_entity, dedicated to SCHED_DEADLINE, to store task- and job-related variables (both run-time and static). These include the typical EDF task parameters (e.g., period, deadline and WCET).

To implement REORDER we added two variables, reorder_wcib and reorder_rib, both of type s64 (signed 64-bit integer), to store the WCIB of the task and to track the RIB of the task's active job at any given moment, respectively. Each task's reorder_wcib is initialized when the task is created. The job-specific run-time variable, reorder_rib, is initialized to the precomputed reorder_wcib every time a new job arrives. During run-time, the inversion budget is updated (i.e., decreased by the elapsed time in the case of a priority inversion) along with the other run-time variables in the scheduler's run-time update function. It is used to determine whether the inversion budget has been consumed and whether a random selection of a job is allowed at a scheduling point.
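The budget bookkeeping described above can be sketched as follows. This is a userspace simplification for illustration; the structure and function names here are ours, not the kernel patch's exact code:

```c
#include <stdint.h>

typedef int64_t s64;

/* Per-job view of the inversion budgets described in the text. */
struct dl_job {
    s64 wcib;  /* worst-case inversion budget (per task, precomputed) */
    s64 rib;   /* remaining inversion budget (per job, run-time) */
};

/* On job release, reset the RIB to the task's precomputed WCIB. */
void on_job_release(struct dl_job *j) {
    j->rib = j->wcib;
}

/* Called from the run-time update path with the time (ns) this job
 * spent delayed by a priority inversion since the last update. */
void on_inversion_tick(struct dl_job *j, s64 elapsed_ns) {
    j->rib -= elapsed_ns;
}

/* A job may be passed over for a random pick only while it still has
 * inversion budget left. */
int inversion_allowed(const struct dl_job *j) {
    return j->rib > 0;
}
```

Once a job's budget is exhausted, it must be scheduled in EDF order, which is how REORDER preserves deadline guarantees while still randomizing.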

Our implementation does not use any external libraries; only built-in kernel functions were used. The following listing shows some of the existing variables as well as the newly added ones (highlighted lines). The other variables added for the REORDER protocol are shown in Appendix -D.

struct sched_dl_entity {
/* task specific parameters */
  u64 dl_runtime;   // WCET
  u64 dl_deadline;  // relative deadline
  u64 dl_period;    // period
  s64 reorder_wcib; // worst-case inversion budget
/* task instance (job) specific parameters */
  s64 runtime;      // remaining runtime
  u64 deadline;     // absolute deadline
  s64 reorder_rib;  // remaining inversion budget
  ....
/* Other variables are omitted for readability. */
};

V-D2 Task Selection Function

The REORDER protocol was implemented as a function that selects a task and sets the next scheduling point based on the REORDER algorithm. It replaces the original pick function (i.e., the one that picks the task with the earliest absolute deadline from the run queue, viz., the leftmost node in the scheduler's red-black tree). This function is indirectly called by the main scheduler function whenever the next task for execution is needed.

V-D3 Randomization Function

We used the built-in random number generator in the kernel, accessed via the interface defined in linux/random.h. It is used by the task selection function to select a random task and a random execution interval for the next scheduling point, as explained in Algorithm 1.
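The random pick itself can be sketched as below; rand() stands in for the kernel's random number generator, and the single-candidate shortcut mirrors the optimization noted in our overhead measurements (the function name is ours):

```c
#include <stdlib.h>

/* Sketch of the randomized pick: from the jobs eligible at a scheduling
 * point (the earliest-deadline job plus any job with inversion budget
 * remaining), choose one index uniformly at random. */
int pick_random_candidate(int num_candidates) {
    if (num_candidates <= 0)
        return -1;   /* nothing ready: idle */
    if (num_candidates == 1)
        return 0;    /* skip the RNG call when only one job is eligible */
    return rand() % num_candidates;
}
```

In the fine-grained schemes a second random draw selects the execution interval before the next scheduling point, which is why up to two RNG calls appear per scheduling decision.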

V-D4 Schedule Timer

A high-resolution timer (i.e., an hrtimer) was used to trigger the additional scheduling points introduced by the REORDER protocol, as described in Algorithm 1 (Lines 22 and 23). Since this timer is scheduler-specific, it is stored in dl_rq as reorder_pi_timer. It is worth noting that an hrtimer is also used by the vanilla SCHED_DEADLINE scheduler to enforce task periods.

V-D5 Idle Time Scheduling

As introduced in Section III-D, idle times are considered when the idle time scheduling scheme is deployed. In our Linux kernel implementation, we utilized the native idle task maintained by the kernel for this purpose. The REORDER protocol yields its scheduling opportunities (to other scheduler classes) when the idle task in the REORDER protocol is selected and running. The subsequent scheduling point is enforced by reorder_pi_timer.

VI Evaluation

In this section, we evaluate REORDER using a prototype implemented on an embedded platform (i.e., the RPi3) running the real-time Linux kernel. We mainly focus on the overheads of computing and selecting a task at each scheduling point. Recall that our implementation is based on the vanilla EDF scheduler, SCHED_DEADLINE, in Linux. Therefore, we evaluate the overheads of the REORDER protocol by comparing them against SCHED_DEADLINE. The key observations from our performance evaluation are summarized below.

  • REORDER works in practice on realistic embedded RTS and is able to meet the real-time guarantees.

  • The randomization logic adds minimal scheduling overhead in the Linux kernel (Fig. 5). This overhead is arguably very small w.r.t. the task execution times (Fig. 6).

VI-1 Experimental Setup

We used the RPi3 platform introduced in Section V. The operating system was patched and configured to enable real-time capability, as shown in Table I. To keep the vanilla EDF unpolluted by our implementation, we used two separately compiled kernels during the experiments. In the vanilla EDF kernel, the scheduling functions remained untouched; only the code necessary to benchmark the overhead was added. Note that the PREEMPT_RT real-time patch was still applied to this kernel.

We used a mixture of MiBench benchmark programs [26] and synthetically generated tasks. The goal of the experiments was to evaluate the performance of both real and synthetic workloads on a real platform. A total of tasksets were tested. Each taskset was configured with a number of tasks from to ( groups), and a subset of the tasks was drawn from the MiBench programs (Table I). The utilization was set between and ( utilization groups, tasksets per group) when generating the tasksets. Each task's period was randomly selected from the range between ms and ms. Taskset parameters were randomly generated using the taskset generator from the simulation (see Section IV-C1). The generated parameters (i.e., the tasks' periods and WCETs) were multiples of . In the experiments, the actual execution time of a synthetic task was limited to (i.e., of its WCET) to reflect realistic task execution behavior. Both the vanilla EDF and the REORDER-based schedulers were tested with the same tasksets.

To profile the number of context switches, we directly recorded their occurrences in the scheduler. We did not use external profiling tools (e.g., [44]) because we only focus on the context switches that occur in the scheduler (for both the vanilla EDF and the randomized EDF); using such tools could include irrelevant context-switch counts from other coexisting Linux schedulers. To measure the execution times of the scheduling functions, the timekeeping function defined in linux/timekeeping.h was used. For the experiments, we let each taskset run for seconds. The measurements and the scheduling trace were stored in the kernel log for further analysis.

VI-2 Results

We first examine the execution time overhead of the scheduling functions. As mentioned in Section V-D2, the main algorithm of the REORDER protocol was implemented in a new task selection function that replaces the original scheduling function in SCHED_DEADLINE (vanilla EDF). As this was the main change between the two schedulers, our test focused on measuring the execution times of these two functions rather than the higher level scheduler function. Fig. 5 shows the results of this experiment.

From the figure, we observe that the mean execution cost of the pick function for vanilla EDF remains about the same across the taskset groups. This result is expected because vanilla EDF always selects the leftmost node of the Linux red-black tree (i.e., the run queue), which is independent of the number of tasks in a taskset and takes constant time. On the other hand, the mean execution cost for base randomization (without idle time randomization) is generally larger than for vanilla EDF, mainly due to the random-number generator calls (each taking an average of ns to generate a 64-bit random number) for the random task selections. When there is only one job in the run queue at a scheduling point, the base randomization scheme directly selects that job and omits the random-number call. In the case of idle time scheduling, REORDER (IT), since the idle task is considered at every scheduling point, the algorithm reaches the final step with a randomly selected task most of the time; this leads to an overhead roughly corresponding to one random-number call. For fine-grained switching with idle time randomization and unused time reclamation (i.e., REORDER (FT/UTR)), the overhead remains at a higher level since, in the worst case, two random-number calls are made at each scheduling point: one for the random task selection and the other for the random scheduling point. Consequently, the overhead contributed by the rest of the algorithm (whose complexity is discussed in Section III-F) is negligible compared to the randomization function.

Fig. 5: The execution time cost (in nanoseconds) of the scheduling functions of the vanilla EDF and the randomized EDF. The vanilla EDF bar represents the mean execution time of the original pick function, while the other three bars present the mean execution times of the function that carries out the randomization algorithm.
Fig. 6: The proportion of the cost of the REORDER protocol to the task execution times in the fine-grained switching with idle time randomization scheme. The randomization protocol overhead is roughly inversely proportional to the taskset's total utilization. The upper part of the figure is rescaled for better readability.

Next we examine the proportion of the scheduling overhead to the tasks' execution, by comparing the cumulative time cost of the randomization protocol with the cumulative task execution times over the test duration for each taskset. Here, we consider the fine-grained switching with idle time randomization and unused time reclamation scheme (REORDER (UTR)) as it has the largest overhead among all the schemes. Fig. 6 shows the mean proportion of the cost of the REORDER (UTR) protocol to the task execution times with varying total utilization. The results indicate that the overhead of the REORDER protocol is inversely proportional to the taskset's total utilization: since a taskset with higher utilization spends more of its time executing actual real-time jobs, it dilutes the influence of the overhead. The utilization group has an average overhead of , while it is for the utilization group. Considering that task WCET estimates typically include a generous overestimation [45], the overhead of the REORDER protocol is negligible for most RTS.

VII Discussion

Although we focused on the fact that REORDER reduces the predictability of conventional dynamic-priority schedulers, this idea improves the security posture of future RTS in a more fundamental way. For any scheduling policy, one can quantify the amount of information leaked from the system. This information can, for instance, help engineers analyze the potential vulnerability (to timing inference attacks) of a given system.

Consider a schedule output by the randomization protocol (referred to as the ground-truth process) and let the attacker's (potentially inaccurate) observation of the schedule be the observation. We can define the information leakage as the reduction in the adversary's uncertainty: the uncertainty about the ground-truth process minus the attacker's uncertainty (about the true schedule) after receiving the (fuzzy) observation. One can then use the mutual information [46, Ch. 2] between the ground-truth and the observation, maximized over the possible decoding strategies that an adversary can use, as a measure of leakage. A high dependency between the ground-truth and the observation leads to high information leakage, implying that the adversary can obtain a good estimate of the ground-truth. The framework developed in this work aims to increase the randomness of the scheduler's output and thereby reduce the dependency between the ground-truth and the observation: for the randomized scheduler, more true schedules are consistent with any given observation. We note that defining the exact relationship between the produced randomness and the leakage of the system requires further study; we intend to explore this aspect in future work.
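Written out in our own notation (with S the ground-truth schedule, Ŝ the observation, and g ranging over the adversary's decoding strategies), the leakage measure described above is the standard mutual-information identity:

```latex
\mathrm{Leakage} \;=\; \max_{g}\; I\big(S;\, g(\hat{S})\big),
\qquad
I(S;\hat{S}) \;=\; H(S) \;-\; H\big(S \mid \hat{S}\big).
```

That is, the leakage is exactly the prior uncertainty about the true schedule minus the uncertainty remaining after the best-decoded observation.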

While REORDER reduces the chances of successful timing inference attacks (and hence improves security), it is not free from trade-offs. For instance, as we observe in Figs. 5 and 6, the randomization logic adds extra overhead to the scheduler. In this work we did not attempt to derive an analytic upper bound on the number of context switches and leave this for future work.

Note that some (heavily utilized) tasksets cannot be randomized; in that case both EDF and REORDER output the same schedule. For instance, consider a taskset with the following parameters: and (with ). The taskset is schedulable by EDF since . However, in this case the budgets (i.e., WCIB) are always negative for all the tasks, e.g., . Therefore, at each scheduling point all the lower priority jobs will be excluded from priority inversion and only the earliest-deadline job will be selected – i.e., the same schedule as EDF.

VIII Related Work

Krüger et al. [23] proposed a combined online/offline randomization scheme to reduce determinism in time-triggered (TT) systems, where tasks are executed based on a pre-computed, offline, slot-based schedule. The scheduling paradigms of TT systems differ from those of dynamic-priority RTS. The closest line of work is TaskShuffler [24], where the authors proposed randomizing task schedules for fixed-priority (e.g., RM) systems. However, the methods developed in both of the above are not directly applicable to dynamic-priority systems. Unlike in fixed-priority systems, obfuscating schedules under EDF is not straightforward due to run-time changes in task priorities. Besides, as we describe in Appendix -B, the calculation of schedule entropy in prior work does not correctly capture the randomness in all scenarios. Prior work also assumes that all jobs always execute for their WCET, which may not be practical for real applications.

Zimmer et al. [47] propose mechanisms to detect the execution of unauthorized instructions by leveraging information obtained through static timing analysis. An architectural approach that creates hardware/software mechanisms to detect anomalies is studied by Yoon et al. [15]. Threats from covert timing channels in RTS have been addressed in prior research for fixed-priority systems [39]. A scheduler-level modification has been proposed [48] that alters thread blocks (which may leak information) to the idle thread – the aim is to avoid the exploitation of timing channels while maintaining real-time guarantees. The authors also developed locking protocols for preventing covert channels [49].

Issues regarding information leakage through storage timing channels (e.g., caches) in RTS with different security levels have been studied [8, 9] and further generalized [50]. The authors proposed a modification to the fixed-priority scheduling algorithm and introduced a state cleanup mechanism to mitigate information leakage through shared resources. However, this leakage prevention comes at the cost of reduced schedulability and is focused on fixed-priority systems. Besides, such mechanisms may not be completely effective against timing inference attacks that exploit deterministic scheduling behavior. REORDER breaks this inherent predictability of real-time scheduling by introducing randomness.

Bao et al. [51] model the behavior of the attacker and introduce a scheduling algorithm for a system with aperiodic tasks that have soft deadlines. They provide a trade-off between side-channel information leakage and the number of deadline misses. To the best of our knowledge, REORDER is the first work that focuses on obfuscating schedule timing information for dynamic-priority RTS with hard deadlines.

IX Conclusion

Malicious attacks on systems with safety-critical real-time requirements can be catastrophic, since attackers can destabilize the system by inferring critical task execution patterns. In this work we focus on a widely used, optimal real-time scheduling policy and make progress towards a defense against timing side-channel attacks. Using the approaches developed in this work (along with our open-source Linux kernel implementation), system engineers now have the flexibility, as part of their design, to secure such safety-critical systems. While our initial findings are promising, we believe this is only a start towards developing a unified secure real-time framework.

-a Calculation of an Upper Bound of the Response Time

Under EDF, the response time calculation involves computing the busy-period of a task's instance with deadline less than or equal to that of the instance [36]. (A busy-period [52] of is the interval within which jobs with priority higher than or equal to are processed throughout, but no such jobs are processed in or for a sufficiently small .) Real-time theory uses the notion of interference, i.e., the amount of time a ready job of is blocked due to the execution of other higher priority jobs. To calculate the WCIB of a task, we measure the worst-case interference from its higher priority jobs. Note that with arbitrary priority inversions, any job could be delayed by chain reactions, i.e., some low priority jobs in delay the higher priority jobs (e.g., ), which in turn delay – hence a job may need more than its WCRT as calculated by standard response time analysis [53, 35]. This phenomenon is known as the back-to-back hit [53] and can be addressed by considering an extra instance of the higher priority jobs. Therefore, without any assumptions on the execution patterns of , for a given release time we can calculate an upper bound on the interference [35, 36, 53] experienced by as follows:

(3)

Note that the extra execution times in Eq. (3) are added to the interference calculation to account for the back-to-back hit from higher priority jobs. For a given release time , the response time of [35, 36] (relative to ) is given by: , where denotes the workload of and is calculated by . Finally, we can compute the upper bound on the WCRT of as follows: , where is obtained by an iterative fixed-point search, i.e., for some iteration , where is an upper bound on the length of any busy-period. We can calculate this upper bound using the recurrence . This sequence converges to in a finite number of steps if the taskset is schedulable (i.e., ) [36].
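As a concrete sketch of the fixed-point search, the following computes the synchronous busy-period length with the standard recurrence (our own simplification in integer time units; it omits the back-to-back correction term discussed above):

```c
/* Iterative fixed point for the synchronous busy-period length L:
 *   L^{(k+1)} = sum_i ceil(L^{(k)} / T_i) * C_i,
 * starting from L^{(0)} = sum_i C_i. The iteration converges in a
 * finite number of steps when total utilization is at most 1. */
long busy_period(const long *wcet, const long *period, int n) {
    long L = 0, prev;
    for (int i = 0; i < n; i++)
        L += wcet[i];                  /* L^{(0)} */
    do {
        prev = L;
        L = 0;
        for (int i = 0; i < n; i++) {
            long jobs = (prev + period[i] - 1) / period[i];  /* ceil */
            L += jobs * wcet[i];
        }
    } while (L != prev);               /* stop at the fixed point */
    return L;
}
```

For example, tasks with WCETs {2, 3} and periods {4, 8} converge to a busy period of 7 time units after two iterations.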

-B Limitations of Existing Entropy Calculation Approach

In order to evaluate the performance of a randomized scheduler, we need a measure of the randomness of the scheduler's output. For a taskset with hyperperiod of length , define the -dimensional random vector representing the schedule of hyperperiod , where the random variable denotes the task (including the idle task) scheduled in the -th slot of hyperperiod . Note that the random vectors for different values of are independent and identically distributed (i.i.d.). Therefore, the average randomness of the whole output equals the randomness within a single hyperperiod.

Prior work [24] defined the entropy of the schedule using Shannon entropy [46, Ch. 2] as the measure of randomness, i.e., , with the assumption that . There are two major issues with calculating the schedule entropy in this manner.

First, in order to obtain this entropy, we need the distribution of the schedule vector – calculating this distribution has exponential complexity and is not computationally tractable in practice. Estimating it also requires a very large number of samples. To address this problem, prior work [24] proposed using the sum of the entropies of the individual random variables as the measure of randomness (referred to as the upper-approximated schedule entropy): where . Note that for , choosing gives and choosing outputs , as . The main limitation of the upper-approximated schedule entropy is that it completely ignores the regularities that exist due to dependencies among the random variables. For instance, suppose a taskset contains two tasks and the schedule of the first slots in individual hyperperiods is , with each vector occurring with equal probability. Consider another schedule that contains all possible vectors of and of length with equal probability. Then the upper-approximated entropies of the two schedules are equal, even though the randomness of the latter is much higher (i.e., while ). Therefore, the upper-approximated entropy cannot capture the randomness correctly.

Second, consider an instance where many of the schedules produced in different hyperperiods have very similar patterns in the first few slots and different patterns in the later slots (or vice versa). In such cases the measure cannot capture the similarities and treats the observed hyperperiods as distinct – this leads us to search for dissimilarities in intervals smaller than the full hyperperiod length. In what follows we propose an entropy measure that captures the randomness of a schedule using the concept of limited-size intervals; it resolves both of the aforementioned issues and provides a better way to quantitatively compute randomness.

-C Comparison With True and Approximate Entropy

Recall that obtaining the true entropy is not feasible in practice since it has exponential asymptotic complexity. Therefore, we compare the approximate entropy with the true entropy by measuring the correlation observed for small tasksets. We generate tasksets of tasks, where the task utilizations and WCETs are generated using the methods from Section IV-C1. Each taskset has a common hyperperiod (allowing us to evaluate enough schedules in a reasonable time). For each taskset we observe the schedule for hyperperiods and estimate the true entropy. For a fixed taskset, generating more unique schedules leads to higher actual entropy since more tasks appear at each slot. For the approximate entropy, we set the interval length and the dissimilarity threshold by trial and error and measure the correlation.

Fig. 7: The correlation between true and approximate entropy (the values are normalized to ).

The true and approximate entropy do not depend on the length of the hyperperiod – instead, the approximation error (as seen in Fig. 7) is due to the assumption of independence between intervals. While we observe that the correlation between the true and approximate entropy is relatively high (e.g., ), the approximated schedule entropy should be used to compare the relative randomness of two schedules (which is also the focus of our evaluation).

-D REORDER Variables in the Real-time Linux Implementation

The implementation of the REORDER protocol on the Linux kernel modifies four files:

  • include/linux/sched.h (task/job-specific variables introduced in Section V-D1).

  • kernel/sched/sched.h (scheduler-specific variables, as presented below).

  • kernel/sched/core.c (scheduling functions that govern all schedulers in the kernel).

  • kernel/sched/deadline.c (scheduling functions for SCHED_DEADLINE – the main REORDER algorithms were implemented here).

Besides the task-specific variables introduced in Section V-D1, there are scheduler-specific variables declared and used in our implementation, as shown in the listing below.

struct dl_rq {
  ....
/* scheduler specific parameters */
  struct hrtimer reorder_pi_timer; // schedule timer
  u64 reorder_pi_timer_start_time; // timer start
  bool reorder_idle_time_acting;   // idle status
  // scheme (Base, IT, FT, UTR)
  enum reorder_scheduling_mode reorder_mode;
  ....
};

The reorder_mode variable is used to determine the randomization scheme used by the scheduler. The enumeration of the scheme options is defined in the same source file (kernel/sched/sched.h) and shown in the following listing.

enum reorder_scheduling_mode {
  REORDER_NORMAL,      // task only randomization
  REORDER_IDLE_TIME,   // + idle time scheduling
  REORDER_FINE_GRAINED,// + fine-grained switching
  REORDER_RECLAMATION  // + unused time reclamation
};

References

  • [1] R. Wilhelm, J. Engblom, A. Ermedahl, N. Holsti, S. Thesing, D. Whalley, G. Bernat, C. Ferdinand, R. Heckmann, T. Mitra, F. Mueller, I. Puaut, P. Puschner, J. Staschulat, and P. Stenström, “The worst-case execution-time problem—overview of methods and survey of tools,” ACM TECS, vol. 7, no. 3, pp. 36:1–36:53, 2008.
  • [2] C.-Y. Chen, S. Mohan, R. B. Bobba, R. Pellizzoni, and N. Kiyavash, “How to precisely whack that mole: Predicting task executions in real-time systems using a novel (scheduler) side-channel,” 2018. [Online]. Available: https://arxiv.org/abs/1806.01814
  • [3] C.-Y. Chen, A. Ghassami, S. Nagy, M.-K. Yoon, S. Mohan, N. Kiyavash, R. B. Bobba, and R. Pellizzoni, “Schedule-based side-channel attack in fixed-priority real-time systems,” Tech. Rep., 2015.
  • [4] J. Westling, “Future of the Internet of things in mission critical applications,” 2016.
  • [5] N. Falliere, L. O. Murchu, and E. Chien, “W32. Stuxnet dossier,” White paper, Symantec Corp., Security Response, vol. 5, p. 6, 2011.
  • [6] R. M. Lee, M. J. Assante, and T. Conway, “Analysis of the cyber attack on the ukrainian power grid,” SANS Industrial Control Systems, 2016.
  • [7] M.-K. Yoon, S. Mohan, J. Choi, J.-E. Kim, and L. Sha, “SecureCore: A multicore-based intrusion detection architecture for real-time embedded systems,” in IEEE RTAS, 2013, pp. 21–32.
  • [8] S. Mohan, M.-K. Yoon, R. Pellizzoni, and R. B. Bobba, “Real-time systems security through scheduler constraints,” in IEEE ECRTS, 2014, pp. 129–140.
  • [9] R. Pellizzoni, N. Paryab, M.-K. Yoon, S. Bak, S. Mohan, and R. B. Bobba, “A generalized model for preventing information leakage in hard real-time systems,” in IEEE RTAS, 2015, pp. 271–282.
  • [10] M. Hasan, S. Mohan, R. B. Bobba, and R. Pellizzoni, “Exploring opportunistic execution for integrating security into legacy hard real-time systems,” in IEEE RTSS, 2016, pp. 123–134.
  • [11] K. Jiang, L. Batina, P. Eles, and Z. Peng, “Robustness analysis of real-time scheduling against differential power analysis attacks,” in IEEE ISVLSI, 2014, pp. 450–455.
  • [12] J. Son and J. Alves-Foss, “Covert timing channel analysis of rate monotonic real-time scheduling algorithm in MLS systems,” in IEEE Inf. Ass. Wkshp, 2006, pp. 361–368.
  • [13] D. Agrawal, B. Archambeault, J. R. Rao, and P. Rohatgi, “The em side—channel (s),” in International Workshop on Cryptographic Hardware and Embedded Systems.   Springer, 2002, pp. 29–45.
  • [14] H. Bar-El, H. Choukri, D. Naccache, M. Tunstall, and C. Whelan, “The sorcerer’s apprentice guide to fault attacks,” Proc. of the IEEE, vol. 94, no. 2, pp. 370–382, 2006.
  • [15] S. Mohan, S. Bak, E. Betti, H. Yun, L. Sha, and M. Caccamo, “S3A: Secure system simplex architecture for enhanced security and robustness of cyber-physical systems,” in ACM HiCoNS, 2013, pp. 65–74.
  • [16] T. Xie and X. Qin, “Improving security for periodic tasks in embedded systems through scheduling,” ACM TECS, vol. 6, no. 3, p. 20, 2007.
  • [17] M. Lin, L. Xu, L. T. Yang, X. Qin, N. Zheng, Z. Wu, and M. Qiu, “Static security optimization for real-time systems,” IEEE Trans. on Indust. Info., vol. 5, no. 1, pp. 22–37, 2009.
  • [18] M. M. Z. Zadeh, M. Salem, N. Kumar, G. Cutulenco, and S. Fischmeister, “SiPTA: Signal processing for trace-based anomaly detection,” in ACM EMSOFT, 2014.
  • [19] C. L. Liu and J. W. Layland, “Scheduling algorithms for multiprogramming in a hard-real-time environment,” JACM, vol. 20, no. 1, pp. 46–61, 1973.
  • [20] “Erika Enterprise,” http://erika.tuxfamily.org/drupal.
  • [21] “Real-time executive for multiprocessor systems (RTEMS),” https://www.rtems.org.
  • [22] D. Faggioli, F. Checconi, M. Trimarchi, and C. Scordino, “An EDF scheduling class for the Linux kernel,” in Real-Time Linux Wkshp