On the benchmarking of partitioned real-time systems

07/15/2020 · by Felipe Gohring de Magalhaes, et al. · Corporation de l'ecole Polytechnique de Montreal

Avionic software is subject to strict real-time, determinism and safety constraints. Software designers face several challenges, one of them being the estimation of the worst-case execution time (WCET) of applications, which dictates the execution time of the system. A pessimistic WCET estimation can lead to poor system performance, while an over-optimistic estimation can lead to deadline misses, breaking one of the basic constraints of critical real-time systems (RTS). Partitioned systems are a special category of real-time systems, employed by the avionics community to deploy avionic software. The ARINC-653 standard is a common avionic standard that employs the concept of partitions. It defines partitioned architectures in which one partition should never directly interfere with another. Assessing the WCET of general-purpose RTSs is achievable using one of the many published benchmarks or WCET estimation frameworks. In contrast, partitioned RTSs are special cases, for which common benchmark tools may not capture all the relevant metrics. In this document, we present SFPBench, a generic benchmark framework for the assessment of performance metrics on partitioned RTSs. The general organization of the framework and its applications are illustrated, as well as a use case employing SFPBench on an industrial partitioned operating system (OS) executing on a Commercial Off-The-Shelf (COTS) processor.


1 Introduction

Avionic applications must follow strict certification rules. They are hard real-time systems, meaning they are considered to have failed if they do not complete their functions within stringent deadlines. Predictability and determinism are crucial properties of such systems and major components of certification for aerospace systems, since these are among the most common critical systems to which civilians have daily access. What differentiates avionic systems from others is their catastrophe potential. While a failing ship can stop and wait for maintenance, and a problematic car can park on the side of the road, usually with little or no risk involved, a failing airplane cannot simply stop in the middle of a trip for repairs.

Operating Systems (OSs) provide an interface between software applications and their host hardware. One of their main roles is to manage the system's resources, making their usage transparent to the user. This enables applications to run on hardware systems without being aware of those systems' specifications, easing application development. Real-Time Operating Systems (RTOSs) are operating systems designed for real-time management; they are used in critical systems where a given functionality must be performed within a given time interval. An RTOS must ensure that the worst-case execution time (WCET) of each task is respected. ARINC-653 [arinc653] is an avionic standard that defines a general-purpose APplication/EXecutive (APEX) interface (API) between the Core Software (CSW) of an Avionics Computer Resource (ACR) and the application software. ARINC-653 compliant RTOSs extend the functionality of regular RTOSs by enabling time and space isolation of applications.

RTOS performance is assessed using either intrusive or non-intrusive methods. Intrusive assessment is usually precise, but requires access to all modules of the RTOS (i.e., the source code): performance metrics are read directly at execution time, relying on defined points in the RTOS modules. Non-intrusive assessment is used when not all modules are available (i.e., only the APIs are available, but not their implementation); the performance metrics are then obtained using only API calls, collecting the metrics before and after their execution. The main advantage of intrusive assessment is seamless access to specific points of the OS, while non-intrusive assessment eases the portability of tests, enabling their generalization. WCET estimation can be done statically or dynamically [wcet]. For safety reasons, a margin is always added to the estimated WCET. A more precise WCET estimation makes it possible to reduce this margin, allowing more tasks to be scheduled and improving hardware resource usage. The determinism of execution time can be measured by its standard deviation: the smaller the standard deviation, the more deterministic the system's execution time. Determinism is critical for real-time systems, since it guarantees the timing behavior of the system.

State-of-the-art RTOSs (e.g., VxWorks [vxworks], PikeOS [pikeos], Integrity178 [integrity]) are characterized by performance metrics showing their efficiency, obtained using in-house assessments. Although the published results show precise metrics, it is fairly difficult for third-party users and developers to replicate them in the context of their own applications. To tackle this limitation, different solutions seek to provide standard benchmark frameworks, replicable by any provider or user. These establish a common point for providers to assess their RTOS performance and compare it with competitors. Nevertheless, ARINC-653 compliant RTOSs cannot profit from these benchmark frameworks due to their special characteristics: ARINC-653 systems have time and space division and follow specific design recommendations, such as flushing the entire cache memory at each partition switch, which differentiates them from general RTOSs.

This document presents an open-source benchmark framework for ARINC-653 compliant RTOSs, called the Straight-Forward Partitions-Aware Benchmark framework (SFPBench). Different classes of benchmark applications are developed in order to assess different metrics of the RTOS. The tests are organized as grey-box applications relying solely on the standard ARINC-653 APEX interface. The benchmark is meant to be openly accessible and modifiable by the community, and is hosted in a public repository (i.e., GitHub) [github_sff].

The remainder of this document is organized as follows. The next section presents an overview of the ARINC-653 standard. Section 3 introduces the general organization of SFPBench and its applications. Section 4 describes the validation hardware as well as the system configuration parameters. Section 5 presents the results obtained using SFPBench with an industrial RTOS. Finally, Section 6 concludes this document and points to possible extensions of SFPBench's functionality.

2 Avionics systems and the ARINC-653 standard

Former avionic systems relied on stand-alone architectures to deploy applications. The main motivation was isolation: each component of the system ran in its own domain, without any interference between applications. At the beginning of the twenty-first century, avionics systems underwent a transition from federated, isolated architectures to Integrated Modular Avionics (IMA) architectures [4391842]. The motivation was to reduce Size, Weight, and Power (SWaP) issues, which are common to most embedded systems.

Figure 1 illustrates the difference between applications deployed with federated architectures and with IMA. In a federated architecture, each application has its own hardware, called a Line Replaceable Unit (LRU). LRUs can be seen as a set of interconnected boxes. The drawback of these easily replaceable hardware units is the cost of hardware redundancy. This is one of the reasons IMA architectures are used today: a single computing unit can support multiple applications, thereby reducing the cost of redundancy.

Figure 1: Federated and IMA architectures comparison.

One of the most common standards used by the avionics community is ARINC-653 [arinc653]. It is a standard for partitioned RTOSs that gives specifications to ensure the isolation of applications. According to this standard, partitioning must be done in both space and time.

Space partitioning

Each partition is isolated with respect to hardware usage, such as memory space: each partition has a set of addresses in memory and is the only one with the right to access them.

Time partitioning

The CPU time is divided into several time windows, each allocated to a partition. During one of its time windows, a partition is the only one executing on the CPU. All partitions are allocated time windows within a period of time called the major time frame, and the schedule is repeated every major time frame.
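The repeating window table described above can be sketched in plain C. The window lengths and partition ids below are hypothetical, chosen only to illustrate how the owning partition is found from the current time modulo the major time frame:

```c
#include <assert.h>

/* Sketch (hypothetical configuration): ARINC-653 time partitioning as a
 * static table of time windows repeated every major time frame. */
typedef struct {
    unsigned start_ms;     /* window offset within the major time frame */
    unsigned length_ms;
    int      partition_id;
} time_window_t;

/* Example schedule: 100 ms major time frame, three partitions. */
static const time_window_t schedule[] = {
    {  0, 40, 1 },   /* partition 1: [0, 40) ms   */
    { 40, 30, 2 },   /* partition 2: [40, 70) ms  */
    { 70, 30, 3 },   /* partition 3: [70, 100) ms */
};
static const unsigned major_frame_ms = 100;

int owning_partition(unsigned now_ms)
{
    unsigned t = now_ms % major_frame_ms;   /* schedule repeats each frame */
    for (unsigned i = 0; i < sizeof schedule / sizeof schedule[0]; i++)
        if (t >= schedule[i].start_ms &&
            t <  schedule[i].start_ms + schedule[i].length_ms)
            return schedule[i].partition_id;
    return -1;  /* unreachable if the windows cover the whole frame */
}
```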

The standard also specifies services that the RTOS must offer. These services are called APEX services.

APEX API

The services offered by this API are the ones used to create the ARINC-653 partitions and perform synchronization and communication between them. An ARINC-653 partition can be composed of several ARINC-653 processes that share the partition’s context. An analogy with POSIX’s API would be that ARINC-653 partitions are POSIX processes and ARINC-653 processes are POSIX threads.

Interpartition communication

ARINC-653 specifies how two partitions can communicate. Communication is done through messages, using channels or ports. There are two modes of communication:

  • Sampling mode: only one message is stored in the source port; it is overwritten each time the source partition writes. This mode is useful when a partition requires only the latest value of the data.

  • Queuing mode: messages are stored in FIFO order. Each partition (sender and receiver) is responsible for handling the situations in which the queue is full or empty.
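The two modes can be modeled in a few lines of plain C. This is a behavioral sketch, not the APEX port services themselves: a sampling port keeps only the latest message, while a queuing port keeps a bounded FIFO whose full/empty cases the caller must handle.

```c
#include <assert.h>

/* Sampling: each write overwrites the single stored message. */
typedef struct { int msg; int valid; } sampling_port_t;
void sampling_write(sampling_port_t *p, int msg) { p->msg = msg; p->valid = 1; }
int  sampling_read (const sampling_port_t *p)    { return p->msg; }

/* Queuing: bounded FIFO; sender/receiver handle full/empty themselves. */
#define QDEPTH 4
typedef struct { int buf[QDEPTH]; int head, tail, count; } queuing_port_t;

int queuing_send(queuing_port_t *q, int msg)
{
    if (q->count == QDEPTH) return -1;           /* full: caller decides */
    q->buf[q->tail] = msg;
    q->tail = (q->tail + 1) % QDEPTH;
    q->count++;
    return 0;
}

int queuing_receive(queuing_port_t *q, int *msg)
{
    if (q->count == 0) return -1;                /* empty: caller decides */
    *msg = q->buf[q->head];
    q->head = (q->head + 1) % QDEPTH;
    q->count--;
    return 0;
}
```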

Intrapartition communication

There are several communication means between processes of the same partition:

  • Blackboard: very similar to sampling mode; instead of being between partitions, it is between ARINC-653 processes.

  • Buffers: very similar to queue mode; instead of being between partitions, it is between ARINC-653 processes.

  • Semaphore: ARINC-653 semaphores conform to the classical definition. WAIT_SEMAPHORE is called to wait on a semaphore: if the semaphore's value is not zero, the value is decremented and the process continues; otherwise the process is blocked until the semaphore is incremented. SIGNAL_SEMAPHORE increments the semaphore's value, potentially freeing a blocked process. Waiting processes are queued in FIFO order and freed one at a time.

  • Events: processes can wait on custom events, which have two states ("up" if the event occurred, "down" if not). All processes waiting on an event in the "down" state are blocked until they time out or the event's state changes. When an event is "up", all waiting processes are freed at the same time, making all of them candidates for scheduling, unlike with semaphores.

  • Mutex: like semaphores, ARINC-653 mutexes conform to the classical definition. A mutex can be owned by only one process at a time. Waiting processes are queued in FIFO order, as with semaphores.
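The difference between the two wake-up policies above (one waiter at a time for semaphores, all waiters at once for events) can be modeled in plain C. This is a behavioral sketch, not the APEX implementation; process ids are illustrative:

```c
#include <assert.h>

/* Model: a semaphore frees waiting processes one at a time in FIFO order. */
#define MAXWAIT 8
typedef struct { int value; int waiters[MAXWAIT]; int nwait; } sem_model_t;

int sem_wait_model(sem_model_t *s, int pid)
{
    if (s->value > 0) { s->value--; return 1; }   /* continues immediately */
    s->waiters[s->nwait++] = pid;                 /* blocked, FIFO queued */
    return 0;
}

int sem_signal_model(sem_model_t *s)
{
    if (s->nwait == 0) { s->value++; return -1; } /* nobody waiting */
    int freed = s->waiters[0];                    /* free the oldest waiter */
    for (int i = 1; i < s->nwait; i++)
        s->waiters[i - 1] = s->waiters[i];
    s->nwait--;
    return freed;
}

/* Model: an event set to "up" frees every waiter at once. */
int event_set_model(const int *waiters, int nwait, int *ready_out)
{
    for (int i = 0; i < nwait; i++)
        ready_out[i] = waiters[i];                /* all become candidates */
    return nwait;
}
```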

Health Monitor

The Health Monitor is a feature of the RTOS that must handle unexpected errors during the execution of partitions, such as deadline misses or arithmetic errors. Based on user configuration, the Health Monitor decides how the partition must behave: shut down or reset the partition, ignore the error, or try to recover from it. The ARINC-653 standard requires the RTOS to provide a Health Monitor.

3 SFPBench - Collaborative open source benchmark framework

Benchmarking real-time systems is a widely studied subject, where several approaches are possible. Different solutions opt to exploit specific metrics [bench1] or require specialized libraries or system modifications [bench2][bench3]. These design decisions affect the accuracy of the evaluations obtained with each solution. Another important aspect is the type of access the end-user has to the target system implementation. Usually, the end-user deploying an application on the target system only has access to the RTOS services through APIs, which mask the underlying design. This imposes several restrictions on testing the system, as the abstraction level can be too high to capture the desired metrics. Ideally, benchmark applications should require little or no modification of the host RTOS, so that the collected results are less prone to interference. In [bench4], a benchmark framework is presented where no modification of the host RTOS and no specific libraries are demanded; the set of tests covers basic services, such as task switch time and semaphore latency. A set of benchmark applications is introduced in [bench3]; these applications require neither additional libraries nor specific RTOS services, such as mutexes or pthreads, serving as good metrics to establish the worst-case execution time of host systems. Although these may serve as good common baseline applications for performance assessment, their contribution is limited by their conception. Most importantly, these solutions are not meant for ARINC-653 compliant RTOSs, jeopardizing their usage in the context of this document.

Respecting the basic premise of ARINC-653 systems, in which partitions are memory and time isolated, the SFPBench framework is designed to stress specific points of such systems. The framework provides several benchmark applications developed to assess different aspects of the RTOS, such as partition and process switch times and semaphore latency, as well as covering ARINC-653 APEX call latency. The framework is organized such that the applications do not require any modification to execute on the target RTOS, hence eliminating any source of interference at the application level. To enable this, an abstraction layer is provided as part of the framework, where RTOS-specific definitions are linked. Run-time macros (e.g., INIT_TIME_MEASURE and FINISH_TIME_MEASURE) are provided with the porting layer so that new applications and performance metrics can easily be added to the original benchmark set. The measurements rely on the hardware clock tick of the system, thus enhancing measurement accuracy. Figure 2 illustrates the placement of the SFPBench abstraction layer in an ARINC-653 compliant system. As the figure shows, the abstraction layer is placed between the ARINC-653 RTOS and the deployed application.

Figure 2: SFPBench organization layout, showing the abstraction layer between the ARINC-653 implementation layer and deployed application(s).

The main purpose of this layer is to abstract the API calls used by the test application. This enables the application to remain completely agnostic of the host RTOS, making it possible to use SFPBench on different target RTOSs without modifying the application. The only requirement is to update the abstraction layer, mapping the host RTOS API to the standard defined for SFPBench.
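As an illustration of how such a porting layer can map the measurement macros onto a target clock source, the sketch below defines hypothetical macro bodies over a perf_get_hw_ticks() hook. Both the macro bodies and the hook name are assumptions made for this example; the actual definitions live in the SFPBench porting layer [github_sff]. Here the hook is stubbed with a counter so the sketch is self-contained.

```c
#include <assert.h>

/* Hypothetical target hook: on real hardware this would read the CPU
 * timebase register; here it is stubbed with a counter for illustration. */
static unsigned long fake_ticks;
unsigned long perf_get_hw_ticks(void) { return fake_ticks; }

/* Assumed macro bodies (the real ones are target-specific): */
#define DECLARE_TIME_MEASURE()  unsigned long _t0 = 0, _elapsed = 0;
#define INIT_TIME_MEASURE()     _t0 = perf_get_hw_ticks();
#define FINISH_TIME_MEASURE()   _elapsed = perf_get_hw_ticks() - _t0;

unsigned long measure_demo(void)
{
    DECLARE_TIME_MEASURE()
    INIT_TIME_MEASURE()
    fake_ticks += 113;          /* stands in for the code under test */
    FINISH_TIME_MEASURE()
    return _elapsed;            /* elapsed hardware ticks */
}
```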

Listing 1 presents an example of a function definition in the abstraction layer. First, a generic function to create processes is declared: perf_create_task. As the target system is an ARINC-653 compliant RTOS, the function body (between lines 1 and 37) implements the default process creation. The application main function (the partition entry point) can simply call perf_create_task, and the process creation is performed in the abstraction layer, as illustrated between lines 39 and 47.

1perf_task_handle_t perf_create_task(perf_task_entry_t task_entry,
2                                    char task_name[4],
3                                    unsigned int prio)
4{
5  perf_task_handle_t thandle;
6  RETURN_CODE_TYPE errCode = NO_ERROR;
7
8  PROCESS_ATTRIBUTE_TYPE processAttributes;
9  processAttributes.BASE_PRIORITY = (PRIORITY_TYPE)prio;
10  processAttributes.DEADLINE = SOFT;
11  processAttributes.ENTRY_POINT = (SYSTEM_ADDRESS_TYPE)task_entry;
12  strncpy(processAttributes.NAME, task_name, MAX_NAME_LENGTH);
13  processAttributes.PERIOD = INFINITE_TIME_VALUE;
14  processAttributes.STACK_SIZE = 1024;
15  processAttributes.TIME_CAPACITY = INFINITE_TIME_VALUE;
16  CREATE_PROCESS(&processAttributes, &thandle, &errCode);
17
18  if(errCode != NO_ERROR)
19  {
20    PERF_PRINT_STRING("Just had an error creating the task:  ");
21    PERF_PRINT_NUMBER(errCode);
22    PERF_PRINT_EOL();
23  }
24
25  else
26  {
27    START(thandle, &errCode);
28    if(errCode != NO_ERROR)
29    {
30      PERF_PRINT_STRING("Just had an error starting the task:  ");
31      PERF_PRINT_NUMBER(errCode);
32      PERF_PRINT_EOL();
33    }
34  }
35
36  return thandle;
37}
38////
39void main()
40{
41  RETURN_CODE_TYPE retCode = NO_ERROR;
42
43  perf_create_task(RunTest, "Test1", BASE_PRIO);
44
45  SET_PARTITION_MODE(NORMAL, &retCode);
46
47}
Listing 1: Abstraction layer function example.

To collect execution metrics, five macros are defined in the abstraction layer:

  • DECLARE_TIME_MEASURE()

  • INITIALIZE_TIME_VARS(name)

  • INIT_TIME_MEASURE()

  • FINISH_TIME_MEASURE()

  • VALIDATE_TIME_MEASURE(std_variation_enable)

These macros can be used to collect performance metrics for any application or function, whether provided by the framework or not. Their usage is straightforward: the code to be measured is placed between the INIT_TIME_MEASURE() and FINISH_TIME_MEASURE() macros. Listing 2 presents a usage example. In the example, the main() function creates a process called RunSobelTests (line 52). The process body implements the execution of the test: control variables are declared (line 59); performance variables are declared and initialized for the "SOBEL" test (lines 60 and 61); the execution time recording is started before the target functions are called (line 69) and the measured functions are executed (lines 70 and 71); after their execution, the time collection is stopped and the results are computed and validated (lines 72 and 73); after executing the same test a given number of times (configured in the for loop on line 62), the test is finalized and the obtained results printed (line 75). Note that the implementation of the tested functions (do_gaussian and do_sobel) and most control variables are omitted for readability.

49void main()
50{
51  RETURN_CODE_TYPE retCode = NO_ERROR;
52  perf_create_task(RunSobelTests, "SOBEL", BASE_PRIO);
53  SET_PARTITION_MODE(NORMAL, &retCode);
54  while(1);
55}
56
57void  RunSobelTests ()
58{
59  int32_t runs, j;
60  DECLARE_TIME_MEASURE()
61  INITIALIZE_TIME_VARS("SOBEL")
62  for(runs = 0; runs < SOBEL_RUNS_PER_MEASURE; runs++)
63    {
64      for (j = 0; j < height * width; j++)
65        {
66          image[j] = (uint8_t) rand();
67        }
68
69      INIT_TIME_MEASURE()
70      do_gaussian();
71      do_sobel();
72      FINISH_TIME_MEASURE()
73      VALIDATE_TIME_MEASURE(1)
74    }
75  PRINT_PERFORMANCE_INFO()
76  perf_task_suspend_self();
77  while(1);
78}
Listing 2: Performance macros usage example.

The SFPBench framework - at the time of writing - provides eighteen (18) independent test applications, divided into three groups: grey-box applications, APEX API and complete applications. Each group is presented next.

3.1 Grey-box applications

These test applications are developed with the intent of testing specific RTOS metrics, such as the general overhead of using a given service (e.g., a semaphore) in the context of an application. This group is composed of the following applications:

  • Process Switch creates 'n' processes and collects the time between the end of one process and the beginning of the next. This enables the assessment of the process switch time of the target RTOS, where the RTOS overhead to switch between two processes can be evaluated;

  • Mutex Acquire and Mutex Release create different processes that use mutex primitives to define a critical region. The collected time refers to the time it takes for one process to request access to a mutex and for another process to release it;

  • Mutex Acquire 2 and Mutex Release 2, similarly to the previous test, create different processes that use mutex primitives to define a critical region. The difference is the moment at which time collection is enabled: in this test, the total time of the execution loop is collected;

  • Mutex Workload creates two processes that dispute a critical region. When in the critical region, the process with access granted performs a series of operations, emulating the processing of data in a real critical region;

  • Sem Wait and Sem Signal create different processes that use semaphore primitives to synchronize the execution of the application. The collected time refers to the time it takes for one process to wait on a semaphore and for another process to release it;

  • Priority Sem creates processes that compete for a semaphore, each with a different priority;

  • Sem Signal 2 and Sem Wait 2, similarly to the previous test, create different processes that use semaphore primitives to synchronize the execution of the application. The difference is the moment at which time collection is enabled: in this test, the total time of the execution loop is collected;

  • Sem Workload creates two processes that use semaphore primitives to synchronize the execution of the application. When a process passes the semaphore wait primitive, it performs a series of operations, emulating the processing of data, and;

  • Partition Switch measures the time between the end of one partition's time window and the beginning of the next. This enables the assessment of the partition switch time of the target RTOS, where the RTOS overhead to switch between two partitions can be evaluated.

3.2 APEX API

This group has one application that is designed to individually assess the performance of each one of the APEX calls defined in the ARINC-653 Part 1 documentation. The complete list of calls is presented in Table 1.

GET_PARTITION_STATUS CREATE_SEMAPHORE CREATE_BUFFER CREATE_BLACKBOARD
READ_BLACKBOARD GET_BUFFER_ID SEND_BUFFER RECEIVE_BUFFER
DISPLAY_BLACKBOARD WAIT_SEMAPHORE SET_PRIORITY GET_MY_ID
GET_SEMAPHORE_STATUS CREATE_EVENT SET_EVENT GET_EVENT_ID
GET_CURRENT_TICKS CREATE_QUEUING_PORT GET_QUEUING_PORT_ID GET_QUEUING_PORT_STATUS
SEND_QUEUING_MESSAGE RECEIVE_QUEUING_MESSAGE WRITE_SAMPLING_MESSAGE READ_SAMPLING_MESSAGE
SIGNAL_SEMAPHORE GET_PROCESS_STATUS WAIT_EVENT GET_SAMPLING_PORT_ID
GET_SEMAPHORE_ID GET_PROCESS_ID GET_EVENT_STATUS CREATE_SAMPLING_PORT
UNLOCK_PREEMPTION LOCK_PREEMPTION
Table 1: Covered APEX Calls.

The method used to collect metrics for these calls is the same as illustrated in Section 3. Listing 3 presents an example of data collection for the CREATE_BLACKBOARD call. In the example, the measurement macros are placed before and after the call, collecting the time for two calls (lines 85-87 and 95-97) and storing the results.

80void test_create_blackboard(uint8_t calc_dev) {
81  RETURN_CODE_TYPE   err_code;
82  DECLARE_TIME_MEASURE()
83  INITIALIZE_TIME_VARS("BLACKBOARD");
84
85  INIT_TIME_MEASURE();
86  CREATE_BLACKBOARD ("BB", 64, &black_id1, &err_code);
87  FINISH_TIME_MEASURE();
88  if(err_code == NO_ERROR){VALIDATE_TIME_MEASURE(calc_dev) }
89  else{
90      PERF_PRINT_STRING("BB Create 1: ");
91      PERF_PRINT_NUMBER(err_code);
92      PERF_PRINT_EOL();
93  }
94
95  INIT_TIME_MEASURE();
96  CREATE_BLACKBOARD ("BB2", 64, &black_id2, &err_code);
97  FINISH_TIME_MEASURE();
98  if(err_code == NO_ERROR){ VALIDATE_TIME_MEASURE(calc_dev) }
99  else{
100      PERF_PRINT_STRING("BB Create 2: ");
101      PERF_PRINT_NUMBER(err_code);
102      PERF_PRINT_EOL();
103  }
104}
Listing 3: APEX call execution time collection example.

3.3 Complete applications

This group contains full applications, for which end-to-end metrics are collected: from the beginning of the application's execution until its end. Six (6) tests are in this group:

  • ADPCM implements the Adaptive Differential Pulse-Code Modulation algorithm [adpcm], collecting the execution time to perform a given (configurable) number of executions of the entire algorithm for a predefined data-set;

  • Dijkstra defines an application that executes Dijkstra's algorithm [dijkstra] for a predefined graph and collects the execution time for a number of executions, configurable during test creation;

  • Sobel implements the Sobel image edge detection algorithm [sobel] and executes it a given (configurable) number of times on a predefined input image. The execution time of each execution loop is recorded;

  • APEX APP 1 is a synthetic application that emulates the execution flow of a real application. It is composed of four (4) processes that use semaphores to synchronize their execution. Each process is blocked by another one and, when granted access, performs a different task (e.g., process two waits on a semaphore and, once granted access, makes calls to APEX services). The collected time is the end-to-end execution time, taken before the beginning of the first process and after the end of the fourth process;

  • APEX APP 2 is a synthetic application that emulates the execution flow of a real application. It is composed of four (4) processes that use events to synchronize their execution. Each process is blocked by another one and, when its event is detected, performs a different task (e.g., process two waits for an event and, when it detects it, makes calls to APEX services). The collected time is the end-to-end execution time, taken before the beginning of the first process and after the end of the fourth process, and;

  • APEX APP 3 is a synthetic application that emulates the execution flow of a real application. It is composed of two (2) partitions, with two (2) processes each. In each partition, one process is periodic and the other is sporadic. The application uses sampling ports to synchronize processes in different partitions, and semaphores to synchronize processes in the same partition. Partition one (1) executes CRC calculations on randomly created data. The data is then transferred to partition two (2) using a sampling port. Partition two (2) waits on the sampling port for data to be consumed and, using this data, creates matrices and performs their multiplication. The collected time is the end-to-end execution time for each partition.
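The data flow of APEX APP 3 can be sketched as a plain-C simulation: a producer step computes a CRC over a data block and publishes it through a one-message sampling slot, and a consumer step builds and multiplies a matrix from the latest block. The CRC-8 polynomial (0x07), the 2x2 matrix size and the fixed (rather than random) input data are assumptions made for a reproducible sketch; the real application uses APEX sampling-port services.

```c
#include <assert.h>
#include <stdint.h>

/* One-message "sampling slot" standing in for an APEX sampling port. */
typedef struct { uint8_t data[4]; uint8_t crc; int fresh; } sampling_slot_t;

/* Assumed CRC-8 (polynomial 0x07) over a data block. */
uint8_t crc8(const uint8_t *p, int n)
{
    uint8_t crc = 0;
    for (int i = 0; i < n; i++) {
        crc ^= p[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                               : (uint8_t)(crc << 1);
    }
    return crc;
}

/* "Partition 1": compute the CRC, overwrite the slot with the latest data. */
void partition1_step(sampling_slot_t *port, const uint8_t d[4])
{
    for (int i = 0; i < 4; i++) port->data[i] = d[i];
    port->crc = crc8(d, 4);
    port->fresh = 1;
}

/* "Partition 2": consume the latest data, build a 2x2 matrix, square it. */
int partition2_step(sampling_slot_t *port, int out[4])
{
    if (!port->fresh) return -1;             /* nothing new to consume */
    int a = port->data[0], b = port->data[1];
    int c = port->data[2], d = port->data[3];
    out[0] = a*a + b*c;  out[1] = a*b + b*d; /* [a b; c d] squared */
    out[2] = c*a + d*c;  out[3] = c*b + d*d;
    port->fresh = 0;
    return 0;
}
```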

4 Validation platform

In order to validate the SFPBench framework, it was integrated with an ARINC-653 compliant RTOS and deployed on real hardware. The target RTOS is a popular ARINC-653 compliant RTOS widely deployed in the market (due to a licensing agreement, its real name is omitted; from here on it is called ARINCRTOS), executing on the P2020RDB-PC board [p2020].

The P2020RDB-PC communications processor delivers high performance per watt for dual- and single-core applications. It is fabricated in 45 nm process technology and operates at frequencies up to 1.2 GHz. The main components of the architecture are:

  • Dual high-performance Power Architecture e500 cores

  • 32 KB L1 caches per core (32 KB for instructions and 32 KB for data) and a 512 KB L2 cache

  • Three 10/100/1000 Mbps enhanced three-speed Ethernet controllers (eTSECs)

  • Two PCI Express interfaces

  • Two Serial RapidIO interfaces

  • Two SGMII interfaces

  • Integrated security engine

  • Dual high-speed USB controllers (USB 2.0)

  • 1GB DDR3 DRAM

  • eLBC, eSDHC, Dual I2C, DUART, PIC, DMA, GPIO

Figure 3 illustrates the P2020RDB-PC block diagram of the board architecture.

Figure 3: P2020RDB-PC Block Diagram[p2020_img].

ARINCRTOS is configured to execute at the processor's top frequency (1.2 GHz), running at full speed. The L1 cache is enabled for both data and instructions, and the L2 cache is configured as SRAM to improve the performance of the RTOS kernel. The compiler (GCC) is configured to enable optimizations, using the -O2 optimization flag.
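Results in the next section are reported both in clock ticks and in microseconds. The conversion factor implied by the published tables is roughly 75 ticks per microsecond (e.g., Partition Switch: 1662 ticks corresponds to 22.16 us), which would be consistent with a timebase running at 1/16 of the 1.2 GHz core clock; this rate is inferred from the data, not stated in the document:

```c
#include <assert.h>
#include <math.h>

/* Tick-to-microsecond conversion. The ~75 ticks/us rate is inferred from
 * the reported results (1662 ticks <-> 22.16 us); treat it as an
 * assumption, not a documented platform parameter. */
#define TIMEBASE_TICKS_PER_US 75.0

double ticks_to_us(double ticks)
{
    return ticks / TIMEBASE_TICKS_PER_US;
}
```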

5 SFPBench results exploration

To illustrate the kind of results generated by the SFPBench framework, the entire set of applications was executed with ARINCRTOS on the P2020RDB-PC board. The next tables present all the data generated by the framework, divided by application; for each application, the execution times are given in clock ticks and microseconds, along with the best-case execution time (BCET), worst-case execution time (WCET) and average execution time. Table 2 presents the results obtained for the grey-box applications, followed by Table 3, which presents the results for the APEX calls. Finally, Table 4 presents the results obtained for the complete applications.

Time (ticks) Time (us)
BCET WCET Average BCET WCET Average STD Dev.
Process Switch 113.00 1290.00 113.00 1.50 17.20 1.63 0.85
Mutex Acquire 75.00 947.00 76.00 1.00 12.62 1.05 1.23
Mutex Release 79.00 1128.00 79.00 1.05 15.04 1.05 0.17
Mutex Acquire 2 27.00 112.00 27.00 0.36 1.49 0.36 0.00
Mutex Release 2 29.00 203.00 29.00 0.38 2.70 0.38 0.01
Mutex Workload 142.00 181.00 146.00 1.89 2.41 1.94 0.01
Sem Wait 118.00 831.00 119.00 1.57 11.08 1.58 0.17
Sem Signal 75.00 879.00 75.00 1.00 11.72 1.00 0.01
Priority Sem 44.00 1213.00 52.00 0.58 16.17 0.69 0.01
Sem Signal 2 28.00 128.00 39.00 0.37 1.70 0.52 0.29
Sem Wait 2 34.00 196.00 94.00 0.45 2.60 1.25 0.81
Sem Workload 138.00 301.00 456.00 1.84 6.08 4.01 2.11
Partition Switch 1662.00 3056.00 1682.00 22.16 40.74 22.44 5.68
Table 2: Grey-box applications execution times.
Time (ticks) Time (us)
BCET WCET Average BCET WCET Average STD Dev.
INIT_PROCESS1 8054.00 8054.00 8054.00 107.38 107.38 107.38 0.00
INIT_PROCESS2 2796.00 2796.00 2796.00 37.27 37.27 37.27 0.00
SEMAPHORE_CREATE 342.00 2555.00 1004.00 4.56 34.06 13.38 0.00
BUFFER_CREATE 335.00 490.00 388.00 4.46 6.53 5.18 0.00
BB_CREATE 284.00 415.00 349.00 3.78 5.53 4.65 0.00
CREATE_EVENT 329.00 594.00 407.00 4.38 7.92 5.43 0.00
PARTITION_STATUS 22.00 169.00 58.00 0.29 2.25 0.78 0.84
PREEMPTION_LOCK 111.00 298.00 157.00 1.48 3.97 2.10 1.07
PREEMPTION_UNLOCK 129.00 130.00 129.00 1.72 1.73 1.72 0.05
DISPLAY_BB(16) 31.00 114.00 51.00 0.41 1.52 0.68 0.47
READ_BB(16) 59.00 147.00 83.00 0.78 1.96 1.11 0.48
DISPLAY_BB(64) 34.00 43.00 35.00 0.45 0.57 0.48 0.05
READ_BB(64) 63.00 65.00 63.00 0.84 0.86 0.85 0.01
SEND_BUFFER(16) 90.00 226.00 134.00 1.20 3.01 1.79 0.73
SEMAPHORE_SIGNAL 30.00 79.00 42.00 0.40 1.05 0.56 0.27
PROCESS_PRIORITY 65.00 209.00 101.00 0.86 2.78 1.34 0.82
MY_ID 43.00 80.00 52.00 0.57 1.06 0.69 0.21
PROCESS_ID 34.00 40.00 36.00 0.45 0.53 0.48 0.03
PROCESS_STATUS 67.00 166.00 91.00 0.89 2.21 1.22 0.56
SEMAPHORE_ID 32.00 59.00 44.00 0.42 0.78 0.59 0.13
SEMAPHORE_STATUS 38.00 112.00 57.00 0.50 1.49 0.76 0.42
EVENT_SET 29.00 75.00 50.00 0.38 1.00 0.68 0.29
EVENT_ID 42.00 105.00 65.00 0.56 1.40 0.87 0.31
EVENT_STATUS 30.00 104.00 48.00 0.40 1.38 0.64 0.42
CREATE_SAMPLING 752.00 2671.00 1436.00 10.02 35.61 19.14 0.00
CREATE_QUEUE 1262.00 2439.00 1683.00 16.82 32.52 22.45 0.00
SAMPLING_ID 34.00 873.00 279.00 0.45 11.64 3.73 4.60
SAMPLING_STATUS 17.00 45.00 26.00 0.22 0.60 0.34 0.14
QUEUE_STATUS 184.00 575.00 295.00 2.45 7.66 3.94 2.15
QUEUE_ID 28.00 35.00 30.00 0.37 0.46 0.41 0.03
QUEUE_WRITE 841.00 1681.00 1261.00 11.21 22.41 16.81 5.60
SAMPLING_WRITE 335.00 757.00 546.00 4.46 10.09 7.27 2.81
SAMPLING_READ 314.00 636.00 475.00 4.18 8.48 6.33 2.14
Table 3: APEX calls execution times.
Time (ticks) Time (us)
BCET WCET Average BCET WCET Average STD Dev.
SOBEL 393084.00 393834.00 393204.00 5240.64 5251.12 5242.86 2.89
ADPCM 26724.00 29304.00 27360.00 356.32 390.72 364.85 8.53
DIJKSTRA 2322.00 4364.00 2488.00 30.95 58.18 33.38 5.78
APEX APP 1 578.00 2845.00 629.00 7.70 37.93 8.65 4.12
APEX APP 2 838.00 2698.00 919.00 11.17 35.97 12.53 4.29
SAMPLE APEX APP A 54469113.00 54469113.00 54469113.00 726197.73 726197.73 726197.73 0.00
SAMPLE APEX APP B 36710197.00 36710197.00 36710197.00 489469.29 489469.29 489469.29 0.00
SAMPLE APEX TOTAL 91179310.00 91179310.00 91179310.00 1215667.03 1215667.03 1215667.03 0.00
Table 4: Complete applications execution times.

Tables 2, 3 and 4 show that SFPBench enables designers to collect a rich set of information regarding the execution metrics of the system. Its usage is simple from a deployment perspective, and result reporting is automated. The provided tests assess the main aspects of an ARINC-653 compliant RTOS, enabling avionics application engineers to optimize their designs based on the collected results.

To verify the accuracy of the results collected with SFPBench, the same application execution metrics were assessed using a tracing probe with timing capabilities. Compared to the traced data, the error associated with SFPBench's higher level of abstraction is at most 2%.

6 Conclusion and Future work

ARINC-653 RTOSs are a particular class of RTOSs in which the time and space of different applications must be isolated. Unlike regular RTOSs, for which several benchmark frameworks are available, ARINC-653 compliant RTOSs have none. This document presented SFPBench, an open-source [github_sff], collaborative benchmark framework targeting ARINC-653 compliant RTOSs. The goal of SFPBench is to establish common performance metrics and, from a set of tests, define reference points for comparison between systems. The suite is easily portable to different RTOSs, relying on a simple porting layer.

Possible future work comprises the deployment of additional test applications stressing different aspects of the system, such as boot-up time and memory management.

References