1 Introduction
Cyber-physical systems (CPS) theory represents a novel research direction aiming to establish foundations for a tight integration of computing and physical processes [36, 37, 23]. CPS research unifies domain-specific design methods for subsystems to achieve desirable overall performance of the entire system. We are interested in battery-supported CPS (CPSb), such as mobile devices, where control of physical systems and the underlying computing activities are confined by battery capacity. In CPSb, the battery, the actuators, and the sensors can be viewed as physical components, while the embedded computers can be viewed as cyber components. The cyber and the physical components interact with each other so that no complete understanding can be gained by studying any component alone. The total discharge current from the battery includes currents drawn by all cyber and physical components as a result of the interactions between these components. In order to estimate the remaining capacity of the battery or to predict the remaining battery life, knowledge of the interactions among all cyber-physical components is necessary.
CPSb can be tested and verified using computer simulation tools that simulate all of its components. Intensive simulations at the design phase usually achieve tolerance of perturbations that can be predicted. Prototypes of CPSb can then be verified through experiments. Exhaustive simulations and experiments, however, are labor intensive and costly. Simpler and less expensive approaches are therefore desirable.
We propose an analytical approach to study CPSb. The analytical approach combines simplified mathematical models that capture the characteristic behaviors of each component of a CPSb. This approach is approximate in nature. But since all CPSb components are modeled uniformly with mathematical equations, interactions between the CPSb components are naturally described as coupling terms between the mathematical models. Hence the analytical approach is well suited for gaining insight into the interactions among the CPSb components. Furthermore, mathematical insight into CPSb is particularly valuable when perturbations unpredictable at the design phase force the systems to work in conditions near or beyond the design envelopes, where reliability becomes less guaranteed.
In this paper, we follow an analytical approach to develop mathematical tools that measure the robustness of realtime scheduling algorithms and battery management algorithms for CPSb at runtime. The mathematical tools produce exact solutions, in terms of mathematical formulas, that describe the interactions between embedded computers and batteries; these results are complementary to those obtained using simulation or experimental methods. In the rest of the introduction, we briefly review background from the literature that is closely related to our work, followed by the research problems addressed and the contributions made by this paper.
1.1 Literature Review
An important branch of realtime systems research is the study of schedulability: whether a set of realtime tasks can be computed by a processor under proper scheduling. The study of utilization-based schedulability tests can be traced back to rate monotonic scheduling (RMS) and earliest deadline first (EDF) scheduling [27]. It has been shown that if the utilization of a set of realtime tasks falls below a certain bound, then the set is schedulable. Since then, extensive research has been conducted on periodic tasks to improve the utilization bounds [25, 22, 8] or to relax the assumptions [24, 6] used to derive these bounds. Some important utilization bounds for nonperiodic systems are derived in [1]. Schedulability tests based on utilization bounds are easy to compute; they are therefore often used at runtime (online), where computational power is limited. However, such tests are typically conservative because they can fail on schedulable task sets. This drawback has led to exact schedulability tests [4, 25, 17]. Some recent advancements on exact schedulability tests [3, 38] have improved their computational efficiency.
Robustness is well studied for feedback control systems and has seen successful applications [40]. For realtime scheduling, robustness is introduced as a measure of the tolerance of a scheduling algorithm to variations in computing time, i.e., perturbations [33, 32, 7]. These works measure robustness by a scaling factor (greater than one) on computing times that is just large enough to cause a loss of schedulability. The robustness measure is computed using the binary search method, which limits it to periodic tasks. Based on this notion of robustness, the method of elastic scheduling [10, 12] adjusts the periods of tasks to accommodate runtime perturbations.
Prediction of the state of charge (SoC, i.e., the remaining battery capacity) is a basic function of all battery management algorithms [31]. A dynamic nonlinear battery model [14] and a particle filter will be used to predict the SoC in this paper. Different scheduling and control methods result in different “load profiles” that affect the operational life of a battery; hence various battery management algorithms have been proposed [29, 19] that adjust the scheduling and control to prolong battery life. These previous results usually rely on optimization methods.
1.2 Research Problems and Contributions
We provide robustness analysis for CPSb by measuring robustness of both realtime scheduling and battery management algorithms. Two types of perturbations are studied in this paper: perturbations to the computing times of realtime tasks, and perturbations to the SoC and parameters of batteries. The perturbations to the computing times may extend or shorten the time spent to compute realtime tasks. The perturbations to the SoC may increase or decrease the SoC. We assume that these perturbations have not been accounted for at the design stage, but have to be tolerated at runtime.

How is robustness measured? Robustness of a realtime scheduling algorithm is measured as the maximum strength of perturbations on the computing times of scheduled tasks that will not cause loss of schedulability. Robustness of a battery management algorithm is measured by its ability to trigger the switching of a used battery out of the system before the SoC of the battery drops below a threshold that indicates instability, even under perturbations to the SoC and battery parameters.

What methods are developed to study robustness of realtime scheduling algorithms? We first develop a new mathematical model for the scheduled behaviors of realtime tasks. We then study the schedulability of these tasks within a receding finite time window, and devise a dynamic schedulability test that gives necessary and sufficient conditions for the schedulability of acyclic task sets (i.e., tasks that are not necessarily periodic) under any priority-based scheduling algorithm. The maximum strength of the perturbations that will not break schedulability can then be determined analytically. This tolerable perturbation strength provides a measure for the robustness of the scheduling algorithm employed.

What methods are developed to study robustness of battery management algorithms? The mathematical models of realtime scheduling are combined with the controllers developed in our previous work [39] to generate a prediction of the total battery discharge current. This prediction is then used to predict the SoC of the batteries analytically at runtime. Due to the nonlinearities inherent in battery behaviors, we introduce a measure for the robustness of battery management algorithms based on Lyapunov stability criteria [18]. We then introduce an adaptive battery switching algorithm based on the Lyapunov stability test to determine when a used battery should be replaced.

What are the contributions for CPS? We have developed unified mathematical models for realtime scheduling in the embedded computers that form the cyber components of CPSb, and for the discharging of the batteries that form the physical components of CPSb. These mathematical models are also integrated with the feedback controller developed in our previous work [39]. By combining these mathematical models, we are able to study the interactions between the cyber and physical components analytically, which is well aligned with the main theme of CPS research. Several benefits result from this analytical approach:

Our robustness analysis incorporates both realtime scheduling and battery management algorithms. Such results have not been reported in the literature. The robustness measures are able to account for runtime situations that are unexpected at the design stage.

The dynamic schedulability test is an exact schedulability test for nonperiodic task sets. We have also generalized the notion of robustness from periodic task sets to nonperiodic task sets. These results are novel and complementary to the literature reviewed.

The paper is organized as follows. Section 2 discusses robustness of realtime scheduling algorithms. Section 3 studies robustness of battery management algorithms. Section 4 demonstrates the application of the mathematical tools developed in this paper to a typical CPSb. Section 5 provides a summary and conclusions.
2 Robustness of Realtime Scheduling Algorithms
A realtime scheduling algorithm assigns priorities to a set of realtime tasks so that all tasks can be computed on time on a processor. At the design phase of a realtime system, the parameters of tasks, such as computing times and deadlines, are usually determined based on desired performance and experimental data. We call these parameters the nominal characteristics. During runtime, the actual computing times and deadlines may deviate from the nominal values due to variations in the software, hardware, and the environment. These deviations are usually considered online perturbations. For perturbations that can be predicted at the design phase, such as changes in task modes, the “design-of-experiments” method may be applied to verify whether a scheduling algorithm can tolerate such perturbations [7, 16]. Usually, however, there exist online perturbations that are difficult to predict at the design stage, such as the transient overload of certain tasks and the arrival of unexpected tasks. In this section, we introduce mathematical tools to measure the tolerance of a realtime scheduling algorithm to online perturbations.
Perturbations occurring online can change the timing of the realtime tasks, and can thereby cause a set of schedulable tasks to become unschedulable. It is thus necessary to introduce a way to evaluate schedulability at runtime, as follows:
Definition 2.1
A dynamic schedulability test over a time interval checks whether all task instances are able to meet their deadlines within that interval.
As the starting time increases, the time interval slides forward. The length of the interval depends on how confidently we can predict the actual characteristics of the realtime tasks in order to perform the schedulability test. All mathematical tools developed in this section are centered around the dynamic schedulability test within this time interval.
2.1 A Task Model
For theoretical rigor, let us define the task set that will be scheduled, which will include both periodic and aperiodic (nonperiodic) tasks. We consider a task set of independent hard realtime tasks running on a single processor. Let be any task in . Each task in consists of an infinite sequence of instances. We use the notation to represent the th instance of task . The instance is characterized by its time of arrival , its computing time , and its relative deadline measured from its time of arrival. The absolute deadline of is then defined as .
For theoretical rigor, we require all tasks in the task set to be acyclic ([1]), as defined below:
Definition 2.2
A task is acyclic if and only if it satisfies the following properties:

different instances of are allowed to have different computing times and different relative deadlines, as long as and for all ;

the time of arrival of a new task instance coincides with the absolute deadline of the previous task instance of the same task, i.e. for all .
Figure 1 demonstrates an acyclic task. The horizontal line represents the progression of time. The upward arrows represent the times of arrival of new task instances, and the rectangles represent the computation of task instances. The computing times and the relative deadlines are also marked. These plotting conventions will be followed by other figures in Section 2.
We use the acyclic task model because it is universal: (1) any periodic task can be represented by an equivalent acyclic task; for example, a periodic task with computing time and period can be represented by an acyclic task with and for all ; (2) any set of nonperiodic tasks, i.e., tasks whose instances arrive irregularly, can be represented by an equivalent set of acyclic tasks [1].
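The periodic-to-acyclic correspondence in point (1) can be sketched in a few lines of Python. This is a minimal illustration under our own naming (`Instance` and `periodic_as_acyclic` are not the paper's notation): each instance of the equivalent acyclic task keeps a constant computing time, its relative deadline equals the period, and each new instance arrives exactly at the previous instance's absolute deadline.

```python
from dataclasses import dataclass

@dataclass
class Instance:
    arrival: float    # time of arrival of this instance
    computing: float  # computing time of this instance
    deadline: float   # relative deadline, measured from arrival

def periodic_as_acyclic(computing, period, n):
    """First n instances of the acyclic task equivalent to a periodic
    task: constant computing time, relative deadline equal to the
    period, and each arrival at the previous absolute deadline."""
    out, arrival = [], 0.0
    for _ in range(n):
        out.append(Instance(arrival, computing, period))
        arrival += period  # next arrival = current absolute deadline
    return out
```

Note how the acyclic arrival rule (Definition 2.2, property 2) is what makes the two representations coincide for periodic tasks.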
We want to model the scheduled behaviors of the realtime tasks at any time . Some new notation, only slightly different from the classical notation for acyclic tasks, is necessary.
Definition 2.3
At any time , an instance of is effective if and only if it has arrived before time but has not expired, i.e., is effective at time if and only if
(1) 
Definition 2.4
At any time , is defined as the computing time of the effective instance of and is defined as the relative deadline of the effective instance of , i.e.
(2) 
2.2 The Dynamic Timing Model
In this section, we derive a mathematical model that describes the scheduled behaviors of a set of acyclic tasks within under any scheduling algorithm. We rely on the following assumption:
Assumption 2.5
At the starting time we assume that the values of and for are predictable.
Several key concepts will be defined including the state variables, the fixed priority window, and the dynamic timing model.
2.2.1 State Variables
State variables are commonly used to derive the differential or difference equations that describe dynamic system behaviors [9]. To describe the dynamic behaviors of scheduled tasks, we define two state variables and one auxiliary variable as follows.
Definition 2.6
The dynamic deadline
is defined as a vector
. Each , for , is the length of the time interval starting at the time instant and ending at the absolute deadline of the effective instance of . In other words, suppose is an effective task instance; then .
Definition 2.7
The spare is defined as a vector , where , for , denotes the amount of CPU time that is available to compute the effective instance of from its time of arrival to time instant .
Definition 2.8
The residue is an auxiliary variable that is defined as a vector , where , for , denotes the remaining computing time required after time to finish computing the effective instance of .
We use the following example to further explain the meaning of , and . For ease of demonstration, we consider three periodic tasks.
Example 1
Consider tasks with and for . The three periodic tasks are scheduled under a fixed priority preemptive scheduling algorithm such that the priority of is higher than , and the priority of is higher than .
Figure 2(a) demonstrates the computation of on one processor. We use the same plotting conventions as in Figure 1, where the upward arrows indicate the times of arrival of the task instances. It can be observed that the computation of lower priority tasks is interrupted by the computation of higher priority tasks. When , , and are the effective instances of the three tasks, with times of arrival , and respectively.
We can observe that at , , and will expire at , and respectively. Thus, according to Definition 2.6, the dynamic deadlines are
(3) 
After only has not finished computing. Therefore, the remaining computing times after are , and . By Definition 2.8, we have
(4) 
For with time of arrival at , since no higher priority task is computed within , all the CPU time within is available for . For with time of arrival at , since no higher priority task is computed within , all the CPU time within is available for . For with time of arrival , since the CPU time within , and is allocated to the higher priority tasks, only the CPU time within and is available for . Thus, according to Definition 2.7, we have that
(5) 
Similarly, at , we can find
(6) 
It is worth mentioning that is the amount of CPU time available to compute the effective instance of task , but not necessarily the amount of CPU time actually taken by that instance. If , then the amount of CPU time spent to compute the effective instance of task will be , which makes . On the other hand, if , then the amount of CPU time spent to compute the effective instance of will only be , and the extra CPU time will be given to tasks with lower priority than . In this case will be zero since no more computing time is needed. Therefore,
(7) 
This equation shows that depends solely on , which explains why is not a state variable. However, is more convenient to use when developing the dynamic timing model and the scheduled behavior in Sections 2.2.4 and 2.2.5.
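The relation in (7) between the residue, the computing time, and the spare can be written as a one-line helper. A minimal sketch with our own names (`residue` is not the paper's notation):

```python
def residue(computing_time, spare):
    """Remaining computing time of the effective instance, per the
    relation in (7): once the spare CPU time accumulated for the
    instance reaches its computing time, the instance is finished
    and the residue is zero."""
    return max(computing_time - spare, 0.0)
```

For example, an instance with computing time 3 that has been given 1 unit of CPU time still needs 2 units, while one with computing time 2 that was offered 5 units is already finished.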
2.2.2 Scheduling Algorithms
We now rigorously define a scheduling algorithm, which will be used later by our mathematical models of the scheduled tasks. Let be the set of task indices and let the function measure the number of elements in a set. Let denote the set of tasks with priorities higher than at time . One way to formally define a scheduling algorithm is as follows.
Definition 2.9
A scheduling algorithm is a set-valued map between and the collection of all subsets of . It is parametrized as where and so that if .
For example, assume all tasks are periodic and the RMS algorithm [27] is used to assign fixed priorities. Suppose that tasks are labeled according to the length of their periods i.e. tasks with longer periods have larger indices. Then we have:
(8) 
Consider another example where a dynamic priority scheduling algorithm, such as the EDF algorithm, is used. Then the values of depend on . At any time , EDF assigns higher priorities to the tasks whose effective instances have closer absolute deadlines. According to the definition of , tasks whose effective instances have closer absolute deadlines also have smaller dynamic deadlines. Thus, under EDF, tasks with smaller values of are assigned higher priorities. When two tasks have the same dynamic deadline, we assume that the higher priority is assigned to the task with the smaller index. Hence, the set can be expressed as
(9) 
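The EDF priority rule just described, including the tie-breaking by task index, can be sketched as follows. The function name and the representation of the dynamic deadlines as a plain list indexed by task are our assumptions:

```python
def edf_higher_priority(i, dyn_deadline):
    """Indices of tasks with higher EDF priority than task i at the
    current time: a strictly smaller dynamic deadline, or an equal
    dynamic deadline with a smaller task index (the tie-break)."""
    return {j for j, d in enumerate(dyn_deadline)
            if d < dyn_deadline[i] or (d == dyn_deadline[i] and j < i)}
```

With dynamic deadlines [3, 2, 2], task 1 has the highest priority (no task above it), task 2 is above task 0 but below task 1 by the tie-break, and task 0 is preempted by both.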
2.2.3 Fixed priority window
Let us consider the time interval over which the schedulability of the tasks is of concern. We further divide into consecutive subintervals , where and . We require each subinterval to be a fixed priority window, as defined below:
Definition 2.10
A time interval is a fixed priority window if no instance of any task arrives within .
In other words, a task instance can arrive only at or , but not in between.
To better understand this definition, we consider Figure 2(b) as an example: is a fixed priority window because no new instance of any task arrives within ; and is not a fixed priority window because the task instance arrives at time .
The advantage of dividing into consecutive fixed priority windows is that the realtime tasks within each fixed priority window are easier to model. These models can then be concatenated to derive more complex models of the scheduled behaviors on .
Next, we study how to divide into consecutive fixed priority windows. We denote the length of each window by , i.e.,
(10) 
then each window can be rewritten as . Hence, the partition of into fixed priority windows is determined by the window lengths for . To determine the value of each , we have the following claim:
Claim 2.11
For a set of acyclic tasks, at the beginning of any subinterval, i.e. , if we choose , , then is a fixed priority window; otherwise, is not a fixed priority window.
Proof At the beginning of any subinterval, i.e. , consider the dynamic deadlines , as defined in Definition 2.6. According to the definition of , we know that the next task instance after arrives at .
If we choose , then no new instance of any task arrives in between . Therefore, is a fixed priority window.
On the other hand, if we choose , the next task instance after will arrive in between . Therefore, is not a fixed priority window.
The division of into consecutive fixed priority windows is carried out as follows. At the beginning of the first subinterval, we let and choose the first window length to make the subinterval a fixed priority window. Then, by letting and choosing a window length , the second subinterval is made a fixed priority window. The process is repeated until one subinterval reaches the ending time . According to Claim 2.11, the largest possible window length can be expressed as
(11) 
where the extra term guarantees that the division procedure stops at time . A larger window length is preferred since it reduces the complexity in modeling the behaviors of tasks. Figure 2(b) shows an example of dividing the time interval into a series of consecutive fixed priority windows for Example 1 discussed previously.
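The division procedure, using the window-length rule of (11) and Claim 2.11, can be sketched in Python. Here `dyn_deadline_at` is a hypothetical callback returning the vector of dynamic deadlines at a given time; in the paper these values come from the dynamic timing model:

```python
def window_length(t_k, t_f, dyn_deadline):
    """Largest fixed-priority window starting at t_k, per (11): the
    next instance of any task arrives after the smallest dynamic
    deadline, and the window must not extend past the ending time."""
    return min(min(dyn_deadline), t_f - t_k)

def divide(t_0, t_f, dyn_deadline_at):
    """Divide [t_0, t_f] into consecutive fixed-priority windows,
    returning the window boundaries."""
    boundaries, t = [t_0], t_0
    while t < t_f:
        t += window_length(t, t_f, dyn_deadline_at(t))
        boundaries.append(t)
    return boundaries
```

For two periodic tasks with periods 2 and 3 starting at time 0, the dynamic deadlines at time t are 2 - t mod 2 and 3 - t mod 3, and the procedure places boundaries at every arrival instant up to the ending time.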
2.2.4 Evolution of the state variables
With the state variables well defined in Section 2.2.1, we are now ready to define the dynamic timing model as follows:
Definition 2.12
The dynamic timing model is a set of equations that describes the evolution of the state variables over time .
For simplicity, we focus here on the evolution of the state variables within one fixed priority window . Later, the evolution of the state variables within any time interval can be obtained by concatenating the models within each fixed priority window that belongs to . For notational simplicity, we will drop the index . Moreover, we will use to denote the time point that is less than but is arbitrarily close to . Thus, the fixed priority window can now be equivalently written as .
In the dynamic timing model, the evolution of the state variables and , from the end of the last fixed priority window to any time within the current fixed priority window , can be derived in two steps: from to , and from to .
From to : First, we discuss the evolution for the state variables from to . For task , the values of the state variables at time , denoted by and , depend on whether an instance of arrives at .
(1) if no instance of arrives at then the dynamic deadline for is unchanged and must be positive i.e. , and all state variables hold their values from to , i.e.,
(12) 
(2) if an instance of arrives at , then the dynamic deadline for is reset to at , i.e. . The dynamic deadline at will be the relative deadline of the new task instance, i.e. . The spare is reset to zero since no time is available between and . Therefore, we have
(13) 
In summary, according to (12) and (13), the evolution for the state variables from to can be written in a compact form as follows
(14)  
(15) 
where denotes the signum function, i.e. when , when , and when .
From to : Next, we discuss the evolution for the state variables from to .
(1) For the dynamic deadline , we know that the absolute deadline for the effective instance of is at . Since this absolute deadline is also at , we must have . Therefore, the equation for can be written as
(16) 
(2) For the spare , we know that the computation of is preempted until the computations of all higher priority tasks are completed. Then, the amount of time within that is available to compute is
(17) 
where denotes the time allocated to computing tasks with priorities higher than . The max function guarantees a nonnegative result. Therefore, the amount of time that is available to compute the effective instance of from its time of arrival to is
(18) 
In summary, according to (16) and (18), the evolution for the state variables from to can be expressed as
(19) 
where according to equation (7).
The equations in (14) and (19) constitute the dynamic timing model within one fixed priority window , which can be implemented using Algorithm 1. Given the initial values of the state variables at , i.e. and , and the task characteristics within the fixed priority window, i.e. and for , we can use Algorithm 1 to obtain the evolution of the state variables from to any time . The dynamic timing model over any time interval is obtained by iteratively applying Algorithm 1 to all the fixed priority windows.
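The two steps of the dynamic timing model can be sketched as follows. This is an illustrative rendering of (14)-(15) and (19) under our own naming, with the time allocated to higher-priority tasks taken as the sum of their residues at the window start (an assumption consistent with no arrivals occurring inside a fixed priority window):

```python
def boundary_update(D, S, D_new):
    """State update at a window boundary, per (14)-(15): if a task's
    dynamic deadline has reached zero, a new instance arrives, so its
    dynamic deadline is reset to the new relative deadline D_new[i]
    and its spare is reset to zero; otherwise both values hold."""
    D_out, S_out = [], []
    for i in range(len(D)):
        if D[i] <= 0:            # new instance of task i arrives here
            D_out.append(D_new[i])
            S_out.append(0.0)
        else:                    # no arrival: state variables hold
            D_out.append(D[i])
            S_out.append(S[i])
    return D_out, S_out

def within_window(D, S, C, hp, dt):
    """State evolution dt time units into a fixed-priority window,
    per (19): dynamic deadlines decrease linearly with time, and each
    task's spare grows by the window time left over after the residues
    of its higher-priority tasks (hp(i)) are computed."""
    D_out = [d - dt for d in D]
    S_out = []
    for i in range(len(D)):
        busy = sum(max(C[j] - S[j], 0.0) for j in hp(i))
        S_out.append(S[i] + max(dt - busy, 0.0))
    return D_out, S_out
```

Concatenating `boundary_update` and `within_window` over consecutive windows reproduces the iteration that Algorithm 1 performs.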
2.2.5 Scheduled Behaviors of Tasks
We now demonstrate how to use the dynamic timing model to describe the scheduled behaviors of the realtime tasks. Consider ; we first describe the scheduled behavior of task from . Within each fixed priority window , the scheduled behavior of task may go through three modes, indicated by a function :
The preempted mode: the computation of the effective instance of is blocked by tasks with higher priorities. This mode is indicated by letting . It starts at the beginning of the fixed priority window and lasts for the amount of time , which is the sum of the remaining computing times of all higher priority tasks;
The execution mode: the effective instance of is being computed by the CPU. This mode is indicated by letting . It starts right after the preempted mode and lasts until the computation of the effective instance of completes, which takes ;
The free mode: the computation of the effective instance of has completed and a new instance has not yet arrived. This mode is indicated by letting . It starts right after the execution mode and lasts until the end of the fixed priority window.
In summary, the scheduled behavior of within one fixed priority window can be expressed as
(20) 
where .
As this expression shows, the scheduled behavior of within one fixed priority window can be described by the state variables within . Applying the same methodology to all tasks in , we can derive the scheduled behavior of the realtime system within . As the fixed priority window propagates forward, the state variables evolve according to the dynamic timing model in Algorithm 1. With the state variables evolving from to , we obtain the scheduled behavior of the realtime system over the time interval .
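The three-mode indicator of (20) can be sketched as a function of time within one fixed priority window. The mode encoding (-1 preempted, 1 executing, 0 free) and the names are our assumptions, since the paper's indicator values are given symbolically:

```python
def mode(t, t_k, S, C, hp, i):
    """Scheduled-behavior indicator of task i at time t in the
    fixed-priority window starting at t_k, per (20): the task is
    first preempted while higher-priority residues are computed,
    then executes its own residue, then is free."""
    elapsed = t - t_k
    blocked = sum(max(C[j] - S[j], 0.0) for j in hp(i))  # higher-priority residues
    own = max(C[i] - S[i], 0.0)                          # own residue at t_k
    if elapsed < blocked:
        return -1   # preempted mode
    elif elapsed < blocked + own:
        return 1    # execution mode
    else:
        return 0    # free mode
```

For two tasks with computing times 1 and 2 and no spare accumulated, the lower-priority task is preempted during the first time unit, executes during the next two, and is free afterwards.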
2.2.6 Verification of the Dynamic Timing Model
To verify the dynamic timing model, we compare the scheduled behavior of the realtime system derived from the dynamic timing model with the behavior of the same system simulated using TrueTime [11]. TrueTime is one of the most commonly used software tools supporting research on realtime systems. TrueTime and the dynamic timing model work in different ways. TrueTime simulates a computer with a realtime kernel and maintains the data structures commonly found in a realtime kernel, such as ready queues, time queues, task records, interrupt handlers, monitors, and timers [11]. The dynamic timing model uses mathematical equations to model the scheduling behavior analytically, as shown in Algorithm 1 and (20). For the same realtime system, TrueTime and the dynamic timing model should ideally produce the same result. However, we find incorrect jitters in the behavior generated by TrueTime 1.5 implemented in MATLAB. These jitters do not exist in the behavior generated by the dynamic timing model.
Suppose that at time , the state variable . Consider a realtime system with three acyclic tasks running on it. The three acyclic tasks have characteristics ms and ms for s. We are interested in the scheduled behavior of the realtime system within . We run the simulation from to s using TrueTime 1.5 implemented in MATLAB. Side by side, we evaluate the dynamic timing model and (20) using MATLAB from to s. Figure 3 compares the scheduled behaviors of the realtime tasks produced by the two methods within .
By comparison, we see that the scheduled behaviors generated by TrueTime 1.5 and by the dynamic timing model are identical most of the time. The identical part indicates that the dynamic timing model can describe the scheduled behavior of the realtime system as precisely as TrueTime. However, the behaviors are not identical for when s and for when s. Further exploration shows that the differences are due to jitters caused by numerical inaccuracy in TrueTime 1.5 implemented in MATLAB, as illustrated in the upper half of Fig. 3. As a simulation tool, TrueTime 1.5 inevitably has truncation errors that accumulate through numerical integration. Since the dynamic timing model presented in this paper is based on mathematical equations, the system behavior at time can be determined by evaluating functions without numerical integration; hence the chances of jitters are significantly reduced. No jitters are observed in the lower half of Fig. 3. This indicates that the dynamic timing model may be used side by side with TrueTime to resolve jitters.
2.3 Dynamic Schedulability Test
In Section 2.2, we have established a dynamic timing model that can analytically describe the evolution of the state variables from to . In this section, we study how to utilize the dynamic timing model to perform the dynamic schedulability test over . The success of this test requires the knowledge of the task sets within , as stated in Assumption 2.5.
For the set of realtime tasks , the dynamic schedulability test over can be decomposed into checking whether each task of is able to meet its deadlines within each fixed priority window belonging to . This is due to the following facts: (1) is schedulable within if and only if is schedulable within each fixed priority window , for ; (2) is schedulable within any fixed priority window if and only if each individual task is schedulable within . The following theorem states the necessary and sufficient conditions for the schedulability of within any fixed priority window .
Theorem 2.13
A task is schedulable within if and only if it satisfies ONE of the following two conditions:

and ;

.
Proof If an instance of expires at , i.e. , then the schedulability of within is satisfied if and only if the computation of this instance has completed, i.e.
According to (7), the above equation can be rewritten as
which implies that
(21) 
If no instance of expires at , i.e. , then the schedulability of within is automatically guaranteed.
According to Assumption 2.5, we can predict the actual task characteristics and within . Given the actual task characteristics and for , we can perform the dynamic schedulability test over the time interval using Algorithm 2. Algorithm 2 iteratively checks the schedulability of within each fixed priority window as follows: (1) at the beginning of each subinterval, it calculates the length of the current fixed priority window according to equation (11), as shown in Lines of Algorithm 2; (2) it then uses the dynamic timing model in Algorithm 1 to obtain the values of the state variables at the end of the current fixed priority window, as indicated by Line ; (3) finally, it evaluates the schedulability of , where , within according to Theorem 2.13, as shown in Lines of Algorithm 2. To make the fixed priority window propagate seamlessly within , it assigns the starting time of the next fixed priority window to be the ending time of the current one, as indicated by Line .
The variable indicates the dynamic schedulability test result of within : when is schedulable within , ; otherwise, . The set contains the dynamic schedulability test results of within all fixed priority windows that belong to . The task is schedulable within if and only if . The task set is schedulable within if and only if all individual tasks are dynamically schedulable within , i.e. .
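Theorem 2.13 and the per-window aggregation performed by Algorithm 2 can be sketched as follows. The names, and the convention that a zero dynamic deadline at the window end marks an expiring instance, follow our reading of the theorem:

```python
def schedulable_in_window(D_end, S_end, C):
    """Theorem 2.13 for one task at the end of a fixed-priority
    window: if the effective instance expires there (its dynamic
    deadline reaches zero), the accumulated spare must cover its
    computing time; otherwise schedulability in this window is
    automatic."""
    if D_end <= 0:               # instance expires at the window end
        return S_end >= C
    return True

def dynamic_test(D_end, S_end, C):
    """The task set is schedulable in the window iff every
    individual task is schedulable in it."""
    return all(schedulable_in_window(d, s, c)
               for d, s, c in zip(D_end, S_end, C))
```

Iterating `dynamic_test` over all fixed priority windows in the interval, as Algorithm 2 does, yields the dynamic schedulability test for the whole interval.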