There is an increasing trend in the use of control theory to guide the software engineering design process of self-adaptive systems (SAS) in order to provide dynamic adaptation with guarantees of trustworthiness and robustness (Weyns, 2018; Shevtsov et al., 2018; Filieri et al., 2017). Control theory has established solid techniques for designing controllers that make controlled (or managed) systems behave as expected. “These controllers can provide formal guarantees about their effectiveness, under precise assumptions on the operating conditions” (Filieri et al., 2017).
At runtime, control systems are subject to inputs that are not known at design time, so they are designed to respond within acceptable boundaries to dynamic environmental interactions. Analyzing whether the controlled system operates accordingly demands metrics for comparing the performance of different control systems (Ogata, 2001). A self-adaptive system designed through control theory must provide quantitative guarantees on its convergence time to the adaptation goal (i.e., the setpoint) and on its robustness, that is, its convergence to the setpoint in the face of errors and noise (Shevtsov et al., 2018; Filieri et al., 2017). Additionally, due to the high level of uncertainty in self-adaptive systems, the control model underlying a control-theoretical approach requires constant evolution.
Therefore, in the control-theoretical design of a self-adaptive software system, it is paramount to support decision-making procedures that can identify appropriate adaptation strategies by efficiently exploring the solution space. To avoid exhaustively analyzing large adaptation spaces when planning an adaptation, artificial intelligence (AI) techniques have been used, among others: to determine at runtime subsets of adaptation options from a large adaptation space through classification and regression (Quin et al., 2019); to assess and reason about black-box adaptation decisions through online learning (Esfahani et al., 2013; Elkhodary et al., 2010); or to find Pareto-optimal configurations of the system with respect to the mission goals of mobile robots through machine learning (Jamshidi et al., 2019). Moreover, control-based approaches for self-adaptive systems often employ adaptive controllers to address inaccuracies of the control model or radical changes of the environment/controlled system by updating the model using filters (e.g., Kalman or Recursive Least Squares) (Maggio et al., 2012, 2014; Filieri et al., 2015b) and by tuning the controller parameters using feedback or machine learning (Filieri et al., 2012; Lama and Zhou, 2013).
Although such significant contributions in the literature have promoted a solid mathematical foundation of control theory for building SAS, there is still a need to understand (i) how AI could aid the parameter optimization of controllers and (ii) how to apply AI-based techniques in the search for adaptation solutions in control-theoretical SAS strategies. Moreover, “there are various hurdles that need to be tackled to turn control theory into the foundation of self-adaptation of software systems” (Weyns, 2017). Software engineers are usually not knowledgeable about the mathematical principles of control theory. Therefore, there is a need to turn control theory (and its guarantees) into a scientific foundation for engineering self-adaptation (Weyns, 2017).
In this paper, we aim at filling those gaps by intertwining control theory and AI in a two-phase optimization approach to support the engineering of control-based SAS. Our controller, hereafter named system manager, contains the mechanisms for deciding how the self-adaptive system must behave in order to achieve the desired goals for the managed system. Due to the gap between the high-level requirements specification and the internal knob behavior of the managed system, a hierarchically composed component architecture seeks separation of concerns towards a dynamic solution (Braberman et al., 2015). Therefore, our system manager is further decomposed into two components: a strategy manager and a strategy enactor. Each of these components is equipped with AI-based optimization techniques, namely a regression model combined with the Non-dominated Sorting Genetic Algorithm II (NSGA-II) (Deb et al., 2002), for respectively: (i) synthesizing adaptation strategies through high-level reasoning upon the model that represents the managed system, and (ii) enforcing actions through control-theoretical principles to ensure that the adaptation strategies are applied and the properties of interest guaranteed. By these means, we contribute a hierarchical and adaptive system manager that relies on optimization at all levels of the decision-making process towards a more efficient adaptation mechanism, while also improving the self-adaptation loop in terms of the control-theoretic properties: stability, overshoot, and steady-state error.
For evaluation purposes, we experimented with a healthcare domain prototype built to operate in volatile environments, which played the role of the target system. The Body Sensor Network (BSN) prototype was implemented on the Robot Operating System (ROS) framework and consists of a set of distributed sensors that collect the patients' vital signs and forward them to a centralized unit for further processing and reporting. We evaluated the ability of our approach to optimize the adaptation space and to improve the self-adaptation loop w.r.t. the control-theoretic properties. The results show that our hybrid approach finds nearly optimal solutions in terms of space and time. Moreover, the overshoot and steady-state error were below 3% with 100% stability.
The remainder of this paper is organized as follows: Section 2
presents a glimpse of the control-theoretical principles and the evolutionary algorithms used in this work. Section 3 presents the core contribution of our work. Section 4 presents the evaluation of our proposal on the BSN prototype. Section 5 presents the literature most related to our work. Finally, Section 6 presents the conclusion and the directions we intend to pursue as future work.
In this section, we discuss the background of our work: feedback control loops and evolutionary algorithms.
Feedback Control Loop. Self-adaptation is typically realized by a feedback loop (Brun et al., 2009), which originates from control engineering (Åström and Murray, 2010). As shown in Fig. 1, the plant is the subsystem to be controlled by a control signal from the controller. A reference (setpoint) is defined for a property of interest observed on the plant. The goal is that the observed property is sufficiently close to this reference despite the disturbance affecting the plant. For this purpose, the difference between the observation and reference is fed back to the controller that uses this error to decide about the value of the control signal.
A self-adaptive system that uses a feedback control loop to adjust its behavior is a closed-loop system (Hellerstein et al., 2004; Salehie and Tahvildari, 2009). Such a system can have different responses to a perturbation, that is, how the property of interest observed on the plant develops over time after the perturbation. Figure 2 illustrates a typical response to a sudden variation in the observed property. Various control metrics exist to verify how well the system is responding and if it satisfies requirements previously defined with expert knowledge (cf. (Filieri et al., 2017; Hellerstein et al., 2004)).
Stability refers to the system reaching a steady-state equilibrium, that is, the observed property converges to a specific value, ideally the setpoint, and stays inside a previously defined stability margin around this convergence point. An overshoot denotes that the property exceeds the setpoint in the transient state, which should be typically avoided. It is measured as maximum peak value of the property relative to the setpoint. The settling time is the time required for the system to reach the steady-state equilibrium. Finally, the steady-state error measures how far from the setpoint the system converges, that is, the steady-state difference between the setpoint and observed property.
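These metrics can be computed directly from a recorded response trace. The following sketch is purely illustrative: the function name, the trace, and the 2% stability margin are our own assumptions, not definitions taken from the cited literature.

```python
# Hedged sketch: computing the control metrics described above from a
# recorded response trace of the observed property.

def control_metrics(response, setpoint, margin=0.02):
    """Return (overshoot, settling_time, steady_state_error) for a trace."""
    # Overshoot: maximum peak above the setpoint, relative to the setpoint.
    peak = max(response)
    overshoot = max(0.0, (peak - setpoint) / setpoint)

    # Settling time: first index after which the trace stays inside the
    # stability margin around the setpoint.
    band = setpoint * margin
    settling_time = len(response)
    for i in range(len(response)):
        if all(abs(v - setpoint) <= band for v in response[i:]):
            settling_time = i
            break

    # Steady-state error: distance between setpoint and the final value.
    steady_state_error = abs(setpoint - response[-1])
    return overshoot, settling_time, steady_state_error

trace = [0.0, 0.6, 1.1, 1.04, 0.99, 1.0, 1.0]
os_, st, sse = control_metrics(trace, setpoint=1.0)
```

For this trace the peak of 1.1 yields a 10% overshoot, the response enters the 2% band at index 4, and the final value matches the setpoint exactly.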
In a feedback loop, one of the most popular controllers is the proportional-integral-derivative (PID) controller, while often the derivative part is omitted due to difficulties in its tuning, leading to the use of PI controllers (cf. (Ang et al., 2005)). Moreover, such controllers are also widely used for adapting software systems (Shevtsov et al., 2018). Based on this observation, we also use a PI controller throughout this work. The continuous equation of a PI controller is shown in Equation 1, where $u(t)$ is its control action and $e(t)$ the error, both at time $t$:

$$u(t) = K_p\, e(t) + K_i \int_{t-W}^{t} e(\tau)\, d\tau \qquad (1)$$

In this equation we have three constants: the proportional gain $K_p$, the integral gain $K_i$, and the integral window $W$. $K_p$ multiplies the current measured error directly, whereas $K_i$ multiplies the integral of the error over the time window $W$. These constants can be defined empirically or using a tuning method of preference.
For a periodic, discrete-time implementation, the integral term of Equation 1 is transformed into a sum of errors over the time window $W$:

$$u(k) = K_p\, e(k) + K_i \sum_{j=k-W+1}^{k} e(j) \qquad (2)$$

Since a PI controller is called periodically, the time window is a tunable parameter, which we consider in this work as the number of error measurements included in the sum. This means that at time instant $k$ the errors from $k-W+1$ to $k$ are used in the integral term.
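As an illustration, a minimal windowed PI controller in the discrete form just described might look as follows. The class name and the deque-based window are our own sketch; only the gains and the error-window semantics follow the text.

```python
# A minimal sketch of a windowed PI controller: the control action is
# Kp * e(k) + Ki * (sum of the last W errors).
from collections import deque

class PIController:
    def __init__(self, kp, ki, window):
        self.kp, self.ki = kp, ki
        self.errors = deque(maxlen=window)  # keeps only the last W errors

    def control(self, setpoint, measured):
        error = setpoint - measured
        self.errors.append(error)
        # u(k) = Kp * e(k) + Ki * sum over the error window
        return self.kp * error + self.ki * sum(self.errors)

pi = PIController(kp=0.5, ki=0.1, window=3)
u = [pi.control(1.0, m) for m in (0.0, 0.4, 0.8)]
```

As the measured value approaches the setpoint, the proportional term shrinks while the integral term accumulates the remaining error over the window.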
Evolutionary Algorithms. Evolutionary algorithms originate from computational intelligence, a subfield of artificial intelligence, and refer to meta-heuristic optimization techniques that imitate biological evolution (Kruse et al., 2016). A widely-used technique is the Non-dominated Sorting Genetic Algorithm II (NSGA-II) (Deb et al., 2002) that has been applied to various software engineering problems (Harman et al., 2012), and self-adaptive systems to optimize the configuration of the managed system (Fredericks et al., 2019; Shin et al., 2019). NSGA-II is a multi-objective algorithm producing a set of Pareto-optimal solutions to a given optimization problem. Imitating biological evolution, it evolves a population of possible solutions encoded as chromosomes by crossover, mutation, and selection. The evolution is guided by a fitness function that encodes the objectives of the optimization and that evaluates how well a solution satisfies these objectives. Solutions with a higher fitness are preferably selected for further evolution steps in the next generation of the population. Thus, the evolution converges to solutions with higher fitness. Moreover, NSGA-II promotes diversity to obtain Pareto-optimal solutions with different trade-offs between the objectives.
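The defining ingredient of NSGA-II is the sorting of a population into successive Pareto fronts. The sketch below illustrates only that dominance-based ranking (for two minimized objectives); crowding distance and the genetic operators of the full algorithm are omitted for brevity.

```python
# Illustrative sketch of the non-dominated sorting at the heart of NSGA-II.
# Solutions are tuples of objective values; both objectives are minimized.

def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(solutions):
    """Return a list of fronts; front 0 holds the Pareto-optimal solutions."""
    fronts, remaining = [], list(solutions)
    while remaining:
        front = [s for s in remaining
                 if not any(dominates(o, s) for o in remaining if o is not s)]
        fronts.append(front)
        remaining = [s for s in remaining if s not in front]
    return fronts

fronts = non_dominated_sort([(1, 5), (2, 2), (3, 3), (4, 1), (4, 4)])
```

Here (3, 3) is dominated by (2, 2) and thus lands in the second front, while (1, 5), (2, 2), and (4, 1) form the Pareto-optimal first front with different trade-offs between the two objectives.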
3. A Hybrid Approach Combining Control Theory and AI
In this paper, we propose an AI-based optimization approach to support the engineering of control-based SAS. Following principles of control theory, our system manager decides how the system must behave in order to achieve the desired goals for the managed system. Moreover, our system manager is enhanced with integrated AI-based optimization techniques, namely a regression model combined with NSGA-II, at all levels of the decision-making process, towards a more efficient adaptation mechanism for (i) high-level reasoning upon the model that represents the managed system to synthesize adaptation strategies (cf. Section 3.1) and (ii) enforcing actions through control-theoretical principles, ensuring that the adaptation strategies respect the properties of interest: stability, overshoot, and steady-state error (cf. Section 3.2). In our approach, an adaptation strategy is a rule that guides the system behavior. It consists of a goal to be reached, a condition that defines whether the goal is reached, and one or more actions to change the system behavior.
To provide a solution that architecturally promotes the separation of concerns in a hierarchical fashion (Braberman et al., 2015), the proposed architecture relies on two horizontal layers, the System Manager and the Managed System, and a third, vertical layer, the Knowledge Repository. The corresponding architectural overview of our approach is shown in Fig. 3.
The Managed System layer contains the Target System being the software system to be controlled. The Knowledge Repository is orthogonal to the other layers and responsible for maintaining the persistent information, that is, a parametric formula, the goals, and system information. A parametric formula evaluates a quality of service (QoS) property of the target system at runtime. For details of how such a formula is created, we refer to our previous work (Solano et al., 2019). The goals are the objectives to be achieved by the system with respect to functional or non-functional requirements. The system information is a set of events and data about the status of the target system collected from the running system.
The System Manager is responsible for controlling the Target System and realizes the combination of control-based self-adaptation and artificial intelligence (AI), which makes our approach hybrid and improves the self-adaptation process.
For this purpose, the system manager has four components, two realizing self-adaptation in a hierarchical fashion and two realizing AI-based optimization to reconfigure the self-adaptation process. This results in a hierarchical and adaptive system manager.
The Strategy Manager synthesizes adaptation strategies by reasoning on the target system using the parametric formula, the goals defined by the stakeholders, and the current status of the observed property. This synthesis process, and especially the time needed for the synthesis of adaptation strategies, is monitored by the Manager Optimization component. Based on the measured time, this component optimizes the configuration of the strategy manager by providing optimal values for two parameters (Granularity and Offset) that influence the synthesis process. The goal of the optimization is to reduce the required time and thus improve the efficiency of the synthesis process. The strategy manager provides a synthesized strategy to the Strategy Enactor.
The strategy enactor enforces an adaptation strategy in a closed-loop fashion by implementing a feedback control loop on top of the target system. Accordingly, the strategy enactor implements a PI controller as an actuation mechanism for enforcing behavior on the target system, guided by conditions and goals. This self-adaptation process is monitored by the Enactor Optimization component, which computes various Control Theory Metrics based on the System Information collected from the running target system. Upon the metrics and system information, the optimizer tunes the PI controller parameters ($K_p$ and $K_i$). The goal of this optimization is to improve the adaptation quality with respect to the control theory metrics.
As a result of combining the strategy manager and the strategy enactor each with an optimization, the goals of the system are guaranteed to be constantly achieved with optimal adaptation search time and adaptation quality within a limited search space. Both optimizations are realized by AI-based techniques of model learning (regression/curve fitting) and meta-heuristic search (evolutionary algorithms), while the strategy enactor is based on control theory (PI controller). By this combination, the overall controller (i.e., the system manager) implements a hybrid approach of control and AI.
We evaluate our approach with the Body Sensor Network (BSN), an exemplar of a target system that monitors and analyzes the health of patients (an executable artifact is available at http://bodysensornetwork.herokuapp.com). The BSN is composed of a set of distributed sensors that monitor vital signs of patients and forward these signs to a centralized processor for analysis. To avoid overloading the centralized processor, the sampling rate of individual sensors can be adapted, which affects the amount of data sent to and analyzed by the processor.
In the following, we detail the strategy manager and strategy enactor of our hybrid approach, emphasizing their optimization.
3.1. Strategy Manager and its Optimization
Responsible for high-level reasoning, the strategy manager must be capable of performing expensive decision-making computation. Identifying feasible and effective adaptation strategies is particularly difficult as the size of the solution space grows exponentially with the number of individual adaptation options. For instance, for architectural adaptation the reasoning has to take the variability of each component (e.g., alternative components or parameters of the component) and the composition of components into account to evaluate whether the overall architectural configuration satisfies the goals of the system.
In previous work (Solano et al., 2019), we have developed a transformation framework for specifying requirements hierarchically in a goal model, translating the goal model to parametric formulae with symbolic model checking, and evaluating these formulae at runtime to express probabilities over the fulfillment of the goals by the running system. Thus, these formulae provide guarantees for satisfying the goals while their evaluation is time efficient (constant computational time), since a costly model exploration as in typical model checking is avoided. The goal modeling and the translation of the goal model to parametric formulae are implemented in the GODA-MDP tool (https://pistar-goda.herokuapp.com/) (Solano et al., 2019). A parametric formula is essentially an algebraic equation that relates the probability of successfully achieving a system's overall goal (i.e., the root goal of the goal model) to the combined probabilities of lower-level tasks that contribute towards realizing the goal. Therefore, a formula can be used in the analysis and planning stages of a feedback loop to solve a satisfiability problem, where an appropriate combination of successful local tasks would lead to satisfying a required global property. Searching for such a combination corresponds to the synthesis of adaptation strategies, since the combination corresponds to a configuration that can be enacted on the target system. Although using the parametric formula reduces the complexity and time for reasoning, such formulae are still not free from the problem of combinatorial state-space explosion.
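To make the idea concrete, a toy parametric formula in this spirit is sketched below. The goal structure (two sequential tasks with an alternative pair in between) and all probabilities are invented for illustration; the actual BSN formulae are generated by GODA-MDP and are considerably larger.

```python
# Hypothetical parametric formula: the probability of the root goal expressed
# over the success probabilities of lower-level tasks. Evaluating it is a
# constant-time algebraic computation, as described in the text.

def root_goal_probability(p_collect, p_filter_a, p_filter_b, p_analyze):
    # Two alternative filtering tasks: the subgoal succeeds if at least
    # one of the alternatives succeeds.
    p_filter = 1 - (1 - p_filter_a) * (1 - p_filter_b)
    # Sequential composition: all steps must succeed.
    return p_collect * p_filter * p_analyze

p = root_goal_probability(0.99, 0.9, 0.8, 0.95)
```

Searching for local task probabilities that push this value above a required threshold is exactly the satisfiability problem that strategy synthesis solves.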
The pseudocode of Algorithm 1 details the search process to solve a parametric formula captured in a model. The search process has two input parameters, the granularity and the offset, which are used for narrowing or widening the solution/search space. Lines 2-4 capture the monitoring of a set of QoS properties of the target system, the calculation of the current value of the QoS property of interest, and the calculation of the error, that is, how far the current property value is from the setpoint. Then, lines 5-24 represent the search for a combination of independent terms, one for each property in the set of monitored properties, that would lead to the goal. To do so, lines 6-7 reset all terms to the value defined by the offset, and the set of new values is plugged into the model. If the error turns out to be positive, the value of each of the properties is incremented by the error multiplied by a factor dictated by the granularity until they, one by one, reach values that approximate the dependent value to the goal (lines 8-15). Otherwise, the properties are decremented by the same factor (lines 15-22). At line 23, the modified set of values is concatenated into the set of valid strategies. In the end, the best or a sufficient strategy is chosen among the valid ones (line 25).
In this search process, the procedure Apply matches the independent variables of the model/formula to their respective values and performs the calculation that results in a value for the dependent variable; that is, the procedure evaluates the formula. Thus, it is used for quantitative reasoning on the system state. As a result, the algorithm performs a search over the combinations of all independent variables that would lead the system to reach the goal. The search itself increases/decreases the value of one independent variable at a time, gradually changing its value with a step dictated by a factor of the granularity.
The granularity and the offset parameters are fundamental to determine whether the strategy manager will be able to find a combination of values (i.e., to converge to a solution) that satisfies the goal of the system, and to find such a combination in time.
The smaller the granularity, the more likely the algorithm is to find a solution, although it will take more time to do so. In contrast, the offset broadens the search space boundaries, so the larger it is, the more values the search will traverse before it terminates.
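A simplified, hypothetical rendering of this search is sketched below. The function name, the plain-product formula, the granularity-sized tolerance, and the step limit are all our own illustration, not Algorithm 1 verbatim; they only mirror its reset-then-step structure and the roles of granularity and offset.

```python
# Simplified sketch of the strategy-synthesis search: all independent terms
# are reset to `offset`, then stepped one at a time by a granularity-sized
# factor until the formula meets the goal (within a granularity tolerance).

def synthesize_strategy(formula, n_terms, goal, granularity, offset,
                        step_limit=10000):
    values = [offset] * n_terms            # reset all independent terms
    for _ in range(step_limit):
        error = goal - formula(values)
        if abs(error) <= granularity:      # close enough: strategy found
            return values
        # increment (error > 0) or decrement (error < 0) one term at a time
        step = granularity * (1 if error > 0 else -1)
        for i in range(n_terms):
            values[i] = min(1.0, max(0.0, values[i] + step))
            if abs(goal - formula(values)) <= granularity:
                return values
    return None                            # no strategy within the bounds

product = lambda vs: vs[0] * vs[1]
strategy = synthesize_strategy(product, n_terms=2, goal=0.81,
                               granularity=0.01, offset=0.5)
```

Starting both terms at the offset 0.5, the search climbs in 0.01 steps until the product reaches the goal of 0.81 within the tolerance; a coarser granularity would finish faster but might step over the goal entirely.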
Consequently, we need a method to enhance the choice of such parameters, and this is where we employ AI-based optimization.
Manager Optimization. In the following, we discuss the optimization of the search process conducted by the strategy manager to synthesize adaptation strategies. The optimization aims at identifying suitable values for the granularity and offset parameters (cf. Algorithm 1). The optimization of the strategy manager is a single-objective parameter optimization problem, since the goal is to minimize the time required by the strategy manager to synthesize an adaptation strategy. This problem is solved by the optimization pipeline depicted in Figure 4, which consists of the following four steps:
1. Collect Data: We collect data relating the time to find a solution (time) to values of the granularity (gran) and offset parameters. The time variable represents the time taken by the strategy manager to converge to the setpoint. The data is obtained by executing our approach without the optimization in place. Thus, we execute our approach in different scenarios, that is, with different static choices of parameter values for granularity and offset.
2. Learn Model: From the collected data ⟨time, gran, offset⟩, we learn a model using a curve fitting method. The learned model describes the behavior of the strategy manager as a function time = f(gran, offset). Thus, we can use the model to estimate the time to synthesize an adaptation strategy given concrete values for the granularity and offset parameters.
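To illustrate this learning step, a minimal curve-fitting sketch follows. The model shape time = a/gran + b·offset + c and all coefficients are assumptions made for the sake of the example; the actual fitted form depends on the collected data points.

```python
# Hedged sketch of the "learn model" step: ordinary least squares on the
# collected <time, gran, offset> data, assuming time = a/gran + b*offset + c.
# Normal equations are solved by Gaussian elimination with partial pivoting.

def fit(data):
    """data: list of (time, gran, offset); returns coefficients (a, b, c)."""
    rows = [[1.0 / g, o, 1.0] for _, g, o in data]   # design matrix X
    y = [t for t, _, _ in data]
    # Normal equations: (X^T X) beta = X^T y
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(3)]
    m = [xtx[i] + [xty[i]] for i in range(3)]        # augmented matrix
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
    beta = [0.0] * 3                                  # back substitution
    for r in (2, 1, 0):
        beta[r] = (m[r][3] - sum(m[r][c] * beta[c]
                                 for c in range(r + 1, 3))) / m[r][r]
    return tuple(beta)

# Synthetic data generated from time = 2/gran + 5*offset + 1
data = [(2 / g + 5 * o + 1, g, o)
        for g in (0.01, 0.02, 0.05, 0.1) for o in (0.1, 0.3, 0.5)]
a, b, c = fit(data)
```

Because the synthetic data matches the assumed model exactly, the fit recovers the generating coefficients; with real measurements the residuals indicate how well the chosen curve shape explains the data.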
3. Find Optimal Configuration: We use the learned model as input to an evolutionary algorithm, particularly NSGA-II, to find optimal values for the granularity and offset parameters that minimize the time to synthesize an adaptation strategy. For the evolutionary algorithm, a candidate solution is encoded as a chromosome, a vector with two components, one for each parameter. The fitness function is the minimization of the time. The fitness of a specific candidate solution, that is, a concrete pair of two real values for the two parameters, corresponds to evaluating the learned function with this pair of values. Moreover, the optimization is constrained by the problem domain as well as by the ranges of the granularity and offset parameters. The optimization performed by the evolutionary algorithm results in an optimal or near-optimal configuration (parameter setting) for the strategy manager. Initially, a population of one hundred individuals is generated randomly, with the input parameters limited to a given range. An individual is composed of a combination of genes and a calculated target value. In NSGA-II, the genes are represented by the input parameters, in this case gran and offset, and the target value is the time variable. Better ranked solutions, i.e., solutions with the best fitness values, are those individuals with a lower time to find a solution. For the optimization of both the strategy manager and the strategy enactor stages, we have set the algorithm to process one hundred generations. Moreover, we have applied standard operators, namely Polynomial Mutation (PM), Simulated Binary Crossover (SBX), and a tournament selector based on dominance comparison (Pareto).
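The evolutionary search of this step can be pictured with the toy sketch below. It is a deliberately simplified single-objective stand-in for NSGA-II (random initialization, binary tournament selection, blend crossover, Gaussian mutation), and the cost function, bounds, and rates are invented; the paper's pipeline uses the fitted model from step 2 with SBX and polynomial mutation instead.

```python
# Toy evolutionary minimization of a learned cost model time = f(gran, offset).
import random

def evolve(f, bounds, pop_size=100, generations=100, seed=42):
    rnd = random.Random(seed)
    lo, hi = zip(*bounds)
    pop = [[rnd.uniform(l, h) for l, h in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            # binary tournaments pick two parents with lower cost
            a, b = (min(rnd.sample(pop, 2), key=f) for _ in range(2))
            child = [(x + y) / 2 for x, y in zip(a, b)]          # blend crossover
            child = [min(h, max(l, g + rnd.gauss(0, 0.01 * (h - l))))
                     for g, l, h in zip(child, lo, hi)]          # mutation, clamped
            nxt.append(child)
        pop = nxt
    return min(pop, key=f)

# Hypothetical learned model: time is lowest near gran = 0.05, offset = 0.2.
f = lambda s: (s[0] - 0.05) ** 2 + (s[1] - 0.2) ** 2
gran, offset = evolve(f, bounds=[(0.001, 0.1), (0.0, 1.0)])
```

After one hundred generations the population concentrates near the minimum of the cost surface, yielding a near-optimal (gran, offset) pair within the given ranges.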
4. Apply Optimal Configuration: Finally, we apply the optimal or near-optimal configuration, that is, a pair of values (gran, offset), to the strategy manager, improving the performance of the synthesis of adaptation strategies.
Delving into the optimization pipeline, we start by describing the first part of the learning process: the manager optimization. (Strictly speaking, at this stage the optimization modules do rely on previous adaptation data to learn the behavior and optimize the strategy manager parameters; however, we reckon that using the term learning for curve fitting methods and evolutionary algorithms is still disputable.) As mentioned before, the manager behavior is ruled by two features: (i) gran, the granularity with which the strategy manager searches for new solutions, and (ii) offset, the starting point of the search within the solution space. Both attributes are crucial to determine a third variable of interest, (iii) the time to solution, i.e., how long the manager takes to find a suitable adaptation strategy. From a naive perspective, these values are often defined in an uninformed fashion before the system operates, and remain unchanged regardless of the characteristics of the system or the particularities of the domain. Our proposal tweaks this logic by tracing the strategy manager behavior from data collected offline. Knowing that the target variable depends on the gran and offset values, we conduct a curve fitting process to come up with models that explain the time-to-solution variable in terms of the aforementioned parameters. Such a process returns a mathematical function that best fits the data points collected from previous executions of the strategy manager. If the learned function, being the output of the curve fitting process, is simple, it is possible to solve the optimization problem analytically. Otherwise, we solve it using an evolutionary algorithm such as NSGA-II.
We should note that our approach has been conceived to provide informed choices about granularity, offset, and the controller gains at design time. Therefore, the amount of data used for training is not a major concern. A better description of the system behavior allows engineers to make better choices regarding the control-theoretical metrics. It may be the case that the system behavior changes in the face of uncertainty. Clearly, learning online from data and continuously feeding the system with new control-theoretical metrics could leverage the strength of our approach. However, adopting an online learning approach brings additional complexity to the pipeline, which is out of the scope of this work.
3.2. Strategy Enactor and its Optimization
The strategy enactor is responsible for enforcing an adaptation strategy on the target system. The strategies are synthesized by the strategy manager and must be read, interpreted, and enforced by the strategy enactor. Under the guidance of the active strategy, the enactor continually evaluates whether the target system behaves as intended. Otherwise, it adapts the system towards achieving its goals. Whenever the enactor cannot enforce the active strategy, an exception is propagated to the strategy manager in an attempt to receive a new and more adequate adaptation strategy.
Moreover, the strategy enactor realizes a negative feedback loop. In this sense, the enactor continuously monitors the system QoS properties, analyzes them, and checks whether its status is compliant to the goal demanded by the active strategy. As previously discussed, a strategy is defined by a goal, a condition, and a set of actions to achieve a desired configuration of the target system.
Figure 5 depicts the configurable feedback loop for enforcing strategies on the target system. The strategy enactor collects information from the target system and analyzes the QoS property of interest. If the monitored property status does not correspond to the desired goal under the defined condition, the enactor enforces one or more actions, which are optimized accordingly. In this context, we rely on control theory to implement the function that determines the control signal by using a PI controller. In this work, we use the BSN exemplar as a target system, whose knobs are the sampling rates of the sensors, which are adapted based on the reliability of the BSN's central processing component, as affected by the load on this component.
The synthesized adaptation strategy guides the parameters of the closed loop and how the adaptation is executed. To this extent, the PI controller is an instance of the element that enforces the control signal on the target system, characterized as the “perform action” in Fig. 5. Thus, it receives as input the difference between the current state of the observed system property and the goal of the adaptation strategy (the error), and multiplies the error by the gains $K_p$ and $K_i$ coming from the optimized action, according to Eq. 2. In the case of our running example, the PI controller triggers an adaptation command carrying the newly calculated sampling rate for a sensor (e.g., the patient's thermometer). Depending on the values of $K_p$ and $K_i$, the increments or decrements on the actuation variable might not guide the property into converging to the setpoint, might demand too much effort from the system, or might take too long to reach the setpoint. Foremost, the lack of a model relating $K_p$, $K_i$, stability, overshoot, and steady-state error demands an optimization of $K_p$ and $K_i$ towards optimal performance.
Enactor Optimization. The designed PI controller alone is not sufficient to guarantee optimal performance. Therefore, we employ an AI-based optimization technique to tune the controller's parameters, the proportional gain $K_p$ and the integral gain $K_i$, with respect to the desired control-theoretical properties of settling time, overshoot, and steady-state error. In this work, we focus on the overshoot and steady-state error. Thus, the optimization problem is to find optimal values for $K_p$ and $K_i$ that improve the self-adaptation in terms of overshoot and steady-state error. These two control properties should be minimized, resulting in a multi-objective optimization problem.
The optimization pipeline for the strategy enactor is shown in Fig. 6. Except for slight differences, this pipeline and the employed techniques are the same as for the strategy manager (cf. Fig. 4). Considering Fig. 6, the control-theoretical metrics are collected from the behavior of the system during execution. From this data we obtain the data points relating $K_p$ and $K_i$ to settling time, overshoot, and steady-state error. Using this data, we adopt a curve fitting strategy to come up with mathematical functions that describe the relationship between the parameters $K_p$ and $K_i$ and each control-theoretical metric of interest, overshoot and steady-state error (see the second step in Fig. 6). The type of the curve (e.g., linear, polynomial, exponential, etc.) depends on the relation revealed by the data points. This is illustrated in Fig. 7 for the BSN case study.
Once we have these functions, we can feed NSGA-II with them in order to find values of $K_p$ and $K_i$ that minimize the overshoot and steady-state error. Since we now have a multi-objective optimization problem, the meta-heuristic does not return just a single pair of optimal values for $K_p$ and $K_i$, but a set of Pareto-optimal solutions, i.e., several pairs ($K_p$, $K_i$). These results are optimal or near-optimal solutions trading off the two objectives, overshoot and steady-state error, as depicted in Fig. 8.
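The outcome of this step can be illustrated with the self-contained sketch below: a grid of candidate ($K_p$, $K_i$) pairs is evaluated against two learned curves and the Pareto-optimal pairs are kept. Both curve shapes are invented stand-ins for the fitted models of Fig. 7, and the grid bounds are our own assumption.

```python
# Illustration of the enactor tuning outcome: keep the (Kp, Ki) pairs that are
# Pareto-optimal w.r.t. two hypothetical learned curves (both minimized).

overshoot = lambda kp, ki: (kp - 0.3) ** 2 + 0.1 * ki       # lowest near kp=0.3, small Ki
sse = lambda kp, ki: (1 - kp) ** 2 + (0.5 - ki) ** 2        # lowest near kp=1.0, ki=0.5

def pareto_pairs(candidates):
    evals = [((kp, ki), (overshoot(kp, ki), sse(kp, ki))) for kp, ki in candidates]
    def dominated(obj):
        # dominated if some other pair is no worse in both objectives
        # and differs in at least one
        return any(all(o <= p for o, p in zip(other, obj)) and other != obj
                   for _, other in evals)
    return [pair for pair, obj in evals if not dominated(obj)]

grid = [(kp / 10, ki / 10) for kp in range(1, 10) for ki in range(1, 10)]
front = pareto_pairs(grid)
```

The resulting front contains the extreme trade-offs, such as a low-overshoot pair near (0.3, 0.1) and a low-steady-state-error pair near (0.9, 0.5), while dominated gain pairs are discarded.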
Finally, we update the parameters of the PI controller employed in the strategy enactor with the Kp and Ki values whose gains lead to a lower and more balanced overshoot and steady-state error.
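For completeness, a discrete PI controller with the updated gains can be sketched as follows. The exact update law of the enactor’s controller is our assumption, including the anti-windup clamp, which we model after the IW parameter that appears in the experiments:

```python
class PIController:
    def __init__(self, kp, ki, windup_limit=5.0):
        self.kp = kp                 # proportional gain (Kp)
        self.ki = ki                 # integral gain (Ki)
        self.windup = windup_limit   # bound on the accumulated error (assumed IW)
        self.integral = 0.0

    def update(self, setpoint, measured, dt=1.0):
        # One control step: the error drives the proportional and integral terms.
        error = setpoint - measured
        self.integral += error * dt
        # Clamp the accumulated error to limit integral windup.
        self.integral = max(-self.windup, min(self.windup, self.integral))
        return self.kp * error + self.ki * self.integral
```

Each call to `update` would yield the reconfiguration signal (e.g., a sampling-rate increment or decrement) for the current error between the reliability setpoint and the measured reliability.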
4. Experimental Results
Our proposal is evaluated through the execution of an actual prototype from the healthcare domain (BSN) as the target system, alongside an implementation of the system manager components on top of ROS, available at https://www.github.com/lesunb/bsn. The experiments were run on the following configuration: 2x Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz, 8192MB RAM, Ubuntu 18.04.3 LTS, GNU C Compiler version 7.4.0. The evaluation itself follows the Goal-Question-Metric (GQM) methodology (Caldiera and Rombach, 1994). Our focus is on answering two guiding questions from a broader perspective: (i) how can AI aid the parameter optimization of controllers, and (ii) how can AI-based techniques be applied in the search for adaptation solutions in control-theoretical SAS strategies. Thus, we narrow the questions down to an operational scenario without loss of generality; see Table 1.
The first goal (G1) stems from the fact that the proposed AI-based optimization pipeline aims at supporting the search for an optimal adaptation by reducing the time to solution at runtime. Therefore, to support G1, we collect the duration of the optimization pipeline for both components, Strategy Manager and Strategy Enactor. The optimization pipeline relies on the identification of parameters through the collection of control-theoretical metrics from QoS attributes of the system at runtime. Thus, with G2, we want to guarantee that our approximation does lead to efficient solutions. To this end, we measure the efficiency of our method by comparing the control-theoretical metrics reached with the optimized parameter configuration against the best solutions found by an exhaustive search algorithm.
In light of the GQM, the experiment is performed in complementary phases: (i) time and space evaluation of the search for the optimal adaptation, and (ii) evaluation of the adaptation quality with distinct configurations. The following sections describe the experimental setup, the operational scenario under which the experiments were conducted, and finally the results.
4.1. Time and space to find an eligible adaptation
In this phase, we first instrumented the pipeline with a chronometer to measure the duration of the model learning step and the optimal configuration search step, for both the strategy manager and the strategy enactor instances.
For the strategy manager, the curve fitting (for time to solution, granularity, and offset) took on average 0.3099 seconds per execution, and the curve type was classified as exponential. The NSGA-II took on average 0.0023 seconds per execution, totaling 0.3123 seconds per execution for the strategy manager pipeline.
For the strategy enactor, four curve fittings had to be performed: (i) Kp-SSE, (ii) Kp-Overshoot, (iii) Ki-SSE, and (iv) Ki-Overshoot. For (i) the algorithm took 0.01295 seconds to fit a quadratic function, for (ii) 0.0498 seconds to fit an exponential, for (iii) 0.0127 seconds to fit a quadratic, and for (iv) 0.0075 seconds to fit a linear model. Furthermore, the NSGA-II was executed twice: (v) on Kp-SSE & Kp-Overshoot and (vi) on Ki-SSE & Ki-Overshoot. Run (v) took 5.3947 seconds and run (vi) took 4.896 seconds, totaling 10.3741 seconds per execution for the strategy enactor pipeline.
Our execution of NSGA-II performs ten thousand function evaluations. However, this number can be reduced to one thousand without significant quality loss if the execution time of the pipeline needs to be improved.
4.2. Quality of the adaptation
We have deployed and executed the prototype under a scenario where at least one adaptation was triggered during each five-minute execution. Then, 50 combinations of Kp and Ki, called configurations, were executed. The 50 configurations are the product of two sets, Kp in [60, 150] with steps of 10 and Ki in [0.2, 1] with steps of 0.2, with IW = 5. Once we select an optimal configuration with the AI technique, another scenario is performed with it, totaling 51 configuration scenarios; this 51st configuration is the (Kp, Ki) pair selected by the optimization.
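The 50-point configuration grid can be reproduced as follows (a sketch; the variable names are ours):

```python
# Kp from 60 to 150 in steps of 10 (10 values); Ki from 0.2 to 1.0 in steps
# of 0.2 (5 values); IW is fixed at 5 for all configurations.
kp_values = [60 + 10 * i for i in range(10)]
ki_values = [round(0.2 * (i + 1), 1) for i in range(5)]
configs = [(kp, ki) for kp in kp_values for ki in ki_values]  # 10 x 5 = 50 pairs
```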
The scenario was derived from the running example, where the stakeholders wish to maintain the system’s reliability level at 95%. The maintenance process is carried out by the system manager, as follows.
The system manager must continuously monitor the target system’s reliability status at runtime. Moreover, the prototype was developed to satisfy complementary tasks with single-responsibility components. For that reason, the reliability is calculated locally for each component and composed through a model that relates their probabilities of success, i.e., the reliability formula from previous work (Solano et al., 2019). To keep track of the system goals, the system manager systematically analyzes whether the monitored reliability status fulfills them. In case of violations of the adaptation strategy’s conditions, the system manager adapts the target system to maintain the reliability level in accordance with the stipulated goal.
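As an illustration of the composition, for components whose successes are independent and all required (a series composition, one simple instance of the kind of formula used in Solano et al. (2019)), the global reliability is the product of the local ones:

```python
def serial_reliability(component_reliabilities):
    # Probability that every component succeeds: the product of local reliabilities.
    r = 1.0
    for ri in component_reliabilities:
        r *= ri
    return r
```

The actual BSN formula relates the components’ success probabilities in a more structured way, but the idea is the same: local reliabilities are monitored, composed into a system-level value, and checked against the 95% goal.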
In the experimental sessions, we simulated a scenario where multiple sensors flood the central processor at a higher rate than it can handle. On top of that, noise is generated by an external agent that triggers random failures on the sensors, preventing them from forwarding data to the central processor. The simulated failures reduce the number of messages received by the central processor by a random factor, placing another layer of uncertainty on the system execution. To cope with these disturbances to the reliability status, the system manager delivers reconfiguration signals containing increments/decrements to the sensors’ sampling rates, giving the central processor more or less time to keep its processing queue flowing.
The SAS was instrumented to log information regarding the quality of the adaptations w.r.t. the control-theoretic metrics. The resulting data are processed according to each evaluation and presented in Figures 9 and 10.
Figure 9 shows the results of the adaptation space explored by the naïve approach (in red and blue) and by our approach (in black). Our approach was able to reach the near-optimal adaptation space for the 3% threshold on steady-state error and overshoot. Figure 10 provides a bigger picture of the experiments over the 50 configurations and the optimized one. For the sake of space, we refer only to the first 24 configurations, where configuration 1 refers to our approach. The results for overshoot and steady-state error show that our approach performed among the best, reaching a convergence point of 100%. In other words, our approach managed to optimize not only the steady-state error and overshoot but also the stability of our BSN. Hence, the control-theoretic self-adaptation loop of the BSN can be considered quite robust for the evaluated scenarios.
4.3. Threats to Validity
Internal validity. The main threat to internal validity is the data that we collected from the scenarios of executing our approach and that we used for the model learning and the optimization. Different scenarios may lead to different data and thus, to different results of our approach. Moreover, we rely on specific techniques for the model learning (curve fitting) and optimization (NSGA-II). Other techniques could be used, which may lead to different results. In this context, we have not tuned the techniques, especially the hyper-parameters of NSGA-II, to avoid any bias concerning BSN.
External validity. Although we present a generic self-adaptation approach that combines AI and control-theoretical principles, our evaluation focuses on the BSN case study with a single non-functional requirement (reliability) as the adaptation concern. Thus, we cannot generalize our results to other concerns (e.g., performance, security, or costs) and target systems, which calls for further experiments.
Construct validity. The main threat is the correctness of the implementation of our approach and the BSN used for the evaluation. Concerning our approach, at least two authors of this paper have reviewed the implementation and checked the plausibility of the evaluation results based on the experience they have with the BSN from earlier work (Rodrigues et al., 2018; Solano et al., 2019). With this experience we are confident about the validity of the BSN that is publicly available.
5. Related Work
As stated by Weyns (2017), the use of control theory in the design of self-adaptive software can bring several benefits since it allows providing analytical guarantees for several system properties such as stability and robustness. Consequently, the idea of using control-based approaches to achieve self-adaptation for software systems has been widely studied (Shevtsov et al., 2018), and the use of control theory as a well-suited solution for self-adaptation to systematically meet the adaptation goals despite a certain degree of uncertainty of the system and environment has been recognized (Filieri et al., 2015b, 2011, 2017; Diao et al., 2005).
Control-based self-adaptation approaches typically propose controller synthesis to automatically construct a controller for managing the software system’s adaptation needs (Filieri et al., 2015a; Shevtsov and Weyns, 2016; Filieri et al., 2014). The resulting controllers are related to the Strategy Enactor in our work, and these synthesis approaches could be applied to the Strategy Enactor, whose focus is on using a control-theoretical controller. However, our work presents a further module, called Strategy Manager, that synthesizes an adaptation strategy to which the controller subsequently adheres. As previously stated, the combination of these two modules is the essence of our work, where self-adaptation is realized at two different levels: we first use a global goal, set by users, to come up with an adaptation strategy, and then we adapt the target system’s parameters based on the goals (used as setpoints), conditions, and actions defined in the strategy.
Moreover, we use AI-based learning and optimization techniques in our work to reduce the adaptation space at the strategy manager level and to optimize the controller’s parameters at the strategy enactor level. For the latter, the focus is on meeting acceptable system behavior with respect to control properties such as overshoot and steady-state error. In this context, other approaches use learning, online or offline, and other statistical methods to reduce the adaptation space, often called the configuration space, in order to find a configuration for the system in the available time (Gerostathopoulos et al., 2018; Elkhodary et al., 2010; Esfahani et al., 2013; Quin et al., 2019; Jamshidi et al., 2019; Jamshidi and Casale, 2016; Jamshidi et al., 2017). The main difference to our work is that we improve the efficiency of synthesizing adaptation strategies by optimizing the search process (in terms of granularity and offset) that explores the adaptation space. Thus, we optimize the exploration of the search space rather than explicitly reducing the space.
Similar to the strategy enactor in our work, existing control-based approaches for self-adaptation use PI or PID controllers that are adapted at run-time (cf. (Shevtsov et al., 2018)) to compensate for inaccuracies of the model or to handle radical changes. Particularly, values of parameters in the model are estimated and updated at run-time based on measurements, often processed by filter algorithms such as Kalman or Recursive Least Squares (Klein et al., 2014a; Maggio et al., 2014; Klein et al., 2014b; Filieri et al., 2014; Shevtsov and Weyns, 2016; Shevtsov et al., 2017), or controller parameters are tuned online by relay feedback (Filieri et al., 2012) or machine learning (Lama and Zhou, 2013), or offline by experiments (Desmeurs et al., 2015). In contrast, in our approach the controller parameters are tuned by an evolutionary algorithm that identifies (near-)optimal values for them. Evolutionary algorithms have been applied previously to self-adaptive systems in order to find optimal configurations of the target system (Fredericks et al., 2019; Shin et al., 2019), while we apply them to improve the self-adaptation performed by the strategy manager and strategy enactor.
To the best of our knowledge, our work is a novel approach that combines the use of algebraic models (i.e., the parametric formulae) of a system, which are used to synthesize adaptation strategies based on a non-functional requirement, and control theory, which is used to adapt the target system’s parameters systematically based on the previously defined strategy. Also, we combine AI-based optimization with these two approaches to adapt the system in an efficient way while also improving the self-adaptation in terms of the control-theoretical properties: overshoot and steady-state error.
6. Conclusion and Future Work
In this work, we have proposed a hybrid approach that combines control theory principles with AI techniques to optimize the adaptation process in self-adaptive systems. Using curve fitting methods aligned with a meta-heuristic technique, namely NSGA-II, we were able to (i) efficiently synthesize adaptation strategies through high-level reasoning upon the model that represents the managed system, and (ii) enforce actions through control-theoretical principles to ensure that the adaptation strategies are applied and the properties of interest are guaranteed. By these means, we contribute a hierarchical and dynamic system manager that relies on optimization at all levels of the decision-making process towards a more efficient and robust adaptation mechanism. We have evaluated our approach on the BSN prototype implemented in the ROS framework. The evaluation results have shown that our hybrid approach is able to find optimal solutions in the adaptation space while also improving the self-adaptation loop in terms of the control-theoretic properties: stability, overshoot, and steady-state error.
Our future work will be devoted to expanding our technique to accommodate PID controllers and to exploiting further evolutionary algorithms, while ensuring our approach is also able to optimize the adaptation space at runtime. During the experiments, we noticed that defining a suitable data window in which the method should train the model is a hard task. A second issue is knowing when the learned model should be applied. Some kind of quality monitor will likely have to be incorporated into the architecture to orchestrate the learning process. From a performance perspective, this brings another challenge: building the pipeline in a way that continuous learning and adaptation of metrics do not negatively affect the operation of the managed system. We plan to further investigate these issues in future work. Additionally, for generalization purposes, we plan to conduct case studies with exemplars from domains other than healthcare, as well as to further compare our solution to state-of-the-art adaptive control algorithms with respect to time to solution and robustness.
The authors express their utmost gratitude to Léo Moraes and Gabriel Levi (UnB/Brazil) for implementing an accessible version of the BSN for experimentation on SAS. This study was financed in part by CAPES-Brasil – Finance Code 001, through a CAPES scholarship. This work was also partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation, the FLASH project (GR 3634/6-1) funded by the German Science Foundation (DFG), and the EU H2020 Research and Innovation Programme under GA No. 731869 (Co4Robots). The authors also acknowledge financial support from the Centre of EXcellence on Connected, Geo-Localized and Cybersecure Vehicles (EX-Emerge), funded by the Italian Government under CIPE resolution n. 70/2017 (Aug. 7, 2017). Finally, we thank CNPq for partial support under grant number 306017/2018-0.
References

- PID control system analysis, design, and technology. IEEE Transactions on Control Systems Technology 13 (4), pp. 559–576.
- Feedback systems: an introduction for scientists and engineers. Princeton University Press.
- MORPH: a reference architecture for configuration and behaviour self-adaptation. In Proceedings of the 1st International Workshop on Control Theory for Software Engineering, CTSE 2015, New York, NY, USA, pp. 9–16.
- Engineering self-adaptive systems through feedback loops. In Software Engineering for Self-Adaptive Systems, B. H. C. Cheng, R. de Lemos, H. Giese, P. Inverardi, and J. Magee (Eds.), pp. 48–70.
- The goal question metric approach. Encyclopedia of Software Engineering, pp. 528–532.
- A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 6 (2), pp. 182–197.
- Event-driven application brownout: reconciling high utilization and low tail response times. In Proceedings of the 2015 International Conference on Cloud and Autonomic Computing, ICCAC ’15, pp. 1–12.
- Self-managing systems: a control theory foundation. In 12th IEEE International Conference and Workshops on the Engineering of Computer-Based Systems (ECBS’05), pp. 441–448.
- FUSION: a framework for engineering self-tuning self-adaptive software systems. In Proceedings of the Eighteenth ACM SIGSOFT International Symposium on Foundations of Software Engineering, FSE ’10, New York, NY, USA, pp. 7–16.
- A learning-based framework for engineering feature-oriented self-adaptive software systems. IEEE Transactions on Software Engineering 39 (11), pp. 1467–1493.
- Self-adaptive software meets control theory: a preliminary approach supporting reliability requirements. In Proceedings of the 2011 26th IEEE/ACM International Conference on Automated Software Engineering, pp. 283–292.
- Reliability-driven dynamic binding via feedback control. In Proceedings of the 7th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, SEAMS ’12, pp. 43–52.
- Automated design of self-adaptive software with control-theoretical formal guarantees. In Proceedings of the 36th International Conference on Software Engineering, pp. 299–310.
- Automated multi-objective control for self-adaptive software design. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering, pp. 13–24.
- Control strategies for self-adaptive software systems. ACM Trans. Auton. Adapt. Syst. 11 (4), pp. 24:1–24:31.
- Software engineering meets control theory. In Proceedings of the 10th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, pp. 71–82.
- Planning as optimization: dynamically discovering optimal configurations for runtime situations. In 13th International Conference on Self-Adaptive and Self-Organizing Systems, SASO ’19, pp. 1–10.
- Adapting a system with noisy outputs with statistical guarantees. In Proceedings of the 13th International Conference on Software Engineering for Adaptive and Self-Managing Systems, pp. 58–68.
- Search-based software engineering: trends, techniques and applications. ACM Comput. Surv. 45 (1).
- Feedback control of computing systems. John Wiley & Sons, Inc., USA.
- Machine learning meets quantitative planning: enabling self-adaptation in autonomous robots. In Proceedings of the 14th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, SEAMS ’19, Piscataway, NJ, USA, pp. 39–50.
- An uncertainty-aware approach to optimal configuration of stream processing systems. In 2016 IEEE 24th International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS), pp. 39–48.
- Transfer learning for improving model predictions in highly configurable software. In Proceedings of the 12th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, pp. 31–41.
- Brownout: building more robust cloud applications. In Proceedings of the 36th International Conference on Software Engineering, ICSE 2014, pp. 700–711.
- Improving cloud service resilience using brownout-aware load-balancing. In Proceedings of the 2014 IEEE 33rd International Symposium on Reliable Distributed Systems, SRDS ’14, pp. 31–40.
- Computational intelligence: a methodological introduction. 2nd edition, Springer.
- Autonomic provisioning with self-adaptive neural fuzzy control for percentile-based delay guarantee. ACM Trans. Auton. Adapt. Syst. 8 (2).
- Comparison of decision-making strategies for self-optimization in autonomic computing systems. ACM Trans. Auton. Adapt. Syst. 7 (4).
- Control strategies for predictable brownouts in cloud computing. IFAC Proceedings Volumes 47 (3), pp. 689–694. Note: 19th IFAC World Congress.
- Modern control engineering. 4th edition, Prentice Hall PTR, Upper Saddle River, NJ, USA.
- Efficient analysis of large adaptation spaces in self-adaptive systems using machine learning. In 2019 IEEE/ACM 14th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS), pp. 1–12.
- A learning approach to enhance assurances for real-time self-adaptive systems. In 2018 IEEE/ACM 13th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS), pp. 206–216.
- Self-adaptive software: landscape and research challenges. ACM Trans. Auton. Adapt. Syst. 4 (2).
- Control-theoretical software adaptation: a systematic literature review. IEEE Trans. Softw. Eng. 44 (8), pp. 784–810.
- Handling new and changing requirements with guarantees in self-adaptive systems using SimCA. In Proceedings of the 12th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, SEAMS ’17, pp. 12–23.
- Keep it simplex: satisfying multiple goals with guarantees in control-based self-adaptive systems. In Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering, pp. 229–241.
- Dynamic adaptive network configuration for IoT systems: a search-based approach. CoRR abs/1905.12763.
- Taming uncertainty in the assurance process of self-adaptive systems: a goal-oriented approach. In Proceedings of the 14th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, SEAMS ’19, pp. 89–99.
- The case for feedback control real-time scheduling. In Proceedings of the 11th Euromicro Conference on Real-Time Systems, Euromicro RTS’99, pp. 11–20.
- Software engineering of self-adaptive systems: an organised tour and future challenges. Chapter in Handbook of Software Engineering.
- Engineering self-adaptive software systems - an organized tour. In FAS*W@SASO/ICAC, pp. 1–2.