Environment Imitation: Data-Driven Environment Model Generation Using Imitation Learning for Efficient CPS Goal Verification

Cyber-Physical Systems (CPS) continuously interact with their physical environments through software controllers that observe the environments and determine actions. Engineers can verify to what extent the CPS under analysis can achieve given goals by analyzing its Field Operational Test (FOT) logs. However, it is challenging to repeat many FOTs to obtain statistically significant results due to its cost and risk in practice. To address this challenge, simulation-based verification can be a good alternative for efficient CPS goal verification, but it requires an accurate virtual environment model that can replace the real environment that interacts with the CPS in a closed loop. This paper proposes a novel data-driven approach that automatically generates the virtual environment model from a small amount of FOT logs. We formally define the environment model generation problem and solve it using Imitation Learning (IL) algorithms. In addition, we propose three specific use cases of our approach in the evolutionary CPS development. To validate our approach, we conduct a case study using a simplified autonomous vehicle with a lane-keeping system. The case study results show that our approach can generate accurate virtual environment models for CPS goal verification at a low cost through simulations.


1 Introduction

Cyber-Physical Systems (CPS) deeply intertwine physical and software components to continuously collect and analyze data and control physical actuators at runtime Baheti and Gill (2011). CPS have been increasingly studied for many applications, such as autonomous vehicles An et al. (2020); Mullins et al. (2018), robots Bozhinoski et al. (2019); Ahmad and Babar (2016), smart factories Shiue et al. (2018); Wang et al. (2022), and medical devices Zema et al. (2015); Fu (2011).

One of the essential problems in CPS development is to verify to what extent the CPS under development can achieve its goals. To answer this, a developer could deploy a CPS (e.g., an autonomous vehicle) into its operational environment (e.g., a highway road) and verify the CPS's goal achievement (e.g., lane-keeping) using the logs collected from the Field Operational Tests (FOTs). However, conducting FOTs is expensive, time-consuming, and even dangerous, especially when hundreds of repetitions are required to achieve a certain level of statistical significance in the verification results. An alternative is a simulation-based approach in which the software controller of the CPS is simulated with a virtual environment model. Though it can reduce the cost and risk of CPS goal verification compared to using FOTs, it requires a carefully crafted virtual environment model based on deep domain knowledge. Furthermore, it may not be possible at all if a high-fidelity simulator for the problem domain does not exist. This prevents the simulation-based approach from being widely used in practice.

To overcome the difficulty of manually building virtual environment models, we propose an automated data-driven environment model generation approach for CPS goal verification by recasting the problem of environment model generation as a problem of imitation learning. We call this novel approach ENVironment Imitation (ENVI). In machine learning, Imitation Learning (IL) has been widely studied to mimic complex human behaviors in a given task from only a limited number of demonstrations Hussein et al. (2017). Our approach leverages IL to mimic how the real environment interacts with the CPS under analysis, using a small set of log data collected from FOTs. Since the log data records how the CPS and the real environment interacted, our approach can generate an environment model that mimics the state transition mechanism of the real environment in response to CPS actions as closely as possible to that recorded in the log data. The generated environment model is then used to simulate the CPS software controller as many times as needed to statistically analyze the CPS goal achievement.

We evaluate the feasibility of our novel approach, comparing various imitation learning algorithms, in a case study of a lane-keeping system of an autonomous robot vehicle. The evaluation results show that our approach can automatically generate an environment model that mimics the interaction mechanism between the lane-keeping system and its physical environment, even from a minimal amount of FOT log data (e.g., less than 30 seconds of execution logs).

In summary, below are the contributions of this paper:

  1. We shed light on the problem of environment model generation for CPS goal verification with a formal problem definition.

  2. We propose ENVI, a novel data-driven approach for environment model generation utilizing IL.

  3. We assess the application of our approach through a case study with a real CPS and various IL algorithms.

The remainder of this paper is organized as follows: Section 2 illustrates a motivating example. Section 3 provides background on representative imitation learning algorithms considered in our experiments. Section 4 formalizes the problem of the data-driven environment model generation. Section 5 describes the steps of ENVI. Section 6 reports on the evaluation of ENVI. Section 7 discusses implications and open issues. Section 8 introduces related work. Section 9 concludes the paper.

2 Motivating Example

We present a simple example of CPS goal verification to demonstrate a use case of our approach.

Consider a software engineer developing a lane-keeping system of an autonomous vehicle. The engineer aims to develop and test the vehicle’s software controller (i.e., lane-keeping system) that continuously monitors the distance from the center of the lane and computes the steering angle that determines how much to turn to keep the distance as small as possible.

Once the software controller is developed, the engineer must ensure that the vehicle equipped with the controller continues to follow the center of the lane while driving. To do this, the engineer deploys the vehicle on a safe road and collects an FOT log, including the distance dₜ and the steering angle θₜ at each time t ∈ [0, T], where T is a pre-defined FOT duration. Based on the collected data, the engineer can quantitatively assess the quality of the lane-keeping system by calculating the sum of the distances the vehicle deviated from the center of the lane, i.e., Σₜ |dₜ|. The quantitative assessment is used to precisely verify a goal of the system, i.e., whether Σₜ |dₜ| < ε holds for a small threshold ε. Notice that, due to the uncertainties in FOTs, such as non-uniform friction between the tires and the ground, the same FOT must be repeated multiple times, and statistical analysis should be applied to the results.
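As a sketch of this quantitative assessment, the deviation metric and the threshold check can be computed from one FOT log as follows (the log values and the threshold are illustrative, not taken from the paper):

```python
def total_deviation(distances):
    """Sum of absolute deviations from the lane center over one FOT."""
    return sum(abs(d) for d in distances)

def goal_satisfied(distances, epsilon):
    """Boolean verdict: does the accumulated deviation stay below epsilon?"""
    return total_deviation(distances) < epsilon

fot_log = [0.02, -0.05, 0.01, 0.04, -0.03]  # meters, one sample per tick
print(total_deviation(fot_log))
print(goal_satisfied(fot_log, epsilon=0.5))
```

A single log yields one metric value; the statistical analysis mentioned above would aggregate this metric over many repeated FOTs.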

It takes a lot of time and resources to repeat the FOTs enough to obtain statistically significant results. To address this issue, the engineer may decide to rely on simulations. However, using high-fidelity and physics-based simulators, such as Webots Michel (2004) or Gazebo Koenig and Howard (2004), is very challenging, especially for software engineers who do not have enough expertise in physics. It is not easy to accurately design the physical components of the system (e.g., the size of wheels and the wheelbase) and the road in the simulator so that the simulation results are almost identical to the FOT results.

Our approach, ENVI, enables CPS goal verification without such a high-fidelity simulator. The engineer simply provides ENVI with the software controller (i.e., the lane-keeping system under analysis) and a small amount of FOT logs collected beforehand, which is far less data than is required for statistically significant results using FOTs alone. ENVI then automatically generates a virtual environment model that imitates the behavior of the real environment of the lane-keeping system; specifically, the virtual environment model can simulate the next distance dₜ₊₁ for a given distance dₜ and steering angle θₜ, such that the deviation sum Σₜ |dₜ| calculated from the simulations is almost the same as the value calculated from the FOTs. Therefore, by quickly re-running the simulation multiple times, the engineer can obtain statistically significant results about the quality of the software controller at little cost. Furthermore, if multiple software controller versions make different CPS behaviors, the virtual environment model generated by ENVI can be reused to verify the CPS goal achievement of new controller versions that have never been tested in the real environment.

The challenge for ENVI is to automatically generate a virtual environment model that behaves as similarly as possible to the real environment using a limited amount of data. To address this, we leverage imitation learning, detailed in Section 3.

3 Background: Imitation Learning

Imitation Learning (IL) is a learning method that allows an agent to mimic expert behaviors for a specific task by observing demonstrations of the expert Hussein et al. (2017). For example, an autonomous vehicle can learn to drive by observing how a human driver controls a vehicle. IL assumes that an expert decides an action depending only on the state that the expert encounters. Based on this assumption, an expert demonstration is a series of pairs of states and actions, and IL aims to extract the expert's internal decision-making function (i.e., a policy function that maps states to actions) from the demonstration Hussein et al. (2017). We introduce two representative IL algorithms in the following subsections: Behavior Cloning (BC) and Generative Adversarial Imitation Learning (GAIL).

3.1 Behavior Cloning

Behavior Cloning (BC) infers the policy function of the expert using supervised learning Schaal (1996); Argall et al. (2009). Training data can be organized by pairing states and corresponding actions in the expert's demonstration. Existing supervised learning algorithms can then train the policy function to return expert-like actions for given states. Due to its simplicity, BC can quickly create a good policy function that mimics the expert if sufficient demonstration data are available. However, if the training data (i.e., the expert demonstration) does not fully cover the input state space or is biased, the policy function may not mimic the expert behavior correctly Argall et al. (2009).
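As an illustration of the BC idea, the sketch below fits a policy to synthetic (state, action) demonstration pairs by plain supervised regression. The linear policy form, the weights, and the data are assumptions for this example, not part of the paper:

```python
import numpy as np

# Synthetic expert demonstration: states and the actions the expert took.
rng = np.random.default_rng(0)
states = rng.uniform(-1.0, 1.0, size=(200, 2))   # demonstrated states
true_w = np.array([0.8, -0.3])                   # expert's hidden policy weights
actions = states @ true_w                        # expert's demonstrated actions

# Supervised step of BC: fit the policy to (state, action) pairs.
w, *_ = np.linalg.lstsq(states, actions, rcond=None)

def policy(state):
    """Cloned policy: returns an expert-like action for a given state."""
    return state @ w
```

Because the demonstration here happens to cover the state space well, the regression recovers the expert policy; with biased or sparse coverage, the cloned policy would degrade exactly as the text warns.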

3.2 Generative Adversarial Imitation Learning

Generative Adversarial Imitation Learning (GAIL) Ho and Ermon (2016a) utilizes the idea of Generative Adversarial Networks Ho and Ermon (2016b) to evolve the policy function through iterative competition with a discriminator that evaluates the policy function. Both the policy function and the discriminator are therefore trained in parallel.

The policy function gets states in the expert demonstration and produces simulated actions. The discriminator then gets the policy function's input (i.e., states) and output (i.e., simulated actions) and evaluates how closely the policy function's behavior matches the real expert's, as shown in the demonstration. The more similar the simulation is to the expert demonstration, the more the policy function is rewarded by the discriminator. The policy function is trained to maximize the reward from the discriminator.

On the other hand, the discriminator is trained using both the demonstration data and the simulation trace of the policy function. The state-action pairs, which are the inputs and outputs of the policy function, in the demonstration data are labeled as real, whereas the pairs in the simulation trace are labeled as fake. A supervised learning algorithm trains the discriminator to quantitatively evaluate whether a state-action pair is real (returning a high reward) or fake (returning a low reward).

After numerous learning iterations of the policy function and the discriminator, the policy function finally mimics the expert well enough to deceive the matured discriminator. GAIL uses both the expert demonstration data and the internally generated simulation traces of the policy function, so it works well even with small demonstration data Ho and Ermon (2016a). However, because of the internal simulation of the policy function, its learning speed is relatively slow Jena et al. (2020).
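The reward mechanism can be illustrated with a minimal sketch: a logistic-regression discriminator is trained to score expert (state, action) pairs above pairs produced by the current policy, and its output serves as the policy's reward. The 1-D data and linear forms are illustrative assumptions, and the policy-update half of GAIL is omitted for brevity:

```python
import math

states = [0.1 * i for i in range(1, 11)]       # observed states
real = [(s, 2.0 * s) for s in states]          # expert demonstration pairs
fake = [(s, -2.0 * s) for s in states]         # current policy's simulated pairs

# Supervised discriminator training: real pairs labeled 1, fake pairs 0.
w1 = w2 = c = 0.0
for _ in range(500):
    for (s, a), label in [(p, 1.0) for p in real] + [(p, 0.0) for p in fake]:
        d = 1.0 / (1.0 + math.exp(-(w1 * s + w2 * a + c)))
        g = d - label                          # gradient of the logistic loss
        w1 -= 0.1 * g * s
        w2 -= 0.1 * g * a
        c -= 0.1 * g

def reward(s, a):
    """Discriminator output in (0, 1), used as the policy's reward signal."""
    return 1.0 / (1.0 + math.exp(-(w1 * s + w2 * a + c)))
```

In full GAIL, the policy would now be updated to maximize this reward, and the two models would keep alternating as described above.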

4 Problem Definition

This section introduces a mathematical framework for modeling how the CPS under analysis interacts with its environment to achieve its goals. Based on the formal framework, we then define the environment model generation problem for CPS goal verification.

4.1 A Formal Framework for CPS Goal Verification

A CPS achieves its goals by interacting with its physical environment. Specifically, starting from an initial state of the environment, the CPS software controller observes the state and decides an appropriate action to maximize the likelihood of achieving the goals. Taking the action then causes a change in the environment, which the CPS will observe again to decide an action for the next step. We assume the CPS and the environment interact in a closed loop without interference by a third factor. To formalize this process, we present a novel CPS-ENV interaction model inspired by the Markov Decision Process Sutton et al. (1998), which models an agent's sequential decision-making process based on observations of its environmental states.

Specifically, a CPS-ENV interaction model is a tuple M = ⟨S, A, π, δ, s₀⟩, where S is a set of observable states of the environment under consideration, A is a set of possible CPS actions, π: S → A is a policy function that captures the software controller of the CPS, δ: S × A → S is a transition function that captures the transitions of environmental states over time as a result of CPS actions and previous states¹, and s₀ ∈ S is an initial environmental state. For example, starting from s₀, the CPS makes an action a₀ = π(s₀), leading to a next state s₁ = δ(s₀, a₀). By observing s₁, the CPS again makes the next action a₁, and so on.

¹Though we use deterministic policy and transition functions for simplicity, they can be easily extended in terms of probability densities, i.e., π(a | s) and δ(s′ | s, a), to represent stochastic behaviors if needed.

For a CPS-ENV interaction model M, we can consider a sequence of transitions over T steps, where a tuple (sₜ, aₜ, sₜ₊₁) denotes a transition from a state sₜ to another state sₜ₊₁ of the environment by taking an action aₜ of the CPS. More formally, we define a trajectory of M over T time ticks as the sequence of tuples ⟨(s₀, a₀, s₁), (s₁, a₁, s₂), …, (s_{T−1}, a_{T−1}, s_T)⟩.
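The closed-loop interaction that produces such a trajectory can be sketched in a few lines; the proportional controller and the linear transition function below are toy stand-ins, not models from the paper:

```python
def run_interaction(pi, delta, s0, ticks):
    """Roll out a trajectory of (state, action) pairs and the final state."""
    trajectory, state = [], s0
    for _ in range(ticks):
        action = pi(state)              # controller observes the state
        trajectory.append((state, action))
        state = delta(state, action)    # environment reacts to the action
    return trajectory, state

pi = lambda s: -0.5 * s                 # toy proportional controller
delta = lambda s, a: s + a              # toy environment dynamics
traj, final_state = run_interaction(pi, delta, s0=1.0, ticks=10)
```

With these toy functions the state halves every tick, so the trajectory records the system steadily approaching its setpoint.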

Figure 1: Formal framework for CPS goal verification

Since a trajectory of a CPS-ENV interaction model concisely captures the sequential interaction between the CPS under analysis and its environment, one can easily verify whether CPS goals are achieved by analyzing the trajectory. Figure 1 visualizes how a CPS-ENV interaction model is used for simulation-based CPS goal verification. Specifically, let r be a requirement that precisely specifies a goal under verification, and assume that the achievement of r is quantifiable. For a CPS-ENV interaction model M, the verification result of r for M, denoted by verif(r, M), is computed by evaluating the achievement of r on the trajectory of M. Depending on the type of r, the value of verif(r, M) can be Boolean (expressing the success or failure of a requirement with clear-cut criteria) or Float (expressing the measurement of an evaluation metric of r). For example, one of the evaluation metrics of the lane-keeping requirement is the distance the vehicle is away from the center of the lane. As a result of the verification of the lane-keeping goal, the average or maximum distance from the center is computed.

4.2 Problem Statement

The problem of virtual environment model generation for simulation-based CPS goal verification is to find an accurate virtual environment model that can replace the real environment of the CPS under verification while maintaining the same level of verification accuracy. Specifically, for the same CPS under analysis, let Mᵣ = ⟨S, A, π, δᵣ, s₀⟩ be a CPS-ENV interaction model representing the interaction between the CPS and its real environment (in FOTs) and Mᵥ = ⟨S, A, π, δᵥ, s₀⟩ be another model representing the interaction between the same CPS and its virtual environment (in simulations). Notice that we have the same S, A, π, and s₀ for both Mᵣ and Mᵥ since they are about the same CPS², whereas δᵣ and δᵥ are different since they represent how the corresponding environments react to the actions performed by the CPS. For a requirement r, we aim to have δᵥ that minimizes the difference between verif(r, Mᵣ) and verif(r, Mᵥ). Therefore, the problem of virtual environment model generation for CPS goal verification is to find δᵥ such that |verif(r, Mᵣ) − verif(r, Mᵥ)| is minimal.

²Note that S can be the same for Mᵣ and Mᵥ because it is the set of observable states from the perspective of the CPS under analysis.

The virtual environment model generation problem poses three major challenges. First, the number of possible states and actions is often very large, making it infeasible to build a virtual environment model (i.e., a transition function δᵥ) by exhaustively analyzing individual states and actions. Second, since the virtual environment model continuously interacts with the CPS under analysis in a closed loop, even a small difference between the virtual and real environments can lead to significantly different verification results as it accumulates over time, the so-called compounding error problem. This means that simply having a transition function δᵥ that mimics the behavior of δᵣ in terms of individual input-output pairs, without considering the accumulation of errors over sequential inputs, is not enough. Third, generating δᵥ should not be as expensive as conducting many FOTs; otherwise, there is no point in using simulation-based CPS goal verification. Recall that manually crafting virtual environment models in a high-fidelity simulator requires a lot of expertise and can take longer than repeating FOTs enough times to obtain statistically significant verification results. Therefore, a practical approach should generate an accurate virtual environment model efficiently and automatically.

To address the challenges mentioned above, we suggest leveraging IL to automatically generate virtual environment models from only a small amount of data. The data is a set of partial trajectories of Mᵣ, which can be collected from a few FOTs of the CPS under test in its real application environment. Since IL can efficiently extract how an expert makes sequential actions for given states from a limited amount of demonstrations while minimizing compounding errors, it is an excellent match for our problem. For our problem, IL will extract δᵥ, instead of π (the original goal of IL), that can best reproduce the given trajectories of Mᵣ (i.e., FOT logs). The generated δᵥ may differ depending on the amount of trajectory data, so we analyze this in our experiments.

5 Environment Imitation

This section presents ENVI, a novel approach to the problem of environment model generation for CPS goal verification defined in Section 4. We solve the problem by using IL to automatically infer a virtual environment state transition function from the logs recorded during the interaction between the CPS under test and its application environment. In this context, the real application environment is considered an "expert," and the FOT logs serve as the expert's demonstrations.

Figure 2: Environment Imitation process and simulation-based CPS goal verification

Figure 2 shows the overview of the environment model generation and simulation-based CPS goal verification process using our approach. It is composed of three main stages: (1) FOT log collection for model generation, (2) environment model generation using an IL algorithm, and (3) CPS goal verification using the generated environment model. In the first stage, engineers collect FOT logs of a CPS controller under analysis deployed in its real application environment. The interaction between the CPS and the real environment is abstracted as Mᵣ, including the unknown δᵣ. The trajectories of Mᵣ recorded in the logs are then used by IL algorithms in the second stage to automatically generate a virtual environment model δᵥ that imitates δᵣ. In the last stage, the simulation of π in the virtual environment described by δᵥ is performed to generate as many simulation logs as needed for statistical verification. As a result, engineers can statistically verify to what extent a requirement of the CPS is satisfied using only a few FOT logs. In the following subsections, we explain each of the main stages in detail using the example introduced in Section 2.

5.1 FOT Log Collection

The first stage of ENVI is to collect the interaction data between the CPS controller and its real environment, which will later be used as the "demonstrations" of imitation learning to generate the virtual environment model. For a CPS-ENV interaction model Mᵣ defined in Section 4, the interaction data collected over time can be represented as the trajectory of Mᵣ over T steps, i.e., ⟨(s₀, a₀, s₁), …, (s_{T−1}, a_{T−1}, s_T)⟩ where aₜ = π(sₜ) and sₜ₊₁ = δᵣ(sₜ, aₜ) for t = 0, …, T−1. The trajectory can be easily collected from an FOT, since it is common to record the interaction between the CPS controller and its real environment as an FOT log Xu and Duan (2019). For example, the lane-keeping system records time-series data of the distances the vehicle deviated from the center of the lane and the steering angles over time during an FOT.

In practice, trajectories of the same Mᵣ are not necessarily identical due to the uncertainty of the real environment, such as non-uniform surface friction. Therefore, it is recommended to collect a few FOT logs for the same Mᵣ. Since the virtual environment model generated by imitation learning will mimic the given trajectories as much as possible, the uncertainty of the real environment recorded in the trajectories will also be imitated. Section 6 will investigate to what extent virtual environment models generated by ENVI can accurately mimic the real environment in terms of CPS goal verification when the size of the given FOT logs varies.

5.2 Environment Model Generation

The second stage of ENVI is to generate a virtual environment model from the collected FOT logs using an IL algorithm. It consists of two steps: (1) define the environment model structure and (2) run an IL algorithm to generate a trained model.

5.2.1 Defining Environment Model Structure

We implement an environment model as a neural network to leverage imitation learning. Before training the environment model, users define the neural network structure.

Figure 3: The environment model structure

The virtual environment model structure is based on the environmental state transition function δ defined in Section 4. It assumes that the ideal (real) environment generates the next state sₜ₊₁ by taking only the current environment state sₜ and the current CPS action aₜ, meaning that (sₜ, aₜ) is sufficient to determine sₜ₊₁ in the ideal environment at time t. However, in practice, sₜ may not include sufficient information since it is observed by the sensors of the CPS under verification, and the sensors have limited sensing capabilities. To solve this issue, we extend δ for virtual environment models as δᵥ: (S × A)ˡ → S, where l is the length of the history of state-action pairs required to predict the next state. This means that δᵥ uses ⟨(sₜ₋ₗ₊₁, aₜ₋ₗ₊₁), …, (sₜ, aₜ)⟩ to predict sₜ₊₁. Notice that δᵥ has the same form as δ when l = 1. To account for the extension of δ, we also extend the CPS-ENV interaction model to Mᵥ = ⟨S, A, π, δᵥ, ρ₀⟩, where ρ₀ is a partial trajectory of Mᵣ over l steps starting from s₀. Intuitively speaking, ρ₀ is the initial input for Mᵥ, similar to s₀ for Mᵣ.

Based on the extended definition of δᵥ, the structure of δᵥ is shown in Figure 3. The input and output of δᵥ are the l state-action pairs and the predicted next state, respectively, as defined above. Recall that an environmental state s and a CPS action a can be vectors in general; let |v| be the length of a vector v. Then, the number of input neurons of the neural network is l × (|s| + |a|), and the number of output neurons is |s|.
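The input/output sizing can be sketched directly; the vector lengths and history length below are illustrative values:

```python
def model_io_sizes(state_len, action_len, history_len):
    """Input takes history_len (state, action) pairs; output is one next state."""
    n_inputs = history_len * (state_len + action_len)
    n_outputs = state_len
    return n_inputs, n_outputs

# E.g., scalar state and action with a history of 3 pairs.
print(model_io_sizes(state_len=1, action_len=1, history_len=3))
```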

Defining the environment model structure involves two manual tasks. The first is to choose a proper value for the history length l. As l increases, more information is captured in the model input, while the cost of training and executing δᵥ increases. Therefore, it is important to balance the amount of information and the cost of computation. For example, one can visualize the FOT log and see if there are any cyclic patterns in the sequence of environmental states. The second task is to design the hidden layers of δᵥ. The hidden layers specify how the output variables are calculated from the input variables, so-called forward propagation. The design of hidden layers is domain-specific, but general guidelines for neural network design exist for practitioners Hagan et al. (1997); Rafiq et al. (2001); Schilling et al. (2019).

5.2.2 Environment Model Training using IL Algorithms

Once the structure of δᵥ is determined, we can train δᵥ using an IL algorithm with a proper set of training data D = {(X₁, Y₁), …, (Xₘ, Yₘ)}, where m is the number of FOT logs, Xᵢ is the sequence of model inputs collected from the i-th FOT log, and Yᵢ is the corresponding sequence of expected outputs. Since each model input is an l-length sequence of state-action pairs, we can generate D from an FOT log using a sliding window of length l. Specifically, for an FOT log ⟨(s₀, a₀, s₁), …, (s_{T−1}, a_{T−1}, s_T)⟩, the j-th input is xⱼ = ⟨(sⱼ, aⱼ), …, (sⱼ₊ₗ₋₁, aⱼ₊ₗ₋₁)⟩ and the corresponding expected output is yⱼ = sⱼ₊ₗ for j = 0, …, T−l.
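The sliding-window construction can be sketched as follows; the state and action values and the helper name are illustrative, not from the paper:

```python
def make_training_pairs(states, actions, l):
    """Pair each window of l (state, action) pairs with the state after it."""
    pairs = []
    for j in range(len(states) - l):
        window = list(zip(states[j:j + l], actions[j:j + l]))
        pairs.append((window, states[j + l]))       # (x_j, y_j)
    return pairs

states = [0.0, 0.1, 0.15, 0.12, 0.08]   # observed states s_0..s_4
actions = [1, -1, -1, 1, 0]             # CPS actions a_0..a_4
data = make_training_pairs(states, actions, l=2)
```

Each element of `data` is one supervised example: a window of two (state, action) pairs as the input and the following state as the target.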

In the following subsections, we explain how each of the representative IL algorithms, i.e., BC, GAIL, and the combination of BC and GAIL, can be used for training .

Using BC

As described in Section 3.1, BC trains an environment model using supervised learning. Pairs of the inputs and outputs of the real environment recorded in FOT logs are given to δᵥ as training data, and δᵥ is trained to learn the real environment state transitions shown in the training data.

Specifically, the BC algorithm (whose pseudocode is shown in Algorithm 1) takes as input a randomly initialized environment model δᵥ and a training dataset D; it returns an environment model trained using D.

Input : ENV model δᵥ (randomly initialized),
Training data D
Output : ENV model δᵥ (trained)
1 while the stopping condition is not met do
2       foreach (X, Y) ∈ D do
3             Ŷ ← δᵥ(X)  // sequence of model outputs
4             bcLoss ← L(Ŷ, Y)  // float
5             update δᵥ to minimize bcLoss
6       end foreach
7 end while
8 return δᵥ
Algorithm 1 ENVI BC algorithm

The algorithm iteratively trains δᵥ using D until a stopping condition (e.g., a fixed number of iterations or convergence of the model's loss) is met. For each (X, Y) in D, the algorithm repeats the following: (1) executing δᵥ on X to predict a sequence of outputs Ŷ, (2) calculating the training loss based on the difference between Ŷ and Y, and (3) updating δᵥ to minimize the loss. The algorithm ends by returning the trained δᵥ.

Algorithm 1 is intuitive and easy to implement. In addition, the model's loss converges quickly because it is a supervised learning approach. However, if the training data does not fully cover the input space or is biased, the model may not accurately imitate the real environment.
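Algorithm 1's loop can be sketched for a 1-D linear environment model trained by stochastic gradient descent; the model form, learning rate, epoch count, and synthetic transition data are assumptions for this example:

```python
def train_bc(data, epochs=2000, lr=0.05):
    """Fit s' ~ w*s + b*a to (input, target) pairs by per-sample SGD."""
    w, b = 0.0, 0.0
    for _ in range(epochs):                 # stopping condition: fixed epochs
        for (s, a), y in data:
            pred = w * s + b * a            # execute the model on the input
            err = pred - y                  # gradient of the squared-error loss
            w -= lr * err * s               # update to minimize the loss
            b -= lr * err * a
    return w, b

# Synthetic "FOT" pairs from a true transition s' = 0.9*s + 0.2*a.
data = [((s, a), 0.9 * s + 0.2 * a) for s in (-1.0, 0.0, 1.0) for a in (-1.0, 1.0)]
w, b = train_bc(data)
```

On this noise-free, fully covered toy dataset the model recovers the true transition; with biased coverage it would misbehave on unseen inputs, as noted above.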

Using GAIL

As described in Section 3.2, GAIL iteratively trains not only the environment model δᵥ but also a discriminator that evaluates δᵥ in combination with the CPS controller π. Specifically, for a given model input, the discriminator evaluates δᵥ by comparing the next state predicted by δᵥ with the next state recorded in the FOT logs. To do this, the discriminator is trained on D by supervised learning³, and δᵥ is trained using the evaluation results of the discriminator.

³The structure of the discriminator is similar to that of δᵥ, but its input additionally includes the predicted next state, and its output is a reward value.

Algorithm 2 shows the pseudocode of GAIL. Similar to Algorithm 1, it takes as input a randomly initialized environment model δᵥ and a training dataset D; however, it additionally takes as input a randomly initialized discriminator Dis and the CPS controller π under analysis. It returns a trained virtual environment model δᵥ.

Input : ENV model δᵥ (randomly initialized),
Discriminator Dis (randomly initialized),
Function of CPS decision-making logic π,
Training data D
Output : ENV model δᵥ (trained)
1 while the stopping condition is not met do
2       foreach (X, Y) ∈ D do
             // Discriminator training
3             Ŷ ← δᵥ(X)  // sequence of model outputs
4             disLoss ← L_D(X, Y, Ŷ)  // float
5             update Dis to minimize disLoss
             // Environment model training
6             R ← ⟨⟩  // sequence of model rewards
7             x ← the first input in X  // model input
8             for k = 1 to |X| do
9                   ŝ ← δᵥ(x)  // model output
10                  r ← Dis(x, ŝ)  // reward
11                  append r to R
12                  a ← π(ŝ)  // CPS action
13                  x ← x with the oldest pair removed and (ŝ, a) appended
14            end for
15            gailLoss ← aggregate(R)  // float
16            update δᵥ to maximize gailLoss
17      end foreach
18 end while
19 return δᵥ
Algorithm 2 ENVI GAIL algorithm

The algorithm iteratively trains both δᵥ and Dis using D and π until a stopping condition is met. For each (X, Y) in D, the algorithm first trains the discriminator: it executes δᵥ on X to predict a sequence of outputs Ŷ, calculates the discriminator loss indicating how well Dis can distinguish Y from Ŷ, and updates Dis using the loss. Once Dis is updated, the algorithm trains δᵥ using Dis and π. Specifically, it initializes a sequence of rewards and a model input, and then collects a reward for each step: it executes δᵥ on the model input to predict an output, executes Dis on the input and the output to get a reward, appends the reward to the reward sequence, executes π on the predicted output to decide a CPS action, and updates the model input by removing the oldest state-action pair and appending the new one. Finally, the algorithm calculates the environment model loss by aggregating the rewards and updates δᵥ using the loss. The algorithm ends by returning the trained δᵥ.

Notice that, to train δᵥ, GAIL uses the input-output pairs simulated by δᵥ and π in addition to the real input-output pairs in D. This is why it is known to work well even with a small amount of training data Ho and Ermon (2016a); Jena et al. (2020). However, the algorithm is more complex to implement than BC, and the environment model converges slowly or sometimes fails to converge depending on the hyperparameter values.

Using BC and GAIL together

Notice that BC trains δᵥ using the training data only, whereas GAIL also trains δᵥ using simulated data; BC and GAIL can therefore be combined to use both training and simulated data without algorithmic conflict. This idea was suggested by Ho and Ermon (2016a) to improve learning performance, and Jena et al. (2020) later implemented the idea as the BCxGAIL algorithm.

The BCxGAIL algorithm is the same as GAIL in terms of its input and output, and it also trains both δᵥ and the discriminator similarly to GAIL. In particular, the discriminator is updated in the same way as in GAIL. However, δᵥ is updated using both the supervised training loss (line 4 in Algorithm 1) and the reward-based loss (line 15 in Algorithm 2). By doing so, BCxGAIL can converge quickly (similar to BC) with a small amount of training data (similar to GAIL).

5.3 Simulation-based CPS Goal Verification

Using the virtual environment model generated in the previous stage, an engineer can statistically verify whether the CPS controller under analysis satisfies a goal (i.e., compute verif(r, Mᵥ)) through many simulations of Mᵥ.

To simulate Mᵥ, the initialization data ρ₀ should be given. Since ρ₀ is a partial trajectory of Mᵣ over l steps, the engineer should conduct partial FOTs over l steps to obtain ρ₀. Notice that acquiring ρ₀ is much cheaper than conducting full FOTs for FOT-based CPS goal verification, since l is much smaller than T (i.e., the full FOT duration). The engineer then runs the simulation as many times as needed for statistical verification⁴. For example, to verify that a vehicle equipped with a lane-keeping system under development never deviates more than a given distance from the center of the lane, engineers simulate the lane-keeping system several times with the generated environment model. The engineers then analyze the distance farthest from the center of the lane in each simulation and verify whether the requirement is statistically satisfied.

⁴This is because Mᵥ can be non-deterministic, so the same ρ₀ can lead to different simulation results.
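The repeated-simulation step can be sketched as follows; the toy dynamics, noise level, number of runs, and 0.05 m threshold are all illustrative assumptions, not values from the paper:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def simulate_max_deviation(ticks=100):
    """One closed-loop simulation; returns the worst deviation observed."""
    s, worst = 0.0, 0.0
    for _ in range(ticks):
        a = -0.4 * s                            # toy lane-keeping action
        s = s + a + random.gauss(0.0, 0.005)    # learned model + its noise
        worst = max(worst, abs(s))
    return worst

# Many cheap simulations stand in for many expensive FOTs.
runs = [simulate_max_deviation() for _ in range(200)]
satisfied = sum(m < 0.05 for m in runs) / len(runs)
```

The Gaussian term plays the role of the non-determinism mentioned in the footnote: each run yields a different worst-case deviation, and the fraction of runs meeting the threshold supports a statistical verdict.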

In practice, it is common to develop multiple versions of the same CPS controller, for example, sequentially during its evolutionary development Basden et al. (1991); Helps and Mensah (2012); Sirjani et al. (2021). Consider a lane-keeping system controller implemented with a configuration parameter indicating the minimum degree of steering for lane-keeping. One can then develop a new version of the lane-keeping system by changing the parameter value based on the CPS goal verification results of its previous versions. In such an evolutionary development process, for the verification of the new version, we can consider different use cases depending on which versions' FOT logs are used to generate the environment model. Specifically, we consider three use cases:

Case 1: One version is used for training, and verification is performed on the same version as training

This is the basic use case, shown in Figure 4 (a). For example, for the verification of the first version of the lane-keeping system controller, some FOT logs of that version must be collected, since there are no previous versions (and hence no FOT logs of them). Since Training involves One version and Verification is for the Known version, we refer to this case as TOVK.

Case 2: Multiple versions are used for training, and verification is performed on one of the versions used for training

Multiple versions of the CPS controller can be used for training, as shown in Figure 4 (b). For example, when there are sets of FOT logs collected from previously developed versions of the lane-keeping system in addition to the FOT logs collected from the new version, all the logs, associated with different parameter values, can be used together to generate a single environment model. This allows us to best utilize all FOT logs for virtual environment model generation. Since Training involves Multiple versions and Verification is for one of the Known versions, we refer to this case as TMVK.

Case 3: Multiple versions are used for training, and verification is performed on a new version that has never been used for training

As shown in Figure 4 (c), this is similar to the TMVK use case, but no FOT logs collected from the new version are used. In other words, only the previously collected FOT logs are used for the verification of the new version. This significantly reduces the cost of new FOTs for the CPS goal verification of the new version. Since Training involves Multiple versions and Verification is for an Unknown version, we refer to this case as TMVU.

Figure 4: Use cases of simulation-based verification using ENVI

6 Case Study

This section provides a case study to evaluate the applicability of our approach in various use cases introduced in Section 5.3. Specifically, we first investigate the accuracy of CPS goal verification results when ENVI is used for a single CPS controller version (i.e., the TOVK use case). We then analyze if ENVI can efficiently generate a single environment model that can be used for the CPS goal verification of multiple CPS controller versions (i.e., the TMVK use case). Last but not least, we also investigate if the single environment model can be used for the CPS goal verification of a new CPS controller version that has never been used for training (i.e., the TMVU use case). To summarize, we answer the following research questions:

  1. Can ENVI generate a virtual environment model that can replace the real environment in the CPS goal verification for a single CPS controller version? (TOVK)

  2. Can ENVI generate a virtual environment model that can replace the real environment in the CPS goal verification for multiple CPS controller versions? (TMVK)

  3. Can ENVI generate a virtual environment model that can replace the real environment in the CPS goal verification for a new CPS controller version? (TMVU)

6.1 Subject CPS

Figure 5: Case study subject CPS: a LEGO-lized autonomous vehicle

To answer the research questions in the context of a real CPS development process, we implement a simplified autonomous vehicle equipped with a lane-keeping system. We utilize an open physical experimental environment Shin et al. (2021) that abstracts an autonomous vehicle as a programmable LEGO robot and a road as a white and black paper lane, as shown in Figure 5. The goal of the lane-keeping system is to keep to the center of the lane, indicated by the border between the white and black areas, while driving, so we aim to verify the goal achievement of the lane-keeping system (e.g., how smoothly it drives following the lane center). Like many other CPSs, the LEGO-lized autonomous vehicle comprises three parts: sensors, a controller, and actuators. A sensor (e.g., a color sensor) provides data observing the CPS environment to the controller. The controller (e.g., a Python program in a LEGO brick) commands the actuators (e.g., wheel motors) that make the CPS act.

As for the controller, we consider multiple versions of the same lane-keeping system and compare them using simulation-based CPS goal verification to find the one that lets the ego vehicle drive the most smoothly along the center of the lane. To do this, we develop a template of rule-based lane-keeping logic and instantiate it into multiple versions of the same lane-keeping system with different parameter values. Algorithm 3 shows the template logic with a configurable parameter indicating the degree of rotation; the algorithm takes as input a color value (ranging from 0, the darkest, to 100, the brightest) from the color sensor and returns an angle for the rotation motor. A positive/negative angle means turning right/left, respectively. The algorithm simply turns right if the color value is greater than 50 (i.e., the color is darker than gray) and turns left if it is less than 50 (i.e., the color is lighter than gray); otherwise (i.e., the color is exactly gray), the algorithm goes straight. The parameter value determines the degree of turning right and left. We consider five different parameter values, from to in steps of , in our case study.

Config : r, a positive float of unit rotation degree
Input  : c, a float of lane color value
Output : a, a float of rotation angle value

if c > 50 then
    a ← r        // Turn right
else if c < 50 then
    a ← −r       // Turn left
else
    a ← 0        // Go straight
end if
return a

Algorithm 3: Lane-keeping system controller logic
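For concreteness, Algorithm 3 can be rendered in Python as follows; the variable names are illustrative.

```python
def lane_keeping_action(color: float, r: float) -> float:
    """Rule-based lane-keeping controller of Algorithm 3.

    `color` is the sensed lane color value (0 to 100) and `r` is the
    configurable unit rotation degree; the returned angle is positive
    for a right turn and negative for a left turn.
    """
    if color > 50:
        return r       # Turn right
    elif color < 50:
        return -r      # Turn left
    else:
        return 0.0     # Go straight
```

Instantiating different controller versions then amounts to fixing different values of `r`, which is how the five versions in the case study are obtained.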

Although the algorithm simplifies the logic of lane-keeping systems down to a single configuration parameter, building a parameterized controller and optimizing its configuration are common in practice Tao and Bin (2008); David et al. (2012). In addition, engineers can observe that changing the configuration changes the CPS behavior in the real environment. Figure 6 shows partial FOT logs of the lane-keeping system with the different parameter values used in our case study; we can see how the interaction between the lane-keeping system and the real environment varies depending on the parameter value.

Based on Algorithm 3, we implement five different CPS controllers to cover the three use cases (i.e., TOVK, TMVK, and TMVU) described in Section 5.3. Specifically, we follow an evolutionary development Helps and Mensah (2012) scenario in which (1) a first CPS controller version is developed and its goal achievement is verified (i.e., TOVK), (2) two further versions are developed, an environment model is generated using the FOT logs of all three versions together, and the model is used to verify each developed version (i.e., TMVK), and (3) two more versions are developed and their goal achievements are verified using the previously generated virtual environment model, without using any FOT logs of these new versions (i.e., TMVU).

Figure 6: Different interactions between the CPS and its real environment depending on different CPS controller versions

To assess the goal achievement of the different CPS controllers (i.e., how smoothly the vehicle drives following the lane), we define multiple driving performance metrics by investigating driving traces collected during our preliminary experiments Cherrett and Pitfield (2001); Shin et al. (2021). Specifically, given time length , eight driving quality metrics are defined as follows (visualized in Figure 7):

  1. number of steady states : indicating how many times the vehicle stays within the lane-center thresholds

  2. total steady-state duration : indicating how long the vehicle stays within the lane-center thresholds

  3. number of overshoots : indicating how many times the vehicle overshoots the upper threshold of the lane center

  4. sum of overshoot amplitudes : indicating how far the vehicle overshoots in total

  5. total overshoot duration : indicating how long the vehicle overshoots

  6. number of undershoots : indicating how many times the vehicle undershoots the lower threshold of the lane center

  7. sum of undershoot amplitudes : indicating how far the vehicle undershoots in total

  8. total undershoot duration : indicating how long the vehicle undershoots

It is straightforward that the smaller the overshooting and undershooting metrics (i.e., metrics 3–8), the better. In addition, if the vehicle never deviates from the lane-center thresholds, the steady state continues uninterrupted and its duration becomes . Therefore, in the ideal case (e.g., driving exactly on the lane center), the first metric is , the second metric is , and the other metrics are all .
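Under stated assumptions, the eight metrics can be computed from a trace of signed distances from the lane center as follows. Counting an episode whenever the trace enters a new steady/overshoot/undershoot phase is our reading of Figure 7, not the paper's exact definition, and the threshold values and sampling interval are placeholders.

```python
def driving_metrics(trace, lower, upper, dt=1.0):
    """Compute counts, durations, and amplitude sums of the steady,
    overshoot, and undershoot phases of a lane-keeping trace.
    `trace` holds distances from the lane center; `lower`/`upper` are
    the lane-center thresholds; `dt` is the sampling interval."""
    def phase(d):
        return "over" if d > upper else "under" if d < lower else "steady"

    counts = {"steady": 0, "over": 0, "under": 0}       # metrics 1, 3, 6
    durations = {"steady": 0.0, "over": 0.0, "under": 0.0}  # metrics 2, 5, 8
    amplitudes = {"over": 0.0, "under": 0.0}            # metrics 4, 7
    prev = None
    for d in trace:
        p = phase(d)
        if p != prev:            # a new episode of this phase begins
            counts[p] += 1
        durations[p] += dt
        if p == "over":
            amplitudes["over"] += d - upper
        elif p == "under":
            amplitudes["under"] += lower - d
        prev = p
    return counts, durations, amplitudes
```

On an ideal trace that never leaves the thresholds, this yields a single steady-state episode covering the whole duration and zeros everywhere else, matching the ideal case described above.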

Figure 7: Driving quality metrics in a lane-keeping system log

6.2 ENVI Experimental Setup

As described in Section 5, the CPS goal verification using ENVI follows three main stages: (1) FOT log collection, (2) environment model generation, and (3) simulation-based CPS goal verification. In the following subsections, we explain our experimental setup for each stage in detail.

6.2.1 FOT Log Collection

For each of the five CPS controller versions, we conduct 30 FOTs of the simplified autonomous vehicle and collect 30 logs to capture how the vehicle interacts with the real environment. At each time step, the following information is recorded in the logs: (1) a lane color value as the environmental state observed by the vehicle's color sensor and (2) a steering angle as the CPS action decided by the vehicle's controller. Therefore, an FOT log is a sequence of state-action pairs whose length is the FOT duration. According to the vehicle's hardware spec, it records 25 state-action pairs per second. Since a sequence of 25 state-action pairs is enough to observe the behavior of a CPS controller, we set the FOT duration to 25 (i.e., one FOT log covers one second).
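The resulting log structure can be sketched as a simple recording loop; `read_color` and `controller` are hypothetical stand-ins for the color sensor and the lane-keeping program.

```python
from typing import NamedTuple, List

class Step(NamedTuple):
    color: float   # environmental state observed by the color sensor
    angle: float   # steering action decided by the controller

FOTLog = List[Step]  # one log: T = 25 state-action pairs (25 Hz for one second)

def record_fot(read_color, controller, T=25) -> FOTLog:
    """Hypothetical recording loop: at each step, log the observed
    state together with the action the controller takes in response."""
    log = []
    for _ in range(T):
        c = read_color()
        log.append(Step(c, controller(c)))
    return log
```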

6.2.2 Environment Model Generation

To investigate the impact of using different IL algorithms, we generate different environment models using BC, GAIL, and BCxGAIL. We implement the algorithms in PyTorch Paszke et al. (2019). BC uses the ADAM optimizer Kingma and Ba (2017) to update environment models. Since GAIL needs a policy gradient algorithm to update models Ho and Ermon (2016a), we use the state-of-the-art Proximal Policy Optimization (PPO) algorithm Schulman et al. (2017). As for the hyperparameters of the IL algorithms, we use the default values from the original papers Schulman et al. (2017); Ho and Ermon (2016a). Table 1 shows the hyperparameter values used in our evaluation.

Algorithm | Hyperparameter                    | Value
BC        | Epoch                             | 300
          | Learning rate                     | 0.00005
GAIL      | Epoch                             | 300
          | Model learning rate               | 0.00005
          | PPO num. policy iteration         | 10
          | Discriminator learning rate       | 0.01
          | PPO num. discriminator iteration  | 10
          | PPO reward discount               | 0.99
          | PPO GAE parameter                 | 0.95
          | PPO clipping                      | 0.2
Table 1: Hyperparameter values for IL algorithms

As for the model structure, we set the length of history as 10, meaning that the input of a virtual environment model is a 20-dimensional vector (i.e., a sequence of 10 state-action pairs). We use a simple design for hidden layers for both the virtual environment model and discriminator. Tables 2 and 3 summarize the structures of virtual environment model and discriminator, respectively.

#     | Layer                 | Output units
input | -                     | 20
1     | fully connected layer | 256
2     | tanh                  | 256
3     | fully connected layer | 256
4     | tanh                  | 256
5     | fully connected layer | 1
6     | tanh                  | 1
Table 2: Environment model structure
#     | Layer                 | Output units
input | -                     | 21
1     | fully connected layer | 256
2     | ReLU                  | 256
3     | fully connected layer | 256
4     | ReLU                  | 256
5     | fully connected layer | 1
6     | Sigmoid               | 1
Table 3: Discriminator structure
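In PyTorch, the two structures in Tables 2 and 3 correspond directly to the following sketch:

```python
import torch
import torch.nn as nn

# Environment model (Table 2): the 20-dimensional input is the flattened
# history of 10 state-action pairs; the final tanh keeps the predicted
# state within the normalized range.
env_model = nn.Sequential(
    nn.Linear(20, 256), nn.Tanh(),
    nn.Linear(256, 256), nn.Tanh(),
    nn.Linear(256, 1), nn.Tanh(),
)

# Discriminator (Table 3): the 21-dimensional input is the same history
# plus a candidate next state; the sigmoid output scores how "real"
# (FOT-like) the transition looks.
discriminator = nn.Sequential(
    nn.Linear(21, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1), nn.Sigmoid(),
)
```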

As for the model training, to better understand the training data efficiency of each of the IL algorithms, we vary the number of FOT logs used and compare the resulting models. Specifically, among the 30 FOT logs collected in Section 6.2.1, we randomly select a subset of logs for training a virtual environment model and vary its size from 3 to 30 in steps of 3. For all models, the normalized state and action values (ranging between and ) recorded in the FOT logs are used for training.

6.2.3 Simulation-based CPS Goal Verification

To verify the CPS goal achievement, each of the five CPS controller versions is simulated multiple times with the environment models generated by ENVI, and the simulation logs are used to assess the degree of CPS goal achievement in terms of the eight driving performance metrics defined in Section 6.1.

6.3 CPS Goal Verification Accuracy

To evaluate how accurate the simulation-based verification using ENVI is with respect to FOT-based verification that uses a sufficient number of FOTs, we measure the similarity between the FOT-based and simulation-based verification results for a set of verification requirements. Specifically, for a set of CPS goals (requirements) , the CPS goal verification accuracy of a virtual environment model for the CPS (controller) is defined as

where represents the interaction between the CPS under verification (represented by ) and its real environment, and represents the interaction between the same CPS and . As for the set of requirements , we consider eight CPS goals (requirements) based on the eight driving performance metrics defined in Section 6.1. Since individual requirements have different ranges, we normalize each term to a value between 0 and 1 using its possible minimum and maximum values. As a result, the accuracy ranges between 0 and 1; the higher its value, the more accurate the virtual environment model.
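Since the formula itself appears only in the original paper, the following is an illustrative reconstruction of the description above (per-requirement min/max normalization of the FOT-vs-simulation difference, averaged into a score in [0, 1]); all names and values are hypothetical.

```python
def verification_accuracy(fot_metrics, sim_metrics, bounds):
    """Illustrative accuracy measure: for each requirement, take the
    absolute difference between the FOT-based and simulation-based
    metric values, normalize it by the requirement's possible (min, max)
    range, and average one-minus-difference over all requirements.
    `fot_metrics`/`sim_metrics` map requirement names to values and
    `bounds` maps them to (min, max) ranges."""
    errs = []
    for req, (lo, hi) in bounds.items():
        err = abs(fot_metrics[req] - sim_metrics[req]) / (hi - lo)
        errs.append(min(err, 1.0))   # clamp so accuracy stays in [0, 1]
    return 1.0 - sum(errs) / len(errs)
```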

To compute for , we perform 100 FOTs using our autonomous robot vehicle and collect the FOT logs. Note that these logs are used to evaluate the simulation-based verification accuracy and are therefore different from the 30 FOT logs used for training the virtual environment models described in Section 6.2.

To compute for , we perform 100 simulations using a virtual environment model generated by ENVI. To make the FOT-based and simulation-based verification comparable, the same initial conditions must be used. To achieve this, for each simulation we provide with the initial ten pairs of states and actions of the corresponding FOT log among the 100 collected logs.

Notice that is the only real data given to to compute . From the environment model's point of view, the CPS under verification is a black box and the value of its configuration parameter is unknown, meaning that the model must predict how the real environment continuously interacts with a black-box controller given only that initial data.
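The closed-loop simulation seeded with the first ten real state-action pairs can be sketched as follows. The sliding 10-pair history matches the 20-dimensional model input described in Section 6.2.2, while the function names and horizon are illustrative.

```python
from collections import deque

def simulate(env_model, controller, init_pairs, horizon=25):
    """Closed-loop simulation sketch: seed the model with the first ten
    (state, action) pairs of a real FOT log (`init_pairs`), then let the
    model predict each next state from the sliding 10-pair history while
    the black-box controller reacts to it. `env_model` takes a flat list
    of 20 values and returns the next state."""
    history = deque(init_pairs, maxlen=10)   # ten (state, action) pairs
    trace = []
    for _ in range(horizon):
        flat = [v for pair in history for v in pair]
        s = env_model(flat)                  # model's predicted next state
        a = controller(s)                    # controller's reaction to it
        history.append((s, a))               # slide the history window
        trace.append((s, a))
    return trace
```

The controller is called only through its input/output interface here, which reflects the black-box assumption above: the model never sees the controller's configuration parameter.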

To account for the randomness in measuring and , we repeat the experiment 30 times and report the average.

6.4 Comparison Baseline

It would be ideal to compare ENVI with other existing environment model generation approaches; however, to the best of our knowledge, no such approach exists. Therefore, we use a random environment model as an alternative comparison baseline. The random environment model changes the environmental state randomly, regardless of CPS actions. As a result, in addition to the three IL algorithms, we compare four different virtual environment model generation approaches (BC, GAIL, BCxGAIL, and Random) in terms of verification accuracy.
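A minimal sketch of the random baseline, assuming for illustration that states are normalized to [−1, 1] (the exact range is left open in the text):

```python
import random

def random_env_model(history, lo=-1.0, hi=1.0):
    """Random baseline: the next environmental state is drawn uniformly
    from the normalized state range, ignoring the CPS history entirely."""
    return random.uniform(lo, hi)
```

Plugging this function in place of a learned model turns the closed-loop simulation into the Random baseline used in the comparison.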

6.5 Experiment Results

6.5.1 RQ1: TOVK Use Case

RQ1 aims to evaluate whether ENVI can generate a virtual environment model for the CPS goal verification of a single CPS controller version. To answer RQ1, we generate a virtual environment model using the FOT logs of the controller version with and measure where indicates the CPS controller version with .

Before we investigate the accuracy under different configurations (i.e., different model generation algorithms and different numbers of FOT logs used for training), we first visualize the behaviors of the real and virtual environments when interacting with the CPS. Figure 8 shows the behaviors of the real (red) and virtual (blue) environments in terms of the environmental states (y-axis) generated by the continuous interaction with the lane-keeping system over time (x-axis), when the number of FOT logs used for training is 3 (Figure 8(a)), 15 (Figure 8(b)), and 30 (Figure 8(c)). Compared to random, we can see that BC, GAIL, and BCxGAIL generate virtual environment models that closely mimic the real environment. Considering that a slight difference between the virtual and real lane colors at one moment can accumulate over time due to the closed-loop interaction between the virtual environment model and the lane-keeping system, the visualization shows that all the IL algorithms can learn how the real environment interacts with the lane-keeping system over time without significant errors. Moreover, for each of the IL algorithms, the more FOT logs are used, the closer the virtual environment's behavior is to the real environment's. Given these promising visualization results, we investigate the CPS goal verification accuracy below.

Figure 8: Comparison of real (i.e., FOT) and simulation log data
Figure 9: Verification accuracy of environment model generation approaches for TOVK use case

Figure 9 shows how the CPS goal verification accuracy varies with the number of FOT logs used for training the virtual environment model, for each model generation algorithm. Table 4 additionally provides the accuracy values for representative cases (i.e., when the number of FOT logs is 3, 15, and 30). Overall, due to the characteristics of the eight driving performance metrics and their normalization, the random approach's verification accuracy is around 82.5%. Nevertheless, all environment models generated by ENVI achieve higher than 96% verification accuracy, far above that of the random approach. This means that the IL algorithms used in ENVI are significantly better than the random baseline at generating accurate virtual environment models. Regarding training data efficiency, the verification accuracy increases only slightly as the number of FOT logs increases, implying that ENVI can generate an accurate environment model from even a very small number of FOT logs (e.g., three). Comparing the IL algorithms, we can clearly see that BCxGAIL outperforms the others regardless of the number of FOT logs used. This is because using BC and GAIL together lets the convergence speed and the data efficiency of model training complement each other, as explained in Section 5.2. This suggests that engineers can expect the highest model accuracy in this use case from the BCxGAIL algorithm.

Algorithm | Logs | Accuracy
Random    | -    | 82.47%
BCxGAIL   | 3    | 98.69%
          | 15   | 99.20%
          | 30   | 99.30%
GAIL      | 3    | 96.15%
          | 15   | 97.62%
          | 30   | 97.83%
BC        | 3    | 97.53%
          | 15   | 97.27%
          | 30   | 97.49%
Table 4: Verification accuracy results for the TOVK use case. The best accuracy for each number of FOT logs is highlighted in bold.

The answer to RQ1 is that ENVI can generate an accurate virtual environment model that can replace the real environment in FOTs using only a small number of FOT logs. Among the three IL algorithms used in ENVI, BCxGAIL outperforms the others in terms of the CPS goal verification accuracy.

6.5.2 RQ2: TMVK Use Case

RQ2 aims to evaluate whether ENVI can generate a virtual environment model for the CPS goal verification of multiple CPS controller versions. To answer RQ2, we measure the verification accuracy of the same virtual environment model for different lane-keeping system controller versions. Specifically, we first train using the FOT logs of three lane-keeping system controller versions (i.e., , , and ) and then assess for each .

(a) Controller under verification
(b) Controller under verification
(c) Controller under verification
Figure 10: Verification accuracy of environment model generation approaches for TMVK use case
Algorithm | Logs | Ver. 1 | Ver. 2 | Ver. 3 | Avg.
Random    | -    | 69.63% | 82.04% | 80.96% | 77.54%
BCxGAIL   | 3    | 97.38% | 98.78% | 98.54% | 98.23%
          | 15   | 96.95% | 99.20% | 99.24% | 98.46%
          | 30   | 97.47% | 99.33% | 99.34% | 98.71%
GAIL      | 3    | 96.16% | 96.64% | 97.25% | 96.68%
          | 15   | 98.21% | 98.32% | 97.92% | 98.15%
          | 30   | 98.52% | 98.59% | 98.80% | 98.63%
BC        | 3    | 95.77% | 97.74% | 97.80% | 97.10%
          | 15   | 96.14% | 97.91% | 97.85% | 97.30%
          | 30   | 97.17% | 98.15% | 97.71% | 97.67%
Table 5: Verification accuracy results for the TMVK use case. The best accuracy for each number of FOT logs and for each controller is highlighted in bold.

Figure 10 shows the verification accuracy results depending on the number of training FOT logs for the three different controller versions. Table 5 provides the accuracy values when the number of FOT logs is 3, 15, and 30; the best accuracy for each controller version and number of FOT logs is highlighted in bold. Overall, the virtual environment models generated by ENVI using the FOT logs of multiple controller versions achieve much higher verification accuracy (at least 95%) than the random model, even when only a few FOT logs are used for training. Considering the different interaction patterns of the different CPS controller versions shown in Figure 6, the high verification accuracy indicates that, even with few FOT logs, the IL algorithms can learn how the real environment interacts with different CPS controller versions and generate a single virtual environment model that covers all the interaction patterns. Comparing the IL algorithms, BCxGAIL generally generates the most accurate environment models for a given number of logs, whereas GAIL sometimes outperforms BCxGAIL, and BC never outperforms the others. This is because GAIL can infer an accurate environment model from small training data (e.g., fewer than 30 logs) better than BC, as already demonstrated by Ho and Ermon (2016a).

The answer to RQ2 is that ENVI can generate an accurate virtual environment model that can be shared in the CPS goal verification of different CPS controller versions. Among the IL algorithms used in ENVI, BCxGAIL generally outperforms the others in terms of the CPS goal verification accuracy.

6.5.3 RQ3: TMVU Use Case

RQ3 aims to evaluate whether ENVI can generate a virtual environment model for the CPS goal verification of a new controller version that has never been used for training the model. To answer RQ3, we measure, for a new controller version, the verification accuracy of the virtual environment models generated from multiple previous controller versions. Specifically, we first generate the model in the same way as in RQ2 and then assess the accuracy for each new version.

(a) Controller under verification
(b) Controller under verification
Figure 11: Verification accuracy of environment model generation approaches for TMVU use case.
Algorithm | Logs | New ver. 1 | New ver. 2 | Avg.
Random    | -    | 79.63% | 81.60% | 80.61%
BCxGAIL   | 3    | 98.44% | 98.76% | 98.60%
          | 15   | 97.47% | 99.32% | 98.40%
          | 30   | 97.51% | 99.53% | 98.52%
GAIL      | 3    | 97.03% | 96.85% | 96.94%
          | 15   | 98.94% | 98.18% | 98.56%
          | 30   | 99.21% | 98.73% | 98.97%
BC        | 3    | 97.09% | 97.79% | 97.44%
          | 15   | 96.62% | 98.18% | 97.40%
          | 30   | 96.71% | 98.03% | 97.37%
Table 6: Verification accuracy results for the TMVU use case. The best accuracy for each number of FOT logs and for each controller is highlighted in bold.

Similar to RQ2, Figure 11 and Table 6 show the verification accuracy results. In all cases, the generated model achieves more than 96% accuracy, which is much higher than random. This means that the virtual environment model generated using the FOT logs of previously developed CPS controller versions can be used, with high accuracy, for the CPS goal verification of newly developed versions. It implies that the virtual environment model can learn the interaction patterns between the real environment and different CPS controller versions and generalize them to unknown versions. Therefore, if an accurate virtual environment model has been created in the TMVK use case, verifying new CPS controller versions only requires simulating them many times, without additional FOTs. This can significantly reduce the cost of CPS goal verification in practice. Regarding the IL algorithms, GAIL and BCxGAIL outperform BC, as in RQ2. This implies that GAIL and BCxGAIL are the recommended IL algorithms when ENVI is used for the CPS goal verification of new CPS versions.

The answer to RQ3 is that ENVI can generate an accurate virtual environment model to verify unknown controller versions whose FOT logs have never been used for training. Regarding the IL algorithms, BCxGAIL and GAIL outperform BC in all cases.

6.6 Threats to Validity

In terms of external validity, our case study focused on only a lane-keeping system in a simplified CPS implemented as a LEGO-lized autonomous vehicle, and used only one parameter (i.e., the degree of rotation) to represent different versions of the software controller. Although the subject CPS of our case study differs from real CPSs (e.g., full-scale autonomous vehicles), our simplified CPS is representative of CPSs in practice in terms of its continuous interaction with the environment and the distinction between controller versions through parameter values. Applying ENVI to more complex CPSs could show different results, but the applicability of ENVI to the various use cases (i.e., TOVK, TMVK, and TMVU) shown in this paper remains valid for CPSs with such characteristics. Nevertheless, additional case studies with more complex CPSs are required to improve the generalizability of our results.

In terms of internal validity, the goal achievement measure defined on specific driving quality metrics could be a potential threat, since the evaluation of the lane-keeping system's goal could be biased toward a specific aspect of driving. To mitigate this threat, in our case study, we defined eight driving qualities from the FOT logs, motivated by Cherrett and Pitfield (2001), and aggregated the results to comprehensively understand whether the lane-keeping system under analysis works well. Hyperparameter settings for the machine learning models (e.g., number of iterations, learning rates, etc.) could be another potential threat to internal validity, since the performance of machine learning models can depend heavily on hyperparameter values Wang and Gong (2018); Probst et al. (2019). We used the default values provided in the original studies Schulman et al. (2017); Ho and Ermon (2016a); nevertheless, hyperparameter tuning is an important research field, so it remains interesting future work. In addition, the verification accuracy results could be affected by the simulation duration, because small errors of the environment model can accumulate and cause significant errors in a long simulation, as mentioned in Section 4. However, in our case study, we did not observe this problem for any of the IL algorithms, even with a simulation duration ten times longer than the setting used in this paper. Nevertheless, analyzing how well various IL algorithms for ENVI mitigate this compounding error in different systems remains interesting future work.

7 Discussion

IL algorithm selection: In this paper, we considered three representative IL algorithms (BC, GAIL, and BCxGAIL) for the environment model generation. In practice, a specific IL algorithm should be selected when implementing ENVI considering its characteristics, as described in Section 3. Based on our case study results, we recommend engineers use the BCxGAIL algorithm in practice since the environment models generated by BCxGAIL were the most accurate in terms of CPS goal verification. However, there are other factors for the IL algorithm selection, such as learning speed or sensitivity to hyperparameters, and therefore providing more empirical guidelines for selecting a specific IL algorithm still remains an interesting future work.

Knowledge-based approach vs. Data-driven approach: When there is a high-fidelity simulation engine based on well-known principles in the CPS domain, engineers can manually create an accurate virtual environment in the simulator for CPS goal verification. In contrast to such knowledge-based environment modeling, ENVI is a data-driven approach where only a few FOT logs are required to automatically generate an accurate virtual environment model. This is a huge advantage when there are no high-fidelity simulators or well-defined principles in the CPS domain. Therefore, the data-driven approach can complement the knowledge-based approach depending on the application domain.

Open challenges: Though we successfully developed and evaluated ENVI, there are three main open challenges for data-driven environment model generation approaches.

First, sample efficiency is essential, because conducting FOTs to collect logs is the most expensive task in the data-driven approach. In our case study, BCxGAIL, which combines BC and GAIL to improve sample efficiency, indeed outperforms the other IL algorithms in most cases. Using state-of-the-art techniques for increasing sample efficiency Jena et al. (2020); Zhang et al. (2020); Robertson and Walter (2020) could further help.

Second, the approach should be robust to noise in FOT logs. We utilized IL techniques, and many IL studies assume the correctness of the expert demonstration Codevilla et al. (2018); Abdou et al. (2019); Peng et al. (2018). However, the demonstrator for the IL algorithms in data-driven environment model generation is the real environment, so some level of noise can appear (e.g., due to sensor noise). In our case study, though we used noisy data collected by the real CPS, systematically investigating the impact of noise was not in the scope of our work. Nevertheless, since many studies have already considered the noise issue in machine learning Atla et al. (2011); Gupta and Gupta (2019); Zeng et al. (2021), they could guide how to address noisy data.

Third, finding a proper level of abstraction for complex environmental behaviors is important. We abstracted the environment as a state-transition function in a closed-loop simulation and recast the environment model generation problem as an IL problem (see Section 4). This is a typical level of abstraction when an environment is modeled Qin et al. (2016); Reichstaller and Knapp (2018); Shin et al. (2021). However, this simple representation may not suffice for complex environmental behaviors, such as structural changes in the environment during the FOT or responses to factors other than the system. Therefore, an extension of the CPS-ENV interaction model could be needed in some domains. This is interesting future work, for which we can also refer to IL studies that imitate complex expert behaviors (e.g., multi-task or concurrent behavior) Agrawal and van de Panne (2016); Harmer et al. (2018); Singh et al. (2020).

8 Related work

Instead of conducting FOTs, assessing a CPS in a simulation environment is widely used in CPS engineering. Therefore, many studies have addressed modeling CPS environments.

Qin et al. (2016) and Reichstaller and Knapp (2018) modeled the interaction between the CPS and its environment as a closed loop, similar to our CPS-ENV model. They generated environmental testing inputs to predict and evaluate the CPS runtime behavior. Fredericks (2016) also specified uncertain situations the CPS may face at runtime, such as inaccurate or delayed cognition of the environment, to evaluate the CPS with adverse environmental inputs. These approaches can generate the initial environmental inputs that the CPS observes using sensors, but the state transition of the environment during the simulation must still be manually modeled by domain experts or engineers in an external simulator.

Some studies model the environment state transition for CPS simulation, similar to our approach. Püschel et al. (2014) modeled the change of the environment state as a process model and reconfigured the environment based on the model during CPS simulation. Yang et al. (2014) explicitly specified how the environmental state changes after a CPS action using a state machine. Cámara et al. (2018) and Moreno et al. (2018) also modeled the probabilistic environment state transition as a Markov Decision Process (MDP) and verified the CPS goal achievement in the dynamic environment. Although these approaches model environmental state transition functions, domain experts have to design the models manually, which requires substantial domain knowledge and effort.

There are studies utilizing environmental data to model the environment. Ding et al. (2016) modeled the continuous environment state transition as a continuous place in an extension of Petri nets, and the parameters of the model were learned from data. Aizawa et al. (2018) and Sykes et al. (2013) modeled the changing environment as a labeled transition system (LTS) and a logic program, respectively. The initial environment models are revised using execution trace data of the system so that the models represent the changing real environment as accurately as possible. However, in these studies, the revised environment model remains highly dependent on the initial models made by experts, because the data update only partial information in the model.

Unlike the previous studies that modeled the environment of CPS, we abstract the complex state transition of the environment into a black box function implemented as a neural network. As a result, the environment model can be automatically generated with execution trace samples of CPS without prior knowledge of the environment.
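As a rough illustration of this idea, the sketch below fits a black-box transition function purely from logged (state, action, next-state) triples. The synthetic log, the linear model standing in for the paper's neural network, and the plain stochastic-gradient fit are all simplifying assumptions for illustration.

```python
import random

random.seed(0)

# Synthetic "FOT log": (state, action, next_state) triples sampled
# from an unknown transition next = 0.9*state + 0.2*action.
log = []
for _ in range(200):
    s = random.uniform(-1, 1)
    a = random.uniform(-1, 1)
    log.append((s, a, 0.9 * s + 0.2 * a))

# Fit next ≈ w_s*s + w_a*a by minimizing squared error on the log;
# no prior knowledge of the environment's dynamics is used.
w_s, w_a, lr = 0.0, 0.0, 0.1
for _ in range(500):
    for s, a, nxt in log:
        err = (w_s * s + w_a * a) - nxt
        w_s -= lr * err * s
        w_a -= lr * err * a

model = lambda s, a: w_s * s + w_a * a  # learned transition function
```

The same recipe scales to a neural network and richer state vectors; the point is that the transition function is recovered from traces alone.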

Independently from CPS, model-based Reinforcement Learning (RL) uses a notion of an environment model, generally defined as anything that informs how the RL agent's environment will respond to the agent's actions Sutton and Barto (2018). Though the concept is similar to our environment model that interacts with the CPS under verification, the purposes of training (learning) the environment model differ. The primary objective of the environment model in model-based RL is to better learn the agent's policy function, so an inaccurate environment model is acceptable as long as it promotes the policy learning process. Naturally, supervised learning is used for learning the environment model Moerland et al. (2021), without considering the possible accumulation of errors over time. In contrast, the environment model in ENVI is meant to replace the real FOT environment for CPS goal verification; making the environment model behave the same as the real environment in a closed-loop simulation is therefore our primary objective, which is why we leverage Imitation Learning (IL) in our approach.
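The error-accumulation concern can be illustrated with a toy rollout: a model that is nearly perfect for a single step can still drift far from the real environment over a closed-loop simulation. The scalar dynamics below are invented purely for illustration.

```python
def rollout(transition, state, steps):
    """Iterate a one-argument transition function, recording the trajectory."""
    states = [state]
    for _ in range(steps):
        state = transition(state)
        states.append(state)
    return states

true_step = lambda s: 0.99 * s           # the "real" environment
model_step = lambda s: 0.99 * s + 0.01   # model with a tiny one-step bias

true_traj = rollout(true_step, 1.0, 50)
model_traj = rollout(model_step, 1.0, 50)

one_step_error = abs(model_step(1.0) - true_step(1.0))  # small (0.01)
final_error = abs(model_traj[-1] - true_traj[-1])       # compounds over time
```

A supervised one-step loss would rate `model_step` as almost perfect, yet the closed-loop trajectories diverge, which is why closed-loop fidelity, not one-step accuracy, is the objective for ENVI.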

9 Conclusion

In this paper, we present ENVI, a novel data-driven environment imitation approach that efficiently generates accurate virtual environment models for CPS goal verification. Instead of conducting expensive FOTs many times, ENVI requires only a few FOTs to collect FOT logs for training a virtual environment model. By leveraging representative IL algorithms (i.e., BC, GAIL, and BCxGAIL), an accurate virtual environment model can be generated automatically from the collected FOT logs. Our case study using a LEGO-lized autonomous vehicle equipped with a lane-keeping system shows that the virtual environment models generated by our approach yield highly accurate CPS goal verification results, even when only a few FOT logs are used for training the models. The case study also shows that when the same CPS has multiple versions resulting from an evolutionary development process, an ENVI-generated environment model can be used for the goal verification of new versions whose FOT logs have never been collected for model training.

In future work, we plan to provide practical guidelines for using ENVI with different IL algorithms by further investigating the characteristics of individual IL algorithms and conducting more case studies with complex CPS (e.g., an automated driving system composed of machine learning components). We further expect that ENVI is not limited to CPS controller verification, so we also plan to explore new applications of ENVI, such as optimal CPS control that predicts the environmental reaction.

Acknowledgements

This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2022-2020-0-01795) and (SW Star Lab) Software R&D for Model-based Analysis and Verification of Higher-order Large Complex System (No. 2015-0-00250) supervised by the IITP (Institute of Information & Communications Technology Planning & Evaluation). This research was also partially supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2019R1A6A-3A03033444).

References

  • Baheti and Gill (2011) R. Baheti, H. Gill, Cyber-physical systems, The impact of control technology 12 (2011) 161–166.
  • An et al. (2020) D. An, J. Liu, M. Zhang, X. Chen, M. Chen, H. Sun, Uncertainty modeling and runtime verification for autonomous vehicles driving control: A machine learning-based approach, Journal of Systems and Software 167 (2020) 110617.
  • Mullins et al. (2018) G. E. Mullins, P. G. Stankiewicz, R. C. Hawthorne, S. K. Gupta, Adaptive generation of challenging scenarios for testing and evaluation of autonomous vehicles, Journal of Systems and Software 137 (2018) 197–215.
  • Bozhinoski et al. (2019) D. Bozhinoski, D. Di Ruscio, I. Malavolta, P. Pelliccione, I. Crnkovic, Safety for mobile robotic systems: A systematic mapping study from a software engineering perspective, Journal of Systems and Software 151 (2019) 150–179.
  • Ahmad and Babar (2016) A. Ahmad, M. A. Babar, Software architectures for robotic systems: A systematic mapping study, Journal of Systems and Software 122 (2016) 16–39.
  • Shiue et al. (2018) Y.-R. Shiue, K.-C. Lee, C.-T. Su, Real-time scheduling for a smart factory using a reinforcement learning approach, Computers & Industrial Engineering 125 (2018) 604–614.
  • Wang et al. (2022) W. Wang, Y. Zhang, J. Gu, J. Wang, A proactive manufacturing resources assignment method based on production performance prediction for the smart factory, IEEE Transactions on Industrial Informatics 18 (2022) 46–55.
  • Zema et al. (2015) M. Zema, S. Rosati, V. Gioia, M. Knaflitz, G. Balestra, Developing medical device software in compliance with regulations, in: 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2015, pp. 1331–1334. doi:10.1109/EMBC.2015.7318614.
  • Fu (2011) K. Fu, Trustworthy medical device software, Public Health Effectiveness of the FDA 510 (2011) 102.
  • Hussein et al. (2017) A. Hussein, M. M. Gaber, E. Elyan, C. Jayne, Imitation learning: A survey of learning methods, ACM Comput. Surv. 50 (2017).
  • Michel (2004) O. Michel, Cyberbotics ltd. webots™: Professional mobile robot simulation, International Journal of Advanced Robotic Systems 1 (2004) 5.
  • Koenig and Howard (2004) N. Koenig, A. Howard, Design and use paradigms for gazebo, an open-source multi-robot simulator, in: 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No.04CH37566), volume 3, 2004, pp. 2149–2154. doi:10.1109/IROS.2004.1389727.
  • Schaal (1996) S. Schaal, Learning from demonstration, in: Advances in Neural Information Processing Systems, 1996, pp. 1040–1046. URL: http://papers.nips.cc/paper/1224-learning-from-demonstration.
  • Argall et al. (2009) B. D. Argall, S. Chernova, M. Veloso, B. Browning, A survey of robot learning from demonstration, Robotics and Autonomous Systems 57 (2009) 469–483.
  • Ho and Ermon (2016a) J. Ho, S. Ermon, Generative adversarial imitation learning, in: Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS’16, Curran Associates Inc., Red Hook, NY, USA, 2016a, p. 4572–4580.
  • Ho and Ermon (2016b) J. Ho, S. Ermon, Generative adversarial imitation learning, in: Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS’16, Curran Associates Inc., Red Hook, NY, USA, 2016b, p. 4572–4580.
  • Jena et al. (2020) R. Jena, C. Liu, K. Sycara, Augmenting gail with bc for sample efficient imitation learning, arXiv (2020).
  • Sutton et al. (1998) R. S. Sutton, A. G. Barto, et al., Introduction to reinforcement learning, volume 135, MIT press Cambridge, 1998.
  • Xu and Duan (2019) L. D. Xu, L. Duan, Big data for cyber physical systems in industry 4.0: a survey, Enterprise Information Systems 13 (2019) 148–169.
  • Hagan et al. (1997) M. T. Hagan, H. B. Demuth, M. Beale, Neural network design, PWS Publishing Co., 1997.
  • Rafiq et al. (2001) M. Rafiq, G. Bugmann, D. Easterbrook, Neural network design for engineering applications, Computers & Structures 79 (2001) 1541–1552.
  • Schilling et al. (2019) A. Schilling, C. Metzner, J. Rietsch, R. Gerum, H. Schulze, P. Krauss, How deep is deep enough? – quantifying class separability in the hidden layers of deep neural networks, arXiv (2019).
  • Basden et al. (1991) A. Basden, I. Watson, P. Brandon, The evolutionary development of expert systems, in: Research & Development In Expert Systems Vlll, Cambridge University Press, 1991, pp. 67–81.
  • Helps and Mensah (2012) R. Helps, F. N. Mensah, Comprehensive design of cyber physical systems, in: Proceedings of the 13th Annual Conference on Information Technology Education, SIGITE ’12, Association for Computing Machinery, New York, NY, USA, 2012, p. 233–238. doi:10.1145/2380552.2380618.
  • Sirjani et al. (2021) M. Sirjani, L. Provenzano, S. A. Asadollah, M. H. Moghadam, M. Saadatmand, Towards a verification-driven iterative development of software for safety-critical cyber-physical systems, Journal of Internet Services and Applications 12 (2021) 1–29.
  • Shin et al. (2021) Y.-J. Shin, L. Liu, S. Hyun, D.-H. Bae, Platooning legos: An open physical exemplar for engineering self-adaptive cyber-physical systems-of-systems, in: 2021 International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS), 2021, pp. 231–237. doi:10.1109/SEAMS51251.2021.00038.
  • Tao and Bin (2008) Y. Tao, Z. Bin, A novel self-tuning cps controller based on q-learning method, in: 2008 IEEE Power and Energy Society General Meeting - Conversion and Delivery of Electrical Energy in the 21st Century, 2008, pp. 1–6. doi:10.1109/PES.2008.4596654.
  • David et al. (2012) R.-C. David, R.-E. Precup, S. Preitl, J. K. Tar, J. Fodor, Three evolutionary optimization algorithms in pi controller tuning, in: Applied Computational Intelligence in Engineering and Information Technology, Springer, 2012, pp. 95–106. doi:10.1007/978-3-642-28305-5_8.
  • Cherrett and Pitfield (2001) T. Cherrett, D. Pitfield, Extracting driving characteristics from heavy goods vehicle tachograph charts, Transportation Planning and Technology 24 (2001) 349–363.
  • Paszke et al. (2019) A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Köpf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, S. Chintala, PyTorch: An Imperative Style, High-Performance Deep Learning Library, Curran Associates Inc., Red Hook, NY, USA, 2019, pp. 8026–8037.
  • Kingma and Ba (2017) D. P. Kingma, J. Ba, Adam: A method for stochastic optimization, arXiv (2017).
  • Schulman et al. (2017) J. Schulman, F. Wolski, P. Dhariwal, A. Radford, O. Klimov, Proximal policy optimization algorithms, arXiv (2017).
  • Wang and Gong (2018) B. Wang, N. Z. Gong, Stealing hyperparameters in machine learning, in: 2018 IEEE Symposium on Security and Privacy (SP), 2018, pp. 36–52. doi:10.1109/SP.2018.00038.
  • Probst et al. (2019) P. Probst, A.-L. Boulesteix, B. Bischl, Tunability: Importance of hyperparameters of machine learning algorithms, J. Mach. Learn. Res. 20 (2019) 1934–1965.
  • Zhang et al. (2020) X. Zhang, Y. Li, Z. Zhang, Z.-L. Zhang, f-gail: Learning f-divergence for generative adversarial imitation learning, in: H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, H. Lin (Eds.), Advances in Neural Information Processing Systems, volume 33, Curran Associates, Inc., 2020, pp. 12805–12815. URL: https://proceedings.neurips.cc/paper/2020/file/967990de5b3eac7b87d49a13c6834978-Paper.pdf.
  • Robertson and Walter (2020) Z. W. Robertson, M. R. Walter, Concurrent training improves the performance of behavioral cloning from observation, arXiv (2020).
  • Codevilla et al. (2018) F. Codevilla, M. Müller, A. López, V. Koltun, A. Dosovitskiy, End-to-end driving via conditional imitation learning, in: 2018 IEEE International Conference on Robotics and Automation (ICRA), 2018, pp. 4693–4700. doi:10.1109/ICRA.2018.8460487.
  • Abdou et al. (2019) M. Abdou, H. Kamal, S. El-Tantawy, A. Abdelkhalek, O. Adel, K. Hamdy, M. Abaas, End-to-end deep conditional imitation learning for autonomous driving, in: 2019 31st International Conference on Microelectronics (ICM), 2019, pp. 346–350. doi:10.1109/ICM48031.2019.9021288.
  • Peng et al. (2018) X. B. Peng, P. Abbeel, S. Levine, M. van de Panne, Deepmimic: Example-guided deep reinforcement learning of physics-based character skills, ACM Trans. Graph. 37 (2018).
  • Atla et al. (2011) A. Atla, R. Tada, V. Sheng, N. Singireddy, Sensitivity of different machine learning algorithms to noise, J. Comput. Sci. Coll. 26 (2011) 96–103.
  • Gupta and Gupta (2019) S. Gupta, A. Gupta, Dealing with noise problem in machine learning data-sets: A systematic review, Procedia Computer Science 161 (2019) 466–474. The Fifth Information Systems International Conference, 23-24 July 2019, Surabaya, Indonesia.
  • Zeng et al. (2021) Z. Zeng, Y. Liu, W. Tang, F. Chen, Noise is useful: Exploiting data diversity for edge intelligence, IEEE Wireless Communications Letters 10 (2021) 957–961.
  • Qin et al. (2016) Y. Qin, C. Xu, P. Yu, J. Lu, Sit: Sampling-based interactive testing for self-adaptive apps, Journal of Systems and Software 120 (2016) 70–88.
  • Reichstaller and Knapp (2018) A. Reichstaller, A. Knapp, Risk-based testing of self-adaptive systems using run-time predictions, in: 2018 IEEE 12th International Conference on Self-Adaptive and Self-Organizing Systems (SASO), 2018, pp. 80–89. doi:10.1109/SASO.2018.00019.
  • Shin et al. (2021) Y.-J. Shin, J.-Y. Bae, D.-H. Bae, Concepts and models of environment of self-adaptive systems: A systematic literature review, in: 2021 28th Asia-Pacific Software Engineering Conference (APSEC), 2021, pp. 296–305. doi:10.1109/APSEC53868.2021.00037.
  • Agrawal and van de Panne (2016) S. Agrawal, M. van de Panne, Task-based locomotion, ACM Trans. Graph. 35 (2016).
  • Harmer et al. (2018) J. Harmer, L. Gisslén, J. del Val, H. Holst, J. Bergdahl, T. Olsson, K. Sjöö, M. Nordin, Imitation learning with concurrent actions in 3d games, in: 2018 IEEE Conference on Computational Intelligence and Games (CIG), 2018, pp. 1–8. doi:10.1109/CIG.2018.8490398.
  • Singh et al. (2020) A. Singh, E. Jang, A. Irpan, D. Kappler, M. Dalal, S. Levinev, M. Khansari, C. Finn, Scalable multi-task imitation learning with autonomous improvement, in: 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 2167–2173. doi:10.1109/ICRA40945.2020.9197020.
  • Fredericks (2016) E. M. Fredericks, Automatically hardening a self-adaptive system against uncertainty, in: 2016 IEEE/ACM 11th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS), 2016, pp. 16–27. doi:10.1109/SEAMS.2016.010.
  • Püschel et al. (2014) G. Püschel, C. Piechnick, S. Götz, C. Seidl, S. Richly, T. Schlegel, U. Aßmann, A combined simulation and test case generation strategy for self-adaptive systems, Journal On Advances in Software 7 (2014) 686–696.
  • Yang et al. (2014) W. Yang, C. Xu, Y. Liu, C. Cao, X. Ma, J. Lu, Verifying self-adaptive applications suffering uncertainty, in: Proceedings of the 29th ACM/IEEE International Conference on Automated Software Engineering, ASE ’14, Association for Computing Machinery, New York, NY, USA, 2014, p. 199–210. URL: https://doi.org/10.1145/2642937.2642999. doi:10.1145/2642937.2642999.
  • Cámara et al. (2018) J. Cámara, W. Peng, D. Garlan, B. Schmerl, Reasoning about sensing uncertainty in decision-making for self-adaptation, in: A. Cerone, M. Roveri (Eds.), Software Engineering and Formal Methods, Springer International Publishing, Cham, 2018, pp. 523–540.
  • Moreno et al. (2018) G. A. Moreno, J. Cámara, D. Garlan, B. Schmerl, Flexible and efficient decision-making for proactive latency-aware self-adaptation, ACM Trans. Auton. Adapt. Syst. 13 (2018).
  • Ding et al. (2016) Z. Ding, Y. Zhou, M. Zhou, Modeling self-adaptive software systems with learning petri nets, IEEE Transactions on Systems, Man, and Cybernetics: Systems 46 (2016) 483–498.
  • Aizawa et al. (2018) K. Aizawa, K. Tei, S. Honiden, Identifying safety properties guaranteed in changed environment at runtime, in: 2018 IEEE International Conference on Agents (ICA), 2018, pp. 75–80. doi:10.1109/AGENTS.2018.8460083.
  • Sykes et al. (2013) D. Sykes, D. Corapi, J. Magee, J. Kramer, A. Russo, K. Inoue, Learning revised models for planning in adaptive systems, in: 2013 35th International Conference on Software Engineering (ICSE), 2013, pp. 63–71. doi:10.1109/ICSE.2013.6606552.
  • Sutton and Barto (2018) R. S. Sutton, A. G. Barto, Reinforcement learning: An introduction, MIT press, 2018.
  • Moerland et al. (2021) T. M. Moerland, J. Broekens, C. M. Jonker, Model-based reinforcement learning: A survey, arXiv (2021).