The introduction of automated vehicles (AVs) to public roads promises many benefits. These include a reduced number of accidents caused by driver errors (safety), increased efficiency of the transport system (environment), increased spare time for users (comfort) and mobility for elderly and impaired users (social inclusion). At higher levels of automation, in particular at SAE Levels 4 and 5, AVs become complex autonomous systems operating in an open context, i.e. dealing with unstructured real-world environments. Thus, the validation and approval of AVs pose an enormous challenge. Since AVs are a newly emerging and safety-critical technology, rigorously proving safe operability and strictly avoiding accidents are crucial for societal acceptance.
The process of assuring functional safety for conventional road vehicles is guided by ISO 26262. This process has been extended by ISO/PAS 21448 to include the complementary aspect of the safety of the intended functionality (SOTIF), at least up to SAE Level 2. In the automotive domain, testing is an important part of verification and validation and has traditionally used distance-based statistical arguments. However, for AVs, distance-based approaches to testing become infeasible due to the vast distances that would need to be covered. Instead, other approaches begin to emerge, such as 'Responsibility-Sensitive Safety' (RSS), a formal model that relies on defining a safety envelope around the vehicle. However, it is questionable whether this formal model can actually be implemented in the real world. A novel approach, which has been explored by recent research projects such as PEGASUS and ENABLE-S3, is to perform scenario-based testing (SBT). In SBT, testing is performed by deriving relevant test cases from a manageable set of scenario classes.
In this paper, we perform a deep dive into the fundamental considerations that must be taken into account in order for SBT to make a meaningful contribution to the verification and validation of AVs. The goal is to identify underlying principles and assumptions which are essential to the successful realization of SBT for automated driving. We discuss relevant terminology, notation and a general framework around SBT in Section II. The introduced framework provides the structure for Section III. For each process step, we first describe the state of the art. Second, we semi-formally examine the fundamental arguments, principles, and assumptions that are inherent to the aforementioned contemporary approaches. The core results of our analysis are summarized in Section IV as a condensed table, which may serve as an impulse for future research.
II Scenario-Based Testing
Verification and validation serve the purpose of assuring properties of interest, such as safety and security, with respect to both the specified and the intended purpose of the system. In the automotive domain, testing is defined as the 'process of planning, preparing, and operating or exercising an item or an element to verify that it satisfies specified requirements, to detect anomalies, and to create confidence in its behavior'. In SBT, the central artifact for this operation is the scenario. Such testing approaches are pursued as state of the art in the AV testing community.
II-A Terminology and Notation
A standardized definition of the term scenario in the context of verification and validation of automated vehicles is presented in ISO/PAS 21448, which we briefly introduce alongside a semi-formalism that will be used later on.
A scenario describes the development of scenes over time, thus building a temporal sequence. A test case is defined as a scenario enriched with pass/fail criteria suitable for test evaluation. Menzel et al. propose the qualifications of functional, logical and concrete scenarios. At the highest level of abstraction, a functional scenario is described semantically using natural language.
Formalizing a functional scenario, a logical scenario $l$ can be represented by a state space and its interrelations. It includes a list of relevant parameters $p_1, \dots, p_n$ and their value ranges $R_1, \dots, R_n$ with $p_i \in R_i$ for $i = 1, \dots, n$. The state space of $l$ can be described by $S_l = R_1 \times \dots \times R_n$. Optionally, correlations between parameters and numeric constraints can be added. Probability distributions can be attached to the state space or the value ranges. We denote the set of all logical scenarios by $L$.
A concrete scenario requires the assignment of a single value to each parameter. It can be obtained from a logical scenario $l$ by instantiating all parameters $p_i$ with some value $v_i \in R_i$, i.e. selecting a tuple $(v_1, \dots, v_n) \in S_l$. We denote the set of all concrete scenarios by $C$. Analogously, $C_l \subseteq C$ is the set of all concrete scenarios derived from a logical scenario $l$. The process of instantiating a set of logical scenarios to concrete scenarios can be seen as a map $L \to \mathcal{P}(C)$ w.r.t. the relations, constraints and distributions of $l$.
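To make the semi-formalism concrete, the following sketch (our own illustration, not taken from any standard) represents a logical scenario as named parameter ranges and instantiates it into a concrete scenario. The cut-in scenario and its parameter names are hypothetical.

```python
# Illustrative sketch of the semi-formalism: a logical scenario as named
# parameter ranges, instantiated into a concrete scenario. The cut-in
# scenario and its parameters are hypothetical.
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class LogicalScenario:
    """A scenario class: parameter names mapped to (min, max) value ranges."""
    name: str
    ranges: dict

def instantiate(logical, rng):
    """Select one concrete scenario by assigning a value to each parameter.

    Sampling is uniform here; in practice the ranges may carry measured
    probability distributions and cross-parameter constraints."""
    return {p: rng.uniform(lo, hi) for p, (lo, hi) in logical.ranges.items()}

cut_in = LogicalScenario(
    name="highway cut-in",
    ranges={"ego_speed_mps": (20.0, 40.0),
            "gap_m": (5.0, 50.0),
            "cutin_speed_mps": (15.0, 35.0)},
)
concrete = instantiate(cut_in, random.Random(42))
```

Repeated calls to `instantiate` would enumerate elements of the set of concrete scenarios derived from this class.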
II-B General Framework
In order to generate the aforementioned test cases and to interpret the test results, various steps are commonly executed up- and downstream of the actual testing process. Thus, it is crucial to embed SBT into a larger framework. Figure 1 shows such an abstract framework around SBT, mainly based on the ENABLE-S3 scenario-based verification and validation process and the PEGASUS method. Note that this framework is not meant to be a core contribution of our work, but rather an organizational tool for the following considerations obtained from incorporating various existing approaches. For example, the framework is also consistent with the publications by Huang et al. [16, Figure 8] and Kalisvaart et al. [24, Figure 7].
The first step, scenario elicitation, consists of deriving adequate scenario classes, e.g. in the form of logical scenarios. The requirement elicitation step complements these classes by a set of safety requirements that shall be satisfied by the system in the identified scenarios. Testing then evaluates the system's compliance with the requirements w.r.t. the scenario classes. Well-established concepts around testing can be found in standards such as ISO/IEC/IEEE 29119 and ISO 26262. Within this test process, test cases are initially derived from the scenario classes and safety requirements during test derivation. Subsequently, test execution is performed using an appropriate test bench, either virtually, physically, or in combination. The executed tests are assessed in a test evaluation step. The results of this whole process are then used in a subsequent, overarching safety argumentation, which contributes to the safety case.
III Fundamental Considerations
Structured along this framework, we examine arguments, principles and assumptions that, in the authors' opinion, are fundamental to the SBT process. We initially present some general considerations applicable to the overall process in Section III-A. Then, in Section III-B to Section III-F, we conduct a two-part examination for each process step: we first give an overview of the state of the art, which is followed by a structured analysis of the underlying fundamental considerations. The key insights are marked with an ID, linking them to Table I. We omit an in-depth analysis of the safety argumentation, which will be subject to further research.
III-A General Considerations
SBT requires a unified understanding of what the term scenario means. As mentioned in Section II-A, ongoing standardization activities work towards this unified understanding, but there remain open questions on the exact definition and usage of scenarios and what their qualifications describe. For example, the ISO/PAS 21448 uses logical scenarios in the test phase while Menzel et al. propose to use concrete scenarios. Depending on these definitions, the technical realizations of these qualifications, e.g. OpenDRIVE (asam.net/standards/detail/opendrive) and OpenSCENARIO (asam.net/standards/detail/openscenario), need to be sufficiently expressive and unambiguous such that scenario descriptions lead to test cases without loss or misinterpretation of information (G1).
Second, SBT often assumes smoothness of certain properties of interest, e.g. of criticality metrics, behaviors or trajectories, under variation of the parameters, i.e. slight variations of the parameters lead to slight changes of the considered property (G2). This smoothness can be exploited, e.g. to extrapolate knowledge of a single scenario to a set of scenarios, or to explore the close proximity – in the sense of parameters – of a scenario in order to estimate the direction of more challenging scenarios. In both cases, this increases the certainty of the inference from finitely many test results to sets of scenarios when analyzing complex systems. However, Poddey et al. argue that AVs are an instance of complex systems which rely heavily on sensor technologies for the perception of their open context. This poses difficulties, as processing the sensory input often involves machine learning. These algorithms are prone to misclassifications as a result of minor input changes; therefore, smoothness of such classification algorithms cannot easily be assumed.
Since the open context in which AVs are supposed to operate is constantly evolving, the framework around SBT has to be adaptable. A set of scenario classes and testing methods that is complete at the time of elicitation may be incomplete at the time of deployment due to a number of unforeseeable reasons, so-called unknown unknowns. These range from novel objects or vehicles appearing in traffic, over newly introduced traffic regulations, to massive structural changes in traffic caused by human drivers reacting to the introduction of AVs. Hence, an integrated update process and constant monitoring of AVs during their life cycle are indispensable for SBT (G3).
III-B Scenario Elicitation
Most published frameworks follow an expert- or data-driven approach to elicit functional or logical scenarios. Expert-driven approaches use a manifestation and formalization of expert knowledge about the real world. Proposed methods include the direct use of laws, regulations, system specifications, as well as already existing logical scenarios. As an a priori formalization step, it has been proposed to incorporate such knowledge into ontologies. Based on such formalized knowledge, experts can conduct safety analyses to identify hazardous scenarios.
Scenarios can also be elicited in a data-driven manner. For example, large-scale observational studies can serve as a means to identify hazardous events, e.g. by training a neural network. Those can in turn be transformed into logical scenarios, e.g. by using the challenger framework, and can additionally be attached with probability distributions derived from the observed data. Such a process can be enhanced by extending the already existing data basis. Besides investigating large data sets, one can also closely investigate a concrete real-world drive, which then allows for a parametrization into a logical scenario. Finally, there exist attempts to combine expert- and data-driven frameworks. In top-down approaches, logical scenarios are classified by experts, e.g. based on a keyword-based scheme, and the attached probability distributions are derived from real-world data. Orthogonally, bottom-up approaches apply expert analysis to data bases, e.g. a Successive Odds Ratio Analysis or a classification tree approach, to classify observed concrete scenarios.
Expert-driven scenario elicitation necessitates defining a set of relevant real-world phenomena to be considered implicitly or explicitly, e.g. via an ontological representation. This approach is prone to creating a gap between identified and relevant phenomena. While this gap can be controlled by using systematic methods and made visible by explicit representations, completeness guarantees cannot be given. Moreover, experts lack the combinatorial power for the exhaustive exploration of the open context, increasing the risk of an incomplete set of scenario classes. In that regard, automated analyses are able to complement an expert-based approach (SE1).
Most data-driven approaches rely on the existence of a sufficiently large data basis. It is assumed that relevant phenomena leave traces in recorded data, for example as causes of accidents in observational studies. In this case, relevant phenomena have to be present either directly in the data model or need to be identified manually by examination of the recorded scenarios (SE2). If the data basis is extended by collecting more data sets, the choice of observation is up to expert judgment, and arguments must be made as to why this choice leads to scenario classes that enable SBT. Regarding the location of data collection, experts must argue that the chosen locations are in some form representative of the targeted operational design domain. This applies particularly to the validity of measured real-world distributions and their generalizability. For collecting data and evaluating concrete real-world drives, it has to be decided which entities and relations shall be observed in the real world. Additionally, appropriate tools and measurement sensors with sufficient accuracy have to be chosen purposefully and explicitly, in order for a downstream safety argumentation to reason over the validity of such decisions (SE3).
Both expert- and data-driven approaches have in common that they either elicit an arguably complete classification of the scenario space, or need to argue why the exclusion of certain scenarios is applicable in the current context (SE4). While a complete classification of scenarios that covers the entire scenario space is essential for SBT, such a decomposition does not reduce the overall test effort. However, if sound arguments can be made why some classes can be excluded altogether, omitting these classes significantly reduces the required test effort later on.
III-C Requirement Elicitation
Of particular importance for any safety-relevant application is a systematic safety process that includes identification of hazards and risks, derivation of safety goals and decomposition of the high-level safety goals into safety requirements on item and component level. For systems up to SAE Level 2, this is well covered by ISO 26262 and ISO/PAS 21448. For SAE Levels 3 and above, scenarios appear to be an ideal starting point for the derivation and elicitation of acceptance criteria. Junietz et al. examine how such a quantified requirement definition can take place based on acceptable risk levels.
The process of identifying and specifying requirements is inherently bound to the problems of correctness (does the specification match the actual requirement), completeness (are all safety goals fully covered), consistency (the absence of contradictions) and validity (are we specifying a useful requirement in the first place) (RE1). Furthermore, for requirements tested virtually, specific arguments must be made with regard to measurability and computability. Here, the number of virtually executed scenarios can grow extremely large. Thus, the key elements of requirements need to be observable and measurable in the virtual environment (RE2).
III-D Test Derivation
III-D1 Discretization
During test derivation, a discretization step from a scenario class $l$ to a discrete class $l_d$ is usually performed to reduce the test effort to a finite number of concretizations. This step can be seen as a map $d: L \to L$ with $d(l) = l_d$. If the state space of a logical scenario $l$, i.e. a whole scenario class, is fully described using a set of bounded value ranges $R_1, \dots, R_n$, some of these parameters describe measurable continuous real-world quantities, such as the speed of an object or the rainfall quantity. Mapping each continuous value range $R_i$ to a finite, discrete one, denoted by $R_i^d$, results in the aforementioned logical scenario $l_d$ with a finite, discrete state space $S_{l_d} = R_1^d \times \dots \times R_n^d$.
While overly fine discretizations may produce many redundant test cases, any discretization step leads to gaps in the original state space, possibly resulting in undiscovered safety violations. One approach defines the map $d$ according to probability distributions of the parameters, obtained from either synthesized or real-world data. In both cases, however, the question of the validity of the data arises. Discretization according to distributions is more sophisticated than equidistant splitting and leads to scenarios that resemble reality more closely, but its usefulness depends on the fundamental assumption that the 'probability of occurrence of a scenario correlates with its relevance in terms of the safety argumentation'. Whether this assumption is justified depends on the paradigm used for the safety argumentation. We note that unlikely parameter combinations could lead to critical outcomes. Thus, discretization might lead to dismissing rare, critical scenarios due to pre-eliminated parameter combinations (TD1).
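As an illustration of such discretization maps for a single continuous value range, the sketch below (our own, not from any cited method) contrasts equidistant splitting with a distribution-based split; the speed range and the observed samples are made up.

```python
# Illustrative sketch: two ways to map a continuous value range to a
# finite, discrete one. Ranges and data are invented for illustration.
import statistics

def discretize_equidistant(lo, hi, n):
    """n representative values spread evenly across [lo, hi] (n >= 2)."""
    step = (hi - lo) / (n - 1)
    return [lo + i * step for i in range(n)]

def discretize_by_quantiles(samples, n):
    """n representative values at evenly spaced quantiles of observed data,
    so frequently occurring regions receive more test points."""
    return statistics.quantiles(samples, n=n + 1)

# Equidistant split of an assumed speed range in m/s.
speeds_eq = discretize_equidistant(0.0, 30.0, 4)   # [0.0, 10.0, 20.0, 30.0]
# Distribution-based split of (made-up) observed speeds, skewed toward low values.
observed = [3, 4, 5, 5, 6, 6, 6, 7, 8, 20]
speeds_q = discretize_by_quantiles(observed, 3)
```

The quantile-based split concentrates test points where the data are dense, which illustrates both the appeal and the risk discussed above: rare but potentially critical values near the tail of the distribution receive few or no representatives.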
III-D2 Variation Methods
In order to generate test cases for a given logical scenario $l$, it is necessary to instantiate the parameters $p_i$ with concrete values $v_i \in R_i$. For efficient testing, a systematic approach for instantiation is required. This can be done either deterministically, e.g. by stepping (equidistantly) through $S_l$ or performing Boundary Value Analysis, or stochastically, e.g. using Monte Carlo methods. If we attach probability distributions to each parameter value range $R_i$ (or a joint distribution defined on $S_l$), the instantiation can be seen as a random variable that assigns a concrete value to each parameter through sampling. Then, one may sample according to the real-world distributions of the parameters w.r.t. all constraints and interrelations of $l$. It is likely that these distributions are approximated from either locally measured naturalistic data or from valid simulated data. Both cases assume that sampling according to real-world distributions enhances the value of test cases for the safety argument.
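The deterministic and stochastic instantiation strategies can be sketched as follows for a single bounded parameter; the clipped Gaussian standing in for a 'real-world distribution' is purely an assumption for illustration.

```python
# Sketch of two instantiation strategies for one bounded parameter range:
# deterministic boundary-value picks vs. Monte Carlo sampling from an
# assumed distribution (a clipped Gaussian is our stand-in here).
import random

def boundary_values(lo, hi, eps=1e-3):
    """Deterministic instantiation at and just inside the range boundaries."""
    return [lo, lo + eps, (lo + hi) / 2.0, hi - eps, hi]

def sample_clipped(lo, hi, mean, std, rng, n):
    """Stochastic instantiation: rejection-sample a Gaussian restricted to
    the legal value range."""
    values = []
    while len(values) < n:
        v = rng.gauss(mean, std)
        if lo <= v <= hi:
            values.append(v)
    return values

rng = random.Random(0)
deterministic = boundary_values(0.0, 30.0)                 # speed range in m/s
stochastic = sample_clipped(0.0, 30.0, 13.9, 5.0, rng, 100)
```

For a full scenario class, one such strategy would be applied per parameter, respecting any cross-parameter constraints of the logical scenario.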
A different approach argues that scenarios which are critical w.r.t. a suitable metric offer the most value for the safety argument and should therefore be tested preferentially. However, as these rarely occur naturally, one has to increase their significance during the sampling process. This can be realized by sampling towards criticality: a criticality metric $\kappa$, which maps a discrete time series of values obtained from a simulation engine to a scalar measuring the criticality of a concrete scenario $c$, is optimized. This criticality-guided approach shifts much of the methodical burden to the metrics. First, a criticality metric has to reflect the real-world criticality accurately (TD2). A single metric is unlikely to capture all types of critical phenomena, so that several metrics in conjunction, each capturing different aspects of criticality, are required. Moreover, each metric is evaluated on a time series depending on the concrete scenario $c$, where $c$ is described by the parameter values $v_1, \dots, v_n$. Thus, sampling in order to optimize $\kappa$ can only capture real-world criticalities that are (i) actively influenced by the parameters and (ii) accurately computed by $\kappa$. Clearly, (i) depends on the scenario description language and (ii) hinges on the validity of the simulation environment and the utilized models. Influencing factors for real-world criticality whose effects are not conveyed by this process can thus not be revealed. Simple metrics such as the time-to-collision (TTC) incorporate only few parameters, but can be optimized efficiently. More involved metrics cover more influencing factors at the cost of higher computational effort. When defining a new metric, e.g. by taking a weighted sum of known metrics, the complexity of the associated optimization problem is determined by its mathematical properties, e.g. being continuous, differentiable, or convex.
For each criticality metric $\kappa$, a threshold $\theta$ is required in order to label a time series as critical whenever the metric value passes the threshold (TD3). Junietz et al. suggest fitting thresholds based on manually annotated scenarios which are then used to learn binary classifiers. Criticality thresholds can be used to define the critical subspace $S_l^{\mathrm{crit}} \subseteq S_l$ w.r.t. $\kappa$ and $\theta$. The question arises whether optimization algorithms reliably cover the set $S_l^{\mathrm{crit}}$ by finding critical concrete scenarios from all of its connected components. If components of $S_l^{\mathrm{crit}}$ are missed entirely, one can no longer argue about testing all critical instantiations of $l$ (TD4). In order to prevent this, optimization algorithms may have to be executed repeatedly, using varying step lengths or starting points. In any case, the dependency on the simulation engine remains.
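A minimal sketch of criticality-guided search with restarts follows. The constant-velocity 'simulation engine', the TTC metric, the parameter ranges and the threshold are all illustrative stand-ins, not the method of any cited work; the restart loop illustrates the repeated execution with varying starting points suggested above.

```python
# Minimal sketch of criticality-guided test derivation with restarts.
# All models, ranges and thresholds are illustrative stand-ins.
import random

def simulate(gap_m, closing_speed_mps, steps=50, dt=0.1):
    """Toy engine: gap time series for a constant closing speed."""
    return [gap_m - closing_speed_mps * dt * i for i in range(steps)]

def ttc(series, closing_speed_mps):
    """Criticality metric: minimal time-to-collision (lower = more critical)."""
    return min(g / closing_speed_mps for g in series if g > 0)

def find_critical(rng, restarts=20, iters=50, threshold=1.0):
    """Random-restart local search minimizing TTC; restarts with varying
    starting points reduce the risk of missing connected components of the
    critical subspace."""
    critical = []
    for _ in range(restarts):
        gap, speed = rng.uniform(5, 50), rng.uniform(0.5, 15)
        best = ttc(simulate(gap, speed), speed)
        for _ in range(iters):
            g2 = min(50.0, max(5.0, gap + rng.gauss(0, 2)))
            s2 = min(15.0, max(0.5, speed + rng.gauss(0, 1)))
            cand = ttc(simulate(g2, s2), s2)
            if cand < best:
                gap, speed, best = g2, s2, cand
        if best < threshold:
            critical.append((gap, speed, best))
    return critical

found = find_critical(random.Random(0))
```

In a real setting, the toy engine would be replaced by a validated simulation, and the dependency of the findings on that simulation, as noted above, would remain.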
III-E Test Execution
TABLE I: Summary of the fundamental considerations

| ID | Fundamental Consideration | Process Step |
| --- | --- | --- |
| G1 | The term scenario, its qualifications and technical realizations are sufficiently expressive and unambiguous. | General |
| G2 | If smoothness of a property of interest of the system is assumed, this assumption is asserted. | General |
| G3 | The SBT process is adaptable to real-world changes. | General |
| SE1 | An expert-based approach is executed systematically and supported by automation. | Scenario Elicitation |
| SE2 | An identification of all relevant phenomena is facilitated. | Scenario Elicitation |
| SE3 | A data-driven approach uses representative measurement locations, devices and valid probability distributions. | Scenario Elicitation |
| SE4 | A decomposition of the test space into scenario classes is complemented either by evidence for its completeness or an argumentation for the omission of classes. | Scenario Elicitation |
| RE1 | Identified requirements are correct, consistent, complete and valid. | Requirement Elicitation |
| RE2 | Key elements of requirements are observable and measurable in the environment. | Requirement Elicitation |
| TD1 | Discretization does not pre-eliminate valuable test cases. | Test Derivation |
| TD2 | Utilized criticality measures are validated. | Test Derivation |
| TD3 | Suitable thresholds for criticality measures are employed. | Test Derivation |
| TD4 | The critical subspace is explored systematically. | Test Derivation |
| TX1 | The simulation environment, all models and their interactions are validated. | Test Execution |
| TE1 | Test results are aggregated into a statement that supports the safety case. | Test Evaluation |
| TE2 | Statements about test coverage of scenario classes are derived using sound statistical arguments. | Test Evaluation |
Depending on the stage of the development process, there exists a diverse set of test execution methods. An essential strategy is mixed virtual-physical testing, i.e. replacing physical components with virtual ones. For each involved component and simulation model, a test method needs to be chosen that provides valid test results.
In order for virtual testing to replace physical testing, the corresponding virtual models have to be verified and validated against their physical counterparts. The validity of a simulation environment depends on the validity of many individual models (e.g., environment models, behavior models, sensor models) as well as their interactions. Gathering relevant real-world data for virtual models is a key challenge in such a validation process, which also depends on the employed notion of validity. Thus, the question of how to validate simulation environments remains subject to current research (TX1).
III-F Test Evaluation
After execution, test cases need to be evaluated according to the safety requirements. For SBT, these requirements can appear in the form of thresholds for criticality metrics, evaluated using a finite sequence of measurements during the evolution of a concrete scenario. It is possible to judge compliance of real-world tests with qualitative requirements using experts. For virtual batch-testing of scenarios, however, qualitative requirements need to be mapped to criticality metrics and thresholds. PEGASUS suggested a 4-stage process for test case evaluation that assesses the aspects of safety distance, absence of collision, causality, and mitigation. Instead of binary test results, a test case evaluation on a more detailed ordinal scale has also been proposed, incorporating knowledge about possible discrepancies between the anticipated and actual category of a test case depending on the underlying paradigm.
Formalizing test evaluation, we assume that executing a concrete scenario $c$, instantiated by $(v_1, \dots, v_n)$ from $l$, results in a discrete time series of values $(x_1, \dots, x_k)$. This time series can either be obtained by a simulation engine or by real-world measurements; we focus on the former here. A pass/fail criterion can then be stated w.r.t. a suitable criticality metric $\kappa$ and threshold $\theta$: the test passes if $\kappa(x_1, \dots, x_k)$ does not pass the threshold $\theta$. The results of testing instantiations of $l$, obtained from $C_l$, need to be aggregated. A key challenge is to obtain a useful aggregate statement to support the safety case (TE1). While a simple success ratio can be adequate for comfort functions, it might not be a good choice for testing safety goals. This challenge becomes even more complex when evaluating multiple metrics.
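Such a pass/fail criterion can be sketched in a few lines; the minimum-headway metric and the 2 m threshold below are invented for illustration only.

```python
# A pass/fail criterion over a recorded time series: a concrete scenario
# passes iff its criticality stays on the safe side of a threshold. The
# minimum-headway metric and the 2 m threshold are illustrative.
def min_headway(series_m):
    """Criticality metric over a gap time series (lower = more critical)."""
    return min(series_m)

def passes(series_m, threshold_m=2.0):
    """Pass/fail criterion w.r.t. the metric and threshold."""
    return min_headway(series_m) > threshold_m

ok = passes([10.0, 6.5, 3.1, 4.8])   # smallest gap 3.1 m, above threshold
not_ok = passes([10.0, 1.9, 4.8])    # smallest gap 1.9 m, below threshold
```

Evaluating several metrics would yield one such verdict per metric, which then have to be combined into an overall judgment.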
The instantiation map and also – for a probabilistic simulation – the simulation itself can be seen as stochastic processes. Their composition with a criticality metric $\kappa$ is a random variable $K$. For test evaluation of an entire scenario class $l$, for which not all instances can be tested, we need to make a statistical statement about the system under test failing a test derived from $l$ (TE2). A possible strategy is to estimate the probability $p_{\mathrm{crit}}$ of a random scenario derived from $l$ being critical w.r.t. $\kappa$ and $\theta$. The true distribution of $K$ is unknown, which requires an estimate $\hat{p}_{\mathrm{crit}}$. Assuming the tested system is well-engineered and the threshold is well-chosen, we can expect $p_{\mathrm{crit}}$ to be very small. As explained in Section III-D, advanced Monte Carlo methods can be applied to obtain the estimate $\hat{p}_{\mathrm{crit}}$. A confidence statement about $p_{\mathrm{crit}}$ using $\hat{p}_{\mathrm{crit}}$ can then be interpreted as a statistical statement about test coverage based on the generated samples.
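Such a coverage statement can be illustrated with a crude Monte Carlo estimate and a distribution-free Hoeffding upper confidence bound. The stand-in 'system under test' and its 1% failure rate are assumptions; for realistically rare failures one would need the advanced methods (e.g. importance sampling) referenced above rather than this naive estimator.

```python
# Crude Monte Carlo estimate of the failure probability of a scenario class
# with a one-sided Hoeffding upper confidence bound. The stand-in system
# under test and its failure rate are assumptions for illustration.
import math
import random

def run_test(rng):
    """Stand-in for sampling a concrete scenario, executing it, and applying
    the pass/fail criterion; True means a critical (failing) outcome."""
    return rng.random() < 0.01

def estimate_p_crit(n, rng, confidence=0.95):
    """Point estimate and one-sided Hoeffding upper bound on p_crit."""
    failures = sum(run_test(rng) for _ in range(n))
    p_hat = failures / n
    # Hoeffding: P(p_crit > p_hat + eps) <= exp(-2 * n * eps**2)
    eps = math.sqrt(math.log(1.0 / (1.0 - confidence)) / (2.0 * n))
    return p_hat, p_hat + eps

p_hat, upper = estimate_p_crit(10_000, random.Random(7))
```

The bound shrinks only with the square root of the sample size, which makes plain why naive sampling is inadequate for the very small failure probabilities a safety case must demonstrate.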
IV Conclusion
As the result of our analysis, we present Table I, which summarizes the fundamental considerations from Section III, each marked with an identifier. These considerations are grouped according to the steps of the SBT process, as depicted in Figure 1, and each step is annotated with examples of the relevant literature.
In order to obtain a valid safety case for the release of an AV, a coherent, well-structured safety argumentation is required. Such a safety case needs to argue why and how the provided evidence satisfies the high-level safety requirements. The results of the testing process form one integral part of this evidence. In this regard, Table I provides a collection of considerations for which meaningful evidence needs to be gathered in order for SBT to support a safety case. However, this collection is not exhaustive, and additional considerations may be necessary to establish a valid safety case.
We performed a comprehensive review and analysis of the literature concerning SBT for automated vehicles. We presented numerous arguments, principles and assumptions that are fundamental to the automotive SBT approach. For each step of the process, we analyzed the strengths and weaknesses of the most promising contemporary approaches in order to uncover potential gaps and inconsistencies. As a result, we obtained a collection of fundamental considerations that need to be substantiated with evidence. Providing approaches that generate this evidence reliably is subject to further research. Finally, an exploration of additional fundamental considerations complementing our collection is likely to be necessary.
-  (2019-10) A Risk-index based Sampling Method to Generate Scenarios for the Evaluation of Automated Driving Vehicle Safety. In 2019 IEEE Intelligent Transportation Systems Conference (ITSC), pp. 667–672. External Links: Cited by: §III-D2, TABLE I.
-  (2018) . IEEE Access 6, pp. 14410–14430. Cited by: §III-A, TABLE I.
-  (2019) Functional decomposition — A contribution to overcome the parameter space explosion during validation of highly automated driving. Traffic injury prevention 20 (sup1), pp. S52–S57. Cited by: §III-D1, TABLE I.
-  (2017-06) Test scenario selection for system-level verification and validation of geolocation-dependent automotive control systems. In 2017 International Conference on Engineering, Technology and Innovation (ICE/ITMC), pp. 203–210. External Links: Cited by: §III-B, TABLE I.
-  (2018) Ontology based scene creation for the development of automated vehicles. In 2018 IEEE Intelligent Vehicles Symposium (IV), pp. 1813–1820. Cited by: §III-B, TABLE I.
-  (2018) Efficient Splitting of Test and Simulation Cases for the Verification of Highly Automated Driving Functions. In International Conference on Computer Safety, Reliability, and Security, pp. 139–153. Cited by: §III-E, TABLE I.
-  Paradigms in Scenario-Based Testing for Automated Driving. Note: unpublished Cited by: §III-D1, §III-F, TABLE I.
-  (2019-05) Testing and Validation of Highly Automated Systems. Technical report Cited by: §I, Fig. 1, §II-B.
-  (2019) Statistical Model Checking for Scenario-based verification of ADAS. In Control Strategies for Advanced Driver Assistance Systems and Autonomous Driving Functions, pp. 67–87. Cited by: §III-F, TABLE I.
-  (2019) ERTRAC Connected Automated Driving Roadmap. ERTRAC Working Group 9. Cited by: §I.
-  (2017) Towards a Scenario-Based Assessment Method for Highly Automated Driving Functions. Cited by: §III-F, TABLE I.
-  (2017-06) Spatiotemporal representation of driving scenarios and classification using neural networks. In 2017 IEEE Intelligent Vehicles Symposium (IV), pp. 1782–1788. External Links: Cited by: §III-B, TABLE I.
-  (2015) Virtuelle Integration. In Handbuch Fahrerassistenzsysteme, pp. 125–138. Cited by: §III-E, TABLE I.
-  (2018) Simulation-based identification of critical scenarios for cooperative and automated vehicles. SAE International Journal of Connected and Automated Vehicles 1 (2018-01-1066), pp. 93–106. External Links: Cited by: §III-D2, TABLE I.
-  (2019-10) Did We Test All Scenarios for Automated and Autonomous Driving Systems?. In 2019 IEEE Intelligent Transportation Systems Conference (ITSC), pp. 2950–2955. External Links: Cited by: §III-B, TABLE I.
-  (2016) Autonomous vehicles testing methods review. In 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), pp. 163–168. External Links: Cited by: §II-B, §III-E, TABLE I.
-  (2011) ISO 26262: Road vehicles – Functional safety. Norm, ISO, Geneva, Switzerland. Cited by: §I, §II-B, §II.
-  (2013) ISO/IEC/IEEE 29119 Software and systems engineering – Software testing. Norm, ISO, Geneva, Switzerland. Cited by: §II-B.
-  (2019) ISO/PAS 21448: Road vehicles – Safety of the intended functionality. Norm/PAS, ISO, Geneva, Switzerland. Cited by: §I, §II-A, §II-A, §III-A, TABLE I.
-  (2018-11) Criticality Metric for the Safety Validation of Automated Driving using Model Predictive Trajectory Optimization. In 2018 21st International Conference on Intelligent Transportation Systems (ITSC), pp. 60–65. Note: ISSN: 2153-0009 External Links: Cited by: §III-D2, TABLE I.
-  (2018) Criticality Metric for the Safety Validation of Automated Driving using Model Predictive Trajectory Optimization. In 2018 21st International Conference on Intelligent Transportation Systems (ITSC), pp. 60–65. Cited by: §III-D2, TABLE I.
-  (2017) Metrik zur Bewertung der Kritikalität von Verkehrssituationen und -szenarien. In 11. Workshop Fahrerassistenzsysteme, Cited by: §III-D2, TABLE I.
-  (2019) Macroscopic safety requirements for highly automated driving. Transportation research record 2673 (3), pp. 1–10. External Links: Cited by: §III-C, §III-D2, TABLE I.
-  (2020) Using Scenarios in Safety Validation of Automated Systems. In Validation and Verification of Automated Systems: Results of the ENABLE-S3 Project, A. Leitner, D. Watzenig, and J. Ibanez-Guzman (Eds.), pp. 27–44 (en). External Links: Cited by: §II-B.
-  (2016) Driving to safety: How many miles of driving would it take to demonstrate autonomous vehicle reliability?. Transportation Research Part A: Policy and Practice 94, pp. 182–193. Cited by: §I.
-  (2019-10) Defining Pass-/Fail-Criteria for Particular Tests of Automated Driving Functions. In 2019 IEEE Intelligent Transportation Systems Conference (ITSC), pp. 169–174. External Links: Cited by: §III-C, TABLE I.
-  (2019) Autonomous Vehicles Meet the Physical World: RSS, Variability, Uncertainty, and Proving Safety. In International Conference on Computer Safety, Reliability, and Security, pp. 245–253. Cited by: §I.
-  (2017) Identifying Unknown Unknowns in the Open World: Representations and Policies for Guided Exploration. In Thirty-First AAAI Conference on Artificial Intelligence, Cited by: §III-A.
-  (2019) From Functional to Logical Scenarios: Detailing a Keyword-Based Scenario Description for Execution in a Simulation Environment. In 2019 IEEE Intelligent Vehicles Symposium (IV), pp. 2383–2390. Cited by: §III-B, TABLE I.
-  (2018) Scenarios for development, test and validation of automated vehicles. In 2018 IEEE Intelligent Vehicles Symposium (IV), pp. 1821–1827. Cited by: §II-A, §III-A, §III-D2, TABLE I.
-  (2020-02) Identification & quantification of hazardous scenarios for highly automated driving. Note: DOI 10.13140/RG.2.2.26704.66564 Cited by: §III-B, TABLE I.
-  (2019-05) The PEGASUS method. Technical report Cited by: §I, Fig. 1, §II-B.
-  (2019) Safety Statement. Poster. Note: www.pegasusprojekt.de/files/tmpl/Pegasus-Abschlussveranstaltung/28_Safety_Statement.pdf Cited by: §III-F, TABLE I.
-  (2019-01) On the validation of complex systems operating in open contexts. arXiv:1902.10517. Cited by: §I, §III-A, TABLE I.
-  (2019) An Optimization-based Method to Identify Relevant Scenarios for Type Approval of Automated Vehicles. In Proceedings of the ESV—International Technical Conference on the Enhanced Safety of Vehicles, Eindhoven, The Netherlands, pp. 10–13. Cited by: §III-B, §III-D1, TABLE I.
-  (2017) System validation of highly automated vehicles with a database of relevant traffic scenarios. Cited by: §III-B, TABLE I.
-  (2010) Verification and validation of simulation models. In Proceedings of the 2010 winter simulation conference, pp. 166–183. Cited by: §III-E, TABLE I.
-  (2019-09) Fault tree-based derivation of safety requirements for automated driving on the example of cooperative valet parking. In 26th International Technical Conference on the Enhanced Safety of Vehicles (ESV) 2019, Cited by: §III-C, TABLE I.
-  (2018-10) On a Formal Model of Safe and Scalable Self-driving Cars. arXiv:1708.06374. Cited by: §I.
-  (2016) From Simulation Data to Test Cases for Fully Automated Driving and ADAS. In Testing Software and Systems, F. Wotawa, M. Nica, and N. Kushik (Eds.), Lecture Notes in Computer Science, Cham, pp. 191–206. Cited by: §III-D2, TABLE I.
-  (2019-05) A Method for Classifying Test Bench Configurations in a Scenario-Based Test Approach for Automated Vehicles. arXiv:1905.09018. Note: Comment: Declined at 2019 IEEE Intelligent Vehicles Symposium, will be updated in near future, 7 pages, 8 figures Cited by: §III-E, TABLE I.
-  (2015-09) Testing of Advanced Driver Assistance Towards Automated Driving: A Survey and Taxonomy on Existing Approaches and Open Questions. In 2015 IEEE 18th International Conference on Intelligent Transportation Systems, pp. 1455–1462. Note: ISSN: 2153-0009 Cited by: §II.
-  (2020) Coverage based testing for V&V and Safety Assurance of Self-driving Autonomous Vehicle: A Systematic Literature Review. In The Second IEEE International Conference On Artificial Intelligence Testing, Cited by: §II.
-  (2016) The release of autonomous vehicles. In Autonomous driving, pp. 425–449. Cited by: §I.
-  (2018-06) Using Time-to-React based on Naturalistic Traffic Object Behavior for Scenario-Based Risk Assessment of Automated Driving. In 2018 IEEE Intelligent Vehicles Symposium (IV), pp. 1521–1528. Note: ISSN: 1931-0587 Cited by: §III-D2, TABLE I.
-  (2019-09) Scenario Mining for Development of Predictive Safety Functions. In 2019 IEEE International Conference on Vehicular Electronics and Safety (ICVES), pp. 1–7. Note: ISSN: 2643-9743 Cited by: §III-B, TABLE I.
-  (2019) A framework for definition of logical scenarios for safety assurance of automated driving. Traffic injury prevention 20 (sup1), pp. S65–S70. Cited by: §III-B, TABLE I.
-  (2020-01) A simulation-based, statistical approach for the derivation of concrete scenarios for the release of highly automated driving functions. Note: DOI 10.13140/RG.2.2.15306.31683/1 Cited by: §III-D1, §III-D2, TABLE I.
-  (2015-07) Data-driven simulation and parametrization of traffic scenarios for the development of advanced driver assistance systems. In 2015 18th International Conference on Information Fusion (Fusion), pp. 1422–1428. Cited by: §III-B, TABLE I.