Fundamental Considerations around Scenario-Based Testing for Automated Driving

05/08/2020 · Christian Neurohr et al.

The homologation of automated vehicles, being safety-critical complex systems, requires sound evidence for their safe operability. Traditionally, verification and validation activities are guided by a combination of ISO 26262 and ISO/PAS 21448, together with distance-based testing. Starting at SAE Level 3, such approaches become infeasible, resulting in the need for novel methods. Scenario-based testing is regarded as a possible enabler for verification and validation of automated vehicles. Its effectiveness, however, rests on the consistency and substantiality of the arguments used in each step of the process. In this work, we sketch a generic framework around scenario-based testing and analyze contemporary approaches to the individual steps. For each step, we describe its function, discuss proposed approaches and solutions, and identify the underlying arguments, principles and assumptions. As a result, we present a list of fundamental considerations for which evidence needs to be gathered in order for scenario-based testing to support the homologation of automated vehicles.


I Introduction

The introduction of automated vehicles (AVs) to public roads promises many benefits. These include a reduced number of accidents caused by driver errors (safety), increased efficiency of the transport system (environment), increased spare time for users (comfort) and mobility for elderly and impaired users (social inclusion) [10]. At higher levels of automation, in particular at SAE Level ≥ 3, AVs become complex autonomous systems operating in an open context, i.e. dealing with unstructured real-world environments. Thus, the validation and approval of AVs pose an enormous challenge [34]. As AVs are a newly emerging and safety-critical technology, rigorously proving safe operability and strictly avoiding accidents are crucial for societal acceptance.

The process of assuring functional safety for conventional road vehicles is guided by ISO 26262 [17]. This process has been extended by ISO/PAS 21448 [19] to include the complementary aspect of the safety of the intended functionality (SOTIF), at least for systems up to SAE Level 2. In the automotive domain, testing is an important part of verification and validation and has traditionally used distance-based statistical arguments. However, for AVs, distance-based approaches to testing become infeasible due to the vast distances that would need to be covered [25] [44]. Instead, other approaches are beginning to emerge, such as 'Responsibility Sensitive Safety' (RSS), a formal model that relies on defining a safety envelope around the vehicle [39]. However, it is questionable whether this formal model can actually be implemented in the real world [27]. A novel approach, which has been explored by recent research projects such as PEGASUS [32] and ENABLE-S3 [8], is to perform scenario-based testing (SBT). In SBT, relevant test cases are derived from a manageable set of scenario classes.

In this paper, we take a deep dive into the fundamental considerations that must be taken into account in order for SBT to make a meaningful contribution to the verification and validation of AVs. The goal is to identify underlying principles and assumptions that are essential to the successful realization of SBT for automated driving. We discuss relevant terminology, notation and a general framework around SBT in Section II. The introduced framework provides the structure for Section III. For each process step, we first describe the state of the art; second, we semi-formally examine the fundamental arguments, principles, and assumptions that are inherent to the contemporary approaches. The core results of our analysis are summarized in Section IV as a condensed table, which may serve as an impulse for future research.

II Scenario-Based Testing

Fig. 1: Simplified framework around scenario-based testing based on [32] and [8]. Rectangular boxes represent process steps, whereas curved boxes depict artifacts.

Verification and validation serve the purpose of assuring properties of interest, such as safety and security, with respect to both the specified and the intended purpose of the system. In the automotive domain, testing is defined as the 'process of planning, preparing, and operating or exercising an item or an element to verify that it satisfies specified requirements, to detect anomalies, and to create confidence in its behavior' [17]. In SBT, the central artifact for this operation is the scenario. Such testing approaches are pursued as state of the art in the AV testing community [42] [43].

II-A Terminology and Notation

A standardized definition of the term scenario in the context of verification and validation of automated vehicles is presented in ISO/PAS 21448 [19], which we briefly introduce alongside a semi-formalism that will be used later on.

A scenario describes the development of scenes over time, thus building a temporal sequence. A test case is defined as a scenario enriched with pass/fail criteria suitable for test evaluation [19]. Menzel et al. propose the qualifications of functional, logical and concrete scenarios [30]. At the highest level of abstraction, a functional scenario is described semantically using natural language.

Formalizing a functional scenario, a logical scenario $\mathcal{L}$ can be represented by a state space and its interrelations. It includes a list of relevant parameters $p_1, \dots, p_n$ and their value ranges $R_1, \dots, R_n$ with $p_i \in R_i$ for $i = 1, \dots, n$. The state space of $\mathcal{L}$ can be described by $X_\mathcal{L} = R_1 \times \dots \times R_n$. Optionally, correlations between parameters and numeric constraints can be added. Probability distributions can be attached to the state space or the value ranges. We denote the set of all logical scenarios by $\mathcal{S}_{log}$.

A concrete scenario requires the assignment of a single value to each parameter. It can be obtained from a logical scenario $\mathcal{L}$ by instantiating all parameters with values $x_i \in R_i$, i.e. selecting a point $x \in X_\mathcal{L}$. We denote the set of all concrete scenarios by $\mathcal{S}_{con}$. Analogously, $\mathcal{S}_{con}(\mathcal{L})$ is the set of all concrete scenarios derived from a logical scenario $\mathcal{L}$. The process of instantiating a set of logical scenarios to concrete scenarios can be seen as a map $\iota : 2^{\mathcal{S}_{log}} \to 2^{\mathcal{S}_{con}}$ that respects the relations, constraints and distributions of $\mathcal{L}$.
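To make this semi-formalism tangible, the following minimal Python sketch (our own illustration; the names LogicalScenario and instantiate are hypothetical and not part of any scenario standard) models a logical scenario as bounded parameter ranges and derives a concrete scenario by instantiation:

    import random
    from dataclasses import dataclass

    @dataclass
    class LogicalScenario:
        # A logical scenario L: named parameters p_i with bounded value ranges R_i.
        name: str
        ranges: dict  # parameter name -> (low, high)

    def instantiate(scenario, rng):
        # Derive a concrete scenario: select one point x in the state space
        # X_L = R_1 x ... x R_n by assigning a single value to each parameter.
        # Uniform sampling is a placeholder for attached distributions.
        return {p: rng.uniform(lo, hi) for p, (lo, hi) in scenario.ranges.items()}

    cut_in = LogicalScenario("cut-in", {"ego_speed_mps": (20.0, 35.0),
                                        "cut_in_distance_m": (5.0, 60.0)})
    concrete = instantiate(cut_in, random.Random(42))  # one concrete scenario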

II-B General Framework

In order to generate the aforementioned test cases and to interpret the test results, various steps are commonly executed up- and downstream of the actual testing process. Thus, it is crucial to embed SBT into a larger framework. Figure 1 shows such an abstract framework around SBT, mainly based on the ENABLE-S3 scenario-based verification and validation process [8] and the PEGASUS method [32]. Note that this framework is not meant to be a core contribution of our work, but rather an organizational tool for the following considerations, obtained by incorporating various existing approaches. For example, the framework is also consistent with the publications by Huang et al. [16, Figure 8] and Kalisvaart et al. [24, Figure 7].

The first step, scenario elicitation, consists of deriving adequate scenario classes, e.g. in the form of logical scenarios. The requirement elicitation step complements these classes with a set of safety requirements that shall be satisfied by the system in the identified scenarios. Testing then evaluates the system's compliance with the requirements w.r.t. the scenario classes. Well-established concepts around testing can be found in standards such as ISO/IEC/IEEE 29119 [18] and ISO 26262 [17]. Within this test process, test cases are initially derived from the scenario classes and safety requirements during test derivation. Subsequently, test execution is performed using an appropriate test bench, either virtually, physically, or in combination. The executed tests are assessed in a test evaluation step. The results of this whole process are then used in a subsequent, overarching safety argumentation, which contributes to the safety case.

III Fundamental Considerations

Structured along this framework, we examine arguments, principles and assumptions that, in the authors' opinion, are fundamental to the SBT process. We first present some general considerations applicable to the overall process in Section III-A. Then, in Section III-B to Section III-F, we conduct a two-part examination for each process step: we first give an overview of the state of the art, followed by a structured analysis of the underlying fundamental considerations. The key insights are marked with an ID, linking them to Table I. We omit an in-depth analysis of the safety argumentation, which will be the subject of future research.

III-A General Considerations

SBT requires a unified understanding of what the term scenario means. As mentioned in Section II-A, ongoing standardization activities work towards this unified understanding, but open questions remain on the exact definition and usage of scenarios and what their qualifications describe. For example, the ISO/PAS 21448 uses logical scenarios in the test phase, while Menzel et al. propose to use concrete scenarios [30]. Depending on these definitions, the technical realizations of these qualifications, e.g. OpenDRIVE (asam.net/standards/detail/opendrive) and OpenSCENARIO (asam.net/standards/detail/openscenario), need to be sufficiently expressive and unambiguous such that scenario descriptions lead to test cases without loss or misinterpretation of information (G1).

Second, SBT often assumes smoothness of certain properties of interest, e.g. of criticality metrics, behaviors or trajectories, under variation of the parameters, i.e. slight variations of the parameters lead to slight changes of the considered property (G2). This smoothness can be exploited, e.g. to extrapolate knowledge of a single scenario to a set of scenarios, or to explore the close proximity of a scenario (in the parameter space) in order to estimate the direction of more challenging scenarios. In both cases, this increases the certainty of the inference from finitely many test results to sets of scenarios when analyzing complex systems. However, Poddey et al. [34] argue that AVs are an instance of complex systems which rely heavily on sensor technologies for the perception of their open context. This poses difficulties, as processing the sensory input often uses machine learning. These algorithms are prone to misclassifications as a result of minor input changes [2]; therefore, smoothness of such classification algorithms cannot easily be assumed.
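Where a simulation of the property of interest is available, the smoothness assumption (G2) can at least be probed empirically. The following sketch is a hypothetical finite-difference check, with prop standing in for any property evaluated on a concrete scenario, e.g. a criticality metric composed with a simulation:

    def smoothness_violations(prop, x, eps=0.01, tol=0.1):
        # Probe assumption G2: perturb each parameter of the concrete scenario x
        # by +/- eps and flag parameters for which the property jumps by more
        # than tol, i.e. where "slight variation -> slight change" fails.
        base = prop(x)
        flagged = []
        for p in x:
            for delta in (-eps, eps):
                y = dict(x)
                y[p] += delta
                if abs(prop(y) - base) > tol:
                    flagged.append((p, delta))
        return flagged

Such a check can only refute smoothness locally; it cannot assert it globally.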

Since the open context in which AVs are supposed to operate is constantly evolving, the framework around SBT has to be adaptable. A set of scenario classes and testing methods that is complete at the time of elicitation may be incomplete at the time of deployment for a number of unforeseeable reasons, so-called unknown unknowns [19] [28]. These range from novel objects or vehicles appearing in traffic, through newly introduced traffic regulations, to massive structural changes in traffic caused by human drivers reacting to the introduction of AVs. Hence, an integrated update process and constant monitoring of AVs during their life cycle are indispensable for SBT (G3).

III-B Scenario Elicitation

Most published frameworks follow an expert- or data-driven approach to elicit functional or logical scenarios. Expert-driven approaches use a manifestation and formalization of expert knowledge about the real world. Proposed methods include the direct use of laws, regulations and system specifications, as well as already existing logical scenarios [35]. As an a priori formalization step, it has been proposed to incorporate such knowledge into ontologies [5]. Based on such formalized knowledge, experts can conduct safety analyses to identify hazardous scenarios [31]. Scenarios can also be elicited in a data-driven manner. For example, large-scale observational studies can serve as a means to identify hazardous events, e.g. by training a neural network [12]. These can in turn be transformed into logical scenarios, e.g. by using the challenger framework [47], and can additionally be annotated with probability distributions from the observed data [36]. Such a process can be enhanced by extending the already existing data basis [15]. Besides investigating large data sets, one can also closely investigate a concrete real-world drive [49], which then allows for a parametrization into a logical scenario. Finally, there exist attempts to combine expert- and data-driven frameworks. In top-down approaches, logical scenarios are classified by experts, e.g. using a keyword-based scheme [29], and the attached probability distributions are derived from real-world data. Orthogonally, bottom-up approaches apply expert analysis to databases, e.g. a Successive Odds Ratio Analysis [46] or a classification tree approach [4], to classify observed concrete scenarios.

Expert-driven scenario elicitation necessitates defining a set of relevant real-world phenomena to be considered, implicitly or explicitly, e.g. via an ontological representation. This approach is prone to creating a gap between the identified and the relevant phenomena. While this gap can be controlled by using systematic methods and made visible by explicit representations, completeness guarantees cannot be given. Moreover, experts lack the combinatorial power for an exhaustive exploration of the open context, increasing the risk of an incomplete set of scenario classes. In that regard, automated analyses are able to complement an expert-based approach (SE1).

Most data-driven approaches rely on the existence of a sufficiently large data basis. It is assumed that relevant phenomena leave traces in recorded data, for example as causes of accidents in observational studies. In this case, relevant phenomena have to be present either directly in the data model or need to be identified manually by examination of the recorded scenarios (SE2). If the data basis is extended by collecting more data sets, the choice of what to observe is up to expert judgment, and arguments must be made as to why this choice leads to scenario classes that enable SBT. Regarding the location of data collection, experts must argue that the chosen locations are in some form representative of the targeted operational design domain. This applies particularly to the validity of measured real-world distributions and their generalizability. For collecting data and evaluating concrete real-world drives, it has to be decided which entities and relations shall be observed in the real world. Additionally, appropriate tools and measurement sensors with sufficient accuracy have to be chosen purposefully and explicitly, in order for a downstream safety argumentation to reason over the validity of such decisions (SE3).

Both expert- and data-driven approaches have in common that they must either elicit an arguably complete classification of the scenario space or reason about why the exclusion of certain scenarios is acceptable in the current context (SE4). While a complete classification of scenarios that covers the entire scenario space is essential for SBT, such a decomposition does not by itself reduce the overall test effort. However, if sound arguments can be made for why some classes can be excluded altogether, omitting these classes significantly reduces the required test effort later on.

III-C Requirement Elicitation

Of particular importance for any safety-relevant application is a systematic safety process that includes the identification of hazards and risks, the derivation of safety goals, and the decomposition of high-level safety goals into safety requirements on item and component level [26] [38]. For systems up to SAE Level 2, this is well covered by ISO 26262 and ISO/PAS 21448. For SAE Level ≥ 3, scenarios appear to be an ideal starting point for the derivation and elicitation of acceptance criteria. Junietz et al. examine how such a quantified requirement definition can take place based on acceptable risk levels [23].

The process of identifying and specifying requirements is inherently bound to the problems of correctness (does the specification match the actual requirement?), completeness (are all safety goals fully covered?), consistency (is the specification free of contradictions?) and validity (are we specifying a useful requirement in the first place?) (RE1). Furthermore, for requirements tested virtually, specific arguments must be made with regard to measurability and computability. Here, the number of virtually executed scenarios can grow extremely large. Thus, the key elements of requirements need to be observable and measurable in the virtual environment (RE2).

III-D Test Derivation

III-D1 Discretization

During test derivation, a discretization step from a scenario class $\mathcal{L}$ to a discrete class $\mathcal{L}^d$ is usually performed to reduce the test effort to a finite number of concretizations. This step can be seen as a map $d : \mathcal{S}_{log} \to \mathcal{S}_{log}$ with $d(\mathcal{L}) = \mathcal{L}^d$. If the state space $X_\mathcal{L} = R_1 \times \dots \times R_n$ of a logical scenario $\mathcal{L}$, i.e. a whole scenario class, is fully described using a set of bounded value ranges, some of these parameters describe measurable continuous real-world quantities. Examples include the speed of an object and rainfall quantity. Mapping each continuous value range $R_i$ to a finite, discrete one, denoted by $R_i^d$, results in the aforementioned logical scenario $\mathcal{L}^d$ with a finite, discrete state space $X_{\mathcal{L}^d} = R_1^d \times \dots \times R_n^d$.

While overly fine discretizations may produce many redundant test cases [35], any discretization step leads to gaps in the original state space, possibly resulting in undiscovered safety violations [3]. One approach defines the map $d$ according to probability distributions of the parameters [48], obtained from either synthesized or real-world data. In both cases, however, the question of the validity of the data arises. Discretization according to distributions is more sophisticated than equidistant splitting and leads to scenarios that resemble reality more closely, but its usefulness depends on the fundamental assumption that the 'probability of occurrence of a scenario correlates with its relevance in terms of the safety argumentation' [48]. Whether this assumption is justified depends on the paradigm used for the safety argumentation [7]. We note that unlikely parameter combinations could lead to critical outcomes. Thus, discretization might dismiss rare, critical scenarios due to prematurely eliminated parameter combinations (TD1).
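To illustrate this trade-off, the sketch below contrasts equidistant discretization of a bounded value range with a quantile-based variant; both are simplified stand-ins for the distribution-guided discretization discussed above, not the concrete method of [48]:

    import statistics

    def discretize_equidistant(lo, hi, k):
        # Map a continuous range R_i to k evenly spaced representatives R_i^d.
        step = (hi - lo) / (k - 1)
        return [lo + i * step for i in range(k)]

    def discretize_by_quantiles(observations, k):
        # Place representatives at quantiles of observed data, covering
        # frequent parameter values more densely; rare (and possibly
        # critical) regions are thinned out, cf. (TD1).
        return statistics.quantiles(observations, n=k + 1)  # k cut points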

III-D2 Variation Methods

In order to generate test cases for a given logical scenario $\mathcal{L}$, it is necessary to instantiate the parameters with concrete values [30]. For efficient testing, a systematic approach to instantiation is required. This can be done either deterministically, e.g. by stepping (equidistantly) through $X_\mathcal{L}$ or performing Boundary Value Analysis [40], or stochastically, e.g. using Monte Carlo methods [1]. If we attach probability distributions to each parameter value range $R_i$ (or a joint distribution defined on $X_\mathcal{L}$), the instantiation can be seen as a random variable that assigns concrete values $x \in X_\mathcal{L}$ to the parameters through sampling. Then, one may sample according to the real-world distributions of the parameters w.r.t. all constraints and interrelations of $\mathcal{L}$. In practice, these distributions are likely approximated from either locally measured naturalistic data [45] or from valid simulated data [1] [48]. Both cases assume that sampling according to real-world distributions enhances the value of test cases for the safety argument.
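As a hedged illustration of this random-variable view, the following sketch attaches a truncated normal distribution to each value range and samples concrete scenarios; the distributions and the simple rejection step are placeholders, not validated real-world data or constraint handling:

    import random

    def sample_concrete(ranges, dists, rng, n):
        # Draw n concrete scenarios. dists maps each parameter to (mu, sigma)
        # of a normal distribution; rejection keeps each sample inside its
        # value range R_i (real constraints may couple several parameters).
        scenarios = []
        for _ in range(n):
            x = {}
            for p, (lo, hi) in ranges.items():
                mu, sigma = dists[p]
                v = rng.gauss(mu, sigma)
                while not (lo <= v <= hi):  # truncate to the range R_i
                    v = rng.gauss(mu, sigma)
                x[p] = v
            scenarios.append(x)
        return scenarios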

A different approach argues that scenarios which are critical w.r.t. a suitable metric offer the most value for the safety argument and should therefore be tested preferentially [21]. However, as such scenarios rarely occur naturally [23], one has to increase their significance during the sampling process. This can be realized by sampling towards critical regions of $X_\mathcal{L}$: a criticality metric $\kappa$, which maps a discrete time series of values obtained from a simulation engine to a scalar measuring the criticality of a concrete scenario, is optimized over the instantiations of $\mathcal{L}$. This criticality-guided approach shifts much of the methodical burden to the metrics. First, a criticality metric has to reflect real-world criticality accurately [21] (TD2). A single metric is unlikely to capture all types of critical phenomena, so that several metrics in conjunction, each capturing different aspects of criticality, are required [14]. Moreover, each metric is evaluated on a time series depending on the concrete scenario, which is described by the parameters $p_1, \dots, p_n$. Thus, sampling in order to optimize $\kappa$ can only capture real-world criticalities that are (i) actively influenced by the parameters and (ii) accurately computed by $\kappa$. Clearly, (i) depends on the scenario description language and (ii) hinges on the validity of the simulation environment and the utilized models. Influencing factors for real-world criticality whose effects are not conveyed by this process can thus not be revealed. Simple metrics such as time-to-collision (TTC) incorporate only few parameters, but can be optimized efficiently. More involved metrics cover more influencing factors at the cost of higher computational effort [20]. When defining a new metric, e.g. by taking a weighted sum of known metrics, the complexity of the associated optimization problem is determined by its mathematical properties, e.g. whether it is continuous, differentiable, or convex.

For each criticality metric $\kappa$, a threshold $t_\kappa$ is required in order to label a time series $\sigma$ as critical whenever $\kappa(\sigma) \geq t_\kappa$ (TD3). Junietz et al. [22] suggest fitting thresholds based on manually annotated scenarios, which are then used to learn binary classifiers. Criticality thresholds can be used to define the critical subspace $X_{crit} \subseteq X_\mathcal{L}$ w.r.t. $\kappa$ and $t_\kappa$. The question arises whether optimization algorithms reliably cover the set $X_{crit}$ by finding critical concrete scenarios from all of its connected components. If components of $X_{crit}$ are missed entirely, one can no longer argue about having tested all critical instantiations of $\mathcal{L}$ (TD4). In order to prevent this, optimization algorithms may have to be executed repeatedly, using varying step lengths or starting points. In any case, the dependency on the simulation engine remains.
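The repeated-execution idea can be sketched as a multi-start random search; here kappa stands for the composition of simulation engine and criticality metric evaluated on a concrete scenario, and a real implementation would use proper optimizers and a validated simulation environment:

    def find_critical(kappa, ranges, rng, t_kappa, starts=10, steps=200):
        # Multi-start local random search maximizing a criticality metric.
        # Several starting points reduce the risk of missing whole connected
        # components of X_crit = {x : kappa(x) >= t_kappa} (TD4), although
        # they provide no coverage guarantee.
        critical = []
        for _ in range(starts):
            x = {p: rng.uniform(lo, hi) for p, (lo, hi) in ranges.items()}
            for _ in range(steps):
                y = {p: min(max(x[p] + rng.gauss(0.0, 0.05 * (hi - lo)), lo), hi)
                     for p, (lo, hi) in ranges.items()}
                if kappa(y) > kappa(x):
                    x = y  # hill-climb towards higher criticality
            if kappa(x) >= t_kappa:
                critical.append(x)
        return critical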

III-E Test Execution

ID Consideration Process Step Literature
G1 The term scenario, its qualifications and technical realizations are sufficiently expressive and unambiguous. General [34] [19] [30] [2]
G2 If smoothness of a property of interest of the system is assumed, this assumption is asserted.
G3 The SBT process is adaptable to real-world changes.
SE1 An expert-based approach is executed systematically and supported by automation. Scenario Elicitation [35] [5] [31] [12] [47] [36] [15] [49] [29] [46] [4]
SE2 An identification of all relevant phenomena is facilitated.
SE3 A data-driven approach uses representative measurement locations, devices and valid probability distributions.
SE4 A decomposition of the test space into scenario classes is complemented either by evidence for its completeness or an argumentation for the omission of classes.
RE1 Identified requirements are correct, consistent, complete and valid. Requirement Elicitation [26] [38] [23]
RE2 Key elements of requirements are observable and measurable in the environment.
TD1 Discretization does not prematurely eliminate valuable test cases. Test Derivation [30] [35] [23] [3] [48] [7] [40] [1] [45] [21] [14] [20] [22]
TD2 Utilized criticality measures are validated.
TD3 Suitable thresholds for criticality measures are employed.
TD4 The critical subspace is explored systematically.
TX1 The simulation environment, all models and their interactions are validated. Test Execution [16] [13] [41] [37] [6]
TE1 Test results are aggregated into a statement that supports the safety case. Test Evaluation [7] [11] [33] [9]
TE2 Statements about test coverage of scenario classes are derived using sound statistical arguments.
TABLE I: Overview of the examined fundamental considerations around scenario-based testing. Examples of relevant literature are stated for each process step.

Depending on the stage of the development process, there exists a diverse set of test execution methods [16]. An essential strategy is mixed virtual-physical testing, i.e. replacing physical components with virtual ones [13]. For each involved component and simulation model, a test method needs to be chosen that provides valid test results [41].

In order for virtual testing to replace physical testing, the corresponding virtual models have to be verified and validated against their physical counterparts [37]. The validity of a simulation environment depends on the validity of many individual models (e.g., environment models, behavior models, sensor models) as well as their interactions. Gathering relevant real-world data for virtual models is a key challenge in such a validation process, which also depends on the employed notion of validity (cf. [6]). Thus, the question of how to validate simulation environments remains a subject of current research (TX1).

III-F Test Evaluation

After execution, test cases need to be evaluated according to the safety requirements. For SBT, these requirements can appear in the form of thresholds for criticality metrics, evaluated using a finite sequence of measurements recorded during the evolution of a concrete scenario [11]. For real-world tests, compliance with qualitative requirements can be judged by experts. For virtual batch testing of scenarios, however, qualitative requirements need to be mapped to criticality metrics and thresholds. PEGASUS suggested a four-stage process for test case evaluation that assesses the aspects of safety distances, absence of collision, causality and mitigation [33]. Instead of binary test results, [7] proposes test case evaluation on a more detailed ordinal scale, incorporating knowledge about possible discrepancies between the anticipated and the actual category of a test case, depending on the underlying paradigm.

Formalizing test evaluation, we assume that executing a concrete scenario, instantiated by some $x \in X_\mathcal{L}$ from $\mathcal{L}$, results in a discrete time series of values $\sigma$. This time series can either be obtained by a simulation engine or by real-world measurements. We focus on the former here, i.e. $\sigma = sim(x)$ for a simulation map $sim$. A pass/fail criterion w.r.t. a suitable criticality metric $\kappa$ and threshold $t_\kappa$ is then given by $\kappa(\sigma) < t_\kappa$. The results of testing the instantiations of $\mathcal{L}$, obtained from sampling $X_\mathcal{L}$, need to be aggregated. A key challenge is to obtain a useful aggregate statement that supports the safety case (TE1). While a simple success ratio can be adequate for comfort functions, it might not be a good choice for testing safety goals. This challenge becomes even more complex when evaluating multiple metrics.

The map $sim$ and also, for a probabilistic simulation, the instantiation itself can be seen as stochastic processes. Their composition with a criticality metric yields a random variable $K = \kappa(sim(X))$. For test evaluation of an entire scenario class $\mathcal{L}$, for which not all instances can be tested, we need to make a statistical statement about the system under test failing a test coming from $\mathcal{L}$ (TE2). A possible strategy is to estimate the probability of a random scenario derived from $\mathcal{L}$ being critical w.r.t. $\kappa$, i.e. $p_{crit} = P(K \geq t_\kappa)$. The true distribution of $K$ is unknown, which requires an estimate $\hat{p}_{crit}$. Assuming the tested system is well-engineered and the threshold $t_\kappa$ is well-chosen, we can expect $p_{crit}$ to be very small. As explained in Section III-D, advanced Monte Carlo methods can be applied to obtain an estimate $\hat{p}_{crit}$. A confidence statement about $p_{crit}$ using $\hat{p}_{crit}$ can then be interpreted as a statistical statement about test coverage based on the generated samples [9].
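As an illustrative sketch of such a confidence statement (TE2), suppose n test cases are sampled i.i.d. from the scenario distribution of $\mathcal{L}$ and c of them turn out critical. A one-sided Hoeffding bound then yields a conservative upper confidence limit on $p_{crit}$; this simple estimator is our own stand-in for the advanced Monte Carlo methods referenced above:

    import math

    def coverage_upper_bound(n_tests, n_critical, alpha=0.05):
        # One-sided Hoeffding bound: with confidence at least 1 - alpha, the
        # true probability p_crit of a sampled scenario being critical is at
        # most p_hat + sqrt(ln(1/alpha) / (2 * n_tests)). Conservative for
        # very small p_crit; importance sampling tightens such estimates.
        p_hat = n_critical / n_tests
        return p_hat + math.sqrt(math.log(1.0 / alpha) / (2 * n_tests))

    # Example: 10,000 sampled tests with zero critical outcomes yield
    # coverage_upper_bound(10000, 0) ~= 0.0122 at 95% confidence.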

IV Results

As the result of our analysis, we present Table I, which summarizes the fundamental considerations from Section III, each marked with an identifier. These considerations are grouped according to the steps of the SBT process, as depicted in Figure 1, and each step is annotated with examples of the relevant literature.

In order to obtain a valid safety case for the release of an AV, a coherent, well-structured safety argumentation is required. Such a safety case needs to argue why and how the provided evidence satisfies the high-level safety requirements. One integral part of this evidence is the set of results of the testing process. In this regard, Table I provides a collection of considerations for which meaningful evidence needs to be gathered in order for SBT to support a safety case. However, this collection is not exhaustive, and additional considerations may be necessary to establish a valid safety case.

V Conclusion

A comprehensive review and analysis of the literature concerning SBT for automated vehicles was performed. We presented numerous arguments, principles and assumptions that are fundamental to the automotive SBT approach. For each step of the process, we analyzed the strengths and weaknesses of the most promising contemporary approaches in order to uncover potential gaps and inconsistencies. As a result, we obtained a collection of fundamental considerations that need to be substantiated with evidence. Providing approaches that reliably generate this evidence is the subject of further research. Finally, an exploration of additional fundamental considerations complementing our collection is likely to be necessary.

References

  • [1] Y. Akagi, R. Kato, S. Kitajima, J. Antona-Makoshi, and N. Uchida (2019) A Risk-index based Sampling Method to Generate Scenarios for the Evaluation of Automated Driving Vehicle Safety. In 2019 IEEE Intelligent Transportation Systems Conference (ITSC), pp. 667–672.
  • [2] N. Akhtar and A. Mian (2018) Threat of adversarial attacks on deep learning in computer vision: a survey. IEEE Access 6, pp. 14410–14430.
  • [3] C. Amersbach and H. Winner (2019) Functional decomposition – A contribution to overcome the parameter space explosion during validation of highly automated driving. Traffic Injury Prevention 20 (sup1), pp. S52–S57.
  • [4] J. Bach, J. Langner, S. Otten, E. Sax, and M. Holzäpfel (2017) Test scenario selection for system-level verification and validation of geolocation-dependent automotive control systems. In 2017 International Conference on Engineering, Technology and Innovation (ICE/ITMC), pp. 203–210.
  • [5] G. Bagschik, T. Menzel, and M. Maurer (2018) Ontology based scene creation for the development of automated vehicles. In 2018 IEEE Intelligent Vehicles Symposium (IV), pp. 1813–1820.
  • [6] E. Böde, M. Büker, U. Eberle, M. Fränzle, S. Gerwinn, and B. Kramer (2018) Efficient Splitting of Test and Simulation Cases for the Verification of Highly Automated Driving Functions. In International Conference on Computer Safety, Reliability, and Security, pp. 139–153.
  • [7] T. Brade, B. Kramer, and C. Neurohr. Paradigms in Scenario-Based Testing for Automated Driving. Unpublished.
  • [8] ENABLE-S3 Consortium (2019) Testing and Validation of Highly Automated Systems. Technical report.
  • [9] S. Gerwinn, E. Möhlmann, and A. Sieper (2019) Statistical Model Checking for Scenario-based verification of ADAS. In Control Strategies for Advanced Driver Assistance Systems and Autonomous Driving Functions, pp. 67–87.
  • [10] A. Graeter, M. Rosenquist, E. Steiger, and M. Harrer (2019) ERTRAC Connected Automated Driving Roadmap. ERTRAC Working Group 9.
  • [11] K. Groh, T. Kuehbeck, B. Fleischmann, M. Schiementz, and C. Chibelushi (2017) Towards a Scenario-Based Assessment Method for Highly Automated Driving Functions.
  • [12] R. Gruner, P. Henzler, G. Hinz, C. Eckstein, and A. Knoll (2017) Spatiotemporal representation of driving scenarios and classification using neural networks. In 2017 IEEE Intelligent Vehicles Symposium (IV), pp. 1782–1788.
  • [13] S. Hakuli and M. Krug (2015) Virtuelle Integration. In Handbuch Fahrerassistenzsysteme, pp. 125–138.
  • [14] S. Hallerbach, Y. Xia, U. Eberle, and F. Koester (2018) Simulation-based identification of critical scenarios for cooperative and automated vehicles. SAE International Journal of Connected and Automated Vehicles 1 (2018-01-1066), pp. 93–106.
  • [15] F. Hauer, T. Schmidt, B. Holzmüller, and A. Pretschner (2019) Did We Test All Scenarios for Automated and Autonomous Driving Systems? In 2019 IEEE Intelligent Transportation Systems Conference (ITSC), pp. 2950–2955.
  • [16] W. Huang, K. Wang, Y. Lv, and F. Zhu (2016) Autonomous vehicles testing methods review. In 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), pp. 163–168.
  • [17] ISO (2011) ISO 26262: Road vehicles – Functional safety. Norm, ISO, Geneva, Switzerland.
  • [18] ISO (2013) ISO/IEC/IEEE 29119: Software and systems engineering – Software testing. Norm, ISO, Geneva, Switzerland.
  • [19] ISO (2019) ISO/PAS 21448: Road vehicles – Safety of the intended functionality. Norm/PAS, ISO, Geneva, Switzerland.
  • [20] P. Junietz, F. Bonakdar, B. Klamann, and H. Winner (2018) Criticality Metric for the Safety Validation of Automated Driving using Model Predictive Trajectory Optimization. In 2018 21st International Conference on Intelligent Transportation Systems (ITSC), pp. 60–65.
  • [21] P. Junietz, F. Bonakdar, B. Klamann, and H. Winner (2018) Criticality Metric for the Safety Validation of Automated Driving using Model Predictive Trajectory Optimization. In 2018 21st International Conference on Intelligent Transportation Systems (ITSC), pp. 60–65.
  • [22] P. Junietz, J. Schneider, and H. Winner (2017) Metrik zur Bewertung der Kritikalität von Verkehrssituationen und -szenarien. In 11. Workshop Fahrerassistenzsysteme.
  • [23] P. Junietz, U. Steininger, and H. Winner (2019) Macroscopic safety requirements for highly automated driving. Transportation Research Record 2673 (3), pp. 1–10.
  • [24] S. Kalisvaart, Z. Slavik, and O. Op den Camp (2020) Using Scenarios in Safety Validation of Automated Systems. In A. Leitner, D. Watzenig, and J. Ibanez-Guzman (Eds.), Validation and Verification of Automated Systems: Results of the ENABLE-S3 Project, pp. 27–44.
  • [25] N. Kalra and S. M. Paddock (2016) Driving to safety: How many miles of driving would it take to demonstrate autonomous vehicle reliability? Transportation Research Part A: Policy and Practice 94, pp. 182–193.
  • [26] B. Klamann, M. Lippert, C. Amersbach, and H. Winner (2019) Defining Pass-/Fail-Criteria for Particular Tests of Automated Driving Functions. In 2019 IEEE Intelligent Transportation Systems Conference (ITSC), pp. 169–174.
  • [27] P. Koopman, B. Osyk, and J. Weast (2019) Autonomous Vehicles Meet the Physical World: RSS, Variability, Uncertainty, and Proving Safety. In International Conference on Computer Safety, Reliability, and Security, pp. 245–253.
  • [28] H. Lakkaraju, E. Kamar, R. Caruana, and E. Horvitz (2017) Identifying unknown unknowns in the open world: representations and policies for guided exploration. In Thirty-First AAAI Conference on Artificial Intelligence.
  • [29] T. Menzel, G. Bagschik, L. Isensee, A. Schomburg, and M. Maurer (2019) From Functional to Logical Scenarios: Detailing a Keyword-Based Scenario Description for Execution in a Simulation Environment. In 2019 IEEE Intelligent Vehicles Symposium (IV), pp. 2383–2390.
  • [30] T. Menzel, G. Bagschik, and M. Maurer (2018) Scenarios for development, test and validation of automated vehicles. In 2018 IEEE Intelligent Vehicles Symposium (IV), pp. 1821–1827.
  • [31] C. Neurohr, B. Kramer, M. Büker, E. Böde, M. Fränzle, and W. Damm (2020) Identification & quantification of hazardous scenarios for highly automated driving. DOI 10.13140/RG.2.2.26704.66564.
  • [32] PEGASUS Consortium (2019) The PEGASUS method. Technical report.
  • [33] PEGASUS (2019) Safety Statement. Poster. www.pegasusprojekt.de/files/tmpl/Pegasus-Abschlussveranstaltung/28_Safety_Statement.pdf
  • [34] A. Poddey, T. Brade, J. E. Stellet, and W. Branz (2019) On the validation of complex systems operating in open contexts. arXiv:1902.10517.
  • [35] T. Ponn, C. Gnandt, and F. Diermeyer (2019) An Optimization-based Method to Identify Relevant Scenarios for Type Approval of Automated Vehicles. In Proceedings of the ESV – International Technical Conference on the Enhanced Safety of Vehicles, Eindhoven, The Netherlands, pp. 10–13.
  • [36] A. Pütz, A. Zlocki, J. Bock, and L. Eckstein (2017) System validation of highly automated vehicles with a database of relevant traffic scenarios.
  • [37] R. G. Sargent (2010) Verification and validation of simulation models. In Proceedings of the 2010 Winter Simulation Conference, pp. 166–183.
  • [38] V. Schönemann, H. Winner, T. Glock, E. Sax, B. Boeddeker, G. Verhaeg, F. Tronci, and G. García Padilla (2019) Fault tree-based derivation of safety requirements for automated driving on the example of cooperative valet parking. In 26th International Technical Conference on the Enhanced Safety of Vehicles (ESV) 2019.
  • [39] S. Shalev-Shwartz, S. Shammah, and A. Shashua (2018) On a Formal Model of Safe and Scalable Self-driving Cars. arXiv:1708.06374.
  • [40] C. Sippl, F. Bock, D. Wittmann, H. Altinger, and R. German (2016) From Simulation Data to Test Cases for Fully Automated Driving and ADAS. In Testing Software and Systems, Lecture Notes in Computer Science, pp. 191–206.
  • [41] M. Steimle, T. Menzel, and M. Maurer (2019) A Method for Classifying Test Bench Configurations in a Scenario-Based Test Approach for Automated Vehicles. arXiv:1905.09018.
  • [42] J. E. Stellet, M. R. Zofka, J. Schumacher, T. Schamm, F. Niewels, and J. M. Zöllner (2015) Testing of Advanced Driver Assistance Towards Automated Driving: A Survey and Taxonomy on Existing Approaches and Open Questions. In 2015 IEEE 18th International Conference on Intelligent Transportation Systems, pp. 1455–1462.
  • [43] Z. Tahir and R. Alexander (2020) Coverage based testing for V&V and Safety Assurance of Self-driving Autonomous Vehicle: A Systematic Literature Review. In The Second IEEE International Conference On Artificial Intelligence Testing.
  • [44] W. Wachenfeld and H. Winner (2016) The release of autonomous vehicles. In Autonomous Driving, pp. 425–449.
  • [45] S. Wagner, K. Groh, T. Kuhbeck, M. Dorfel, and A. Knoll (2018) Using Time-to-React based on Naturalistic Traffic Object Behavior for Scenario-Based Risk Assessment of Automated Driving. In 2018 IEEE Intelligent Vehicles Symposium (IV), pp. 1521–1528.
  • [46] H. Watanabe, L. Tobisch, J. Rost, J. Wallner, and G. Prokop (2019) Scenario Mining for Development of Predictive Safety Functions. In 2019 IEEE International Conference on Vehicular Electronics and Safety (ICVES), pp. 1–7.
  • [47] H. Weber, J. Bock, J. Klimke, C. Roesener, J. Hiller, R. Krajewski, A. Zlocki, and L. Eckstein (2019) A framework for definition of logical scenarios for safety assurance of automated driving. Traffic Injury Prevention 20 (sup1), pp. S65–S70.
  • [48] N. Weber, D. Frerichs, and U. Eberle (2020) A simulation-based, statistical approach for the derivation of concrete scenarios for the release of highly automated driving functions. DOI 10.13140/RG.2.2.15306.31683/1.
  • [49] M. R. Zofka, F. Kuhnt, R. Kohlhaas, C. Rist, T. Schamm, and J. M. Zöllner (2015) Data-driven simulation and parametrization of traffic scenarios for the development of advanced driver assistance systems. In 2015 18th International Conference on Information Fusion (Fusion), pp. 1422–1428.