Integrating Testing and Operation-related Quantitative Evidences in Assurance Cases to Argue Safety of Data-Driven AI/ML Components

02/10/2022
by Michael Kläs, et al.

In the future, AI will increasingly find its way into systems that can potentially cause physical harm to humans. For such safety-critical systems, it must be demonstrated that their residual risk does not exceed what is acceptable. This applies in particular to the AI components that are part of such systems' safety-related functions. Assurance cases are today an intensively discussed option for specifying a sound and comprehensive safety argument to demonstrate a system's safety. Previous work has suggested arguing safety for AI components by structuring assurance cases around two complementary risk acceptance criteria, one of which is used to derive quantitative targets for the AI. The argumentation structures commonly proposed to show the achievement of such quantitative targets, however, focus on failure rates from statistical testing; further important aspects are considered only qualitatively, if at all. In contrast, this paper proposes a more holistic argumentation structure for demonstrating that the target has been achieved, namely a structure that quantitatively integrates test results with runtime aspects and with the impact of scope compliance and test data quality. We elaborate different argumentation options, present the underlying mathematical considerations, and discuss the resulting implications for their practical application. Using the proposed argumentation structure might not only increase the integrity of assurance cases but may also allow claims on quantitative targets that would not be justifiable otherwise.
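To illustrate the kind of integration the abstract describes, consider a minimal sketch (not the paper's actual formulas) of combining a failure-rate bound from statistical testing with a runtime scope-compliance estimate. It assumes failure-free testing, uses the well-known "rule of three" approximation for a 95% upper confidence bound, and conservatively treats any out-of-scope input as a potential failure; all numbers are hypothetical.

```python
def rule_of_three_upper_bound(n_tests: int) -> float:
    """Approximate 95% upper confidence bound on the failure rate
    after n_tests failure-free statistical tests ('rule of three')."""
    return 3.0 / n_tests


def combined_failure_bound(p_test: float, p_out_of_scope: float) -> float:
    """Conservative combined bound: apply the tested failure rate only
    to in-scope inputs and assume worst-case behavior (failure
    probability 1) whenever the input falls outside the tested scope."""
    return (1.0 - p_out_of_scope) * p_test + p_out_of_scope * 1.0


# Hypothetical figures: 10,000 failure-free tests, and a runtime scope
# monitor estimating that 1 in 10,000 inputs is out of scope.
p_test = rule_of_three_upper_bound(10_000)            # 3e-4
p_total = combined_failure_bound(p_test, 1e-4)
print(f"combined bound: {p_total:.6f}")
```

The sketch shows why a purely test-based argument can overclaim: even a small out-of-scope probability adds directly to the achievable bound, which is the kind of effect the proposed argumentation structure makes explicit.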
