Integrating Testing and Operation-related Quantitative Evidences in Assurance Cases to Argue Safety of Data-Driven AI/ML Components

02/10/2022
by Michael Kläs, et al.

In the future, AI will increasingly find its way into systems that can potentially cause physical harm to humans. For such safety-critical systems, it must be demonstrated that their residual risk does not exceed what is acceptable. This includes, in particular, the AI components that are part of such systems' safety-related functions. Assurance cases are an intensively discussed option today for specifying a sound and comprehensive argument that a system is sufficiently safe. In previous work, it has been suggested to argue the safety of AI components by structuring assurance cases around two complementary risk acceptance criteria, one of which is used to derive quantitative targets for the AI. The argumentation structures commonly proposed to show the achievement of such quantitative targets, however, focus on failure rates obtained from statistical testing; other important aspects are considered only qualitatively, if at all. In contrast, this paper proposes a more holistic argumentation structure for demonstrating that such a target has been achieved, namely a structure that quantitatively integrates test results with runtime aspects and with the impact of scope compliance and test data quality. We elaborate different argumentation options, present the underlying mathematical considerations, and discuss the resulting implications for practical application. Using the proposed argumentation structure may not only increase the integrity of assurance cases but also allow claims on quantitative targets that would not be justifiable otherwise.
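To make the quantitative idea in the abstract concrete, the following minimal sketch shows one way a statistical test bound could be combined with a runtime scope-compliance term. It is not taken from the paper: the function names, the decomposition into in-scope and out-of-scope inputs, and all numbers are illustrative assumptions. An upper confidence bound on the in-scope failure rate is derived from test results (Clopper-Pearson) and then combined, via the law of total probability, with the assumed share of out-of-scope inputs and the miss rate of a runtime scope monitor.

# Illustrative sketch only; the decomposition and all parameters are
# assumptions for illustration, not the formulas proposed in the paper.
from scipy.stats import beta

def clopper_pearson_upper(failures: int, n_tests: int, alpha: float = 0.05) -> float:
    """One-sided (1 - alpha) Clopper-Pearson upper confidence bound on the failure rate."""
    if failures >= n_tests:
        return 1.0
    return beta.ppf(1.0 - alpha, failures + 1, n_tests - failures)

def combined_failure_bound(failures: int, n_tests: int,
                           p_out_of_scope: float,
                           p_monitor_miss: float,
                           alpha: float = 0.05) -> float:
    """Upper bound on the operational failure rate, assuming that
    (a) the test data is representative of in-scope operation, and
    (b) out-of-scope inputs lead to failure unless caught by a runtime scope monitor."""
    p_fail_in_scope = clopper_pearson_upper(failures, n_tests, alpha)
    # Law of total probability over in-scope vs. out-of-scope inputs.
    return ((1.0 - p_out_of_scope) * p_fail_in_scope
            + p_out_of_scope * p_monitor_miss)

if __name__ == "__main__":
    # 0 failures in 3000 tests, 2% out-of-scope inputs, monitor misses 10% of them.
    print(f"{combined_failure_bound(0, 3000, 0.02, 0.10):.4f}")

Under these assumptions, adding more test cases tightens only the in-scope term, while the out-of-scope term is governed by the runtime monitor and the operational input distribution, which illustrates why a purely test-based argument can be insufficient for the overall quantitative target.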

