Assurance Cases as Foundation Stone for Auditing AI-enabled and Autonomous Systems: Workshop Results and Political Recommendations for Action from the ExamAI Project

08/17/2022
by   Rasmus Adler, et al.

The European Machinery Directive and related harmonized standards do consider that software is used to generate safety-relevant behavior of the machinery, but they do not consider all kinds of software. In particular, software based on machine learning (ML) is not considered for the realization of safety-relevant behavior. This limits the introduction of suitable safety concepts for autonomous mobile robots and other autonomous machinery, which commonly depend on ML-based functions. We investigated this issue and the way safety standards define safety measures to be implemented against software faults. Functional safety standards use Safety Integrity Levels (SILs) to define which safety measures shall be implemented; they provide rules for determining the SIL and for selecting safety measures depending on the SIL. In this paper, we argue that this approach can hardly be adopted with respect to ML and other kinds of Artificial Intelligence (AI). Instead of simple rules for determining an SIL and applying the related measures against faults, we propose the use of assurance cases to argue that the individually selected and applied measures are sufficient in the given case. To obtain a first assessment of the feasibility and usefulness of our proposal, we presented and discussed it in a workshop with experts from industry, German statutory accident insurance companies, work safety and standardization commissions, and representatives from various national, European, and international working groups dealing with safety and AI. In this paper, we summarize the proposal and the workshop discussion. Moreover, we examine to what extent our proposal is in line with the proposed European AI Act and with current safety standardization initiatives addressing AI and autonomous systems.

