
Reliability Assessment and Safety Arguments for Machine Learning Components in Assuring Learning-Enabled Autonomous Systems

11/30/2021
by Xingyu Zhao, et al. (University of Liverpool, GOV.UK, Heriot-Watt University)

The increasing use of Machine Learning (ML) components embedded in autonomous systems – so-called Learning-Enabled Systems (LES) – has resulted in the pressing need to assure their functional safety. As with traditional functional safety, the emerging consensus within both industry and academia is to use assurance cases for this purpose. Typically, assurance cases support claims of reliability in support of safety, and can be viewed as a structured way of organising arguments and evidence generated from safety analysis and reliability modelling activities. While such assurance activities are traditionally guided by consensus-based standards developed from vast engineering experience, LES pose new challenges in safety-critical applications due to the characteristics and design of ML models. In this article, we first present an overall assurance framework for LES with an emphasis on quantitative aspects, e.g., breaking down system-level safety targets into component-level requirements and supporting claims stated in reliability metrics. We then introduce a novel model-agnostic Reliability Assessment Model (RAM) for ML classifiers that utilises the operational profile and robustness verification evidence. We discuss the model assumptions and the inherent challenges of assessing ML reliability uncovered by our RAM, and propose practical solutions. Probabilistic safety arguments at the lower ML component level are also developed based on the RAM. Finally, to evaluate and demonstrate our methods, we not only conduct experiments on synthetic/benchmark datasets but also demonstrate the scope of our methods with a comprehensive case study on Autonomous Underwater Vehicles in simulation.
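To make the key idea concrete, the abstract describes a RAM that combines an operational profile (how likely each region of the input space is in operation) with robustness verification evidence. A minimal, hypothetical sketch of that combination is below; it is not the paper's exact model, and the partitioning into cells, the weights, and the function names are all illustrative assumptions. The estimate is conservative: any cell that cannot be verified robust contributes its full operational probability mass to the unreliability figure.

```python
# Illustrative sketch (not the paper's exact RAM): combine an operational
# profile over input-space cells with per-cell robustness-verification
# outcomes to obtain a conservative reliability (pmi/pfd-style) estimate.

def estimate_unreliability(op_profile, verified_robust):
    """op_profile: operational-profile weight per cell (must sum to 1).
    verified_robust: per-cell boolean, True if the classifier was
    verified robust (failure-free) within that cell.
    Returns the total operational probability mass of cells that
    could not be verified robust -- a conservative unreliability bound."""
    assert abs(sum(op_profile) - 1.0) < 1e-9, "profile must sum to 1"
    assert len(op_profile) == len(verified_robust)
    return sum(w for w, ok in zip(op_profile, verified_robust) if not ok)

# Example: four cells; verification fails on the two rarest cells.
profile = [0.5, 0.25, 0.125, 0.125]
robust = [True, True, False, False]
print(estimate_unreliability(profile, robust))  # -> 0.25
```

In practice the per-cell evidence would come from a formal robustness verifier rather than a boolean flag, and a full assessment would also propagate estimation uncertainty, but the weighted aggregation over the operational profile is the core quantitative step the abstract refers to.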

