Identifying the Hazard Boundary of ML-enabled Autonomous Systems Using Cooperative Co-Evolutionary Search

01/31/2023
by Sepehr Sharifi, et al.

In Machine Learning (ML)-enabled autonomous systems (MLASs), it is essential to identify the hazard boundary of ML Components (MLCs) in the MLAS under analysis. Since such a boundary captures the conditions, in terms of MLC behavior and system context, that can lead to hazards, it can be used, for example, to build a safety monitor that triggers predefined fallback mechanisms at runtime whenever the hazard boundary is reached. However, determining such a hazard boundary for an ML component is challenging. The space combining system contexts (i.e., scenarios) and MLC behaviors (i.e., inputs and outputs) is far too large for exhaustive exploration, and too large even for conventional metaheuristics, such as genetic algorithms, to handle. Additionally, the high computational cost of the simulations required to detect MLAS safety violations makes the problem even more challenging. Furthermore, it is unrealistic to consider a region of the problem space deterministically safe or unsafe, because of the uncontrollable parameters in simulations and the non-linear behaviors of ML models (e.g., deep neural networks) in the MLAS under analysis. To address these challenges, we propose MLCSHE (ML Component Safety Hazard Envelope), a novel method based on a Cooperative Co-Evolutionary Algorithm (CCEA), which tackles the high-dimensional search problem by decomposing it into two lower-dimensional search subproblems. Moreover, we take a probabilistic view of safe and unsafe regions and define a novel fitness function that measures the distance from the probabilistic hazard boundary, thus driving the search effectively. We evaluate the effectiveness and efficiency of MLCSHE on a complex Autonomous Vehicle (AV) case study. Our results show that MLCSHE is significantly more effective and efficient than both a standard genetic algorithm and random search.
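To make the co-evolutionary idea above concrete, the following is a minimal, runnable sketch of a two-population cooperative co-evolutionary search with a probabilistic boundary-distance fitness. All identifiers, encodings, and parameters here (random_scenario, simulate, the hazard-probability band, population sizes, and so on) are hypothetical placeholders for illustration, not the authors' MLCSHE implementation; the toy simulate function merely stands in for an expensive, stochastic simulator.

```python
import random

POP_SIZE = 20
GENERATIONS = 50
P_HAZARD_LOW, P_HAZARD_HIGH = 0.1, 0.9  # assumed probabilistic-boundary band
N_SIM_REPS = 5  # repeated simulations to estimate the hazard probability

def random_scenario():
    # Hypothetical encoding: a vector of scenario parameters (e.g., weather, traffic).
    return [random.uniform(0, 1) for _ in range(4)]

def random_mlc_behavior():
    # Hypothetical encoding: a vector describing MLC inputs/outputs.
    return [random.uniform(0, 1) for _ in range(4)]

def simulate(scenario, behavior):
    # Stand-in for an expensive simulator run; returns True on a safety violation.
    # A toy noisy, nonlinear rule keeps the example runnable end to end.
    return sum(s * b for s, b in zip(scenario, behavior)) > 1.0 + random.gauss(0, 0.1)

def hazard_probability(scenario, behavior):
    # Estimate P(hazard) over repeated, stochastic simulations.
    return sum(simulate(scenario, behavior) for _ in range(N_SIM_REPS)) / N_SIM_REPS

def fitness(scenario, behavior):
    # Distance from the probabilistic hazard boundary: pairs whose estimated
    # hazard probability falls inside the band are closest to the boundary.
    target = (P_HAZARD_LOW + P_HAZARD_HIGH) / 2
    return -abs(hazard_probability(scenario, behavior) - target)  # higher is better

def mutate(individual, sigma=0.05):
    return [min(1.0, max(0.0, x + random.gauss(0, sigma))) for x in individual]

def evolve(population, partner_representative, eval_fn):
    # Evaluate each individual against the representative of the other population,
    # keep the best half, and refill with mutants (simple truncation selection).
    scored = sorted(population,
                    key=lambda ind: eval_fn(ind, partner_representative),
                    reverse=True)
    survivors = scored[: POP_SIZE // 2]
    return survivors + [mutate(random.choice(survivors))
                        for _ in range(POP_SIZE - len(survivors))]

scenarios = [random_scenario() for _ in range(POP_SIZE)]
behaviors = [random_mlc_behavior() for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Cooperative step: each population evolves against a representative of the
    # other (here simply the partner's current best; MLCSHE's actual
    # collaboration scheme is not reproduced).
    best_behavior = max(behaviors, key=lambda b: fitness(scenarios[0], b))
    scenarios = evolve(scenarios, best_behavior, lambda s, b: fitness(s, b))
    best_scenario = max(scenarios, key=lambda s: fitness(s, behaviors[0]))
    behaviors = evolve(behaviors, best_scenario, lambda b, s: fitness(s, b))

# Boundary pairs: scenario-behavior combinations near the probabilistic boundary.
boundary_pairs = [(s, b) for s in scenarios for b in behaviors
                  if P_HAZARD_LOW <= hazard_probability(s, b) <= P_HAZARD_HIGH]
print(f"Found {len(boundary_pairs)} candidate boundary pairs")
```

The two ideas mirrored here are the decomposition of the high-dimensional problem into two cooperating populations (scenarios and MLC behaviors), each evolved against a representative of the other, and a fitness that rewards pairs whose estimated hazard probability lies near a probabilistic boundary rather than treating regions as deterministically safe or unsafe.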
