Simulator-based explanation and debugging of hazard-triggering events in DNN-based safety-critical systems

04/01/2022
by Hazem Fahmy, et al.

When Deep Neural Networks (DNNs) are used in safety-critical systems, engineers should determine the safety risks associated with DNN errors observed during testing. For DNNs processing images, engineers visually inspect all error-inducing images to determine common characteristics among them. Such characteristics correspond to hazard-triggering events (e.g., low illumination) that are essential inputs for safety analysis. Though informative, such activity is expensive and error-prone. To support such safety analysis practices, we propose SEDE, a technique that generates readable descriptions for commonalities in error-inducing, real-world images and improves the DNN through effective retraining. SEDE leverages the availability of simulators, which are commonly used for cyber-physical systems. SEDE relies on genetic algorithms to drive simulators towards the generation of images that are similar to error-inducing, real-world images in the test set; it then leverages rule learning algorithms to derive expressions that capture commonalities in terms of simulator parameter values. The derived expressions are then used to generate additional images to retrain and improve the DNN. With DNNs performing in-car sensing tasks, SEDE successfully characterized hazard-triggering events leading to a DNN accuracy drop. Also, SEDE enabled retraining to achieve significant improvements in DNN accuracy, up to 18 percentage points.
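The pipeline the abstract describes can be sketched in miniature. In this hedged illustration, the "simulator" is a toy function over two hypothetical parameters (`illumination`, `head_angle`) rather than an image renderer, similarity to a real error-inducing image is Euclidean distance in a feature space, and the rule-learning step is reduced to interval bounds over the fittest individuals; none of these names or simplifications come from the paper itself.

```python
import random

def simulate(params):
    # Stand-in for rendering an image and extracting its features;
    # a real setup would drive an actual simulator here.
    return (params["illumination"], params["head_angle"])

def fitness(params, target):
    # Distance between simulated features and those of a real
    # error-inducing test-set image (lower is better).
    sim = simulate(params)
    return sum((s - t) ** 2 for s, t in zip(sim, target)) ** 0.5

def evolve(target, pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)
    pop = [{"illumination": rng.uniform(0.0, 1.0),
            "head_angle": rng.uniform(-90.0, 90.0)}
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, target))
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        for p in parents:                        # Gaussian mutation
            children.append({
                "illumination": min(1.0, max(0.0,
                    p["illumination"] + rng.gauss(0.0, 0.05))),
                "head_angle": min(90.0, max(-90.0,
                    p["head_angle"] + rng.gauss(0.0, 5.0))),
            })
        pop = parents + children                 # elitist replacement
    pop.sort(key=lambda p: fitness(p, target))
    return pop

# Features of a hypothetical error-inducing image: dark scene, head turned.
target = (0.1, 30.0)
population = evolve(target)
closest = population[:5]

# "Rule learning" reduced to interval bounds over the best individuals,
# i.e., an expression over simulator parameter values.
rule = {k: (min(p[k] for p in closest), max(p[k] for p in closest))
        for k in ("illumination", "head_angle")}
```

In SEDE the derived expressions play the role of `rule` here: sampling new parameter values inside those bounds yields additional images for retraining the DNN.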

Related research:

- Supporting DNN Safety Analysis and Retraining through Heatmap-based Unsupervised Learning (02/03/2020): Deep neural networks (DNNs) are increasingly critical in modern safety-c...
- AutoRepair: Automated Repair for AI-Enabled Cyber-Physical Systems under Safety-Critical Conditions (04/12/2023): Cyber-Physical Systems (CPS) have been widely deployed in safety-critica...
- HUDD: A tool to debug DNNs for safety analysis (10/15/2022): We present HUDD, a tool that supports safety analysis practices for syst...
- Black-box Safety Analysis and Retraining of DNNs based on Feature Extraction and Clustering (01/13/2022): Deep neural networks (DNNs) have demonstrated superior performance over ...
- Many-Objective Reinforcement Learning for Online Testing of DNN-Enabled Systems (10/27/2022): Deep Neural Networks (DNNs) have been widely used to perform real-world ...
- DNN Explanation for Safety Analysis: an Empirical Evaluation of Clustering-based Approaches (01/31/2023): The adoption of deep neural networks (DNNs) in safety-critical contexts ...
