DNN Explanation for Safety Analysis: an Empirical Evaluation of Clustering-based Approaches

01/31/2023
by Mohammed Oualid Attaoui, et al.

The adoption of deep neural networks (DNNs) in safety-critical contexts is often prevented by the lack of effective means to explain their results, especially when they are erroneous. In our previous work, we proposed a white-box approach (HUDD) and a black-box approach (SAFE) to automatically characterize DNN failures. They both identify clusters of similar images from a potentially large set of images leading to DNN failures. However, the analysis pipelines for HUDD and SAFE were instantiated in specific ways according to common practices, deferring the analysis of other pipelines to future work. In this paper, we report on an empirical evaluation of 99 different pipelines for root cause analysis of DNN failures. They combine transfer learning, autoencoders, heatmaps of neuron relevance, dimensionality reduction techniques, and different clustering algorithms. Our results show that the best pipeline combines transfer learning, DBSCAN, and UMAP. It leads to clusters almost exclusively capturing images of the same failure scenario, thus facilitating root cause analysis. Further, it generates distinct clusters for each root cause of failure, thus enabling engineers to detect all the unsafe scenarios. Interestingly, these results hold even for failure scenarios that are only observed in a small percentage of the failing images.
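To make the best-performing pipeline concrete, the sketch below shows its general shape: extract image features with a pretrained network (transfer learning), reduce them with UMAP, then cluster them with DBSCAN so that each cluster ideally corresponds to one failure root cause. This is an illustrative assumption of how such a pipeline can be wired together with torchvision, umap-learn, and scikit-learn; the specific backbone, preprocessing, and parameter values are not taken from the paper.

```python
# Illustrative sketch of a clustering pipeline for failure-inducing images:
# transfer-learning features -> UMAP reduction -> DBSCAN clustering.
# All libraries and parameter values here are assumptions, not the authors'
# exact configuration.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
import umap
from sklearn.cluster import DBSCAN

# 1. Feature extraction: reuse a pretrained CNN and drop its classifier head.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # output 512-d embeddings instead of logits
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(image_paths):
    """Map failure-inducing images to fixed-length feature vectors."""
    feats = []
    with torch.no_grad():
        for path in image_paths:
            x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            feats.append(backbone(x).squeeze(0).numpy())
    return np.stack(feats)

def cluster_failures(image_paths):
    features = embed(image_paths)
    # 2. Dimensionality reduction with UMAP before density-based clustering.
    reduced = umap.UMAP(n_components=2, random_state=0).fit_transform(features)
    # 3. DBSCAN groups nearby points; label -1 marks noise (unclustered images).
    return DBSCAN(eps=0.5, min_samples=5).fit_predict(reduced)
```

In practice, engineers would then inspect a few representative images per cluster to name the failure scenario it captures (e.g., a particular head pose or lighting condition), which is the root cause analysis step the paper evaluates.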


