Black-box Safety Analysis and Retraining of DNNs based on Feature Extraction and Clustering

01/13/2022
by Mohammed Oualid Attaoui, et al.

Deep neural networks (DNNs) have demonstrated superior performance over classical machine learning in supporting many features of safety-critical systems. Although DNNs are now widely used in such systems (e.g., self-driving cars), there is limited progress regarding automated support for functional safety analysis in DNN-based systems. For example, the identification of root causes of errors, to enable both risk analysis and DNN retraining, remains an open problem. In this paper, we propose SAFE, a black-box approach to automatically characterize the root causes of DNN errors. SAFE relies on a transfer learning model pre-trained on ImageNet to extract features from error-inducing images. It then applies a density-based clustering algorithm to detect arbitrarily shaped clusters of images modeling plausible causes of error. Finally, the clusters are used to effectively retrain and improve the DNN. The black-box nature of SAFE is motivated by our objective not to require changes to, or even access to, the DNN internals, thus facilitating adoption. Experimental results show the superior ability of SAFE to identify different root causes of DNN errors in case studies from the automotive domain. SAFE also yields significant improvements in DNN accuracy after retraining, while saving significant execution time and memory compared to alternatives.
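The abstract outlines a pipeline of ImageNet-pretrained feature extraction from error-inducing images followed by density-based clustering of those features. The sketch below illustrates that kind of pipeline only; the abstract does not name the backbone, the clustering algorithm, or any hyperparameters, so the VGG16 backbone, DBSCAN, the image directory, and the eps/min_samples values are illustrative assumptions, not the authors' actual configuration.

```python
# Minimal sketch of a SAFE-like pipeline: extract features from error-inducing
# images with an ImageNet-pretrained backbone, then group them with a
# density-based clustering algorithm. Backbone choice, clustering algorithm,
# paths, and hyperparameters below are assumptions for illustration.
import glob

import numpy as np
import torch
from PIL import Image
from sklearn.cluster import DBSCAN
from torchvision import models, transforms

# ImageNet-pretrained backbone used purely as a black-box feature extractor.
backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
backbone.classifier = backbone.classifier[:-1]  # drop the final classification layer
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def extract_features(image_paths):
    """Return one feature vector per error-inducing image."""
    feats = []
    with torch.no_grad():
        for path in image_paths:
            img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            feats.append(backbone(img).squeeze(0).numpy())
    return np.stack(feats)

# Hypothetical directory of images misclassified by the DNN under analysis.
error_images = sorted(glob.glob("error_inducing_images/*.png"))
features = extract_features(error_images)

# Density-based clustering: each cluster groups images that plausibly share
# a common root cause of error; label -1 marks noise / unclustered images.
labels = DBSCAN(eps=5.0, min_samples=5).fit_predict(features)
for cluster_id in sorted(set(labels) - {-1}):
    members = [p for p, l in zip(error_images, labels) if l == cluster_id]
    print(f"cluster {cluster_id}: {len(members)} images")
```

In a SAFE-style workflow, clusters produced this way would then be inspected to characterize common error causes and to select additional training data for retraining the DNN.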

Related research

02/03/2020
Supporting DNN Safety Analysis and Retraining through Heatmap-based Unsupervised Learning
Deep neural networks (DNNs) are increasingly critical in modern safety-c...

10/15/2022
HUDD: A tool to debug DNNs for safety analysis
We present HUDD, a tool that supports safety analysis practices for syst...

01/31/2023
DNN Explanation for Safety Analysis: an Empirical Evaluation of Clustering-based Approaches
The adoption of deep neural networks (DNNs) in safety-critical contexts ...

04/01/2022
Simulator-based explanation and debugging of hazard-triggering events in DNN-based safety-critical systems
When Deep Neural Networks (DNNs) are used in safety-critical systems, en...

09/05/2019
Detecting Deep Neural Network Defects with Data Flow Analysis
Deep neural networks (DNNs) are shown to be promising solutions in many ...

06/03/2021
DeepOpt: Scalable Specification-based Falsification of Neural Networks using Black-Box Optimization
Decisions made by deep neural networks (DNNs) have a tremendous impact o...

01/07/2020
PaRoT: A Practical Framework for Robust Deep Neural Network Training
Deep Neural Networks (DNNs) are finding important applications in safety...
