Performance Analysis of Out-of-Distribution Detection on Trained Neural Networks

04/26/2022
by Jens Henriksson, et al.

Deep Learning has improved several areas in recent years. Deep Neural Networks (DNNs) have achieved remarkable results in non-safety-related applications; however, for using DNNs in safety-critical applications, we still lack approaches for verifying the robustness of such models. A common challenge arises when a DNN is exposed to out-of-distribution samples, i.e., inputs outside the scope of its training distribution, which can nevertheless produce high-confidence outputs despite the model having no prior knowledge of such input. In this paper, we analyze three methods that separate in-distribution from out-of-distribution data, called supervisors, on four well-known DNN architectures. We find that outlier-detection performance improves with the quality of the model. We also analyze the performance of the individual supervisors during the training procedure by applying each supervisor at predefined intervals to investigate how its performance evolves as training proceeds. We observe that understanding the relationship between training results and supervisor performance is crucial for improving a model's robustness and for indicating which input samples require further measures to improve the robustness of a DNN. In addition, our work paves the road towards an instrument for safety argumentation for safety-critical applications. This paper is an extended version of our previous work presented at SEAA 2019 (cf. [1]); here, we elaborate on the metrics used, add an additional supervisor, and test them on two additional datasets.
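To make the notion of a supervisor concrete, the following is a minimal sketch of the simplest family of detectors the paper evaluates: a monitor that flags an input as out-of-distribution when the network's maximum softmax confidence falls below a threshold. The function names and the threshold value are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def supervisor_flags_ood(logits, threshold=0.5):
    """Flag samples whose maximum softmax confidence is below threshold.

    A supervisor of this kind wraps a trained classifier: it does not
    change the prediction, it only decides whether the prediction
    should be trusted (in-distribution) or rejected (out-of-distribution).
    """
    confidence = softmax(logits).max(axis=-1)
    return confidence < threshold

# One peaked (confident) logit vector and one flat (uncertain) one.
logits = np.array([[8.0, 0.5, 0.2],    # peaked  -> high confidence
                   [0.4, 0.5, 0.45]])  # flat    -> low confidence
print(supervisor_flags_ood(logits))    # -> [False  True]
```

Applying such a supervisor at predefined intervals during training, as the paper describes, simply means re-running this check on a held-out outlier set after every few epochs and tracking how the detection metrics evolve.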
