A Forgotten Danger in DNN Supervision Testing: Generating and Detecting True Ambiguity

07/21/2022
by Michael Weiss et al.

Deep Neural Networks (DNNs) are becoming a crucial component of modern software systems, but they are prone to fail under conditions that differ from those observed during training (out-of-distribution inputs) or on inputs that are truly ambiguous, i.e., inputs that admit multiple classes with nonzero probability in their ground-truth labels. Recent work proposed DNN supervisors to detect high-uncertainty inputs before their possible misclassification leads to any harm. To test and compare the capabilities of DNN supervisors, researchers proposed test generation techniques that focus the testing effort on high-uncertainty inputs which should be recognized as anomalous by supervisors. However, existing test generators can only produce out-of-distribution inputs: no existing model- and supervisor-independent technique supports the generation of truly ambiguous test inputs. In this paper, we propose a novel way to generate ambiguous inputs to test DNN supervisors and use it to empirically compare several existing supervisor techniques. Specifically, we propose AmbiGuess, which generates ambiguous samples for image classification problems through gradient-guided sampling in the latent space of a regularized adversarial autoencoder. Moreover, we conducted what is, to the best of our knowledge, the most extensive comparative study of DNN supervisors, considering their ability to detect four distinct types of high-uncertainty inputs, including truly ambiguous ones.
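The core idea behind AmbiGuess, gradient-guided sampling in an autoencoder's latent space, can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the function sample_ambiguous and all of its parameters are hypothetical, and the sketch assumes a pre-trained decoder (e.g., the decoder half of an adversarial autoencoder) and a classifier under test. It optimizes a latent vector so that the decoded image's predicted class distribution approaches a 50/50 split between two chosen classes, i.e., a truly ambiguous input.

```python
import torch
import torch.nn.functional as F

def sample_ambiguous(decoder, classifier, z_dim, num_classes,
                     class_a, class_b, steps=200, lr=0.05, device="cpu"):
    """Search the latent space for an input that the classifier finds
    maximally ambiguous between class_a and class_b (hypothetical sketch)."""
    # Start from a random point in the autoencoder's latent space.
    z = torch.randn(1, z_dim, device=device, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)

    # Target label distribution: equal mass on the two chosen classes,
    # i.e., a truly ambiguous ground truth.
    target = torch.zeros(1, num_classes, device=device)
    target[0, class_a] = 0.5
    target[0, class_b] = 0.5

    for _ in range(steps):
        opt.zero_grad()
        x = decoder(z)  # decode the latent point into an image
        log_probs = F.log_softmax(classifier(x), dim=1)
        # Minimize the KL divergence between the classifier's predictive
        # distribution and the 50/50 target; its gradient w.r.t. z guides
        # the search toward an ambiguous region of the latent space.
        loss = F.kl_div(log_probs, target, reduction="batchmean")
        loss.backward()
        opt.step()

    with torch.no_grad():
        return decoder(z)  # candidate ambiguous test input
```

In practice one would still filter the candidates, e.g., by checking that the classifier's two most probable classes are indeed the intended pair with near-equal probabilities, and by verifying that the decoded image stays on the data manifold.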


Related research

- Distribution-Aware Testing of Neural Networks Using Generative Models (02/26/2021): The reliability of software that has a Deep Neural Network (DNN) as a co...
- Detecting Anomalous Inputs to DNN Classifiers By Joint Statistical Testing at the Layers (07/29/2020): Detecting anomalous inputs, such as adversarial and out-of-distribution ...
- On the Importance of Regularisation & Auxiliary Information in OOD Detection (07/15/2021): Neural networks are often utilised in critical domain applications (e.g....
- On-manifold Adversarial Data Augmentation Improves Uncertainty Calibration (12/16/2019): Uncertainty estimates help to identify ambiguous, novel, or anomalous in...
- Fail-Safe Execution of Deep Learning based Systems through Uncertainty Monitoring (02/01/2021): Modern software systems rely on Deep Neural Networks (DNN) when processi...
- Conformal prediction under ambiguous ground truth (07/18/2023): In safety-critical classification tasks, conformal prediction allows to ...
- Are all outliers alike? On Understanding the Diversity of Outliers for Detecting OODs (03/23/2021): Deep neural networks (DNNs) are known to produce incorrect predictions w...
