Improving Out-of-Distribution Detection via Epistemic Uncertainty Adversarial Training

09/05/2022
by Derek Everett, et al.

The quantification of uncertainty is important for the adoption of machine learning, especially for rejecting out-of-distribution (OOD) data back to human experts for review. Yet progress has been slow, as a balance must be struck between computational efficiency and the quality of uncertainty estimates. For this reason, many practitioners use deep ensembles of neural networks or Monte Carlo dropout to obtain reasonable uncertainty estimates at modest compute and memory cost. Surprisingly, when we focus on the real-world applicable constraint of ≤ 1% false positive rate (FPR), prior methods fail to reliably detect OOD samples as such. Notably, even Gaussian random noise fails to trigger these popular OOD techniques. We help to alleviate this problem by devising a simple adversarial training scheme that incorporates an attack on the epistemic uncertainty predicted by the dropout ensemble. We demonstrate that this method improves OOD detection performance on standard data (i.e., not adversarially crafted), and improves the standardized partial AUC from near-random guessing to ≥ 0.75.
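The abstract's core ingredients, a Monte Carlo dropout ensemble and a gradient-based attack on its predicted epistemic uncertainty, can be illustrated with a minimal PyTorch sketch. This is an assumption-laden illustration, not the paper's implementation: the network, the variance-of-softmax uncertainty measure, the FGSM-style step, and the choice to *minimize* uncertainty (crafting inputs the ensemble is over-confident about) are all hypothetical choices by the editor, since the abstract does not specify them.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class SmallNet(nn.Module):
    """Toy classifier with dropout; stands in for the paper's (unspecified) model."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(10, 32), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(32, 3),
        )

    def forward(self, x):
        return self.net(x)

def epistemic_uncertainty(model, x, n_samples=8):
    # Keep dropout active at inference time so repeated forward passes
    # form a "dropout ensemble" of predictions.
    model.train()
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )
    # One common proxy: variance of the softmax outputs across dropout
    # masks, summed over classes. Higher variance = more epistemic doubt.
    return probs.var(dim=0).sum(dim=-1)

def uncertainty_attack(model, x, eps=0.1):
    # FGSM-style step on the uncertainty itself (hypothetical direction:
    # here we descend, seeking points the ensemble is wrongly sure about;
    # such points could then be labeled as OOD during adversarial training).
    x_adv = x.clone().requires_grad_(True)
    epistemic_uncertainty(model, x_adv).sum().backward()
    return (x_adv - eps * x_adv.grad.sign()).detach()

model = SmallNet()
x = torch.randn(4, 10)
x_adv = uncertainty_attack(model, x)
print(x_adv.shape)  # torch.Size([4, 10])
```

In a training loop, one plausible use of `uncertainty_attack` is to generate perturbed samples each batch and add a loss term pushing the ensemble's uncertainty back up on them, so that low-uncertainty "blind spots" near the data are actively penalized.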


