Evaluating Robustness of Predictive Uncertainty Estimation: Are Dirichlet-based Models Reliable?

10/28/2020
by Anna-Kathrin Kopetzki, et al.

Robustness to adversarial perturbations and accurate uncertainty estimation are crucial for the reliable application of deep learning in real-world settings. Dirichlet-based uncertainty (DBU) models are a family of models that predict the parameters of a Dirichlet distribution (instead of a categorical one) and promise to signal when not to trust their predictions: on unknown or ambiguous samples, the models are supposed to flag their predictions with high uncertainty. In this work, we show that DBU models with standard training are not robust w.r.t. three important tasks in the field of uncertainty estimation. In particular, we evaluate how useful the uncertainty estimates are to (1) indicate correctly classified samples and (2) detect adversarial examples that try to fool classification. We further evaluate the reliability of DBU models on the task of (3) distinguishing between in-distribution (ID) and out-of-distribution (OOD) data. To this end, we present the first study of certifiable robustness for DBU models. Furthermore, we propose novel uncertainty attacks that fool models into assigning high confidence to OOD data and low confidence to ID data, respectively. Based on our results, we explore the first approaches to making DBU models more robust. We use adversarial training procedures based on label attacks, uncertainty attacks, or random noise, and demonstrate how they affect the robustness of DBU models on ID data and OOD data.
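
To make the setting concrete, below is a minimal, illustrative sketch of a DBU-style classifier head that outputs Dirichlet concentration parameters, a differential-entropy uncertainty score, and a PGD-style uncertainty attack that perturbs inputs to raise or lower that score. The backbone, the exponential link, the L-infinity budget, and the choice of differential entropy as the uncertainty measure are assumptions for illustration, not the paper's exact configuration.

```python
# Illustrative sketch only (PyTorch); not the authors' implementation.
import torch
import torch.nn as nn


class DirichletHead(nn.Module):
    """Maps backbone features to Dirichlet concentration parameters alpha > 0."""

    def __init__(self, backbone: nn.Module, feature_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        self.fc = nn.Linear(feature_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.fc(self.backbone(x))
        # Exponential link keeps concentrations positive; +1 keeps alpha >= 1.
        return torch.exp(logits) + 1.0


def differential_entropy(alpha: torch.Tensor) -> torch.Tensor:
    """Differential entropy of Dir(alpha); larger values signal higher uncertainty."""
    alpha0 = alpha.sum(dim=-1)
    k = alpha.shape[-1]
    log_beta = torch.lgamma(alpha).sum(dim=-1) - torch.lgamma(alpha0)
    return (
        log_beta
        + (alpha0 - k) * torch.digamma(alpha0)
        - ((alpha - 1.0) * torch.digamma(alpha)).sum(dim=-1)
    )


def uncertainty_attack(model, x, eps=0.03, step=0.005, iters=20, maximize=True):
    """PGD-style perturbation within an L-inf ball of radius eps.

    maximize=True pushes ID inputs toward high uncertainty (low confidence);
    maximize=False pushes OOD inputs toward low uncertainty (high confidence).
    """
    x_adv = x.clone().detach()
    sign = 1.0 if maximize else -1.0
    for _ in range(iters):
        x_adv.requires_grad_(True)
        score = differential_entropy(model(x_adv)).mean()
        grad = torch.autograd.grad(score, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + sign * step * grad.sign()              # ascent / descent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project onto L-inf ball
            x_adv = x_adv.clamp(0.0, 1.0)                          # keep valid pixel range
    return x_adv.detach()
```

One way to use such a sketch: run uncertainty_attack on ID test points with maximize=True and check whether the resulting differential entropy crosses the threshold the model would use to reject OOD inputs; certification and the adversarial training variants studied in the paper go beyond this sketch.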

Related research

10/10/2019 · Information Robust Dirichlet Networks for Predictive Uncertainty Estimation
Precise estimation of uncertainty in predictions for AI systems is a cri...

11/29/2022 · Understanding and Enhancing Robustness of Concept-based Models
Rising usage of deep neural networks to perform decision making in criti...

10/24/2022 · Data-IQ: Characterizing subgroups with heterogeneous outcomes in tabular data
High model performance, on average, can hide that models may systematica...

02/15/2023 · Uncertainty-Estimation with Normalized Logits for Out-of-Distribution Detection
Out-of-distribution (OOD) detection is critical for preventing deep lear...

04/09/2022 · Uncertainty-Informed Deep Learning Models Enable High-Confidence Predictions for Digital Histopathology
A model's ability to express its own predictive uncertainty is an essent...

10/24/2022 · Reliability-Aware Prediction via Uncertainty Learning for Person Image Retrieval
Current person image retrieval methods have achieved great improvements ...

02/25/2022 · Deep Dirichlet uncertainty for unsupervised out-of-distribution detection of eye fundus photographs in glaucoma screening
The development of automatic tools for early glaucoma diagnosis with col...