Towards Trustworthy Predictions from Deep Neural Networks with Fast Adversarial Calibration

12/20/2020
by   Christian Tomani, et al.

To facilitate widespread acceptance of AI systems that guide decision making in real-world applications, trustworthiness of deployed models is key. That is, it is crucial for predictive models to be uncertainty-aware and to yield well-calibrated (and thus trustworthy) predictions both for in-domain samples and under domain shift. Recent efforts to account for predictive uncertainty include post-processing steps for trained neural networks, Bayesian neural networks, and alternative non-Bayesian approaches such as ensembles and evidential deep learning. Here, we propose an efficient yet general modelling approach for obtaining well-calibrated, trustworthy probabilities for samples obtained after a domain shift. We introduce a new training strategy that combines an entropy-encouraging loss term with an adversarial calibration loss term, and demonstrate that this yields well-calibrated and trustworthy predictions for a wide range of domain drifts. We comprehensively evaluate previously proposed approaches across data modalities, a large range of data sets (including sequence data), network architectures, and perturbation strategies. We observe that our modelling approach substantially outperforms existing state-of-the-art approaches, yielding well-calibrated predictions under domain drift.
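The general shape of such an objective, a task loss augmented with an entropy-encouraging term and a calibration penalty, can be sketched in plain NumPy. This is a minimal illustration, not the authors' implementation: the weights `lam_s` and `lam_adv` are hypothetical names, the calibration term here is a crude confidence-vs-accuracy gap rather than the paper's adversarial calibration loss, and the generation of adversarially perturbed inputs is omitted entirely.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def entropy_term(probs):
    # Mean predictive entropy; rewarding high entropy discourages
    # overconfident predictions.
    return -np.mean(np.sum(probs * np.log(probs + 1e-12), axis=1))

def calibration_term(probs, labels):
    # Squared gap between mean confidence and accuracy -- a crude
    # stand-in for an expected-calibration-error-style penalty,
    # which would normally be evaluated on perturbed inputs.
    conf = probs.max(axis=1)
    acc = (probs.argmax(axis=1) == labels).astype(float)
    return (conf.mean() - acc.mean()) ** 2

def combined_loss(logits, labels, lam_s=0.1, lam_adv=1.0):
    # Cross-entropy, minus an entropy reward, plus a calibration penalty.
    probs = softmax(logits)
    n = logits.shape[0]
    nll = -np.mean(np.log(probs[np.arange(n), labels] + 1e-12))
    return nll - lam_s * entropy_term(probs) + lam_adv * calibration_term(probs, labels)
```

In a real training loop the calibration penalty would be computed on adversarially perturbed copies of the batch, so that the network is pushed toward calibrated confidences not only in-domain but also under perturbation.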
