
Towards Trustworthy Predictions from Deep Neural Networks with Fast Adversarial Calibration

12/20/2020
by Christian Tomani, et al.

To facilitate widespread acceptance of AI systems guiding decision making in real-world applications, trustworthiness of deployed models is key. That is, it is crucial for predictive models to be uncertainty-aware and to yield well-calibrated (and thus trustworthy) predictions both for in-domain samples and under domain shift. Recent efforts to account for predictive uncertainty include post-processing steps for trained neural networks, Bayesian neural networks, and alternative non-Bayesian approaches such as ensembles and evidential deep learning. Here, we propose an efficient yet general modelling approach for obtaining well-calibrated, trustworthy probabilities for samples obtained after a domain shift. We introduce a new training strategy that combines an entropy-encouraging loss term with an adversarial calibration loss term and demonstrate that this results in well-calibrated and technically trustworthy predictions for a wide range of domain drifts. We comprehensively evaluate previously proposed approaches across different data modalities, a large range of data sets (including sequence data), network architectures, and perturbation strategies. We observe that our modelling approach substantially outperforms existing state-of-the-art approaches, yielding well-calibrated predictions under domain drift.
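To make the described training strategy concrete, the following is a minimal PyTorch sketch of one training step that combines a standard cross-entropy loss with an entropy-encouraging term and a calibration penalty evaluated on adversarially perturbed inputs. The helper names (entropy_term, calibration_gap, fgsm_perturb), the FGSM perturbation, the confidence-vs-accuracy surrogate for calibration error, and the weights lambda_ent, lambda_cal, and eps are assumptions for illustration; they are not taken from the paper's implementation.

    import torch
    import torch.nn.functional as F

    def entropy_term(logits):
        # Mean predictive entropy of the softmax distribution (to be encouraged).
        probs = F.softmax(logits, dim=1)
        log_probs = F.log_softmax(logits, dim=1)
        return -(probs * log_probs).sum(dim=1).mean()

    def calibration_gap(logits, labels):
        # Squared gap between mean confidence and accuracy on a batch,
        # used here as a simple differentiable surrogate for calibration error.
        probs = F.softmax(logits, dim=1)
        conf, preds = probs.max(dim=1)
        acc = (preds == labels).float().mean()
        return (conf.mean() - acc) ** 2

    def fgsm_perturb(model, x, y, eps=0.03):
        # Fast gradient sign method: perturb inputs along the sign of the
        # input gradient of the cross-entropy loss (hypothetical choice here).
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        return (x_adv + eps * grad.sign()).detach()

    def training_step(model, x, y, lambda_ent=0.1, lambda_cal=1.0, eps=0.03):
        logits = model(x)
        nll = F.cross_entropy(logits, y)
        # Entropy-encouraging term: subtracting entropy means minimizing the
        # total loss pushes predictions away from overconfidence.
        loss = nll - lambda_ent * entropy_term(logits)
        # Adversarial calibration term: penalize miscalibration on
        # adversarially perturbed copies of the batch.
        x_adv = fgsm_perturb(model, x, y, eps)
        loss = loss + lambda_cal * calibration_gap(model(x_adv), y)
        return loss

In a training loop, training_step would replace the usual cross-entropy-only loss before the backward pass; the relative weighting of the three terms is a tuning choice and is not specified by the abstract.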

Related research:
12/20/2020

Post-hoc Uncertainty Calibration for Domain Drift Scenarios

We address the problem of uncertainty calibration. While standard deep n...
09/29/2022

Bayesian Neural Network Versus Ex-Post Calibration For Prediction Uncertainty

Probabilistic predictions from neural networks which account for predict...
11/25/2019

A Novel Unsupervised Post-Processing Calibration Method for DNNS with Robustness to Domain Shift

The uncertainty estimation is critical in real-world decision making app...
06/18/2020

Calibrated Reliable Regression using Maximum Mean Discrepancy

Accurate quantification of uncertainty is crucial for real-world applica...
12/22/2017

Obtaining Accurate Probabilistic Causal Inference by Post-Processing Calibration

Discovery of an accurate causal Bayesian network structure from observat...
02/21/2020

Calibrating Deep Neural Networks using Focal Loss

Miscalibration – a mismatch between a model's confidence and its correct...
12/01/2022

Deep Kernel Learning for Mortality Prediction in the Face of Temporal Shift

Neural models, with their ability to provide novel representations, have...