Towards Trustworthy Predictions from Deep Neural Networks with Fast Adversarial Calibration
To facilitate widespread acceptance of AI systems guiding decision making in real-world applications, trustworthiness of deployed models is key. That is, it is crucial for predictive models to be uncertainty-aware and to yield well-calibrated (and thus trustworthy) predictions both for in-domain samples and under domain shift. Recent efforts to account for predictive uncertainty include post-processing steps for trained neural networks, Bayesian neural networks, and non-Bayesian alternatives such as ensembles and evidential deep learning. Here, we propose an efficient yet general modelling approach for obtaining well-calibrated, trustworthy probabilities for samples obtained after a domain shift. We introduce a new training strategy combining an entropy-encouraging loss term with an adversarial calibration loss term and demonstrate that this results in well-calibrated and technically trustworthy predictions for a wide range of domain drifts. We comprehensively evaluate previously proposed approaches across different data modalities, a large range of data sets (including sequence data), network architectures, and perturbation strategies. We observe that our modelling approach substantially outperforms existing state-of-the-art approaches, yielding well-calibrated predictions under domain drift.
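As an illustration of the kind of training strategy described above, the following is a minimal PyTorch-style sketch that adds an entropy-encouraging term and an adversarial calibration penalty on top of a standard cross-entropy loss. The loss weights, the FGSM-style input perturbation, and the batch-level calibration surrogate are assumptions chosen for illustration, not the authors' exact formulation.

# Hypothetical sketch: cross-entropy plus an entropy-encouraging term and an
# adversarial calibration penalty. The coefficients, the FGSM-style
# perturbation, and the differentiable calibration surrogate are illustrative
# assumptions, not the paper's exact losses.
import torch
import torch.nn.functional as F

def predictive_entropy(logits):
    # Mean entropy of the softmax predictions (higher = less confident).
    probs = F.softmax(logits, dim=-1)
    log_probs = F.log_softmax(logits, dim=-1)
    return -(probs * log_probs).sum(dim=-1).mean()

def calibration_surrogate(logits, targets):
    # Differentiable stand-in for a calibration error: squared gap between
    # mean confidence and mean accuracy on the batch.
    probs = F.softmax(logits, dim=-1)
    conf, preds = probs.max(dim=-1)
    acc = (preds == targets).float()
    return (conf.mean() - acc.mean()).pow(2)

def trustworthy_loss(model, x, y, eps=0.05, lam_ent=0.1, lam_adv=1.0):
    logits = model(x)
    ce = F.cross_entropy(logits, y)

    # Entropy-encouraging term: subtracted from the loss to discourage
    # over-confident in-domain predictions.
    ent = predictive_entropy(logits)

    # FGSM-style perturbation of the inputs to probe calibration under a
    # simulated shift of the data distribution.
    x_adv = x.detach().clone().requires_grad_(True)
    adv_logits = model(x_adv)
    grad = torch.autograd.grad(F.cross_entropy(adv_logits, y), x_adv)[0]
    x_adv = (x_adv + eps * grad.sign()).detach()

    # Calibration penalty evaluated on the perturbed batch.
    adv_cal = calibration_surrogate(model(x_adv), y)

    return ce - lam_ent * ent + lam_adv * adv_cal

In a training loop, the returned scalar would simply replace the usual cross-entropy loss (loss = trustworthy_loss(model, inputs, labels); loss.backward(); optimizer.step()); the actual paper may define and weight the individual terms differently.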