A Novel Unsupervised Post-Processing Calibration Method for DNNs with Robustness to Domain Shift

11/25/2019
by Azadeh Sadat Mozafari, et al.

Uncertainty estimation is critical in real-world decision-making applications, especially when distributional shift between the training and test data is prevalent. Many calibration methods have been proposed in the literature to improve the predictive uncertainty of DNNs, which are generally not well calibrated. However, none of them is specifically designed to work properly under domain shift. In this paper, we propose Unsupervised Temperature Scaling (UTS), a calibration method that is robust to domain shift. It exploits unlabeled test samples, instead of training samples, to adjust the uncertainty prediction of deep models towards the test distribution. UTS utilizes a novel loss function, weighted NLL, which allows unsupervised calibration. We evaluate UTS on a wide range of model-dataset pairs to show that calibration without labels is possible, and we demonstrate the robustness of UTS compared to other methods (e.g., TS, MC-dropout, SVI, ensembles) under shifted domains.
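For context, the sketch below illustrates standard supervised temperature scaling (TS), the post-processing mechanism that UTS builds on: a single scalar temperature T is fitted on held-out data and predictions are calibrated as softmax(z/T). This is a minimal illustrative sketch (PyTorch, with hypothetical function names and toy data), not the authors' implementation; UTS keeps the same single-parameter form but replaces the labeled validation set with unlabeled test samples via the weighted-NLL objective defined in the paper, which is not reproduced here.

```python
# Minimal sketch of supervised temperature scaling (the TS baseline), not the paper's code.
import torch

def fit_temperature(logits, labels, n_steps=300, lr=0.05):
    """Fit a scalar temperature T by minimizing NLL on labeled validation logits."""
    log_t = torch.zeros(1, requires_grad=True)        # optimize log T so that T stays positive
    optimizer = torch.optim.Adam([log_t], lr=lr)
    nll = torch.nn.CrossEntropyLoss()
    for _ in range(n_steps):
        optimizer.zero_grad()
        loss = nll(logits / log_t.exp(), labels)      # NLL of temperature-scaled logits
        loss.backward()
        optimizer.step()
    return log_t.exp().item()

if __name__ == "__main__":
    torch.manual_seed(0)
    base_logits = torch.randn(2048, 10)
    # Toy data: labels drawn from softmax(base_logits); scaling logits by 5 simulates overconfidence.
    labels = torch.distributions.Categorical(logits=base_logits).sample()
    overconfident_logits = 5.0 * base_logits
    T = fit_temperature(overconfident_logits, labels)  # should recover a temperature near 5
    print(f"fitted temperature: {T:.2f}")
    calibrated_probs = torch.softmax(overconfident_logits / T, dim=1)
```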

Related research

10/27/2018: A New Loss Function for Temperature Scaling to have Better Calibrated Deep Networks
However Deep neural networks recently have achieved impressive results f...

12/20/2020: Towards Trustworthy Predictions from Deep Neural Networks with Fast Adversarial Calibration
To facilitate a wide-spread acceptance of AI systems guiding decision ma...

06/29/2020: Unsupervised Calibration under Covariate Shift
A probabilistic model is said to be calibrated if its predicted probabil...

05/01/2019: Unsupervised Temperature Scaling: Post-Processing Unsupervised Calibration of Deep Models Decisions
Great performances of deep learning are undeniable, with impressive resu...

03/23/2023: Benchmarking the Reliability of Post-training Quantization: a Particular Focus on Worst-case Performance
Post-training quantization (PTQ) is a popular method for compressing dee...

06/09/2022: BSM loss: A superior way in modeling aleatory uncertainty of fine_grained classification
Artificial intelligence(AI)-assisted method had received much attention ...

10/09/2022: Test-time Recalibration of Conformal Predictors Under Distribution Shift Based on Unlabeled Examples
Modern image classifiers achieve high predictive accuracy, but the predi...
