
Test-time Recalibration of Conformal Predictors Under Distribution Shift Based on Unlabeled Examples

10/09/2022
by Fatih Furkan Yilmaz, et al.

Modern image classifiers achieve high predictive accuracy, but their predictions typically come without reliable uncertainty estimates. Conformal prediction algorithms provide such estimates by predicting a set of classes based on the probability estimates of the classifier (for example, the softmax scores). To construct these sets, conformal prediction algorithms typically estimate a cutoff threshold for the probability estimates, chosen based on a held-out calibration set. Conformal prediction guarantees reliability only when the calibration set is drawn from the same distribution as the test set, so the threshold must be recalibrated for each new distribution. In practice, however, labeled data from new distributions is rarely available, making recalibration infeasible. In this work, we consider the problem of predicting the cutoff threshold for a new distribution based on unlabeled examples only. While it is impossible in general to guarantee reliability when calibrating on unlabeled examples, we show that our method provides excellent uncertainty estimates under natural distribution shifts, and provably works for a specific model of distribution shift.
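The calibration procedure the abstract describes (standard split conformal prediction, not the paper's recalibration method) can be sketched in a few lines. Everything below is an illustrative assumption: the data is synthetic Dirichlet "softmax" output, and the function names are made up for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a classifier: Dirichlet-distributed "softmax"
# rows, with labels drawn from those same probabilities so that the
# calibration and test examples are exchangeable.
n_classes, n_cal, n_test = 10, 1000, 2000
cal_probs = rng.dirichlet(0.5 * np.ones(n_classes), size=n_cal)
test_probs = rng.dirichlet(0.5 * np.ones(n_classes), size=n_test)
cal_labels = np.array([rng.choice(n_classes, p=p) for p in cal_probs])
test_labels = np.array([rng.choice(n_classes, p=p) for p in test_probs])

def calibrate_threshold(softmax, labels, alpha=0.1):
    """Split conformal calibration: score each example by 1 minus the
    softmax probability of its true class, then take the
    ceil((n+1)(1-alpha))/n empirical quantile of the scores as cutoff."""
    n = len(labels)
    scores = 1.0 - softmax[np.arange(n), labels]
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level, method="higher")

def prediction_set(softmax_row, qhat):
    # Keep every class whose conformity score clears the threshold.
    return np.flatnonzero(1.0 - softmax_row <= qhat)

qhat = calibrate_threshold(cal_probs, cal_labels, alpha=0.1)
covered = np.mean([test_labels[i] in prediction_set(test_probs[i], qhat)
                   for i in range(n_test)])
print(f"threshold = {qhat:.3f}, empirical coverage = {covered:.3f}")
```

Because calibration and test data here come from the same distribution, the empirical coverage lands near the nominal 90%; the paper's setting is exactly the case where this breaks, namely when the test distribution shifts and no labeled data is available to recompute the quantile.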


Related research

Leveraging Unlabeled Data to Predict Out-of-Distribution Performance (01/11/2022)
Real-world machine learning deployments are characterized by mismatches ...

Predicting with Confidence on Unseen Distributions (07/07/2021)
Recent work has shown that the performance of machine learning models ca...

Locally Optimized Random Forests (08/27/2019)
Standard supervised learning procedures are validated against a test set...

Improving Regression Uncertainty Estimates with an Empirical Prior (05/26/2020)
While machine learning models capable of producing uncertainty estimates...

Diagnostic Uncertainty Calibration: Towards Reliable Machine Predictions in Medical Domain (07/03/2020)
Label disagreement between human experts is a common issue in the medica...

Automatic Open-World Reliability Assessment (11/11/2020)
Image classification in the open-world must handle out-of-distribution (...

The Hitchhiker's Guide to Prior-Shift Adaptation (06/22/2021)
In many computer vision classification tasks, class priors at test time ...