Test-time Recalibration of Conformal Predictors Under Distribution Shift Based on Unlabeled Examples

10/09/2022
by Fatih Furkan Yilmaz et al.

Modern image classifiers achieve high predictive accuracy, but the predictions typically come without reliable uncertainty estimates. Conformal prediction algorithms provide uncertainty estimates by predicting a set of classes based on the probability estimates of the classifier (for example, the softmax scores). To provide such sets, conformal prediction algorithms often rely on estimating a cutoff threshold for the probability estimates, and this threshold is chosen based on a calibration set. Conformal prediction methods guarantee reliability only when the calibration set is from the same distribution as the test set. Therefore, the methods need to be recalibrated for new distributions. However, in practice, labeled data from new distributions is rarely available, making calibration infeasible. In this work, we consider the problem of predicting the cutoff threshold for a new distribution based on unlabeled examples only. While it is impossible in general to guarantee reliability when calibrating based on unlabeled examples, we show that our method provides excellent uncertainty estimates under natural distribution shifts, and provably works for a specific model of distribution shift.
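
The abstract describes the standard split-conformal recipe that the paper starts from: choose a cutoff for the classifier's softmax scores using a labeled calibration set, then predict every class whose score clears the cutoff. The sketch below illustrates only that labeled-calibration baseline, not the paper's unlabeled recalibration method; the function names, the choice of one-minus-softmax as the nonconformity score, and the use of NumPy's `method=` quantile keyword (NumPy >= 1.22) are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np


def calibrate_threshold(cal_softmax, cal_labels, alpha=0.1):
    """Split-conformal calibration: estimate the score cutoff from labeled data.

    cal_softmax: (n, num_classes) softmax scores; cal_labels: (n,) true labels.
    """
    n = len(cal_labels)
    # Nonconformity score: one minus the softmax probability of the true class.
    scores = 1.0 - cal_softmax[np.arange(n), cal_labels]
    # Finite-sample-corrected (1 - alpha) empirical quantile of calibration scores.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, q_level, method="higher")


def predict_sets(test_softmax, qhat):
    """Prediction set: all classes whose nonconformity score falls below the cutoff."""
    return [np.where(1.0 - p <= qhat)[0] for p in test_softmax]


# Illustrative usage with random stand-in softmax scores.
rng = np.random.default_rng(0)
cal_softmax = rng.dirichlet(np.ones(10), size=500)
cal_labels = rng.integers(0, 10, size=500)
qhat = calibrate_threshold(cal_softmax, cal_labels, alpha=0.1)
sets = predict_sets(rng.dirichlet(np.ones(10), size=5), qhat)
```

The paper's setting is the case where this recipe breaks down: the test distribution has shifted and no labeled calibration data from it is available, so an analogous cutoff must be predicted from unlabeled test examples alone.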
