
Unsupervised Calibration under Covariate Shift

06/29/2020
by Anusri Pampari, et al.

A probabilistic model is said to be calibrated if its predicted probabilities match the corresponding empirical frequencies. Calibration is important for uncertainty quantification and decision making in safety-critical applications. While calibration of classifiers has been widely studied, we find that calibration is brittle and can easily be lost under even minimal covariate shift. Existing techniques, including domain adaptation methods, focus primarily on prediction accuracy and guarantee calibration neither in theory nor in practice. In this work, we formally introduce the problem of calibration under domain shift and propose an importance-sampling-based approach to address it. We evaluate and discuss the efficacy of our method on both real-world and synthetic datasets.
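The abstract only sketches the approach at a high level. As an illustration of the general idea (not the paper's exact method), the sketch below estimates an importance-weighted expected calibration error on labeled source data, assuming density-ratio weights w(x) = p_target(x) / p_source(x) have already been estimated by some separate procedure (for example, a domain classifier). The function name and all arguments are hypothetical.

```python
import numpy as np

def importance_weighted_ece(confidences, correct, weights, n_bins=10):
    """Sketch: expected calibration error under covariate shift,
    estimated from labeled source-domain predictions reweighted by
    importance weights (assumed given, e.g. from a domain classifier).

    confidences : predicted probability of the predicted class (source data)
    correct     : 1.0 if the prediction was correct, else 0.0 (source data)
    weights     : estimated density ratios p_target(x) / p_source(x)
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    weights = np.asarray(weights, dtype=float)

    bins = np.linspace(0.0, 1.0, n_bins + 1)
    total_weight = weights.sum()
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if lo == 0.0:
            mask |= confidences == 0.0  # include exact zeros in the first bin
        w = weights[mask]
        if w.sum() == 0:
            continue  # empty bin under the reweighted distribution
        avg_conf = np.average(confidences[mask], weights=w)
        avg_acc = np.average(correct[mask], weights=w)
        # weighted gap between confidence and accuracy in this bin
        ece += (w.sum() / total_weight) * abs(avg_conf - avg_acc)
    return ece
```

In a pipeline of this kind, the weights would typically be obtained from a probabilistic classifier trained to distinguish source from target inputs, and a recalibration map (for instance, temperature scaling) could then be fit by minimizing a similarly weighted calibration objective; the specifics here are assumptions for illustration rather than the authors' procedure.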

Related research

09/15/2022
Towards Improving Calibration in Object Detection Under Domain Shift
The increasing use of deep neural networks in safety-critical applicatio...

02/19/2019
Evaluating model calibration in classification
Probabilistic classifiers output a probability distribution on target cl...

03/08/2023
HappyMap: A Generalized Multi-calibration Method
Multi-calibration is a powerful and evolving concept originating in the ...

06/23/2020
Calibration of Neural Networks using Splines
Calibrating neural networks is of utmost importance when employing them ...

05/28/2019
Evaluating and Calibrating Uncertainty Prediction in Regression Tasks
Predicting not only the target but also an accurate measure of uncertain...

05/19/2022
Calibration Matters: Tackling Maximization Bias in Large-scale Advertising Recommendation Systems
Calibration is defined as the ratio of the average predicted click rate ...

03/04/2021
Distribution-free uncertainty quantification for classification under label shift
Trustworthy deployment of ML models requires a proper measure of uncerta...