
Improved Predictive Uncertainty using Corruption-based Calibration

06/07/2021
by   Tiago Salvador, et al.

We propose a simple post hoc calibration method to estimate the confidence/uncertainty that a model prediction is correct on data with covariate shift, as represented by the large-scale corrupted data benchmark [Ovadia et al., 2019]. We achieve this by synthesizing surrogate calibration sets, obtained by corrupting the original calibration set with varying intensities of a known corruption. Our method yields significant improvements on this benchmark across a wide range of covariate shifts.
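The abstract leaves the implementation details to the paper; below is a minimal sketch of the general idea, assuming temperature scaling as the base post hoc calibrator and additive Gaussian noise as the "known corruption". The function names, severity grid, and NLL-based temperature search are illustrative choices, not the authors' exact procedure.

```python
import numpy as np
import torch
import torch.nn.functional as F

def corrupt(x, severity):
    """Apply a known corruption at a given intensity.
    Additive Gaussian noise is an illustrative choice only."""
    return x + severity * torch.randn_like(x)

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 46)):
    """Pick the temperature that minimizes NLL on a calibration set."""
    best_t, best_nll = 1.0, float("inf")
    for t in grid:
        nll = F.cross_entropy(logits / t, labels).item()
        if nll < best_nll:
            best_t, best_nll = float(t), nll
    return best_t

def corruption_based_temperatures(model, x_cal, y_cal,
                                  severities=(0.0, 0.1, 0.2, 0.4, 0.8)):
    """Fit one temperature per surrogate (corrupted) calibration set."""
    model.eval()
    temps = {}
    with torch.no_grad():
        for s in severities:
            logits = model(corrupt(x_cal, s))
            temps[s] = fit_temperature(logits, y_cal)
    return temps
```

At test time one still needs a rule for choosing which surrogate temperature to apply to shifted inputs (for example, by matching a statistic of the incoming data to one of the corruption intensities); the abstract does not specify that step, so it is omitted from this sketch.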


Related research

12/20/2020 · Post-hoc Uncertainty Calibration for Domain Drift Scenarios
We address the problem of uncertainty calibration. While standard deep n...

10/28/2021 · Exploring Covariate and Concept Shift for Detection and Calibration of Out-of-Distribution Data
Moving beyond testing on in-distribution data works on Out-of-Distributi...

05/19/2022 · Calibration Matters: Tackling Maximization Bias in Large-scale Advertising Recommendation Systems
Calibration is defined as the ratio of the average predicted click rate ...

06/19/2020 · Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift
Covariate shift has been shown to sharply degrade both predictive accura...

05/28/2019 · Evaluating and Calibrating Uncertainty Prediction in Regression Tasks
Predicting not only the target but also an accurate measure of uncertain...

05/04/2021 · A Finer Calibration Analysis for Adversarial Robustness
We present a more general analysis of H-calibration for adversarially ro...

07/13/2022 · Estimating Classification Confidence Using Kernel Densities
This paper investigates the post-hoc calibration of confidence for "expl...