Uncertainty Estimates of Predictions via a General Bias-Variance Decomposition

10/21/2022
by   Sebastian Gruber, et al.

Reliably estimating the uncertainty of a prediction throughout the model life cycle is crucial in many safety-critical applications. The most common way to measure this uncertainty is via the predicted confidence. While this tends to work well for in-domain samples, these estimates are unreliable under domain drift. Alternatively, a bias-variance decomposition allows us to measure the predictive uncertainty directly across the entire input space. However, such a decomposition for proper scores does not exist in the current literature, and for exponential families it is convoluted. In this work, we introduce a general bias-variance decomposition for proper scores and reformulate the exponential-family case, giving rise to the Bregman Information as the variance term in both cases. This allows us to prove that the Bregman Information for classification measures the uncertainty in the logit space. We showcase the practical relevance of this decomposition on two downstream tasks. First, we show how to construct confidence intervals for predictions at the instance level based on the Bregman Information. Second, we demonstrate how different approximations of the instance-level Bregman Information allow reliable out-of-distribution detection for all degrees of domain drift.
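
For intuition, the Bregman Information of a random prediction Z under a convex generator phi is the Jensen gap BI(Z) = E[phi(Z)] - phi(E[Z]); for phi(z) = z^2 it reduces to the ordinary variance. The sketch below is an illustration under assumptions, not the authors' reference implementation: it approximates an instance-level Bregman Information from an ensemble of classifiers in the log-loss setting, where a natural generator in logit space is the log-sum-exp function. The function name, the use of an ensemble as the source of randomness, and the log-sum-exp choice are assumptions made here for illustration.

    # Illustrative sketch: instance-level Bregman Information from ensemble logits,
    # assuming the log-loss case with log-sum-exp as the convex generator in logit space.
    # Bregman Information as a Jensen gap: BI(Z) = E[phi(Z)] - phi(E[Z]) >= 0.
    import numpy as np
    from scipy.special import logsumexp

    def bregman_information_logits(ensemble_logits):
        """ensemble_logits: array of shape (n_members, n_classes) for a single input."""
        mean_of_lse = logsumexp(ensemble_logits, axis=1).mean()  # E[phi(Z)]
        lse_of_mean = logsumexp(ensemble_logits.mean(axis=0))    # phi(E[Z])
        return mean_of_lse - lse_of_mean                         # nonnegative by Jensen's inequality

    # Usage example: 5 ensemble members, 3 classes; larger values indicate
    # higher predictive uncertainty for this input.
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(5, 3))
    print(bregman_information_logits(logits))

A larger Jensen gap means the ensemble members disagree more in logit space, which is the sense in which this quantity can serve as an uncertainty or out-of-distribution score.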


