How to Evaluate Uncertainty Estimates in Machine Learning for Regression?

06/07/2021
by Laurens Sluijterman, et al.

As neural networks become more popular, the need for accompanying uncertainty estimates increases. The current testing methodology focuses on how well the predictive uncertainty estimates explain the differences between predictions and observations in a previously unseen test set. Intuitively, this is a logical approach, and the current setup of benchmark data sets also allows easy comparison between methods. We demonstrate, however, through both theoretical arguments and simulations, that this way of evaluating the quality of uncertainty estimates has serious flaws. Firstly, it cannot disentangle aleatoric from epistemic uncertainty. Secondly, the current methodology considers the uncertainty averaged over all test samples, implicitly averaging out overconfident and underconfident predictions: when checking whether the correct fraction of test points falls inside the prediction intervals, a good score on average gives no guarantee that the intervals are sensible for individual points. We demonstrate through practical examples that these effects can result in favoring a method, on the basis of its predictive uncertainty, whose confidence intervals behave undesirably. Finally, we propose a simulation-based testing approach that addresses these problems while still allowing easy comparison between different methods.
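The averaging flaw described above can be illustrated with a small toy simulation (this is an illustrative sketch, not the paper's own experiment; the heteroscedastic data-generating process and both interval methods are assumptions chosen for the example). Two interval constructions achieve the same average coverage, yet one of them systematically over-covers in the low-noise region and under-covers in the high-noise region:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy heteroscedastic regression: noise std grows with x (assumed setup).
n = 100_000
x = rng.uniform(0.0, 1.0, n)
sigma = 0.1 + 0.9 * x                      # true aleatoric std at each x
mean = np.sin(2 * np.pi * x)               # assume the mean is known exactly
y = mean + rng.normal(0.0, sigma)

z = 1.96  # two-sided 95% normal quantile

# Method A: pointwise-correct intervals built from the true sigma(x).
cov_A = np.abs(y - mean) <= z * sigma

# Method B: a single constant half-width tuned so the *average*
# coverage on this test set is exactly 95%.
const = np.quantile(np.abs(y - mean), 0.95)
cov_B = np.abs(y - mean) <= const

print(f"average coverage A: {cov_A.mean():.3f}")
print(f"average coverage B: {cov_B.mean():.3f}")

# Conditional coverage reveals the difference the average hides:
low, high = x < 0.5, x >= 0.5
print(f"B coverage, low-noise half:  {cov_B[low].mean():.3f}")   # over-covers
print(f"B coverage, high-noise half: {cov_B[high].mean():.3f}")  # under-covers
```

Both methods report roughly 0.95 average coverage, so an averaged test-set metric cannot distinguish them, even though method B's intervals are far too wide for small `x` and too narrow for large `x`. This is exactly the situation in which a simulation-based evaluation, where the true noise level is known, can separate the two.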


Related research:

- 04/27/2021: Exploring Uncertainty in Deep Learning for Construction of Prediction Intervals. "Deep learning has achieved impressive performance on many tasks in recen..."
- 07/19/2020: Prediction Intervals: Split Normal Mixture from Quality-Driven Deep Ensembles. "Prediction intervals are a machine- and human-interpretable way to repre..."
- 11/02/2018: Frequentist uncertainty estimates for deep learning. "We provide frequentist estimates of aleatoric and epistemic uncertainty ..."
- 10/21/2022: Uncertainty Estimates of Predictions via a General Bias-Variance Decomposition. "Reliably estimating the uncertainty of a prediction throughout the model..."
- 05/12/2021: Learning Uncertainty with Artificial Neural Networks for Improved Remaining Time Prediction of Business Processes. "Artificial neural networks will always make a prediction, even when comp..."
- 04/25/2021: Model-based metrics: Sample-efficient estimates of predictive model subpopulation performance. "Machine learning models - now commonly developed to screen, diagnose, or..."
- 04/25/2014: Quantifying Uncertainty in Random Forests via Confidence Intervals and Hypothesis Tests. "This work develops formal statistical inference procedures for machine l..."
