Overcoming model simplifications when quantifying predictive uncertainty

03/21/2017
by   George M. Mathews, et al.

It is generally accepted that all models are wrong -- the difficulty is determining which are useful. Here, a useful model is one that is capable of combining data and expert knowledge, through an inversion or calibration process, to adequately characterize the uncertainty in predictions of interest. This paper derives conditions that specify which simplified models are useful and how they should be calibrated. To start, the notion of an optimal simplification is defined. This relates the model simplifications to the nature of the data and predictions, and determines when a standard probabilistic calibration scheme is capable of accurately characterizing uncertainty. Two additional conditions are then defined for suboptimal models that determine when the simplifications can be safely ignored. The first allows a suboptimally simplified model to be used in a way that replicates the performance of an optimal model. This is achieved through the judicious selection of a prior term for the calibration process that explicitly accounts for the nature of the data, predictions and modelling simplifications. The second considers the dependency structure between the predictions and the available data to gain insight into when the simplifications can be overcome by using the right calibration data. The derived conditions are then related to the commonly used calibration schemes based on Tikhonov and subspace regularization. To allow concrete insights to be obtained, the analysis is performed under a linear expansion of the model equations, with the predictive uncertainty characterized via second-order moments only.
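The setting the abstract describes -- a linearized model, Gaussian (second-order) uncertainty, and Tikhonov regularization acting as a prior -- can be illustrated with a minimal sketch. All names and values below are hypothetical: `G` is a linearized forward operator mapping parameters to data, `y` is a linear prediction functional, and the Tikhonov weight `alpha` plays the role of a Gaussian prior precision.

```python
import numpy as np

# Hypothetical linear-Gaussian calibration: data d = G @ m + noise,
# prediction of interest p = y @ m. Tikhonov regularization with weight
# alpha is equivalent to the Gaussian prior m ~ N(0, (1/alpha) I).
rng = np.random.default_rng(0)

n_params, n_data = 5, 3
G = rng.standard_normal((n_data, n_params))  # linearized forward operator
y = rng.standard_normal(n_params)            # prediction functional
sigma2 = 0.1                                 # data-noise variance
alpha = 1.0                                  # Tikhonov weight (prior precision)

# Posterior precision and covariance of the parameters:
#   precision = G^T G / sigma2 + alpha * I
precision = G.T @ G / sigma2 + alpha * np.eye(n_params)
cov_post = np.linalg.inv(precision)

d_obs = rng.standard_normal(n_data)          # synthetic observed data
m_post = cov_post @ (G.T @ d_obs / sigma2)   # posterior mean of parameters

# Second-order characterization of the predictive uncertainty:
p_mean = y @ m_post                          # predictive mean
p_var = y @ cov_post @ y                     # predictive variance
```

In this simplified picture, the choice of prior (here, the identity scaled by `alpha`) directly controls how much of the parameter space the data are allowed to constrain, which is the lever the paper analyses when asking whether a suboptimally simplified model can still yield well-characterized predictive uncertainty.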


