A view on model misspecification in uncertainty quantification

10/30/2022
by Yuko Kato, et al.

Estimating the uncertainty of machine learning models is essential for assessing the quality of the predictions these models provide. Several factors influence the quality of uncertainty estimates, one of which is the degree of model misspecification. Some model misspecification is always present, since models are mere simplifications or approximations of reality. This raises the question of whether uncertainty estimates remain reliable under model misspecification. In this paper, we argue that model misspecification should receive more attention, supporting this argument with thought experiments and contextualizing them in the relevant literature.
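
The reliability question is easy to make concrete in a toy regression setting. The sketch below (illustrative only, not from the paper; the quadratic data-generating process, the linear model, and all variable names are assumptions) fits a deliberately misspecified linear model and checks whether its nominal 95% prediction intervals actually cover new observations:

import numpy as np

# Illustrative setup: the true process is quadratic with unit Gaussian
# noise, while the fitted model ignores the curvature, so it is
# misspecified by construction.
rng = np.random.default_rng(0)
n = 500
x = rng.uniform(-3, 3, n)
y = 1.0 + 0.5 * x + 0.8 * x**2 + rng.normal(0.0, 1.0, n)

# Ordinary least squares for the misspecified model y ~ a + b*x.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
sigma_hat = (y - X @ beta).std(ddof=2)  # noise estimate absorbs the model bias

# Nominal 95% prediction intervals under the (wrong) linear model,
# evaluated on fresh draws from the true process.
x_new = rng.uniform(-3, 3, 10_000)
y_new = 1.0 + 0.5 * x_new + 0.8 * x_new**2 + rng.normal(0.0, 1.0, x_new.size)
pred = beta[0] + beta[1] * x_new
lo, hi = pred - 1.96 * sigma_hat, pred + 1.96 * sigma_hat

print(f"marginal coverage (nominal 0.95): {np.mean((y_new >= lo) & (y_new <= hi)):.3f}")
for a, b in [(-3.0, -2.5), (-1.0, 1.0), (2.5, 3.0)]:
    m = (x_new >= a) & (x_new < b)
    cov = np.mean((y_new[m] >= lo[m]) & (y_new[m] <= hi[m]))
    print(f"coverage for x in [{a}, {b}): {cov:.3f}")

Because the residual spread absorbs the model bias, marginal coverage can stay close to the nominal level while conditional coverage is badly wrong: too low at the edges of the input range and too high near the center. That mismatch is exactly the kind of unreliability under misspecification that the abstract points to.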


Related research:

05/24/2023
Timeseries-aware Uncertainty Wrappers for Uncertainty Quantification of Information-Fusion-Enhanced AI Models based on Machine Learning
As the use of Artificial Intelligence (AI) components in cyber-physical ...

10/18/2022
Uncertainty in Extreme Multi-label Classification
Uncertainty quantification is one of the most crucial tasks to obtain tr...

05/14/2020
Estimating predictive uncertainty for rumour verification models
The inability to correctly resolve rumours circulating online can have h...

07/07/2023
URL: A Representation Learning Benchmark for Transferable Uncertainty Estimates
Representation learning has significantly driven the field to develop pr...

11/17/2021
Uncertainty Quantification of Surrogate Explanations: an Ordinal Consensus Approach
Explainability of black-box machine learning models is crucial, in parti...

07/11/2022
DAUX: a Density-based Approach for Uncertainty eXplanations
Uncertainty quantification (UQ) is essential for creating trustworthy ma...
