
Data Consistency Approach to Model Validation

08/17/2018
by   Andreas Svensson, et al.

In scientific inference problems, the underlying statistical modeling assumptions have a crucial impact on the end results. There exist, however, only a few automatic means for validating these fundamental modeling assumptions. The contribution of this paper is a general criterion for evaluating the consistency of a set of statistical models with respect to observed data. This is achieved by automatically gauging the models' ability to generate data similar to the observed data. Importantly, the criterion follows from the model class itself and is therefore directly applicable to a broad range of inference problems with varying data types. The proposed data consistency criterion is illustrated and evaluated on three synthetic and two real data sets.
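The abstract's core idea, checking whether a model can generate data that resembles the observations, is closely related to predictive model checking. A minimal sketch of that general recipe is shown below: fit the model, simulate replicated data sets from the fitted model, and compare a summary statistic of the observed data against the simulated distribution of that statistic. This is an illustrative predictive check under assumed helpers (`fit`, `simulate`, `statistic` are generic placeholders), not the specific criterion derived in the paper.

```python
import numpy as np

def consistency_check(observed, fit, simulate, statistic, n_rep=1000, seed=0):
    """Illustrative predictive data-consistency check.

    Fits a model to the observed data, simulates replicated data sets
    from the fitted model, and measures how extreme the observed
    statistic is relative to the simulated ones.
    """
    rng = np.random.default_rng(seed)
    params = fit(observed)
    t_obs = statistic(observed)
    t_rep = np.array([statistic(simulate(params, len(observed), rng))
                      for _ in range(n_rep)])
    # Two-sided tail probability: small values indicate the fitted model
    # rarely generates data as extreme as what was actually observed.
    p = np.mean(t_rep >= t_obs)
    return min(p, 1.0 - p) * 2.0

# Example (hypothetical): is a Gaussian model consistent with
# heavy-tailed data? The helpers below are made up for illustration.
def fit_gauss(x):
    return x.mean(), x.std(ddof=1)

def sim_gauss(params, n, rng):
    mu, sigma = params
    return rng.normal(mu, sigma, size=n)

def max_abs(x):
    # A statistic sensitive to outliers / heavy tails.
    return np.abs(x - x.mean()).max()

rng = np.random.default_rng(1)
data = rng.standard_t(df=2, size=200)  # heavy-tailed data
score = consistency_check(data, fit_gauss, sim_gauss, max_abs)
```

A low `score` suggests the Gaussian model class struggles to reproduce the tails of the observed data; the paper's contribution is a criterion of this flavor that follows automatically from the model class itself.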


Related Research

- Multi-model mimicry for model selection according to generalised goodness-of-fit criteria (11/21/2019): Selecting between candidate models is at the core of statistical practic...
- A Novel Bayesian Cluster Enumeration Criterion for Unsupervised Learning (10/22/2017): The Bayesian Information Criterion (BIC) has been widely used for estima...
- How consistent is my model with the data? Information-Theoretic Model Check (12/07/2017): The choice of model class is fundamental in statistical learning and sys...
- Is My Model Flexible Enough? Information-Theoretic Model Check (12/07/2017): The choice of model class is fundamental in statistical learning and sys...
- Variational Gibbs inference for statistical model estimation from incomplete data (11/25/2021): Statistical models are central to machine learning with broad applicabil...
- What is really needed to justify ignoring the response mechanism for modelling purposes? (11/13/2018): With incomplete data, the standard argument for when the response mechan...