Diagnosing model misspecification and performing generalized Bayes' updates via probabilistic classifiers

12/12/2019
by Owen Thomas, et al.

Model misspecification is a long-standing problem for the Bayesian inference framework, as posteriors tend to become overly concentrated on ill-informed parameter values in the large-sample limit. Tempering the likelihood has been established as a safer way to update from prior to posterior in the presence of model misspecification: at one extreme, tempering ignores the data altogether; at the other, it recovers the standard Bayes update when no misspecification is assumed to be present. However, it remains an open issue how best to recognise misspecification and choose a suitable level of tempering without access to the true generating model. Here we show how probabilistic classifiers can be employed to resolve this issue. Training a probabilistic classifier to discriminate between simulated and observed data provides an estimate of the ratio between the model likelihood and the likelihood of the data under the unobserved true generative process, within the discriminatory abilities of the classifier. The expectation of the logarithm of this ratio with respect to the data-generating process gives an estimate of the negative Kullback-Leibler divergence between the statistical generative model and the true generative distribution. Using a set of canonical examples, we show that this divergence provides a useful misspecification diagnostic, a model comparison tool, and a method to inform a generalised Bayesian update in the presence of misspecification for likelihood-based models.
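The classifier-based ratio estimate described above is an instance of the density-ratio trick: with balanced classes, the logit of a well-calibrated discriminator trained on model simulations (label 1) versus observed data (label 0) approximates the log-ratio of the two densities, so its average over the observed data estimates the negative KL divergence. A minimal sketch, using a hypothetical 1-D example (true process N(0, 1), misspecified model N(0.5, 1), for which KL = 0.125 exactly) and a plain logistic-regression classifier rather than the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: the true generative process is N(0, 1);
# the misspecified model is N(0.5, 1).  Exact KL(true || model) = 0.125.
n = 20000
x_obs = rng.normal(0.0, 1.0, n)  # observed data (label 0)
x_sim = rng.normal(0.5, 1.0, n)  # model simulations (label 1)

X = np.concatenate([x_obs, x_sim])
y = np.concatenate([np.zeros(n), np.ones(n)])
feats = np.column_stack([np.ones_like(X), X])

# Logistic regression on features [1, x] by gradient ascent.  For two
# equal-variance Gaussians the exact log-density-ratio is linear in x,
# so this classifier family contains the Bayes-optimal discriminator.
w = np.zeros(2)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-feats @ w))
    w += 0.5 * feats.T @ (y - p) / len(y)

# With balanced classes, logit(d(x)) estimates
# log q_model(x) - log p_true(x); averaging it over the observed data
# estimates -KL(p_true || q_model), i.e. roughly -0.125 here.
neg_kl_hat = np.mean(np.column_stack([np.ones_like(x_obs), x_obs]) @ w)
print(neg_kl_hat)
```

The same estimate can then drive the tempering choice: a divergence estimate near zero supports the standard Bayes update, while a large estimated KL argues for down-weighting the likelihood.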


Related research

- 07/08/2020: Generalised Bayes Updates with f-divergences through Probabilistic Classifiers. "A stream of algorithmic advances has steadily increased the popularity o..."
- 10/21/2019: Safe-Bayesian Generalized Linear Regression. "We study generalized Bayesian inference under misspecification, i.e. whe..."
- 02/27/2022: Towards Unifying Logical Entailment and Statistical Estimation. "This paper gives a generative model of the interpretation of formal logi..."
- 04/08/2021: Synthetic Likelihood in Misspecified Models: Consequences and Corrections. "We analyse the behaviour of the synthetic likelihood (SL) method when th..."
- 01/31/2023: On the Stability of General Bayesian Inference. "We study the stability of posterior predictive inferences to the specifi..."
- 08/13/2020: A statistical theory of cold posteriors in deep neural networks. "To get Bayesian neural networks to perform comparably to standard neural..."
- 02/20/2018: Actively Avoiding Nonsense in Generative Models. "A generative model may generate utter nonsense when it is fit to maximiz..."
