
Population Predictive Checks
Bayesian modeling has become a staple for researchers analyzing data. Thanks to recent developments in approximate posterior inference, modern researchers can easily build, use, and revise complicated Bayesian models for large and rich data. These new abilities, however, bring into focus the problem of model assessment. Researchers need tools to diagnose the fitness of their models, to understand where a model falls short, and to guide its revision. In this paper we develop a new method for Bayesian model checking, the population predictive check (PopPC). PopPCs are built on posterior predictive checks (PPC), a seminal method that checks a model by assessing the posterior predictive distribution on the observed data. Though powerful, PPCs use the data twice, both to calculate the posterior predictive and to evaluate it, which can lead to overconfident assessments. PopPCs, in contrast, compare the posterior predictive distribution to the population distribution of the data. This strategy blends Bayesian modeling with frequentist assessment, leading to a robust check that validates the model on its generalization. Of course the population distribution is not usually available; thus we use tools like the bootstrap and cross-validation to estimate the PopPC. Further, we extend PopPCs to hierarchical models. We study PopPCs on classical regression and a hierarchical model of text. We show that PopPCs are robust to overfitting and can be easily deployed on a broad family of models.
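The contrast the abstract draws can be sketched in code. Below is a minimal, self-contained illustration (not the paper's actual estimator) of a predictive check on a conjugate normal model: the same p-value computation is run once in PPC style, evaluating the posterior predictive on the data used for fitting, and once in PopPC style, where bootstrap resamples stand in for the unavailable population distribution. All function names and the choice of test statistic are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 draws from a normal model with unknown mean and known
# variance 1. Conjugate prior on the mean: Normal(0, 10^2).
y = rng.normal(loc=1.0, scale=1.0, size=200)

def posterior_params(y_fit, prior_mu=0.0, prior_sd=10.0, sigma=1.0):
    """Conjugate normal-normal posterior for the mean."""
    n = len(y_fit)
    prec = 1.0 / prior_sd**2 + n / sigma**2
    mu = (prior_mu / prior_sd**2 + y_fit.sum() / sigma**2) / prec
    return mu, np.sqrt(1.0 / prec)

def check_pvalue(y_fit, y_eval, stat=np.mean, draws=2000, sigma=1.0):
    """Estimate P(T(y_rep) >= T(y_eval)) under the posterior predictive.

    PPC:   y_eval is the same data used for fitting (double use of data).
    PopPC: y_eval approximates the population, e.g. a bootstrap resample
           or held-out data, so the check targets generalization.
    """
    mu, sd = posterior_params(y_fit, sigma=sigma)
    t_rep = np.empty(draws)
    for s in range(draws):
        theta = rng.normal(mu, sd)                          # posterior draw
        y_rep = rng.normal(theta, sigma, size=len(y_eval))  # replicated data
        t_rep[s] = stat(y_rep)
    return float(np.mean(t_rep >= stat(y_eval)))

# Classic PPC: fit and evaluate on the same data.
ppc = check_pvalue(y, y)

# Bootstrap-estimated check in the PopPC spirit: evaluate against
# resamples of the data as a stand-in for the population distribution.
pop = float(np.mean([
    check_pvalue(y, rng.choice(y, size=len(y), replace=True))
    for _ in range(50)
]))

print(f"PPC p-value:   {ppc:.2f}")
print(f"PopPC p-value: {pop:.2f}")
```

For this well-specified model both p-values land near 0.5; the two strategies diverge when the model overfits, which is the failure mode the paper's checks are designed to expose.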