Population Predictive Checks
Bayesian modeling has become a staple for researchers analyzing data. Thanks to recent developments in approximate posterior inference, modern researchers can easily build, use, and revise complicated Bayesian models for large and rich data. These new abilities, however, bring into focus the problem of model assessment. Researchers need tools to diagnose the fit of their models, to understand where a model falls short, and to guide its revision. In this paper we develop a new method for Bayesian model checking, the population predictive check (Pop-PC). Pop-PCs build on posterior predictive checks (PPCs), a seminal method that checks a model by assessing the posterior predictive distribution on the observed data. Though powerful, PPCs use the data twice, both to calculate the posterior predictive distribution and to evaluate it, which can lead to overconfident assessments. Pop-PCs, in contrast, compare the posterior predictive distribution to the population distribution of the data. This strategy blends Bayesian modeling with frequentist assessment, yielding a robust check that validates how well the model generalizes. Of course, the population distribution is usually unavailable, so we use tools such as the bootstrap and cross-validation to estimate the Pop-PC. Further, we extend Pop-PCs to hierarchical models. We study Pop-PCs on classical regression and a hierarchical model of text, and show that Pop-PCs are robust to overfitting and can be easily deployed on a broad family of models.
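To make the contrast concrete, below is a minimal sketch (not the authors' implementation) of a posterior predictive check versus a held-out estimate of a population predictive check. The conjugate normal-mean model, the sample-mean discrepancy, and the single 50/50 data split are illustrative assumptions standing in for the paper's models and its bootstrap or cross-validation estimators.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=1.0, scale=2.0, size=200)  # illustrative observed dataset

def posterior_predictive_draws(train, n_rep, n_draws=1000, sigma=2.0, tau=10.0):
    """Draw replicated datasets from the posterior predictive of a normal-mean
    model with known noise scale sigma and a N(0, tau^2) prior on the mean."""
    n = len(train)
    post_var = 1.0 / (n / sigma**2 + 1.0 / tau**2)
    post_mean = post_var * train.sum() / sigma**2
    mu = rng.normal(post_mean, np.sqrt(post_var), size=n_draws)
    return rng.normal(mu[:, None], sigma, size=(n_draws, n_rep))

def predictive_check(train, heldout, discrepancy=np.mean):
    """Tail probability that replicated discrepancies exceed the observed one."""
    reps = posterior_predictive_draws(train, n_rep=len(heldout))
    return np.mean(discrepancy(reps, axis=1) >= discrepancy(heldout))

# PPC: condition on all the data, then evaluate on that same data (double use).
ppc_pvalue = predictive_check(y, y)

# Pop-PC estimate: condition on one split and evaluate on the held-out split,
# approximating the comparison against the population distribution.
half = len(y) // 2
pop_pc_pvalue = predictive_check(y[:half], y[half:])

print(f"PPC p-value:    {ppc_pvalue:.3f}")
print(f"Pop-PC p-value: {pop_pc_pvalue:.3f}")
```

In this sketch the only difference between the two checks is which data the posterior predictive is evaluated on; averaging the held-out version over bootstrap resamples or cross-validation folds, as the abstract suggests, would give a less variable estimate than a single split.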