Using bagged posteriors for robust inference and model criticism

12/15/2019
by Jonathan H. Huggins, et al.

Standard Bayesian inference is known to be sensitive to model misspecification, leading to unreliable uncertainty quantification and poor predictive performance. However, finding generally applicable and computationally feasible methods for robust Bayesian inference under misspecification has proven to be a difficult challenge. An intriguing, easy-to-use, and widely applicable approach is to use bagging on the Bayesian posterior ("BayesBag"); that is, to use the average of posterior distributions conditioned on bootstrapped datasets. In this paper, we comprehensively develop the asymptotic theory of BayesBag, propose a model–data mismatch index for model criticism using BayesBag, and empirically validate our theory and methodology on synthetic and real-world data in linear regression (both feature selection and parameter inference), sparse logistic regression, insurance loss prediction, and phylogenetic tree reconstruction. We find that in the presence of significant misspecification, BayesBag yields more reproducible inferences, has better predictive accuracy, and selects correct models more often than the standard Bayesian posterior; meanwhile, when the model is correctly specified, BayesBag produces superior or equally good results for parameter inference and prediction, while being slightly more conservative for model selection. Overall, our results demonstrate that BayesBag combines the attractive modeling features of standard Bayesian inference with the distributional robustness properties of frequentist methods, providing benefits over both Bayes alone and the bootstrap alone.
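To make the BayesBag construction described above concrete, the following sketch pools posterior samples obtained from bootstrapped copies of the data, which approximates the average of the bootstrap-conditioned posteriors. This is only an illustration, not the authors' implementation: the `posterior_sampler` callable, the parameter names, and the choice of bootstrap resample size are all hypothetical placeholders.

```python
import numpy as np

def bayesbag_posterior(data, posterior_sampler, num_bootstrap=50,
                       bootstrap_size=None, seed=0):
    """Approximate the bagged ("BayesBag") posterior.

    `posterior_sampler(dataset)` is assumed to return an array of draws
    from the standard Bayesian posterior conditioned on `dataset`
    (e.g., produced by MCMC); it is a placeholder for whatever inference
    engine is actually used.
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    m = bootstrap_size if bootstrap_size is not None else n  # resample size
    pooled = []
    for _ in range(num_bootstrap):
        idx = rng.integers(0, n, size=m)        # bootstrap: sample with replacement
        pooled.append(posterior_sampler(data[idx]))
    # Pooling an equal number of draws from each bootstrap posterior
    # approximates the average (mixture) of those posterior distributions.
    return np.concatenate(pooled, axis=0)
```

Under these assumptions, downstream summaries (posterior means, credible intervals, predictive checks) can be computed from the pooled draws exactly as they would be from draws of a single standard posterior.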
