Robust and Reproducible Model Selection Using Bagged Posteriors

07/24/2020
by Jonathan H. Huggins, et al.

Bayesian model selection is premised on the assumption that the data are generated from one of the postulated models; however, in many applications, all of these models are incorrect. When two or more models provide a nearly equally good fit to the data, Bayesian model selection can be highly unstable, potentially leading to self-contradictory findings. In this paper, we explore using bagging on the posterior distribution ("BayesBag") when performing model selection – that is, averaging the posterior model probabilities over many bootstrapped datasets. We provide theoretical results characterizing the asymptotic behavior of the standard posterior and the BayesBag posterior under misspecification, in the model selection setting. We empirically assess the BayesBag approach on synthetic and real-world data in (i) feature selection for linear regression and (ii) phylogenetic tree reconstruction. Our theory and experiments show that, in the presence of misspecification, BayesBag provides (a) greater reproducibility and (b) greater accuracy in selecting the correct model, compared to the standard Bayesian posterior; on the other hand, under correct specification, BayesBag is slightly more conservative than the standard posterior. Overall, our results demonstrate that BayesBag provides an easy-to-use and widely applicable approach that improves upon standard Bayesian model selection by making it more stable and reproducible.
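To make the bagging step concrete, here is a minimal sketch of BayesBag for model selection, assuming a user-supplied function posterior_model_probs(data) that returns the standard posterior model probabilities for a given dataset. The function name, the bootstrap size n_boot, and the use of size-n resampling are illustrative assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def bayesbag_model_probs(data, posterior_model_probs, n_boot=100, seed=None):
    """Bagged posterior model probabilities (sketch of BayesBag).

    data: (n, ...) array of observations.
    posterior_model_probs: assumed user-supplied function mapping a dataset
        to an array of posterior probabilities, one entry per candidate model.
    n_boot: number of bootstrap replicates (illustrative default).
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    probs = []
    for _ in range(n_boot):
        # Draw a bootstrap dataset by resampling observations with replacement.
        idx = rng.integers(0, n, size=n)
        # Standard posterior model probabilities on the bootstrapped dataset.
        probs.append(posterior_model_probs(data[idx]))
    # Average over replicates to obtain the bagged model probabilities.
    return np.mean(probs, axis=0)
```

Averaging over bootstrap replicates damps the sensitivity of the model probabilities to the particular dataset observed, which is the source of the stability and reproducibility gains described above.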
