Approximating multivariate posterior distribution functions from Monte Carlo samples for sequential Bayesian inference

12/12/2017
by   Bram Thijssen, et al.

An important feature of Bayesian statistics is the possibility of performing sequential inference: the posterior distribution obtained after seeing a first dataset can be used as the prior for a second inference. However, when Monte Carlo sampling methods are used for the inference, we only have one set of samples from the posterior distribution, which is typically insufficient for accurate sequential inference. To do sequential inference in this case, it is necessary to estimate a functional description of the posterior probability from the Monte Carlo samples. Here, we explore whether it is feasible to perform sequential inference based on Monte Carlo samples in a multivariate context. To approximate the posterior distribution, we can use either the apparent density based on the sample positions (density estimation) or the relative posterior probability of the samples (regression). Specifically, we evaluate the accuracy of kernel density estimation, Gaussian mixtures, vine copulas and Gaussian process regression, and we test whether they can be used for sequential Bayesian inference. Additionally, both the density estimation and the regression methods can be used to obtain a post-hoc estimate of the marginal likelihood. In low dimensionality, Gaussian processes are most accurate, whereas in higher dimensionality Gaussian mixtures or vine copulas perform better. We show that sequential inference can be computationally more efficient than joint inference, and we also illustrate the limits of this approach with a failure case. Since the performance is likely to be case-specific, we provide an R package, mvdens, which offers a unified interface for the density approximation methods.
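The core idea above can be illustrated with a minimal, self-contained sketch (in Python rather than the authors' R package mvdens; the data, the conjugate-Gaussian model, and the two-component mixture are illustrative assumptions, not the paper's setup): draw Monte Carlo samples from a first posterior, fit a Gaussian mixture to them (the density-estimation route), and reuse the fitted density as the prior when a second dataset arrives.

```python
# Hedged sketch of sequential inference via density estimation.
# NOT the mvdens implementation: a toy 1D conjugate-Gaussian model where the
# exact joint posterior is known, so the sequential answer can be checked.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
sigma = 1.0                                  # known observation noise
data1 = rng.normal(2.0, sigma, 50)           # first dataset
data2 = rng.normal(2.0, sigma, 50)           # second dataset

# With a flat prior, the posterior for the mean after data1 is
# N(mean(data1), sigma^2 / n1); draw Monte Carlo "posterior samples" from it.
n1 = len(data1)
post1_samples = rng.normal(data1.mean(), sigma / np.sqrt(n1), 5000)

# Density-estimation step: fit a Gaussian mixture to the sample positions.
gmm = GaussianMixture(n_components=2, random_state=0)
gmm.fit(post1_samples.reshape(-1, 1))

# Sequential inference on a grid: fitted density (as prior) x likelihood(data2).
grid = np.linspace(0.0, 4.0, 2001).reshape(-1, 1)
log_prior = gmm.score_samples(grid)          # log density of the fitted mixture
log_lik = -0.5 * ((data2[None, :] - grid) ** 2 / sigma**2).sum(axis=1)
log_post = log_prior + log_lik
post = np.exp(log_post - log_post.max())
dx = grid[1, 0] - grid[0, 0]
post /= post.sum() * dx                      # normalize on the grid

# Compare the sequential posterior mean with the exact joint-posterior mean.
seq_mean = (grid[:, 0] * post).sum() * dx
joint_mean = np.concatenate([data1, data2]).mean()
```

If the mixture approximates the first posterior well, `seq_mean` and `joint_mean` agree closely; the gap between them is exactly the approximation error the paper studies, and it grows when the fitted density misrepresents the first posterior (the failure mode the abstract alludes to).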
