Better Together: pooling information in likelihood-free inference

12/05/2022
by David T. Frazier, et al.

Likelihood-free inference (LFI) methods, such as approximate Bayesian computation (ABC), are now routinely applied to conduct inference in complex models. While the application of LFI is commonplace, the choice of which summary statistics to use in constructing the posterior remains an open question, one fraught with both practical and theoretical challenges. Instead of choosing a single vector of summaries on which to base inference, we propose a new pooled posterior and show how to optimally combine inferences from different LFI posteriors. This pooled approach obviates the need to choose a single vector of summaries, or even a single LFI algorithm, and delivers guaranteed inferential accuracy without the computational cost of sampling LFI posteriors in high dimensions. We illustrate the approach through a series of benchmark examples considered in the LFI literature.
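The pooled posterior itself is defined in the full paper; as a rough illustration of the general idea, the sketch below runs rejection ABC twice on a toy Gaussian model with two different summary choices and then combines the two ABC posteriors with an equal-weight logarithmic pool. The toy model, the summary functions, and the equal weights are assumptions made purely for illustration and are not the optimal pooling scheme proposed by the authors.

```python
# Illustrative sketch only: rejection ABC with two different summary vectors,
# followed by an equal-weight logarithmic pool of the two ABC posteriors via
# kernel density estimates. The model, summaries, and weights are placeholders,
# NOT the optimal pooling rule derived in the paper.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Toy model: y_i ~ N(theta, 1), prior theta ~ N(0, 10)
theta_true = 1.5
y_obs = rng.normal(theta_true, 1.0, size=100)

def summary_mean(y):
    return np.array([y.mean()])

def summary_quantiles(y):
    return np.quantile(y, [0.25, 0.5, 0.75])

def rejection_abc(summary_fn, n_sims=20_000, keep_frac=0.02):
    """Keep prior draws whose simulated summaries are closest to the observed ones."""
    s_obs = summary_fn(y_obs)
    thetas = rng.normal(0.0, np.sqrt(10.0), size=n_sims)  # prior draws
    dists = np.empty(n_sims)
    for i, th in enumerate(thetas):
        y_sim = rng.normal(th, 1.0, size=y_obs.size)
        dists[i] = np.linalg.norm(summary_fn(y_sim) - s_obs)
    return thetas[dists <= np.quantile(dists, keep_frac)]

# Two LFI posteriors based on two different summary choices
draws_mean = rejection_abc(summary_mean)
draws_quant = rejection_abc(summary_quantiles)

# Equal-weight logarithmic pool of the two posterior densities (illustrative only)
kde_mean = gaussian_kde(draws_mean)
kde_quant = gaussian_kde(draws_quant)
grid = np.linspace(0.5, 2.5, 500)
log_pool = 0.5 * np.log(kde_mean(grid) + 1e-300) + 0.5 * np.log(kde_quant(grid) + 1e-300)
pooled = np.exp(log_pool - log_pool.max())  # stabilise before renormalising
dx = grid[1] - grid[0]
pooled /= pooled.sum() * dx

print("Pooled posterior mean:", (grid * pooled).sum() * dx)
```

In this sketch the pooling weights are fixed at one half each; the paper's contribution is precisely how to choose such weights so that the combined posterior is optimally accurate.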
