Approximate Shannon Sampling in Importance Sampling: Nearly Consistent Finite Particle Estimates

09/23/2019 ∙ by Alec Koppel, et al.

In Bayesian inference, we seek to compute information about random variables, such as moments or quantiles, on the basis of data and prior information. When the distribution of the random variables is complicated, Monte Carlo (MC) sampling is usually required. Importance sampling is a standard MC tool for addressing this problem: one generates a collection of samples according to an importance distribution, computes their contribution to an unnormalized density, i.e., the importance weight, and then sums the result followed by normalization. This procedure is asymptotically consistent as the number of MC samples, and hence of the deltas (particles) that parameterize the density estimate, goes to infinity. However, retaining infinitely many particles is intractable. Thus, we propose a scheme for keeping only a finite representative subset of particles and their augmented importance weights that is nearly consistent.
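As a hedged sketch of the procedure the abstract describes, the following implements standard self-normalized importance sampling, followed by a naive finite-particle reduction via multinomial resampling. The target, proposal, and the resampling rule here are illustrative assumptions only; the paper's actual scheme for augmenting weights over a retained subset may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # number of MC samples (particles)

# Unnormalized target density: a standard normal missing its 1/sqrt(2*pi)
# constant (illustrative assumption, not from the paper).
p_tilde = lambda x: np.exp(-0.5 * x**2)

# Importance (proposal) distribution: N(0, 2^2), easy to sample from.
x = rng.normal(0.0, 2.0, n)
q_pdf = np.exp(-0.5 * (x / 2.0) ** 2) / (2.0 * np.sqrt(2.0 * np.pi))

# Importance weights: contribution of each particle to the unnormalized
# density, summed and then normalized.
w = p_tilde(x) / q_pdf
w /= w.sum()

# Consistent particle estimate of E[X^2] under the target (true value: 1).
est_full = np.sum(w * x**2)

# Naive finite-particle reduction: resample k particles in proportion to
# their weights, yielding an equally weighted subset. This only illustrates
# the memory saving; it is not the paper's near-consistency construction.
k = 1_000
idx = rng.choice(n, size=k, p=w)
est_k = np.mean(x[idx] ** 2)
```

Both `est_full` and the compressed `est_k` approximate the same moment; the point of keeping only `k` particles is that storage no longer grows with the number of MC samples drawn.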





