Optimal Monte Carlo Estimation of Belief Network Inference

02/13/2013
by Malcolm Pradhan, et al.

We present two Monte Carlo sampling algorithms for probabilistic inference that guarantee polynomial-time convergence for a larger class of networks than current sampling algorithms provide. These new methods are variants of the known likelihood weighting algorithm. We use recent advances in the theory of optimal stopping rules for Monte Carlo simulation to obtain an inference approximation with relative error epsilon and a small failure probability delta. We present an empirical evaluation of the algorithms that demonstrates their improved performance.
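The abstract builds on likelihood weighting, in which evidence variables are clamped and each sample is weighted by the likelihood of the evidence given its sampled parents. The sketch below illustrates plain likelihood weighting on a hypothetical two-node network (Rain → WetGrass) with a fixed sample count; the paper's contribution, an optimal stopping rule that yields a relative-error epsilon guarantee with failure probability delta, is not reproduced here.

```python
import random

# Hypothetical two-node belief network: Rain -> WetGrass.
P_RAIN = 0.2
P_WET_GIVEN_RAIN = {True: 0.9, False: 0.1}  # P(WetGrass=true | Rain)

def likelihood_weighting(n_samples, seed=0):
    """Estimate P(Rain=true | WetGrass=true) by likelihood weighting.

    The evidence variable WetGrass is clamped to true; each sample of
    the non-evidence variable Rain carries weight P(evidence | parents).
    """
    rng = random.Random(seed)
    weighted_true = 0.0
    total_weight = 0.0
    for _ in range(n_samples):
        rain = rng.random() < P_RAIN            # sample Rain from its prior
        weight = P_WET_GIVEN_RAIN[rain]         # weight by evidence likelihood
        total_weight += weight
        if rain:
            weighted_true += weight
    return weighted_true / total_weight

estimate = likelihood_weighting(200_000)
# Exact posterior for comparison: 0.2*0.9 / (0.2*0.9 + 0.8*0.1) = 0.18/0.26
```

In the paper's variants, the number of samples is not fixed in advance: a stopping rule monitors the accumulated weights and halts once the (epsilon, delta) accuracy guarantee is met, which is what gives the polynomial-time convergence for the larger network class.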


