Gaussian Mixture Reduction for Time-Constrained Approximate Inference in Hybrid Bayesian Networks

06/06/2018
by   Cheol Young Park, et al.

Hybrid Bayesian Networks (HBNs), which contain both discrete and continuous variables, arise naturally in many application areas (e.g., image understanding, data fusion, medical diagnosis, fraud detection). This paper concerns inference in an important subclass of HBNs, the conditional Gaussian (CG) networks, in which all continuous random variables have Gaussian distributions and all children of continuous random variables must be continuous. Inference in CG networks can be NP-hard even for special-case structures, such as poly-trees, where inference in discrete Bayesian networks can be performed in polynomial time. Therefore, approximate inference is required. In approximate inference, it is often necessary to trade off accuracy against solution time. This paper presents an extension to the Hybrid Message Passing inference algorithm for general CG networks and an algorithm for optimizing its accuracy given a bound on computation time. The extended algorithm uses Gaussian mixture reduction to prevent an exponential increase in the number of Gaussian mixture components. The trade-off algorithm performs pre-processing to find optimal run-time settings for the extended algorithm. Experimental results for four CG networks compare performance of the extended algorithm with existing algorithms and show the optimal settings for these CG networks.
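The abstract does not spell out which reduction scheme the extended algorithm uses, but the core idea — collapsing a Gaussian mixture to fewer components while preserving its overall moments — can be illustrated with one standard approach: greedy pairwise merging with Runnalls' KL-based cost. The sketch below is a minimal 1-D illustration under that assumption, not the paper's exact algorithm; the function names and interface are hypothetical.

```python
import math

def merge_pair(w1, m1, v1, w2, m2, v2):
    """Moment-preserving merge of two weighted Gaussian components.

    The merged component keeps the pair's total weight, mean, and variance,
    so the mixture's overall first two moments are unchanged.
    """
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    v = (w1 * (v1 + m1 ** 2) + w2 * (v2 + m2 ** 2)) / w - m ** 2
    return w, m, v

def merge_cost(w1, m1, v1, w2, m2, v2):
    """Runnalls' upper bound on the KL divergence incurred by merging."""
    w, _, v = merge_pair(w1, m1, v1, w2, m2, v2)
    return 0.5 * (w * math.log(v) - w1 * math.log(v1) - w2 * math.log(v2))

def reduce_mixture(components, k):
    """Greedily merge the cheapest pair until only k components remain.

    `components` is a list of (weight, mean, variance) tuples.
    """
    comps = list(components)
    while len(comps) > k:
        best = None
        for i in range(len(comps)):
            for j in range(i + 1, len(comps)):
                c = merge_cost(*comps[i], *comps[j])
                if best is None or c < best[0]:
                    best = (c, i, j)
        _, i, j = best
        merged = merge_pair(*comps[i], *comps[j])
        comps = [c for idx, c in enumerate(comps) if idx not in (i, j)]
        comps.append(merged)
    return comps
```

Because each merge is moment-preserving, the reduced mixture keeps the original mixture's total weight and mean exactly; capping the component count this way is what prevents the exponential blow-up during message passing, at the cost of approximation error controlled by the merge criterion.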


