Approximate Learning in Complex Dynamic Bayesian Networks

by   Raffaella Settimi, et al.

In this paper we extend the work of Smith and Papamichail (1999) and present fast approximate Bayesian algorithms for learning in complex scenarios where, at any time frame, the relationships between explanatory state space variables can be described by a Bayesian network that evolves dynamically over time, and where the observations taken are not necessarily Gaussian. The approach combines recent developments in approximate Bayesian forecasting methods with more familiar Gaussian propagation algorithms on junction trees. The procedure for learning state parameters from data is given explicitly for common sampling distributions, and the methodology is illustrated through a real application. The efficiency of the dynamic approximation is explored using the Hellinger divergence measure, and theoretical bounds for the efficacy of the procedure are discussed.
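The Hellinger divergence mentioned above has a convenient closed form for univariate Gaussians, which makes it a natural yardstick for comparing an approximating Gaussian posterior against a target. As a purely illustrative sketch (not the paper's actual bound), the squared Hellinger distance between N(μ₁, σ₁²) and N(μ₂, σ₂²) can be computed as:

```python
import math

def hellinger_sq_gaussian(mu1, s1, mu2, s2):
    """Squared Hellinger distance between two univariate Gaussians
    N(mu1, s1^2) and N(mu2, s2^2), via the standard closed-form
    expression. Returns a value in [0, 1]."""
    v1, v2 = s1 * s1, s2 * s2
    coeff = math.sqrt(2.0 * s1 * s2 / (v1 + v2))
    expo = math.exp(-((mu1 - mu2) ** 2) / (4.0 * (v1 + v2)))
    return 1.0 - coeff * expo

# Identical distributions have zero divergence:
print(hellinger_sq_gaussian(0.0, 1.0, 0.0, 1.0))  # → 0.0
# Divergence grows as the means separate:
print(hellinger_sq_gaussian(0.0, 1.0, 3.0, 1.0))
```

A small divergence indicates that the approximate (e.g., Gaussian-propagated) posterior is close to the exact one; the paper's theoretical bounds quantify how such errors accumulate over time steps.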


