
A Distribution Similarity Based Regularizer for Learning Bayesian Networks

by Weirui Kong, et al.
The University of British Columbia

Probabilistic graphical models compactly represent joint distributions by decomposing them into factors over subsets of random variables. In Bayesian networks, the factors are conditional probability distributions. For many problems, common information exists among those factors. Adding similarity restrictions can be viewed as imposing prior knowledge for model regularization, and with proper restrictions, learned models usually generalize better. In this work, we study methods that exploit such high-level similarities to regularize the learning process and apply them to the task of modeling wave propagation in inhomogeneous media. We propose a novel distribution-based penalization approach that encourages similar conditional probability distributions rather than explicitly forcing the parameters to be similar. We show experimentally that our proposed algorithm solves the wave-propagation modeling problem, which other baseline methods are unable to solve.
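The abstract contrasts penalizing parameter differences with penalizing differences between the induced distributions. A minimal sketch of that distinction, assuming softmax-parameterized conditional distributions and a KL-divergence penalty (the specific parameterization and divergence are illustrative assumptions, not details from the paper):

```python
import numpy as np

def param_l2_penalty(theta_a, theta_b):
    # Baseline regularizer: force the raw parameters to be similar.
    return float(np.sum((theta_a - theta_b) ** 2))

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distribution_kl_penalty(theta_a, theta_b, eps=1e-12):
    # Hypothetical distribution-based penalty: compare the conditional
    # distributions induced by the parameters, not the parameters themselves.
    p = softmax(theta_a)
    q = softmax(theta_b)
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

# Softmax is shift-invariant, so these two parameter vectors induce the
# SAME distribution: the KL penalty is ~0 while the L2 penalty is not.
theta_a = np.array([1.0, 2.0, 3.0])
theta_b = np.array([2.0, 3.0, 4.0])
print(param_l2_penalty(theta_a, theta_b))         # nonzero
print(distribution_kl_penalty(theta_a, theta_b))  # ~0
```

The example illustrates why a distribution-level penalty can be a looser, more meaningful constraint: parameters may differ while the conditional distributions they represent coincide.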

