A Generalization of the Chow-Liu Algorithm and its Application to Statistical Learning

02/10/2010
by   Joe Suzuki, et al.

We extend the Chow-Liu algorithm to general random variables, whereas previous versions considered only the finite (discrete) case. In particular, we apply the generalization to Suzuki's learning algorithm, which generates forests rather than trees from data based on the minimum description length principle, balancing the fit of the data to the forest against the simplicity of the forest. As a result, we obtain an algorithm that works when both Gaussian and finite random variables are present.
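To illustrate the kind of procedure the abstract describes, here is a minimal sketch of MDL-based Chow-Liu forest learning for discrete variables only (the finite case the paper generalizes). The pairwise scoring, the specific penalty term, and all function names are illustrative assumptions, not the paper's exact formulation: edges are ranked by empirical mutual information minus a description-length penalty, and Kruskal's algorithm adds an edge only while the score stays positive, which is what yields a forest instead of a tree.

```python
import math
from collections import Counter
from itertools import combinations

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) between two discrete sequences."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px = Counter(xs)
    py = Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        mi += (c / n) * math.log(c * n / (px[x] * py[y]))
    return mi

def chow_liu_forest(data):
    """data: dict mapping variable name -> list of discrete observations.

    Returns a list of edges forming an MDL-pruned forest. The penalty
    0.5 * (|X|-1)(|Y|-1) * log n is one common description-length cost
    for adding an edge (an assumption here, not taken from the paper).
    """
    names = list(data)
    n = len(data[names[0]])
    # Score each pair: data-fit term (n * mutual information) minus MDL penalty.
    scored = []
    for a, b in combinations(names, 2):
        mi = mutual_information(data[a], data[b])
        ka, kb = len(set(data[a])), len(set(data[b]))
        penalty = 0.5 * (ka - 1) * (kb - 1) * math.log(n)
        scored.append((n * mi - penalty, a, b))
    scored.sort(reverse=True)
    # Kruskal: add edges in decreasing score order, skipping cycles,
    # and stop once an edge no longer pays for its description length.
    parent = {v: v for v in names}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    forest = []
    for score, a, b in scored:
        if score <= 0:
            break  # remaining edges would only increase total code length
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            forest.append((a, b))
    return forest
```

For example, if `X` and `Y` are perfectly correlated while `Z` is independent of both, the procedure keeps only the edge `(X, Y)` and leaves `Z` isolated, giving a two-component forest. Handling the mixed Gaussian/discrete case of the paper would additionally require mutual-information estimates and codelength penalties for continuous variables.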


