Learning Mixtures of DAG Models

01/30/2013
by Bo Thiesson, et al.

We describe computationally efficient methods for learning mixtures in which each component is a directed acyclic graphical model (mixtures of DAGs or MDAGs). We argue that simple search-and-score algorithms are infeasible for a variety of problems, and introduce a feasible approach in which parameter and structure search is interleaved and expected data is treated as real data. Our approach can be viewed as a combination of (1) the Cheeseman-Stutz asymptotic approximation for model posterior probability and (2) the Expectation-Maximization algorithm. We evaluate our procedure for selecting among MDAGs on synthetic and real examples.
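The abstract compresses the whole method into a few sentences, so a small illustration may help. The Cheeseman-Stutz approximation scores a model m as roughly log p(D|m) ≈ log p(D'|m) + log p(D|θ̂,m) - log p(D'|θ̂,m), where D' is the expected ("completed") data produced by an E-step; because the complete-data marginal likelihood factors over the DAG families, structure search can treat the fractional counts in D' as if they were real counts. The Python sketch below is not the authors' implementation, and every concrete choice in it is an assumption made for brevity: binary variables, a fixed node ordering (so candidate parent sets are automatically acyclic), a BDeu-style family score, and synthetic two-regime toy data.

```python
# Illustrative sketch of the MDAG recipe from the abstract: EM over a
# mixture of DAG models, with per-component structure search run on
# expected (fractional) counts as if they were real data, scored with a
# Cheeseman-Stutz-style correction.  Toy data, binary variables, the
# fixed ordering, and the BDeu-style score are all assumptions.
import itertools

import numpy as np
from scipy.special import gammaln, logsumexp

rng = np.random.default_rng(0)

# Toy binary data from two regimes: in the first half x1 copies x0;
# in the second half all three variables are independent coin flips.
n, d, K = 400, 3, 2
X = np.zeros((n, d), dtype=int)
X[: n // 2, 0] = rng.integers(0, 2, n // 2)
X[: n // 2, 1] = X[: n // 2, 0]
X[: n // 2, 2] = rng.integers(0, 2, n // 2)
X[n // 2:] = rng.integers(0, 2, (n // 2, d))

def pa_config(X, parents):
    """Integer index of each row's parent configuration."""
    j = np.zeros(len(X), dtype=int)
    for p in parents:
        j = 2 * j + X[:, p]
    return j

def frac_counts(X, w, parents, i):
    """Fractional counts N[j, v]: total E-step weight w on rows with
    parent configuration j and X[:, i] == v (expected data treated
    exactly like real data)."""
    N = np.zeros((2 ** len(parents), 2))
    j = pa_config(X, parents)
    for v in (0, 1):
        np.add.at(N[:, v], j[X[:, i] == v], w[X[:, i] == v])
    return N

def bdeu(N, alpha=1.0):
    """BDeu-style family score; gammaln accepts fractional counts."""
    q, r = N.shape
    ajk, aj = alpha / (q * r), alpha / q
    return np.sum(gammaln(aj) - gammaln(aj + N.sum(1))
                  + np.sum(gammaln(ajk + N) - gammaln(ajk), axis=1))

def fit_component(X, w):
    """Interleaved structure + parameter step for one component: pick
    each variable's parents (subsets of earlier variables, which keeps
    the graph acyclic) by scoring fractional counts, then set smoothed
    CPTs from those same counts."""
    parents, cpts, score = [], [], 0.0
    for i in range(X.shape[1]):
        cands = [c for r in range(i + 1)
                 for c in itertools.combinations(range(i), r)]
        best = max(cands, key=lambda P: bdeu(frac_counts(X, w, P, i)))
        N = frac_counts(X, w, best, i)
        parents.append(best)
        cpts.append((N + 0.5) / (N.sum(1, keepdims=True) + 1.0))
        score += bdeu(N)
    return parents, cpts, score

def comp_loglik(X, parents, cpts):
    """Per-row log-likelihood under one component DAG."""
    ll = np.zeros(len(X))
    for i, (P, cpt) in enumerate(zip(parents, cpts)):
        ll += np.log(cpt[pa_config(X, P), X[:, i]])
    return ll

logR = np.log(rng.dirichlet(np.ones(K), n))   # random soft assignments
for _ in range(20):
    R = np.exp(logR - logsumexp(logR, axis=1, keepdims=True))  # E-step
    pi = R.mean(axis=0)
    comps, complete = [], 0.0
    for k in range(K):                        # M-step with structure search
        p, c, s = fit_component(X, R[:, k])
        comps.append((p, c))
        complete += s
    logR = np.log(pi) + np.column_stack(      # log joint per row/component
        [comp_loglik(X, p, c) for p, c in comps])

# Cheeseman-Stutz-flavoured score (up to the mixture-weight term):
# complete-data marginal + observed log-likelihood - expected-data
# log-likelihood, evaluated at the (approximately) converged estimates.
R = np.exp(logR - logsumexp(logR, axis=1, keepdims=True))
cs = complete + logsumexp(logR, axis=1).sum() - (R * logR).sum()
print("learned parent sets:", [p for p, _ in comps])
print("Cheeseman-Stutz-style score:", round(cs, 1))
```

The fixed-ordering search is a major simplification of a general DAG search, but it keeps the interleaving idea visible: each EM iteration re-selects structure for every component from the current expected data, and the resulting score can be compared across candidate models (for example, across different numbers of components).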


Related research

Model-based clustering and classification using mixtures of multivariate skewed power exponential distributions (07/03/2019)
Families of mixtures of multivariate power exponential (MPE) distributio...

Families of Parsimonious Finite Mixtures of Regression Models (12/02/2013)
Finite mixtures of regression models offer a flexible framework for inve...

Learning Mixtures of Ising Models using Pseudolikelihood (06/08/2015)
Maximum pseudolikelihood method has been among the most important method...

Decentralized EM to Learn Gaussian Mixtures from Datasets Distributed by Features (01/24/2022)
Expectation Maximization (EM) is the standard method to learn Gaussian m...

An extended Perona-Malik model based on probabilistic models (12/19/2016)
The Perona-Malik model has been very successful at restoring images from...

Fitting large mixture models using stochastic component selection (10/10/2021)
Traditional methods for unsupervised learning of finite mixture models r...
