
Evidence estimation in finite and infinite mixture models and applications

by Adrien Hairault et al.

Estimating the model evidence - or marginal likelihood of the data - is a notoriously difficult task for finite and infinite mixture models. We reexamine here different Monte Carlo techniques advocated in the recent literature, as well as novel approaches based on Geyer's (1994) reverse logistic regression technique, Chib's (1995) algorithm, and Sequential Monte Carlo (SMC). Applications are numerous. In particular, testing for the number of components in a finite mixture model, or for the adequacy of a finite mixture model for a given dataset, has long been and still is an issue of much interest, albeit one still lacking a fully satisfactory resolution. Using a Bayes factor to select the right number of components K in a finite mixture model is known to provide a consistent procedure. We furthermore establish the consistency of the Bayes factor when comparing a parametric family of finite mixtures against the nonparametric 'strongly identifiable' Dirichlet process mixture (DPM) model.
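To make the evidence-estimation problem concrete, here is a minimal sketch (not the authors' method) of the naive Monte Carlo baseline that the more sophisticated techniques above improve upon: the evidence p(y | K) is approximated by averaging the mixture likelihood over draws from the prior, and the ratio of two such estimates gives a Bayes factor for comparing numbers of components. All modeling choices below (unit component variances, a symmetric Dirichlet prior on the weights, an N(0, 3^2) prior on the means) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data from a well-separated two-component Gaussian mixture.
y = np.concatenate([rng.normal(-2.0, 1.0, 60), rng.normal(2.0, 1.0, 40)])

def log_evidence_prior_mc(y, K, n_draws=10000, rng=rng):
    """Naive Monte Carlo evidence estimate for a K-component Gaussian
    mixture with unit variances: average the likelihood over prior draws.
    Illustrative only; this estimator has notoriously high variance."""
    logliks = np.empty(n_draws)
    for s in range(n_draws):
        w = rng.dirichlet(np.ones(K))       # symmetric Dirichlet prior on weights
        mu = rng.normal(0.0, 3.0, K)        # N(0, 3^2) prior on component means
        # log density of each observation under each component (unit variance)
        comp = -0.5 * (y[:, None] - mu[None, :]) ** 2 - 0.5 * np.log(2 * np.pi)
        # mixture log-likelihood: sum over data of log sum_k w_k N(y_i; mu_k, 1)
        logliks[s] = np.sum(np.logaddexp.reduce(np.log(w) + comp, axis=1))
    # log-mean-exp for numerical stability
    m = logliks.max()
    return m + np.log(np.mean(np.exp(logliks - m)))

log_ev = {K: log_evidence_prior_mc(y, K) for K in (1, 2, 3)}
log_bf_21 = log_ev[2] - log_ev[1]  # log Bayes factor for K=2 vs K=1
print(log_ev, log_bf_21)
```

With data this well separated, the K = 2 model receives a much larger evidence than K = 1, so the log Bayes factor is strongly positive, which is the behavior the consistency results in the abstract formalize. The variance of this prior-sampling estimator is exactly why methods such as Chib's algorithm, reverse logistic regression, and SMC are preferred in practice.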

