
Is infinity that far? A Bayesian nonparametric perspective of finite mixture models
Mixture models are one of the most widely used statistical tools when de...

Gibbs Sampling for (Coupled) Infinite Mixture Models in the Stick Breaking Representation
Nonparametric Bayesian approaches to clustering, information retrieval, ...

Approximate Bayesian Computation for Finite Mixture Models
Finite mixture models are used in statistics and other disciplines, but ...

Fast Learning of Clusters and Topics via Sparse Posteriors
Mixture models and topic models generate each observation from a single ...

Quasi-Bernoulli Stick-breaking: Infinite Mixture with Cluster Consistency
In mixture modeling and clustering applications, the number of components...

Finite mixture models are typically inconsistent for the number of components
Scientists and engineers are often interested in learning the number of ...

Tree-Guided MCMC Inference for Normalized Random Measure Mixture Models
Normalized random measures (NRMs) provide a broad class of discrete rand...
MCMC computations for Bayesian mixture models using repulsive point processes
Repulsive mixture models have recently gained popularity for Bayesian cluster detection. Compared to more traditional mixture models, repulsive mixture models produce a smaller number of well-separated clusters. The most commonly used methods for posterior inference either require fixing the number of components a priori or rely on reversible jump MCMC computation. We present a general framework for mixture models in which the prior on the 'cluster centres' is a finite repulsive point process depending on a hyperparameter, specified by a density that may involve an intractable normalizing constant. By investigating the posterior characterization of this class of mixture models, we derive an MCMC algorithm that avoids the well-known difficulties associated with reversible jump MCMC computation. In particular, we use an ancillary variable method, which eliminates the problem of intractable normalizing constants in the Hastings ratio. The ancillary variable method relies on a perfect simulation algorithm, and we demonstrate that this is fast because the number of components is typically small. In several simulation studies and an application to sociological data, we illustrate the advantage of our new methodology over existing methods, and we compare the use of a determinantal and a repulsive Gibbs point process prior model.
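The key trick described above, using an ancillary (auxiliary) variable drawn by exact simulation so that the intractable normalizing constant cancels in the Hastings ratio, can be illustrated on a toy model. The sketch below is not the paper's algorithm for point-process priors; it applies the same cancellation idea (in the style of an exchange algorithm) to a one-parameter unnormalized density where exact simulation is easy, so the mechanics are visible. All function names and the toy model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "doubly intractable" model: f(x | theta) = exp(-theta * x), x > 0.
# Its normalizing constant Z(theta) is pretended to be unknown; the
# sampler below never evaluates it. Prior: theta ~ Exponential(1).
def log_f(x, theta):
    """Unnormalized log-likelihood, summed over observations."""
    return -theta * np.sum(x)

def sample_f(theta, n):
    """Exact ('perfect') simulation from the normalized model."""
    return rng.exponential(1.0 / theta, size=n)

def log_prior(theta):
    return -theta if theta > 0 else -np.inf

x = rng.exponential(1.0 / 2.0, size=50)  # synthetic data, true rate 2

theta, chain = 1.0, []
for _ in range(5000):
    prop = theta * np.exp(0.3 * rng.standard_normal())  # log-random-walk
    w = sample_f(prop, len(x))  # ancillary draw at the proposed value
    # Hastings ratio: Z(theta) and Z(prop) cancel because the ancillary
    # data w enter with the parameters swapped relative to x.
    log_a = (log_f(x, prop) + log_f(w, theta) + log_prior(prop)
             - log_f(x, theta) - log_f(w, prop) - log_prior(theta)
             + np.log(prop) - np.log(theta))  # log-walk proposal correction
    if np.log(rng.uniform()) < log_a:
        theta = prop
    chain.append(theta)

post_mean = np.mean(chain[1000:])
print(post_mean)  # should sit near the true rate of 2
```

In the paper's setting the ancillary variable is a realization of the repulsive point process itself, obtained by perfect simulation, which is cheap precisely because the number of cluster centres is typically small; the toy stand-in above only mirrors the cancellation structure.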