
On a Loss-based prior for the number of components in mixture models

by Clara Grazian et al.

We propose a prior distribution for the number of components of a finite mixture model. The novelty is that the prior is obtained by considering the loss one would incur if the true number of components were not considered. The prior has an elegant, easy-to-implement structure, which allows one to naturally include any available prior information, or to opt for a default solution when such information is unavailable. The performance of the prior, and its comparison with existing alternatives, is studied through the analysis of both real and simulated data.
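The abstract does not specify the paper's loss function, but the general mechanism of a prior on the number of components combined with per-model evidence can be sketched. Below is a hedged, generic illustration: the decaying placeholder loss `c * k` and the toy log-evidence values are assumptions for demonstration, not the authors' actual construction.

```python
import math

def loss_based_prior(K, c=1.0):
    # Illustrative prior over k = 1..K: mass decays with a placeholder
    # "loss" proportional to k (an assumption, NOT the paper's loss).
    weights = [math.exp(-c * k) for k in range(1, K + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def posterior_over_k(prior, log_evidence):
    # Combine the prior on k with per-k log marginal likelihoods,
    # using the log-sum-exp trick for numerical stability.
    logs = [math.log(p) + le for p, le in zip(prior, log_evidence)]
    m = max(logs)
    unnorm = [math.exp(l - m) for l in logs]
    z = sum(unnorm)
    return [u / z for u in unnorm]

prior = loss_based_prior(5)
# Toy log-evidence values for k = 1..5 (hypothetical numbers).
post = posterior_over_k(prior, [-10.0, -4.0, -3.5, -6.0, -9.0])
```

Here the prior shrinks toward small k, so among models with similar evidence (k = 2 and k = 3 above) the posterior favours the smaller one.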




Related research:

- Generalized Identifiability Bounds for Mixture Models with Grouped Samples. Recent work has shown that finite mixture models with m components are i...
- Exact fit of simple finite mixture models. How to forecast next year's portfolio-wide credit default rate based on ...
- Loss based prior for the degrees of freedom of the Wishart distribution. In this paper we propose a novel method to deal with Vector Autoregressi...
- Consistency of mixture models with a prior on the number of components. This article establishes general conditions for posterior consistency of...
- Model Selection for Mixture Models - Perspectives and Strategies. Determining the number G of components in a finite mixture distribution ...
- Order selection with confidence for finite mixture models. The determination of the number of mixture components (the order) of a f...
- Spying on the prior of the number of data clusters and the partition distribution in Bayesian cluster analysis. Mixture models represent the key modelling approach for Bayesian cluster...