Spying on the prior of the number of data clusters and the partition distribution in Bayesian cluster analysis

12/22/2020
by   Jan Greve, et al.

Mixture models are the key modelling approach for Bayesian cluster analysis. Different likelihood and prior specifications are required to capture the prototypical shape of the clusters. In addition, mixture modelling approaches differ crucially in the specification of the prior on the number of components and the prior on the component weight distribution. We investigate how these specifications impact the implicitly induced prior on the number of 'filled' components, i.e., data clusters, and the prior on the partitions. We derive computationally feasible calculations to obtain these implicit priors for reasonable data analysis settings and provide a reference implementation in the R package 'fipp'. In many applications the implicit priors are of more practical relevance than the explicit priors imposed, so suitable prior specifications depend on the implicit priors they induce. We highlight the insights that may be gained from inspecting these implicit priors by analysing them for three modelling approaches previously proposed for Bayesian cluster analysis: the Dirichlet process mixture and the static and dynamic mixture of finite mixtures models. We use the default priors suggested in the literature for these modelling approaches and compare the induced priors. Based on the implicit priors, we discuss the suitability of these modelling approaches and prior specifications when aiming at sparse cluster solutions and flexibility in the prior on the partitions.
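To illustrate what an implicitly induced prior on the number of data clusters looks like, here is a minimal Python sketch for the Dirichlet process mixture case (the abstract's computations and the 'fipp' package cover more general settings in R; this is not the authors' implementation). Under a Dirichlet process with concentration parameter alpha, the prior on the number of clusters K_n among n observations has the known closed form P(K_n = k) = |s(n, k)| alpha^k Gamma(alpha) / Gamma(alpha + n), where |s(n, k)| are unsigned Stirling numbers of the first kind:

```python
from math import lgamma, log, exp

def dp_cluster_prior(n, alpha):
    """Implicit prior P(K_n = k), k = 0..n, on the number of data clusters
    among n observations under a Dirichlet process mixture with
    concentration parameter alpha."""
    # Unsigned Stirling numbers of the first kind |s(i, k)|, computed
    # exactly with the recurrence |s(i,k)| = |s(i-1,k-1)| + (i-1)|s(i-1,k)|.
    s = [[0] * (n + 1) for _ in range(n + 1)]
    s[0][0] = 1
    for i in range(1, n + 1):
        for k in range(1, i + 1):
            s[i][k] = s[i - 1][k - 1] + (i - 1) * s[i - 1][k]
    # Normalising constant Gamma(alpha) / Gamma(alpha + n), on the log scale
    # to avoid overflow for larger n.
    log_norm = lgamma(alpha) - lgamma(alpha + n)
    return [0.0] + [
        exp(k * log(alpha) + log(s[n][k]) + log_norm) for k in range(1, n + 1)
    ]

probs = dp_cluster_prior(10, 1.0)
```

As a sanity check, the probabilities sum to one and the prior mean matches the standard identity E[K_n] = sum_{i=0}^{n-1} alpha / (alpha + i), which for alpha = 1 is the n-th harmonic number. Comparing such induced priors across alpha values (and against the static and dynamic mixture of finite mixtures priors) is exactly the kind of inspection the abstract advocates.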
