# On a Loss-based prior for the number of components in mixture models

We propose a prior distribution for the number of components of a finite mixture model. The novelty is that the prior distribution is obtained by considering the loss one would incur if the true value representing the number of components were not considered. The prior has an elegant and easy-to-implement structure, which allows one to naturally include any prior information one may have, as well as to opt for a default solution in cases where this information is not available. The performance of the prior, and comparison with existing alternatives, is studied through the analysis of both real and simulated data.


## 1 Introduction

This paper takes a novel look at the construction of a prior distribution for the number of components of finite mixture models. These models represent a flexible and rich way of modelling data, extending the collection of probability distributions that can be considered and used. Mixture models have been widely developed and researched for over a century. To name a few key contributions, we have Titterington et al. (1985), Neal (1992), McLachlan and Peel (2000), Marin et al. (2005), Frühwirth–Schnatter (2006) and the forthcoming Celeux et al. (2018). Besides the general literature on mixture models, a wide range of applications has been discussed, including genetics and gene expression profiling (McLachlan et al., 2002; Yeung et al., 2001), economics and finance (Juárez and Steel, 2010; Dias et al., 2010), social sciences (Reynolds et al., 2000; Handcock et al., 2007) and more.

The basic idea of a mixture model is to assume that observations are drawn from a density which is the result of a combination of components

$$x \sim \sum_{j=1}^{k} \omega_j f_j(\cdot \mid \theta_j), \qquad (1)$$

where the form of $f_j$ is known for each $j$, while the parameters $\theta_j$ and the weights $\omega_j$ are unknown and have to be estimated. In this work, we assume the number of components $k$ to be unknown as well and, in accordance with the Bayesian framework, we assign a prior distribution to it.

Besides the above approach, which is what we use here, other methods to deal with an unknown $k$ have been presented. One way is based on model selection and consists in fitting mixtures with $k = 1, \dots, K$ (for a suitable maximum $K$) and comparing the models through some index, such as the Bayesian information criterion; see, for example, Baudry et al. (2010). Alternatively, one could set a large $k$ and let the posterior behaviour of the weights identify which components are meaningful. This is known as an overfitted mixture model, and the aim is to define a prior distribution which has a conservative property in reducing a posteriori the number of meaningful components (Rousseau and Mengersen, 2011); Grazian and Robert (2018) have discussed the same approach by using the Jeffreys prior for the mixture weights conditionally on the other parameters.

Several techniques have been proposed to deal with $k$ through the use of a prior $P(k)$: Richardson and Green (1997), Stephens (2000), Nobile and Fearnside (2007) and McCullagh and Yang (2008). A well-known and widely used method is reversible jump Markov chain Monte Carlo (Green, 1995) which, due to its non-trivial set up, has led to the search for alternatives. A recent and interesting one is proposed by Miller and Harrison (2018), where the model in (1) is written as

$$k \sim P(k), \quad (\omega_1,\dots,\omega_k) \mid k \sim \mathrm{Dir}(\gamma,\dots,\gamma), \quad Z_1,\dots,Z_n \mid \omega \sim \mathrm{Categorical}(\omega_1,\dots,\omega_k), \quad \theta_1,\dots,\theta_k \sim H, \quad x_i \sim f_{\theta_{Z_i}},$$

where $P(k)$ is the prior on the number of components defined over the set $\{1, 2, \dots\}$, $H$ is the prior base measure, both the $Z_i$ and the $\theta_j$ are conditionally independent and identically distributed, and the $Z_i$ are latent variables describing the component membership. The parallelism with the stick-breaking representation of the Dirichlet process mixture model is then highlighted, together with the fact that Dirichlet process mixture model samplers can be applied to finite mixtures as well. Both simulations and real data analysis performed in the present work have been obtained by using the Jain–Neal split-merge samplers (Jain and Neal, 2004, 2007), as implemented in Miller and Harrison (2018).
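To make the hierarchy above concrete, the following sketch forward-samples a data set from it. The specific choices here (a truncated uniform prior for $k$, a normal base measure $H$, univariate normal components with unit variance) are illustrative assumptions of ours, not prescriptions of the paper:

```python
import random

def sample_mixture_dataset(n, gamma=1.0, k_max=10, seed=0):
    """Forward-sample the hierarchical mixture model: k, weights, allocations, data.

    Illustrative choices (not from the paper): k ~ Uniform{1,...,k_max},
    base measure H = N(0, 5^2) for component means, unit component variance.
    """
    rng = random.Random(seed)
    k = rng.randint(1, k_max)                              # k ~ P(k)
    g = [rng.gammavariate(gamma, 1.0) for _ in range(k)]   # Dirichlet via normalised Gammas
    w = [v / sum(g) for v in g]                            # (w_1,...,w_k) ~ Dir(gamma,...,gamma)
    theta = [rng.gauss(0.0, 5.0) for _ in range(k)]        # theta_j ~ H
    z = rng.choices(range(k), weights=w, k=n)              # Z_i ~ Categorical(w)
    x = [rng.gauss(theta[zi], 1.0) for zi in z]            # x_i ~ f_{theta_{Z_i}}
    return k, z, x
```

Split-merge samplers such as Jain–Neal then invert this generative process, exploring the allocations $Z_i$ without trans-dimensional moves.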

In terms of the determination of $k$, which is the focus of this work, the literature is definitely sparse. In particular, it appears that there is only one proposed prior for $k$ with a non-informative flavour, that is the Poisson(1) prior (Nobile, 2005). Although other authors proposed to use a prior proportional to a Poisson distribution, see for example Phillips and Smith (1996) and Stephens (2000), only Nobile (2005) gave some theoretical justification on how to choose the Poisson parameter when there is a lack of prior knowledge about $k$. Another option, suitable when there is no sufficient prior information, would be to assign equal prior mass to every value of $k$; however, should one wish to consider, at least theoretically, the possibility of an infinite support for the number of components, this last solution would not be viable, or would need a truncation of the support which might influence inference. Finally, the geometric distribution represents a possible way of expressing prior uncertainty (Miller and Harrison, 2018), although no discussion is reserved to setting the value of its parameter in a scenario of insufficient prior information about the number of components.

Although the illustrations we present here refer to mixtures of univariate and multivariate normal densities, the loss-based prior for $k$ we introduce does not depend on the form of the $f_j$'s, and is therefore suitable for any mixture. Throughout the paper we adopt, for the weights and the component parameters, the priors proposed in Miller and Harrison (2018); this does not affect the analysis of the results or the comparisons among different priors for $k$.

## 2 Prior for the number of components

A finite mixture distribution (or, simply, mixture model) is based on the assumption that a set of observed random variables $x = (x_1, \dots, x_n)$ has been generated by a process that can be represented by a weighted sum of probability distributions. That is,

$$g(x_i \mid k, \omega, \theta) = \sum_{j=1}^{k} \omega_j f_j(x_i \mid \theta_j), \qquad i = 1, \dots, n, \qquad (2)$$

where $f_j$ is the probability distribution of the $j$-th component, $j = 1, \dots, k$, $\theta_j$ is the (possibly vector-valued) parameter of $f_j$, and $\omega = (\omega_1, \dots, \omega_k)$ are the weights of the components, with $\omega_j \geq 0$ for $j = 1, \dots, k$ and $\sum_{j=1}^{k} \omega_j = 1$. For the model in (2) the prior can be specified as follows:

$$\pi(k, \omega, \theta) = P(k)\,\pi(\omega \mid k)\,\pi(\theta \mid k). \qquad (3)$$

As mentioned above, the aim of this paper is to define a prior for $k$; therefore the prior distributions for $\omega$ and $\theta$ will be chosen to be proper "standard" priors, minimally informative if necessary; see, for example, Richardson and Green (1997) or Miller and Harrison (2018).

The posterior for $k$ is then given by

$$P(k \mid x) \propto \int f(x \mid k, \omega, \theta) \times P(k)\,\pi(\omega \mid k)\,\pi(\theta \mid k)\, d\omega\, d\theta. \qquad (4)$$

It is now fundamental to discuss the support of $k$. Although for practical purposes the range of values $k$ can take is finite, say $\{1, \dots, K\}$, it may be appropriate to define a prior over the whole of $\mathbb{N} = \{1, 2, \dots\}$. In fact, by truncating the support of $k$ there may be possible distortions of the posterior around the boundary, affecting the inferential results. However, it has to be noticed that truncation is needed when using a uniform prior, since the prior on $k$ must be proper, as proved by Nobile (2005). It seems, therefore, more reasonable to use a proper prior defined on $\mathbb{N}$.

To obtain the loss-based prior for $k$, we employ the approach introduced in Villa and Walker (2015), where a worth is associated to each value of $k$ by considering the loss one would incur if that value were removed, and it is the true number of components. Then, the prior is obtained by linking the worth through the self-information loss function (Merhav and Feder, 1998). In brief, the self-information loss function associates a loss to a probability statement, say $P(k)$, and it has the form $-\log P(k)$.

To determine the worth of a given $k$, we proceed as follows. First, we note that, in removing a value of $k$, there is no loss in information. In fact, a well-known Bayesian property (Berk, 1966) shows that the posterior distribution of a parameter (if the true parameter value is removed) accumulates, in a Kullback–Leibler sense, on the parameter value that identifies the model most similar to the true one; this is equivalent to minimising the loss in information one would incur. Now, if we consider a mixture with $k$ components, the minimum loss would be measured from any mixture with $k' > k$ components. In fact, the mixture with $k'$ components has more parameters (i.e. uncertainty) than the mixture with $k$ components, meaning that the informational content of the former is larger than that of the latter, and the loss in information is zero. However, should we consider only the loss in information, the resulting prior would be the uniform, that is

$$-\log P(k) = 0 \quad \Longrightarrow \quad P(k) \propto 1.$$

To have a sensible prior, we add a loss component that takes into consideration the increasing complexity of the mixture model. This can be interpreted as the loss in complexity we incur if we remove the $k$-component mixture when it is the true one, and it is related to the number of parameters we avoid estimating (and the number of extra observations we would, in general, need in order to have reliable estimates). This loss in complexity is proportional to $k$, so the total loss associated to the mixture with $k$ components is given by $c \cdot k$, yielding

$$P(k) \propto \exp\{-c \cdot k\}, \qquad (5)$$

where $c > 0$ is included as loss functions are defined up to a constant. Although the prior in (5) could be directly used, with the interpretation that $c$ is a hyper-parameter which allows one to control for sparsity, our recommendation is to reparametrise it by setting $p = e^{-c}$ and assign a suitable prior to $p$. In particular, by placing a beta prior on $p$, the prior for $k$ is a particular beta-negative-binomial, that is a beta-geometric distribution, when the support for $k$ is infinite, as the following Definition 2.1 (whose detailed derivation is in the Appendix 1) shows.

###### Definition 2.1

Consider the prior distribution for the number of components $k$ of a finite mixture model, as defined in (5), where we set $p = e^{-c}$ and $k \in \{1, 2, \dots\}$. If we choose for $p$ the prior density $\pi(p) = p^{\beta-1}(1-p)^{\alpha-1}/B(\alpha, \beta)$, with $\alpha, \beta > 0$, then

$$P(k \mid p) = p^{k-1}(1-p),$$

which is a geometric distribution with parameter $1-p$, and

$$P(k) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} \cdot \frac{\Gamma(k+\beta-1)\,\Gamma(\alpha+1)}{\Gamma(k+\alpha+\beta)}, \qquad (6)$$

which is a beta-negative-binomial distribution with the number of failures before the experiment is stopped equal to 1, and shape parameters $\alpha$ and $\beta$.

The prior in (6) is strictly positive on the whole support of $k$. This is a necessary condition (Nobile, 1994) to have consistency on the number of components. In addition, the prior in (6) is proper, which is another requirement to yield a proper posterior (Nobile, 2004) when the support is infinite. On this aspect, as the Jeffreys prior for a geometric distribution is improper, the resulting prior for $k$ would be improper as well; as such, a default choice for the prior on $p$ should be made on different grounds. In particular, the default choice should not give any preference to particular values of $p$, and this can be achieved by setting $\alpha = \beta = 1$. The resulting prior is then a beta-negative-binomial with all parameter values equal to one, which can be rewritten as

$$P(k) = \frac{1}{k(k+1)}. \qquad (7)$$
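As a quick numerical sanity check (our sketch, not the authors' code), the prior in (6) can be evaluated through log-gamma functions, and the default choice $\alpha = \beta = 1$ recovers (7):

```python
from math import exp, lgamma

def loss_based_prior(k, alpha=1.0, beta=1.0):
    """Beta-negative-binomial prior of eq. (6), evaluated on the log scale."""
    if k < 1:
        return 0.0
    log_p = (lgamma(alpha + beta) - lgamma(alpha) - lgamma(beta)
             + lgamma(k + beta - 1) + lgamma(alpha + 1) - lgamma(k + alpha + beta))
    return exp(log_p)

# Default choice alpha = beta = 1 recovers P(k) = 1/(k(k+1)).
for k in range(1, 10):
    assert abs(loss_based_prior(k) - 1.0 / (k * (k + 1))) < 1e-12
```

Working on the log scale keeps the evaluation stable for large $k$, where the gamma functions themselves would overflow.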

In a more general setting, the parameters $\alpha$ and $\beta$ of the beta prior on $p$ can be used to reflect available prior information about the true number of components. The expectation of the prior in (6) is

$$E(k) = E(E\{k \mid p\}) = E\left((1-p)^{-1}\right) = \frac{\alpha+\beta-1}{\alpha-1}, \qquad \text{for } \alpha > 1, \qquad (8)$$

while the variance has the form

$$\mathrm{Var}(k) = E(\mathrm{Var}\{k \mid p\}) + \mathrm{Var}(E\{k \mid p\}) = \frac{\alpha\beta(\alpha+\beta-1)}{(\alpha-2)(\alpha-1)^2}, \qquad \text{for } \alpha > 2.$$

From equation (8) we see that, for a fixed $\alpha$, $E(k)$ grows linearly in $\beta$. So, for a given $\alpha$, the hyper-parameter $\beta$ can be interpreted as the quantity controlling how many components in the mixture we want a priori. The choice of $\alpha$, among values strictly larger than 2, allows one to control the variance of the prior, i.e. how certain (or uncertain) we are a priori about the true value of $k$.
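The moment formulas above suggest a simple elicitation recipe (our sketch; the helper names are ours): pick $\alpha > 2$ to fix the dispersion, then solve (8) for the $\beta$ matching a desired prior mean.

```python
def beta_for_prior_mean(target_mean, alpha):
    """Solve (alpha + beta - 1)/(alpha - 1) = target_mean for beta (needs alpha > 1)."""
    if alpha <= 1 or target_mean <= 1:
        raise ValueError("need alpha > 1 and target_mean > 1")
    return (target_mean - 1.0) * (alpha - 1.0)

def prior_variance(alpha, beta):
    """Var(k) = alpha*beta*(alpha+beta-1) / ((alpha-2)*(alpha-1)^2), finite for alpha > 2."""
    if alpha <= 2:
        raise ValueError("variance finite only for alpha > 2")
    return alpha * beta * (alpha + beta - 1) / ((alpha - 2) * (alpha - 1) ** 2)

# E.g. a prior mean of 5 components with alpha = 3 requires beta = 8,
# giving Var(k) = 3*8*10 / (1*4) = 60: a heavy right tail.
beta = beta_for_prior_mean(5.0, 3.0)
```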

If the support for $k$ is finite, say $\{1, \dots, K\}$, the prior for the number of components (with $k = 1, \dots, K$) will have the form:

$$P(k) = \int_0^1 \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, p^{k+\beta-2}(1-p)^{\alpha}\, \frac{1}{1-p^K}\, dp, \qquad (9)$$

which does not have a closed form. Although the prior in (9) can be easily implemented in a Markov chain Monte Carlo procedure, one has to be careful as its performance might depend on the choice of $K$. Besides this, the prior certainly yields a proper posterior for $k$ and is consistent on the number of components.
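Since (9) has no closed form, it can be approximated numerically. A minimal midpoint-rule sketch (assuming, as in our reading of (9), the truncated-geometric normalisation $1 - p^K$):

```python
from math import exp, lgamma

def finite_support_prior(K, alpha=1.0, beta=1.0, grid=20000):
    """Approximate the prior in (9) on {1,...,K} by midpoint-rule integration over p."""
    const = exp(lgamma(alpha + beta) - lgamma(alpha) - lgamma(beta))
    h = 1.0 / grid
    pmf = [0.0] * K
    for i in range(grid):
        p = (i + 0.5) * h                        # midpoints avoid the endpoints 0 and 1
        base = const * (1.0 - p) ** alpha / (1.0 - p ** K) * h
        for k in range(1, K + 1):
            pmf[k - 1] += base * p ** (k + beta - 2)
    return pmf

pmf = finite_support_prior(5)
# The truncated prior is a proper pmf and, like (7), is decreasing in k.
```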

## 3 Illustrations

To illustrate the performance of the loss-based prior we have run a simulation study and analysed two data sets. In both cases, we have considered univariate and multivariate scenarios, comparing the proposed prior, under default settings, with current alternatives found in the literature.

Before describing the analysis and illustrating the results, the following clarifications have to be made. First, as the aim of this paper is to propose a novel prior distribution for the number of components, we do not discuss in detail the priors assigned to the model weights and to the parameters of the components of the mixture. Second, for the same reason, we limit the examples to mixtures of normal densities. In fact, keeping both model and priors relatively straightforward allows one to better appreciate any difference among the priors. Finally, the computational algorithm implemented assumes that the maximum number of components in the mixture is 50, so that the uniform prior is defined over $\{1, \dots, 50\}$; although the truncation is necessary for the uniform prior only, so as to have a proper posterior, the choice of 50 is sufficiently large not to interfere with any of the analyses performed.

### 3.1 Real data sets

In this section we illustrate the performance of the prior by analysing two available data sets. The first is the galaxy data set (Roeder, 1990), which is considered a benchmark for comparison in the univariate case. We also consider a multivariate case; in particular, the gene expression data set for discriminating cancer subtypes used in Miller and Harrison (2018).

The galaxy data set contains the velocities of 82 galaxies in the Corona Borealis region. Given that the focus here is on the prior for the number of components, we do not go beyond an already tested set up for the model. In particular, we use the model of Richardson and Green (1997), where the components of the mixture are normal densities, i.e. $N(\mu_j, \lambda_j^{-1})$, with independent priors for the parameters: normal densities for the means, $\mu_j \sim N(\mu_0, \sigma_0^2)$, and gamma densities for the precisions, $\lambda_j \sim \mathrm{Gamma}(a, b)$. We set $a = 2$, while data-dependent priors are chosen for the remaining hyper-parameters: $\mu_0 = (\max_i x_i + \min_i x_i)/2$, $\sigma_0 = \max_i x_i - \min_i x_i$ and $b \sim \mathrm{Gamma}(0.2, 10/\sigma_0^2)$.

Table 1 shows the posterior distribution for the first ten values of $k$ obtained by implementing the loss-based prior, the uniform prior and the Poisson(1) prior; the corresponding posterior modes can be read from the table, and the posteriors are also plotted in Figure 1. There is, obviously, no unanimous agreement on the number of components in the literature, and this is supported by the results in Table 1, which shows estimates of $k$ comparable to what has been identified previously. However, while the posterior 95% credible intervals obtained with the loss-based prior and the uniform prior are sensible, the interval for the Poisson(1) prior appears to be quite narrow, excluding values of $k$ previously estimated in the literature. It seems that the loss-based prior provides an intermediate posterior distribution, between the one deriving from the Poisson prior, which is very peaked around its mode, and the one deriving from the uniform prior, which gives non-negligible posterior mass to large values of $k$.

The analysis of the cancer data has given very similar results for the three priors. As Figure 2 shows, the posterior distributions for $k$ do not differ much, which is supported by the posterior modes and the posterior 95% credible intervals, which coincide in all cases. It is obvious that the amount of information about $k$ in the data is sufficiently strong to dominate any of the priors used.

### 3.2 Simulation study

The simulation study consisted in drawing 100 samples per scenario and computing summary indexes of the posterior distributions for the number of components. The details of the simulation study are discussed in the Appendix 2. Briefly, we have considered sample sizes of 50, 100, 500 and 2000. The choice of these sample sizes, for the univariate case, is connected with McCullagh and Yang (2008), where the challenges of estimating a number of components comparable to the number of observations have been discussed; thus, choosing a minimum sample size larger than the maximum number of components allows one to avoid the pitfalls raised in McCullagh and Yang (2008). For the univariate case, we have considered mixtures with different numbers of components, while for the multivariate case we have considered multivariate mixtures of normal densities of different dimensions.

In Table 2 we have reported the posterior average mode and average 95% credible intervals for the univariate case. Note that the lower and upper limits of the average posterior credible intervals have been obtained by averaging, respectively, the lower and upper limits over the 100 samples. Some scenarios have been omitted here; full results are available in the Appendix 2. We note that, as one would expect, the results improve as the sample size grows, and this is reflected in the values of the posterior (average) mode and in the size of the posterior (average) credible interval. Although for small sample sizes the posterior mode appears not to concentrate on the true number of components, the true $k$ is within the credible intervals.

The results concerning mixtures of three multivariate normal densities are shown in Table 3. We again note an improvement in the (average) posterior mode and in the (average) posterior 95% credible interval as the sample size increases. There is, moreover, an improvement in the interval size as the dimensionality increases. It appears that, for a given sample size, the prior is more accurate in detecting the right number of components when the dimension increases, and this is in line with the results obtained in the analysis of the cancer gene data set.

## 4 Discussion

To make inference on the number of components in a finite mixture model within the Bayesian framework, one may consider an infinite support. This avoids any potential dependence on an arbitrarily set maximum number of components, giving at the same time more versatility and elegance. The loss-based prior provides a flexible solution to the problem, as it allows one to incorporate any prior information one may have, as well as to opt for a default solution in scenarios of actual or alleged prior "ignorance". In this work, we have shown that, in a setting of limited information, the prior chosen for the number of components influences the posterior distribution; in particular, the uniform prior, which is often used as a default prior, does not seem to be conservative, allowing the posterior distribution to spread towards large values of $k$. In terms of inference, some level of conservativeness should be preferred, given that the complexity of the inferential problem explodes with the number of meaningful components. On the other hand, the Poisson(1) prior seems to be too conservative, so that, as is evident in the simulation study, the true value may not even be included in the posterior credible interval.

Analysis on both real and simulated data shows that the loss-based prior represents a good compromise between having a prior which excessively penalises for complexity (i.e. the Poisson prior with parameter equal to one) and the uniform prior which suffers from theoretical and implementation weaknesses.

## Appendix 1: Derivation of Definition 2.1

Starting from $P(k) \propto e^{-ck}$, we set $p = e^{-c}$, obtaining

$$P(k \mid p) = \frac{p^k}{\sum_{m=1}^{\infty} p^m} = \frac{p^k}{p/(1-p)} = p^{k-1}(1-p), \qquad (S1)$$

which is a geometric distribution defined over the positive integers and with parameter $1-p$. Given the prior $\pi(p) = p^{\beta-1}(1-p)^{\alpha-1}/B(\alpha, \beta)$, the marginal for the number of components is given by

$$P(k) = \int_0^1 p^{k-1}(1-p)\,\pi(p)\, dp = \frac{1}{B(\alpha, \beta)} \int_0^1 p^{k+\beta-2}(1-p)^{\alpha}\, dp = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} \cdot \frac{\Gamma(k+\beta-1)\,\Gamma(\alpha+1)}{\Gamma(k+\alpha+\beta)}, \qquad (S2)$$

where $B(\cdot,\cdot)$ is the beta function. Therefore, the marginal distribution for $k$ in (S2) is beta-negative-binomial with the parameter expressing the number of failures before stopping equal to 1, and shape parameters $\alpha$ and $\beta$. In fact, the generic beta-negative-binomial is defined over $\{0, 1, 2, \dots\}$ and has probability mass function

$$f(k \mid \alpha, \beta, r) = \binom{r+k-1}{k} \frac{\Gamma(\alpha+r)\,\Gamma(\beta+k)\,\Gamma(\alpha+\beta)}{\Gamma(\alpha+r+\beta+k)\,\Gamma(\alpha)\,\Gamma(\beta)} = \frac{\Gamma(r+k)}{k!\,\Gamma(r)} \cdot \frac{\Gamma(\alpha+r)\,\Gamma(\beta+k)\,\Gamma(\alpha+\beta)}{\Gamma(\alpha+r+\beta+k)\,\Gamma(\alpha)\,\Gamma(\beta)},$$

which, when $r = 1$, becomes

$$f(k \mid \alpha, \beta, 1) = \frac{\Gamma(1+k)}{k!\,\Gamma(1)} \cdot \frac{\Gamma(\alpha+1)\,\Gamma(\beta+k)\,\Gamma(\alpha+\beta)}{\Gamma(\alpha+1+\beta+k)\,\Gamma(\alpha)\,\Gamma(\beta)} = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\,\Gamma(\beta)} \cdot \frac{\Gamma(k+\beta)\,\Gamma(\alpha+1)}{\Gamma(\alpha+1+\beta+k)}.$$

Given that we have defined $k$ over the set of strictly positive integers, we rewrite the above in terms of $k-1$, obtaining

$$f(k \mid \alpha, \beta, 1) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\,\Gamma(\beta)} \cdot \frac{\Gamma(k+\beta-1)\,\Gamma(\alpha+1)}{\Gamma(\alpha+\beta+k)},$$

which has the same form as (S2), retrieving the result. We note that the distribution in (S2) is a particular case of the beta-negative-binomial, called the beta-geometric distribution.
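The marginalisation in (S1)–(S2) can also be verified by simulation (our sketch, not part of the paper): draw $p$ from $\pi(p) \propto p^{\beta-1}(1-p)^{\alpha-1}$, draw $k$ geometrically given $p$, and compare empirical frequencies with (S2).

```python
import random
from math import exp, lgamma

def bg_pmf(k, alpha, beta):
    """Beta-geometric pmf of (S2)."""
    return exp(lgamma(alpha + beta) - lgamma(alpha) - lgamma(beta)
               + lgamma(k + beta - 1) + lgamma(alpha + 1) - lgamma(k + alpha + beta))

rng = random.Random(1)
alpha, beta, N = 2.5, 1.5, 200_000
counts = {}
for _ in range(N):
    p = rng.betavariate(beta, alpha)   # density proportional to p^(beta-1) (1-p)^(alpha-1)
    k = 1                              # geometric on {1,2,...} with success probability 1-p
    while rng.random() < p:
        k += 1
    counts[k] = counts.get(k, 0) + 1

# Empirical frequency of k = 1 should match (S2); here P(1) = alpha/(alpha+beta) = 0.625.
assert abs(counts[1] / N - bg_pmf(1, alpha, beta)) < 0.01
```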

## Appendix 2: Simulation study

The simulation study has been performed by drawing 100 samples from different mixture models, encompassing univariate and multivariate mixtures. We have considered the sample sizes $n = 50, 100, 500, 2000$ and, for the multivariate cases, we have considered distributions of different dimensions.

As the focus of the paper is on making inference about the number of components in a finite mixture, we have not placed particular emphasis on the prior distributions for the mixture weights and parameters, or on the computational techniques. In the univariate case, the models considered (Richardson and Green, 1997) have normal components, $N(\mu_j, \lambda_j^{-1})$, with the following independent priors on the parameters:

$$\mu_j \sim N(\mu_0, \sigma_0^2), \qquad \lambda_j \sim \mathrm{Gamma}(a, b).$$

The hyper-parameters of the normal prior have been set as follows:

$$\mu_0 = (\max_i x_i + \min_i x_i)/2 \qquad \text{and} \qquad \sigma_0 = \max_i x_i - \min_i x_i,$$

while for the gamma prior we have set:

$$a = 2 \qquad \text{and} \qquad b \sim \mathrm{Gamma}(0.2, 10/\sigma_0^2).$$
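The data-dependent settings above can be collected in a small helper (a sketch; the function name is ours, not from the papers' code):

```python
def rg_hyperparameters(x):
    """Richardson-Green (1997) style data-dependent hyper-parameters.

    Returns mu0 and sigma0 for the normal prior on the means, the fixed
    shape a = 2, and the two Gamma(0.2, 10/sigma0^2) parameters of the
    hyper-prior on the gamma parameter b.
    """
    mu0 = (max(x) + min(x)) / 2.0
    sigma0 = max(x) - min(x)
    a = 2.0
    b_hyper = (0.2, 10.0 / sigma0 ** 2)   # b ~ Gamma(0.2, 10/sigma0^2)
    return mu0, sigma0, a, b_hyper
```

Note how both $\mu_0$ and $\sigma_0$ depend only on the range of the data, so the prior on the means covers the observed support without being tuned by hand.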

For the multivariate case, for each component $t$ we have normal components with mean $\mu_t$ and precision $\lambda_t$, with

$$\lambda_t \sim \mathrm{Gamma}(d, h), \qquad \mu_t \mid \lambda_t \sim N(0, (w\lambda_t)^{-1}).$$

The hyper-parameters $d$, $h$ and $w$ have been set to fixed values. To perform the analysis, given the non-triviality of reversible jump Markov chain Monte Carlo, we have opted for the algorithm implemented in Miller and Harrison (2018), using the code provided by the authors.

Finally, each generated sample has been analysed by considering the proposed loss-based prior as well as two options available in the literature (which have a non-informative flavour): the uniform prior and the Poisson(1) prior (Nobile, 2005). The algorithm implemented considers the maximum number of components equal to 50, so that the uniform prior for $k$ is defined over $\{1, \dots, 50\}$. Note that the truncation is necessary when the uniform prior is employed to ensure a proper posterior, while it does not impact the performance of the other two priors. In any case, the truncation point is sufficiently large not to impact the analysis of both real and simulated data.

For the univariate case, we have considered the mixtures in Table 4, while for the multivariate case, we have considered three-component mixtures of the following form:

$$x_i \sim \tfrac{1}{3} N(m, I) + \tfrac{1}{3} N(0, I) + \tfrac{1}{3} N(-m, I),$$

for suitable values of the mean vector $m$. This choice (Miller and Harrison, 2018) prevents the posterior from concentrating too quickly.
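The simulated scenarios can be reproduced with a short sampler (a sketch; taking $m$ to be a common value on every coordinate is an assumption of ours, made for illustration):

```python
import random

def sample_three_component(n, dim, m, seed=0):
    """Draw n points in R^dim from (1/3)N(m,I) + (1/3)N(0,I) + (1/3)N(-m,I),
    with m applied to every coordinate (illustrative choice)."""
    rng = random.Random(seed)
    centres = (m, 0.0, -m)
    data = []
    for _ in range(n):
        c = rng.randrange(3)                                  # equal weights 1/3
        data.append([rng.gauss(centres[c], 1.0) for _ in range(dim)])
    return data

data = sample_three_component(500, 3, 4.0)
```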

To remove any possible effect of the priors on the mixture weights and parameters, we have applied a large burn-in period. That is, we have run the simulation for 100,000 iterations and kept the last 1,000 only. As such, convergence has been ensured and the comparison between the priors for the number of components is reliable and meaningful.

For each scenario, we have computed the average (over the 100 draws) of the posterior mode, as well as the average 95% credible interval; the lower and upper limits of these intervals have been obtained by averaging, respectively, the 100 lower limits and the 100 upper limits of the posterior credible intervals. The results for the univariate cases are reported from Table 5 to Table 12, while for the multivariate, from Table 13 to Table 15.

From the simulation study, in particular when comparing the three priors, we can observe the following. In the univariate cases, the uniform prior tends to yield the largest posterior credible intervals. This is expected, as the prior mass on each value of $k$ is constant. Whilst this may be regarded as a disadvantage when the sample size is relatively low, we see that, with the exception of a few small-sample scenarios, the posterior credible intervals always contain the true value of the number of components. At the opposite end, we see that the Poisson prior (with parameter equal to 1) tends to return the smallest credible intervals. However, due to its fast decrease to zero for increasing $k$, there are cases where, if the information in the sample is not sufficient, it fails to contain the true parameter value. The proposed loss-based prior, on the other hand, has a behaviour that lies in between: its credible interval size is closer to that of the Poisson prior (rather than the uniform prior) and, with few exceptions, it always contains the true value.

When we compare the priors in the multivariate setting, we note very few differences. Posterior average modes and credible intervals have similar values, with negligible impact of the different dimensionalities.

## References

• Baudry et al. (2010) Baudry, J. P., Raftery, A. E., Celeux, G., Lo, K., and Gottardo, R. (2010). Combining mixture components for clustering. Journal of computational and graphical statistics 19(2), 332–353
• Berk (1966) Berk, R.H. (1966). Limiting behaviour of posterior distributions when the model is incorrect. Ann. of Math. Statist. 37, 51–58
• Bernardo and Smith (1994) Bernardo, J.M. and Smith, A.F.M. (1994). Bayesian Theory. New York: John Wiley & Sons.
• Celeux et al. (2018) Celeux, G., Frühwirth–Schnatter, S. and Robert, C.P. (2018). Handbook of Mixture Analysis. Boca Raton, FL: CRC Press.
• Dias et al. (2010) Dias, J.G., Vermunt, J.K. and Ramos, S. (2010). Mixture hidden Markov models in finance research. In A. Fink et al. (eds.), Advances in Data Analysis, Data Handling and Business Intelligence, Studies in Classification, Data Analysis, and Knowledge Organization. Springer-Verlag, Berlin Heidelberg
• Frühwirth–Schnatter (2006) Frühwirth–Schnatter, S. (2006). Finite Mixture and Markov Switching Models. Springer-Verlag, New York.
• Grazian and Robert (2018) Grazian, C., and Robert, C. P. (2018). Jeffreys priors for mixture estimation: Properties and alternatives. Computational Statistics & Data Analysis 121, 149–163
• Green (1995) Green, P.J. (1995). Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika 82, 711–732
• Handcock et al. (2007) Handcock, M.S., Raftery, A.E. and Tantrum, J.M. (2007). Model-based clustering for social networks. J. R. Stat. Soc. A 170, 301–354
• Jain and Neal (2004) Jain, S., and Neal, R. M. (2004). A Split-Merge Markov Chain Monte Carlo Procedure for the Dirichlet Process Mixture Model. Journal of Computational and Graphical Statistics 13, 158–182
• Jain and Neal (2007) Jain, S., and Neal, R. M. (2007). Splitting and Merging Components of a Nonconjugate Dirichlet Process Mixture Model. Bayesian Analysis 2, 445–472
• Jeffreys (1961) Jeffreys, H. (1961). Theory of Probability. London: Oxford University Press.
• Juárez and Steel (2010) Juárez, M.A. and Steel, M.F.J. (2010). Model-based clustering of non-Gaussian panel data based on skew-t distributions. J. Bus. Econ. Stat. 28, 52–66
• Kullback and Leibler (1951) Kullback, S. and Leibler, R.A. (1951). On information and sufficiency. Ann. of Math. Statist. 22, 79–86
• Lo (1984) Lo, A.Y. (1984). On a class of Bayesian nonparametric estimates: I. density estimates. Ann. of Statist. 12, 351–357
• Malsiner–Walli et al. (2016) Malsiner–Walli, G., Frühwirth–Schnatter, S., and Grün, B. (2016). Model-based clustering based on sparse finite Gaussian mixtures. Stat. Comput. 26, 303–324
• McCullagh and Yang (2008) McCullagh, P. and Yang, J. (2008). How many clusters? Bayesian Analysis 3, 101–120
• McLachlan and Peel (2000) McLachlan, G.J. and Peel, D. (2000). Finite Mixture Models. New York: J. Wiley
• McLachlan et al. (2002) McLachlan, G.J., Bean, R.W. and Peel, D. (2002). A mixture-model based approach to the clustering of microarray expression data. Bioinformatics 18, 413–422
• Merhav and Feder (1998) Merhav, N. and Feder, M. (1998). Universal prediction. IEEE Trans. Inf. Theory 44, 2124–2147
• Marin et al. (2005) Marin, J.M., Mengersen, K.L. and Robert, C.P. (2005). Bayesian Modelling and Inference on Mixtures of Distributions. D. Dey and C.R. Rao (eds.), Handbook of Statistics, Vol. 25, pp. 459–507 Elsevier.
• Miller and Harrison (2018) Miller, J.W. and Harrison, M.T. (2018). Mixture models with a prior on the number of components. J. American Stats. Assoc. 113, 340–356
• Neal (1992) Neal, R.M. (1992). Bayesian Mixture Modeling, in Maximum Entropy and Bayesian Methods eds. C.R. Smith, G.J. Erickson and P.O. Neudorfer. New York: Springer
• Nobile (1994) Nobile, A. (1994). Bayesian Analysis of Finite Mixture Distributions. Ph.D. thesis, Pittsburgh, PA: Department of Statistics, Carnegie Mellon University.
• Nobile (2004) Nobile, A. (2004). On the posterior distribution of the number of components in a finite mixture. Ann. of Statist. 32, 2044–2073
• Nobile (2005) Nobile, A. (2005). Bayesian finite mixtures: a note on prior specification and posterior computation. Technical Report Department of Statistics, University of Glasgow
• Nobile and Fearnside (2007) Nobile, A. and Fearnside, A.T. (2007). Bayesian finite mixtures with an unknown number of components: the allocation sampler. Statistics and Computing 17, 147–162
• Phillips and Smith (1996) Phillips, D.B and Smith, A.F.M. (1996). Bayesian model comparison via jump diffusions. Gilks W.R., Richardson S. and Spiegelhalter D.J. (eds.), Markov Chain Monte Carlo in Practice Chapman & Hall, London
• Reynolds et al. (2000) Reynolds, D.A., Quatieri, T.F. and Dunn, R.B. (2000). Speaker verification using adapted Gaussian mixture models. Digital Signal Processing 10, 19–41
• Richardson and Green (1997) Richardson, S. and Green, P. J. (1997). On Bayesian Analysis of Mixtures With an Unknown Number of Components. J. R. Stat. Soc. B 59, 731–792
• Roeder (1990) Roeder, K. (1990). Density Estimation With Confidence Sets Exemplified by Superclusters and Voids in the Galaxies. J. American Stats. Assoc. 85, 617–624
• Rousseau and Mengersen (2011) Rousseau, J., and Mengersen, K. (2011). Asymptotic behaviour of the posterior distribution in overfitted mixture models. Journal of the Royal Statistical Society: Series B 73(5), 689–710
• Stephens (2000) Stephens, M. (2000). Bayesian analysis of mixture models with an unknown number of components - an alternative to reversible jump methods. Ann. of Statist. 28, 40–74
• Titterington et al. (1985) Titterington, D.M., Smith, A.F.M. and Makov, U.E. (1985). Statistical Analysis of Finite Mixture Distributions. New York: J. Wiley
• Villa and Walker (2015) Villa, C. and Walker, S.G. (2015). An objective Bayesian criterion to determine model prior probabilities. Scandinavian Journal of Statistics 42, 947–966
• Walker (2007) Walker, S.G. (2007). Sampling the Dirichlet mixture model with slices. Communications in Statistics: Simulation and Computation 36, 45–54
• Yeung et al. (2001) Yeung, K.Y., Fraley, C., Murua, A., Raftery, A.E. and Ruzzo, W.L. (2001). Model-based clustering and data transformation for gene expression data. Bioinformatics 17, 977–987