1 Introduction
This paper takes a novel look at the construction of a prior distribution for the number of components of finite mixture models. These models represent a flexible and rich way of modelling data, allowing one to extend the collection of probability distributions that can be considered and used. Mixture models have been widely developed and researched for over a century. To name a few key contributions, we have
Titterington et al. (1985), Neal (1992), McLachlan and Peel (2000), Marin et al. (2005), Frühwirth–Schnatter (2006) and the forthcoming Celeux et al. (2018). Besides the general literature on mixture models, a wide range of applications have been discussed, including genetics and gene expression profiling (McLachlan et al., 2002; Yeung et al., 2001), economics and finance (Juárez and Steel, 2010; Dias et al., 2010), social sciences (Reynolds et al., 2000; Handcock et al., 2007) and more. The basic idea of a mixture model is to assume that observations are drawn from a density which is the result of a combination of $k$ components
(1) $f(x) = \sum_{j=1}^{k} w_j\, f_j(x \mid \theta_j),$

where the form of $f_j(\cdot \mid \theta_j)$ is known for each $j$, while the parameters $\theta_j$ and the weights $w_j$ are unknown and have to be estimated. In this work, we assume $k$
to be unknown as well and, in accordance with the Bayesian framework, we assign a prior distribution to it. Besides the above approach, which is what we use here, other methods to deal with an unknown $k$ have been presented. One way is based on model selection and consists in fitting mixtures with $k = 1, \dots, K$ (for a suitable $K$) and comparing the models through some index, such as the Bayesian information criterion; see, for example, Baudry et al. (2010). Alternatively, one could set a large $k$ and let the posterior behaviour of the weights identify which components are meaningful. This is known as an overfitted mixture model, and the aim is to define a prior distribution which has a conservative property in reducing a posteriori the number of meaningful components (Rousseau and Mengersen, 2011); Grazian and Robert (2018) have discussed the same approach by using the Jeffreys prior for the mixture weights conditionally on the other parameters.
Several techniques have been proposed to deal with $k$ through the use of a prior $\pi(k)$: Richardson and Green (1997), Stephens (2000), Nobile and Fearnside (2007) and McCullagh and Yang (2008). A well-known and widely used method is reversible jump Markov chain Monte Carlo (Green, 1995) which, due to its nontrivial set up, has led to the search for alternatives. A recent and interesting one is proposed by Miller and Harrison (2018), where the model in (1) is written as

$k \sim \pi(k)$
$(w_1, \dots, w_k) \mid k \sim \mathrm{Dirichlet}(\gamma, \dots, \gamma)$
$\theta_1, \dots, \theta_k \mid k \overset{iid}{\sim} H$
$z_1, \dots, z_n \mid w \overset{iid}{\sim} w$
$x_i \mid z_i, \theta \sim f(\cdot \mid \theta_{z_i}), \qquad i = 1, \dots, n,$

where $\pi(k)$ is the prior on the number of components defined over the set $\{1, 2, \dots\}$, $H$ is the prior base measure, both the $\theta_j$ and the $z_i$ are conditionally independent and identically distributed, and the $z_i$ are latent variables describing the component membership. The authors then highlight the parallel with the stick-breaking representation of the Dirichlet process mixture model and show how Dirichlet process mixture model samplers can be applied to finite mixtures as well. Both the simulations and the real data analysis performed in the present work have been obtained by using the Jain–Neal split-merge samplers (Jain and Neal, 2004, 2007), as implemented in Miller and Harrison (2018).
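As a concrete illustration, the hierarchical model above can be sketched as a generative procedure. The univariate normal kernel, the $N(0, 5^2)$ base measure and the hyperparameter values below are illustrative assumptions, not the choices of Miller and Harrison (2018):

```python
import numpy as np

rng = np.random.default_rng(1)

def generate(n, sample_k, gamma=1.0):
    # One draw from the hierarchical model: number of components k,
    # Dirichlet weights, component parameters, memberships, observations.
    # Unit-variance normal kernels and an N(0, 5^2) base measure are
    # assumed here purely for illustration.
    k = sample_k(rng)
    w = rng.dirichlet(np.full(k, gamma))   # (w_1, ..., w_k) | k
    theta = rng.normal(0.0, 5.0, size=k)   # theta_j drawn iid from H
    z = rng.choice(k, size=n, p=w)         # latent memberships z_i
    x = rng.normal(theta[z], 1.0)          # x_i | z_i, theta
    return k, z, x

# Example: a geometric prior on k (support {1, 2, ...})
k, z, x = generate(200, lambda r: int(r.geometric(0.5)))
```

In a full analysis the memberships and parameters would of course be inferred by a sampler rather than simulated forward; the sketch only mirrors the generative structure of the model.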
In terms of the determination of $\pi(k)$, which is the focus of this work, the literature is definitely sparse. In particular, it appears that there is only one proposed prior for $k$ with a noninformative flavour, that is, the Poisson(1) prior (Nobile, 2005). Although other authors proposed to use a prior proportional to a Poisson distribution, see for example Phillips and Smith (1996) and Stephens (2000), only Nobile (2005) gave some theoretical justification on how to choose the Poisson parameter when there is a lack of prior knowledge about $k$. Another option, suitable when there is not sufficient prior information, would be to assign equal prior mass to every value of $k$; however, should one wish to consider, at least theoretically, the possibility of an infinite support for the number of components, this last solution would not be viable, or would need a truncation of the support which might influence inference. Finally, the geometric distribution depicts a possible representation of prior uncertainty
(Miller and Harrison, 2018), although no discussion is devoted to setting the value of its parameter in a scenario of insufficient prior information about the number of components. Although the illustrations we present here refer to mixtures of univariate and multivariate normal densities, the loss-based prior for $k$ we introduce does not depend on the form of the component densities $f_j$, and is therefore suitable for any mixture. Throughout the paper we adopt, for the weights and the component parameters, the priors proposed in Miller and Harrison (2018); this will not affect the analysis of the results and the comparisons among different priors for $k$.
2 Prior for the number of components
A finite mixture distribution (or, simply, mixture model) is based on the assumption that a set of observed random variables $x_1, \dots, x_n$ has been generated by a process that can be represented by a weighted sum of probability distributions. That is,

(2) $f(x) = \sum_{j=1}^{k} w_j\, f_j(x \mid \theta_j),$

where $f_j(\cdot \mid \theta_j)$ is the probability distribution of the $j$th component, $j = 1, \dots, k$, $\theta_j$ is the (possibly vector-valued) parameter of component $j$, and $w_1, \dots, w_k$ are the weights of the components, with $w_j > 0$ for $j = 1, \dots, k$ and $\sum_{j=1}^{k} w_j = 1$. For the model (2) the prior can be specified as follows:

(3) $\pi(k, w, \theta) = \pi(k)\, \pi(w \mid k)\, \pi(\theta \mid k).$
As mentioned above, the aim of this paper is to define a prior for $k$; therefore, the prior distributions for $w$ and $\theta$ will be chosen to be proper “standard” priors, minimally informative if necessary; see, for example, Richardson and Green (1997) or Miller and Harrison (2018).
The posterior for $k$ is then given by

(4) $\pi(k \mid x) \propto \pi(k) \int p(x \mid k, w, \theta)\, \pi(w \mid k)\, \pi(\theta \mid k)\, dw\, d\theta.$
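Once the marginal likelihood $m(x \mid k) = \int p(x \mid k, w, \theta)\,\pi(w \mid k)\,\pi(\theta \mid k)\,dw\,d\theta$ is available (typically via MCMC output), the posterior for $k$ is a simple renormalisation. A minimal sketch, with made-up marginal likelihood values and a geometric prior chosen purely for illustration:

```python
# Hypothetical marginal likelihood values m(x | k) for k = 1..5;
# in a real analysis these come from the sampler, not from a table.
marglik = {1: 1e-12, 2: 4.0e-9, 3: 6.0e-9, 4: 2.0e-9, 5: 4.0e-10}

def prior(k, u=0.5):
    # Illustrative geometric prior pi(k) = (1 - u) u^(k - 1)
    return (1 - u) * u ** (k - 1)

# Posterior over k: pi(k | x) ∝ pi(k) m(x | k), normalised
unnorm = {k: prior(k) * m for k, m in marglik.items()}
z = sum(unnorm.values())
posterior = {k: v / z for k, v in unnorm.items()}
mode = max(posterior, key=posterior.get)  # posterior modal number of components
```

The point of the sketch is only that the prior $\pi(k)$ enters multiplicatively; how strongly it tempers the marginal likelihood is exactly what the rest of the paper studies.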
It is now fundamental to discuss the support of $\pi(k)$. Although for practical purposes the range of values $k$ can take is finite, it may be appropriate to define the prior over $\{1, 2, \dots\}$. In fact, by truncating the support of $k$ there may be possible distortions of the posterior around the boundary, affecting the inferential results. Note, however, that truncation is needed when using a uniform prior, since the prior on $k$ must be proper, as proved by Nobile (2005). It seems, therefore, more reasonable to use a proper prior defined on $\{1, 2, \dots\}$.
To obtain the loss-based prior for $k$, we employ the approach introduced in Villa and Walker (2015), where a worth is associated to each value of $k$ by considering the loss one would incur if that value were removed and it were the true number of components. The prior is then obtained by linking the worth through the self-information loss function (Merhav and Feder, 1998). In brief, the self-information loss function associates a loss to a probability statement, say $P(A)$, and it has the form $-\log P(A)$. To determine the worth of a given $k$, we proceed as follows. First, we note that there is no loss in information. In fact, a well-known Bayesian property (Berk, 1966) shows that the posterior distribution of a parameter (if the true parameter value is removed) accumulates, in a Kullback–Leibler sense, on the parameter value that identifies the model most similar to the true one. This is equivalent to minimising the loss in information one would incur. Now, if we consider a mixture with $k$ components, the minimum loss would be measured from any mixture with more than $k$ components. In fact, a mixture with $k+1$ components has more parameters (i.e. uncertainty) than the mixture with $k$ components, meaning that the informational content of the former is larger than that of the latter and the loss in information is zero. However, should we consider only the loss in information, the resulting prior would be the uniform, that is $\pi(k) \propto 1$.
To have a sensible prior, we add a loss component that takes into consideration the increasing complexity of the mixture model. This can be interpreted as the loss in complexity that we incur if we remove the mixture with $k$ components when it is the true one, and it is related to the number of parameters we avoid estimating (and the number of extra observations we would need, in general, to have reliable estimates). As such, the total loss associated to the mixture with $k$ components is proportional to $k$, yielding

(5) $\pi(k) \propto e^{-ck}, \qquad k = 1, 2, \dots,$

where the constant $c > 0$ is included as loss functions are defined up to a constant. Although the prior in (5) could be directly used, with the interpretation that $c$ is a hyperparameter which allows one to control for sparsity, our recommendation is to reparametrise it by setting $u = e^{-c}$ and assign a suitable prior to $u$. In particular, by having $u \sim \mathrm{Beta}(a, b)$, the prior for $k$ is a particular beta-negative-binomial, that is a beta-geometric distribution, when the support for $k$ is infinite, as the following Definition 2.1 (whose detailed derivation is in Appendix 1) shows.
Definition 2.1
Consider the prior distribution for the number of components of a finite mixture model, as defined in (5), where we set $u = e^{-c}$, so that $u \in (0, 1)$. If we choose $u \sim \mathrm{Beta}(a, b)$, with $a, b > 0$, then

$\pi(k \mid u) = (1 - u)\, u^{k-1}, \qquad k = 1, 2, \dots,$

which is a geometric distribution with parameter $1 - u$, and

(6) $\pi(k) = \frac{B(a + k - 1,\, b + 1)}{B(a, b)}, \qquad k = 1, 2, \dots,$

which is a beta-negative-binomial distribution with the number of failures before the experiment is stopped equal to 1, and shape parameters $b$ and $a$.

The prior in (6) is strictly positive on the whole support of $k$. This is a necessary condition (Nobile, 1994) to have consistency on the number of components. In addition, the prior in (6) is proper, which is another requirement to yield a proper posterior (Nobile, 2004) when the support is $\{1, 2, \dots\}$. On this aspect, as the Jeffreys prior for the parameter of a geometric distribution is improper, the resulting prior for $k$ would be improper as well. As such, a default choice for $u$ should be made on different grounds. In particular, the default choice should not give any preference to particular values of $u$, and this can be achieved by setting $a = b = 1$. The resulting prior is then a beta-negative-binomial with all parameter values equal to one, which can be rewritten as
(7) $\pi(k) = \frac{1}{k(k+1)}, \qquad k = 1, 2, \dots$
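The default prior in (7) can be verified numerically against the general form (6): with $a = b = 1$ the masses reduce to $1/(k(k+1))$, and properness follows because they telescope. A small sketch:

```python
from math import lgamma, exp

def beta_fn(x, y):
    # Beta function computed via log-gamma for numerical stability
    return exp(lgamma(x) + lgamma(y) - lgamma(x + y))

def prior_k(k, a=1.0, b=1.0):
    # pi(k) = B(a + k - 1, b + 1) / B(a, b), k = 1, 2, ...
    return beta_fn(a + k - 1, b + 1) / beta_fn(a, b)

# With a = b = 1 the prior reduces to 1 / (k (k + 1)):
for k in range(1, 20):
    assert abs(prior_k(k) - 1.0 / (k * (k + 1))) < 1e-12

# Properness: the masses telescope, summing to 1 - 1/(N + 1) over k <= N
total = sum(prior_k(k) for k in range(1, 1001))
assert abs(total - (1.0 - 1.0 / 1001)) < 1e-9
```
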
In a more general setting, the parameters $a$ and $b$ of the Beta prior on $u$ can be used to reflect available prior information about the true number of components. The expectation of the prior in (6) is

(8) $E(k) = \frac{a + b - 1}{b - 1}, \qquad b > 1,$

while the variance has the form

$\mathrm{Var}(k) = \frac{a\, b\, (a + b - 1)}{(b - 1)^2 (b - 2)}, \qquad b > 2.$

From equation (8) we see that, as $a$ increases, so does $E(k)$. So, for a given $b$, the hyperparameter $a$ can be interpreted as the quantity controlling how many components in the mixture we expect a priori. The choice of $b$, among values strictly larger than 2, allows one to control the variance of the prior, i.e. how certain (or uncertain) we are a priori about the true value of $k$.
If the support for $k$ is finite, say $\{1, \dots, K\}$, the prior for the number of components will have the form:

(9) $\pi(k) = \frac{B(a + k - 1,\, b + 1)}{\sum_{j=1}^{K} B(a + j - 1,\, b + 1)}, \qquad k = 1, \dots, K,$

whose normalising constant does not have a closed form. Although the prior in (9) can be easily implemented in a Markov chain Monte Carlo procedure, one has to be careful as its performance might depend on the choice of $K$. Besides this, the prior certainly yields a proper posterior for $k$ and is consistent on the number of components.
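On a finite support the same beta-function weights are simply renormalised numerically, which is straightforward to implement; a sketch with the default $a = b = 1$:

```python
from math import lgamma, exp

def beta_fn(x, y):
    return exp(lgamma(x) + lgamma(y) - lgamma(x + y))

def truncated_prior(K, a=1.0, b=1.0):
    # pi(k) ∝ B(a + k - 1, b + 1) over k = 1, ..., K; the normalising
    # constant has no closed form and is computed numerically.
    w = [beta_fn(a + k - 1, b + 1) for k in range(1, K + 1)]
    z = sum(w)
    return [wk / z for wk in w]

p = truncated_prior(50)
# With a = b = 1 the untruncated masses are 1/(k(k+1)), which sum to
# 50/51 over k = 1..50, so each mass is rescaled by the factor 51/50.
assert abs(p[0] - 0.5 * 51 / 50) < 1e-12
```
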
3 Illustrations
To illustrate the performance of the loss-based prior, we have run a simulation study and analysed two data sets. In both cases, we have considered univariate and multivariate scenarios, comparing the proposed prior, under default settings, with current alternatives found in the literature.
Before describing the analysis and illustrating the results, the following clarifications have to be made. First, as the aim of this paper is to propose a novel prior distribution for the number of components, we do not discuss in detail the priors assigned to the model weights and to the parameters of the components of the mixture. Second, for the same reason, we limit the examples to mixtures of normal densities. In fact, keeping both model and priors relatively straightforward allows one to better appreciate any difference among the priors. Finally, the computational algorithm implemented assumes that the maximum number of components in the mixture is 50, so that the uniform prior is defined over $\{1, \dots, 50\}$; although the truncation is necessary for the uniform prior only, so as to have a proper posterior, the choice of 50 is sufficiently large not to interfere with any of the analyses performed.
3.1 Real data sets
In this section we illustrate the performance of the prior by analysing two available data sets. The first is the galaxy data set (Roeder, 1990), which is considered a benchmark for comparison in the univariate case. We also consider a multivariate case; in particular, the gene expression data set used to discriminate cancer subtypes in Miller and Harrison (2018).
The galaxy data set contains the velocities of 82 galaxies in the Corona Borealis region. Given that the focus here is on the prior for the number of components, we do not go beyond an already tested set up for the model. In particular, we use the model of Richardson and Green (1997), where the components of the mixture are normal densities with independent priors for the parameters: normal priors for the means and gamma priors for the precisions, with the remaining hyperparameters either fixed or given data-dependent values as in Richardson and Green (1997).
k  1  2  3  4  5  6  7  8  9  10  

LB  0  0  0.18  0.24  0.22  0.16  0.10  0.05  0.03  0.01 
UN  0  0  0.06  0.13  0.19  0.20  0.16  0.11  0.07  0.04 
PO  0  0  0.58  0.31  0.09  0.01  0  0  0  0 
Table 1 shows the posterior distribution for the first ten values of $k$ obtained by implementing the loss-based prior (LB), the uniform prior (UN) and the Poisson(1) prior (PO). The posterior modes are, respectively, at $k = 4$, $k = 6$ and $k = 3$. The posteriors are also plotted in Figure 1. There is no unanimous agreement on the number of components in the literature, obviously, and this is supported by the results in Table 1, which shows estimates of $k$ comparable to what has previously been identified. However, while the posterior 95% credible intervals obtained with the loss-based prior and the uniform prior are sensible, the interval for the Poisson(1) prior appears to be quite narrow, excluding values of $k$ previously estimated in the literature. It seems that the loss-based prior provides an intermediate posterior distribution, between the one deriving from the Poisson prior, which is very peaked around its mode, and the one deriving from the uniform prior, which gives non-negligible posterior mass to large values of $k$.

The analysis of the cancer data has given very similar results for the three priors. As Figure 2 shows, the posterior distributions for $k$ do not differ much, which is supported by the posterior modes and the posterior 95% credible intervals, which essentially coincide in all cases. It is obvious that the amount of information about $k$ in the data is sufficiently strong to dominate any of the priors used.
3.2 Simulation study
The simulation study consisted in drawing 100 samples per scenario and computing summaries of the posterior distributions for the number of components. The details of the simulation study are discussed in Appendix 2. Briefly, we have considered sample sizes of 50, 100, 500 and 2000. The choice of these sample sizes, for the univariate case, is connected with McCullagh and Yang (2008), where the challenges of estimating a number of components comparable to the number of observations are discussed; thus, choosing a minimum number of observations larger than the maximum number of components considered allows us to avoid the pitfalls raised in McCullagh and Yang (2008). For the univariate case, we have considered mixtures with different numbers of components, while for the multivariate case we have considered multivariate mixtures of normal densities of different dimensions.
Number of components  

n  Mode  C.I.  Mode  C.I.  Mode  C.I.  
50  2.07  (2.00, 5.28)  4.23  (2.47, 10.77)  1.72  (1.00, 6.05) 
100  2.02  (2.00, 4.54)  3.26  (2.56, 6.38)  2.76  (1.41, 6.88) 
500  2.00  (2.00, 3.47)  3.38  (3.07, 4.99)  5.65  (3.81, 9.85) 
2000  2.03  (2.00, 3.15)  4.00  (3.91, 5.16)  7.06  (5.74, 11.23) 
In Table 2 we report the posterior average mode and average 95% credible intervals for the univariate case. Note that the lower and upper limits of the average posterior credible intervals have been obtained by averaging, respectively, the lower and upper limits over the 100 samples. We have omitted some of the scenarios here; full results are available in Appendix 2. We note that, as one would expect, the results improve as the sample size grows, and this is reflected in the values of the posterior (average) mode and in the size of the posterior (average) credible interval. Although in some scenarios the posterior mode appears not to concentrate on the true number of components, in particular for the smaller sample sizes, the true value is within the credible intervals.
The results concerning the mixture of three multivariate normal densities are shown in Table 3. We again note an improvement in the (average) posterior mode and in the (average) posterior 95% credible interval as $n$ increases. There is also an improvement in the interval size as the dimensionality increases. It appears that, for a given sample size, the prior is more accurate in detecting the right number of components when the dimension increases, and this is in line with the results obtained in the analysis of the cancer gene data set.
Number of dimensions  

n  Mode  C.I.  Mode  C.I.  Mode  C.I.  
50  2.05  (2.00, 3.54)  2.11  (2.00, 3.26)  2.06  (2.00, 3.13) 
100  2.61  (2.18, 3.87)  2.56  (2.22, 3.71)  2.24  (2.11, 3.33) 
500  3.01  (3.00, 3.91)  3.00  (3.00, 3.26)  3.00  (3.00, 3.08) 
2000  3.00  (3.00, 3.14)  3.00  (3.00, 3.02)  3.00  (3.00, 3.00) 
4 Discussion
To make inference on the number of components in a finite mixture model within the Bayesian framework, one may consider an infinite support. This avoids any potential dependence on an arbitrarily set maximum number of components, giving at the same time more versatility and elegance. The loss-based prior provides a flexible solution to the problem, as it allows one to incorporate any prior information one may have, as well as to opt for a default solution in scenarios of actual or alleged prior “ignorance”. In this work, we have shown that, in a setting of limited information, the prior chosen for the number of components influences the posterior distribution; in particular, the uniform prior, which is often used as a default prior, does not seem to be conservative, allowing the posterior distribution to be quite spread over the possible values. In terms of inference, some level of conservativeness should be preferred, given that the complexity of the inferential problem explodes with the number of meaningful components. On the other hand, the Poisson(1) prior seems to be too conservative, so that, as is evident in the simulation study, the true value may not even be included in the posterior credible interval.
The analysis of both real and simulated data shows that the loss-based prior represents a good compromise between a prior which excessively penalises for complexity (i.e. the Poisson prior with parameter equal to one) and the uniform prior, which suffers from theoretical and implementation weaknesses.
Appendix 1: Derivation of Definition 2.1
Starting from $\pi(k) \propto e^{-ck}$, we set $u = e^{-c}$, obtaining

(S1) $\pi(k \mid u) = (1 - u)\, u^{k-1}, \qquad k = 1, 2, \dots,$

which is a geometric distribution defined over the positive integers and with parameter $1 - u$. Given the prior $u \sim \mathrm{Beta}(a, b)$, the marginal for the number of components is given by

(S2) $\pi(k) = \int_0^1 (1 - u)\, u^{k-1}\, \frac{u^{a-1} (1 - u)^{b-1}}{B(a, b)}\, du = \frac{B(a + k - 1,\, b + 1)}{B(a, b)},$

where $B(\cdot, \cdot)$ is the beta function. Therefore, the marginal distribution for $k$ in (S2) is a beta-negative-binomial with the parameter expressing the number of failures before stopping equal to 1. In fact, the generic beta-negative-binomial with $r$ failures and shape parameters $\alpha$ and $\beta$ is defined over $\{0, 1, 2, \dots\}$ and has probability mass function

$P(Y = y) = \frac{\Gamma(r + y)}{y!\, \Gamma(r)}\, \frac{B(\alpha + r,\, \beta + y)}{B(\alpha, \beta)},$

which, when $r = 1$, becomes

$P(Y = y) = \frac{B(\alpha + 1,\, \beta + y)}{B(\alpha, \beta)}.$

Given that we have defined $k$ over the set of strictly positive integers, we rewrite the above in terms of $k = y + 1$, obtaining

$P(k) = \frac{B(\alpha + 1,\, \beta + k - 1)}{B(\alpha, \beta)},$

which, with $\alpha = b$ and $\beta = a$, has the same form as (S2), retrieving the result. We note that the distribution in (S2) is a particular case of the beta-negative-binomial, called the beta-geometric distribution.
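The integral leading to (S2) can also be verified numerically, e.g. with a simple midpoint rule; the values of $a$, $b$ and $k$ below are arbitrary illustrative choices:

```python
from math import lgamma, exp

def beta_fn(x, y):
    return exp(lgamma(x) + lgamma(y) - lgamma(x + y))

a, b, k = 2.0, 3.0, 4

# Integrand: geometric pmf (1 - u) u^(k-1) times the Beta(a, b) density
def integrand(u):
    dens = u ** (a - 1) * (1 - u) ** (b - 1) / beta_fn(a, b)
    return (1 - u) * u ** (k - 1) * dens

# Midpoint rule on (0, 1)
n = 20000
approx = sum(integrand((i + 0.5) / n) for i in range(n)) / n

closed_form = beta_fn(a + k - 1, b + 1) / beta_fn(a, b)
assert abs(approx - closed_form) < 1e-6
```
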
Appendix 2: Simulation study
The simulation study has been performed by drawing 100 samples from different mixture models, encompassing univariate and multivariate mixtures. We have considered the sample sizes $n \in \{50, 100, 500, 2000\}$ and, for the multivariate cases, distributions of different dimensions.
As the focus of the paper is on making inference on the number of components of a finite mixture, we have not given particular emphasis to the prior distributions for the mixture weights and parameters, nor to the computational techniques. In the univariate case, the models considered (Richardson and Green, 1997) have normal components, with independent priors on the parameters: normal priors on the means and gamma priors on the precisions, with hyperparameters set in a data-dependent way as in Richardson and Green (1997). For the multivariate case, each component is a multivariate normal density, with priors on the means and covariance matrices as in Miller and Harrison (2018). To perform the analysis, given the nontriviality of reversible jump Markov chain Monte Carlo, we have opted for the algorithm implemented in Miller and Harrison (2018), using the code provided by the authors.
Finally, each generated sample has been analysed by considering the proposed loss-based prior as well as two options available in the literature (which have a noninformative flavour): the uniform prior and the Poisson(1) prior (Nobile, 2005). The algorithm implemented considers the maximum number of components equal to 50, so that the uniform prior for $k$ is defined over $\{1, \dots, 50\}$. Note that the truncation is necessary when the uniform prior is employed to ensure a proper posterior, while it does not impact the performance of the other two priors. In any case, the truncation point is sufficiently large not to impact the analysis of either real or simulated data.
For the univariate case, we have considered the mixtures in Table 4, while for the multivariate case we have considered three-component mixtures of multivariate normal densities. This choice (Miller and Harrison, 2018) prevents the posterior from concentrating too quickly.
Name  Mixture model  

1  
2  
2  
2  
4  
4  
6  
12 
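For reference, drawing a sample from a three-component multivariate normal mixture can be sketched as follows; the weights and means below are illustrative placeholders, not the values used in the simulation study:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mixture(n, d):
    # Three-component multivariate normal mixture with identity
    # covariances; the weights and means are illustrative only.
    weights = np.array([0.45, 0.30, 0.25])
    means = np.stack([np.full(d, m) for m in (-3.0, 0.0, 3.0)])
    z = rng.choice(3, size=n, p=weights)        # component memberships
    x = means[z] + rng.standard_normal((n, d))  # x_i | z_i
    return x, z

x, z = sample_mixture(500, 2)
```
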
To remove any possible effect of the priors on the mixture weights and parameters, we have applied a large burn-in period. That is, we have run the simulation for 100,000 iterations and kept the last 1,000 only. As such, convergence has been ensured and the comparison between the priors for the number of components is reliable and meaningful. Moreover, for computational purposes, we have considered a maximum number of components equal to 50.
For each scenario, we have computed the average (over the 100 draws) of the posterior mode, as well as the average 95% credible interval; the lower and upper limits of these intervals have been obtained by averaging, respectively, the 100 lower limits and the 100 upper limits of the posterior credible intervals. The results for the univariate cases are reported in Tables 5 to 12, while those for the multivariate cases are in Tables 13 to 15.
Loss-based  Uniform  Poi(1)  

Mode  C.I.  Mode  C.I.  Mode  C.I.  
50  1.02  (1, 3.18)  1.10  (1, 6.43)  1.05  (1, 2.62) 
100  1.01  (1, 2.44)  1.05  (1, 4.53)  1.02  (1, 2.24) 
500  1.01  (1, 2.01)  1.03  (1, 2.55)  1.01  (1, 2.03) 
2000  1.00  (1, 1.44)  1.01  (1, 2.11)  1.00  (1, 1.58) 
Loss-based  Uniform  Poi(1)  

Mode  C.I.  Mode  C.I.  Mode  C.I.  
50  2.03  (2, 5.28)  2.15  (2.01, 8.21)  2.02  (2, 3.59) 
100  2.01  (2, 4.54)  2.10  (2.00, 6.47)  2.00  (2, 3.27) 
500  2.00  (2, 3.47)  2.01  (2.00, 4.38)  2.00  (2, 3.03) 
2000  2.01  (2, 3.15)  2.03  (2.00, 3.56)  2.01  (2, 2.85) 
Loss-based  Uniform  Poi(1)  

Mode  C.I.  Mode  C.I.  Mode  C.I.  
50  1.47  (1.12, 4.83)  1.77  (1.29, 8.78)  1.57  (1.14, 3.38) 
100  2.03  (2.00, 4.41)  2.09  (2.00, 6.30)  2.02  (2.00, 3.18) 
500  2.01  (2.00, 3.35)  2.01  (2.00, 4.28)  2.00  (2.00, 3.02) 
2000  2.01  (2.00, 3.16)  2.06  (2.01, 3.56)  2.01  (2.00, 2.88) 
Loss-based  Uniform  Poi(1)  

Mode  C.I.  Mode  C.I.  Mode  C.I.  
50  2.12  (1.55, 6.53)  2.79  (1.83, 10.19)  2.02  (1.54, 3.95) 
100  2.23  (2.00, 5.38)  2.37  (2.02, 7.64)  2.09  (2.00, 3.59) 
500  2.01  (2.00, 3.18)  2.05  (2.00, 3.73)  2.00  (2.00, 3.02) 
2000  2.00  (2.00, 2.70)  2.00  (2.00, 3.03)  2.00  (2.00, 2.49) 
Loss-based  Uniform  Poi(1)  

Mode  C.I.  Mode  C.I.  Mode  C.I.  
50  1.57  (1.03, 6.20)  2.50  (1.29, 11.41)  1.71  (1.02, 3.74) 
100  2.35  (1.55, 6.60)  3.04  (1.96, 9.61)  2.24  (1.50, 4.09) 
500  3.88  (3.23, 7.73)  4.41  (3.44, 9.30)  3.70  (3.13, 5.10) 
2000  4.05  (4.00, 7.46)  4.35  (4.00, 8.61)  4.00  (4.00, 5.18) 
Loss-based  Uniform  Poi(1)  

Mode  C.I.  Mode  C.I.  Mode  C.I.  
50  3.42  (2.47, 10.77)  4.84  (2.96, 17.16)  2.73  (2.25, 4.55) 
100  3.07  (2.56, 6.38)  3.36  (2.68, 8.46)  2.88  (2.40, 4.25) 
500  3.34  (3.07, 4.99)  3.42  (3.09, 5.49)  3.25  (3.03, 4.28) 
2000  4.00  (3.91, 5.16)  4.00  (3.93, 5.28)  3.97  (3.89, 4.56) 
Loss-based  Uniform  Poi(1)  

Mode  C.I.  Mode  C.I.  Mode  C.I.  
50  1.40  (1.00, 6.05)  2.36  (1.06, 11.87)  1.59  (1.00, 3.69) 
100  2.26  (1.41, 6.88)  3.07  (1.97, 10.21)  2.09  (1.44, 4.06) 
500  5.18  (3.81, 9.85)  5.90  (4.06, 11.69)  4.12  (3.30, 5.96) 
2000  6.71  (5.74, 11.23)  7.20  (5.88, 12.33)  5.84  (5.50, 7.18) 
Loss-based  Uniform  Poi(1)  

Mode  C.I.  Mode  C.I.  Mode  C.I.  
50  1.74  (1.22, 6.46)  2.49  (1.43, 11.93)  1.81  (1.19, 3.84) 
100  2.48  (1.73, 7.44)  3.36  (2.00, 10.91)  2.30  (1.63, 4.23) 
500  5.12  (1.73, 7.44)  5.12  (4.06, 12.21)  4.09  (3.39, 6.01) 
2000  10.60  (7.64, 17.75)  11.70  (8.34, 19.69)  6.81  (5.81, 8.59) 
Loss-based  Uniform  Poi(1)  

Mode  C.I.  Mode  C.I.  Mode  C.I.  
50  2.04  (2.00, 3.54)  2.13  (2.00, 4.43)  2.02  (2.00, 3.12) 
100  2.61  (2.18, 3.87)  2.73  (2.30, 4.44)  2.58  (2.16, 3.58) 
500  3.00  (3.00, 3.91)  3.01  (3.00, 4.01)  3.00  (3.00, 3.15) 
2000  3.00  (3.00, 3.14)  3.00  (3.00, 3.33)  3.00  (3.00, 3.05) 
Loss-based  Uniform  Poi(1)  

Mode  C.I.  Mode  C.I.  Mode  C.I.  
50  2.09  (2.00, 3.26)  2.16  (2.00, 4.13)  2.06  (2.00, 3.11) 
100  2.56  (2.22, 3.71)  2.63  (2.26, 3.94)  2.49  (2.22, 3.43) 
500  3.00  (3.00, 3.26)  3.00  (3.00, 4.00)  3.00  (3.00, 3.01) 
2000  3.00  (3.00, 3.02)  3.00  (3.00, 3.09)  3.00  (3.00, 3.01) 
Loss-based  Uniform  Poi(1)  

Mode  C.I.  Mode  C.I.  Mode  C.I.  
50  2.05  (2.00, 3.13)  2.08  (2.00, 3.61)  2.02  (2.00, 3.06) 
100  2.23  (2.11, 3.33)  2.30  (2.11, 3.44)  2.22  (2.11, 3.00) 
500  3.00  (3.00, 3.08)  3.00  (3.00, 3.98)  3.00  (3.00, 3.01) 
2000  3.00  (3.00, 3.00)  3.00  (3.00, 3.00)  3.00  (3.00, 3.00) 
From the simulation study, in particular when comparing the three priors, we can observe the following. In the univariate cases, the uniform prior tends to yield the largest posterior credible intervals. This is expected, as the prior mass on each value of $k$ is constant. Whilst this may be regarded as a disadvantage when $n$ is relatively low, we see that, with few exceptions, the posterior credible intervals always contain the true value of the number of components. On the contrary, we see that the Poisson prior (with parameter equal to 1) tends to return the smallest credible intervals. However, due to its fast decrease to zero for increasing $k$, there are cases where, if the information in the sample is not sufficient, it fails to contain the true parameter value. The proposed loss-based prior, on the other hand, has a behaviour that sits between the other two. In fact, the credible interval size is closer to that of the Poisson prior (rather than the uniform prior) and, with rare exceptions, it always contains the true value.
When we compare the priors in the multivariate setting, we note very few differences. Posterior average modes and credible intervals have similar values, with negligible differences across dimensionalities.
References
 Baudry et al. (2010) Baudry, J. P., Raftery, A. E., Celeux, G., Lo, K., and Gottardo, R. (2010). Combining mixture components for clustering. Journal of computational and graphical statistics 19(2), 332–353
 Berk (1966) Berk, R.H. (1966). Limiting behaviour of posterior distributions when the model is incorrect. Ann. of Math. Statist. 37, 51–58
 Bernardo and Smith (1994) Bernardo, J.M. and Smith, A.F.M. (1994). Bayesian Theory. New York: John Wiley & Sons.
 Celeux et al. (2018) Celeux, G., Frühwirth–Schnatter, S. and Robert, C.P. (2018). Handbook of Mixture Analysis. Boca Raton, FL: CRC Press.

 Dias et al. (2010) Dias, J.G., Vermunt, J.K. and Ramos, S. (2010). Mixture hidden Markov models in finance research. In A. Fink et al. (eds.), Advances in Data Analysis, Data Handling and Business Intelligence, Studies in Classification, Data Analysis, and Knowledge Organization. Springer-Verlag, Berlin Heidelberg
 Frühwirth–Schnatter (2006) Frühwirth–Schnatter, S. (2006). Finite Mixture and Markov Switching Models. Springer-Verlag, New York.
 Grazian and Robert (2018) Grazian, C., and Robert, C. P. (2018). Jeffreys priors for mixture estimation: Properties and alternatives. Computational Statistics & Data Analysis 121, 149–163
 Green (1995) Green, P.J. (1995). Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika 82, 711–732
 Handcock et al. (2007) Handcock, M.S., Raftery, A.E. and Tantrum, J.M. (2007). Model-based clustering for social networks. J. R. Stat. Soc. A 170, 301–354
 Jain and Neal (2004) Jain, S., and Neal, R. M. (2004). A SplitMerge Markov Chain Monte Carlo Procedure for the Dirichlet Process Mixture Model. Journal of Computational and Graphical Statistics 13, 158–182
 Jain and Neal (2007) Jain, S., and Neal, R. M. (2007). Splitting and Merging Components of a Nonconjugate Dirichlet Process Mixture Model. Bayesian Analysis 2, 445–472
 Jeffreys (1961) Jeffreys, H. (1961). Theory of Probability. London: Oxford University Press.

 Juárez and Steel (2010) Juárez, M.A. and Steel, M.F.J. (2010). Model-based clustering of non-Gaussian panel data based on skew-t distributions. J. Bus. Econ. Stat. 28, 52–66
 Kullback and Leibler (1951) Kullback, S. and Leibler, R.A. (1951). On information and sufficiency. Ann. of Math. Statist. 22, 79–86
 Lo (1984) Lo, A.Y. (1984). On a class of Bayesian nonparametric estimates: I. density estimates. Ann. of Statist. 12, 351–357
 Malsiner–Walli et al. (2016) Malsiner–Walli, G., Frühwirth–Schnatter, S., and Grün, B. (2016). Modelbased clustering based on sparse finite Gaussian mixtures. Stat. Comput. 26, 303–324
 McCullagh and Yang (2008) McCullagh, P. and Yang, J. (2008). How many clusters? Bayesian Analysis 3, 101–120
 McLachlan and Peel (2000) McLachlan, G.J. and Peel, D. (2000). Finite Mixture Models. New York: J. Wiley
 McLachlan et al. (2002) McLachlan, G.J., Bean, R.W. and Peel, D. (2002). A mixturemodel based approach to the clustering of microarray expression data. Bioinformatics 18, 413–422
 Merhav and Feder (1998) Merhav, N. and Feder, M. (1998). Universal prediction. IEEE Trans. Inf. Theory 44, 2124–2147
 Marin et al. (2005) Marin, J.M., Mengersen, K.L. and Robert, C.P. (2005). Bayesian Modelling and Inference on Mixtures of Distributions. D. Dey and C.R. Rao (eds.), Handbook of Statistics, Vol. 25, pp. 459–507 Elsevier.
 Miller and Harrison (2018) Miller, J.W. and Harrison, M.T. (2018). Mixture models with a prior on the number of components. J. American Stats. Assoc. 113, 340–356
 Neal (1992) Neal, R.M. (1992). Bayesian Mixture Modeling, in Maximum Entropy and Bayesian Methods eds. C.R. Smith, G.J. Erickson and P.O. Neudorfer. New York: Springer
 Nobile (1994) Nobile, A. (1994). Bayesian Analysis of Finite Mixture Distributions. Ph.D. thesis, Pittsburgh, PA: Department of Statistics, Carnegie Mellon University.
 Nobile (2004) Nobile, A. (2004). On the posterior distribution of the number of components in a finite mixture. Ann. of Statist. 32, 2044–2073
 Nobile (2005) Nobile, A. (2005). Bayesian finite mixtures: a note on prior specification and posterior computation. Technical Report Department of Statistics, University of Glasgow
 Nobile and Fearnside (2007) Nobile, A. and Fearnside, A.T. (2007). Bayesian finite mixtures with an unknown number of components: the allocation sampler. Statistics and Computing 17, 147–162
 Phillips and Smith (1996) Phillips, D.B. and Smith, A.F.M. (1996). Bayesian model comparison via jump diffusions. In Gilks, W.R., Richardson, S. and Spiegelhalter, D.J. (eds.), Markov Chain Monte Carlo in Practice. Chapman & Hall, London

 Reynolds et al. (2000) Reynolds, D.A., Quatieri, T.F. and Dunn, R.B. (2000). Speaker verification using adapted Gaussian mixture models. Digital Signal Processing 10, 19–41
 Richardson and Green (1997) Richardson, S. and Green, P.J. (1997). On Bayesian Analysis of Mixtures With an Unknown Number of Components. J. R. Stat. Soc. B 59, 731–792
 Roeder (1990) Roeder, K. (1990). Density Estimation With Confidence Sets Exemplified by Superclusters and Voids in the Galaxies. J. American Stats. Assoc. 85, 617–624
 Rousseau and Mengersen (2011) Rousseau, J., and Mengersen, K. (2011). Asymptotic behaviour of the posterior distribution in overfitted mixture models. Journal of the Royal Statistical Society: Series B 73(5), 689–710
 Stephens (2000) Stephens, M. (2000). Bayesian analysis of mixture models with an unknown number of components  an alternative to reversible jump methods. Ann. of Statist. 28, 40–74
 Titterington et al. (1985) Titterington, D.M., Smith, A.F.M. and Makov, U.E. (1985). Statistical Analysis of Finite Mixture Distributions. New York: J. Wiley

 Villa and Walker (2015) Villa, C. and Walker, S.G. (2015). An objective Bayesian criterion to determine model prior probabilities. Scandinavian Journal of Statistics 42, 947–966
 Walker (2007) Walker, S.G. (2007). Sampling the Dirichlet mixture model with slices. Communications in Statistics: Simulation and Computation 36, 45–54
 Yeung et al. (2001) Yeung, K.Y., Fraley, C., Murua, A., Raftery, A.E. and Ruzzo, W.L. (2001). Modelbased clustering and data transformation for gene expression data. Bioinformatics 17, 977–987