Bayesian estimation and prediction for mixtures

05/06/2020 ∙ by Aziz LMoudden, et al. ∙ Université de Sherbrooke

For two vast families of mixture distributions and a given prior, we provide unified representations of posterior and predictive distributions. Model applications presented include bivariate mixtures of Gamma distributions labelled as Kibble-type, non-central Chi-square and F distributions, the distribution of R^2 in multiple regression, variance mixtures of normal distributions, and mixtures of location-scale exponential distributions including the multivariate Lomax distribution. An emphasis is also placed on analytical representations and the relationships with a host of existing distributions and several hypergeometric functions of one or two variables.


1 Introduction

Mixture models are ubiquitous in probability and statistics. Such models, whether they are finite mixture models, mixtures of Poisson, exponential, gamma or normal distributions, etc., are quite useful and appealing for representing data arising from heterogeneous environments. As well, distributional properties of mixture models are often quite elegant and instructive. However, analytical and numerical challenges are present and well documented, notably in terms of likelihood-based and Bayesian inference.

It is also the case that several familiar distributions are representable in terms of mixtures, and that such representations facilitate the derivation of various statistical properties and approaches to statistical inference. Prominent examples include the non-central chi-square, Beta, and Fisher distributions, which typically arise in connection with quadratic forms in normal linear models. Other important examples include the distribution of the square of a multiple correlation coefficient (R^2) in a standard multiple regression linear model with normally distributed errors, as well as the vast class of univariate or bivariate Gamma mixtures which include the Kibble distribution (see Example 1.1).

We consider mixture models for summary statistics X and Y which are of one of the following two types:

(1.1)  Type I:  X | K = k ~ f_k,  K | θ ~ q_θ;   Y | J = j ~ g_j,  J | θ ~ s_θ;

(1.2)  Type II:  X | K = k, θ ~ f_{θ,k},  K ~ q;   Y | J = j, θ ~ g_{θ,j},  J ~ s;

with X and Y conditionally independent in both cases. The classification into these two types will be adhered to and seems to be a natural way to present the various expressions and examples that make up this paper. In the above models, the mixing variables K and J will typically have either a discrete or a continuous univariate distribution, but multivariate mixing is more generally accommodated. The parameter θ is unknown and a prior density π, with respect to a σ-finite measure μ, will be assumed. Otherwise, the mixing densities (q_θ and s_θ in (1.1); q and s in (1.2)) will be assumed known, up to the value of θ, and absolutely continuous with respect to a σ-finite measure ν. Similarly, the conditional densities f_k and g_j for (1.1), as well as f_{θ,k} and g_{θ,j} for (1.2), will be assumed to be known, again up to the value of θ, and absolutely continuous with respect to a σ-finite measure. Examples will include both discrete and continuous mixing for K and J, as well as discrete and continuous models for the conditional distributions X | K and Y | J.
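To fix ideas, here is a minimal simulation sketch contrasting the two types. The Poisson/chi-square and Poisson/Gamma instances are illustrative choices of ours (the first reappears as case (i) of Example 3.2), not the only ones covered by (1.1)-(1.2).

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 3.0  # unknown in practice; fixed here for simulation

# Type I (1.1): theta enters the mixing distribution only.
# K | theta ~ Poisson(theta/2), X | K = k ~ chi2(d + 2k):
# the classical non-central chi-square construction.
d = 5
K = rng.poisson(theta / 2.0, size=100_000)
x_type1 = rng.chisquare(d + 2 * K)

# Type II (1.2): theta enters the conditional distribution only.
# K ~ Poisson(1) fixed mixing, X | K = k, theta ~ Gamma(a + k, scale=theta).
a = 2.0
K2 = rng.poisson(1.0, size=100_000)
x_type2 = rng.gamma(a + K2, scale=theta)

print(x_type1.mean(), d + theta)        # E(X) = d + theta for Type I
print(x_type2.mean(), theta * (a + 1))  # E(X) = theta * (a + 1) for Type II
```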

We provide analytical expressions for Bayesian posterior distributions for θ based on an observed X = x, as well as Bayesian predictive densities for Y based on X = x. We are particularly interested in eliciting the general structures driving these Bayesian solutions. We also strive for elegant and concise representations, informative connections, and aim to present various illustrations and applications. Although the findings are quite general, model applications that we present include bivariate mixtures of Gamma distributions that we label as Kibble-type, non-central chi-square and F distributions, the distribution of R^2 in multiple regression, variance mixtures of normal distributions, and mixtures of location-scale exponential distributions including the multivariate Lomax distribution. For bivariate Gamma mixtures, which we focus on in Section 4, we also consider bivariate prior distributions with dependence structures, in particular as occurring under an order restriction on the parameters. The posterior and predictive distribution decompositions in this work follow familiar paths, but the analytical representations provided are nevertheless unified, useful and insightful, and lead to simplifications in Bayesian posterior analyses. This is especially the case when we purposely exploit the mixture representation of a familiar distribution.

Here is a first illustration of Type I and Type II mixtures, which will be addressed more generally in Sections 3 and 4 respectively.

Example 1.1.

The Kibble bivariate gamma distribution (Wicksell, 1933; Kibble, 1941) for (X_1, X_2), with shape parameter α > 0, scale parameters β_1, β_2 > 0, and correlation parameter ρ ∈ [0, 1), admits the following mixture representation:

(1.3)  X_1, X_2 | K = k independently distributed as G(α + k, β_i (1 - ρ)), i = 1, 2, with K ~ NB(α, ρ).

Here, the marginals are X_i ~ G(α, β_i) and corr(X_1, X_2) = ρ. Such a distribution originates for instance in describing the joint distribution of sample variances generated from a bivariate normal distribution with correlation coefficient ρ_0, in which case ρ = ρ_0^2. The case ρ = 0 reduces to independent G(α, β_i) components. Now, observe that we have a Type I mixture if θ = ρ with known α, β_1, β_2, and a Type II mixture for θ = (β_1, β_2), or again θ = β_1 = β_2, with known α and ρ. A third type of mixture arises when α is unknown, but will not be further addressed in this paper. The mixing representation of the Kibble distribution has been exploited in a critical fashion for Bayesian analysis by Iliopoulos et al. (2005).
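A short simulation of representation (1.3), under the NB(α, ρ) mixing and conditional Gamma parameterization written above, checks the stated marginal and correlation structure:

```python
import numpy as np

rng = np.random.default_rng(42)

# Kibble mixture (1.3): K ~ NB(alpha, rho), then, given K,
# X1 and X2 are independent Gamma(alpha + K, scale beta_i * (1 - rho)).
alpha, rho, b1, b2 = 2.0, 0.6, 1.0, 3.0
K = rng.negative_binomial(alpha, 1 - rho, size=200_000)
x1 = rng.gamma(alpha + K, b1 * (1 - rho))
x2 = rng.gamma(alpha + K, b2 * (1 - rho))

print(x1.mean(), alpha * b1)           # marginal X1 ~ G(alpha, beta_1)
print(np.corrcoef(x1, x2)[0, 1], rho)  # corr(X1, X2) = rho
```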

2 Notations and definitions

Here are some notations and definitions used throughout concerning some special functions and various distributions that appear below, related either to the model, the mixing variables, the prior, the posterior, or the predictive distribution.

In what follows, we denote P(λ) as the Poisson distribution with mean λ > 0 and p.m.f. (or density) p(k|λ) = e^{-λ} λ^k / k!, for k ∈ ℕ. We write NB(r, ρ), with r > 0 and ρ ∈ (0, 1), to denote a negative binomial distribution with p.m.f. (or density) p(k|r, ρ) = [Γ(r + k) / (Γ(r) k!)] ρ^k (1 - ρ)^r, for k ∈ ℕ.

Throughout the paper, we define, for positive real numbers a_1, …, a_p and b_1, …, b_q, the generalized hypergeometric function as

pFq(a_1, …, a_p; b_1, …, b_q; z) = Σ_{k≥0} [(a_1)_k ⋯ (a_p)_k / ((b_1)_k ⋯ (b_q)_k)] z^k / k!,

with the Pochhammer function (c)_k = Γ(c + k)/Γ(c) defined here for c > 0. We write K ~ GH(a_1, …, a_p; b_1, …, b_q; z) to denote a generalized hypergeometric distribution (e.g., Johnson et al., 1995) with p.m.f.

P(K = k) = [(a_1)_k ⋯ (a_p)_k / ((b_1)_k ⋯ (b_q)_k)] z^k / (k! pFq(a_1, …, a_p; b_1, …, b_q; z)),  k ∈ ℕ.

For these distributions, we will have positive a_i's and b_j's, and z will be non-negative and within the radius of convergence of the pFq function. With such a notation, for instance, we could write GH(–; –; λ) for P(λ), and GH(r; –; ρ) for NB(r, ρ).
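As a sketch, the p.m.f. above can be evaluated, and its normalization checked, with mpmath's pFq implementation; the helper name ghyp_pmf is ours:

```python
import numpy as np
from mpmath import hyper, rf, factorial

def ghyp_pmf(k, a_s, b_s, z):
    # GH(a_1,...,a_p; b_1,...,b_q; z) p.m.f., normalized by pFq(a_s; b_s; z)
    num = np.prod([float(rf(a, k)) for a in a_s]) * z**k
    den = np.prod([float(rf(b, k)) for b in b_s]) * float(factorial(k))
    return num / (den * float(hyper(a_s, b_s, z)))

# a 1F1-type example (p = q = 1), as arises in Example 3.2 below
print(sum(ghyp_pmf(k, [2.5], [1.5], 0.8) for k in range(60)))  # ≈ 1
```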

We denote G(α, β), B(a, b), and B2(a, b), for α, β, a, b > 0, as Gamma, Beta, and Beta type II distributions, with densities g(t|α, β) = t^{α-1} e^{-t/β} / (Γ(α) β^α) for t > 0, b(t|a, b) = t^{a-1} (1 - t)^{b-1} / B(a, b) for t ∈ (0, 1), and b2(t|a, b) = t^{a-1} (1 + t)^{-(a+b)} / B(a, b) for t > 0, respectively. The latter Beta type II family includes Pareto distributions on (0, ∞) for a = 1.

The Kummer distribution of type II, denoted K2(a, b, z) for parameters a, z > 0 and b ∈ ℝ, is taken with density

f(t|a, b, z) = t^{a-1} (1 + t)^{b-a-1} e^{-zt} / (Γ(a) Ψ(a, b, z)),  t > 0,

where Ψ is the confluent hypergeometric function of type II defined for a, z > 0 and b ∈ ℝ as: Ψ(a, b, z) = (1/Γ(a)) ∫_0^∞ t^{a-1} (1 + t)^{b-a-1} e^{-zt} dt. This class of distributions includes Gamma distributions with the choices b = a + 1, namely K2(a, a + 1, z) ≡ G(a, 1/z). The class can also be extended to include the cases z = 0, which correspond to B2 distributions.

We will denote McKay's bivariate gamma distribution with parameters a, b, β > 0 as McKay(a, b, β), with p.d.f.

f(x, y) = x^{a-1} (y - x)^{b-1} e^{-y/β} / (Γ(a) Γ(b) β^{a+b}),  for 0 < x < y.

The distribution has a long history (e.g., McKay, 1934) and is a benchmark bivariate distribution for modelling durations that are ordered. It is easy to verify that the marginals are distributed as X ~ G(a, β) and Y ~ G(a + b, β), and that X and Y - X are independently distributed, with Y - X ~ G(b, β). A generalization will be presented in Section 4.
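These two properties are easy to check by simulation, generating (X, Y) directly from the stated independent-increment structure:

```python
import numpy as np

rng = np.random.default_rng(7)

# If (X, Y) ~ McKay(a, b, beta): X ~ G(a, beta), Y ~ G(a + b, beta),
# and Y - X ~ G(b, beta) independently of X.
a, b, beta = 2.0, 3.0, 1.5
x = rng.gamma(a, beta, 100_000)
y = x + rng.gamma(b, beta, 100_000)  # independent increment

print(y.mean(), (a + b) * beta)      # marginal mean of Y
print(np.corrcoef(x, y - x)[0, 1])   # ≈ 0, consistent with independence
```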

3 Type I mixtures

We begin with Type I mixtures, providing general representations for the posterior distribution of θ based on X = x, as well as for the predictive density of Y based on X = x, and following up with various examples and observations.

Theorem 3.1.

Let X and Y be conditionally independent, distributed as in (1.1), and let θ have prior density π with respect to a σ-finite measure μ. Let m and π_k be the densities given by

m(k) = ∫ q_θ(k) π(θ) dμ(θ)  and  π_k(θ) = q_θ(k) π(θ) / m(k),

with the density w(·|x) given by

w(k|x) = f_k(x) m(k) / ∫ f_{k'}(x) m(k') dν(k').

Then,

  (a) The posterior distribution of θ admits the mixture representation:

    (3.4)  π(θ|x) = ∫ w(k|x) π_k(θ) dν(k).

  (b) The Bayes predictive density of Y admits the representation:

    (3.5)  q̂_π(y|x) = ∫ w*(j|x) g_j(y) dν(j),

    with

    (3.6)  w*(j|x) = ∫ w(k|x) p(j|k) dν(k),  where p(j|k) = ∫ s_θ(j) π_k(θ) dμ(θ).

(For discrete mixing, the integrals in dν reduce to sums.)

Proof. (a) We have indeed

π(θ|x) ∝ π(θ) ∫ q_θ(k) f_k(x) dν(k) = ∫ f_k(x) m(k) π_k(θ) dν(k) ∝ ∫ w(k|x) π_k(θ) dν(k),

and (3.4) follows upon normalization.

(b) The predictive density of Y, i.e. the conditional density of Y given X = x, is given by:

q̂_π(y|x) = ∫ [∫ s_θ(j) g_j(y) dν(j)] π(θ|x) dμ(θ) = ∫ g_j(y) [∫ w(k|x) p(j|k) dν(k)] dν(j),

where we have used (3.4). This establishes the result. ∎

Remark 3.1.

The posterior and predictive distribution representations of Theorem 3.1 are particularly appealing. Indeed, observe that the posterior distribution (3.4) mixes the π_k's, which correspond to the posterior density of θ as if one had actually observed K = k. Moreover, the mixing density w(·|x) for K is a weighted version of the marginal density m of K, with weight proportional to f_k(x).

The predictive distribution (3.5) for Y mixes the same densities g_j as the model density of Y, with the prior mixing density s_θ replaced by the posterior mixing density w*(·|x). Furthermore, this mixing density is itself a mixture of the predictive (or conditional) densities p(·|k) of J, as if one had observed K = k.
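For concreteness, here is a minimal computational sketch of Theorem 3.1 for discrete mixing, with the integrals over θ approximated on a grid. The helper names and the Poisson/chi-square usage are ours, chosen to match case (i) of Example 3.2 below.

```python
import numpy as np
from scipy import stats

def posterior_mixture(x, theta, prior, f_pdf, q_pmf, kmax=200):
    """Posterior (3.4) for a Type I mixture with discrete mixing K."""
    ks = np.arange(kmax)
    q = np.array([q_pmf(k, theta) for k in ks])   # q_theta(k), shape (kmax, grid)
    m = np.trapz(q * prior, theta, axis=1)        # m(k) = ∫ q_theta(k) pi(theta) dtheta
    pi_k = q * prior / m[:, None]                 # pi_k(theta) = q_theta(k) pi(theta) / m(k)
    w = np.array([f_pdf(k, x) for k in ks]) * m   # w(k|x) ∝ f_k(x) m(k)
    w /= w.sum()
    return w, pi_k, w @ pi_k                      # weights, components, posterior (3.4)

theta = np.linspace(1e-4, 60, 4000)
w, pi_k, post = posterior_mixture(
    x=10.0, theta=theta,
    prior=stats.gamma.pdf(theta, 2.0, scale=2.0),      # G(2, 2) prior
    f_pdf=lambda k, x: stats.chi2.pdf(x, 5 + 2 * k),   # f_k = chi2(5 + 2k) density
    q_pmf=lambda k, t: stats.poisson.pmf(k, t / 2.0))  # K | theta ~ P(theta/2)
print(np.trapz(post, theta))  # ≈ 1
```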

The following examples concern posterior and predictive distribution illustrations of Theorem 3.1.

Example 3.2.

In model (1.1), consider Poisson mixing K | θ ~ P(θ/2) with a G(a, b) prior for θ. From this familiar set-up, we obtain π_k = G(a + k, 2b/(b+2)) as a density and m as a NB(a, ρ) p.m.f., with ρ = b/(b+2). Following (3.4), the posterior distribution is a mixture of the above π_k's with mixing p.m.f. w(·|x) on ℕ given by w(k|x) ∝ f_k(x) m(k).

Now, consider the cases: (i) f_k the χ²_{d+2k} density, (ii) f_k = b(·|p + k, q), and (iii) f_k = b2(·|p + k, q). In the context of model (1.1), case (i) corresponds to a non-central chi-square distribution for X with d degrees of freedom and non-centrality parameter θ (X ~ χ²_d(θ)), case (ii) to a non-central Beta distribution with shape parameters p, q, and non-centrality parameter θ, and case (iii) to the density of X' = (m/n) F, with F distributed as a non-central F distribution with degrees of freedom m = 2p, n = 2q, and non-centrality parameter θ. The latter two cases are essentially equivalent though, related by the fact that X in (ii) is distributed as X'/(1 + X') with X' as in (iii).

For (i), we obtain

(3.7)  w(k|x) = [(a)_k / (d/2)_k] z^k / (k! 1F1(a; d/2; z)),  k ∈ ℕ,  with z = bx/(2(b+2)),

that is, w(·|x) = GH(a; d/2; z). Observe that the above generalized hypergeometric distribution reduces to a P(z) distribution for a = d/2. The posterior expectation can be evaluated with the help of its mixture representation and standard calculations involving the above p.m.f. One obtains

(3.8)  E(θ|x) = [2b/(b+2)] (a + E(K|x))
(3.9)         = [2b/(b+2)] (a + (2az/d) 1F1(a+1; d/2+1; z) / 1F1(a; d/2; z)),

with the case a = d/2 simplifying to E(θ|x) = [2b/(b+2)] (d/2 + z), as noted by Saxena and Alam (1982).

For the two other cases, we obtain the mixing power series densities:

(3.10)  w(k|x) ∝ [(a)_k (p+q)_k / ((p)_k k!)] t^k,  with t = ρx for (ii), t = ρx/(1+x) for (iii), and ρ = b/(b+2),

that is, GH(a, p+q; p; t) distributions. Observe that the case a = p reduces to a NB(p+q, ρx) distribution for (ii) and a NB(p+q, ρx/(1+x)) distribution for (iii). The posterior expectation may be computed from (3.8), with simplifications occurring for a = p, yielding for instance in (ii): E(θ|x) = [2b/(b+2)] (p + (p+q) ρx/(1-ρx)). Finally, we point out that the above posterior distribution representation applies as well for the improper prior choice π(θ) = 1 (i.e., formally a = 1 and b → ∞), with m not a p.m.f., but given by m(k) = 2 for all k.
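Under the parameterization assumed above, the mixture-based posterior mean (3.8) can be checked against brute-force numerical integration; a sketch:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# Case (i): K | theta ~ P(theta/2), theta ~ G(a, b), X | K = k ~ chi2(d + 2k)
a, b, d, x = 3.0, 2.0, 5, 10.0
rho = b / (b + 2.0)

ks = np.arange(200)
w = stats.nbinom.pmf(ks, a, 1 - rho) * stats.chi2.pdf(x, d + 2 * ks)
w /= w.sum()                                           # mixing p.m.f. (3.7)
mean_mix = (2 * b / (b + 2.0)) * (a + (w * ks).sum())  # formula (3.8)

# direct evaluation of E(theta | x) from the non-central chi-square likelihood
joint = lambda t: stats.ncx2.pdf(x, d, t) * stats.gamma.pdf(t, a, scale=b)
num = quad(lambda t: t * joint(t), 0, np.inf)[0]
print(mean_mix, num / quad(joint, 0, np.inf)[0])       # the two should agree
```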

Example 3.3.

Turning now to predictive densities with the same set-up as in Example 3.2, we consider Y distributed identically to X (i.e., J | θ ~ P(θ/2) and Y | J = j ~ χ²_{d+2j}). For case (i), Theorem 3.1 tells us that the Bayes predictive density of Y admits the mixture representation q̂_π(y|x) = Σ_j w*(j|x) f_j(y), with w*(·|x) given in (3.6). The latter admits itself the mixture representation

(3.11)  w*(j|x) = Σ_k w(k|x) p(j|k),

with p(·|k) being the Bayes predictive p.m.f. of J based on K = k and prior G(a, b), namely a NB(a + k, b/(2(b+1))) p.m.f. here. Alternatively represented, the mixing p.m.f. for J may be expressed in closed form by carrying out the sum in (3.11), and can also be viewed directly as a weighted p.m.f.

The non-central Beta and Fisher distribution cases are similar. For instance, in the former case, with X and Y identically distributed with shape parameters p, q and non-centrality parameter θ, predictive densities associated with G(a, b) priors are also distributed as mixtures of B(p + j, q) distributions. For the mixing p.m.f. of J, a development as above yields an analogous expression.
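Operationally, (3.5) also suggests sampling from the predictive density by composition; a sketch under the same assumed parameterization:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# draw k ~ w(.|x), theta ~ pi_k = G(a + k, 2b/(b+2)), j ~ P(theta/2),
# then y ~ chi2(d + 2j): y is a draw from the Bayes predictive density (3.5)
a, b, d, x = 3.0, 2.0, 5, 10.0
rho = b / (b + 2.0)
ks = np.arange(200)
w = stats.nbinom.pmf(ks, a, 1 - rho) * stats.chi2.pdf(x, d + 2 * ks)
w /= w.sum()

k = rng.choice(ks, size=50_000, p=w)
theta = rng.gamma(a + k, 2 * b / (b + 2.0))
j = rng.poisson(theta / 2.0)
y = rng.chisquare(d + 2 * j)
print(y.mean())  # Monte Carlo predictive mean of Y
```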

Example 3.4.

A doubly non-central F distribution is a type I mixture (e.g., Bulgren, 1971) with bivariate mixing K = (K_1, K_2) and θ = (θ_1, θ_2), admitting the representation

(3.12)  X | (K_1, K_2) = (k_1, k_2) ~ B2(m/2 + k_1, n/2 + k_2),  with K_1 | θ_1 ~ P(θ_1/2) and K_2 | θ_2 ~ P(θ_2/2) independent.

Such a distribution arises naturally as a multiple of the ratio of two independent non-central chi-square variates, namely X = χ²_m(θ_1)/χ²_n(θ_2) = (m/n) F, and reduces to a multiple of a non-central F for θ_2 = 0. Consider now the application of Theorem 3.1 for the prior θ_1 ~ G(a_1, b_1) independent of θ_2 ~ G(a_2, b_2), yielding the familiar distributional results:

θ_i | K_i = k_i ~ G(a_i + k_i, 2b_i/(b_i+2))

and

K_i ~ NB(a_i, ρ_i), with ρ_i = b_i/(b_i+2), independently for i = 1, 2.

Representation (3.4) tells us that the posterior distribution of (θ_1, θ_2) is a mixture of these product Gamma densities with mixing p.m.f.

(3.13)  w(k_1, k_2|x) = [((m+n)/2)_{k_1+k_2} (a_1)_{k_1} (a_2)_{k_2} / ((m/2)_{k_1} (n/2)_{k_2} k_1! k_2!)] t_1^{k_1} t_2^{k_2} / F_2((m+n)/2; a_1, a_2; m/2, n/2; t_1, t_2),

with t_1 = ρ_1 x/(1+x), t_2 = ρ_2/(1+x), and where F_2 is the Appell function of the second kind given by

F_2(α; β_1, β_2; γ_1, γ_2; z_1, z_2) = Σ_{k_1, k_2 ≥ 0} [(α)_{k_1+k_2} (β_1)_{k_1} (β_2)_{k_2} / ((γ_1)_{k_1} (γ_2)_{k_2})] z_1^{k_1} z_2^{k_2} / (k_1! k_2!),  |z_1| + |z_2| < 1.

The bivariate p.m.f. in (3.13), and how it arises here, are of interest. It is a bivariate power series p.m.f. generated by the coefficients of the Appell function, and it is a bona fide p.m.f. for t_1, t_2 ≥ 0 and t_1 + t_2 < 1, as holds here. Appell's function appears in a similar way, again in a Bayesian framework, as a bivariate discrete distribution called Bailey by Laurent (2012) (see also Jones & Marchand, 2019 for another derivation). For the particular case (a_1, a_2) = (m/2, n/2), the p.m.f. in (3.13) simplifies and the corresponding random pair (K_1, K_2) admits the stochastic representation: K_1 | K_1 + K_2 = s ~ Bin(s, t_1/(t_1+t_2)) and K_1 + K_2 ~ NB((m+n)/2, t_1 + t_2), with t_1 and t_2 as above.

Turning to the predictive density, consider Y distributed as X in (3.12) and the same prior on (θ_1, θ_2) as above. Similarly to Example 3.3, which shares the same Poisson mixing and Gamma prior structure, we obtain from Theorem 3.1 and (3.5) that the predictive density for Y admits the representation

q̂_π(y|x) = Σ_{j_1, j_2} w*(j_1, j_2|x) b2(y | m/2 + j_1, n/2 + j_2).

We point out that, if the distribution of Y is non-identical to that of X, with associated degrees of freedom m' and n', the only change in the previous expression is to replace the (m, n)'s by the (m', n')'s for the distribution of Y. Similar observations apply to the other examples of this section.
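The p.m.f. (3.13) can be evaluated by truncating the double series, and the stochastic representation stated for (a_1, a_2) = (m/2, n/2) provides a convenient check; a sketch under the parameterization reconstructed above:

```python
import numpy as np
from scipy import stats
from scipy.special import gammaln

m, n, x = 4.0, 6.0, 1.7
a1, a2 = m / 2, n / 2                        # the simplifying special case
rho1, rho2 = 0.4, 0.3
t1, t2 = rho1 * x / (1 + x), rho2 / (1 + x)  # t1 + t2 < 1

N = 150
k1, k2 = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
logc = (gammaln((m + n) / 2 + k1 + k2) - gammaln((m + n) / 2)
        + gammaln(a1 + k1) - gammaln(a1) - gammaln(m / 2 + k1) + gammaln(m / 2)
        + gammaln(a2 + k2) - gammaln(a2) - gammaln(n / 2 + k2) + gammaln(n / 2)
        - gammaln(k1 + 1) - gammaln(k2 + 1))
pmf = np.exp(logc + k1 * np.log(t1) + k2 * np.log(t2))
pmf /= pmf.sum()                             # normalization plays the role of F2

# marginal of S = K1 + K2, to compare with NB((m+n)/2, t1 + t2)
s_pmf = np.bincount((k1 + k2).ravel(), weights=pmf.ravel())
print(s_pmf[:4])
print(stats.nbinom.pmf(np.arange(4), (m + n) / 2, 1 - (t1 + t2)))  # should match
```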

Example 3.5.

Consider univariate Gamma mixtures with X | K = k ~ G(α, 1/k) in model (1.1), with mixing distribution K | θ ~ G(τ, 1/θ) and prior θ ~ G(a, b), with known α and τ. Theorem 3.1 applies and tells us that the posterior distribution is a mixture of the π_k = G(a + τ, b/(1 + bk)) densities, making use of a familiar posterior distribution for Gamma models with Gamma priors. In evaluating the mixing density of K given in (3.4), it is easy to verify that bK ~ B2(τ, a) marginally, and one thus obtains

w(k|x) ∝ (bk)^{α+τ-1} (1 + bk)^{-(a+τ)} e^{-kx},

which corresponds to a Kummer distribution, namely bK | x ~ K2(α + τ, α - a + 1, x/b), as defined in Section 2.

Now consider the Bayesian predictive density for the Gamma mixture Y | J = j ~ G(γ, 1/j) and J | θ ~ G(κ, 1/θ), which includes the particular case of identically distributed X and Y for γ = α and κ = τ. Observe that the density p(·|k) is that of (k + 1/b) U/V for the set-up U ~ G(κ, 1) and V ~ G(a + τ, 1), independently distributed, and with θ | K = k ~ G(a + τ, b/(1 + bk)). A calculation (e.g., Aitchison & Dunsmore, 1975) yields J | K = k ~ (k + 1/b) B2(κ, a + τ). With the above, it follows from Theorem 3.1 that the predictive distribution for Y admits the representation:

q̂_π(y|x) = ∫_0^∞ w*(j|x) g(y|γ, 1/j) dj,  with  w*(j|x) = ∫_0^∞ w(k|x) p(j|k) dk.

An alternative representation comes from simply evaluating the marginal density of Y given θ; a calculation gives Y/θ | θ ~ B2(γ, κ), and integrating the resulting b2 density against the posterior (3.4) leads to an expression for the predictive density in terms of Ψ, the confluent hypergeometric function of type II as defined in Section 2.
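As a quick numerical check of the Ψ function used here, its integral representation from Section 2 can be compared with a standard implementation (mpmath's hyperu is Tricomi's function, which coincides with Ψ):

```python
import mpmath as mp

a, b, z = 2.5, 1.2, 3.0
integral = mp.quad(lambda t: mp.exp(-z * t) * t**(a - 1) * (1 + t)**(b - a - 1),
                   [0, mp.inf])
print(integral / mp.gamma(a))  # integral definition of Psi(a, b, z)
print(mp.hyperu(a, b, z))      # should match
```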

The final application of Theorem 3.1 concerns the coefficient of determination R^2 in a standard multiple regression context.

Example 3.6.

Consider a coefficient of determination R^2, or square of a multiple correlation coefficient, that arises for a sample of size n from N_p(μ, Σ), with n > p ≥ 2, and the regression of one component based on the p - 1 others. For more details on the underlying distributional theory, see for instance Muirhead (1982). It is well known that the distribution of R^2 is a Type I mixture (1.1) with

R^2 | K = k ~ B((p-1)/2 + k, (n-p)/2)  and  K | θ ~ NB((n-1)/2, θ),

with θ = ρ^2 the theoretical squared multiple correlation coefficient. As in Marchand (2001), a convenient prior on θ is a B(a, b) prior, and it leads, along with the negative binomial distributed K, to a conjugate posterior. Specifically, we obtain for the posterior density π_k = B(a + k, b + (n-1)/2), as well as the marginal p.m.f.

m(k) = [((n-1)/2)_k / k!] B(a + k, b + (n-1)/2) / B(a, b),  k ∈ ℕ.

Theorem 3.1 tells us that the posterior distribution is a mixture of the B(a + k, b + (n-1)/2)'s with mixing

(3.14)  w(k|r^2) ∝ [(a)_k (((n-1)/2)_k)^2 / (((p-1)/2)_k (a + b + (n-1)/2)_k k!)] (r^2)^k,  k ∈ ℕ,

a generalized hypergeometric GH(a, (n-1)/2, (n-1)/2; (p-1)/2, a + b + (n-1)/2; r^2) distribution.

The result, which we have derived here from the general context of Theorem 3.1, was obtained by Marchand (2001) in this specific set-up. In doing so, he defined such Beta mixtures as HyperBeta distributions and also provided several graphs of prior-posterior densities for varying prior parameters (a, b), sample size n, and observed values r^2 of R^2.
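A short computational sketch of the posterior (3.4) with mixing (3.14), under the Beta-mixture parameterization written above:

```python
import numpy as np
from scipy import stats
from scipy.special import betaln, gammaln

n, p, r = 30, 4, 0.45  # sample size, dimension, observed value of R^2
a, b = 1.0, 1.0        # uniform prior on theta = rho^2

ks = np.arange(300)
# m(k): marginal p.m.f. of K under the B(a, b) prior
log_m = (gammaln((n - 1) / 2 + ks) - gammaln((n - 1) / 2) - gammaln(ks + 1)
         + betaln(a + ks, b + (n - 1) / 2) - betaln(a, b))
log_f = stats.beta.logpdf(r, (p - 1) / 2 + ks, (n - p) / 2)  # f_k(r)
w = np.exp(log_m + log_f - (log_m + log_f).max())
w /= w.sum()           # mixing p.m.f. (3.14)

rho2 = np.linspace(1e-6, 1 - 1e-6, 2000)
post = w @ stats.beta.pdf(rho2[None, :], a + ks[:, None], b + (n - 1) / 2)
print(np.trapz(post, rho2))  # ≈ 1: a bona fide posterior density
```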

For the predictive density of a future R*^2 distributed as R^2, but allowing for a possibly different sample size n*, expression (3.5) tells us that such a predictive density admits the mixture representation:

(3.15)  q̂_π(y | r^2) = Σ_j w*(j | r^2) b(y | (p-1)/2 + j, (n*-p)/2),

with w*(·|r^2) given by (3.6) and built from p(·|k), the predictive p.m.f. for J based on K = k and prior B(a, b). An evaluation of (3.6) yields the corresponding mixing p.m.f. explicitly.