An Objective Prior from a Scoring Rule

05/07/2021 ∙ Stephen G. Walker, et al. ∙ The University of Texas at Austin ∙ Newcastle University

In this paper we introduce a novel objective prior distribution leveraging the connections between information, divergence and scoring rules. In particular, we do so from the starting point of convex functions representing information in density functions. This provides a natural route to proper local scoring rules via the Bregman divergence. Specifically, we determine the prior which solves the equation obtained by setting the score function to a constant. While this in itself provides motivation for an objective prior, the prior also minimizes a corresponding information criterion.


1 Introduction

A major drawback of objective priors, such as Jeffreys prior (Jeffreys, 1961) and the reference prior (Bernardo, 1979), is that, in many cases, they are improper. While for a parameter defined over a bounded interval it is generally possible to derive objective prior distributions that are proper, this is not the case for parameters on ℝ or ℝ⁺. The literature provides many examples where improper prior distributions cannot be suitably employed, such as Bayes factors, mixture models and hierarchical models, to name but a few. Methods have been proposed to get around these obstacles, for example, intrinsic Bayes factors (Berger and Pericchi, 1996) and fractional Bayes factors (O'Hagan, 1995), or reparametrising mixture models (Grazian and Robert, 2018). However, these types of results are generally valid only under specific conditions. Additionally, improper prior distributions are not well suited to problems involving large numbers of parameters, as it becomes difficult to establish properness of the full posterior distribution.

The idea of this paper is to present a novel objective prior distribution for continuous parameter spaces by considering the connection between information, divergence and scoring rules. In particular, the proposed prior can be defined over ℝ⁺ and over ℝ, the latter by extending the former, and it has the appealing property of being proper.

Recently, Leisen et al. (2020) introduced a new class of objective priors obtained by solving a differential equation of the form S(π) = constant, where S is a score function and the solution π acts as the prior distribution. The solution is also shown to minimize an information criterion.

There are two well known relations connecting information, proper local scoring rules and divergences. The most famous links Shannon information, the Kullback–Leibler divergence and the log-score:

∫ f log(f/g) dx = ∫ f log f dx − ∫ f log g dx,   (1.1)

where f and g are two densities, and integrals will be generally defined with respect to the Lebesgue measure. The term on the left-hand side of (1.1) is the Kullback–Leibler divergence (Kullback and Leibler, 1951) between f and g, the first term on the right-hand side is the Shannon information associated with the density f, and the second term is the expectation under f of the log-score function, −log g.

Another way to connect information, divergence and proper local scoring rules involves the Fisher divergence, the Fisher information, and the Hyvärinen score function (Hyvärinen, 2005):

∫ f (f′/f − g′/g)² dx = ∫ f (f′/f)² dx + ∫ f [ 2 g″/g − (g′/g)² ] dx,

where the final term has been obtained using an integration by parts. In general, these relationships can be expressed as

D(f, g) = I(f) + ∫ f(x) S(g)(x) dx,   (1.2)

where D denotes the divergence, I the measure of information and S the score.

Recently, in Parry et al. (2012), a new class of score functions was introduced, where the starting point is the defining property of a proper score, namely that

∫ f(x) S(g)(x) dx ≥ ∫ f(x) S(f)(x) dx

for all densities f and g. In other words, a score S is said to be proper if the expectation ∫ f S(g) dx is minimised by the choice g = f. Consider the well known log-score, S(g) = −log g. It satisfies the above property, since for any densities f and g we have ∫ f log(f/g) dx ≥ 0, with equality only when g = f. As such, the log-score is a proper score. Furthermore, a score is said to be local if it depends on g only through the density value g(x). See Parry et al. (2012) and Ehm and Gneiting (2012). It has to be noted that the log-score is the only proper score which is local.
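As a concrete illustration of propriety, the following sketch checks numerically that the expected log-score E_f[−log g] is minimised at g = f; the normal family, the grid and all numerical settings are our illustrative choices, not part of the paper.

    import numpy as np

    # Check that E_f[-log g] is minimised at g = f, for f = N(0,1) and
    # g = N(mu,1): the log-score is proper on this family.
    x = np.linspace(-10, 10, 2001)
    dx = x[1] - x[0]
    f = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)

    def expected_log_score(mu):
        g = np.exp(-0.5 * (x - mu)**2) / np.sqrt(2 * np.pi)
        return np.sum(f * (-np.log(g))) * dx          # E_f[-log g] by quadrature

    scores = {mu: expected_log_score(mu) for mu in [-1.0, -0.5, 0.0, 0.5, 1.0]}
    assert min(scores, key=scores.get) == 0.0         # minimised when g = f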

Consider now the Hyvärinen score function (Hyvärinen, 2005), given by

S(g) = 2 g″/g − (g′/g)²,

which depends on g(x) and its first two derivatives; as such, it is not local in the above sense. However, the locality condition can be weakened (Parry et al., 2012) by allowing the score to depend on a finite number of derivatives of the density. Under this weaker notion, the Hyvärinen score is an order-2 proper local scoring rule.
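The same numerical exercise can be repeated for the Hyvärinen score; again the densities and settings below are our illustrative choices.

    import numpy as np

    # For g = N(0, s2): (log g)' = -x/s2 and (log g)'' = -1/s2, so the
    # Hyvarinen score is S(g)(x) = 2*(log g)'' + ((log g)')^2 = -2/s2 + x^2/s2^2.
    x = np.linspace(-12, 12, 4001)
    dx = x[1] - x[0]
    f = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)      # f = N(0,1)

    def hyvarinen_expectation(s2):
        S = -2.0 / s2 + x**2 / s2**2
        return np.sum(f * S) * dx                     # E_f[ S(g) ]

    vals = {s2: hyvarinen_expectation(s2) for s2 in [0.5, 0.8, 1.0, 1.25, 2.0]}
    assert min(vals, key=vals.get) == 1.0             # minimised at g = f (s2 = 1)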

More generally, if a proper score depends on m derivatives, it is said to be an order-m local scoring rule. The supporting theory rests on the fact that the minimizer of ∫ f S(g) dx over g is g = f, and this can be investigated using variational analysis. For a score S depending on (g, g′, g″), the relevant Euler–Lagrange equation of order two is

f ∂S/∂g − d/dx ( f ∂S/∂g′ ) + d²/dx² ( f ∂S/∂g″ ) = 0,   (1.3)

evaluated at g = f. The corresponding general case of (1.3) is given as equation (18) in Parry et al. (2012). Throughout this paper we will focus on the case m = 2, since this is where we draw our prior from. The Appendix provides the expression for general m.

In Parry et al. (2012), the solution to equation (1.3) is constructed using properties of differential operators and 1-homogeneous functions. Recall that a 1-homogeneous function φ is such that φ(λu, λv) = λ φ(u, v) for any λ > 0. In particular, the Hyvärinen score arises with

φ(u, v) = v²/u,   (1.4)

which is easily checked to be 1-homogeneous.

Furthermore, Parry et al. (2012) and Ehm and Gneiting (2012) characterize all local and proper scoring rules of order m = 2. In this respect, as an additional result of interest, in the Appendix we present the characterization using measures of information and the Bregman divergence (Bregman, 1967). The benefits of the proposed approach are that complicated mathematical analysis is avoided and the derivation of the local rule is made explicit.

Following Parry et al. (2012) and Ehm and Gneiting (2012), and the novel derivation of their results using the Bregman divergence, which is the focus of the Appendix, information, divergence and scores can be obtained as follows. For some convex function g, let φ(u, v) = u g(v/u).

1. Divergence: Given the result (A.10), we get

D(f₁, f₂) = ∫ { f₁ g(t₁) − f₁ [ g(t₂) − t₂ g′(t₂) ] − f₁′ g′(t₂) } dx,

where t_j = f_j′/f_j. Using integration by parts on the right-most integral, and assuming that f₁ g′(t₂) vanishes at the extremes of the integral,

D(f₁, f₂) = ∫ f₁ { g(t₁) − g(t₂) + t₂ g′(t₂) + g″(t₂) t₂′ } dx.

2. Information: This follows from the divergence, and from (A.4), and is given by

I(f) = ∫ f g(t) dx,  with t = f′/f.

3. Score: Again, from the form of the divergence and (A.8), this is given by

S(f) = t g′(t) − g(t) + g″(t) t′,  with t = f′/f.   (1.5)

The score in (1.5) generalizes the Hyvärinen score, which arises when g(t) = t².
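As a check on (1.5), the following sympy sketch builds the score from a generic convex function g and verifies that g(t) = t² recovers the Hyvärinen score; applying the same routine to g(t) = 1/t² yields the score adopted in Section 2. The code and the symbol names are ours.

    import sympy as sp

    x, u = sp.symbols('x u')
    p = sp.Function('p', positive=True)(x)
    t = sp.diff(sp.log(p), x)                          # t = (log p)' = p'/p

    def score_from_g(g_expr):
        # S = t g'(t) - g(t) + g''(t) t', as in (1.5)
        g1 = sp.diff(g_expr, u).subs(u, t)
        g2 = sp.diff(g_expr, u, 2).subs(u, t)
        return sp.simplify(t * g1 - g_expr.subs(u, t) + g2 * sp.diff(t, x))

    S_hyv = score_from_g(u**2)                         # expect 2 p''/p - (p'/p)^2
    target = 2 * sp.diff(p, x, 2) / p - (sp.diff(p, x) / p)**2
    assert sp.simplify(S_hyv - target) == 0

    S_new = score_from_g(u**-2)                        # the choice used in Section 2
    print(S_new)                                       # equals -3/t^2 + 6 t'/t^4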

The paper is organised as follows. Section 2 introduces the proposed objective prior. Section 3 includes a thorough simulation study, and an application to mixture models that involves both simulated and real data. In Section 4 we discuss another critical scenario where improper priors may result in improper posteriors, that is, assigning an objective prior to the variance parameter in a hierarchical model. The supporting theory is presented in the Appendix. In A.1 we use the Bregman divergence to obtain general forms for score functions and associated divergences; following on from this, in A.2 we detail how Bregman divergences yield a divergence between probability density functions through their first derivatives, and show how to obtain score functions from these divergences. A.3 provides the general case using m derivatives. Finally, in A.4 we make the connection between our derivation of scores and that of Parry et al. (2012).

2 New objective prior

Leisen et al. (2020) proposed constructing objective prior distributions on parameter spaces by solving equations of the kind S(π) = constant, where S is a score function. Specifically, they used a weighted mixture of the log-score and the Hyvärinen score functions. Note that the sole use of the log-score function would result in the uniform prior, which is not appropriate in many cases and may yield improper posterior distributions. On the other hand, a weighted combination of the two score functions yields a differential equation of the form

w log π(θ) + (1 − w) [ 2 π″(θ)/π(θ) − (π′(θ)/π(θ))² ] = constant,   (2.1)

where π denotes the prior density and w ∈ (0, 1) the weight balancing the two score functions.

Solutions to the differential equation (2.1) can be found for different parameter spaces, and constraints on the shape of π can be considered, so as to obtain a prior density with desirable properties, such as monotonicity, convexity or log-concavity.

We have already seen that the Hyvärinen score arises with g(t) = t²; see (1.5). An important property an objective prior distribution may be required to have is a heavy tail. We will consider such a prior on ℝ⁺. Mirroring the Hyvärinen score, we adopt g(t) = 1/t², with t = π′/π, and take π to be a decreasing density on ℝ⁺. In this case, equation (1.5) becomes

S(π) = −3/t² + 6 t′/t⁴,

which, by setting the score to 0, becomes 2 t′ = t². The solution is easily seen to be t(θ) = −2/(c + θ) for some constant c > 0. In this case, the prior on the parameter space ℝ⁺ is

π(θ) = c / (c + θ)²,  θ > 0.   (2.2)

Interestingly, the prior in (2.2) is a Lomax distribution (Lomax, 1954) with scale parameter c and shape parameter equal to 1. Recalling that the Lomax distribution is directly connected to the Pareto Type I and Pareto Type II distributions, its heavy-tailed nature is immediately obvious.
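Both claims about (2.2), properness and the vanishing of the score, can be verified symbolically; a minimal sketch, with notation as above:

    import sympy as sp

    theta = sp.symbols('theta', positive=True)
    c = sp.symbols('c', positive=True)
    pi = c / (c + theta)**2                           # the prior (2.2)

    assert sp.integrate(pi, (theta, 0, sp.oo)) == 1   # proper on (0, infinity)

    t = sp.diff(sp.log(pi), theta)                    # t = (log pi)' = -2/(c+theta)
    S = -3 / t**2 + 6 * sp.diff(t, theta) / t**4      # the score derived above
    assert sp.simplify(S) == 0                        # the prior solves S(pi) = 0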

Fig. 1 shows the prior with c = 1.

Figure 1: Prior (2.2) with c = 1.

Making the connection more directly with the theory set out in the paper: with g(t) = 1/t², we have φ(u, v) = u³/v², which is easy to show satisfies (A.7). Then using (A.8) we get

S(π) = −3 (π/π′)² − 2 d/dθ (π/π′)³.   (2.3)

Setting this to zero, i.e. S(π) = 0, the equation can be solved, and the solution is precisely of the form (2.2). We now write this all out in a theorem.

Theorem 2.1.

Let φ be the convex function appearing in (A.3); i.e. φ(u, v) = u g(v/u) with g(t) = 1/t². Then φ is convex for either v < 0 or v > 0. The Euler equation associated with this φ, i.e. S(π) = 0, yields

2 t′(θ) = t(θ)²,  with t = π′/π,

the solution to which can be written as π(θ) = c/(c + θ)² for some constant c > 0, where S is the corresponding score function in (2.3).

To obtain the corresponding prior on ℝ, through symmetry about 0, we get

π(θ) = c / ( 2 (c + |θ|)² ),  θ ∈ ℝ.   (2.4)

Here we motivate the natural objective choice for the constant as c = 1. The only important transformation to be considered here is θ → 1/θ. This, for example, would take a variance to a precision. For the prior in (2.2) to be invariant, that is π(θ) = π(1/θ) θ⁻², we need to have c = 1, since

π(1/θ) θ⁻² = c / ( θ² (c + 1/θ)² ) = c / (cθ + 1)²,

which equals c/(c + θ)² if and only if c = 1. All the illustrations that follow have been made taking this choice for c.
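The invariance argument is easily checked symbolically; a minimal sketch:

    import sympy as sp

    theta = sp.symbols('theta', positive=True)

    def prior(th, c):
        return c / (c + th)**2                        # the prior (2.2)

    # The transformed density under theta -> 1/theta is pi(1/theta)/theta^2.
    assert sp.simplify(prior(1/theta, 1)/theta**2 - prior(theta, 1)) == 0  # c = 1
    assert sp.simplify(prior(1/theta, 2)/theta**2 - prior(theta, 2)) != 0  # c = 2 fails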

2.1 First examples

The first simulation study concerns inference on a scale parameter; specifically, the standard deviation σ of a normal density with known mean. We compare the prior (2.2) with Jeffreys prior, that is π(σ) ∝ 1/σ. We took 250 samples of size n, obtained the posterior distributions using standard MCMC methods (6000 iterations, with a burn-in of 1000 and a thinning of 10), and computed the following two indexes: the root mean squared error (MSE) divided by the true parameter value,

√( (1/250) Σᵢ (σ̂ᵢ − σ)² ) / σ,

where σ̂ᵢ is the posterior mean for the i-th sample, and the coverage of the 95% posterior credible interval for σ. Table 1 shows the results for the MSE for σ ∈ {0.25, 0.5, 1, 2, 5, 10, 20}, where we see little difference between the performance of the two priors. The important point, however, is that the score prior has the advantage of being proper.

σ Jeffreys Prior Score Prior
0.25 0.0723 0.0720
0.50 0.0721 0.0721
1 0.0721 0.0716
2 0.0719 0.0716
5 0.0722 0.0720
10 0.0723 0.0716
20 0.0722 0.0718
Table 1: Posterior MSE for σ, comparing Jeffreys prior against the prior based on scores, for data generated by a normal density with known mean and unknown standard deviation σ. Results are based on 250 samples of size n for each value of σ.

The coverage of the 95% posterior credible interval is shown in Table 2, where we also see very similar behaviour between the two priors, although both show an average coverage slightly lower than the nominal 0.95.

σ Jeffreys Prior Score Prior
0.25 0.91 0.91
0.50 0.92 0.91
1 0.91 0.91
2 0.93 0.91
5 0.93 0.91
10 0.90 0.92
20 0.90 0.93
Table 2: Posterior coverage of the 95% credible interval for σ, comparing Jeffreys prior against the prior based on scores, for data generated by a normal density with known mean and unknown standard deviation σ. Results are based on 250 samples of size n for each value of σ.
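The experiment just described can be reproduced in outline as follows: a random-walk Metropolis sampler on log σ under the score prior (2.2) with c = 1 and a zero-mean normal likelihood. The sample size, the true σ and the proposal scale below are our illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)

    def log_post(log_s, y):
        s = np.exp(log_s)
        loglik = -y.size * log_s - 0.5 * np.sum(y**2) / s**2
        logprior = -2.0 * np.log1p(s)                 # log of 1/(1+sigma)^2
        return loglik + logprior + log_s              # + Jacobian for sigma = exp(log_s)

    def posterior_draws(y, iters=6000, burn=1000, thin=10, step=0.3):
        ls, lp, out = 0.0, log_post(0.0, y), []
        for i in range(iters):
            prop = ls + step * rng.normal()
            lpp = log_post(prop, y)
            if np.log(rng.uniform()) < lpp - lp:
                ls, lp = prop, lpp
            if i >= burn and (i - burn) % thin == 0:
                out.append(np.exp(ls))
        return np.array(out)

    sigma_true, n = 2.0, 100                          # illustrative values
    draws = [posterior_draws(rng.normal(0, sigma_true, n)) for _ in range(250)]
    means = np.array([d.mean() for d in draws])
    rmse = np.sqrt(np.mean((means - sigma_true)**2)) / sigma_true
    cover = np.mean([np.quantile(d, .025) <= sigma_true <= np.quantile(d, .975)
                     for d in draws])
    print(rmse, cover)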

To illustrate the frequentist properties of the prior in (2.4), we have compared it to a flat prior, π(μ) ∝ 1, in making inference for the location parameter μ of a log-normal density with μ unknown and the scale parameter known. Similarly to the previous case, we have drawn 250 samples of size n and computed the MSE and the coverage of the 95% posterior credible interval. The values of μ considered were from the set {0, 1, 5, 10, 50, 100}. Table 3 shows the MSE for the two priors, where we see that, apart from a small difference at μ = 0, the two priors perform in a very similar fashion.

μ Jeffreys Prior Score Prior
0 0.0114 0.0007
1 0.0086 0.0091
5 0.0085 0.0085
10 0.0085 0.0085
50 0.0085 0.0087
100 0.0085 0.0087
Table 3: Posterior MSE for μ, comparing the flat (Jeffreys) prior against the prior based on scores, for data generated from a log-normal density with unknown location parameter μ and known scale parameter. Results are based on 250 samples of size n for each value of μ.

The coverage of the 95% posterior credible interval for μ is shown in Table 4, where we can see very similar behaviour for the two priors, with the exception of μ = 0, although both coverage levels are perfectly acceptable.

μ Jeffreys Prior Score Prior
0 0.92 1.00
1 0.97 0.98
5 0.97 0.98
10 0.97 0.98
50 0.97 0.98
100 0.97 0.98
Table 4: Posterior coverage of the 95% credible interval for μ, comparing the flat (Jeffreys) prior against the prior based on scores, for data generated from a log-normal density with unknown location parameter μ and known scale parameter. Results are based on 250 samples of size n for each value of μ.

The general conclusion of the two experiments above is that the prior obtained by setting the score function to zero exhibits tails which are sufficiently heavy to deliver good frequentist performance even for large parameter values. These properties are comparable to those of Jeffreys prior, which is well known for being an objective prior yielding good frequentist properties of the posterior. The advantage of our prior is that it is always proper.

3 Mixture models

A major area of challenge for objective priors is finite mixture models, where observations are assumed to be generated by the model

f(x | θ, w) = Σ_{k=1}^{K} w_k f_k(x | θ_k),   (3.1)

for densities f₁, …, f_K, where θ = (θ₁, …, θ_K) is the vector of parameters characterising the densities and w = (w₁, …, w_K) are the component weights, with w_k ≥ 0 and Σ_k w_k = 1.

Given the ill-defined nature of the model in (3.1), as discussed in Grazian and Robert (2018), the use of improper priors for the parameters is not, in general, acceptable. In particular, taking the densities f_k to be location-scale distributions, Grazian and Robert (2018) show that, under certain circumstances, Jeffreys priors cannot be used, due to their improperness. For example, if all parameters are unknown (i.e. weights, location and scale parameters), then Jeffreys prior yields improper posteriors. Even in more restrictive circumstances the use of improper priors is troublesome: if only the location parameters are unknown, Jeffreys prior can still yield improper posteriors. These represent severe limitations for Bayesian analysis. The objective prior proposed in this paper therefore offers a clear solution to the above problem, avoiding reparametrisation or the addition of extra parameters, as proposed, for example, in Grazian and Robert (2018), the latter resulting in increased uncertainty.

3.1 Single sample

In this first simulation study, we illustrate the performance of the proposed prior on the following three-component normal mixture model, from which we have drawn a sample of size n:

(3.2)

In terms of prior distributions, we have assumed a symmetric Dirichlet prior with concentration parameters equal to one for the weights and, for the means and standard deviations of the Gaussian components, the proposed priors (2.4) on ℝ and (2.2) on ℝ⁺, respectively. We have also assumed prior independence, so the prior for the parameters of the k-th component has the form

π(μ_k, σ_k) = π(μ_k) π(σ_k),

where π(μ_k) is as in (2.4) and π(σ_k) is as in (2.2). The histogram of the sample, together with the true density, is shown in Fig. 2.

Figure 2: Histogram of the sample of size n from model (3.2), and density (red line) of the true model.

The analysis uses a Metropolis-within-Gibbs algorithm with a total of 60000 iterations, a burn-in of 10000 and a thinning of 100. The results are summarised in Table 5; we note that the true values all fall within the posterior credible intervals.

Component w_k μ_k σ_k
1 0.26 -10.2 1.1
(0.21, 0.33) (-10.5, -9.9) (0.9, 1.4)
2 0.66 0.0 1.3
(0.58,0.72) (-0.1, 0.2) (1.1, 1.5)
3 0.08 6.7 0.9
(0.04, 0.12) (6.2, 7.2) (0.6, 1.4)
Table 5: Posterior means and 95% credible intervals (in brackets) for a sample of size n from model (3.2).
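A compact sketch of such a Metropolis-within-Gibbs sampler is given below: allocations and weights are updated by Gibbs steps, and each mean and standard deviation by a random-walk Metropolis step under the priors (2.4) and (2.2) with c = 1. The proposal scales, iteration counts and illustrative data are ours; the exact specification of model (3.2) is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(1)

    def log_prior_mu(m):                              # prior (2.4), c = 1, on R
        return -np.log(2.0) - 2.0 * np.log1p(abs(m))

    def log_prior_sigma(s):                           # prior (2.2), c = 1, on (0, inf)
        return -2.0 * np.log1p(s) if s > 0 else -np.inf

    def gibbs_mixture(y, K, iters=2000, step=0.2):
        n = y.size
        w = np.full(K, 1.0 / K)
        mu = np.quantile(y, (np.arange(K) + 0.5) / K)  # spread-out initial means
        sig = np.full(K, y.std())
        keep = []
        for _ in range(iters):
            # 1. allocations z | rest: one categorical draw per observation
            logp = np.log(w) - np.log(sig) - 0.5 * ((y[:, None] - mu) / sig)**2
            p = np.exp(logp - logp.max(axis=1, keepdims=True))
            p /= p.sum(axis=1, keepdims=True)
            z = (p.cumsum(axis=1) > rng.uniform(size=(n, 1))).argmax(axis=1)
            # 2. weights | z: conjugate Dirichlet(1 + counts) update
            w = rng.dirichlet(1.0 + np.bincount(z, minlength=K))
            # 3. random-walk Metropolis for each mu_k and sigma_k
            for k in range(K):
                yk = y[z == k]
                def ll(m, s):
                    return -yk.size * np.log(s) - 0.5 * np.sum((yk - m)**2) / s**2
                m_new = mu[k] + step * rng.normal()
                if np.log(rng.uniform()) < (ll(m_new, sig[k]) + log_prior_mu(m_new)
                                            - ll(mu[k], sig[k]) - log_prior_mu(mu[k])):
                    mu[k] = m_new
                s_new = sig[k] + step * rng.normal()
                if s_new > 0 and np.log(rng.uniform()) < (
                        ll(mu[k], s_new) + log_prior_sigma(s_new)
                        - ll(mu[k], sig[k]) - log_prior_sigma(sig[k])):
                    sig[k] = s_new
            keep.append((w.copy(), mu.copy(), sig.copy()))
        return keep

    # illustrative data broadly resembling Table 5; not the paper's exact (3.2)
    y = np.concatenate([rng.normal(-10, 1.1, 50), rng.normal(0, 1.3, 130),
                        rng.normal(7, 0.9, 20)])
    samples = gibbs_mixture(y, K=3)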

3.2 Repeated sampling

To gain a more thorough understanding of the proposed prior, we have performed experiments under repeated sampling, taking into consideration different scenarios, which include different sample sizes and model structures. We have limited the analysis to mixtures of normal densities but, due to the properness of the prior, its implementation can be extended to any family of densities for the mixture components. We computed the posterior distribution over repeated samples of different sizes from mixture models with different numbers of components. For these illustrations, we report the results for the means and the standard deviations of each component, as estimated using the proposed prior. Note that the weights were not varied across scenarios, as they are not assigned the proposed prior.

Fig. 3 shows the boxplots of the posterior means for the means μ_k of the mixture components, while Fig. 4 shows the boxplots of the posterior means for the standard deviations σ_k. As one would expect, for a fixed number of components, the larger the sample size the less variability in the repeated estimates. Keeping the sample size fixed, we notice an increase in the variability of the estimates as the number of components grows, which is also expected.

Figure 3: Boxplots of posterior means of the component means μ_k, using the proposed prior, for different sample sizes (plotted by row) and numbers of components (plotted by column).
Figure 4: Boxplots of posterior means of the component standard deviations σ_k, using the proposed prior, for different sample sizes (plotted by row) and numbers of components (plotted by column).

3.3 Real data analysis

In this section we analyse the well-known galaxy data set, which contains the velocities of 82 galaxies in the Corona Borealis region. To support a particular theory about the formation of galaxies, the analysis aims to estimate the number of stellar populations. This is a benchmark data set, well investigated in the literature, for example in Escobar and West (1995), Richardson and Green (1997) and Grazian et al. (2019), among others. We consider the galaxy velocities as random variables distributed according to a mixture of K normal densities. The estimation of the number of components has proved to be delicate, with estimates ranging from 3 to 7, depending on factors such as the priors for the parameters and the Bayesian method used. The histogram of the data set with a superimposed density is presented in Fig. 5.

Figure 5: Histogram of the Galaxy data set with a smoothed density (red curve) superimposed.

To select the number of components in the mixture of normal densities, we have fitted models with K = 2, …, 8 components, computing the Deviance Information Criterion (DIC) under each model. The results are reported in Table 6. According to the computed index, the best model is the mixture with K = 4 components, which is in line with Grazian et al. (2019), and slightly more conservative than Richardson and Green (1997), where the number of components with non-zero weight is 5. Table 7 shows the posterior means for the parameters of the 4 estimated components.

K 2 3 4 5 6 7 8
DIC 476.72 425.20 371.48 413.68 446.45 446.54 458.81
Table 6: Deviance Information Criterion for mixtures with K = 2, …, 8 components.
Component w_k μ_k σ_k
1 0.32 19.16 0.71
2 0.31 23.59 3.46
3 0.29 22.01 4.59
4 0.10 9.31 0.58
Table 7: Posterior means of the parameters for the K = 4 components. The results are reported in descending order of the weights.
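For reference, a generic sketch of the DIC computation behind Table 6; it assumes posterior draws and a log-likelihood function are available, and uses the standard definition DIC = 2·mean(D) − D(posterior mean), with deviance D = −2 × log-likelihood. For mixtures, the deviance at the posterior mean must be treated with care owing to label switching; the sketch is generic rather than specific to model (3.1).

    import numpy as np

    def dic(param_draws, y, loglik):
        # param_draws: (S, d) array of posterior samples; loglik(theta, y) -> float
        devs = np.array([-2.0 * loglik(th, y) for th in param_draws])
        d_bar = devs.mean()                                  # posterior mean deviance
        d_hat = -2.0 * loglik(param_draws.mean(axis=0), y)   # deviance at posterior mean
        return 2.0 * d_bar - d_hat                           # equivalently d_bar + p_D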

4 Variance parameters in hierarchical models

In this section we discuss a well-known implementation of a hierarchical model that is considered, for example, in Gelman (2006). The basic two-level hierarchical model is as follows:

y_j | θ_j ∼ N(θ_j, σ_j²),  θ_j | μ, τ ∼ N(μ, τ²),  j = 1, …, J,

with the σ_j known. The model has parameters θ = (θ₁, …, θ_J), μ and τ. However, our interest in this paper is in τ only, noting that "regular" objective priors can be used on the remaining parameters, for example π(μ) ∝ 1. Although improper, this prior yields a proper posterior on the parameters.

The actual concern is the variance (scale) parameter τ: if we were to put an improper prior on it, then the corresponding posterior would, most likely, be improper as well. To compare with the proposed prior, we assign an inverse-gamma prior on the variance, with parameters set so as to define a very diffuse probability distribution. This is recommended, for example, in Spiegelhalter et al. (1994, 2003), where the prior is an inverse-gamma with both parameters equal to ε, with ε sufficiently small. We do not discuss in detail the appropriateness of the above choice, or of other alternatives; the reader can refer to Gelman (2006), for example, for a thorough discussion.

The data come from educational testing experiments, where the parameters θ_j represent the relative effects of Scholastic Aptitude Test coaching programs in different schools. In this example, the parameter τ represents the between-school variability (standard deviation) of the effects. Table 8 shows the data.

School Effect y_j Standard error σ_j
A 28 15
B 8 10
C -3 16
D 7 11
E -1 9
F 1 11
G 18 10
H 12 18
Table 8: Observed effects y_j of special preparation on SAT scores in eight randomised experiments, where the σ_j are the standard errors of the effect estimates.
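To make the comparison concrete, here is a sketch of a Gibbs sampler, with a Metropolis step for τ under the score prior (2.2) with c = 1, using the data in Table 8. The θ_j and μ updates are the standard conjugate ones; step sizes and iteration counts are our choices.

    import numpy as np

    y = np.array([28., 8., -3., 7., -1., 1., 18., 12.])    # effects from Table 8
    s = np.array([15., 10., 16., 11., 9., 11., 10., 18.])  # standard errors
    rng = np.random.default_rng(2)

    J, iters = y.size, 20000
    mu, tau = 0.0, 1.0
    taus = []
    for _ in range(iters):
        # theta_j | rest: conjugate normal update
        prec = 1 / s**2 + 1 / tau**2
        theta = rng.normal((y / s**2 + mu / tau**2) / prec, np.sqrt(1 / prec))
        # mu | theta, tau: flat prior gives N(mean(theta), tau^2 / J)
        mu = rng.normal(theta.mean(), tau / np.sqrt(J))
        # tau: random-walk Metropolis on log(tau) with prior 1/(1+tau)^2
        def lp(t):
            return (-J * np.log(t) - 0.5 * np.sum((theta - mu)**2) / t**2
                    - 2.0 * np.log1p(t) + np.log(t))       # last term: Jacobian
        t_new = tau * np.exp(0.5 * rng.normal())
        if np.log(rng.uniform()) < lp(t_new) - lp(tau):
            tau = t_new
        taus.append(tau)

    taus = np.array(taus[iters // 2:])                     # discard burn-in
    print(taus.mean(), np.quantile(taus, [.025, .975]))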

We have compared the marginal posteriors obtained using the inverse-gamma prior, with ε small, and the proposed prior in (2.2) with c = 1. The histograms of the marginal posteriors are in Figure 6, where we note similar results. The summary statistics of the posteriors are reported in Table 9, where we note a less informative distribution when the proposed prior is employed. This is expected, as the inverse-gamma distribution is considered a relatively informative one (Gelman, 2006).

Figure 6: Histogram of the marginal posterior for τ for the schools problem when using an inverse-gamma prior (left) and the proposed prior based on scores (right).
Prior Mean 95% C.I.
Inverse-Gamma 3.8 (0.3, 23.9)
Score-based 2.3 (0.2, 57.8)
Table 9: Posterior statistics for the marginal distribution of τ for the schools problem.

5 Discussion

In this paper we have derived a class of objective prior distributions that has the appealing properties of being proper and heavy-tailed. The priors have been obtained by exploiting a straightforward approach, proposed here, to the construction of score functions. In detail, using a convex function g, we can find the score function involving the first two derivatives of the density using (1.5). The Hyvärinen score arises with g(t) = t²; we have instead used g(t) = 1/t², and employed the resulting score to construct objective prior distributions using the methodology introduced in Leisen et al. (2020).

The class of priors is heavy-tailed, behaving as θ⁻² for large θ; this is immediately obvious, as the prior on ℝ⁺ is a Lomax distribution with shape parameter 1. In this respect, it behaves similarly to standard objective priors, but comes without the problem of being improper. The benefit of using a proper prior is that the posterior is automatically proper, and so properness does not need to be checked.

We have shown that, when compared to Jeffreys prior on simulated data, the frequentist performance of the prior distribution derived from score functions is nearly equivalent. In addition, we have shown, on both simulated and real data, that the proposed prior is suitable for a key scenario, mixture models, where improper priors (e.g. Jeffreys and reference priors) are not suitable or are yet to be found. We have also illustrated the prior on a common problem for hierarchical models, that is, assigning an objective prior to the variance parameter.

As a final point, we briefly discuss the case where a prior is needed on a multidimensional parameter space. Say we have a model with p parameters, that is θ = (θ₁, …, θ_p), where θ_i ∈ Θ_i for i = 1, …, p. We also assume that the one-dimensional space for each parameter is either ℝ or ℝ⁺. Then, outside some specific statistical models such as regression models or graphical models, a common practice for assigning objective priors on θ is

π(θ) = ∏_{i=1}^{p} π(θ_i).

In other words, the parameters are assumed to be independent a priori, so the joint prior distribution is the product of the marginal priors on each parameter. We can then set π(θ_i) to be either (2.2) or (2.4), for i = 1, …, p, depending on Θ_i.
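A sketch of this construction as code, where the encoding of each coordinate's support is our own convention:

    import numpy as np

    def log_joint_prior(theta, support):
        # support[i] is '+' for a parameter on (0, inf) and 'R' for the real line
        lp = 0.0
        for th, sup in zip(theta, support):
            if sup == '+':
                lp += -2.0 * np.log1p(th) if th > 0 else -np.inf   # prior (2.2)
            else:
                lp += -np.log(2.0) - 2.0 * np.log1p(abs(th))       # prior (2.4)
        return lp

    print(log_joint_prior([0.5, -1.3, 2.0], ['+', 'R', '+']))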

Appendix A

A.1 Information and Bregman divergence

Information for a density function f is assumed to be convex, based on the obvious notion that averaging reduces information (Topsøe, 2001). Hence, if we write I(f) = ∫ φ(f) dx, then φ is assumed convex. For example, φ(u) = u log u is convex, as is φ(u, v) = v²/u, the latter being jointly convex in (u, v).

With a convex function we can set up a Bregman divergence. First, recall that a Bregman divergence (or Bregman distance) is a measure of the distance between two points, and it is defined in terms of a convex function. In particular, when the two points are probability distributions, then we have a statistical distance. Probably the most elementary Bregman distance is the (squared) Euclidean distance.

Let us start by considering vectors. Let a and b be vectors in ℝⁿ. The Bregman divergence with convex function φ is given by

D_φ(a, b) = φ(a) − φ(b) − ⟨a − b, ∇φ(b)⟩,

where the right-most term is the inner product

⟨a − b, ∇φ(b)⟩ = Σ_{i=1}^{n} (a_i − b_i) ∂φ(b)/∂b_i.

So D_φ(a, b) ≥ 0 and, for strictly convex φ, equality holds if and only if a = b.
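A numerical sketch of the vector Bregman divergence, using a finite-difference gradient; with φ(x) = ‖x‖² it returns the squared Euclidean distance mentioned above. The helper function is ours.

    import numpy as np

    def bregman(phi, a, b, eps=1e-6):
        a, b = np.asarray(a, float), np.asarray(b, float)
        grad = np.array([(phi(b + eps * e) - phi(b - eps * e)) / (2 * eps)
                         for e in np.eye(b.size)])    # numerical gradient of phi at b
        return phi(a) - phi(b) - (a - b) @ grad

    sq = lambda x: np.dot(x, x)                       # phi(x) = ||x||^2
    print(bregman(sq, [1., 2.], [0., 0.]))            # = ||a - b||^2 = 5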

In this work, we will focus on such divergences where the arguments are probability density functions (Stummer and Vajda, 2012). Hence, in the one-dimensional case, with φ a convex function and f a density function, define

I(f) = ∫ φ(f(x)) dx,   (A.1)

and the divergence between f and g as

D(f, g) = ∫ [ φ(f) − φ(g) − (f − g) φ′(g) ] dx.   (A.2)

For example, if φ(u) = u log u, which is convex in u, then, using ∫ (f − g) dx = 0,

D(f, g) = ∫ f log(f/g) dx,

which is the well known Kullback–Leibler divergence. Other divergences arise for other convex choices of φ.

Note that we can write

D(f, g) = I(f) + ∫ f(x) [ −φ′(g(x)) ] dx + ∫ [ g φ′(g) − φ(g) ] dx,

and so we see that

S(g) = −φ′(g)

is a score function, the Bregman score, and is local if φ satisfies u φ′(u) − φ(u) = c u for some constant c, so that the final integral above does not depend on g. In this case the only solution, up to scaling and affine terms, is φ(u) = u log u, yielding the log-score. As mentioned above, the log-score is the sole local and proper score function, that is, one depending on g(x) only. So, the Bregman divergence and the Bregman score confirm this, together with the results in Bernardo (1979). We will see this is also the case with order-2 scores.

The extension of the Bregman divergence and the Bregman score that we discuss in this paper follows from a two-dimensional Bregman divergence with arguments (u, v), where u = f(x) and v = f′(x), and some two-dimensional convex function φ. It is defined by

D(f, g) = ∫ [ φ(f, f′) − φ(g, g′) − (f − g) φ_u(g, g′) − (f′ − g′) φ_v(g, g′) ] dx.   (A.3)

For example, if φ(u, v) = v²/u, which is easily shown to be convex, we get

D(f, g) = ∫ f ( f′/f − g′/g )² dx,

known as the Fisher information divergence; see, for example, Villani (2008).
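This identity can be confirmed numerically: integrating the two-dimensional Bregman divergence with φ(u, v) = v²/u over (f, f′) and (g, g′) matches the Fisher divergence. The two normal densities below are our choices; for N(0,1) against N(1,1) both sides equal 1.

    import numpy as np

    x = np.linspace(-12, 12, 4001)
    dx = x[1] - x[0]
    f = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi);       fp = -x * f        # N(0,1)
    g = np.exp(-0.5 * (x - 1)**2) / np.sqrt(2 * np.pi); gp = -(x - 1) * g  # N(1,1)

    phi   = lambda u, v: v**2 / u
    phi_u = lambda u, v: -(v / u)**2
    phi_v = lambda u, v: 2 * v / u

    breg = phi(f, fp) - phi(g, gp) - (f - g) * phi_u(g, gp) - (fp - gp) * phi_v(g, gp)
    fisher = f * (fp / f - gp / g)**2
    print(np.sum(breg) * dx, np.sum(fisher) * dx)       # both approximately 1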

A.2 Divergences and scores

The two most well known measures of information are the differential Shannon information and the Fisher information; see, for example, MacKay (2003). The associated divergences are the Kullback–Leibler and Fisher divergences, respectively, with corresponding score functions the logarithmic and the Hyvärinen. The connection between divergence, information and score functions can be understood from the following:

D(f, g) = I(f) + ∫ f(x) S(g)(x) dx.   (A.4)

Here D denotes the divergence, I the information and S the score. From this it is clear that ∫ f S(g) dx is minimized at g = f. For (A.4) based on the Kullback–Leibler divergence, we have

I(f) = ∫ f log f dx  and  S(g) = −log g.

For (A.4) based on the Fisher information, we have

I(f) = ∫ f (f′/f)² dx  and  S(g) = 2 g″/g − (g′/g)².   (A.5)

From (A.1), the choice φ(u) = u log u gives I(f) = ∫ f log f dx, so the Shannon information, and with it the log-score, arises from the one-dimensional construction; the Fisher information, which involves f′, requires the two-dimensional extension.

More generally, using the two-dimensional Bregman divergence in (A.3) in place of (A.2), we obtain

D(f, g) = I(f) + ∫ f(x) S(g)(x) dx + ∫ [ φ(g, g′) − g φ_u(g, g′) − g′ φ_v(g, g′) ] dx,

where the score is

S(g) = −φ_u(g, g′) + d/dx [ φ_v(g, g′) ].   (A.6)

The score has been obtained by an integration by parts, assuming that f φ_v(g, g′) vanishes at the boundary points. To ensure the score is local, i.e. that the right-most integral above does not depend on g, we use the following condition on φ,

u φ_u(u, v) + v φ_v(u, v) = φ(u, v).   (A.7)

Hence, the proposed class of score functions, which effectively is the class introduced in Parry et al. (2012), is given by

S(g) = −φ_u(g, g′) + d/dx [ φ_v(g, g′) ],  with φ satisfying (A.7).   (A.8)

The derivation just presented is arguably much simpler than the one used in Parry et al. (2012), as we have avoided variational analysis and the use of differential operators.

For a specific example, consider the class of convex functions satisfying (A.7) given by

φ(u, v) = u g(v/u),   (A.9)

for some convex function g. The convexity of g implies the convexity of the function φ in (A.9), as the following Lemma 1 shows.

Lemma 1.

The function φ(u, v) = u g(v/u), for u > 0, is convex when g is convex.

Proof.

Writing t = v/u, the Hessian matrix corresponding to φ is seen to be

H = (g″(t)/u) [ t² , −t ; −t , 1 ],

which is positive semidefinite when g″(t) ≥ 0 and u > 0. Given that g is assumed to be convex, the condition g″(t) ≥ 0 is true. ∎
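A quick symbolic check of the lemma for the concrete convex choice g(t) = t⁴, which is our example: the Hessian of φ(u, v) = u g(v/u) is singular, as expected from 1-homogeneity, and has positive trace, hence it is positive semidefinite.

    import sympy as sp

    u, v = sp.symbols('u v', positive=True)
    phi = u * (v / u)**4                      # phi(u,v) = u g(v/u) with g(t) = t^4

    H = sp.hessian(phi, (u, v))
    assert sp.simplify(H.det()) == 0          # singular: phi is 1-homogeneous
    assert sp.simplify(H.trace()) > 0         # positive trace, so psd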

Given the above form for φ, we are now able to find the divergence, the information and the score. However, before proceeding, we first note that, with t = v/u,

u φ_u + v φ_v = u [ g(t) − t g′(t) ] + v g′(t) = u g(t) = φ(u, v).

That is, φ satisfies the condition in (A.7). In fact, we have

u φ_u(u, v) + v φ_v(u, v) = φ(u, v)   (A.10)

for all (u, v).

A.3 Higher order score functions

In this section we look at score functions using an arbitrary number of derivatives. So let φ now be a convex function of m + 1 arguments,

φ = φ(u₀, u₁, …, u_m),

where, when φ is evaluated at a density, the subscript indicates the order of differentiation; for example, f₀ = f and f₁ = f′. The Bregman divergence can then be written as

D(f, g) = ∫ [ φ(f₀, …, f_m) − φ(g₀, …, g_m) − Σ_{j=0}^{m} (f_j − g_j) φ_j(g₀, …, g_m) ] dx,

where φ_j denotes the partial derivative of φ with respect to its j-th argument. If we have

Σ_{j=0}^{m} u_j φ_j(u₀, …, u_m) = φ(u₀, …, u_m),   (A.11)

and the derivatives vanish at the boundary values, then using multiple integrations by parts we get

D(f, g) = I(f) + ∫ f S(g) dx.

Hence, the score function is given by

S(g) = Σ_{j=0}^{m} (−1)^{j+1} (dʲ/dxʲ) φ_j(g₀, …, g_m).   (A.12)

Note that, if we have m = 0 and φ(u₀) = u₀ log u₀, we recover the log-score, up to an additive constant. If we have m = 1 and φ(u₀, u₁) = u₁²/u₀, we recover the Hyvärinen score function.

A general form of convex function satisfying the requirement (A.11) is

φ(u₀, u₁, …, u_m) = Σ_{j=1}^{m} u₀ g_j(u_j/u₀),

where the g_j form a set of convex functions.

A.4 Connection with Parry et al. (2012)

In A.2 we have shown that it is possible to derive the class of score functions using convex functions and the Bregman divergence only. Here we show that the properties we have used imply those used by Parry et al. (2012).

First we show that a function φ satisfying (A.7) is 1-homogeneous.

Lemma 2.

Let φ be such that u φ_u + v φ_v = φ. Then, for any λ > 0, it is the case that φ(λu, λv) = λ φ(u, v).

Proof.

Let h(λ) = φ(λu, λv)/λ. Thus, h′(λ) = [ λu φ_u(λu, λv) + λv φ_v(λu, λv) − φ(λu, λv) ] / λ² = 0, by (A.7), which implies φ(λu, λv) = λ φ(u, v). ∎

Next we show that the class of score functions (A.8), together with the condition (A.7), implies (1.3).

Lemma 3.

If a function φ satisfies the condition in (A.7), and a score is given by (A.8), then the score satisfies equation (1.3).

Proof.

The proof is a simple matter of algebra and calculus. It requires the key observation that differentiating the identity (A.7) with respect to u and v yields

u φ_uu + v φ_uv = 0  and  u φ_uv + v φ_vv = 0.

Given these, the partial derivatives of the score (A.8) with respect to g, g′ and g″ can be written in terms of φ and its second derivatives. Combining all the resulting expressions, and substituting into the left-hand side of (1.3), yields the result. ∎

Finally, in this Appendix, we note that if φ is 1-homogeneous, with φ(u, v) = u g(v/u) for convex g, then φ is convex. The result is quite straightforward, and is achieved by showing that the Hessian of φ is positive semidefinite, as in Lemma 1.

Here we make the connection between equation (39) in Parry et al. (2012) and the score function in (A.12). The main result is to show that if the score takes the form (A.12), then the relevant Euler–Lagrange equation is satisfied. To this end, define the operators as in Parry et al. (2012).

Theorem A.1.

If the score S is given by (A.12), with φ satisfying (A.11), then S satisfies the relevant Euler–Lagrange equation.

Proof.

To prove the theorem, we consider the properties of the differential operators, as discussed in Section 5 of Parry et al. (2012). The stated identities follow from these properties, and the conclusion is then immediate. ∎

References

  • Berger and Pericchi (1996) Berger, J. O. and Pericchi, L. R. (1996). The intrinsic Bayes factor for model selection and prediction. J. Amer. Statist. Assoc. 91, 109–122.
  • Bernardo (1979) Bernardo, J. M. (1979). Expected information as expected utility (with discussion). Annals of Statistics, 7, 686–690.
  • Bregman (1967) Bregman, L. M. (1967). The relaxation method of finding the common points of convex sets and its application to the solution of problems in convex programming. USSR Computational Mathematics and Mathematical Physics, 7, 200–217.
  • Ehm and Gneiting (2012) Ehm, W. and Gneiting, T. (2012). Local proper scoring rules of order 2. Annals of Statistics 40, 609–637.
  • Escobar and West (1995) Escobar, M. and West, M. (1995). Bayesian prediction and density estimation. J. Amer. Statist. Assoc. 90, 577–588.
  • Grazian and Robert (2018) Grazian, C. and Robert, C. P. (2018). Jeffreys priors for mixture estimation: properties and alternatives. Computational Statistics & Data Analysis 121, 149–163.
  • Gelman (2006) Gelman, A. (2006). Prior distributions for variance parameters in hierarchical models. Bayesian Analysis 1, 515–533.
  • Grazian et al. (2019) Grazian, C., Villa, C. and Liseo, B. (2019). On a loss-based prior for the number of components in mixture models. Statistics and Probability Letters 158, 1–7.
  • Hyvärinen (2005) Hyvärinen, A. (2005). Estimation of non-normalized statistical models by score matching. Journal of Machine Learning Research 6, 695–709.
  • Jeffreys (1961) Jeffreys, H. (1961) Theory of Probability. New York: Oxford University Press, 3rd edition.
  • Leisen et al. (2020) Leisen, F., Villa, C. and Walker, S. G. (2020). On a class of objective priors from scoring rules (with discussion). Bayesian Analysis 15, 1345–1423.
  • Lomax (1954) Lomax, K. S. (1954). Business failures; another example of the analysis of failure data. Journal of the American Statistical Association 49, 847–852.
  • Kullback and Leibler (1951) Kullback, S. and Leibler, R. A. (1951). On information and sufficiency. Annals of Mathematical Statistics 22, 79–86.
  • MacKay (2003) MacKay, D. J. C. (2003). Information Theory, Inference, and Learning Algorithms. Cambridge University Press.
  • O’Hagan (1995) O’Hagan, A. (1995). Fractional Bayes Factors for model comparison. J. Royal Statist. Soc., B 57, 99–138.
  • Parry et al. (2012) Parry, M., Dawid, A. P. and Lauritzen S. (2012). Proper local scoring rules. Annals of Statistics 40, 561–592.
  • Richardson and Green (1997) Richardson, S. and Green, P. J. (1997). On Bayesian analysis of mixtures with an unknown number of components (with discussion). J. Roy. Statist. Soc. Ser. B 59, 731–792.
  • Spiegelhalter et al. (1994, 2003) Spiegelhalter, D. J., Thomas, A., Best, N. G., Gilks, W. R., and Lunn, D. (1994, 2003). BUGS: Bayesian inference using Gibbs sampling. MRC Biostatistics Unit, Cambridge, England. www.mrc-bsu.cam.ac.uk/bugs/
  • Stummer and Vajda (2012) Stummer, W. and Vajda, I. (2012). On Bregman distances and divergences of probability measures. IEEE Transactions on Information Theory 58, 1277–1288.
  • Topsøe (2001) Topsøe, F. (2001) Basic concepts, identities and inequalities – the basic toolkit of information theory. Entropy 3, 162–190.
  • Villani (2008) Villani, C. (2008). Optimal Transport, Old and New. Springer-Verlag, Berlin Heidelberg.