On the marginal likelihood and cross-validation

05/21/2019 · Edwin Fong et al. · University of Oxford

In Bayesian statistics, the marginal likelihood, also known as the evidence, is used to evaluate model fit as it quantifies the joint probability of the data under the prior. In contrast, non-Bayesian models are typically compared using cross-validation on held-out data, either through k-fold partitioning or leave-p-out subsampling. We show that the marginal likelihood is formally equivalent to exhaustive leave-p-out cross-validation averaged over all values of p and all held-out test sets when using the log posterior predictive probability as the scoring rule. Moreover, the log posterior predictive is the only coherent scoring rule under data exchangeability. This offers new insight into the marginal likelihood and cross-validation and highlights the potential sensitivity of the marginal likelihood to the setting of the prior. We suggest an alternative approach using aggregate cross-validation following a preparatory training phase. Our work has connections to prequential analysis and intrinsic Bayes factors but is motivated through a different course.

1 Introduction

Probabilistic model evaluation and selection is an important task in statistics and machine learning, particularly when multiple models are under initial consideration. In the non-Bayesian literature, models are typically compared using out-of-sample performance criteria such as cross-validation (Geisser and Eddy, 1979; Shao, 1993; Vehtari and Lampinen, 2002) or predictive information criteria (Watanabe, 2010). Computing the leave-$p$-out cross-validation score requires $\binom{n}{p}$ test set evaluations for $n$ data points, which in most cases is computationally unviable, and hence approximations such as $k$-fold cross-validation are often used instead (Geisser, 1975). A survey is provided by Arlot and Celisse (2010), and a Bayesian perspective on cross-validation by Vehtari and Ojanen (2012) and Gelman et al. (2014).

In Bayesian statistics, the marginal likelihood or model evidence is the natural measure of model fit. For a model with likelihood function or sampling distribution $f_{\theta}(y)$ parameterised by $\theta \in \Theta$, a prior $\pi(\theta)$, and observations $y_{1:n} = \{y_1, \ldots, y_n\}$, the marginal likelihood or prior predictive is defined as
$$ p(y_{1:n}) = \int_{\Theta} f_{\theta}(y_{1:n}) \, \pi(\theta) \, d\theta. \qquad (1) $$
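As a concrete check of (1), the following minimal sketch compares a simple Monte Carlo estimate of the marginal likelihood, averaging the likelihood over prior draws, against the closed form available in a conjugate case. The normal-normal model and its parameter values below are illustrative choices of ours, not an example from the paper.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(0)

# Illustrative conjugate model: y_i | theta ~ N(theta, sigma2), theta ~ N(mu0, tau2).
mu0, tau2, sigma2 = 0.0, 4.0, 1.0
n = 20
y = rng.normal(1.0, np.sqrt(sigma2), size=n)

# Closed-form log marginal likelihood: y ~ N(mu0 * 1, sigma2 * I + tau2 * 1 1^T).
cov = sigma2 * np.eye(n) + tau2 * np.ones((n, n))
log_ml_exact = multivariate_normal(mean=np.full(n, mu0), cov=cov).logpdf(y)

# Simple Monte Carlo estimate of (1): average the likelihood over draws from the prior.
B = 100_000
theta = rng.normal(mu0, np.sqrt(tau2), size=B)
loglik = norm.logpdf(y[:, None], loc=theta[None, :], scale=np.sqrt(sigma2)).sum(axis=0)
log_ml_mc = np.logaddexp.reduce(loglik) - np.log(B)   # log of the Monte Carlo average

print(log_ml_exact, log_ml_mc)   # the two values agree closely
```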

The marginal likelihood can be used to calculate the posterior probability of a model given the data, $p(M \mid y_{1:n}) \propto p(y_{1:n} \mid M)\, p(M)$, as it is the probability of the data being generated under the prior when the model is correctly specified (Robert, 2007, Chapter 7). The ratio of marginal likelihoods between two models is known as the Bayes factor, which quantifies how observing the data updates the prior odds between the models to posterior odds. The marginal likelihood can be difficult to compute if the likelihood is peaked with respect to the prior, although Monte Carlo solutions exist; see Robert and Wraith (2009) for a survey. Under vague priors, the marginal likelihood may also be highly sensitive to the prior dispersion even if the posterior is not; a well-known example is Lindley's paradox (Lindley, 1957; O'Hagan and Forster, 2004; Robert, 2013). As a result, approximations to it such as the Bayesian information criterion (Schwarz, 1978) or the deviance information criterion (Spiegelhalter et al., 2002) are widely used; see also Gelman et al. (2014).

For our work, it is useful to note from the properties of probability distributions that the log marginal likelihood can be written as a sum of log conditionals,
$$ \log p(y_{1:n}) = \sum_{i=1}^{n} \log p(y_i \mid y_{1:i-1}), \qquad (2) $$
where $p(y_i \mid y_{1:i-1}) = \int_{\Theta} f_{\theta}(y_i)\, \pi(\theta \mid y_{1:i-1})\, d\theta$ is the posterior predictive for $y_i$, with $p(y_1 \mid y_{1:0}) = p(y_1)$ the prior predictive, and this representation holds for any permutation of the data indices.
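The identity (2), and its invariance to the ordering of the data, can be checked numerically. The minimal sketch below uses an illustrative conjugate normal-normal model of our own choosing, where the posterior predictives are available in closed form, and sums them prequentially.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(1)

# Illustrative conjugate model: y_i | theta ~ N(theta, sigma2), theta ~ N(mu0, tau2).
mu0, tau2, sigma2 = 0.0, 4.0, 1.0
n = 12
y = rng.normal(1.0, 1.0, size=n)

def log_posterior_predictive(y_new, y_train):
    """log p(y_new | y_train) in closed form for the conjugate normal-normal model."""
    m = len(y_train)
    var = 1.0 / (1.0 / tau2 + m / sigma2)                    # posterior variance of theta
    mean = var * (mu0 / tau2 + np.sum(y_train) / sigma2)     # posterior mean of theta
    return norm.logpdf(y_new, loc=mean, scale=np.sqrt(var + sigma2))

def prequential_log_score(y_seq):
    """Right-hand side of (2): sum of log posterior predictives along the sequence."""
    return sum(log_posterior_predictive(y_seq[i], y_seq[:i]) for i in range(len(y_seq)))

cov = sigma2 * np.eye(n) + tau2 * np.ones((n, n))
log_ml = multivariate_normal(mean=np.full(n, mu0), cov=cov).logpdf(y)

print(log_ml)
print(prequential_log_score(y))                    # equals the log marginal likelihood
print(prequential_log_score(rng.permutation(y)))   # unchanged under permutation of the data
```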

While Bayesian inference formally assumes that the model space captures the truth, in the model-misspecified or so-called M-open scenario (Bernardo and Smith, 2009, Chapter 6) the log marginal likelihood can simply be interpreted as a predictive sequential, or prequential (Dawid, 1984), scoring rule of the form $S(y_{1:n}) = \sum_{i=1}^{n} s(y_i; y_{1:i-1})$ with score function $s(y_i; y_{1:i-1}) = \log p(y_i \mid y_{1:i-1})$. This interpretation of the log marginal likelihood as a predictive score (Kass and Raftery, 1995; Gneiting and Raftery, 2007; Bernardo and Smith, 2009, Chapter 6) has resulted in alternative scoring functions for Bayesian model selection (Dawid and Musio, 2014, 2015; Watson and Holmes, 2016; Shao et al., 2018), and provides insight into the relationship between the marginal likelihood and posterior predictive methods (Vehtari and Ojanen, 2012). Key et al. (1999) considered cross-validation from an M-open perspective and introduced a mixture utility for model selection that trades off fidelity to data with predictive power.

2 Uniqueness of the marginal likelihood under coherent scoring

To begin, we prove that under an assumption of data exchangeability, the log posterior predictive is the only prequential scoring rule that guarantees coherent model evaluation. The coherence property under exchangeability, where the indices of the data points carry no information, refers to the principle that identical models on seeing the same data should be scored equally irrespective of data ordering.

In demonstrating the uniqueness of the log posterior predictive, it is useful to introduce the notion of a general Bayesian model (Bissiri et al., 2016), which is a framework for Bayesian updating without the requirement of a true model. Define a parameter of interest by
$$ \theta_0 = \operatorname*{arg\,min}_{\theta \in \Theta} \int \ell(\theta, y) \, dF_0(y), \qquad (3) $$
where $F_0$ is the unknown true sampling distribution giving rise to the data, and $\ell(\theta, y)$ is a loss function linking an observation $y$ to the parameter $\theta$. Bissiri et al. (2016) argue that after observing $y_{1:n}$, a coherent update of beliefs about $\theta$ from a prior $\pi(\theta)$ to the posterior $\pi_G(\theta \mid y_{1:n})$ exists and must take the form
$$ \pi_G(\theta \mid y_{1:n}) \propto \exp\{-w\, \ell(\theta, y_{1:n})\}\, \pi(\theta), \qquad (4) $$
where $\ell(\theta, y_{1:n}) = \sum_{i=1}^{n} \ell(\theta, y_i)$ is an additive loss function and $w > 0$ is a loss scale parameter; see Holmes and Walker (2017) and Lyddon et al. (2019) on the selection of $w$. For $\ell(\theta, y) = -\log f_{\theta}(y)$ and $w = 1$, we obtain traditional Bayesian updating without assuming the model is true for some value of $\theta$. From (3), M-open Bayesian inference is then simply targeting the value of $\theta$ that minimizes the Kullback-Leibler divergence between $F_0$ and $f_{\theta}$. The form (4) is uniquely implied by the assumptions in Theorem 1 of Bissiri et al. (2016), and we now focus on the coherence property of the update rule. An update function $\psi\{\ell(\theta, y), \pi(\theta)\}$ is coherent if, for inputs $y_1, y_2$, it satisfies
$$ \psi\big[\ell(\theta, y_2),\, \psi\{\ell(\theta, y_1), \pi(\theta)\}\big] = \psi\{\ell(\theta, y_1) + \ell(\theta, y_2),\, \pi(\theta)\}. $$
This coherence condition is natural under an assumption of exchangeability, as we expect posterior inferences about $\theta$ to be unchanged whether we observe $y_1, y_2$ in any order or all at once, as is the case in traditional Bayesian updating.
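The coherence of the update (4), sequential updating agreeing with a single batch update, is easy to verify numerically. The sketch below does so on a discretised parameter grid; the grid, loss function, loss scale and data values are illustrative choices rather than anything specified in the paper.

```python
import numpy as np

# Numerical check of the coherence of the general Bayesian update (4) on a
# discretised parameter grid; the grid, loss and data values are illustrative.
theta = np.linspace(-3.0, 3.0, 501)            # parameter grid
prior = np.exp(-0.5 * theta**2)
prior /= prior.sum()                            # discretised N(0, 1) prior
w = 0.7                                         # loss scale

def loss(theta, y):
    return np.abs(theta - y)                    # an arbitrary loss, not a negative log-likelihood

def update(prior, total_loss):
    """General Bayesian update: posterior proportional to exp(-w * loss) * prior."""
    post = prior * np.exp(-w * total_loss)
    return post / post.sum()

y1, y2 = 0.8, -1.3
sequential = update(update(prior, loss(theta, y1)), loss(theta, y2))
batch = update(prior, loss(theta, y1) + loss(theta, y2))
print(np.max(np.abs(sequential - batch)))       # ~1e-17: sequential and batch updates coincide
```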

We now extend this coherence condition to general Bayesian model choice, where the goal is to evaluate the fit of the observed data $y_{1:n}$ under the general Bayesian model class $\{\ell(\theta, \cdot), \pi(\theta)\}$. We treat $w$ as a parameter outside of the model specification, as there are principled methods to select it from the model, prior and data. We define the log posterior predictive score $s_G(\tilde{y}; y_{1:n})$ for a test point $\tilde{y}$ through a continuous, monotonically decreasing scoring function $\alpha(\cdot)$ that transforms the loss $\ell$ into a predictive score. We then define the cumulative prequential log score as
$$ S_G(y_{1:n}) = \sum_{i=1}^{n} s_G(y_i; y_{1:i-1}), $$
where $y_{1:0} = \emptyset$. The cumulative prequential log score sums the log posterior predictive score of each consecutive data point in a prequential manner, where a large score indicates that the model is predicting well. An intuitive choice for the scoring function might be the negative loss, but we will see that this violates coherency, as defined below.

Definition 1.

The model scoring function $\alpha(\cdot)$ is coherent if it satisfies
$$ s_G(\tilde{y}_1; x) + s_G(\tilde{y}_2; \{x, \tilde{y}_1\}) = s_G(\tilde{y}_2; x) + s_G(\tilde{y}_1; \{x, \tilde{y}_2\}) = s_G(\tilde{y}_{1:2}; x) \qquad (5) $$
for all $\tilde{y}_1$, $\tilde{y}_2$ and training data $x$, where $s_G(\tilde{y}_{1:2}; x)$ scores the pair of test points jointly, such that the cumulative score is invariant to the ordering or partitioning of the observations.

We now present our main result on the uniqueness of the choice of $\alpha(\cdot)$.

Proposition 1.

If the model scoring function $\alpha(\cdot)$ is continuous, monotonically decreasing and coherent, then the unique choice of scoring rule is
$$ s_G(\tilde{y}; y_{1:n}) = \log \int_{\Theta} \exp\{-w\, \ell(\theta, \tilde{y})\}\, \pi_G(\theta \mid y_{1:n})\, d\theta, $$
where $w$ is the loss-scale in the general Bayesian posterior.

Proof.

The proof is given in A.1 of the Appendix. ∎

This holds irrespective of whether the model is true or not. More important for our purposes is the corollary below.

Corollary 1.

The marginal likelihood is the unique coherent marginal score for Bayesian inference.

Proof.

Let $\ell(\theta, y) = -\log f_{\theta}(y)$ and $w = 1$; then $s_G(\tilde{y}; y_{1:n}) = \log \int_{\Theta} f_{\theta}(\tilde{y})\, \pi(\theta \mid y_{1:n})\, d\theta = \log p(\tilde{y} \mid y_{1:n})$, and by (2) the cumulative prequential log score is the log marginal likelihood. ∎

The marginal likelihood arises naturally as the unique prequential scoring rule under coherent belief updating in the Bayesian framework. The coherence of the marginal likelihood implies an invariance to the permutation of the observations under exchangeability, including independent and identically distributed data, a property that is not shared by other prequential scoring rules, such as those of Dawid and Musio (2014), Grünwald and van Ommen (2017) and Shao et al. (2018).
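This permutation invariance, and its failure for a non-coherent alternative, can be seen in a small numerical sketch. Below we contrast the prequential log posterior predictive score with a plug-in variant that scores each point under the posterior-mean parameter; the plug-in rule is our own illustrative example of a non-coherent score, not one of the rules cited above.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Illustrative conjugate model: y_i | theta ~ N(theta, sigma2), theta ~ N(mu0, tau2).
mu0, tau2, sigma2 = 0.0, 4.0, 1.0
y = rng.normal(1.0, 1.0, size=10)

def posterior_params(y_train):
    m = len(y_train)
    var = 1.0 / (1.0 / tau2 + m / sigma2)
    mean = var * (mu0 / tau2 + np.sum(y_train) / sigma2)
    return mean, var

def prequential_score(y_seq, plug_in=False):
    """Sum of one-step-ahead log scores over the given data ordering."""
    total = 0.0
    for i in range(len(y_seq)):
        mean, var = posterior_params(y_seq[:i])
        # posterior predictive vs plug-in density at the posterior mean
        scale = np.sqrt(sigma2) if plug_in else np.sqrt(var + sigma2)
        total += norm.logpdf(y_seq[i], loc=mean, scale=scale)
    return total

perm = rng.permutation(y)
print(prequential_score(y), prequential_score(perm))                               # identical
print(prequential_score(y, plug_in=True), prequential_score(perm, plug_in=True))   # differ
```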

3 The marginal likelihood and cross-validation

3.1 Equivalence of the marginal likelihood and aggregate cross-validation

The leave-$p$-out cross-validation score is defined as
$$ S_{CV}(y_{1:n}; p) = \frac{1}{\binom{n}{p}} \sum_{t=1}^{\binom{n}{p}} \frac{1}{p} \sum_{j=1}^{p} s\big(\tilde{y}^{(t)}_j;\, y^{(t)}_{1:n-p}\big), \qquad (6) $$
where $\tilde{y}^{(t)}_{1:p}$ denotes the $t$'th of the $\binom{n}{p}$ possible held-out test sets, $y^{(t)}_{1:n-p}$ the corresponding training set, such that $\{\tilde{y}^{(t)}_{1:p},\, y^{(t)}_{1:n-p}\} = y_{1:n}$, and the factor $1/p$ records the average predictive score per datum. Although leave-one-out cross-validation is a popular choice, Shao (1993) showed that it is asymptotically inconsistent for a linear model selection problem, requiring $p/n \to 1$ as $n \to \infty$ for consistency. We will not go into further detail here but instead refer the reader to Arlot and Celisse (2010). Selecting a larger $p$ has the interpretation of penalizing complexity (Vehtari and Ojanen, 2012), as complex models will tend to over-fit to a small training set. However, the number of test set evaluations grows rapidly with $p$, and hence $k$-fold cross-validation is often adopted for computational convenience.

From a Bayesian perspective it is natural to consider the log posterior predictive as the scoring function, $s(\tilde{y}; x) = \log p(\tilde{y} \mid x)$, particularly as we have now shown that it is the only coherent scoring mechanism, which leads us to the following result.

Proposition 2.

The Bayesian marginal likelihood is equivalent to the aggregate leave-$p$-out cross-validation score using the log posterior predictive as the scoring rule, such that
$$ \log p(y_{1:n}) = \sum_{p=1}^{n} S_{CV}(y_{1:n}; p), \qquad (7) $$
with $s(\tilde{y}; x) = \log p(\tilde{y} \mid x)$ in (6).

Proof.

This follows from the invariance of the marginal likelihood under arbitrary permutations of the data sequence in (2). We provide more detailed explanations in A.2 and A.3 of the Appendix. ∎

The Bayesian marginal likelihood is simply $n$ times the average leave-$p$-out cross-validation score, $\log p(y_{1:n}) = n \times \frac{1}{n} \sum_{p=1}^{n} S_{CV}(y_{1:n}; p)$, where the scaling by $n$ is due to (6) being a per datum score. Bayesian models are therefore evaluated through out-of-sample predictions on all possible held-out test sets, whereas cross-validation with fixed $p$ only captures a snapshot of model performance. Evaluating the predictive performance on all $2^{n} - 1$ possible test sets would appear intractable for most applications, but we see through (7) and (1) that it is computable as a single integral.
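The identity (7) can also be verified exhaustively for a small data set. The sketch below, again using an illustrative conjugate normal-normal model with $n$ kept small so that every one of the $2^n - 1$ train/test splits can be enumerated, computes the leave-$p$-out scores (6) with the log posterior predictive and checks that they sum to the log marginal likelihood.

```python
import numpy as np
from itertools import combinations
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(3)

# Exhaustive check of (6)-(7) on a small illustrative conjugate normal-normal model.
mu0, tau2, sigma2 = 0.0, 4.0, 1.0
n = 8
y = rng.normal(1.0, 1.0, size=n)

def log_post_pred(y_new, y_train):
    m = len(y_train)
    var = 1.0 / (1.0 / tau2 + m / sigma2)
    mean = var * (mu0 / tau2 + np.sum(y_train) / sigma2)
    return norm.logpdf(y_new, loc=mean, scale=np.sqrt(var + sigma2))

def s_cv(y, p):
    """Leave-p-out score (6) with the log posterior predictive as the scoring rule."""
    n = len(y)
    total, n_splits = 0.0, 0
    for test in combinations(range(n), p):
        train = np.array(sorted(set(range(n)) - set(test)), dtype=int)
        total += np.mean([log_post_pred(y[i], y[train]) for i in test])
        n_splits += 1
    return total / n_splits

cov = sigma2 * np.eye(n) + tau2 * np.ones((n, n))
log_ml = multivariate_normal(mean=np.full(n, mu0), cov=cov).logpdf(y)

print(log_ml)
print(sum(s_cv(y, p) for p in range(1, n + 1)))   # matches the log marginal likelihood, eq. (7)
```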

3.2 Sensitivity to the prior and preparatory training

The representation of the marginal likelihood as an aggregate cross-validation score (7) provides insight into its sensitivity to the prior. The last term on the right-hand side of (7), $S_{CV}(y_{1:n}; n) = \frac{1}{n} \sum_{i=1}^{n} \log p(y_i)$, involves no training data and scores the model entirely on how well the analyst is able to specify the prior. In many situations, the analyst may not want this term to contribute to model evaluation. Moreover, there is a tension between the desire to specify vague priors in order to limit their influence and the fact that diffuse priors can lead to an arbitrarily large and negative model score for real-valued parameters in (7). It may seem inappropriate to penalize a model based on the subjective ability to specify the prior, or to compare models using a score that includes contributions from predictions made using only a handful of training points, even with informative priors. For example, 10% of the terms contributing to the marginal likelihood come from out-of-sample predictions using, on average, less than 5% of the available training data. This is related to the start-up problem in prequential analysis (Dawid, 1992).

A natural and obvious solution is to begin evaluating the model performance after a preparatory phase, for example using 10% or 50% of the data as preparatory training prior to testing. With $T$ denoting the size of the preparatory training set, this leads to a Bayesian aggregate leave-$p$-out cross-validation score, aggregated over $p = 1, \ldots, n-T$, defined as
$$ S_{CV}(y_{1:n}; T) = \sum_{p=1}^{n-T} S_{CV}(y_{1:n}; p), \qquad (8) $$
with a matched preparatory cross-validation score, $S_{prep}(y_{1:n}; T) = \sum_{p=n-T+1}^{n} S_{CV}(y_{1:n}; p)$, for the remaining terms. We suggest setting $T$ to 10% or 50% of $n$, or to the total number of model parameters, as reasonable default choices, but clearly this is situation specific. One may be interested in reporting both $S_{CV}(y_{1:n}; T)$ and $S_{prep}(y_{1:n}; T)$, as the latter can be regarded as an evaluation of the prior, but we suggest from the arguments above that only $S_{CV}(y_{1:n}; T)$ is used for model evaluation. Although full coherency is now lost, we still have coherency conditional on a preparatory training set, where permutation of the data within the training and test sets does not affect the score, and so we can write (8) as
$$ S_{CV}(y_{1:n}; T) = \frac{1}{\binom{n}{T}} \sum_{t=1}^{\binom{n}{T}} \log p\big(\tilde{y}^{(t)}_{1:n-T} \mid y^{(t)}_{1:T}\big), \qquad (9) $$
where $y^{(t)}_{1:T}$ denotes the $t$'th of the $\binom{n}{T}$ possible preparatory training sets and $\tilde{y}^{(t)}_{1:n-T}$ the corresponding held-out data. This equivalence is derived in A.4 of the Appendix in a similar fashion to Proposition 2. Expression (9) has precisely the form of the log geometric intrinsic Bayes factor of Berger and Pericchi (1996), but is motivated by a different route. The intrinsic Bayes factor was developed in an objective Bayesian setting (Berger and Pericchi, 2001), where improper priors cause indeterminacies in the evaluation of the marginal likelihood. The intrinsic Bayes factor remedies this with a partition of the data in which the training portion is the minimal sample needed to convert an improper prior into a proper posterior. In contrast, we set $T$ to provide preparatory training, and its choice can be subjective. Moreover, in modern applications we often have more parameters than observations, in which case intrinsic Bayes factors cannot be applied in their original form.
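In practice (9) is estimated by averaging over randomly drawn training sets rather than enumerating all $\binom{n}{T}$ of them. The sketch below does this for an illustrative conjugate normal-normal model, in which the inner log predictive is available in closed form; the sample sizes and number of splits are arbitrary choices.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

# Estimate S_CV(y_{1:n}; T) as in (9) by averaging log p(y_test | y_train) over
# random training subsets of size T. Illustrative conjugate normal-normal model.
mu0, tau2, sigma2 = 0.0, 4.0, 1.0
n, T, n_splits = 100, 20, 200
y = rng.normal(1.0, 1.0, size=n)

def log_post_pred(y_new, y_train):
    m = len(y_train)
    var = 1.0 / (1.0 / tau2 + m / sigma2)
    mean = var * (mu0 / tau2 + np.sum(y_train) / sigma2)
    return norm.logpdf(y_new, loc=mean, scale=np.sqrt(var + sigma2))

def log_pred_block(test, train):
    """log p(test | train), accumulated prequentially over the held-out block."""
    seen = list(train)
    total = 0.0
    for y_new in test:
        total += log_post_pred(y_new, np.array(seen))
        seen.append(y_new)
    return total

scores = []
for _ in range(n_splits):
    perm = rng.permutation(n)
    scores.append(log_pred_block(y[perm[T:]], y[perm[:T]]))

print(np.mean(scores), np.std(scores) / np.sqrt(n_splits))   # estimate of S_CV and its MC error
```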

We can approximate (9) through Monte Carlo, where the training data sets $y^{(t)}_{1:T}$ are drawn uniformly at random, and for non-conjugate models the inner term must also be estimated, for example through
$$ \log \hat{p}\big(\tilde{y}_{1:n-T} \mid y_{1:T}\big) = \sum_{j=1}^{n-T} \log \left\{ \frac{1}{B} \sum_{b=1}^{B} f_{\theta^{(j,b)}}(\tilde{y}_j) \right\}, \qquad (10) $$
where for each $j$ the samples $\theta^{(j,1)}, \ldots, \theta^{(j,B)} \sim \pi\big(\theta \mid y_{1:T}, \tilde{y}_{1:j-1}\big)$ are obtained via Markov chain Monte Carlo. In A.5 of the Appendix we provide some further details on efficient computation of (10).
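As a sanity check of an estimator of this form, the sketch below uses an illustrative conjugate model so that exact posterior draws can stand in for the Markov chain Monte Carlo samples in (10), and compares the estimate of $\log p(\tilde{y}_{1:n-T} \mid y_{1:T})$ with its closed-form value.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

# Monte Carlo estimate of the inner term of (10): each one-step-ahead posterior
# predictive is replaced by an average of the likelihood over B posterior draws.
# Conjugate normal-normal model, so exact posterior draws stand in for MCMC output.
mu0, tau2, sigma2 = 0.0, 4.0, 1.0
B = 2000
y = rng.normal(1.0, 1.0, size=30)
train, test = y[:10], y[10:]

def post_params(y_obs):
    m = len(y_obs)
    var = 1.0 / (1.0 / tau2 + m / sigma2)
    return var * (mu0 / tau2 + np.sum(y_obs) / sigma2), var

log_hat, log_exact, seen = 0.0, 0.0, list(train)
for y_new in test:
    mean, var = post_params(np.array(seen))
    theta = rng.normal(mean, np.sqrt(var), size=B)           # stand-in for MCMC samples of theta
    log_hat += np.log(np.mean(norm.pdf(y_new, loc=theta, scale=np.sqrt(sigma2))))
    log_exact += norm.logpdf(y_new, loc=mean, scale=np.sqrt(var + sigma2))
    seen.append(y_new)

print(log_hat, log_exact)   # close to, but a biased estimate of, the exact value
```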

4 Illustration for the normal linear model

We illustrate the use of Bayesian aggregate cross-validation in a polynomial regression example, where the $k$-th polynomial model is defined as
$$ y_i = \beta_0 + \sum_{j=1}^{k} \beta_j x_i^{\,j} + \epsilon_i, \qquad \epsilon_i \sim N(0, \sigma^2). $$
We observe the data $\{x_i, y_i\}_{i=1}^{n}$, and we place a fixed vague prior on the intercept term $\beta_0$ and independent $N(0, c)$ priors on the remaining coefficients. In our example, the true model is the first-order polynomial $k = 1$, with known noise variance $\sigma^2$. For our prior, we vary the value of $c$ to investigate the impact of the prior tails. For each prior setting, we calculate $\log p(y_{1:n})$ and $S_{CV}(y_{1:n}; T)$ for the models $k = 0, 1, 2$. In this example, $\log p(y_{1:n})$ is tractable, whereas $S_{CV}(y_{1:n}; T)$ requires a Monte Carlo average over tractable log posterior predictives. We report the mean over 10 runs of estimating $S_{CV}(y_{1:n}; T)$ with random training/test splits, and we calculate the Monte Carlo standard error over the 10 runs and report the maximum for each setting of $T$.

The results are shown in Table 1, where $S_{CV}(y_{1:n}; T)$ is normalized to be on the same scale as the log marginal likelihood; the highest value in each column indicates the selected model for that prior setting. Under the strong and moderate prior settings, the marginal likelihood correctly identifies the true model, but when we increase $c$ further it heavily over-penalizes the more complex models and prefers the intercept-only model $k = 0$. In fact, the magnitude of the marginal likelihood and the discrepancy just described can be made arbitrarily large by simply increasing $c$, which should be guarded against when a modeller has weak prior beliefs. This issue is not observed with $S_{CV}(y_{1:n}; T)$ for the values of $T$ we consider. The vague prior does not impede the ability of $S_{CV}$ to correctly identify the true model, and the scores are stable within each column of Table 1.

In A.6 of the Appendix, we present graphical tools for exploring the aggregate cross-validation score and the effect of the choice of $T$ on $S_{CV}(y_{1:n}; T)$. In A.7 of the Appendix we provide an additional example using probit regression on the Pima Indian data set.

Model | log p(y_{1:n}) | S_CV | S_CV | S_CV

Strong prior:
0 | -158.82 | -153.80 | -153.21 | -153.06
1 | -155.57 | -150.39 | -149.55 | -149.27
2 | -156.12 | -150.94 | -149.81 | -149.38

Moderate prior:
0 | -158.82 | -153.80 | -153.21 | -153.06
1 | -156.26 | -150.77 | -149.66 | -149.34
2 | -157.80 | -151.90 | -150.04 | -149.50

Vague prior:
0 | -158.82 | -153.80 | -153.21 | -153.06
1 | -160.81 | -150.91 | -149.68 | -149.35
2 | -166.93 | -152.30 | -150.08 | -149.53

Maximum standard error (S_CV columns) | 0.002 | 0.008 | 0.023

Table 1: Log marginal likelihoods and aggregate cross-validation scores for the normal linear model; the three S_CV columns correspond to the three settings of the preparatory training size T.
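A self-contained sketch of this type of comparison is given below: a conjugate Gaussian linear model makes $\log p(y_{1:n})$ available in closed form, and $S_{CV}(y_{1:n}; T)$ is estimated by Monte Carlo over random training/test splits using the factorization $\log p(y_{test} \mid y_{train}) = \log p(y_{1:n}) - \log p(y_{train})$. The sample size, prior variances and true coefficients are illustrative choices, not the settings used for Table 1.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(6)

# Polynomial regression with a Gaussian prior on the coefficients and known noise
# variance; illustrative settings only.
n, sigma2 = 50, 1.0
x = rng.uniform(-2, 2, size=n)
y = 0.5 + 1.0 * x + rng.normal(0, np.sqrt(sigma2), size=n)     # true model has degree 1

def design(x, k):
    return np.vander(x, k + 1, increasing=True)                 # columns [1, x, ..., x^k]

def log_ml(y, X, prior_var):
    """Closed-form log p(y) for y = X beta + eps, beta ~ N(0, diag(prior_var)), eps ~ N(0, sigma2 I)."""
    cov = sigma2 * np.eye(len(y)) + X @ np.diag(prior_var) @ X.T
    return multivariate_normal(mean=np.zeros(len(y)), cov=cov).logpdf(y)

def s_cv(y, X, prior_var, T, n_splits=200):
    """Monte Carlo estimate of (9): average log p(y_test | y_train) over random training sets of size T."""
    log_full = log_ml(y, X, prior_var)
    scores = []
    for _ in range(n_splits):
        tr = rng.choice(len(y), size=T, replace=False)
        scores.append(log_full - log_ml(y[tr], X[tr], prior_var))   # log p(test | train)
    return np.mean(scores)

for k in range(3):                                               # polynomial degrees 0, 1, 2
    X = design(x, k)
    prior_var = np.r_[100.0, np.full(k, 1e4)]                    # vague prior on the higher-order terms
    print(k, log_ml(y, X, prior_var), s_cv(y, X, prior_var, T=n // 2))
```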

5 Discussion

We have shown that, for coherence, the unique scoring rule for Bayesian model evaluation in either the M-open or M-closed setting is provided by the log posterior predictive probability, and that the marginal likelihood is equivalent to an aggregate cross-validation score over all training-test data partitions. The coherence flows from the fact that the scoring rule and the Bayesian update both use the same information, namely the likelihood function, which is appropriate as the alternative would be to learn and score under different criteria. If we are interested in an alternative loss function to the log-likelihood, we advocate a general Bayesian update (Bissiri et al., 2016; Lyddon et al., 2019) that targets the parameters minimising the expected loss, with models evaluated using the corresponding coherent aggregate cross-validation score.

Acknowledgements

We thank Lucian Chan and George Nicholson for their helpful feedback. Fong is funded by The Alan Turing Institute Doctoral Studentship. Holmes is supported by The Alan Turing Institute, Health Data Research UK, the Li Ka Shing Foundation, the Medical Research Council, and the U.K. Engineering and Physical Sciences Research Council.

References

  • Arlot and Celisse [2010] S. Arlot and A. Celisse. A survey of cross-validation procedures for model selection. Statistics Surveys, 4:40–79, 2010. doi: 10.1214/09-SS054. URL https://doi.org/10.1214/09-SS054.
  • Berger and Pericchi [1996] J. O. Berger and L. R. Pericchi. The intrinsic Bayes factor for model selection and prediction. Technical Report 433, 1996.
  • Berger and Pericchi [2001] J. O. Berger and L. R. Pericchi. Objective Bayesian Methods for Model Selection: Introduction and Comparison, volume 38 of Lecture Notes–Monograph Series, pages 135–207. Institute of Mathematical Statistics, Beachwood, OH, 2001. doi: 10.1214/lnms/1215540968. URL https://doi.org/10.1214/lnms/1215540968.
  • Bernardo and Smith [2009] J. Bernardo and A. Smith. Bayesian Theory. Wiley Series in Probability and Statistics. Wiley, 2009. ISBN 9780470317716.
  • Bhattacharya et al. [2007] S. Bhattacharya, J. Haslett, et al. Importance re-sampling MCMC for cross-validation in inverse problems. Bayesian Analysis, 2(2):385–407, 2007.
  • Bissiri et al. [2016] P. G. Bissiri, C. C. Holmes, and S. G. Walker. A general framework for updating belief distributions. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 78(5):1103–1130, nov 2016. ISSN 13697412. doi: 10.1111/rssb.12158.
  • Bornn et al. [2010] L. Bornn, A. Doucet, and R. Gottardo. An efficient computational approach for prior sensitivity analysis and cross-validation. Canadian Journal of Statistics, 38(1):47–64, 2010.
  • Dawid [1984] A. P. Dawid. Present Position and Potential Developments: Some Personal Views: Statistical Theory: The Prequential Approach. Journal of the Royal Statistical Society. Series A (General), 147(2):278, 1984. ISSN 00359238. doi: 10.2307/2981683.
  • Dawid [1992] A. P. Dawid. Prequential analysis, stochastic complexity and Bayesian inference. Bayesian statistics, 4:109–125, 1992.
  • Dawid and Musio [2014] A. P. Dawid and M. Musio. Theory and applications of proper scoring rules. Metron, 72(2):169–183, 2014. doi: 10.1007/s40300-014-0039-y.
  • Dawid and Musio [2015] A. P. Dawid and M. Musio. Bayesian model selection based on proper scoring rules. Bayesian Analysis, 10(2):479–499, 2015. doi: 10.1214/15-BA942A.
  • Geisser [1975] S. Geisser. The predictive sample reuse method with applications. Journal of the American Statistical Association, 70(350):320–328, 1975. ISSN 01621459. URL http://www.jstor.org/stable/2285815.
  • Geisser and Eddy [1979] S. Geisser and W. Eddy. A predictive approach to model selection. Journal of the American Statistical Association, 74:153–160, 03 1979. doi: 10.1080/01621459.1979.10481632.
  • Gelman et al. [2014] A. Gelman, J. Hwang, and A. Vehtari. Understanding predictive information criteria for Bayesian models. Statistics and Computing, 24:997–1016, 2014. doi: 10.1007/s11222-013-9416-2.
  • Gneiting and Raftery [2007] T. Gneiting and A. E. Raftery. Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102(477):359–378, 2007. doi: 10.1198/016214506000001437.
  • Grünwald and van Ommen [2017] P. Grünwald and T. van Ommen. Inconsistency of Bayesian inference for misspecified linear models, and a proposal for repairing it. Bayesian Analysis, 12(4):1069–1103, 12 2017. doi: 10.1214/17-BA1085. URL https://doi.org/10.1214/17-BA1085.
  • Holmes and Walker [2017] C. C. Holmes and S. G. Walker. Assigning a value to a power likelihood in a general Bayesian model. Biometrika, 104(2):497–503, 03 2017. ISSN 0006-3444. doi: 10.1093/biomet/asx010. URL https://doi.org/10.1093/biomet/asx010.
  • Kass and Raftery [1995] R. E. Kass and A. E. Raftery. Bayes factors. Journal of the American Statistical Association, 90(430):773–795, 1995. doi: 10.1080/01621459.1995.10476572.
  • Key et al. [1999] J. Key, L. Pericchi, and A. Smith. Bayesian model choice: What and why? (with discussion). Bayesian Statistics, 5, 01 1999.
  • Lindley [1957] D. V. Lindley. A statistical paradox. Biometrika, 44(1-2):187–192, 06 1957. ISSN 0006-3444. doi: 10.1093/biomet/44.1-2.187. URL https://doi.org/10.1093/biomet/44.1-2.187.
  • Lyddon et al. [2019] S. P. Lyddon, C. C. Holmes, and S. G. Walker. General Bayesian updating and the loss-likelihood bootstrap. Biometrika, 2019. doi: 10.1093/biomet/asz006. URL https://doi.org/10.1093/biomet/asz006.
  • Marin and Robert [2018] J.-M. Marin and C. P. Robert. Importance sampling methods for Bayesian discrimination between embedded models. Technical report, 2018. URL https://arxiv.org/pdf/0910.2325.pdf.
  • O’Hagan and Forster [2004] A. O’Hagan and J. J. Forster. Kendall’s advanced theory of statistics, volume 2B: Bayesian inference, volume 2. Arnold, 2004.
  • Rischard et al. [2018] M. Rischard, P. E. Jacob, and N. Pillai. Unbiased estimation of log normalizing constants with applications to Bayesian cross-validation. arXiv preprint arXiv:1810.01382, 2018.
  • Robert [2007] C. P. Robert. The Bayesian Choice: From Decision-Theoretic Foundations to Computational Implementation. Springer, 2nd edition, May 2007. ISBN 978-0-387-71598-8.
  • Robert [2013] C. P. Robert. On the Jeffreys-Lindley’s paradox. Technical report, 2013. URL https://arxiv.org/pdf/1303.5973.pdf.
  • Robert and Wraith [2009] C. P. Robert and D. Wraith. Computational methods for Bayesian model choice. In AIP conference proceedings, volume 1193, pages 251–262. AIP, 2009.
  • Schwarz [1978] G. Schwarz. Estimating the dimension of a model. The Annals of Statistics, 6(2):461–464, 03 1978. doi: 10.1214/aos/1176344136. URL https://doi.org/10.1214/aos/1176344136.
  • Shao [1993] J. Shao. Linear model selection by cross-validation. Journal of the American Statistical Association, 88(422):486–494, 1993. ISSN 01621459. URL http://www.jstor.org/stable/2290328.
  • Shao et al. [2018] S. Shao, P. E. Jacob, J. Ding, and V. Tarokh. Bayesian model comparison with the Hyvärinen score: computation and consistency. Technical report, 2018. URL https://arxiv.org/pdf/1711.00136.pdf.
  • Spiegelhalter et al. [2002] D. J. Spiegelhalter, N. G. Best, B. P. Carlin, and A. Van Der Linde. Bayesian measures of model complexity and fit. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 64(4):583–639, 2002. doi: 10.1111/1467-9868.00353.
  • Vehtari and Lampinen [2002] A. Vehtari and J. Lampinen. Bayesian model assessment and comparison using cross-validation predictive densities. Technical report, 2002.
  • Vehtari and Ojanen [2012] A. Vehtari and J. Ojanen. A survey of Bayesian predictive methods for model assessment, selection and comparison. Statistics Surveys, 6:142–228, 2012. doi: 10.1214/12-SS102. URL https://doi.org/10.1214/12-SS102.
  • Vehtari et al. [2017] A. Vehtari, A. Gelman, and J. Gabry. Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing, 27(5):1413–1432, 2017.
  • Watanabe [2010] S. Watanabe. Asymptotic equivalence of Bayes cross validation and widely applicable information criterion in singular learning theory. Journal of Machine Learning Research, 11:3571–3594, Dec. 2010. ISSN 1532-4435. URL http://dl.acm.org/citation.cfm?id=1756006.1953045.
  • Watson and Holmes [2016] J. Watson and C. C. Holmes. Approximate models and robust decisions. Statistical Science, 31(4):465–489, 2016.

Appendix A

A.1 Proof of Proposition 1

Proof.

We first establish the result in a reduced setting, fixing notation for the prior, the observables and the associated losses, and writing $\pi_G$ for the posterior obtained from the general Bayesian update (4). In this setting the scoring function must satisfy the coherence equation (A.1.1) for all such observations and all such priors. A particular choice of the observations simplifies this equation and, as $\alpha$ is continuous and monotonically decreasing, to satisfy (5) it must take the form given in (A.1.2). Writing the resulting score out explicitly as (A.1.3) and substituting (A.1.2) and (A.1.3) into (A.1.1), then expanding, cancelling terms and simplifying, leaves two possible solutions, of which only one is non-trivial. We have thus shown that in the reduced setting the unique non-trivial solution to (5) is (A.1.4), the scoring rule stated in Proposition 1. The remainder of the proof involves showing that this choice of scoring rule satisfies (5) for all priors, losses and data, which follows on substituting (A.1.4) into (5). ∎

A.2 Proof of Proposition 2

Proof.

Consider the $n! \times n$ matrix $S$ with elements $s_{ij}$, such that the $i$'th row of $S$ records the prequential sequence of log posterior predictives under the $i$'th of the $n!$ permutations of $y_{1:n}$. By the properties of conditional probabilities, the row sums of $S$ are all equal to $\log p(y_{1:n})$, and hence
$$ \log p(y_{1:n}) = \frac{1}{n!} \sum_{i=1}^{n!} \sum_{j=1}^{n} s_{ij}. $$
Within each column of $S$, the values are invariant to the permutation of the data in the preceding columns under exchangeability. For column $j$ there are thus $\binom{n}{j-1}$ distinct training sets and $n - j + 1$ choices for the test point given the training set. For each column $j$, we can then write
$$ \frac{1}{n!} \sum_{i=1}^{n!} s_{ij} = \frac{1}{\binom{n}{j-1}} \sum_{t=1}^{\binom{n}{j-1}} \frac{1}{n - j + 1} \sum_{k=1}^{n-j+1} \log p\big(\tilde{y}^{(t)}_k \mid y^{(t)}_{1:j-1}\big), $$
where $\tilde{y}^{(t)}_{1:n-j+1}$ denotes the data held out of the $t$'th training set $y^{(t)}_{1:j-1}$. We have the result on summing over the columns and letting $p = n - j + 1$. ∎

A.3 Alternative proof of Proposition 2

We first begin by showing the following proposition.

Proposition 3.

For the preparatory cross-validation score $S_{prep}(y_{1:n}; T)$, defined as the sum of cross-validation terms from leave-$n$-out to leave-$(n-T+1)$-out,
$$ S_{prep}(y_{1:n}; T) = \sum_{p=n-T+1}^{n} S_{CV}(y_{1:n}; p), $$
we have the following equivalence relationship,
$$ S_{prep}(y_{1:n}; T) = \frac{1}{\binom{n}{T}} \sum_{t=1}^{\binom{n}{T}} \log p\big(y^{(t)}_{1:T}\big), \qquad (A.3.1) $$
which states that $S_{prep}(y_{1:n}; T)$ is the average log marginal likelihood over all $\binom{n}{T}$ choices of the training set $y^{(t)}_{1:T}$.

Proof.

To show this, we use a proof by induction. We see that (A.3.1) is trivially true for $T = 1$, as it is then simply $S_{CV}(y_{1:n}; n) = \frac{1}{n} \sum_{i=1}^{n} \log p(y_i)$. Assuming (A.3.1) holds for some $T$, we have
$$ \sum_{p=n-T}^{n} S_{CV}(y_{1:n}; p) = \frac{1}{\binom{n}{T}} \sum_{t=1}^{\binom{n}{T}} \left\{ \frac{1}{n-T} \sum_{k=1}^{n-T} \log p\big(\tilde{y}^{(t)}_k \mid y^{(t)}_{1:T}\big) + \log p\big(y^{(t)}_{1:T}\big) \right\}. $$
From the properties of conditional probability, we can write
$$ \log p\big(y^{(t)}_{1:T}\big) + \log p\big(\tilde{y}^{(t)}_k \mid y^{(t)}_{1:T}\big) = \log p\big(y^{(t)}_{1:T}, \tilde{y}^{(t)}_k\big). \qquad (A.3.2) $$
Again, the marginal likelihood is invariant to the permutation of the sequence under data exchangeability, so we have to consider the repetitions in the partitions of each set of $T + 1$ points. For each of the $\binom{n}{T+1}$ unordered sequences $y^{(t)}_{1:T+1}$, there are $T + 1$ partitions into $\{y_{1:T}, \tilde{y}\}$, so there are $T + 1$ repetitions of each unordered $y^{(t)}_{1:T+1}$ in (A.3.2). Noting that $\binom{n}{T}(n - T) = (T + 1)\binom{n}{T+1}$, we can thus write
$$ \sum_{p=n-T}^{n} S_{CV}(y_{1:n}; p) = \frac{1}{\binom{n}{T+1}} \sum_{t=1}^{\binom{n}{T+1}} \log p\big(y^{(t)}_{1:T+1}\big), $$
and by induction we have (A.3.1). ∎

Proposition 2 then follows trivially by setting $T = n$ in Proposition 3.

A.4 Derivation of $S_{CV}(y_{1:n}; T)$ for Bayesian models

The following corollary follows easily from Propositions 2 and 3.

Corollary 2.

For the aggregate cross-validation score defined in (8), we have the following equivalence relationship,
$$ S_{CV}(y_{1:n}; T) = \frac{1}{\binom{n}{T}} \sum_{t=1}^{\binom{n}{T}} \log p\big(\tilde{y}^{(t)}_{1:n-T} \mid y^{(t)}_{1:T}\big). $$

Proof.

We note that $S_{CV}(y_{1:n}; T) = \log p(y_{1:n}) - S_{prep}(y_{1:n}; T)$ from their definitions and Proposition 2. From the permutation invariance of the marginal likelihood, we can write
$$ \log p(y_{1:n}) = \frac{1}{\binom{n}{T}} \sum_{t=1}^{\binom{n}{T}} \log p\big(y^{(t)}_{1:T}, \tilde{y}^{(t)}_{1:n-T}\big). \qquad (A.4.1) $$
By subtracting (A.3.1) in Proposition 3 from (A.4.1) and considering each term in the summation, we have
$$ S_{CV}(y_{1:n}; T) = \frac{1}{\binom{n}{T}} \sum_{t=1}^{\binom{n}{T}} \Big\{ \log p\big(y^{(t)}_{1:T}, \tilde{y}^{(t)}_{1:n-T}\big) - \log p\big(y^{(t)}_{1:T}\big) \Big\} = \frac{1}{\binom{n}{T}} \sum_{t=1}^{\binom{n}{T}} \log p\big(\tilde{y}^{(t)}_{1:n-T} \mid y^{(t)}_{1:T}\big). \qquad \square $$

A.5 Computing $S_{CV}(y_{1:n}; T)$

To avoid the need to run a separate Markov chain for each posterior predictive term in (10), we can take advantage of the fact that the partial posteriors for different training sets will be similar, and utilize importance sampling [Bhattacharya et al., 2007, Vehtari et al., 2017] or sequential Monte Carlo [Bornn et al., 2010] to estimate the posterior predictives for computational savings. We also note that the estimator in (10) is biased, as it involves the logarithm of a Monte Carlo average, and Rischard et al. [2018] provide unbiased estimators of log predictive probabilities directly through unbiased Markov chain Monte Carlo and path sampling methods.

The arithmetic averaging over training/test splits may also be inherently unstable, as demonstrated by the following example. Suppose that $y$ is a binary random variable which takes the value 0 or 1 with equal probability, and we are attempting to estimate $\theta = P(y = 1)$. For large $n$, it is likely that approximately half of the values in $y_{1:n}$ are equal to 0 and the other half to 1. There will thus exist a permutation of the sequence such that almost all of the first $T$ values are equal to 0, with the remaining values almost all equal to 1. The model will then be close to certain that $\theta \approx 0$ after observing the training set, and will score the remaining points very poorly, giving a large negative log posterior predictive. This suggests that an arithmetic average may be unstable; the median or a robust trimmed mean over permutations may be more stable alternatives.
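A minimal numerical sketch of this effect, using a Beta(1,1)-Bernoulli model of our own choosing, compares the held-out log predictive under a typical random split with that under the adversarial split described above.

```python
import numpy as np
from scipy.special import betaln

rng = np.random.default_rng(7)

# Beta(1,1)-Bernoulli model with data half zeros and half ones: the held-out log
# predictive for an adversarial train/test split is far more negative than for a
# typical random split, so an arithmetic average over splits can be unstable.
def log_ml_bernoulli(y):
    """log p(y) under theta ~ Beta(1, 1), y_i | theta ~ Bernoulli(theta)."""
    k, n = np.sum(y), len(y)
    return betaln(1 + k, 1 + n - k) - betaln(1, 1)

def log_pred(test, train):
    return log_ml_bernoulli(np.concatenate([train, test])) - log_ml_bernoulli(train)

y = np.array([0] * 50 + [1] * 50)
perm = rng.permutation(100)

typical = log_pred(y[perm[50:]], y[perm[:50]])    # roughly balanced random split
adversarial = log_pred(y[50:], y[:50])            # train on the zeros, test on the ones
print(typical, adversarial)
```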

The form in (9) relies on the conditional coherency of Bayesian updating and scoring. Without this, $S_{CV}(y_{1:n}; T)$ still exists as defined in (8), and can be estimated directly, for example by Monte Carlo averaging of the predictive scores of held-out points over training sets sampled uniformly at random. This facilitates alternative choices for the belief updating model and the scoring rule.

A.6 Visualization of aggregate cross-validation

Figure A.6.1: Leave-$p$-out cross-validation score $S_{CV}(y_{1:n}; p)$ against the training set size (left) and normalized aggregate cross-validation score $S_{CV}(y_{1:n}; T)$ against $T$ (right); the maximum standard errors are 0.001 for the left panel and 0.005 for the right panel.

A visualization of the effect of the training/preparatory data size is shown in Figure A.6.1. Two values are omitted for clarity of the plot, as both are significantly more negative than the others. On the left we see that the individual cross-validation term prefers the simplest model when the training set is very small, as over-fitting is penalized, but as the training set grows the true model $k = 1$ overtakes it. The $k = 2$ model eventually overtakes the $k = 0$ model too, and we see the discrepancy between $k = 1$ and $k = 2$ decrease as over-fitting is penalized less and less. This latter effect is demonstrative of how leave-one-out cross-validation under-penalizes complex models, as argued in Shao [1993], and why a larger value of $p$ should be preferred. On the right, we observe a similar effect for the aggregate cross-validation score $S_{CV}(y_{1:n}; T)$, but the discrepancy between $k = 1$ and $k = 2$ remains more noticeable for moderate $T$, as a cumulative sum of terms is being taken.

A.7 Illustration for the probit model

To demonstrate the aggregate cross-validation score in an intractable example, we carry out model selection on the Pima Indian benchmark data set with a probit model. We observe binary random variables $y_{1:n}$ with associated $d$-dimensional covariates $x_{1:n}$, and the probit model is defined as
$$ P(y_i = 1 \mid x_i, \beta) = \Phi\big(x_i^{\top} \beta\big), $$
where $\Phi$ is the standard normal cumulative distribution function and $\beta \in \mathbb{R}^d$. As suggested in Marin and Robert [2018], we elicit a g-prior, $\beta \sim N\big(0_d,\, g (X^{\top} X)^{-1}\big)$, where $0_d$ is a vector of $d$ zeros and $X$ is the $n$ by $d$ matrix with rows $x_i^{\top}$.

The data set consists of $n$ records, and we consider three covariates, glu, bp and ped, which correspond to plasma glucose concentration from an oral glucose test, diastolic blood pressure and diabetes pedigree function, respectively. We compare the full model (glu, bp, ped) with the reduced model (glu, bp) through $\log p(y_{1:n})$ and $S_{CV}(y_{1:n}; T)$ to test for the significance of ped. We standardize all covariates to have mean 0 and variance 1. We calculate $\log p(y_{1:n})$ using importance sampling with a Gaussian proposal. The proposal mean is set to the maximum likelihood estimate of $\beta$ and the proposal covariance to the estimated covariance matrix of the maximum likelihood estimate, as suggested in Marin and Robert [2018]. For $S_{CV}(y_{1:n}; T)$, we estimate each posterior predictive in (10) with the same importance sampling scheme, where we temper the proposal such that its covariance matrix is divided by a fixed factor. We also use the same number of proposal samples and average over random train/test splits. We carry out 10 runs of each and report the mean and maximum standard error as before.
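The following sketch illustrates an importance sampling estimator of this type for the probit log marginal likelihood: a Gaussian proposal is centred at the maximum likelihood estimate, with covariance taken from the optimizer's inverse-Hessian approximation. The simulated covariates and the simple Gaussian prior are stand-ins of ours; the actual Pima Indian data and the g-prior used in the paper are not reproduced here.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import minimize

rng = np.random.default_rng(8)

# Simulated stand-in for the probit example: n binary responses, d covariates.
n, d = 200, 3
X = rng.normal(size=(n, d))
y = (X @ np.array([1.0, -0.5, 0.0]) + rng.normal(size=n) > 0).astype(float)

def neg_loglik(beta):
    p = np.clip(norm.cdf(X @ beta), 1e-12, 1 - 1e-12)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Gaussian proposal centred at the MLE, covariance from the BFGS inverse-Hessian approximation.
fit = minimize(neg_loglik, np.zeros(d), method="BFGS")
beta_hat, prop_cov = fit.x, fit.hess_inv

prior = multivariate_normal(mean=np.zeros(d), cov=10.0 * np.eye(d))   # illustrative prior
proposal = multivariate_normal(mean=beta_hat, cov=prop_cov)

B = 5000
draws = rng.multivariate_normal(beta_hat, prop_cov, size=B)
log_w = (np.array([-neg_loglik(b) for b in draws])
         + prior.logpdf(draws) - proposal.logpdf(draws))
log_ml_hat = np.logaddexp.reduce(log_w) - np.log(B)    # importance sampling estimate of log p(y_{1:n})
print(log_ml_hat)
```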

We see in Table A.7.1 that under the original g-prior, the simpler model with ped omitted performs worse on both scores, and there is thus strong evidence for ped. However, when we inflate the prior variance, comparing models via the marginal likelihood suggests that ped is no longer significant, while the aggregate cross-validation score changes little with this increased variance of the prior. As a sanity check, we run a Gibbs sampler targeting the posterior $\pi(\beta \mid y_{1:n})$ under the two prior settings within the full model, and plot the marginal posterior of the ped coefficient in Figure A.7.1. For reference, the posterior means of the ped coefficient are very close under the two settings. The posteriors are indistinguishable for the two prior settings, with a mean clearly away from zero. This agrees well with the aggregate cross-validation score, which is clearly robust to vague priors.

Model | log p(y_{1:n}) | S_CV

Original g-prior:
(glu, bp, ped) | -168.93 | -165.87
(glu, bp) | -170.00 | -167.37

Inflated prior variance:
(glu, bp, ped) | -173.10 | -166.28
(glu, bp) | -173.05 | -167.64

Maximum standard error | 0.004 | 0.02

Table A.7.1: Log marginal likelihoods and aggregate cross-validation scores for the probit model
Figure A.7.1: Marginal posterior density plots for the ped coefficient under the two prior scalings