Quantifying Observed Prior Impact

01/29/2020 ∙ by David E. Jones, et al.

We distinguish two questions: (i) how much information does the prior contain? and (ii) what is the effect of the prior? Several measures have been proposed for quantifying effective prior sample size, for example Clarke [1996] and Morita et al. [2008]. However, these measures typically ignore the likelihood for the inference currently at hand, and therefore address (i) rather than (ii). Since in practice (ii) is of great concern, Reimherr et al. [2014] introduced a new class of effective prior sample size measures based on prior-likelihood discordance. We take this idea further towards its natural Bayesian conclusion by proposing measures of effective prior sample size that incorporate not only the general mathematical form of the likelihood but also the specific data at hand. Thus, our measures do not average across datasets from the working model, but condition on the current observed data. Consequently, our measures can be highly variable, but we demonstrate that this is because the impact of a prior can be highly variable. Our measures are Bayes estimates of meaningful quantities and communicate well the extent to which inference is determined by the prior, or, framed differently, the amount of effort saved due to having prior information. We illustrate our ideas through a number of examples, including a Gaussian conjugate model (continuous observations), a Beta-Binomial model (discrete observations), and a linear regression model (two unknown parameters). Future work on further developments of the methodology and an application to astronomy are discussed at the end.
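To make the contrast concrete, the sketch below illustrates the *classical* notion of effective prior sample size in the conjugate Beta-Binomial setting mentioned above, where a Beta(a, b) prior behaves like a + b pseudo-observations regardless of the observed data. This is the data-independent quantity addressing question (i); it is not the paper's proposed observed-data measure, which would additionally depend on the realized dataset.

```python
# Illustrative sketch (classical effective prior sample size, not the
# paper's proposed measure): in a conjugate Beta-Binomial model, a
# Beta(a, b) prior contributes a + b pseudo-observations.

def beta_binomial_posterior(a, b, successes, failures):
    """Conjugate update: Beta(a, b) prior + Binomial data -> Beta posterior."""
    return a + successes, b + failures

def classical_prior_ess(a, b):
    """Classical effective prior sample size: the prior acts like
    a + b pseudo-observations (a successes, b failures)."""
    return a + b

# A Beta(3, 7) prior combined with 20 observed trials (12 successes, 8 failures).
a_post, b_post = beta_binomial_posterior(3, 7, 12, 8)
print(a_post, b_post)             # posterior is Beta(15, 15)
print(classical_prior_ess(3, 7))  # prior contributes 10 pseudo-observations
```

Note that `classical_prior_ess` returns 10 whatever the data turn out to be; the point of the observed-data measures described in the abstract is precisely that the prior's actual impact can differ sharply across datasets.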


