How to Guide Decisions with Bayes Factors

10/19/2021 ∙ by Patrick Schwaferts, et al. ∙ Universität München

Some scientific research questions ask to guide decisions and others do not. By their nature, frequentist hypothesis tests yield a dichotomous test decision as their result, rendering them rather inappropriate for the latter type of research question. Bayes factors, however, are argued to be able both to refrain from making decisions and to be employed in guiding decisions. This paper elaborates on how to use a Bayes factor for guiding a decision. In this regard, its embedding within the framework of Bayesian decision theory is delineated, in which a (hypothesis-based) loss function needs to be specified. Typically, such a specification is difficult for an applied scientist, as relevant information might be scarce, vague, partial, and ambiguous. To tackle this issue, a robust, interval-valued specification of this loss function shall be allowed, such that the essential but partial information can be included in the analysis as it is. Further, the restriction of the prior distributions to being proper distributions (which is necessary to calculate Bayes factors) can be alleviated if a decision is of interest. Both the resulting framework of hypothesis-based Bayesian decision theory with a robust loss function and how to derive optimal decisions from already existing Bayes factors are depicted in user-friendly and straightforward step-by-step guides.


1 Introduction

The result of a classic frequentist hypothesis test is a dichotomous test decision. However, scientific research questions are very versatile and there is not always the demand to guide a decision. By their nature, frequentist hypothesis tests prohibit a statistical hypothesis-based analysis without making decisions. In that sense, a statistical framework that provides results without requiring an underlying (potentially artificially constructed) decision problem seems to be advantageous. The Bayes factor – a Bayesian quantity that is used for hypothesis comparisons (Jeffreys, 1961; Kass and Raftery, 1995; Gönen et al., 2005; Rouder et al., 2009) – is argued to do so, as it is typically interpreted as an evidence quantification w.r.t. the contrasted hypotheses (see e.g. Morey et al., 2016) without requiring a decision to be made. In this regard, Rouder et al. (2018) state that “[r]efraining from making decisions strikes [them] as advantageous in most contexts.”

Naturally, the evidence (as quantified by the Bayes factor) might then be used to update beliefs in the considered hypotheses and subsequently to guide a respective decision. The essential point, however, is that the researcher might stop the analysis after calculating a Bayes factor without guiding a decision, e.g. if merely the evidence quantification is of interest. Then the result of the Bayesian analysis is the Bayes factor itself and not a decision. Yet, for those research situations that do indeed aim at guiding a decision, the Bayes factor might naturally be used to do so. The aim of this elaboration is to outline the decision theoretic framework in which Bayes factors are involved.

Further, it shall be acknowledged that the specification of the relevant quantities within such a decision theoretic framework as precise values might not always be possible for an applied scientist, as the available relevant information might be scarce, vague, partial, and ambiguous. To tackle this issue, also a robust version of the framework shall be outlined, in which the applied researcher is allowed to specify the essential quantities less precisely as sets of values, such that the partial nature of the relevant information might be captured more accurately. Although such robust specifications might be possible for all essential quantities (see e.g. Schwaferts and Augustin, 2019, 2021), the present elaboration is restricted to a robustly specified, interval-valued loss function, as it is this quantity which characterizes the difference between a decision-theoretic and a non-decision-theoretic analysis, yet whose precise specification is expected to pose serious difficulties for applied scientists.

The elaborations within this paper are structured as follows: After delineating the general (Section 2) and the hypothesis-based (Section 3) framework of Bayesian decision theory, its relation to Bayes factors is depicted (Section 4). To facilitate a more user-friendly employment of the hypothesis-based Bayesian decision theoretic framework, a robust interval-valued specification of the loss function is allowed (Section 5) and the restriction of the prior distributions to being proper can be alleviated (Section 6). Both the resulting framework (Section 7.1) and how to derive optimal actions from existing Bayes factor values (Section 7.2) are presented in respective step-by-step guides.

2 Bayesian Decision Theory

Within the framework of Bayesian decision theory (e.g. Berger, 1985; Robert, 2007), the objective is to decide between different actions. In accordance with the context of Bayes factors, only two actions shall be considered, namely $a_1$ and $a_2$, being comprised within the action space $\mathbb{A} = \{a_1, a_2\}$.

The researcher plans to conduct an investigation that yields data $x$, which are characterized by a parametric sampling distribution with parameter $\theta \in \Theta$, where $\Theta$ is the parameter space. Accordingly, the density of the data is $f(x \mid \theta)$.

In a Bayesian setting, a prior distribution on the parameter $\theta$ with density $\pi(\theta)$ needs to be specified. This prior reflects the information (or belief or knowledge or uncertainty) about the parameter before the investigation is conducted.

In addition, also a loss function $L(\theta, a)$ needs to be specified, which quantifies the “badness” of the consequences of deciding for the action $a \in \mathbb{A}$ if the parameter value $\theta$ is true. Usually, the exact shape of this loss function is inaccessible, and hypothesis-based analyses are able to tackle this issue. These are depicted within the next section, but first – to delineate the ideal Bayesian solution – assume $L(\theta, a)$ is fully known.

Now, after specifying the parametric sampling distribution, the prior, as well as the loss function, the investigation can be conducted and the data $x$ are observed. This allows to update the prior distribution via Bayes' rule to the posterior distribution with density

$$\pi(\theta \mid x) = \frac{f(x \mid \theta)\,\pi(\theta)}{\int_\Theta f(x \mid \tilde{\theta})\,\pi(\tilde{\theta})\,d\tilde{\theta}} \qquad (1)$$

There are plenty of resources available about how to obtain this posterior (e.g. Gelman et al., 2013; Kruschke, 2015), which reflects the information (or belief or knowledge or uncertainty) about the parameter after the investigation was conducted.

Based on this posterior distribution, it is possible to calculate the expected posterior loss for each action $a \in \mathbb{A}$ by integrating the loss function over the posterior density:

$$\rho(a, x) = \int_\Theta L(\theta, a)\,\pi(\theta \mid x)\,d\theta \qquad (2)$$

The optimal action $a^*$ has minimal expected posterior loss:

$$a^* = \operatorname*{arg\,min}_{a \in \mathbb{A}} \; \rho(a, x) \qquad (3)$$
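Equations (1)–(3) can be traced with a short numerical sketch. All concrete choices below (standard normal prior, unit-variance normal sampling distribution, data values, and loss shapes) are assumed purely for illustration:

```python
import numpy as np

# Parameter grid and N(0,1) prior density (all choices are illustrative)
theta = np.linspace(-5, 5, 10001)
dtheta = theta[1] - theta[0]
prior = np.exp(-0.5 * theta**2) / np.sqrt(2 * np.pi)

# Hypothetical data: n i.i.d. observations from N(theta, 1)
x = np.array([0.8, 1.1, 0.5, 1.4])
loglik = -0.5 * ((x[:, None] - theta[None, :]) ** 2).sum(axis=0)
lik = np.exp(loglik - loglik.max())

# Posterior density via Bayes' rule, equation (1), normalized on the grid
post = lik * prior
post /= post.sum() * dtheta

# Hypothetical loss function L(theta, a) for the two actions
loss_a1 = np.where(theta > 0.5, 1.0, 0.0)   # a1 is bad if theta > 0.5
loss_a2 = np.where(theta <= 0.5, 2.0, 0.0)  # a2 is twice as bad if theta <= 0.5

# Expected posterior losses, equation (2), and the optimal action, equation (3)
rho_a1 = (loss_a1 * post).sum() * dtheta
rho_a2 = (loss_a2 * post).sum() * dtheta
optimal = "a1" if rho_a1 < rho_a2 else "a2"
```

With these assumed numbers the posterior concentrates above 0.5, so $a_2$ has the smaller expected posterior loss despite its larger error penalty.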

3 Hypothesis-Based Bayesian Decision Theory

As mentioned, typically, the loss function is not fully accessible, as the essential information about it might be scarce, vague, partial, and ambiguous. A commonly employed solution is a hypothesis-based analysis: The researcher considers each possible parameter value and assesses which action should be preferred if this parameter value would be true. These considerations lead to two sets of parameters $\Theta_1$ and $\Theta_2$ for which the actions $a_1$ and $a_2$ should be preferred, respectively. These sets define the hypotheses

$$H_1: \theta \in \Theta_1 \quad \text{and} \quad H_2: \theta \in \Theta_2 \qquad (4)$$

employed in conventional analyses, such as hypothesis tests or Bayes factors.

From the posterior density $\pi(\theta \mid x)$ it is possible to determine the posterior probabilities of the parameter sets $\Theta_1$ and $\Theta_2$, i.e. of the hypotheses $H_1$ and $H_2$, by

$$P(H_1 \mid x) = \int_{\Theta_1} \pi(\theta \mid x)\,d\theta \quad \text{and} \quad P(H_2 \mid x) = \int_{\Theta_2} \pi(\theta \mid x)\,d\theta \qquad (5)$$

respectively. The ratio of these beliefs $P(H_1 \mid x) / P(H_2 \mid x)$ is referred to as posterior odds.

The underlying assumption of hypothesis-based analyses is that the loss values within these sets $\Theta_1$ and $\Theta_2$ are constant, respectively (see Figure 1). This assumption shall be referred to as simplification assumption and is inherent to a statistical analysis which considers statistical hypotheses and derives applied conclusions based on respective (hypothesis-based) results. In addition (without loss of generality), the loss values for deciding correctly (i.e. for $a_1$ if $\theta \in \Theta_1$ or for $a_2$ if $\theta \in \Theta_2$) can be set to $0$. The resulting loss function is in regret form (depicted in Table 1) and has only two values to specify: $l_1 := L(\theta, a_2)$ if $\theta \in \Theta_1$ and $l_2 := L(\theta, a_1)$ if $\theta \in \Theta_2$.

Table 1: Simplified Hypothesis-Based Loss Function.

                         decide $a_1$    decide $a_2$
  $\theta \in \Theta_1$      $0$             $l_1$
  $\theta \in \Theta_2$      $l_2$           $0$
Figure 1: Hypothesis-Based Loss Function. The hypothesis-based loss function (y-axis) in regret form (see Table 1), in dependence of the parameter $\theta$ (x-axis) and the actions $a_1$ (left) and $a_2$ (right), is assumed to be constant within the sets $\Theta_1$ and $\Theta_2$, respectively. This is an assumption (simplification assumption) inherent to a hypothesis-based statistical analysis which – at least implicitly – considers an underlying applied decision problem.

With this simplified loss function (Table 1), the expected posterior loss of each action can be calculated as

$$\rho(a_1, x) = l_2 \cdot P(H_2 \mid x) \qquad (6)$$
$$\rho(a_2, x) = l_1 \cdot P(H_1 \mid x) \qquad (7)$$

and the action with minimal expected posterior loss shall be selected.

Only the ratio $k = l_1 / l_2$ is required to determine this optimal action. This ratio states how much worse it would be to decide for $a_2$ if $H_1$ is true (type-I-error) than to decide for $a_1$ if $H_2$ is true (type-II-error), if deciding correctly has loss $0$. With the ratio of expected posterior losses

$$\frac{\rho(a_2, x)}{\rho(a_1, x)} = k \cdot \frac{P(H_1 \mid x)}{P(H_2 \mid x)} \qquad (8)$$

the optimal action is

$$a^* = \begin{cases} a_1 & \text{if } \; k \cdot \frac{P(H_1 \mid x)}{P(H_2 \mid x)} > 1 \\[4pt] a_2 & \text{if } \; k \cdot \frac{P(H_1 \mid x)}{P(H_2 \mid x)} < 1 \end{cases} \qquad (9)$$

For $k \cdot \frac{P(H_1 \mid x)}{P(H_2 \mid x)} = 1$ any action might be chosen.
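The decision rule of equations (8) and (9) is simple enough to state as a small helper function; the function name and the example numbers are hypothetical:

```python
def optimal_action(p_h1_post, k):
    """Optimal action given P(H1|x) and the loss ratio k = l1/l2.

    k states how much worse the type-I-error (deciding a2 although H1
    is true) is than the type-II-error (deciding a1 although H2 is true).
    """
    post_odds = p_h1_post / (1.0 - p_h1_post)
    ratio = k * post_odds  # ratio of expected posterior losses, equation (8)
    if ratio > 1:
        return "a1"        # a2 has the larger expected posterior loss
    if ratio < 1:
        return "a2"
    return "either"        # ratio == 1: both actions are optimal

# Even with P(H1|x) = 0.4, a type-I-error deemed three times as bad
# (k = 3) makes a1 the optimal action, since 3 * (0.4/0.6) = 2 > 1.
```

Note that the decision depends on the posterior odds only through their product with $k$, so evidence and loss considerations enter symmetrically.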

4 Bayes Factors

By assessing even the prior distribution in the light of the hypotheses, it is possible to obtain the prior probabilities of the hypotheses (illustrated in Figure 2, left):

$$P(H_1) = \int_{\Theta_1} \pi(\theta)\,d\theta \quad \text{and} \quad P(H_2) = \int_{\Theta_2} \pi(\theta)\,d\theta \qquad (10)$$

Analogously, the ratio of these beliefs $P(H_1) / P(H_2)$ is referred to as prior odds.

In addition, the prior distribution can be restricted to each of the hypotheses, referred to as within-hypothesis priors (illustrated in Figure 2, middle and right), and the corresponding densities are

$$\pi_1(\theta) = \frac{\pi(\theta) \cdot \mathbb{1}[\theta \in \Theta_1]}{P(H_1)} \qquad (11)$$
$$\pi_2(\theta) = \frac{\pi(\theta) \cdot \mathbb{1}[\theta \in \Theta_2]}{P(H_2)} \qquad (12)$$

where $\mathbb{1}[A] = 1$ if the statement $A$ is true and $\mathbb{1}[A] = 0$ if $A$ is false.

The overall prior distribution can be decomposed (cp. Rouder et al., 2018) into the prior probabilities of the hypotheses and the within-hypothesis priors (Figure 2):

$$\pi(\theta) = P(H_1) \cdot \pi_1(\theta) + P(H_2) \cdot \pi_2(\theta) \qquad (13)$$
Figure 2: Prior Decomposition. Assume a standard normal prior distribution for $\theta$. Left: The prior density $\pi(\theta)$ is depicted as a solid line. $P(H_1)$ and $P(H_2)$ can be calculated as the respective areas under this density, depicted as light gray and dark gray, respectively. Middle: The within-hypothesis density $\pi_1(\theta)$ as in equation (11) is depicted as a solid line. Right: The within-hypothesis density $\pi_2(\theta)$ as in equation (12) is depicted as a solid line.

Instead of considering the overall prior distribution together with the hypotheses (which leads to the posterior odds, as in Section 3), the Bayes factor is obtained by considering only the within-hypothesis priors together with the hypotheses:

$$BF_{12} = \frac{\int_{\Theta_1} f(x \mid \theta)\,\pi_1(\theta)\,d\theta}{\int_{\Theta_2} f(x \mid \theta)\,\pi_2(\theta)\,d\theta} \qquad (14)$$

The posterior odds can then be calculated from the Bayes factor and the prior odds:

$$\frac{P(H_1 \mid x)}{P(H_2 \mid x)} = BF_{12} \cdot \frac{P(H_1)}{P(H_2)} \qquad (15)$$

The optimal decision can now be obtained as in the previous section (Section 3) by considering the loss function.
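Equations (10)–(15) can also be traced numerically. The sketch below assumes, purely for illustration, a standard normal overall prior, hypotheses $H_1: \theta \le 0$ and $H_2: \theta > 0$, a unit-variance normal sampling distribution, and a single observation:

```python
import numpy as np

theta = np.linspace(-8, 8, 20001)
dtheta = theta[1] - theta[0]
prior = np.exp(-0.5 * theta**2) / np.sqrt(2 * np.pi)  # overall prior N(0,1)
in_h1, in_h2 = theta <= 0, theta > 0                  # hypothetical hypotheses

# Prior probabilities of the hypotheses, equation (10)
p_h1 = (prior * in_h1).sum() * dtheta
p_h2 = (prior * in_h2).sum() * dtheta

# Within-hypothesis prior densities, equations (11) and (12)
pi_1 = prior * in_h1 / p_h1
pi_2 = prior * in_h2 / p_h2

x = 0.9  # single observation from N(theta, 1), value assumed
lik = np.exp(-0.5 * (x - theta)**2) / np.sqrt(2 * np.pi)

# Marginal likelihoods under the within-hypothesis priors -> equation (14)
m1 = (lik * pi_1).sum() * dtheta
m2 = (lik * pi_2).sum() * dtheta
bf_12 = m1 / m2

# Posterior odds via equation (15)
post_odds = bf_12 * (p_h1 / p_h2)
```

Since the assumed prior is symmetric around zero, the prior odds are close to one and the posterior odds essentially equal the Bayes factor, which here favors $H_2$.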

5 Robust Loss Function

However, a precise specification of the value $k$ is typically not accessible, as the essential information about the “badness” of the consequences of the decision is scarce, vague, partial, and ambiguous. Yet, this partial information needs to be included in the analysis, as ignoring it facilitates suboptimal decisions. A decision cannot be guided properly without considering its consequences.

This partial information about the loss can be captured less arbitrarily and more robustly by an interval than by a precise value (cp. e.g. Walley, 1991; Augustin et al., 2014). To do so, the researcher has to determine a lower bound $\underline{k}$ and an upper bound $\overline{k}$ for reasonable values of $k$ (i.e. for the ratio of how much worse the type-I-error is compared to the type-II-error, if deciding correctly has a loss of $0$).

To perform a robust analysis (cp. also Ríos Insua and Ruggeri, 2012) with this interval-valued specification, it is possible to obtain and consider the optimal action for each value $k$ within the interval $[\underline{k}, \overline{k}]$.

If the optimal action is the same for each $k$ within $[\underline{k}, \overline{k}]$, then this action should be chosen. If not, the decision should be withheld, because the data or the information about the decision problem are not sufficient to unambiguously guide the decision.

Formally (see also Schwaferts and Augustin, 2019, 2020, 2021), the ratios of expected posterior losses need to be calculated for both the lower and the upper bound, respectively:

$$\underline{r} = \underline{k} \cdot \frac{P(H_1 \mid x)}{P(H_2 \mid x)} \quad \text{and} \quad \overline{r} = \overline{k} \cdot \frac{P(H_1 \mid x)}{P(H_2 \mid x)} \qquad (16)$$

Then, the optimal action is

$$a^* = \begin{cases} a_1 & \text{if } \underline{r} > 1 \\ a_2 & \text{if } \overline{r} < 1 \end{cases} \qquad (17)$$

For $\underline{r} \le 1 \le \overline{r}$, the decision should be withheld.
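The robust rule of equations (16) and (17) translates directly into code; the function name and the example values are hypothetical:

```python
def robust_decision(post_odds, k_lo, k_hi):
    """Robust optimal action for a loss ratio known only up to [k_lo, k_hi]."""
    r_lo = k_lo * post_odds  # lower ratio of expected posterior losses, eq. (16)
    r_hi = k_hi * post_odds  # upper ratio of expected posterior losses
    if r_lo > 1:
        return "a1"          # every k in the interval favors a1, eq. (17)
    if r_hi < 1:
        return "a2"          # every k in the interval favors a2
    return "withhold"        # the bounds disagree: withhold the decision

# Posterior odds of 0.5 with k in [1, 3]: the ratios 0.5 and 1.5 straddle 1,
# so the decision is withheld; with k in [3, 4] both exceed 1 and a1 is chosen.
```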

6 Improper Priors

Furthermore, the calculation of Bayes factors comes along with a restriction (Jeffreys, 1961) on the prior distribution: It must be a proper distribution, i.e. the density has to integrate to $1$:

$$\int_\Theta \pi(\theta)\,d\theta = 1 \qquad (18)$$

In contrast, an improper prior distribution is characterized by a non-integrable function (e.g. $\pi(\theta) = c$ with $c$ being a constant, see Figure 3, dotted line) and, technically, this prior distribution is not a proper probability distribution. However, improper priors are frequently allowed within Bayesian prior specifications, because they might lead to proper posterior distributions (see Figure 3, solid line). In this case, the posterior odds can be calculated reasonably and a decision can be guided consistently.

The prior odds, however, might not be reasonable (e.g. with $\pi(\theta) = c$ as in Figure 3). Accordingly, the Bayes factor, calculated via equation (15) as

$$BF_{12} = \frac{P(H_1 \mid x) \,/\, P(H_2 \mid x)}{P(H_1) \,/\, P(H_2)} \qquad (19)$$

cannot be calculated reasonably due to its dependence on the prior odds. Therefore, Bayes factors require – in contrast to a Bayesian analysis in general – proper prior distributions. This is truly a limitation, as improper priors are frequently employed for representing non-knowledge or for letting the data speak for themselves (e.g. Gelman et al., 2013).

Figure 3: Improper Prior. Assume the model $x \sim N(\theta, \sigma^2)$ with known variance $\sigma^2$ and unknown parameter $\theta$, and the hypotheses $H_1: \theta \in \Theta_1$, $H_2: \theta \in \Theta_2$. The function $\pi(\theta) = c$, with $c$ being an arbitrary constant, characterizes the improper prior distribution for $\theta$ (dotted line). For a sample of size $n$ with in-sample mean $\bar{x}$, the posterior distribution is proper (solid line), such that its density integrates to $1$. The prior “beliefs” in the hypotheses depend on the arbitrary constant $c$ and are not reasonably interpretable (light gray and dark gray, respectively).

This issue is alleviated in hypothesis-based Bayesian decision theoretic accounts, as improper priors typically yield proper posterior odds. Accordingly, a researcher who is interested in guiding a decision might employ the decision theoretic framework directly without explicitly calculating the Bayes factor. Then, also improper priors might be employed.
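This can be made concrete with a flat improper prior $\pi(\theta) \propto c$ on a normal mean: the posterior is the proper distribution $N(\bar{x}, \sigma^2/n)$, so the posterior odds exist even though the prior odds do not. The data values and hypotheses below are assumed for illustration:

```python
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Hypothetical sample from N(theta, sigma = 1) under a flat improper prior
x = [0.3, 0.9, 0.6, 1.0]
n, sigma = len(x), 1.0
x_bar = sum(x) / n                 # posterior is N(x_bar, sigma^2 / n)

# Hypotheses H1: theta <= 0 and H2: theta > 0 (assumed for illustration)
p_h1_post = normal_cdf((0.0 - x_bar) / (sigma / sqrt(n)))

# The posterior odds are well-defined, although the prior odds (and hence
# the Bayes factor) are not: both prior "probabilities" are infinite.
post_odds = p_h1_post / (1.0 - p_h1_post)
```

Combined with a loss ratio $k$, these posterior odds suffice for the decision rule of Section 3, with no Bayes factor ever being computed.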

Please note that it is still an ongoing debate whether non-knowledge can be formalized by a precise improper prior distribution and if so, which improper prior distribution shall be employed. Although the authors of this paper doubt that non-knowledge can be formalized by a precise prior distribution, even if it is improper (cp. e.g. Augustin et al., 2014), this issue shall not be addressed here. In general, it is important that the employed prior distribution matches with the available information (or non-information) about the phenomenon of interest, and this applies to every point of view within this debate. In this regard, the present elaboration emphasizes only that it is mathematically possible to employ improper priors if decisions are of interest, which is an advantage of the (more general) hypothesis-based Bayesian decision theoretic account over Bayes factors.

7 Step-By-Step Guides

7.1 Hypothesis-Based Bayesian Decision Theory

In order to apply this hypothesis-based Bayesian decision theoretic framework with a robust loss function, a researcher might follow these steps.

Step 1: Actions. First of all, the researcher needs to specify the actions. It is recommended to explicitly state and report these actions, e.g. by (this example is taken from Bartolucci et al., 2011)

  • $a_1$: do not administer aspirin to prevent myocardial infarction

  • $a_2$: administer aspirin to prevent myocardial infarction

If the researcher has difficulties stating the actions, maybe there is no decision to guide and a descriptive analysis might suffice (cp. also Cumming, 2014; Kruschke and Liddell, 2018).

Step 2: Sampling Distribution. Next, the researcher should provide a detailed description of the investigation and how it is characterized (i.e. the sampling distribution). It is recommended to also explicitly state the employed parameter and its interpretation. This is the basis for specifying the hypotheses.

Step 3: Prior Distribution. In the Bayesian setting, it is possible to include prior information (or belief or knowledge or uncertainty) into the analysis. In that, the researcher has to specify a prior distribution on the parameter. It is recommended to fully report the available prior information about the parameter and why this leads to the prior density .

Of course, this specification is far from being unambiguous. However, this is a fundamental issue inherent to every Bayesian analysis (not only Bayesian decision theoretic accounts) and solving this issue is not the intention of this elaboration. Nevertheless, solutions, such as sensitivity analyses (found in almost every Bayesian textbook, e.g. Gelman et al., 2013), exist. It is recommended at this step of the analysis to also state all other possible prior densities that are in accordance with the available prior information, as these serve as basis for a subsequent sensitivity analysis.

Naturally, also non-informative priors might be specified and they might also be improper (as long as they lead to proper posterior distributions).

Step 4: Assumption. If the researcher is unable to specify the loss function $L(\theta, a)$, then a hypothesis-based simplification as in Section 3 might be a solution. This simplification is an assumption on the loss function, namely that the loss function is constant within each of two parameter sets. If this assumption is not appropriate, it might lead to errors (which are inherent to every hypothesis-based analysis) and the researcher needs to be aware of this consequence. It is recommended to explicitly report that this assumption was made. Transparency is one of the basic principles in science (cp. Gelman and Hennig, 2017).

Step 5: Hypotheses. Now, the researcher has to consider each possible parameter value and assess which action should be preferred if this parameter value would be true. All parameter values for which $a_1$ or $a_2$ should be preferred are comprised within the sets $\Theta_1$ or $\Theta_2$, respectively. Certainly, there are parameter values that define the border between both sets $\Theta_1$ and $\Theta_2$. It is recommended to explicitly state what these values mean in real life and why they define reasonable borders between $\Theta_1$ and $\Theta_2$.

Step 6: Errors. Deciding for $a_2$ if $H_1$ is true is the type-I-error and deciding for $a_1$ if $H_2$ is true is the type-II-error. Both errors should be delineated, as they serve as basis for specifying the ratio $k$. It is recommended to explicitly state these errors and their consequences, e.g. by

  • Type-I-error: administer aspirin to prevent myocardial infarction, but the effect is negligible. Consequence: patients unnecessarily suffer side effects of aspirin.

  • Type-II-error: do not administer aspirin to prevent myocardial infarction, although it would have an effect. Consequence: some patients suffer a myocardial infarction, which could have been prevented.

Of course, this is only a schematic illustration and in real empirical studies these elaborations will be more comprehensive.

Step 7: Loss Magnitude. The researcher has to imagine that the “badness” of deciding correctly is $0$. In this context, the researcher has to determine how much worse the type-I-error is compared to the type-II-error. This is the value $k$. As a precise value for $k$ is difficult to determine, it might be easier to specify a range $[\underline{k}, \overline{k}]$ of plausible values for $k$. It is recommended to report all considerations that lead to this specification.

Step 8: Investigation. Now, the investigation can be conducted, and it is recommended to preregister the previous specifications, the design of the experiment, and the planned (decision theoretic) analysis of the data (cp. Nosek et al., 2018; Klein et al., 2018); study designs can be preregistered e.g. at www.cos.io/initiatives/prereg. Registered reports even allow to obtain a peer review prior to collecting the data; information about registered reports can be found e.g. at www.cos.io/rr.

Step 9: Posterior Distribution. The observed data are used to obtain the posterior distribution as well as the posterior beliefs in the hypotheses, $P(H_1 \mid x)$ and $P(H_2 \mid x)$. There are countless references on how to do this (e.g. Gelman et al., 2013; Kruschke, 2015).

Step 10: Optimal Action. The researcher has to calculate $\underline{r}$ and $\overline{r}$ as in equation (16) to find the optimal action as in equation (17).

For $\underline{r} \le 1 \le \overline{r}$, the decision should be withheld, because the data or the information about the decision problem are not sufficient to unambiguously guide the decision. In this case, a reasonable strategy might be to collect more data or to gather more information about the decision problem, especially about the consequences of the errors, to narrow down $[\underline{k}, \overline{k}]$. However, it is recommended to transparently report that a decision was withheld at first and which subsequent steps were taken to obtain more information.

Step 11: Publish Data. Of course, other researchers might need the data to guide their decisions. It is to be expected that they have different prior knowledge and that their decisions employ different hypotheses. Without having access to the data set (but only to the reported analysis), it might be difficult, or even impossible, for them to guide their decisions properly, emphasizing the importance of open science; comprehensive information about open science is provided e.g. by the LMU Open Science Center: www.osc.uni-muenchen.de.

7.2 From Bayes Factors to Decisions

Sometimes, a researcher wants to use the results of a previous study to guide a decision. Assume a Bayes factor was already calculated and shall now be used to guide this decision.

Step A: Applicability of the Sampling Distribution. Confirm that the interpretation of the parameter is actually relevant for the decision of interest. If this is not the case, the available data (or Bayes factor) can hardly be used to guide the decision of interest.

Step B: Applicability of the Hypotheses. Certain specific hypotheses were assumed in order to calculate the Bayes factor. These need to match the decision problem of interest. To assess this, the potential actions of the decision problem of interest need to be delineated as in Step 1 and the parameter sets $\Theta_1$ and $\Theta_2$ need to be obtained as in Step 5. These sets have to be equivalent to the hypotheses that were employed in the calculation of the Bayes factor. If this is not the case, it is recommended not to use this Bayes factor value and to restart the decision theoretic account within the previous section (Section 7.1). In this regard, it is helpful if the data set that was used to calculate the original Bayes factor is fully accessible.

Step C: Applicability of the Prior Distribution. Confirm that the within-hypothesis prior distributions employed for calculating the Bayes factor match the available information about the phenomenon of interest. If this is not the case, it is recommended not to use this Bayes factor value and to restart the decision theoretic account within the previous section (Section 7.1). Again, to do so it is helpful if the data set that was used to calculate the original Bayes factor is fully accessible.

Step D: Prior Odds. As the calculation of the Bayes factor does not require the prior odds but only the within-hypothesis prior distributions, the former need to be specified in order to guide a decision. In this regard, the researcher has to specify the prior probabilities of the hypotheses. Analogous to Step 3, as this is part of the Bayesian prior specification, it is recommended to fully report the available information about the parameter and why it leads to the prior probabilities of the hypotheses.

Step E: Loss Function. Specify the (interval-valued) loss function by following Steps 4, 6, and 7.

Step F: Posterior Odds. Use the available Bayes factor to calculate the posterior odds via equation (15).

Step G: Optimal Action. The optimal action can be derived as in Step 10.
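Steps D to G can be summarized in a single function; the Bayes factor, prior odds, and loss-ratio interval below are assumed values for illustration:

```python
def decision_from_bayes_factor(bf_12, prior_odds, k_lo, k_hi):
    """From a reported Bayes factor (in favor of H1) to a robust decision."""
    post_odds = bf_12 * prior_odds      # Step F, equation (15)
    r_lo = k_lo * post_odds             # Step G, equation (16)
    r_hi = k_hi * post_odds
    if r_lo > 1:                        # equation (17)
        return "a1"
    if r_hi < 1:
        return "a2"
    return "withhold"

# A reported BF of 6 for H1, prior odds of 1/2, and k in [0.5, 1.5]
# give posterior odds of 3 and ratios 1.5 and 4.5, so a1 is chosen.
```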

8 Concluding Remarks

Statisticians and methodologists do not – in general – know all the different fields of application and research contexts a statistical method will eventually be employed in. The scientific endeavor is extremely versatile, and research problems might arise that have not been thought of before. Hence, versatility of research methods is of paramount importance. While it might be considered disadvantageous that frequentist hypothesis tests are restricted to a dichotomous decision context, it might similarly be considered disadvantageous if Bayes factors were restricted to only an evidential, non-decision context. Fortunately, the mathematics underlying Bayes factors suggest their involvement in guiding decisions. In this regard, Bayes factors might be seen as an evidence quantification or as a quantity in the context of guiding decisions, depending on the goals of the scientific investigation.

In order to use Bayes factors correctly when guiding decisions, their embedding within the framework of Bayesian decision theory has to be considered. It is important that the research context as well as the decision problem are formalized appropriately; if misspecified, the results fail to address the research question. Naturally, the specification of essential quantities (such as the prior distribution, the hypotheses, or the loss function) is an applied problem and might be rather difficult for the applied scientist. In order to alleviate these issues, these quantities might be specified robustly as interval-valued or set-valued quantities. Then the researcher might consider a range or a set of plausible values, avoiding the necessity to (arbitrarily) commit to one single precise value. Within this elaboration, only an interval-valued loss value was considered, as it keeps the calculations simple (compare Section 5) yet allows to include essential loss information (about the consequences of the decision) into the analysis. Naturally, also the prior distribution and the hypotheses might be included into the analysis as set-valued quantities (see e.g. Ebner et al., 2019). How to deal with set-valued quantities on a formal level is extensively elaborated on within the field of imprecise probabilities (see e.g. Walley, 1991; Augustin et al., 2014; Huntley et al., 2014).

In summary, this elaboration delineates the decision theoretic embedding of Bayes factors by outlining the framework of hypothesis-based Bayesian decision theory, supplemented by considerations about robust loss specifications and straightforward step-by-step guides. These guides try to help those applied scientists who want to guide decisions with Bayes factors.

References

  • Augustin et al. (2014) Augustin T., Coolen F.P., de Cooman G., and Troffaes M.C.M., editors (2014). Introduction to Imprecise Probabilities. John Wiley, Chichester. URL http://dx.doi.org/10.1002/9781118763117.
  • Bartolucci et al. (2011) Bartolucci A.A., Tendera M., and Howard G. (2011). Meta-analysis of multiple primary prevention trials of cardiovascular events using aspirin. The American Journal of Cardiology, 107(12):1796–1801. URL http://dx.doi.org/10.1016/j.amjcard.2011.02.325.
  • Berger (1985) Berger J.O. (1985). Statistical Decision Theory and Bayesian Analysis. Springer, New York, second edition. URL http://dx.doi.org/10.1007/978-1-4757-4286-2.
  • Cumming (2014) Cumming G. (2014). The new statistics: Why and how. Psychological Science, 25(1):7–29. URL http://dx.doi.org/10.1177/0956797613504966.
  • Ebner et al. (2019) Ebner L., Schwaferts P., and Augustin T. (2019). Robust Bayes factor for independent two-sample comparisons under imprecise prior information. In J. De Bock, C.P. de Campos, G. de Cooman, E. Quaeghebeur, and G. Wheeler, editors, Proceedings of the Eleventh International Symposium on Imprecise Probability: Theories and Applications, volume 103 of Proceedings of Machine Learning Research, pages 167–174. PMLR. URL http://proceedings.mlr.press/v103/ebner19a.html.
  • Gelman and Hennig (2017) Gelman A. and Hennig C. (2017). Beyond subjective and objective in statistics. Journal of the Royal Statistical Society: Series A (Statistics in Society), 180:967–1033. URL http://dx.doi.org/10.1111/rssa.12276.
  • Gelman et al. (2013) Gelman A., Stern H.S., Carlin J.B., Dunson D.B., Vehtari A., and Rubin D.B. (2013). Bayesian Data Analysis. Chapman & Hall. URL http://dx.doi.org/10.1201/9780429258411.
  • Gönen et al. (2005) Gönen M., Johnson W.O., Lu Y., and Westfall P.H. (2005). The Bayesian two-sample t test. The American Statistician, 59:252–257. URL http://dx.doi.org/10.1198/000313005X55233.
  • Huntley et al. (2014) Huntley N., Hable R., and Troffaes M.C.M. (2014). Decision making. In T. Augustin, F.P.A. Coolen, G. de Cooman, and M.C.M. Troffaes, editors, Introduction to Imprecise Probabilities, pages 190–206. John Wiley & Sons. URL http://dx.doi.org/10.1002/9781118763117.ch8.
  • Jeffreys (1961) Jeffreys H. (1961). Theory of Probability. Oxford University Press, Oxford, third edition.
  • Kass and Raftery (1995) Kass R.E. and Raftery A.E. (1995). Bayes factors. Journal of the American Statistical Association, 90:773–795. URL http://dx.doi.org/10.2307/2291091.
  • Klein et al. (2018) Klein O., Hardwicke T.E., Aust F., Breuer J., Danielsson H., Mohr A.H., IJzerman H., Nilsonne G., Vanpaemel W., and Frank M.C. (2018). A practical guide for transparency in psychological science. Collabra: Psychology, 4(1):20(1–15). URL http://dx.doi.org/10.1525/collabra.158.
  • Kruschke (2015) Kruschke J.K. (2015). Doing Bayesian Data Analysis: A Tutorial With R, JAGS, and Stan. Academic Press, New York. URL http://dx.doi.org/10.1016/B978-0-12-405888-0.09999-2.
  • Kruschke and Liddell (2018) Kruschke J.K. and Liddell T.M. (2018). The Bayesian new statistics: Hypothesis testing, estimation, meta-analysis, and power analysis from a Bayesian perspective. Psychonomic Bulletin & Review, 25(1):178–206. URL http://dx.doi.org/10.3758/s13423-016-1221-4.
  • Morey et al. (2016) Morey R.D., Romeijn J.W., and Rouder J.N. (2016). The philosophy of Bayes factors and the quantification of statistical evidence. Journal of Mathematical Psychology, 72:6–18. URL http://dx.doi.org/10.1016/j.jmp.2015.11.001.
  • Nosek et al. (2018) Nosek B.A., Ebersole C.R., DeHaven A.C., and Mellor D.T. (2018). The preregistration revolution. Proceedings of the National Academy of Sciences, 115(11):2600–2606. URL http://dx.doi.org/10.1073/pnas.1708274114.
  • Ríos Insua and Ruggeri (2012) Ríos Insua D. and Ruggeri F., editors (2012). Robust Bayesian Analysis. Springer Science & Business Media. URL http://dx.doi.org/10.1007/978-1-4612-1306-2.
  • Robert (2007) Robert C. (2007). The Bayesian Choice: From Decision-Theoretic Foundations to Computational Implementation. Springer, New York, second edition. URL http://dx.doi.org/10.1007/0-387-71599-1.
  • Rouder et al. (2018) Rouder J.N., Haaf J.M., and Vandekerckhove J. (2018). Bayesian inference for psychology, part iv: Parameter estimation and Bayes factors. Psychonomic Bulletin & Review, 25(1):102–113. URL http://dx.doi.org/10.3758/s13423-017-1420-7.
  • Rouder et al. (2009) Rouder J.N., Speckman P.L., Sun D., Morey R.D., and Iverson G. (2009). Bayesian t tests for accepting and rejecting the null hypothesis. Psychonomic Bulletin & Review, 16:225–237. URL http://dx.doi.org/10.3758/PBR.16.2.225.
  • Schwaferts and Augustin (2019) Schwaferts P. and Augustin T. (2019). Imprecise hypothesis-based Bayesian decision making with simple hypotheses. In J. De Bock, C.P. de Campos, G. de Cooman, E. Quaeghebeur, and G. Wheeler, editors, Proceedings of the Eleventh International Symposium on Imprecise Probability: Theories and Applications, volume 103 of Proceedings of Machine Learning Research, pages 338–345. PMLR. URL http://proceedings.mlr.press/v103/schwaferts19a.html.
  • Schwaferts and Augustin (2020) Schwaferts P. and Augustin T. (2020). Bayesian decisions using regions of practical equivalence (ROPE): Foundations. Technical Report 235, Ludwig-Maximilians-University Munich, Department of Statistics. URL http://dx.doi.org/10.5282/ubm/epub.74222.
  • Schwaferts and Augustin (2021) Schwaferts P. and Augustin T. (2021). Imprecise hypothesis-based Bayesian decision making with composite hypotheses. In A. Cano, J. De Bock, E. Miranda, and S. Moral, editors, Proceedings of the Twelfth International Symposium on Imprecise Probability: Theories and Applications, volume 147 of Proceedings of Machine Learning Research, pages 280–288. PMLR. URL https://proceedings.mlr.press/v147/schwaferts21a.html.
  • Walley (1991) Walley P. (1991). Statistical Reasoning With Imprecise Probabilities. Chapman & Hall, London.