Hypothesis testing is widely used in scientific research. For a statistical model with parameter $\theta$ taking values in a parameter space $\Theta$, the general formulation is
$$H_0: \theta \in N \quad \text{vs.} \quad H_1: \theta \in A,$$
where $N$ and $A$ are disjoint subsets of $\Theta$. Traditionally, a point null hypothesis takes $N$ to be a single point $\theta_0$, leading to the following:
$$H_0: \theta = \theta_0 \quad \text{vs.} \quad H_1: \theta \neq \theta_0.$$
However, many researchers, such as Meehl (1978) and Cohen (1994), have argued that this formulation is not appropriate in most scientific research because the parameter can never be specified exactly as a single point $\theta_0$. For example, when comparing a new treatment with a standard one, the difference in the mean of an outcome variable, say $\theta$, is almost never exactly 0. A more appropriate formulation is therefore
$$H_0: \theta \in N \quad \text{vs.} \quad H_1: \theta \in A, \qquad N = \{\theta : |\theta - \theta_0| \le \epsilon\}, \quad A = \{\theta : |\theta - \theta_0| > \epsilon\},$$
where any $\theta$ satisfying $|\theta - \theta_0| \le \epsilon$ represents a practically negligible deviation from $\theta_0$. For example, Lakens et al. (2018) and Lakens (2017) use this approach to test for a pre-specified meaningful treatment effect in gerontology and in clinical trials. Morey and Rouder (2011) review and develop interval null hypothesis testing in the context of psychological research.
The frequentist approach to interval null hypothesis testing is to conduct an equivalence test (e.g. Lakens, 2017; Rogers et al., 1993; Wellek, 2010). The limitations of frequentist hypothesis testing are well documented (Wasserstein et al., 2016); in particular, it can only quantify evidence against the null hypothesis, not evidence for it. Recently, there has been renewed interest (Harms and Lakens, 2018; Lakens et al., 2018; Morey and Rouder, 2011; Kruschke, 2013) in using a Bayesian approach to tackle the interval null hypothesis problem, so that $H_0$ and $H_1$ can be treated on a more equal basis. The standard Bayesian approach is to use the Bayes factor to quantify the relative support of the data for one hypothesis over the other. Standard references for the Bayes factor include Kass and Wasserman (1995); Berger (2013); Berger and Pericchi (1996); Berger et al. (2001); Berger and Pericchi (2015). The special issue of the Journal of Mathematical Psychology (June, 2016) provides an in-depth and updated discussion of the Bayes factor.
Alternatively, Kruschke and others (Kruschke, 2018, 2011, 2013, 2014; Carlin and Louis, 2008; Edwards and Berry, 2010; Freedman et al., 1984; Hobbs and Carlin, 2007) have championed a procedure called ROPE (region of practical equivalence), in which the interval null $N$ is treated as a region of practical equivalence. In this procedure, the $1 - \alpha$ highest density interval of the posterior distribution of $\theta$ is constructed. If this interval falls completely in $N$ or completely in $A$, then $H_0$ or $H_1$, respectively, is selected. Otherwise, the selection between $H_0$ and $H_1$ is declared uncertain.
The Bayes factor and ROPE have been treated as two different and distinct procedures. In this paper, we provide a formal connection between them in Lemma 1, with two benefits. First, it helps to better understand and improve the ROPE procedure. Second, and more importantly, it leads to a simple and effective algorithm for computing the Bayes factor in a wide range of problems, using posterior draws generated by standard Bayesian software programs such as WinBUGS (Lunn et al., 2000), JAGS (Kurt Hornik and Zeileis, 2003), and Stan (Carpenter et al., 2017). This circumvents the need for custom-made software to calculate the marginal distributions in the Bayes factor.
2 Connection between Bayes factor and the posterior distribution
We first outline the Bayes factor approach to testing $H_0: \theta \in N$ vs. $H_1: \theta \in A$. We first specify a prior $\pi_0$ for $\theta$ under $H_0$, supported on $N$, and a prior $\pi_1$ for $\theta$ under $H_1$, supported on $A$. Let $f(x \mid \theta)$ be the likelihood. The marginal distribution of the data $x$ under each hypothesis is then
$$m_0(x) = \int_N f(x \mid \theta)\, \pi_0(\theta)\, d\theta, \qquad m_1(x) = \int_A f(x \mid \theta)\, \pi_1(\theta)\, d\theta.$$
Further assume that the prior probability for $H_0$ is $p$ and the prior probability for $H_1$ is $1 - p$. It follows directly from Bayes' theorem that
$$P(H_0 \mid x) = \frac{p\, m_0(x)}{p\, m_0(x) + (1 - p)\, m_1(x)}. \qquad (1)$$
The term $BF_{01} = m_0(x)/m_1(x)$ is called the Bayes factor, which quantifies the relative support in the data for $H_0$ over $H_1$ independent of $p$. Note that the posterior odds equal the Bayes factor times the prior odds:
$$\frac{P(H_0 \mid x)}{P(H_1 \mid x)} = BF_{01} \cdot \frac{p}{1 - p}.$$
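As a quick numeric check of these identities, the following sketch uses made-up marginal likelihood values (chosen purely for illustration, not taken from any data) to verify that the posterior odds equal the Bayes factor times the prior odds:

```python
# Made-up marginal likelihoods m0, m1 and prior probability p, used only to
# illustrate Eq. (1) and the posterior-odds identity.
m0, m1, p = 0.8, 0.2, 0.25

post_h0 = p * m0 / (p * m0 + (1 - p) * m1)   # Eq. (1): P(H0 | x)
bf01 = m0 / m1                               # Bayes factor BF01
posterior_odds = post_h0 / (1 - post_h0)
prior_odds = p / (1 - p)

print(post_h0)          # P(H0 | x) = 4/7
print(posterior_odds)   # equals bf01 * prior_odds = 4/3
```

Here a Bayes factor of 4 in favor of $H_0$ quadruples the prior odds of $1/3$, regardless of how $p$ was chosen.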
For interval null hypotheses, the Bayes factor and the posterior distribution of $\theta$ are closely related, as shown in Lemma 1.
Lemma 1. Assume that $\Theta$ is the disjoint union of $N$ and $A$, with $H_0: \theta \in N$ and $H_1: \theta \in A$. Let $\pi_0$, $\pi_1$, and $p$ be as defined before, and define the combined prior of $\theta$ on $\Theta$ to be
$$\pi(\theta) = p\, \pi_0(\theta)\, 1\{\theta \in N\} + (1 - p)\, \pi_1(\theta)\, 1\{\theta \in A\}. \qquad (2)$$
Let $f(x \mid \theta)$ be the likelihood and let $\pi(\theta \mid x)$ be the posterior density of $\theta$ under this $\pi$. Then
$$P(\theta \in N \mid x) = P(H_0 \mid x) \qquad \text{and} \qquad BF_{01} = \frac{P(\theta \in N \mid x)}{P(\theta \in A \mid x)} \cdot \frac{1 - p}{p}.$$
Proof. Under the combined prior $\pi$,
$$P(\theta \in N \mid x) = \frac{\int_N f(x \mid \theta)\, \pi(\theta)\, d\theta}{\int_\Theta f(x \mid \theta)\, \pi(\theta)\, d\theta} = \frac{p\, m_0(x)}{p\, m_0(x) + (1 - p)\, m_1(x)},$$
which is the first expression of the Lemma. The second expression, on the Bayes factor, follows from this result and (1). ∎
Although Lemma 1 is straightforward, it does not appear to have been published before. Two important implications of this Lemma are discussed in the next two sections.
3 The ROPE Procedure
In the ROPE procedure, a prior distribution $\pi$ is directly specified for the parameter $\theta$ on $\Theta$, and the posterior distribution of $\theta$ is then computed. Kruschke (2018) summarized the ROPE procedure as follows.
“Consider a ROPE around a null value of a parameter. If the HDI [Highest Density Interval] of the parameter distribution falls completely outside the ROPE then reject the null value, because the most credible values of the parameter are all not practically equivalent to the null value. If the HDI of the parameter distribution falls completely inside the ROPE then accept the null value for practical purposes, because the most credible values of the parameter are all practically equivalent to the null value. Otherwise remain undecided, because some of the most credible values are practically equivalent to the null while other of the most credible values are not.”
The highest density interval (HDI) is used in ROPE because it is the credible interval of shortest length (or smallest volume). This approach has strong intuitive appeal and is similar to frequentist equivalence testing (e.g., Lakens (2017); Rogers et al. (1993); Wellek (2010)), in which a confidence interval is used in place of the highest density interval.
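The three-way HDI rule above can be applied directly to posterior draws. The following is a minimal Python sketch; the function names, the default 95% credibility level, and the ROPE limits $[-0.1, 0.1]$ are our own illustrative choices, not part of any published implementation:

```python
import numpy as np

def hdi(draws, cred=0.95):
    """Shortest interval containing `cred` posterior mass, from MCMC draws."""
    x = np.sort(np.asarray(draws))
    n = len(x)
    k = int(np.ceil(cred * n))          # number of draws the interval must cover
    widths = x[k - 1:] - x[:n - k + 1]  # widths of all candidate intervals
    i = int(np.argmin(widths))          # index of the shortest one
    return x[i], x[i + k - 1]

def rope_decision(draws, rope=(-0.1, 0.1), cred=0.95):
    """Kruschke's three-way ROPE rule applied to posterior draws."""
    lo, hi = hdi(draws, cred)
    if rope[0] <= lo and hi <= rope[1]:
        return "accept null (HDI inside ROPE)"
    if hi < rope[0] or lo > rope[1]:
        return "reject null (HDI outside ROPE)"
    return "undecided"
```

Posterior draws concentrated inside the ROPE lead to "accept", draws far outside to "reject", and draws straddling a boundary to "undecided".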
We now provide additional details of the ROPE procedure. First, ROPE declares the choice between $H_0$ and $H_1$ undecided when the HDI is contained in neither $N$ nor $A$. This, however, is not fully informative, as it fails to distinguish between two different cases: (1) the HDI is almost contained in $N$ (or alternatively in $A$); (2) the HDI is spread more evenly between $N$ and $A$. To improve on this, note that $P(\theta \in N \mid x) \ge 1 - \alpha$ if any $1 - \alpha$ credible interval is contained in $N$, and similarly for $A$. Therefore, we propose to directly report $P(\theta \in N \mid x)$, with a decision rule to declare for $H_0$ when this probability exceeds a pre-specified threshold. For the two cases above, $P(\theta \in N \mid x)$ is close to 1 (or 0) in Case 1 but close to 0.5 in Case 2, which is more informative. In addition, $P(\theta \in N \mid x)$ connects ROPE directly to the Bayes factor, as seen in Lemma 1. Reporting $P(\theta \in N \mid x)$ was also proposed in Wellek (2010), but Kruschke (2018) seems to reject this idea in favor of using the HDI. Our analysis provides additional support for using $P(\theta \in N \mid x)$ instead of the HDI.
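From posterior draws, the reported quantity $P(\theta \in N \mid x)$ is simply the fraction of draws falling in $N$. A minimal sketch, where the function name and default ROPE limits are illustrative assumptions:

```python
import numpy as np

def prob_in_null(draws, rope=(-0.1, 0.1)):
    """Estimate P(theta in N | x) as the fraction of posterior draws in N."""
    d = np.asarray(draws)
    return float(np.mean((d >= rope[0]) & (d <= rope[1])))
```

Unlike the three-way HDI rule, this single number distinguishes the two cases above: it is near 1 (or 0) in Case 1 and near 0.5 in Case 2.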
Second, Kruschke and others have touted the greater robustness of ROPE to the prior. Here we reformulate the ROPE procedure in a mathematically equivalent way so that it more closely resembles the Bayes factor formulation. The ROPE procedure starts with a single prior distribution $\pi$ on $\Theta$, but by truncating this distribution to $N$ and $A$, respectively, we arrive at priors $\pi_0$ and $\pi_1$, along with $p = \int_N \pi(\theta)\, d\theta$, the prior probability of $H_0$. More specifically, we have
$$\pi_0(\theta) = \frac{\pi(\theta)\, 1\{\theta \in N\}}{p}, \qquad \pi_1(\theta) = \frac{\pi(\theta)\, 1\{\theta \in A\}}{1 - p}.$$
This allows us to explore the greater robustness of the ROPE procedure over the Bayes factor approach. As an example, consider the hypotheses $H_0: |\theta| \le \epsilon$ vs. $H_1: |\theta| > \epsilon$, and let $\pi$ be a normal prior centered at 0 with standard deviation $\sigma$. As $\sigma \to \infty$, $\pi$ approaches a flat prior and $\pi_1$ becomes more and more diffuse on $A$. It is well known from Lindley's paradox that a very diffuse $\pi_1$ leads to increased support for $H_0$ over $H_1$ (Robert, 2014). At the same time, however, $p = \int_N \pi(\theta)\, d\theta \to 0$, which compensates for the more diffuse $\pi_1$, as seen from Equation (1). This accounts for the apparent greater robustness of the ROPE procedure. Although greater robustness is generally desirable, this implicit entangling of $\pi_0$, $\pi_1$, and $p$ can be difficult to comprehend for many statisticians. How to specify these quantities in a more meaningful way remains a topic for further research.
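This compensation can be seen numerically. The sketch below, assuming a $N(0, \sigma^2)$ prior and ROPE $N = [-0.1, 0.1]$ (both illustrative choices), computes the implicit prior probability $p$ for increasingly diffuse priors using only the standard library:

```python
import math

def normal_cdf(x, sigma):
    """CDF of N(0, sigma^2), computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))

eps = 0.1  # ROPE half-width, so N = [-eps, eps]
for sigma in (0.5, 2.0, 10.0):
    # implicit prior probability of H0 induced by the single N(0, sigma^2) prior
    p = normal_cdf(eps, sigma) - normal_cdf(-eps, sigma)
    print(f"sigma = {sigma:5.1f}  ->  p = P(theta in N) = {p:.4f}")
```

As $\sigma$ grows, $\pi_1$ spreads out over $A$ while $p$ shrinks toward 0; the two effects offset each other in Equation (1).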
4 Computing Bayes factors using MCMC draws from the posterior distribution
The Bayes factor usually requires custom software to compute the marginal distributions $m_0(x)$ and $m_1(x)$ through numerical integration. Coding such software can be a tedious and error-prone task. Lemma 1, however, provides a simple way to compute the Bayes factor using standard Bayesian programs that draw from the posterior distribution, such as Stan (Carpenter et al., 2017), JAGS (Kurt Hornik and Zeileis, 2003), and WinBUGS (Lunn et al., 2000). Let $\theta^{(1)}, \ldots, \theta^{(T)}$ be draws from the posterior distribution under the combined prior $\pi$ in Equation (2), where $T$ is the Monte Carlo sample size. Then under general conditions,
$$\widehat{BF}_{01} = \frac{\sum_{t=1}^{T} 1\{\theta^{(t)} \in N\}}{\sum_{t=1}^{T} 1\{\theta^{(t)} \in A\}} \cdot \frac{1 - p}{p} \;\longrightarrow\; BF_{01} \quad \text{as } T \to \infty. \qquad (3)$$
There are several advantages of this approach to computing the Bayes factor compared with custom software. First, it reduces programming cost considerably. Second, packages such as Stan (Carpenter et al., 2017), JAGS (Kurt Hornik and Zeileis, 2003), and WinBUGS (Lunn et al., 2000) are well-tested, high-quality software routines that are familiar to most statisticians performing Bayesian analyses, so correct answers are more likely to be obtained quickly. Third, statisticians have gained considerable experience with and understanding of the posterior distribution, and this approach facilitates the adoption of the Bayes factor as an integrated part of Bayesian inference. The disadvantage is that it can be computationally less efficient than custom software specifically designed for the Bayes factor.
The Bayes factor does not depend on the value of $p$. In fact, (3) holds for any $p \in (0, 1)$. However, in our experience, the computational efficiency of drawing Monte Carlo samples from the posterior distribution can be greatly improved when $p$ is chosen so that the combined prior $\pi$ in Equation (2) is more continuous at the boundary between $N$ and $A$.
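Once the draws are available, the estimator in Equation (3) is one line of post-processing. The following is a minimal numpy sketch for a scalar $\theta$ with $N = [-\epsilon, \epsilon]$; the function and argument names are our own, and the draws are assumed to come from the posterior under the combined prior of Equation (2):

```python
import numpy as np

def bayes_factor_01(draws, rope=(-0.1, 0.1), p=0.5):
    """Estimate BF01 via Eq. (3) from posterior draws under the combined prior.

    `p` must be the same prior probability of H0 that was used to build
    the combined prior from which the draws were generated.
    """
    d = np.asarray(draws)
    in_null = (d >= rope[0]) & (d <= rope[1])
    n0 = int(in_null.sum())        # draws falling in N
    n1 = int((~in_null).sum())     # draws falling in A
    if n0 == 0 or n1 == 0:
        raise ValueError("too few draws on one side; increase the MC sample size")
    return (n0 / n1) * (1.0 - p) / p
```

For example, with draws [-0.05, 0.0, 0.05, 0.5, 1.0] and p = 0.5, three draws fall in N and two in A, giving an estimate of (3/2) x 1 = 1.5.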
Using MCMC draws to compute Bayes factors has been discussed previously. For example, Morey et al. (2011) discuss using an improved Savage-Dickey method to compute Bayes factors for nested models. Weinberg et al. (2012), in a more general setting, propose the use of harmonic mean approximations and other methods for computing the marginal distributions. Our method, by exploiting the special structure of interval hypotheses, is much simpler.

We demonstrate our method in the following two examples.
4.1 Example 1: Two sample t-test
The dataset (Lyle et al., 1987) consists of changes in systolic blood pressure over 12 weeks for 21 African-American men, 10 of whom took calcium supplements and the remaining 11 of whom took placebo supplements. Testing for equality of means between these two groups with a two-sample t-test (assuming equal variances) yields a t-statistic of 1.63 and a p-value of 0.12.
We now proceed to analyze the data using the interval null hypothesis Bayes factor. Let $\mu_1$ and $\mu_2$ be the mean changes in systolic blood pressure for the calcium and placebo groups, respectively, and let $\sigma$ be the common standard deviation. We are interested in evaluating the standardized effect size
$$\delta = \frac{\mu_1 - \mu_2}{\sigma}.$$
In particular, we consider the following hypotheses:
$$H_0: |\delta| \le \epsilon \quad \text{vs.} \quad H_1: |\delta| > \epsilon$$
for some pre-specified $\epsilon > 0$. To compute the Bayes factor, we specify a prior $\pi_0$ for $\delta$ under $H_0$, supported on $N = [-\epsilon, \epsilon]$, and a prior $\pi_1$ for $\delta$ under $H_1$, supported on $A$, where a scale parameter in $\pi_1$ is to be specified; a larger value of this parameter corresponds to a prior distribution that deviates more from $\delta = 0$. Let $p$ be the prior probability of hypothesis $H_0$. The combined prior for $\delta$ is then (2) in Lemma 1. Given the above priors for $\delta$ and the remaining parameters, together with the likelihood, we use Stan to generate draws from the posterior distribution of $\delta$, and we then use Equation (3) to obtain a consistent estimator of the Bayes factor.
We calculate the Bayes factor of $H_1$ over $H_0$ for a value of $\epsilon$ generally considered a small effect size, over a range of values of the scale parameter in the prior for $\delta$ under $H_1$, reflecting different degrees of deviation of $\delta$ from 0. The result is plotted in Figure 1. All these Bayes factors are between 1 and 1.4, indicating slightly more support for $H_1$ than for $H_0$.
4.2 Example 2: Meta analysis with a hierarchical Bayes model
Here we consider a meta-analysis of a dataset originally published in Table 10 of Yusuf et al. (1985), which contains mortality data across 22 studies of patients who were treated with either a beta-blocker (treatment group) or a placebo (control group) after experiencing a heart attack. The dataset is also reproduced and analyzed in (Gelman et al., 2013, Sec. 5.6). The data from each study form a $2 \times 2$ table, and the variables for each study are summarized in Table 2.
For the $j$th study, $j = 1, \ldots, 22$, let $p_{1j}$ and $p_{0j}$ be the underlying probabilities of death for the beta-blocker and placebo groups, respectively, and let
$$\theta_j = \log\left(\frac{p_{1j}/(1 - p_{1j})}{p_{0j}/(1 - p_{0j})}\right)$$
be the corresponding log odds ratio. We assume $\theta_j \sim N(\mu, \tau^2)$, and the focus of our inference is $\mu$, the mean of the individual $\theta_j$. In particular, we compare $H_0: |\mu| \le \epsilon$ vs. $H_1: |\mu| > \epsilon$ under a hierarchical Bayesian model for the study-level log odds ratios.
For each choice of $\epsilon$ and the priors, we draw MCMC samples from the posterior of $\mu$ using Stan and estimate $P(\mu \in N \mid x)$ and the Bayes factor of $H_1$ over $H_0$ as in Lemma 1. The results are given in Table 3 for three different values of the effect size threshold $\epsilon$. The Bayes factor shows strong support for $H_1$ over $H_0$ for the smallest $\epsilon$ and weak support for the intermediate value. When $\epsilon$ is increased to 0.3, however, the Bayes factor shows relatively strong support for $H_0$ over $H_1$.
Stan and R code for Examples 1 and 2 are available at https://sites.google.com/site/jiangangliao.
In summary, this paper establishes a formal connection between ROPE and the Bayes factor for comparing interval hypotheses. This connection leads to a better understanding of ROPE and provides a simple method to compute the Bayes factor using MCMC draws from the posterior distribution.
- Berger (2013) Berger, J. O. (2013), Statistical decision theory and Bayesian analysis, 2nd edn Springer.
- Berger and Pericchi (1996) Berger, J. O., and Pericchi, L. R. (1996), “The intrinsic Bayes factor for model selection and prediction,” Journal of the American Statistical Association, 91(433), 109–122.
- Berger et al. (2001) Berger, J. O., Pericchi, L. R., Ghosh, J., Samanta, T., De Santis, F., Berger, J., and Pericchi, L. (2001), “Objective Bayesian methods for model selection: Introduction and comparison,” Lecture Notes-Monograph Series, pp. 135–207.
- Berger and Pericchi (2015) Berger, J., and Pericchi, L. (2015), “Bayes Factors,” Wiley StatsRef: Statistics Reference Online, https://doi.org/10.1002/9781118445112.stat00224.pub2.
- Carlin and Louis (2008) Carlin, B. P., and Louis, T. A. (2008), Bayesian Methods for Data Analysis, 3rd edn Chapman and Hall/CRC.
- Carpenter et al. (2017) Carpenter, B., Gelman, A., Hoffman, M., Lee, D., Goodrich, B., Betancourt, M., Brubaker, M., Guo, J., Li, P., and Riddell, A. (2017), “Stan: A probabilistic programming language,” Journal of Statistical Software, 76(1).
- Cohen (1994) Cohen, J. (1994), “The earth is round (p < .05),” American Psychologist, 49(12), 997–1003.
- Edwards and Berry (2010) Edwards, J. R., and Berry, J. W. (2010), “The presence of something or the absence of nothing: Increasing theoretical precision in management research,” Organizational Research Methods, 13(4), 668–689.
- Freedman et al. (1984) Freedman, L., Lowe, D., and Macaskill, P. (1984), “Stopping rules for clinical trials incorporating clinical opinion,” Biometrics, pp. 575–586.
- Gelman et al. (2013) Gelman, A., Carlin, J. B., Stern, H. S., and Rubin, D. B. (2013), Bayesian Data Analysis, 3rd edn Chapman and Hall/CRC.
- Gönen et al. (2005) Gönen, M., Johnson, W. O., Lu, Y., and Westfall, P. H. (2005), “The Bayesian two-sample t test,” The American Statistician, 59(3), 252–257.
- Harms and Lakens (2018) Harms, C., and Lakens, D. (2018), “Making ‘null effects’ informative: statistical techniques and inferential frameworks,” J Clin Transl Res, 3(S2), 382–393.
- Hobbs and Carlin (2007) Hobbs, B. P., and Carlin, B. P. (2007), “Practical Bayesian design and analysis for drug and device clinical trials,” Journal of Biopharmaceutical Statistics, 18(1), 54–80.
- Kass and Wasserman (1995) Kass, R. E., and Wasserman, L. (1995), “A reference Bayesian test for nested hypotheses and its relationship to the Schwarz criterion,” Journal of the American Statistical Association, 90(431), 928–934.
- Kruschke (2014) Kruschke, J. (2014), Doing Bayesian data analysis: A tutorial with R, JAGS, and Stan, 2nd edn Academic Press.
- Kruschke (2011) Kruschke, J. K. (2011), “Bayesian assessment of null values via parameter estimation and model comparison,” Perspectives on Psychological Science, 6(3), 299–312.
- Kruschke (2013) Kruschke, J. K. (2013), “Bayesian estimation supersedes the t test,” Journal of Experimental Psychology: General, 142(2), 573.
- Kruschke (2018) Kruschke, J. K. (2018), “Rejecting or accepting parameter values in Bayesian estimation,” Advances in Methods and Practices in Psychological Science, 1(2), 270–280.
- Kurt Hornik and Zeileis (2003) Kurt Hornik, F. L., and Zeileis, A., eds (2003), JAGS: A program for analysis of Bayesian graphical models using Gibbs sampling, Proceedings of the 3rd International Workshop on Distributed Statistical Computing, https://www.r-project.org/conferences/DSC-2003/.
- Lakens (2017) Lakens, D. (2017), “Equivalence Tests: A Practical Primer for t Tests, Correlations, and Meta-Analyses,” Social Psychological and Personality Science, 8(4), 355–362.
- Lakens et al. (2018) Lakens, D., McLatchie, N., Isager, P. M., Scheel, A. M., and Dienes, Z. (2018), “Improving inferences about null effects with Bayes factors and equivalence tests,” The Journals of Gerontology: Series B, in press, doi:10.1093/geronb/gby065.
- Lunn et al. (2000) Lunn, D. J., Thomas, A., Best, N., and Spiegelhalter, D. (2000), “WinBUGS - A Bayesian modelling framework: Concepts, structure, and extensibility,” Statistics and Computing, 10(4), 325–337.
- Lyle et al. (1987) Lyle, R. M., Melby, C. L., Hyner, G. C., Edmondson, J. W., Miller, J. Z., and Weinberger, M. H. (1987), “Blood pressure and metabolic effects of calcium supplementation in normotensive white and black men,” JAMA, 257(13), 1772–1776.
- Meehl (1978) Meehl, P. E. (1978), “Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology.,” Journal of Consulting and Clinical Psychology, 46(4), 806.
- Morey and Rouder (2011) Morey, R. D., and Rouder, J. N. (2011), “Bayes factor approaches for testing interval null hypotheses,” Psychological Methods, 16(4), 406.
- Morey and Rouder (2018) Morey, R. D., and Rouder, J. N. (2018), BayesFactor: Computation of Bayes Factors for Common Designs, R package version 0.9.12-4.2 edn, https://CRAN.R-project.org/package=BayesFactor.
- Morey et al. (2011) Morey, R. D., Rouder, J. N., Pratte, M. S., and Speckman, P. L. (2011), “Using MCMC chain outputs to efficiently estimate Bayes factors,” Journal of Mathematical Psychology, 55(5), 368–378.
- Robert (2014) Robert, C. P. (2014), “On the Jeffreys-Lindley paradox,” Philosophy of Science, 81(2), 216–232.
- Rogers et al. (1993) Rogers, J. L., Howard, K. I., and Vessey, J. T. (1993), “Using significance tests to evaluate equivalence between two experimental groups,” Psychological Bulletin, 113(3), 553–565.
- Rouder et al. (2009) Rouder, J. N., Speckman, P. L., Sun, D., Morey, R. D., and Iverson, G. (2009), “Bayesian t tests for accepting and rejecting the null hypothesis,” Psychonomic Bulletin & Review, 16(2), 225–237.
- Wang and Liu (2016) Wang, M., and Liu, G. (2016), “A simple two-sample Bayesian t-test for hypothesis testing,” The American Statistician, 70(2), 195–201.
- Wasserstein et al. (2016) Wasserstein, R. L., Lazar, N. A. et al. (2016), “The ASA’s statement on p-values: Context, process, and purpose,” The American Statistician, 70(2), 129–133.
- Weinberg et al. (2012) Weinberg, M. D. et al. (2012), “Computing the Bayes factor from a Markov chain Monte Carlo simulation of the posterior distribution,” Bayesian Analysis, 7(3), 737–770.
- Wellek (2010) Wellek, S. (2010), Testing Statistical Hypotheses of Equivalence and Noninferiority, 2nd edn Chapman and Hall/CRC.
- Yusuf et al. (1985) Yusuf, S., Peto, R., Lewis, J., Collins, R., and Sleight, P. (1985), “Beta blockade during and after myocardial infarction: An overview of the randomized trials,” Progress in Cardiovascular Diseases, 27(5), 335–371.