Put the odds on your side: a new measure for epidemiological associations

06/11/2018 ∙ by Olga A. Vsevolozhskaya, et al. ∙ National Institutes of Health


Abstract

The odds ratio (OR) is a measure of effect size commonly used in observational research. OR reflects statistical association between a binary outcome, such as the presence of a health condition, and a binary predictor, such as an exposure to a pollutant. Statistical inference and interval estimation for OR are often performed on the logarithmic scale, due to asymptotic convergence of log(OR) to a normal distribution. Here, we propose a new normalized measure of effect size, γ′, and derive its asymptotic distribution. We show that the new statistic, based on the γ′ distribution, is more powerful than the traditional one for testing the hypothesis H0: log(OR) = 0. The new normalized effect size is termed "gamma prime" in the spirit of D′, a normalized measure of genetic linkage disequilibrium, which ranges from -1 to 1 for a pair of genetic loci. The normalization constant for γ′ is based on the maximum range of the standardized effect size, for which we establish a peculiar connection to the Laplace Limit Constant. Furthermore, while standardized effects are of little value on their own, we propose a powerful application, in which standardized effects are employed as an intermediate step in an approximate, yet accurate posterior inference for raw effect size measures, such as log(OR) and γ′.

Keywords: gamma prime coefficient; posterior interval estimation; standardized coefficient

Introduction

The odds ratio (OR) is a ubiquitous measure of effect size in medical and epidemiological research. Among its useful properties are invariance across sampling designs (e.g., case-control or prospective) and a straightforward interpretation in terms of logistic regression coefficients. Moreover, the log-transformed odds ratio, log(OR), has analytically attractive properties; for example, the distribution of log(ÔR) rapidly converges to a normal distribution as the sample size increases. Given the estimated log odds ratio, log(ÔR), a common statistic used to assess the significance of an association is:

    Z = log(ÔR) / σ̂_log(ÔR),    (1)
    σ̂²_log(ÔR) = 1/n11 + 1/n10 + 1/n01 + 1/n00,    (2)

where n = n11 + n10 + n01 + n00 is the sum of the four cell counts in a 2×2 table. The total sample size can be factored out and the statistic re-expressed as:

    Z = √n · log(ÔR)/σ̂,    σ̂² = 1/(ŵ p̂1 (1−p̂1)) + 1/((1−ŵ) p̂2 (1−p̂2)),

where ŵ is the sample proportion of cases, p̂1 is the estimated probability of exposure among cases, and p̂2 is the estimated probability of exposure among the controls. Thus, the asymptotically normal Z-statistic can be expressed as a product of the square root of the sample size (√n) and the standardized log(OR) (γ̂ = log(ÔR)/σ̂) in the following way:

    Z = √n · γ̂.

In this paper, we derive the lower and the upper bounds on the possible values of the standardized effect size γ = log(OR)/σ and show that |γ| cannot exceed the Laplace Limit Constant (LLC). Although this result was previously stated by our group (1), the details and the derivation were never provided. Then, using the LLC as a normalizing constant, we propose a new measure of effect size, γ′, that varies between -1 and 1. We derive an asymptotic distribution of γ̂′ and show, via simulation experiments, that the new association statistic based on the γ′ distribution is more powerful than the traditional one based on Eqs. (1)-(2). Finally, we show how the standardized log(OR) can be utilized to accurately approximate posterior inference for the raw effect size, such as log(OR) and the newly proposed γ′.
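As a concrete illustration, the following minimal Python sketch computes the statistic both ways; the 2×2 cell counts and variable names are illustrative choices of ours, not values from the paper.

```python
import numpy as np

# Illustrative 2x2 table: rows = cases/controls, columns = exposed/unexposed.
n11, n10 = 40, 60   # exposed / unexposed cases
n01, n00 = 20, 80   # exposed / unexposed controls
n = n11 + n10 + n01 + n00

log_or = np.log((n11 * n00) / (n10 * n01))

# Eqs. (1)-(2): classical Wald statistic with the cell-count variance.
se = np.sqrt(1/n11 + 1/n10 + 1/n01 + 1/n00)
z1 = log_or / se

# Factored form: sqrt(n) times the standardized log(OR).
w  = (n11 + n10) / n          # sample proportion of cases
p1 = n11 / (n11 + n10)        # exposure probability among cases
p2 = n01 / (n01 + n00)        # exposure probability among controls
sigma = np.sqrt(1/(w*p1*(1-p1)) + 1/((1-w)*p2*(1-p2)))
gamma_hat = log_or / sigma    # standardized log(OR)
z2 = np.sqrt(n) * gamma_hat

print(z1, z2)  # identical up to floating-point error
```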

Methods

Bounds for the standardized logarithm of odds ratio

We shall assume for now that the epidemiological data are summarized by a 2×2 table:

                              Exposure status
                          Exposed    Unexposed    Total
    Disease    Cases        n11         n10        n1
    status     Controls     n01         n00        n0

where n1 = n11 + n10 is the number of cases; n0 = n01 + n00 is the number of controls; and the number of exposed subjects is n11 + n01. When sampling is random with respect to exposure, the sample proportions p̂1 = n11/n1 and p̂2 = n01/n0 are estimates of the population probabilities of exposure among cases and among controls, respectively, and ŵ = n1/n is the proportion of cases. Then, the effect of exposure on an outcome can be measured by the odds ratio, OR, defined as:

    OR = [p1/(1−p1)] / [p2/(1−p2)] = (p1 q2) / (q1 p2),

where q1 = 1 − p1 and q2 = 1 − p2.

To study the influence of various risk factors on the outcome, one can test the null hypothesis H0: OR = 1, or equivalently H0: log(OR) = 0. The logarithmic transformation is advantageous because of the bounded and asymmetric nature of OR (it cannot take negative values) and also due to the fact that the distribution of log(ÔR) quickly converges to normality. Then, the classical test statistic is defined as:

    Z = √n · log(ÔR)/σ̂,    σ̂² = 1/(ŵ p̂1 q̂1) + 1/((1−ŵ) p̂2 q̂2),

where q̂1 = 1 − p̂1, q̂2 = 1 − p̂2, and ŵ = n1/n is the sample proportion of cases. The corresponding population parameter can be written as:

    σ² = 1/(w p1 q1) + 1/((1−w) p2 q2).    (3)

Conditionally on the value of OR, we can express the variance σ² as a function of two variables (w and p1) to emphasize that the standard deviation σ will vary depending on the study design and the population prevalence of exposure among cases (we note that p2 can be expressed in terms of p1 and OR as p2 = p1/[OR(1−p1) + p1]). Alternatively, conditionally on the observed OR, one can express σ² in terms of the exposure probability, q, and the risk of disease among the exposed, r1, as:

    σ² = 1/(q r1(1−r1)) + 1/((1−q) r2(1−r2)),    (4)

with r2 = r1/[OR(1−r1) + r1].

To obtain the maximum possible value of the standardized log(OR), we first need to minimize σ², conditional on the OR value, with respect to its two parameters. For example, if we set the first partial derivative of Eq. (3) with respect to w to zero, and solve the resulting equation in terms of w, it follows that:

    w = √(p2 q2) / (√(p1 q1) + √(p2 q2)).    (5)

Further, setting the first partial derivative of Eq. (3) with respect to p1 to zero and plugging in this value of w instead of w results in

    p1 = 1 / (1 + e^(−log(OR)/2))    (6)

and

    p2 = 1 / (1 + e^(log(OR)/2)) = 1 − p1.    (7)

Now, substituting Eqs. (6) and (7) into Eq. (5), we obtain w = 1/2. Similarly, operating with Eq. (4), we can express the minimum value of σ in terms of log(OR) as

    σ_min = 4 cosh(log(OR)/4),    (8)

where cosh(·) is the hyperbolic cosine. Then, we can obtain an equivalent expression for q, just as we did for w: q = 1/2. Using the conditional value of σ_min, the maximum standardized log(OR) is

    γ_max = log(OR) / σ_min.    (9)

Using the identity cosh(x) = (e^x + e^(−x))/2,

    γ_max = log(OR) / (4 cosh(log(OR)/4))    (10)
          = log(OR) / (2 (e^(log(OR)/4) + e^(−log(OR)/4))).    (11)

Equation (11) depends only on the logarithm of the odds ratio, but it is not monotone in it: γ_max reaches its maximum at the log(OR) value of about 4.7987. Surprisingly, as log(OR) exceeds that value, the corresponding normalized coefficient starts to decrease. Further, although Equation (11) depends only on log(OR), its maximum can only be attained at specific values of the population parameters. Namely, (a) w = 1/2; (b) p1 and p2 as given by Eqs. (6) and (7), which implies p1 ≈ 0.917 and p2 ≈ 0.083; and (c) log(OR) ≈ 4.7987, which implies OR ≈ 121.354 and γ_max = LLC ≈ 0.6627.
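This non-monotonicity is easy to verify numerically; a small sketch (the grid bounds and resolution are arbitrary choices):

```python
import numpy as np

def gamma_max(beta):
    """Maximum standardized log(OR) at a given beta = log(OR), Eq. (11)."""
    return beta / (4.0 * np.cosh(beta / 4.0))

beta = np.linspace(0.0, 10.0, 1_000_001)
g = gamma_max(beta)
i = np.argmax(g)
print(beta[i], g[i])    # ~4.79871 and ~0.66274 (the LLC)
print(np.exp(beta[i]))  # ~121.35, the OR at which the maximum is attained
```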

Connection to the Laplace Limit Constant

It turns out that there is an interesting connection between the expression for γ_max and the famous Kepler Equation (KE) of orbital mechanics, M = E − e sin(E). Geometric interpretations of M, E, and e are illustrated by Figure 1. Specifically, suppose that one is inside a circular orbit, rescaled to be the unit circle, at the position S. The shortest path to the orbit has length 1 − e. A celestial body travels the orbit from that point to point T. Given the area M/2 and the distance e, we want to determine the angle E. These three values are related to one another by Kepler's Equation. Planetary orbits are elliptical, so the actual orbit is along an ellipse inside of the unit circle. Still, the calculation of the eccentric anomaly, E, is a crucial step in determining the planet's coordinates along its elliptical orbit at various time points.

KE is transcendental, i.e., with no algebraic solution in terms of M and e, and it has been studied extensively, since it is central to celestial mechanics. Colwell (2) notes that "in virtually every decade from 1650 to the present" there have been papers devoted to the Kepler Equation, in the book suitably named "Solving Kepler's Equation over three centuries." The solution to KE involves a condition equivalent to Eq. (11). Namely, the solution can be expressed as a power series in e, provided that e is smaller than 0.6627434…, which is the "Laplace Limit Constant," LLC (3). The detailed mathematical derivation of the connection between Eq. (11) and the LLC is provided in "Supplemental Materials (S-1)."
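In practice, E is found numerically. The sketch below uses Newton's method, with an assumed tolerance and the usual starting guesses, in contrast to the classical power-series solution in e, which converges only when e is below the LLC.

```python
import math

def solve_kepler(M, e, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) for E by Newton's method."""
    E = M if e < 0.8 else math.pi  # common starting guess
    while True:
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            return E

E = solve_kepler(M=1.0, e=0.5)
print(E, E - 0.5 * math.sin(E))  # the second value recovers M = 1.0
```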

The proposed normalized measure of effect size and its distribution

As we showed above, at any value of log(OR), the maximum of the standardized effect size γ is

    γ_max = log(OR) / (4 cosh(log(OR)/4)).

The bounded nature of γ_max (ranging between −LLC and LLC) suggests a new normalized measure of effect size,

    γ′ = γ_max / LLC = log(OR) / (4 LLC cosh(log(OR)/4)),

that has the range (−1, 1). The new statistic is appropriate as a measure of effect size where it is monotone in log(OR): within a very wide range of odds ratios, 1/121 < OR < 121. For instance, Figure 2 shows that under the null hypothesis, the relationship between log(OR) and γ′ is almost linear, and under the alternative hypothesis, the relationship is close to linear and monotone, as long as 1/121 < OR < 121 (these are the rounded-to-integer OR values before the LLC maximum is reached).
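The transformation itself is one line of code; a sketch (the LLC value is truncated to ten digits, and the helper name gamma_prime is ours):

```python
import numpy as np

LLC = 0.6627434193  # Laplace Limit Constant

def gamma_prime(odds_ratio):
    """Normalized effect size: the standardized-log(OR) bound scaled to (-1, 1)."""
    beta = np.log(odds_ratio)
    return beta / (4.0 * LLC * np.cosh(beta / 4.0))

# Monotone and nearly linear for moderate ORs:
for OR in (0.5, 1.0, 1.25, 2.0, 3.0, 4.0):
    print(OR, round(float(gamma_prime(OR)), 3))
```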

Although γ′ is derived by using the range of the standardized log(OR), it is not a standardized measure in the same sense as scaling by a standard deviation. It is rather analogous to the coefficient denoted by D′ (4), which is commonly used in genetics to measure association between alleles at a pair of genetic loci (linkage disequilibrium, LD). γ′ is akin to D′ in the sense that a raw measure of LD is divided by its maximum value (which is a function of allele frequencies) to yield the −1 to 1 range.

Using the first-order Taylor series approximation, we derive an asymptotic variance of γ̂′, as well as one- and two-sided asymptotic test statistics, as follows:

    Var(γ̂′) ≈ [∂γ′/∂log(OR)]² · Var(log(ÔR)),

which simplifies to

    Var(γ̂′) ≈ [1 − (log(ÔR)/4) tanh(log(ÔR)/4)]² / [16 LLC² cosh²(log(ÔR)/4)] · σ̂²/n.    (12)

The asymptotic distributions for the one- and two-sided statistics are

    Z_γ′ = γ̂′ / √Var(γ̂′) → N(0, 1)    and    Z²_γ′ → χ²(1),

where γ̂′ is defined as before:

    γ̂′ = log(ÔR) / (4 LLC cosh(log(ÔR)/4)).

We show by simulation experiments that the null distribution of this new statistic reaches its asymptotic chi-square distribution more quickly than that of the commonly used Z², and that the new statistic provides higher power under the alternative hypothesis.
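A minimal Monte Carlo sketch of the size comparison under the null is below; it uses the first-order (delta-method) variance from Eq. (12), and the sample sizes, exposure probability, and replicate count are illustrative assumptions of ours.

```python
import numpy as np

rng = np.random.default_rng(1)
LLC = 0.6627434193

def g(b):        # gamma prime as a function of b = log(OR)
    return b / (4 * LLC * np.cosh(b / 4))

def g_deriv(b):  # d gamma'/d log(OR), the delta-method term
    return (1 - (b / 4) * np.tanh(b / 4)) / (4 * LLC * np.cosh(b / 4))

def simulate_null(n_cases=25, n_controls=25, p=0.3, reps=100_000):
    # 2x2 tables under H0 (equal exposure probability); 0.5 added to each cell
    a = rng.binomial(n_cases, p, reps) + 0.5      # exposed cases
    c = rng.binomial(n_controls, p, reps) + 0.5   # exposed controls
    b = n_cases - a + 1.0                         # unexposed cases
    d = n_controls - c + 1.0                      # unexposed controls
    beta = np.log(a * d / (b * c))
    var_beta = 1/a + 1/b + 1/c + 1/d
    chi2_z = beta**2 / var_beta                           # classical Wald chi-square
    chi2_g = g(beta)**2 / (g_deriv(beta)**2 * var_beta)   # gamma'-based statistic
    crit = 3.841458820694124                              # chi2(1) 0.95 quantile
    return (chi2_z > crit).mean(), (chi2_g > crit).mean()

print(simulate_null())  # empirical sizes of the two tests at nominal 0.05
```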

We note that two other well-known transformations of the OR with the range from −1 to 1 are Yule's coefficients: Y = (√OR − 1)/(√OR + 1), the coefficient of colligation, and Q = (OR − 1)/(OR + 1) (5). Interestingly, using the identity Y = tanh(log(OR)/4), the statistic γ′ can be expressed as a function of Y:

    γ′ = arctanh(Y) · √(1 − Y²) / LLC.    (13)

Further, note that Q = tanh(log(OR)/2). The transformation arctanh(·), known as Fisher's variance-stabilizing transformation (6), is expected to improve the rate of asymptotic convergence to the normal distribution; thus, we do not anticipate that the asymptotic test statistics based on Y and Q would be competitive when compared to the statistic based on log(ÔR). Nevertheless, we obtained approximate variances for Ŷ and Q̂ using the first-order Taylor series approximation (the same type of approximation that yields the asymptotic variance for γ̂′) as follows:

    Var(Ŷ) ≈ [(1 − Y²)/4]² · σ̂²/n,    Var(Q̂) ≈ [(1 − Q²)/2]² · σ̂²/n.

Via simulations, we confirmed that the statistic for Ŷ tends to be more conservative and less powerful than Z, while the statistic for Q̂ is anti-conservative and reaches the nominal 5% size only at relatively large sample sizes. However, these results are omitted here, and we focus instead on comparisons of the statistics based on log(ÔR) and γ̂′.

Approximate Bayesian inference

The rationale for using standardized coefficients (e.g., the standardized log odds ratio) as measures of effect size in epidemiologic studies has been questioned, and it has been suggested that standardized coefficients are insufficient summaries of effect size (7, 8). Nonetheless, we argue that standardized effects can be utilized efficiently in the new application developed here: as tools for delivering approximate Bayesian inference. Specifically, we propose to employ standardization as an intermediate step that yields posterior inference for parameters of interest (such as log(OR) or γ′). The key to this approach is the observation that it is often straightforward to obtain an approximate posterior distribution for standardized effects (√n γ̂) using a noncentral density as the likelihood. Once such a standardized posterior distribution is estimated, it can be converted to an approximate posterior distribution for the parameter of interest.

Let λ denote the noncentrality parameter corresponding to a raw effect size b (for instance, b = log(OR) or b = γ′). To obtain an approximate posterior distribution, one needs to specify a prior distribution for the raw measure of effect size, b, as a binned frequency histogram, with a finite mixture of values b_i (the mid-values of bins) and the corresponding probabilities p_i (the heights of bins as percent values). For example, if the effect size is measured by b = log(OR), such a binned frequency histogram may be bell-shaped with a sizable spike around zero, indicating that the majority of risk effects are anticipated to be small. Alternatively, if the effect size is measured by |log(OR)|, the frequency histogram may be L-shaped, with a spike of the mass again at about zero.

Next, we employ an approximation to a fully Bayesian analysis (which would have required a joint prior distribution for both b and σ), and "dress" the raw parameter by plugging in the estimate of the standard deviation, to obtain the noncentrality values λ_i = √n b_i / σ̂. Then, given the observed value of a test statistic Z, the posterior distribution of the standardized effect size will also be a finite mixture, calculated as:

    Pr(λ_i | Z) = p_i f(Z; λ_i) / Σ_j p_j f(Z; λ_j),    (14)

where f(·; λ) is the test statistic density with the noncentrality parameter λ. Once the posterior distribution for the standardized effect size (times √n) is evaluated, one can approximate the posterior distribution for the raw parameter of interest by "undressing" it, i.e., multiplying by the sample standard deviation and scaling by the square root of the sample size. For example,

    Pr(log(OR) = λ_i σ̂/√n | Z) = Pr(λ_i | Z),    (15)

or

    Pr(γ′ = g(λ_i σ̂/√n) | Z) = Pr(λ_i | Z),    (16)

where g(·) denotes the transformation from log(OR) to γ′.

From this approximate posterior distribution, one can then obtain an effect size estimator as the posterior mean by taking a weighted sum (e.g., E[log(OR) | Z] = Σ_i (λ_i σ̂/√n) Pr(λ_i | Z)), construct posterior credible intervals, etc. 'Approximation' here refers to approximating a fully Bayesian modeling: our approach is a compromise between the frequentist and the Bayesian methodologies due to the usage of plug-in frequentist estimates for certain parameters. Although the posterior distribution for the raw effect size is approximate (due to plugging in a point sample estimate of the standard deviation), it is nevertheless highly accurate, as we demonstrate through our simulation experiments.
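The whole procedure of Eqs. (14)-(16) fits in a few lines. The sketch below assumes a standard normal likelihood, f(Z; λ) = N(λ, 1), for a one-sided Z statistic; the spike-and-normal prior, grid, and observed values are all illustrative assumptions of ours.

```python
import numpy as np
from scipy import stats

LLC = 0.6627434193

def gamma_prime(beta):
    """Transformation g(.) from beta = log(OR) to gamma prime."""
    return beta / (4 * LLC * np.cosh(beta / 4))

def approx_posterior(z_obs, n, sigma_hat, b, p):
    """Eq. (14): posterior over binned prior values b_i with probabilities p_i."""
    lam = np.sqrt(n) * b / sigma_hat       # "dress" raw values into noncentralities
    like = stats.norm.pdf(z_obs, loc=lam)  # N(lambda_i, 1) likelihood
    post = p * like
    return post / post.sum()

# Illustrative binned prior for log(OR): 90% spike at zero, 10% N(0, 0.42^2)
b = np.linspace(-4.8, 4.8, 201)
step = b[1] - b[0]
p = np.where(np.isclose(b, 0.0), 0.9, 0.0) + 0.1 * stats.norm.pdf(b, 0, 0.42) * step
p /= p.sum()

post = approx_posterior(z_obs=3.2, n=400, sigma_hat=4.5, b=b, p=p)
print("E[log(OR) | Z] =", (b * post).sum())               # "undress", Eq. (15)
print("E[gamma' | Z]  =", (gamma_prime(b) * post).sum())  # Eq. (16)
```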

Results

Simulations: Frequentist properties

To investigate the statistical properties of the proposed procedures in relation to the traditional Z-test, we now turn to simulation experiments (the details of the simulation setup are provided in "Supplemental Materials (S-2)").

Our simulations are not intended to be comprehensive, and we specifically compared only the two statistics discussed in this report. The comparison is 'apples to apples,' i.e., between two similarly derived Wald test statistics, both based on transformations of the OR as a measure of effect size. The basic model for the OR, without stratification or covariates, follows from a 2×2 contingency table, and we note that the performance of different tests for contingency table associations has been thoroughly investigated before (9, 10).

The Type-I error rates of the two tests, calculated under the null hypothesis of no effect, log(OR) = 0, are shown in Table 1. For a small number of cases, both statistical tests behave conservatively, but the size of the test based on γ̂′ is considerably closer to the nominal level of 0.05. As the number of cases increases, the size of both tests approaches the nominal level. Table 2 shows the statistical power of the two tests for different combinations of log(OR), its variance, and the number of cases. For all combinations of parameters considered, the γ̂′-based test has higher statistical power than the Z-test, particularly for small sample sizes.

We further note that the power of these two-sided tests can be investigated analytically by plugging in the population parameters and considering the ratio of the Z_γ′ and Z values. The sample size and the variance cancel out, and the ratio becomes a function of log(OR) alone. Figure 3 illustrates that for all odds ratio values within the 1/121 < OR < 121 interval, Z_γ′-statistics are at least as large as Z-statistics. Under the null (true log(OR) = 0), the ratio is one, and the two statistics are equivalent.
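Under the delta-method variance in Eq. (12), this ratio works out to 1/(1 − (β/4) tanh(β/4)) with β = log(OR); a quick numerical check of this algebra (the chosen β values are arbitrary):

```python
import numpy as np

def z_ratio(beta):
    """Z_gamma'/Z ratio implied by the delta-method variance, beta = log(OR)."""
    return 1.0 / (1.0 - (beta / 4.0) * np.tanh(beta / 4.0))

for b in (0.0, 0.5, 1.0, 2.0, 4.0):
    # 1.0 at the null; grows with |beta| and is unbounded near beta = 4.7987
    print(b, round(float(z_ratio(b)), 3))
```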

Simulations: Bayesian properties

Averaged across simulations, Table 3 reports (a) the true mean value of the raw parameter, γ′, corresponding to the maximum observed test statistic; (b) the posterior expectation E(γ′ | Z); (c) the average for the frequentist estimator of gamma prime, γ̂′; and (d) the average probability of the highest posterior density interval to contain the true log(OR) value.

On the one hand, Table 3 shows that the posterior expectation is very close to the true average effect size value, even when posterior inference was performed using small sample sizes and extreme selection (i.e., the top-ranking result taken out of one million statistical tests). On the other hand, Table 3 illustrates that the frequentist estimator of γ′ is subject to the winner's curse and grossly exaggerates the true magnitude of the effect size. Finally, after comparing posterior coverage to the corresponding nominal levels, it is clear that the posterior interval's performance is satisfactory.

Real data application

To illustrate a practical implementation of the proposed effect size measure, we calculated γ̂′ for six dietary intake risk factors associated with diabetes: whole grain (11), protein (12), alcohol consumption (13), fruits and berries (14), dietary magnesium (15), and dietary calcium (16). All reported associations used in this application reached nominal 5% statistical significance. Reports that examine individual dietary factors in relation to type 2 diabetes (T2D) are inconclusive (17, 18, 19), and to check the robustness of these results, we converted the reported ORs to γ̂′'s and constructed posterior intervals for our new measure, assuming three different a priori levels of belief that a reported dietary factor is truly a risk factor of T2D. The a priori assumptions were set to (1) optimistic, a 25% chance that the reported association is false; (2) even, a 50% chance that the report is false; and (3) poor, a 75% chance that the report is false. For real associations, we assumed that there is a 5% a priori chance of encountering OR > 2 (or OR < 1/2). This is the same assumption as we made in our simulations (S-2).

Table 4 summarizes our results. The first row of the table reports the robustness of the whole grain intake association with T2D. The initially published frequentist estimate OR = 0.70 corresponds to a negative γ̂′, indicating that greater whole grain intake may reduce the risk of T2D (i.e., negative values of γ̂′ imply a protective effect). The posterior expectation of γ′ is similar to the frequentist estimate for all levels of a priori belief that the effect is genuine. Furthermore, regardless of the level of uncertainty in the protective effect of whole grain, the upper bound of the 95% posterior interval did not cover 0, suggesting that the effect might be real, or at least strong enough statistically to withstand substantial perturbations in prior assumptions.

The second row of Table 4 shows the robustness of the association between protein intake and T2D. The initial OR = 1.16 corresponds to a positive value of γ̂′, suggesting that higher intake of dietary protein is associated with an increased risk of T2D. Posterior estimates of the dietary protein effect magnitude are not as high as the frequentist's for all levels of a priori belief that the effect is genuine, but they maintain posterior 'significance' under the optimistic a priori scenario (a 75% prior chance that the association is real). For the 50-50 or lower a priori chances, we can no longer conclude with confidence that dietary protein intake is associated with an elevated risk of T2D.

The remaining rows of Table 4 report results for the alcohol, fruits and berries, dietary magnesium, and dietary calcium associations with T2D. At all levels of a priori skepticism/optimism regarding the nature of the reported associations, the posterior intervals for the γ′ values cover zero, indicating that these findings do not have strong statistical support.

It is useful to contrast posterior intervals with the frequentist (confidence) intervals, CIs. None of the reported CIs, except for the dietary calcium association, cover zero. Can one modify prior assumptions so that posterior intervals would match CIs? In other words, what are the implicit prior assumptions that govern the width of CIs? It turns out that simply lowering the prior probability that a reported association is false from 25% to 0% is insufficient, and one also needs to assume a substantially flatter distribution for the chances of a true association. Specifically, to match the endpoints of the reported CIs, one needs to assume a 5% a priori chance of encountering an OR as large as 4 for dietary risk factors of T2D, and simultaneously to lower the prior mass placed around zero to about 0.5%. Although such prior assumptions are often described as non-informative or vague, they in fact correspond to a strong but unrealistic belief that observing large OR values is not much less likely than observing small ones.

Discussion

In this article, we propose a new powerful transformation of the odds ratio to measure the magnitude of associations between risk factors and a binary outcome. The proposed test statistic for γ̂′ has competitive power compared to the traditional statistic for testing the null hypothesis that OR = 1. Further, we introduce a simple and efficient approach for obtaining an approximate posterior distribution for γ′ and demonstrate its robustness to selection bias, a feature that aims to improve the reliability of reported findings. Via simulations, we showed that the γ̂′-based test has better control of the Type I error rate than the traditional Z-test for small sample sizes. The power of our method is always at least as good as that of the traditional Z-test under the tested scenarios.

Our new measure is normalized by the Laplace Limit Constant to range between −1 and 1, yet it should not be regarded as an approximate transformation to a correlation coefficient, nor does it behave as a standardized measure of effect size. That said, the usual standardized log(OR) can be approximately related to the standardized slope and to the correlation coefficient R in simple linear regression models. When both the outcome and the predictor are binary, R can be expressed as:

    R = (p1 − p2) √(w(1−w)) / √(p̄(1−p̄)),    where p̄ = w p1 + (1−w) p2.

Standardized coefficients may be used in practice as a "scale-free" measure; however, it has been suggested that their magnitude may not appropriately reflect the relative importance of explanatory variables (7, 8). Since our γ′ is not obtained using a regular standardization technique (i.e., scaling by the standard deviation), we omit arguments for and against the use of standardized coefficients in statistical practice.

As we briefly discussed in the Methods section, another two popular OR transformations with the (−1, 1) range are Yule's Q and Y coefficients. If one were to apply the arctanh(·) transformation to either Q or Y, the result would be equal to log(OR) times a constant. Thus, it should be expected that statistics directly based on Q or Y (without the variance-stabilizing transformation) would not be competitive with the one based on log(ÔR), a result that we confirmed via simulations. Curiously, although γ′ can be expressed as a function of Y (see Eq. (13)), the statistical power of the γ̂′-test is higher than that of the log(ÔR)-based test, in stark contrast to the properties of test statistics directly based on sample values of Yule's coefficients. We further note that the relationship between Yule's coefficients and log(OR) is monotonic for all values of OR, while the relationship between γ′ and log(OR) is monotonic only for log(OR) values varying between about −4.8 and 4.8. Nonetheless, although the range of values admissible for the γ′ transformation is limited to this range, it covers the majority of plausible OR values observed in practice.

We showed here how the standardized logarithm of the odds ratio, γ̂ = log(ÔR)/σ̂, can be utilized as a middle step towards an approximate posterior inference for a raw (non-standardized) parameter of interest. Assuming that the prior distribution for the raw parameter is known precisely, we checked the performance of our method in terms of its resistance to the winner's curse and the robustness of estimation in the presence of multiple testing. Of course, exact knowledge of the prior distribution is improbable; however, it is a very useful assumption to make for the purpose of checking the accuracy of the method's performance in such an ideal scenario. Assuming that the prior is known, proper posterior estimates should not overstate the effect size when the top-ranking associations are selected out of a large number of results. As for practical implementations, although the exact prior distribution may not be known, it is possible for it to be specified realistically. We recognize that the problem of a reasonable prior choice may be challenging, but we also note that this problem is not unique to our method and is ubiquitous within the Bayesian framework. Therefore, in practice, it is important to assess the robustness of posterior estimates to changes in prior parameters. For instance, in our application, we varied the prior probability of encountering a true association and found that the association between whole grain consumption and T2D was quite resilient to increases in the prior probability that the report is false. The associations between alcohol, fruits and berries, dietary magnesium, or dietary calcium consumption and T2D, on the contrary, vanished even under optimistic prior assumptions (a 75% chance) about the frequency of real effects.

Finally, we note that using the proposed methodology, the posterior estimates can be calculated using only commonly reported statistics (such as log(ÔR) and its standard error), without requiring access to individual records. Within the proposed framework, any prior distribution for a parameter that measures effect size can be easily accommodated in the form of a binned frequency table. The discretized nature of the prior does not preclude the usage of continuous distributions, because modern statistical packages have facilities to finely partition continuous density functions, thus enabling one to obtain arbitrarily accurate approximations to continuous priors.

Acknowledgments

Author affiliation: Biostatistics Department, University of Kentucky, Lexington, Kentucky, USA (Olga A. Vsevolozhskaya)
Correspondence: Dmitri V. Zaykin, Senior Investigator at the Biostatistics and Computational Biology Branch, National Institute of Environmental Health Sciences, National Institutes of Health, P.O. Box 12233, Research Triangle Park, NC 27709, USA. Tel.: +1 (919) 541-0096; Fax: +1 (919) 541-4311. Email address: dmitri.zaykin@nih.gov

This research was supported in part by the Intramural Research Program of the NIH, National Institute of Environmental Health Sciences, NIEHS. The authors would like to thank Gabriel Ruiz for his contribution to finding the LLC bound (1) during a Summer of Discovery Internship at NIEHS.

References

  • (1) Vsevolozhskaya O, Ruiz G, Zaykin D. Bayesian prediction intervals for assessing p-value variability in prospective replication studies. Translational Psychiatry 2017;7(12):1271.
  • (2) Colwell P. Solving Kepler's Equation Over Three Centuries. Richmond, VA: Willmann-Bell, 1993.
  • (3) Plummer HCK. An Introductory Treatise on Dynamical Astronomy. Cambridge: University Press, 1960.
  • (4) Lewontin R. The interaction of selection and linkage. I. General considerations; heterotic models. Genetics 1964;49(1):49–67.
  • (5) Yule GU. On the methods of measuring association between two attributes. Journal of the Royal Statistical Society 1912;75(6):579–652.
  • (6) Fisher RA. Frequency distribution of the values of the correlation coefficient in samples from an indefinitely large population. Biometrika 1915;10(4):507–521.
  • (7) Greenland S, Schlesselman J, Criqui M. The fallacy of employing standardized regression coefficients and correlations as measures of effect. American Journal of Epidemiology 1986;125(2):349–350.
  • (8) Greenland S, Maclure M, Schlesselman JJ, et al. Standardized regression coefficients: a further critique and review of some alternatives. Epidemiology 1991:387–392.
  • (9) Larntz K. Small-sample comparisons of exact levels for chi-squared goodness-of-fit statistics. Journal of the American Statistical Association 1978;73(362):253–263.
  • (10) Zaykin DV, Pudovkin A, Weir BS. Correlation-based inference for linkage disequilibrium with multiple alleles. Genetics 2008;180(1):533–545.
  • (11) Fung TT, Hu FB, Pereira MA, et al. Whole-grain intake and the risk of type 2 diabetes: a prospective study in men. The American Journal of Clinical Nutrition 2002;76(3):535–540.
  • (12) Tinker LF, Sarto GE, Howard BV, et al. Biomarker-calibrated dietary energy and protein intake associations with diabetes risk among postmenopausal women from the Women's Health Initiative. The American Journal of Clinical Nutrition 2011;94(6):1600–1606.
  • (13) Cullmann M, Hilding A, Östenson CG. Alcohol consumption and risk of pre-diabetes and type 2 diabetes development in a Swedish population. Diabetic Medicine 2012;29(4):441–452.
  • (14) Montonen J, Järvinen R, Heliövaara M, et al. Food consumption and the incidence of type II diabetes mellitus. European Journal of Clinical Nutrition 2005;59(3):441–448.
  • (15) Hruby A, Meigs JB, O'Donnell CJ, et al. Higher magnesium intake reduces risk of impaired glucose and insulin metabolism and progression from prediabetes to diabetes in middle-aged Americans. Diabetes Care 2014;37(2):419–427.
  • (16) Van Dam RM, Hu FB, Rosenberg L, et al. Dietary calcium and magnesium, major food sources, and risk of type 2 diabetes in US Black women. Diabetes Care 2006;29(10):2238–2243.
  • (17) Hu FB, Manson JE, Stampfer MJ, et al. Diet, lifestyle, and the risk of type 2 diabetes mellitus in women. New England Journal of Medicine 2001;345(11):790–797.
  • (18) Liu S, Serdula M, Janket SJ, et al. A prospective study of fruit and vegetable intake and the risk of type 2 diabetes in women. Diabetes Care 2004;27(12):2993–2996.
  • (19) Song Y, Manson JE, Buring JE, et al. Dietary magnesium intake in relation to plasma insulin levels and risk of type 2 diabetes in women. Diabetes Care 2004;27(1):59–65.
  • (20) Haldane J. The estimation and significance of the logarithm of a ratio of frequencies. Annals of Human Genetics 1956;20(4):309–311.
  • (21) Anscombe FJ. On estimating binomial response relations. Biometrika 1956;43(3/4):461–464.
  • (22) Lawson R. Small sample confidence intervals for the odds ratio. Communications in Statistics - Simulation and Computation 2004;33(4):1095–1113.
  • (23) Agresti A. On logit confidence intervals for the odds ratio with small samples. Biometrics 1999;55(2):597–602.
  • (24) Zaykin DV, Meng Z, Ghosh SK. Interval estimation of genetic susceptibility for retrospective case-control studies. BMC Genetics 2004;5(1):9.

Figure Legends

Figure 1: The Kepler equation: geometric interpretation. Given the knowledge of the area M/2 and the distance e to the origin, solve for the angle E in M = E − e sin(E). (2)
Figure 2: The relationship between log(OR) and γ′. The figure illustrates that under the null hypothesis (log(OR) = 0), the relationship between sample values of log(OR) and γ̂′ is approximately linear, and under the alternative hypothesis (log(OR) ≠ 0) the relationship is monotone in the interval −4.7987 < log(OR) < 4.7987.
Figure 3: The range of Z_γ′/Z ratio values. The red line highlights log(OR) values for which the γ̂′-based Z-statistic considerably exceeds the log(ÔR)-based Z-statistic. The blue rectangle highlights log(OR) values near the null hypothesis, for which the two statistics are similar to one another. Note that for all values of log(OR), the Z-value never exceeds the Z_γ′-value.

Tables

    # of cases    Z       γ̂′
    25            0.0290  0.0508
    50            0.0381  0.0501
    100           0.0432  0.0494
    250           0.0464  0.0491
    500           0.0476  0.0490
    1000          0.0485  0.0490
    5000          0.0497  0.0498
Table 1: The Type-I error rate of the Z-test and the γ̂′-based test, by the number of cases.
Random log(OR) with standard deviation τ:
    # of cases    τ = 0.42        τ = 0.5         τ = 1           τ = 2
                  Z      γ̂′      Z      γ̂′      Z      γ̂′      Z      γ̂′
    25            0.065  0.098   0.080  0.116   0.212  0.263   0.441  0.493
    50            0.121  0.142   0.151  0.174   0.358  0.385   0.602  0.624
    100           0.204  0.217   0.253  0.266   0.503  0.516   0.718  0.726
    250           0.360  0.365   0.423  0.429   0.664  0.668   0.821  0.823
    500           0.490  0.493   0.553  0.556   0.757  0.758   0.873  0.874
    1000          0.613  0.614   0.666  0.667   0.825  0.826   0.909  0.910
    5000          0.814  0.814   0.843  0.843   0.921  0.921   0.960  0.960
Fixed OR:
    # of cases    OR = 1.25       OR = 2          OR = 3          OR = 4
                  Z      γ̂′      Z      γ̂′      Z      γ̂′      Z      γ̂′
    25            0.038  0.064   0.124  0.176   0.276  0.354   0.411  0.499
    50            0.061  0.076   0.267  0.303   0.559  0.602   0.735  0.770
    100           0.091  0.101   0.499  0.521   0.840  0.854   0.938  0.945
    250           0.174  0.180   0.850  0.856   0.985  0.986   0.999  0.999
    500           0.306  0.310   0.971  0.972   0.999  0.999   1      1
    1000          0.532  0.534   0.998  0.998   1      1       1      1
    5000          0.971  0.971   1      1       1      1       1      1
Table 2: Power of the Z-test and the γ̂′-based test by the number of cases, for random log(OR) with standard deviation τ and for fixed values of OR.
    # of tests (k)    # of cases    True γ′    E(γ′ | Z)    γ̂′      Posterior coverage
    10,000            500           0.46       0.46         0.55    94%
                      750           0.47       0.47         0.53    95%
                      1,000         0.48       0.48         0.52    95%
                      1,500         0.48       0.48         0.51    95%
    100,000           500           0.52       0.53         0.62    93%
                      750           0.53       0.54         0.60    95%
                      1,000         0.54       0.55         0.60    95%
                      1,500         0.55       0.55         0.58    95%
    500,000           500           0.56       0.57         0.67    92%
                      750           0.56       0.57         0.64    94%
                      1,000         0.58       0.58         0.63    94%
                      1,500         0.59       0.59         0.63    95%
    1,000,000         500           0.58       0.59         0.69    91%
                      750           0.58       0.59         0.66    93%
                      1,000         0.61       0.61         0.66    95%
                      1,500         0.61       0.61         0.65    95%
Table 3: Average true value of γ′, average posterior estimator, E(γ′ | Z), and average frequentist estimator, γ̂′, for the top-ranking (maximum) statistic observed out of k tests. Averages refer to the mean value taken across simulation experiments.

    Dietary risk factor         OR / γ̂′, 95% CI for γ̂′         Posterior expectation and 95% interval for γ′
                                                                25% false              50% false              75% false
    Whole grain intake (11)     0.70 / −0.13, (−0.21, −0.06)    −0.13 (−0.20, −0.05)   −0.13 (−0.20, −0.05)   −0.12 (−0.21, −0.03)
    Protein intake (12)         1.16 / 0.06, (0.02, 0.09)       0.053 (0.01, 0.10)     0.05 (−0.00, 0.01)     0.04 (−0.00, 0.08)
    Alcohol consumption (13)    1.67 / 0.19, (0.04, 0.34)       0.15 (−0.00, 0.29)     0.13 (−0.00, 0.27)     0.10 (−0.05, 0.26)
    Fruits and berries (14)     0.69 / −0.14, (−0.25, −0.03)    −0.12 (−0.23, 0.00)    −0.10 (−0.21, 0.00)    −0.08 (−0.20, 0.04)
    Dietary magnesium (15)      0.49 / −0.26, (−0.47, −0.03)    −0.17 (−0.34, 0.01)    −0.15 (−0.32, 0.03)    −0.11 (−0.30, 0.09)
    Dietary calcium (16)        0.86 / −0.06, (−0.11, 0.00)     −0.043 (−0.10, 0.01)   −0.03 (−0.09, 0.03)    −0.01 (−0.08, 0.05)

Table 4: Posterior inference and robustness of results to prior assumptions: analysis of dietary risk factors associated with type 2 diabetes. The posterior columns correspond to a priori chances of 25%, 50%, and 75% that the reported association is false.

Supplemental material

S-1 Connection to the Laplace Limit Constant (Continued)

To relate the LLC to the bounds for the standardized log(OR), let x = log(OR)/4. In terms of x, the maximum of the standardized statistic is given by

    γ_max = log(OR) / (4 cosh(log(OR)/4)) = x / cosh(x).    (S-1)

Using basic relations for hyperbolic functions:

    cosh(x) = (e^x + e^(−x))/2,    sinh(x) = (e^x − e^(−x))/2,    d cosh(x)/dx = sinh(x),

we can express γ_max and its first derivative as

    γ_max = 2x / (e^x + e^(−x)),    (S-2)
    dγ_max/dx = (cosh(x) − x sinh(x)) / cosh²(x).    (S-3)

To maximize the standardized log(OR), we need to set dγ_max/dx = 0, which is equivalent to solving coth(x) = x for x. The solution for log(OR) is four times the solution to this equation, which is 1.19967864…. This implies the maximum log(OR) ≈ 4.79871, and by substituting this value into Eq. (11) we obtain γ_max = 1/sinh(1.19967864…) ≈ 0.6627434, the LLC. The solution to KE involves the condition equivalent to Eq. (11): namely, the solution can be expressed as a power series in e, provided that e < 0.6627434… = LLC (3).
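The root and the resulting constants can be reproduced in a few lines (the bracketing interval is an arbitrary choice that contains the root):

```python
import numpy as np
from scipy.optimize import brentq

# Solve coth(x) = x, i.e. cosh(x) - x*sinh(x) = 0, on a bracketing interval.
x_star = brentq(lambda x: np.cosh(x) - x * np.sinh(x), 1.0, 2.0)

beta_max = 4.0 * x_star        # log(OR) at which gamma_max peaks
llc = 1.0 / np.sinh(x_star)    # the Laplace Limit Constant

print(x_star)    # 1.1996786...
print(beta_max)  # 4.7987145...
print(llc)       # 0.6627434...
```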

S-2 Simulation Setup

We used simulations to compare the performance of the asymptotic test based on the Z statistic to the one based on the new statistic, Z_γ′. For each simulation, the true log(OR) was assumed to be either (a) fixed and equal to the same value across simulations, or (b) normally distributed around zero with standard deviation τ. The parameter τ was chosen as log(2)/Φ⁻¹(0.95) ≈ 0.42; that is, we assumed that there is a 5% a priori chance of encountering OR > 2, and a 5% chance of encountering OR < 1/2.

The number of cases was n1 ∈ {25, 50, 100, 250, 500, 1000, 5000}, and the number of controls was generated as a fixed multiple of the number of cases, n0 = [c n1], where [·] is the nearest integer function. The probability of exposure among cases, p1, was drawn from a uniform distribution, and the corresponding probability of exposure among controls was calculated as p2 = p1/[OR(1−p1) + p1]. Two-by-two table cell counts were obtained under binomial sampling, with 0.5 added to each cell count:

    n11 ~ Binomial(n1, p1) + 0.5,    n10 = n1 − (n11 − 0.5) + 0.5,
    n01 ~ Binomial(n0, p2) + 0.5,    n00 = n0 − (n01 − 0.5) + 0.5.

The addition of 0.5 to the cell counts is known as the Haldane-Anscombe correction, commonly used to improve the asymptotic convergence to normality of the test statistic for log(ÔR) (20, 21, 22, 23). It can also be shown that after this correction, the resulting variance estimator, 1/n11 + 1/n10 + 1/n01 + 1/n00, approximates the posterior variance estimator derived assuming Jeffreys' prior, i.e., a Beta(1/2, 1/2) prior distribution for p1 and p2 (24).
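A sketch of this sampling scheme with the correction applied; the particular n1, n0, p1, and OR values below are illustrative assumptions of ours.

```python
import numpy as np

rng = np.random.default_rng(2018)

def p2_from(p1, odds_ratio):
    """Exposure probability among controls implied by p1 and the OR."""
    return p1 / (odds_ratio * (1 - p1) + p1)

def simulate_table(n1, n0, p1, p2):
    """One 2x2 table under binomial sampling, with 0.5 added to each cell."""
    n11 = rng.binomial(n1, p1) + 0.5   # exposed cases
    n01 = rng.binomial(n0, p2) + 0.5   # exposed controls
    n10 = n1 - n11 + 1.0               # unexposed cases
    n00 = n0 - n01 + 1.0               # unexposed controls
    return n11, n10, n01, n00

n11, n10, n01, n00 = simulate_table(250, 375, 0.3, p2_from(0.3, 2.0))
log_or = np.log(n11 * n00 / (n10 * n01))
var_log_or = 1/n11 + 1/n10 + 1/n01 + 1/n00  # variance estimator after correction
print(log_or, var_log_or)
```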

For studying the Bayesian properties, simulated data sets were obtained as described above, but with the following modifications. We sampled log(OR) from a prior mixture distribution, in which log(OR) was equal to zero with probability π0 and, with probability 1 − π0, the values were sampled from a normal distribution with standard deviation τ, truncated and discretized into 100 bins. The truncation parameter was set to 4.8, which corresponds to a maximum OR of about 121. In each simulation run (out of 10,000 in total for each setting), we performed k = 10,000, 100,000, 500,000, or 1,000,000 tests and calculated posterior estimates for the top-ranking result, based on the largest value of the Z-statistic out of the k tests.