The world of research has gone berserk: modeling the consequences of requiring "greater statistical stringency" for scientific publication
In response to growing concern about the reliability and reproducibility of published science, researchers have proposed adopting measures of greater statistical stringency, including suggestions to require larger sample sizes and to lower the much-criticized p<0.05 significance threshold. While the pros and cons are vigorously debated, there has been little to no modeling of how adopting these measures might affect what type of science is published. In this paper, we develop a novel optimality model that, given current incentives to publish, predicts a researcher's most rational use of resources in terms of the number of studies to undertake, the statistical power to devote to each study, and the desirable pre-study odds to pursue. We then develop a methodology that allows one to estimate the reliability of published research by considering a distribution of preferred research strategies. Using this approach, we investigate the merits of adopting measures of "greater statistical stringency" with the goal of informing the ongoing debate.
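To make the trade-off the abstract alludes to concrete, here is a minimal sketch, not the authors' actual model, of how splitting a fixed sample budget across many small studies versus a few large ones affects both the expected number of positive (publishable) results and their reliability (positive predictive value). The effect size, significance level, pre-study odds, and total budget below are illustrative assumptions, and the power calculation uses a simple one-sided z-test approximation.

```python
# Hypothetical sketch (not the paper's model): expected publishable positives vs.
# the reliability (PPV) of those positives when a fixed sample budget is split
# across many small studies or a few large ones.
from statistics import NormalDist

Z = NormalDist()

def power(n, effect=0.2, alpha=0.05):
    """Approximate power of a one-sided z-test with n observations."""
    return 1 - Z.cdf(Z.inv_cdf(1 - alpha) - effect * n ** 0.5)

def evaluate(total_samples, n_per_study, prior_odds=0.25, alpha=0.05):
    """Expected number of positive results and their positive predictive value."""
    k = total_samples // n_per_study          # number of studies undertaken
    pw = power(n_per_study, alpha=alpha)
    p_true = prior_odds / (1 + prior_odds)    # pre-study probability the hypothesis is true
    p_positive = p_true * pw + (1 - p_true) * alpha
    ppv = p_true * pw / p_positive            # reliability of a published positive
    return k * p_positive, ppv

for n in (25, 100, 400):
    pubs, ppv = evaluate(total_samples=4000, n_per_study=n)
    print(f"n per study = {n:3d}: expected positives = {pubs:5.2f}, PPV = {ppv:.2f}")
```

Under these assumed parameters, many underpowered studies yield more expected positives but a lower PPV than a few well-powered ones, which is the kind of incentive structure the paper's optimality model formalizes.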