Why multiple hypothesis test corrections provide poor control of false positives in the real world

08/10/2021
by   Stanley E. Lazic, et al.

Most scientific disciplines use significance testing to draw conclusions from experimental or observational data. This classical approach provides theoretical guarantees for controlling the number of false positives across a set of hypothesis tests, making it an appealing framework for scientists who wish to limit the number of false effects or associations that they claim exist. Unfortunately, these theoretical guarantees apply to few real experiments, and the actual false positive rate (FPR) is often much higher than the theoretical rate. In real experiments, hypotheses are often tested after finding unexpected relationships or patterns, the data are analysed in several ways, analyses may be run repeatedly as data accumulate from new experimental runs, and publicly available data are analysed by many groups. In addition, the freedom scientists have to choose the error rate to control, the collection of tests to include in the adjustment, and the method of correction provides too much flexibility for strong error control. Even worse, methods known to provide poor control of the FPR, such as the Newman-Keuls and Fisher's Least Significant Difference procedures, are popular with researchers. As a result, adjusted p-values are too small, the incorrect conclusion is often reached, and reported results are less reproducible. Here, I show why the FPR is rarely controlled in any meaningful way and argue that a single well-defined FPR does not even exist.
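The flexibility the abstract describes can be made concrete with a small sketch (not from the paper itself): the same set of p-values yields different numbers of "significant" results depending on which correction procedure, and which error rate, a researcher chooses to control. The p-values below are invented for illustration; the two procedures are the standard Bonferroni (familywise error rate) and Benjamini-Hochberg (false discovery rate) corrections, implemented here in plain Python.

```python
def bonferroni(pvals, alpha=0.05):
    """Bonferroni correction: controls the familywise error rate (FWER).

    Rejects H_i when p_i * m <= alpha, where m is the number of tests.
    """
    m = len(pvals)
    return [p * m <= alpha for p in pvals]


def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: controls the false
    discovery rate (FDR).

    Finds the largest rank k with p_(k) <= (k/m) * alpha and rejects
    the k hypotheses with the smallest p-values.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0  # largest rank whose sorted p-value clears the BH threshold
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k = rank
    rejected = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k:
            rejected[i] = True
    return rejected


# Hypothetical p-values from six tests on the same data set.
pvals = [0.001, 0.008, 0.020, 0.035, 0.041, 0.300]

print(sum(bonferroni(pvals)))          # FWER control: 2 rejections
print(sum(benjamini_hochberg(pvals)))  # FDR control: 5 rejections
```

Two versus five "discoveries" from identical data, with both choices defensible in a methods section: this is one reason the paper argues that a single well-defined FPR does not exist in practice.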


