Statistical Inference for Diagnostic Test Accuracy Studies with Multiple Comparisons

05/27/2021
by Max Westphal, et al.

Diagnostic accuracy studies assess the sensitivity and specificity of a new index test in relation to an established comparator or the reference standard. The development and selection of the index test are usually assumed to be conducted prior to the accuracy study. In practice, this assumption is often violated, for instance when the choice of the (apparently) best biomarker, model, or cutpoint is based on the same data that is later used for validation. In this work, we investigate several multiple comparison procedures that provide family-wise error rate control for the resulting multiple testing problem. Due to the nature of the co-primary hypothesis problem, conventional approaches to multiplicity adjustment are too conservative for this specific problem and thus need to be adapted. In an extensive simulation study, five multiple comparison procedures are compared with regard to statistical error rates in least-favorable and realistic scenarios. These cover parametric and nonparametric methods as well as one Bayesian approach. All methods have been implemented in the new open-source R package DTAmc, which allows all simulation results to be reproduced. Based on our numerical results, we conclude that the parametric approaches (maxT, Bonferroni) are easy to apply but can have inflated type I error rates for small sample sizes. The two investigated bootstrap procedures, in particular the so-called pairs bootstrap, allow for family-wise error rate control in finite samples and, in addition, have competitive statistical power.
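To illustrate the co-primary structure behind the adapted adjustment, the sketch below (base R, not the DTAmc interface) applies a Bonferroni correction across candidate models only: since sensitivity and specificity are co-primary endpoints, a candidate is only declared successful if both one-sided tests reject, so by the intersection-union principle each test within a candidate may use the full adjusted level. The function name, counts, and benchmark values se0, sp0 are hypothetical.

```r
## Minimal sketch, assuming exact one-sided binomial tests per endpoint.
## For each of m candidates: tp true positives among n1 diseased subjects,
## tn true negatives among n0 healthy subjects.
test_candidates <- function(tp, n1, tn, n0, se0, sp0, alpha = 0.05) {
  m <- length(tp)
  ## Bonferroni across candidates only; both co-primary tests within a
  ## candidate are carried out at the full adjusted level (intersection-union).
  alpha_adj <- alpha / m
  p_se <- pbinom(tp - 1, n1, se0, lower.tail = FALSE)  # H0: Se <= se0
  p_sp <- pbinom(tn - 1, n0, sp0, lower.tail = FALSE)  # H0: Sp <= sp0
  data.frame(
    candidate = seq_len(m),
    se_hat    = tp / n1,
    sp_hat    = tn / n0,
    p_se      = p_se,
    p_sp      = p_sp,
    reject    = (p_se < alpha_adj) & (p_sp < alpha_adj)  # both must reject
  )
}

## Hypothetical example: three candidate cutpoints validated on 100 diseased
## and 100 healthy subjects, with benchmarks se0 = sp0 = 0.75.
test_candidates(tp = c(85, 88, 82), n1 = 100,
                tn = c(80, 78, 86), n0 = 100,
                se0 = 0.75, sp0 = 0.75)
```

This sketch covers only the simple Bonferroni variant mentioned in the abstract; the maxT, bootstrap, and Bayesian procedures compared in the paper additionally exploit the joint distribution of the candidate estimates and are provided by the DTAmc package itself.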
