False Discovery Rate Control Under Reduced Precision Computation

05/11/2018 ∙ by Hien D. Nguyen, et al.

The mitigation of false positives is an important issue when conducting multiple hypothesis testing. The most popular paradigm for false-positive mitigation in high-dimensional applications is control of the false discovery rate (FDR). Multiple testing data from neuroimaging experiments can be very large, and reduced-precision storage of such data is often required. We present a method for FDR control that is applicable when only p-values or test statistics (with a common and known null distribution) are available, and when those p-values or test statistics are encoded in a reduced-precision format. Our method is based on an empirical-Bayes paradigm in which the probit transformations of the p-values (called z-scores) are modeled as a two-component mixture of normal distributions. Due to the reduced precision of the p-values or test statistics, the usual approach to fitting mixture models may not be feasible. We instead use a binned-data technique, which can be proved to consistently estimate the z-score distribution parameters under mild correlation assumptions, as is often the case in neuroimaging data. A simulation study shows that our methodology is competitive with popular alternatives, especially in the presence of misspecification. We demonstrate the applicability of our methodology in practice via a brain imaging study of mice.
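The pipeline described in the abstract — probit-transforming p-values into z-scores, fitting a two-component normal mixture from bin counts rather than raw values, and reading off a local false discovery rate — can be sketched as follows. This is an illustrative binned EM approximation (responsibilities evaluated at bin midpoints), not the paper's exact estimator; the bin count, iteration count, initial values, and simulated data sizes are all assumptions made for the example.

```python
import numpy as np
from scipy.stats import norm


def binned_em(z, n_bins=100, n_iter=200):
    """Fit a two-component normal mixture to z-scores using only bin counts.

    Illustrative sketch: an EM pass where each bin's responsibility is
    evaluated at its midpoint and weighted by its count, mimicking a
    setting where only coarsely binned (reduced-precision) data survive.
    """
    counts, edges = np.histogram(z, bins=n_bins)
    mids = 0.5 * (edges[:-1] + edges[1:])
    # Initialization (assumed): null component near N(0, 1), a small
    # alternative component shifted to the right.
    pi0, mu0, s0 = 0.9, 0.0, 1.0
    pi1, mu1, s1 = 0.1, 2.0, 1.0
    for _ in range(n_iter):
        f0 = pi0 * norm.pdf(mids, mu0, s0)
        f1 = pi1 * norm.pdf(mids, mu1, s1)
        r = f1 / (f0 + f1 + 1e-300)        # E-step at bin midpoints
        w1 = counts * r                    # effective alternative counts
        w0 = counts * (1.0 - r)            # effective null counts
        pi1 = w1.sum() / counts.sum()
        pi0 = 1.0 - pi1
        mu0 = (w0 * mids).sum() / w0.sum()
        s0 = np.sqrt((w0 * (mids - mu0) ** 2).sum() / w0.sum())
        mu1 = (w1 * mids).sum() / w1.sum()
        s1 = np.sqrt((w1 * (mids - mu1) ** 2).sum() / w1.sum())
    return pi0, mu0, s0, mu1, s1


# Demo on simulated data (hypothetical sizes): 9,000 null p-values, whose
# probit transforms are standard normal, plus 1,000 shifted alternatives.
rng = np.random.default_rng(0)
z_scores = np.concatenate([norm.ppf(rng.uniform(size=9000)),
                           rng.normal(2.5, 1.0, size=1000)])
pi0, mu0, s0, mu1, s1 = binned_em(z_scores)

# Local FDR: posterior probability that a z-score came from the null
# component; thresholding it gives an FDR-controlling decision rule.
f0 = pi0 * norm.pdf(z_scores, mu0, s0)
f1 = (1.0 - pi0) * norm.pdf(z_scores, mu1, s1)
lfdr = f0 / (f0 + f1)
```

Because the fit uses only `counts` and `edges`, the same routine applies when the underlying z-scores are stored at reduced precision: quantized values simply land in (possibly coarser) bins.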





