Whiteout: when do fixed-X knockoffs fail?

06/30/2021
by Xiao Li, et al.

A core strength of knockoff methods is their virtually limitless customizability, allowing an analyst to exploit machine learning algorithms and domain knowledge without threatening the method's robust finite-sample false discovery rate control guarantee. While several previous works have investigated regimes where specific implementations of knockoffs are provably powerful, general negative results are more difficult to obtain for such a flexible method. In this work we recast the fixed-X knockoff filter for the Gaussian linear model as a conditional post-selection inference method. It adds user-generated Gaussian noise to the ordinary least squares estimator β̂ to obtain a "whitened" estimator β̃ with uncorrelated entries, and performs inference using sgn(β̃_j) as the test statistic for H_j: β_j = 0. We prove equivalence between our whitening formulation and the more standard formulation involving negative control predictor variables, showing how the fixed-X knockoffs framework can be used for multiple testing on any problem with (asymptotically) multivariate Gaussian parameter estimates. Relying on this perspective, we obtain the first negative results that universally upper-bound the power of all fixed-X knockoff methods, without regard to choices made by the analyst. Our results show roughly that, if the leading eigenvalues of Var(β̂) are large with dense leading eigenvectors, then there is no way to whiten β̂ without irreparably erasing nearly all of the signal, rendering sgn(β̃_j) too uninformative for accurate inference. We give conditions under which the true positive rate (TPR) for any fixed-X knockoff method must converge to zero even while the TPR of Bonferroni-corrected multiple testing tends to one, and we explore several examples illustrating this phenomenon.
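
To make the whitening step concrete, here is a minimal sketch in Python. It assumes Var(β̂) = Σ is known (as in the Gaussian linear model with known noise level, where Σ = σ²(XᵀX)⁻¹), and uses the simple diagonal target D = λ_max(Σ)·I, which is one valid choice guaranteeing D − Σ is positive semidefinite; it is chosen for illustration and is not the paper's construction. The function name whiten_ols is ours, not from the paper.

```python
import numpy as np

def whiten_ols(beta_hat, Sigma, rng=None):
    """Whiten an OLS estimate by adding user-generated Gaussian noise.

    beta_hat : (p,) OLS estimate with Var(beta_hat) = Sigma, e.g.
               Sigma = sigma**2 * inv(X.T @ X), treated as known here.
    Returns (beta_tilde, D): the whitened estimate and its diagonal
    covariance, so the entries of beta_tilde are uncorrelated.
    """
    rng = np.random.default_rng(rng)
    p = Sigma.shape[0]
    # Illustrative (not optimal) diagonal target: inflate every
    # coordinate's variance to the largest eigenvalue of Sigma, so that
    # D - Sigma is positive semidefinite by construction.
    lam_max = np.linalg.eigvalsh(Sigma)[-1]
    D = (1 + 1e-10) * lam_max * np.eye(p)  # tiny slack for numerical PSD-ness
    omega = rng.multivariate_normal(np.zeros(p), D - Sigma)
    return beta_hat + omega, D

# The signs of the whitened coordinates serve as the test statistics
# for H_j: beta_j = 0 described above:
#   beta_tilde, D = whiten_ols(beta_hat, Sigma, rng=0)
#   signs = np.sign(beta_tilde)
```

This small example also makes the paper's negative result visible: if Σ has a large leading eigenvalue with a dense leading eigenvector, then λ_max (and more generally any valid diagonal D) forces large added noise on many coordinates at once, so the signs sgn(β̃_j) become nearly independent coin flips and carry little information about the signal.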

