Controlling the False Discovery Rate via knockoffs: is the +1 needed?

04/28/2022
by   Andrew Rajchert, et al.
The control of the FDR in feature selection by Barber and Candès (2015) relies on estimating the false discovery proportion as the number of knockoff wins plus 1, divided by the number of original wins. We show that the +1 is necessary in typical scenarios.
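The estimator in question is the one behind the knockoff+ selection threshold: for each candidate threshold t, the FDP is estimated by (1 + #{j : W_j ≤ −t}) / max(1, #{j : W_j ≥ t}), and the smallest t with estimate at most the target level q is used. A minimal sketch of that rule, assuming knockoff statistics W have already been computed (the toy W values below are illustrative only):

```python
import numpy as np

def knockoff_plus_threshold(W, q):
    """Knockoff+ threshold of Barber & Candes (2015).

    For each candidate t, the FDP is estimated by
        (1 + #{j: W_j <= -t}) / max(1, #{j: W_j >= t});
    the '+1' in the numerator is the correction the abstract refers to.
    Dropping it gives the (non-plus) knockoff procedure.
    """
    candidates = np.sort(np.abs(W[W != 0]))  # candidate thresholds
    for t in candidates:
        fdp_hat = (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t))
        if fdp_hat <= q:
            return t
    return np.inf  # no threshold meets the bound: select nothing

# Toy example: positive W_j suggests the original feature beat its knockoff.
W = np.array([3.2, -0.5, 2.1, 1.7, -1.1, 0.9, 4.0, -0.2])
t = knockoff_plus_threshold(W, q=0.5)
selected = np.where(W >= t)[0]  # indices of selected features
```

Selecting {j : W_j ≥ t} with this +1-adjusted threshold is what yields exact FDR control rather than control of a modified FDR.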


