Controlling the False Discovery Rate in Structural Sparsity: Split Knockoffs

03/30/2021
by Yang Cao et al.

Controlling the false discovery rate (FDR) in a variable selection procedure is critical for reproducible discoveries and has been studied extensively in sparse linear models. In many scenarios, however, the sparsity constraint is not imposed directly on the parameters but on a linear transformation of the parameters to be estimated. Examples include total variation, wavelet transforms, the fused LASSO, and trend filtering. In this paper, we propose a data-adaptive FDR control method for this structural sparsity setting, the Split Knockoff method. The proposed scheme relaxes the linear subspace constraint to its neighborhood, a device often known as variable splitting in optimization, which here brings new statistical benefits. It yields orthogonal design and split knockoff matrices that exhibit the desired FDR control empirically in structural sparsity discovery and improve the power of selecting strong features by enhancing the incoherence condition for model selection consistency. However, the split knockoff statistics fail to satisfy exchangeability, a property crucial to provable FDR control in the classical knockoff method. To address this challenge, we introduce an almost-supermartingale construction under a perturbation of exchangeability, which enables us to establish FDR control up to an arbitrarily small inflation that vanishes as the relaxed neighborhood enlarges. Simulation experiments show the effectiveness of split knockoffs, with possible improvements over knockoffs in both FDR control and power. An application to an Alzheimer's disease study with MRI data demonstrates that the split knockoff method can disclose important lesion regions in the brain associated with the disease, as well as connections between neighboring regions of high contrast variation during disease progression.
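To illustrate the final selection step that split knockoffs share with the classical knockoff filter, here is a minimal sketch of the knockoff(+) data-dependent threshold of Barber and Candès (2015): given feature importance statistics W (large positive values suggest true signals, and null statistics are roughly sign-symmetric), the estimated false discovery proportion is bounded at each candidate threshold and the smallest valid threshold is chosen. The statistics below are simulated toy values, not the actual split knockoff statistics from the paper.

```python
import numpy as np

def knockoff_threshold(W, q=0.2, offset=1):
    """Knockoff(+) selection threshold.

    W      : importance statistics (signals tend to be large positive;
             nulls are approximately symmetric around zero)
    q      : target FDR level
    offset : 1 gives the knockoff+ filter (provable FDR control),
             0 gives the plain knockoff filter
    """
    candidates = np.sort(np.abs(W[W != 0]))
    for t in candidates:
        # estimated FDP at threshold t: negatives stand in for false positives
        fdp_hat = (offset + np.sum(W <= -t)) / max(np.sum(W >= t), 1)
        if fdp_hat <= q:
            return t
    return np.inf  # no feasible threshold: select nothing

# toy example: 10 strong "signal" statistics plus 40 symmetric null statistics
rng = np.random.default_rng(0)
W = np.concatenate([rng.normal(5.0, 1.0, 10), rng.normal(0.0, 1.0, 40)])
tau = knockoff_threshold(W, q=0.2)
selected = np.where(W >= tau)[0]
```

In the split knockoff setting, the same thresholding is applied to the split knockoff statistics, and, as the abstract notes, the FDR guarantee then holds up to a small inflation governed by the size of the relaxed neighborhood.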


