A Novel Approach for Stable Selection of Informative Redundant Features from High Dimensional fMRI Data

06/27/2015
by Yilun Wang, et al.

Feature selection is among the most important components of fMRI-based classification because it not only helps enhance classification accuracy but also, perhaps more importantly, enables potential biomarker discovery. However, traditional multivariate methods are likely to yield unstable and unreliable results when the feature space is extremely high dimensional, the training samples are very limited, and the features are often correlated or redundant. To improve the stability, generalization, and interpretability of the discovered potential biomarkers and to enhance the robustness of the resulting classifier, redundant but informative features also need to be selected. We therefore introduce a novel feature selection method that combines a recent implementation of the stability selection approach with the elastic net. The advantages of our approach, namely better control of false and missed discoveries and better interpretability of the obtained potential biomarkers, are verified in both synthetic and real fMRI experiments. In addition, we are among the first to demonstrate the robustness of feature selection gained by incorporating stability selection, and also among the first to demonstrate the possible non-robustness of the classical univariate two-sample t-test. Specifically, we show the robustness of our feature selection results in the presence of noisy (incorrectly labeled) training data, as well as the robustness of the resulting classifier under data variation, demonstrated on multi-center attention-deficit/hyperactivity disorder (ADHD) fMRI data.
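The core idea described above is to repeatedly subsample the training data, fit an elastic-net-penalized model on each subsample, and retain the features that are selected with high frequency across subsamples. The sketch below is a minimal illustration of that idea, assuming a scikit-learn environment; the half-sample subsampling scheme, the penalty settings (C, l1_ratio), and the 0.6 selection-frequency threshold are illustrative choices, not the authors' exact implementation.

    # Minimal sketch: stability selection with an elastic net penalty.
    # Illustrative settings only; not the paper's exact pipeline.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def stability_selection_enet(X, y, n_subsamples=100, threshold=0.6,
                                 C=1.0, l1_ratio=0.5, random_state=0):
        """Return indices of features whose selection frequency across
        random subsamples exceeds `threshold`, plus the frequencies."""
        rng = np.random.default_rng(random_state)
        n_samples, n_features = X.shape
        counts = np.zeros(n_features)
        for _ in range(n_subsamples):
            # Draw a random half of the training samples (without replacement).
            idx = rng.choice(n_samples, size=n_samples // 2, replace=False)
            # Elastic-net-penalized logistic regression on the subsample.
            clf = LogisticRegression(penalty="elasticnet", solver="saga",
                                     C=C, l1_ratio=l1_ratio, max_iter=5000)
            clf.fit(X[idx], y[idx])
            # Count a feature as selected if its coefficient is nonzero.
            counts += (np.abs(clf.coef_).ravel() > 1e-8)
        selection_freq = counts / n_subsamples
        return np.where(selection_freq >= threshold)[0], selection_freq

For example, with X of shape (n_subjects, n_voxels) and binary labels y, stability_selection_enet(X, y) returns the voxel indices that are selected in at least 60% of the subsamples; these stably selected features can then be passed to a downstream classifier.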
