Feature Selection with the R Package MXM: Discovering Statistically-Equivalent Feature Subsets

11/10/2016
by Vincenzo Lagani, et al.

The statistically equivalent signature (SES) algorithm is a method for feature selection inspired by the principles of constraint-based learning of Bayesian networks. Most currently available feature-selection methods return only a single subset of features, supposedly the one with the highest predictive power. We argue that in several domains multiple subsets can achieve close to maximal predictive accuracy, and that arbitrarily providing only one has several drawbacks. The SES method attempts to identify multiple, predictive feature subsets whose performances are statistically equivalent. In that respect, SES subsumes and extends previous feature-selection algorithms, such as the Max-Min Parents and Children (MMPC) algorithm. SES is implemented in a homonymous function included in the R package MXM, standing for 'Mens eX Machina', Latin for 'mind from the machine'. The MXM implementation of SES handles several data-analysis tasks, namely classification, regression, and survival analysis. In this paper we present the SES algorithm and its implementation, and provide examples of using the SES function in R. Furthermore, we analyze three publicly available data sets to illustrate the equivalence of the signatures retrieved by SES and to contrast SES against the state-of-the-art feature-selection method LASSO. Our results provide initial evidence that the two methods perform comparably well in terms of predictive accuracy, and that multiple, equally predictive signatures are actually present in real-world data.
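As a minimal sketch of how the SES function described above might be invoked: the argument names (`max_k`, `threshold`, `test`) and the result slots follow the MXM documentation, while the simulated data and the specific parameter values are illustrative assumptions, not taken from the paper.

```r
# Illustrative sketch: running SES on simulated data.
# Assumes the MXM package is installed, e.g. via install.packages("MXM").
library(MXM)

set.seed(1)
dataset <- matrix(rnorm(100 * 50), nrow = 100, ncol = 50)  # 100 samples, 50 candidate features
target  <- rnorm(100)                                      # continuous outcome -> regression task

# max_k: maximum conditioning-set size for the independence tests;
# threshold: significance level of those tests;
# test: Fisher's conditional-independence test, suitable for continuous
#   data (other tests cover classification and survival tasks).
ses_fit <- SES(target, dataset, max_k = 3, threshold = 0.05,
               test = "testIndFisher")

ses_fit@selectedVars   # one selected feature signature
ses_fit@signatures     # all statistically equivalent signatures found
```

Each row of `ses_fit@signatures` is one equally predictive feature subset, which is the practical payoff of SES over single-signature methods such as LASSO.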

