Massively-Parallel Feature Selection for Big Data

08/23/2017
by Ioannis Tsamardinos, et al.

We present the Parallel, Forward-Backward with Pruning (PFBP) algorithm for feature selection (FS) in Big Data settings (high dimensionality and/or sample size). To tackle the challenges of Big Data FS, PFBP partitions the data matrix both in terms of rows (samples, training examples) and columns (features). By employing p-values of conditional independence tests together with meta-analysis techniques, PFBP relies only on computations local to a partition while minimizing communication costs. It then employs powerful and safe (asymptotically sound) heuristics to make early, approximate decisions: Early Dropping of features from consideration in subsequent iterations, Early Stopping of consideration of features within the same iteration, and Early Return of the winner in each iteration. PFBP provides asymptotic guarantees of optimality for data distributions faithfully representable by a causal network (Bayesian network or maximal ancestral graph). Our empirical analysis confirms a super-linear speedup of the algorithm with increasing sample size and linear scalability with respect to the number of features and processing cores, while PFBP dominates other competitive algorithms in its class.
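To make the combination step concrete, here is a minimal sketch of how per-partition p-values from local independence tests can be merged with a classical meta-analysis rule (Fisher's method) and used for an Early-Dropping-style filter. This is an illustrative approximation under simplifying assumptions, not the authors' implementation; the function names `fisher_combine` and `early_drop` and the threshold `alpha` are hypothetical.

```python
import math

def fisher_combine(pvalues):
    """Fisher's method: combine k independent p-values into one.

    Under the null, the statistic -2 * sum(ln p_i) follows a
    chi-square distribution with 2k degrees of freedom. For even
    degrees of freedom the chi-square survival function has the
    closed form exp(-x) * sum_{i<k} x^i / i!, with x = statistic / 2,
    so no external statistics library is needed.
    """
    k = len(pvalues)
    stat = -2.0 * sum(math.log(p) for p in pvalues)
    x = stat / 2.0
    term, total = 1.0, 0.0
    for i in range(k):
        total += term          # accumulate x^i / i!
        term *= x / (i + 1)
    return math.exp(-x) * total

def early_drop(local_pvalues, alpha=0.05):
    """Sketch of an Early-Dropping-style filter (hypothetical helper).

    local_pvalues maps each feature to the list of p-values computed
    independently on each data partition. Features whose combined
    p-value exceeds alpha (i.e., appear independent of the target)
    are dropped from consideration in subsequent iterations.
    """
    combined = {f: fisher_combine(ps) for f, ps in local_pvalues.items()}
    keep = {f for f, p in combined.items() if p <= alpha}
    return keep, combined
```

For example, a feature whose local tests return p-values 0.001 and 0.002 on two partitions yields a tiny combined p-value and is kept, while one returning 0.6 and 0.7 is dropped; only the combined p-values, not the partitions' raw data, need to be communicated.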


