BELIEF: A distance-based redundancy-proof feature selection method for Big Data

04/16/2018
by Sergio Ramírez-Gallego, et al.

With the advent of the Big Data era, data reduction methods are in high demand, given their ability to simplify huge datasets and ease complex learning processes. In particular, algorithms able to filter the relevant dimensions out of millions of candidates are of great importance. Although effective, these techniques also suffer from the "scalability" curse. In this work, we propose a distributed feature weighting algorithm that can rank millions of features in parallel using large samples. This method, inspired by the well-known RELIEF algorithm, introduces a novel redundancy elimination measure that yields selections similar to those of entropy-based schemes at a much lower cost. It also scales up smoothly when more instances are demanded for feature estimation. Empirical tests show the estimation ability of our method on several huge datasets (both in number of features and of instances), as well as its reduced runtime cost (especially in the redundancy detection step).
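
The abstract does not spell out the BELIEF estimator itself, but the RELIEF scheme it builds on can be illustrated with a minimal single-machine sketch. The code below is an assumption-laden illustration (plain NumPy, numeric features, Manhattan distance, a hypothetical relief_weights helper), not the authors' distributed implementation, and it omits the redundancy elimination step that distinguishes BELIEF.

```python
import numpy as np

def relief_weights(X, y, n_samples=100, seed=0):
    """Minimal RELIEF-style feature weighting (illustrative, not BELIEF itself).

    For each sampled instance, find its nearest neighbour of the same class
    (the "hit") and of a different class (the "miss"); features that differ
    more from the miss than from the hit get their weight increased.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    span = X.max(axis=0) - X.min(axis=0)      # per-feature range for scaling
    span[span == 0] = 1.0                     # avoid division by zero
    weights = np.zeros(d)
    idx = rng.choice(n, size=min(n_samples, n), replace=False)
    for i in idx:
        diffs = np.abs(X - X[i]) / span       # normalised per-feature distances
        dist = diffs.sum(axis=1)              # Manhattan distance to every row
        dist[i] = np.inf                      # never pick the instance itself
        hit = np.where(y == y[i], dist, np.inf).argmin()
        miss = np.where(y != y[i], dist, np.inf).argmin()
        weights += (diffs[miss] - diffs[hit]) / len(idx)
    return weights                            # higher weight = more relevant feature
```

In the paper's setting, this neighbour search and weight update would presumably be distributed over partitions of a large sample, and the proposed distance-based redundancy measure would then discard features that duplicate already highly ranked ones; neither of those steps is attempted in this sketch.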

Related research

10/13/2016 · An Information Theoretic Feature Selection Framework for Big Data under Apache Spark
With the advent of extremely high dimensional datasets, dimensionality r...

10/06/2016 · Parallel Large-Scale Attribute Reduction on Cloud Systems
The rapid growth of emerging information technologies and application pa...

05/17/2022 · Unsupervised Features Ranking via Coalitional Game Theory for Categorical Data
Not all real-world data are labeled, and when labels are not available, ...

01/31/2019 · Distributed Correlation-Based Feature Selection in Spark
CFS (Correlation-Based Feature Selection) is an FS algorithm that has be...

08/23/2017 · Massively-Parallel Feature Selection for Big Data
We present the Parallel, Forward-Backward with Pruning (PFBP) algorithm ...

01/08/2023 · Analogical Relevance Index
Focusing on the most significant features of a dataset is useful both in...

01/12/2018 · How Many Samples Required in Big Data Collection: A Differential Message Importance Measure
Information collection is a fundamental problem in big data, where the s...
