Large-scale Online Feature Selection for Ultra-high Dimensional Sparse Data

09/27/2014
by Yue Wu, et al.

Feature selection with large-scale high-dimensional data is important yet challenging in machine learning and data mining. Online feature selection is a promising paradigm that is more efficient and scalable than batch feature selection methods, but existing online approaches typically fall short of batch approaches in efficacy. In this paper, we present a novel second-order online feature selection scheme that is simple yet effective, very fast, and extremely scalable for large-scale, ultra-high-dimensional sparse data streams. The basic idea is to improve existing first-order online feature selection methods by exploiting second-order information to choose the subset of important features with high confidence weights. However, unlike many second-order learning methods, which often incur high extra computational cost, we devise a MaxHeap-based algorithm for second-order online feature selection that is not only more effective than existing first-order approaches but also significantly more efficient and scalable for large-scale feature selection with ultra-high-dimensional sparse data, as validated by our extensive experiments. Impressively, on a billion-scale synthetic dataset (1 billion dimensions, 1 billion nonzero features, and 1 million samples), our new algorithm took only 8 minutes on a single PC, orders of magnitude faster than traditional batch approaches. <http://arxiv.org/abs/1409.7794>
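To illustrate the general idea, here is a minimal sketch (not the paper's exact algorithm) of second-order online feature selection on sparse data: per-feature weights and variances are updated with a simplified diagonal confidence-weighted (AROW-style) rule, and after each update a heap-based selection keeps only the budgeted number of features with the smallest variance, i.e., the highest-confidence weights. The function name, the regularizer `r`, and the exact update formulas are illustrative assumptions, not the published method.

```python
import heapq

def second_order_ofs_sketch(stream, dim_budget=3, r=1.0):
    """Hedged sketch of second-order online feature selection.

    stream yields (x, y) pairs, where x is a sparse dict {index: value}
    and y is a label in {-1, +1}. We maintain a weight w[i] and a
    variance sigma[i] per seen feature (diagonal second-order info).
    After each update, a heap (heapq.nsmallest) keeps only the
    dim_budget features with the smallest variance, avoiding a full
    sort over all nonzero weights.
    """
    w, sigma = {}, {}
    for x, y in stream:
        margin = y * sum(w.get(i, 0.0) * v for i, v in x.items())
        if margin < 1.0:  # suffered hinge loss: update this example
            for i, v in x.items():
                s = sigma.setdefault(i, 1.0)
                beta = 1.0 / (s * v * v + r)        # simplified step size
                w[i] = w.get(i, 0.0) + beta * s * y * v   # mean update
                sigma[i] = s - beta * s * s * v * v       # shrink variance
            # Truncation: retain only the most confident weights.
            if len(w) > dim_budget:
                keep = heapq.nsmallest(dim_budget, w, key=lambda i: sigma[i])
                w = {i: w[i] for i in keep}
    return w

# Tiny usage example on a toy sparse stream.
toy_stream = [
    ({0: 1.0, 1: 1.0}, +1),
    ({2: 1.0, 3: 1.0}, -1),
    ({0: 1.0, 4: 1.0}, +1),
]
selected = second_order_ofs_sketch(toy_stream, dim_budget=3)
```

Because each example touches only its nonzero features and the truncation is a size-`dim_budget` heap selection over the current nonzero weights, the per-example cost stays proportional to the sparsity rather than the full dimensionality, which is the property that makes this style of method viable at billion-scale dimensions.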


Related research

- 10/26/2020: BEAR: Sketching BFGS Algorithm for Ultra-High Dimensional Feature Selection in Sublinear Memory
  We consider feature selection for applications in machine learning where...
- 03/30/2018: Online Regression with Model Selection
  Online learning algorithms have a wide variety of applications in large ...
- 06/11/2020: The Backbone Method for Ultra-High Dimensional Sparse Machine Learning
  We present the backbone method, a generic framework that enables sparse ...
- 04/02/2020: IVFS: Simple and Efficient Feature Selection for High Dimensional Topology Preservation
  Feature selection is an important tool to deal with high dimensional dat...
- 06/12/2018: MISSION: Ultra Large-Scale Feature Selection using Count-Sketches
  Feature selection is an important challenge in machine learning. It play...
- 08/21/2022: Scalable mRMR feature selection to handle high dimensional datasets: Vertical partitioning based Iterative MapReduce framework
  While building machine learning models, Feature selection (FS) stands ou...
- 11/02/2021: Distributed Sparse Feature Selection in Communication-Restricted Networks
  This paper aims to propose and theoretically analyze a new distributed s...
