Feature and Variable Selection in Classification

02/10/2014
by Aaron Karper, et al.

The amount of information in the form of features and variables available to machine learning algorithms is ever increasing. This can lead to classifiers that are prone to overfitting in high dimensions; moreover, high-dimensional models do not lend themselves to interpretable results, and the CPU and memory resources needed to run on high-dimensional datasets severely limit the applicability of such approaches. Variable and feature selection aim to remedy this by finding a subset of features that best captures the information provided. In this paper we present the general methodology and highlight some specific approaches.
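
As one concrete illustration of the kind of feature selection discussed in the abstract, the following is a minimal sketch of a filter-style approach, assuming scikit-learn is available; the dataset, the choice of mutual information as the scoring function, k = 10, and the downstream classifier are illustrative assumptions, not details taken from the paper.

    # Minimal, illustrative sketch (not code from the paper): filter-style
    # feature selection that scores each feature by mutual information with
    # the class label and keeps the top k. Dataset, k=10, and the classifier
    # are arbitrary choices for demonstration.
    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Keep the 10 highest-scoring features, then fit a simple linear classifier.
    model = make_pipeline(
        SelectKBest(score_func=mutual_info_classif, k=10),
        StandardScaler(),
        LogisticRegression(max_iter=1000),
    )
    model.fit(X_train, y_train)
    print("Test accuracy using 10 of 30 features:", model.score(X_test, y_test))

A filter like this scores features independently of the classifier; wrapper and embedded methods instead let the classifier itself guide the search, a trade-off that surveys of the methodology typically discuss.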

Related research

11/25/2022 - Graph Convolutional Network-based Feature Selection for High-dimensional and Low-sample Size Data
Feature selection is a powerful dimension reduction technique which sele...

12/06/2017 - Sparsity Regularization for classification of large dimensional data
Feature selection has evolved to be a very important step in several mac...

03/23/2020 - Large-P Variable Selection in Two-Stage Models
Model selection in the large-P small-N scenario is discussed in the fram...

03/09/2023 - A Lite Fireworks Algorithm with Fractal Dimension Constraint for Feature Selection
As the use of robotics becomes more widespread, the huge amount of visio...

09/04/2017 - Random Subspace with Trees for Feature Selection Under Memory Constraints
Dealing with datasets of very high dimension is a major challenge in mac...

03/25/2014 - Selective Factor Extraction in High Dimensions
This paper studies simultaneous feature selection and extraction in supe...

05/17/2019 - Comparison of Machine Learning Models in Food Authentication Studies
The underlying objective of food authentication studies is to determine ...
