Feature Elimination in Kernel Machines in moderately high dimensions

04/18/2013
by Sayan Dasgupta, et al.

We develop an approach to feature elimination in statistical learning with kernel machines, based on recursive elimination of features. We present theoretical properties of this method and show that it is uniformly consistent in finding the correct feature space under certain generalized assumptions. We present four case studies showing that the assumptions are met in most practical situations, and report simulation results demonstrating the performance of the proposed approach.
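As a rough illustration of the recursive scheme described in the abstract (a minimal sketch, not the authors' exact elimination criterion), one could perform backward elimination with an RBF-kernel SVM, at each step dropping the feature whose removal least degrades cross-validated accuracy. The sketch below assumes scikit-learn; the function name `kernel_rfe` and the accuracy-based criterion are illustrative choices, not taken from the paper.

```python
# Minimal sketch: backward recursive feature elimination with a kernel SVM.
# At each step, drop the feature whose removal hurts CV accuracy the least.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def kernel_rfe(X, y, n_keep, cv=5):
    active = list(range(X.shape[1]))
    while len(active) > n_keep:
        scores = []
        for j in active:
            cols = [k for k in active if k != j]
            score = cross_val_score(SVC(kernel="rbf"), X[:, cols], y, cv=cv).mean()
            scores.append((score, j))
        # Remove the feature whose deletion leaves the highest score.
        _, drop = max(scores)
        active.remove(drop)
    return active

X, y = make_classification(n_samples=200, n_features=10, n_informative=3,
                           n_redundant=0, random_state=0)
print(kernel_rfe(X, y, n_keep=3))
```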


