Fast SVM-based Feature Elimination Utilizing Data Radius, Hard-Margin, Soft-Margin

10/16/2012
by Yaman Aksu, et al.

Margin maximization in the hard-margin sense, proposed as a feature elimination criterion by the MFE-LO method, is combined here with utilization of the data radius, with the further aim of lowering generalization error: several published bounds and bound-related formulations for misclassification risk (or error) involve the radius, e.g., the product of the squared radius and the squared norm of the weight vector. We additionally propose novel feature elimination criteria that, while operating in the soft-margin sense, can also utilize the data radius. These draw on previously published bound-related formulations that approximate the radius in the soft-margin setting, which emphasized, for example, the principle that "finding a bound whose minima are in a region with small leave-one-out values may be more important than its tightness". Our additional criteria combine radius utilization with a novel, computationally low-cost, soft-margin light classifier retraining approach we devise, named QP1; QP1 is the soft-margin alternative to the hard-margin LO. We correct an error in the MFE-LO description, find that MFE-LO achieves the highest generalization accuracy among the previously published margin-based feature elimination (MFE) methods, discuss some limitations of MFE-LO, and find that our novel methods outperform MFE-LO, attaining a lower test-set classification error rate. Our methods give promising results both on several datasets that have a large number of features and fall into the "large features, few samples" category, and on datasets with a low-to-intermediate number of features. In particular, those of our methods that are tunable, i.e., that do not employ the (non-tunable) LO approach, can in future work be tuned more aggressively than here, potentially demonstrating even higher performance.
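As an illustration of the radius-margin criterion underlying the methods above, the sketch below performs one step of naive backward feature elimination: it removes the feature whose deletion minimizes R^2 * ||w||^2, the product of the squared data radius and the squared weight-vector norm. This is a minimal sketch, not the paper's MFE-LO or QP1 procedure (both of which avoid the full retraining done here); it assumes scikit-learn's LinearSVC with a large C as a hard-margin surrogate and uses the largest distance from the data centroid as a crude stand-in for the enclosing-ball radius. The helper names radius_margin_score and eliminate_one_feature are hypothetical.

import numpy as np
from sklearn.svm import LinearSVC

def radius_margin_score(X, y, C=1e6):
    # R^2 * ||w||^2 for a linear SVM; a large C approximates the hard-margin case.
    svm = LinearSVC(C=C, max_iter=10000).fit(X, y)
    w_sq = float(np.dot(svm.coef_.ravel(), svm.coef_.ravel()))
    center = X.mean(axis=0)  # crude center; a true minimum enclosing ball would be tighter
    R_sq = float(np.max(((X - center) ** 2).sum(axis=1)))
    return R_sq * w_sq

def eliminate_one_feature(X, y):
    # Retrain with each feature left out; drop the one giving the smallest bound value.
    scores = [radius_margin_score(np.delete(X, j, axis=1), y)
              for j in range(X.shape[1])]
    worst = int(np.argmin(scores))
    return worst, np.delete(X, worst, axis=1)

Calling eliminate_one_feature repeatedly yields an elimination order; the per-step cost of retraining once per candidate feature is exactly what the paper's light-retraining approaches (LO for hard-margin, QP1 for soft-margin) are designed to avoid.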
