But How Does It Work in Theory? Linear SVM with Random Features

09/12/2018
by Anna Gilbert et al.

We prove that, under low-noise assumptions, the support vector machine with N ≪ m random features (RFSVM) can achieve a learning rate faster than O(1/√m) on a training set with m samples when an optimized feature map is used. Our work extends the previous fast-rate analysis of the random features method from the least squares loss to the 0-1 loss. We also show that the reweighted feature selection method, which approximates the optimized feature map, improves the performance of RFSVM in experiments on a synthetic data set.
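As a rough illustration of the setup the abstract describes, the sketch below trains a linear SVM on N ≪ m random Fourier features and then adds a simple feature selection step that keeps the features with the largest learned weights. That selection rule is only a stand-in for the paper's reweighted feature selection (the paper's exact procedure is not given here); the Gaussian kernel, scikit-learn's LinearSVC solver, the synthetic data set, and all hyperparameters (gamma, N, C) are illustrative assumptions, not the authors' choices.

# Minimal RFSVM sketch: random Fourier features + linear SVM, plus a
# simplified reweighted feature selection step. Everything here is an
# illustrative assumption, not the paper's implementation.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Synthetic binary classification data with m training samples.
X, y = make_moons(n_samples=2000, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def random_fourier_features(X, W, b):
    """Map X to N random Fourier features approximating a Gaussian kernel."""
    N = W.shape[1]
    return np.sqrt(2.0 / N) * np.cos(X @ W + b)

# Draw N << m random features; W ~ N(0, 2*gamma*I) targets the Gaussian
# kernel exp(-gamma * ||x - y||^2).
m, d = X_train.shape
N, gamma = 100, 2.0
W = np.sqrt(2.0 * gamma) * rng.standard_normal((d, N))
b = rng.uniform(0.0, 2.0 * np.pi, size=N)

# Stage 1: plain RFSVM, i.e. a linear SVM on the random feature map.
Z_train = random_fourier_features(X_train, W, b)
Z_test = random_fourier_features(X_test, W, b)
svm = LinearSVC(C=1.0, max_iter=10000).fit(Z_train, y_train)
print("RFSVM test accuracy:", svm.score(Z_test, y_test))

# Stage 2: simplified feature selection, keeping the features with the
# largest learned weight magnitudes, then retraining on that subset.
keep = np.argsort(-np.abs(svm.coef_.ravel()))[: N // 4]
svm2 = LinearSVC(C=1.0, max_iter=10000).fit(Z_train[:, keep], y_train)
print("Selected-feature RFSVM test accuracy:",
      svm2.score(Z_test[:, keep], y_test))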


Related research

- Mixed Integer Linear Programming for Feature Selection in Support Vector Machine (08/07/2018)
  This work focuses on support vector machine (SVM) with feature selection...

- Probabilistic Feature Selection and Classification Vector Machine (09/18/2016)
  Sparse Bayesian learning is one of the state-of-the-art machine learnin...

- Sharp Analysis of Random Fourier Features in Classification (09/22/2021)
  We study the theoretical properties of random Fourier features classific...

- Learning rates for classification with Gaussian kernels (02/28/2017)
  This paper aims at refined error analysis for binary classification usin...

- A novel embedded min-max approach for feature selection in nonlinear SVM classification (04/21/2020)
  In recent years, feature selection has become a challenging problem in s...

- A Neurochaos Learning Architecture for Genome Classification (10/12/2020)
  There has been empirical evidence of presence of non-linearity and chaos...

- Weston-Watkins Hinge Loss and Ordered Partitions (06/12/2020)
  Multiclass extensions of the support vector machine (SVM) have been form...
