Learning rates for classification with Gaussian kernels

02/28/2017
by Shao-Bo Lin, et al.

This paper presents a refined error analysis for binary classification using support vector machines (SVMs) with Gaussian kernels and convex losses. Our first result shows that, for certain loss functions such as the truncated quadratic loss and the quadratic loss, SVMs with Gaussian kernels can reach almost optimal learning rates, provided the regression function is smooth. Our second result shows that, for a large class of loss functions, under a Tsybakov noise assumption, if the regression function is infinitely smooth, then SVMs with Gaussian kernels can achieve a learning rate of order m^-1, where m is the number of samples.
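As a rough illustration of the setting (not the paper's estimator), the sketch below trains a Gaussian-kernel SVM for binary classification with scikit-learn. Note that scikit-learn's SVC minimizes the hinge loss rather than the truncated quadratic or quadratic losses analyzed in the paper, and the synthetic data are purely illustrative; the kernel width gamma and the regularization parameter C are the tuning quantities whose dependence on the sample size m drives learning rates in analyses of this kind.

# Minimal sketch, assuming scikit-learn's SVC (hinge loss, not the paper's
# convex losses) and synthetic data in place of the paper's abstract setting.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# m = 1000 samples of a synthetic binary classification problem.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Gaussian kernel k(x, x') = exp(-gamma * ||x - x'||^2); gamma and C play the
# role of the bandwidth and regularization parameters tuned in such analyses.
clf = SVC(kernel="rbf", gamma=0.1, C=1.0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))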
