Support Vector Regression: Risk Quadrangle Framework

12/18/2022
by Anton Malandii et al.

This paper investigates Support Vector Regression (SVR) within the fundamental risk quadrangle paradigm. It is shown that both formulations of SVR, ε-SVR and ν-SVR, correspond to the minimization of equivalent regular error measures (the Vapnik error and the superquantile (CVaR) norm, respectively) with a regularization penalty. These error measures, in turn, give rise to corresponding risk quadrangles. By constructing the fundamental risk quadrangle corresponding to SVR, we show that SVR is an asymptotically unbiased estimator of the average of two symmetric conditional quantiles. Furthermore, the technique used to construct the quadrangles serves as a powerful tool for proving the equivalence between ε-SVR and ν-SVR. Additionally, by invoking the Error Shaping Decomposition of Regression, SVR is formulated as a regular deviation minimization problem with a regularization penalty, and the dual formulation of SVR is derived within the risk quadrangle framework.
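For concreteness, the two primal problems referenced above can be written out. The following is a sketch in standard SVR notation (a linear model f(x) = ⟨w, x⟩ + b with hyperparameters C > 0, ε ≥ 0, ν ∈ (0, 1]; this notation is assumed here and may differ from the paper's). ε-SVR minimizes the empirical Vapnik (ε-insensitive) error with a ridge regularization penalty:

\min_{w,\,b} \quad \frac{1}{2}\|w\|^2 \;+\; C \sum_{i=1}^{n} \max\bigl( |y_i - \langle w, x_i \rangle - b| - \varepsilon,\; 0 \bigr),

while ν-SVR promotes ε to a decision variable whose size is traded off via ν:

\min_{w,\,b,\,\varepsilon \ge 0} \quad \frac{1}{2}\|w\|^2 \;+\; C \Bigl( \nu\varepsilon \;+\; \frac{1}{n} \sum_{i=1}^{n} \max\bigl( |y_i - \langle w, x_i \rangle - b| - \varepsilon,\; 0 \bigr) \Bigr).

Minimizing the second objective over ε in closed form leaves ν · CVaR_{1-ν}(|Y - f(X)|) under the empirical distribution, i.e., a multiple of the superquantile (CVaR) norm of the residual, which is one way to see the ε-SVR/ν-SVR equivalence asserted in the abstract; the fitted f is then, per the abstract, an asymptotically unbiased estimator of the symmetric quantile average (q_α(Y|x) + q_{1-α}(Y|x))/2.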

research · 02/12/2019
Learning Theory and Support Vector Machines - a primer
The main goal of statistical learning theory is to provide a fundamental...

research · 09/10/2015
An Epsilon Hierarchical Fuzzy Twin Support Vector Regression
The research presents epsilon hierarchical fuzzy twin support vector reg...

research · 05/12/2017
Iteratively-Reweighted Least-Squares Fitting of Support Vector Machines: A Majorization–Minimization Algorithm Approach
Support vector machines (SVMs) are an important tool in modern data anal...

research · 02/23/2012
Support Vector Regression for Right Censored Data
We develop a unified approach for classification and regression support ...

research · 05/21/2021
A Precise Performance Analysis of Support Vector Regression
In this paper, we study the hard and soft support vector regression tech...

research · 07/07/2018
Approximate Leave-One-Out for Fast Parameter Tuning in High Dimensions
Consider the following class of learning schemes: β̂ := argmin_β ∑_{j=1}^n ℓ(x_j...

research · 10/04/2018
Approximate Leave-One-Out for High-Dimensional Non-Differentiable Learning Problems
Consider the following class of learning schemes: β̂ := argmin_{β ∈ C} ∑_{j=1}^n ℓ(x_...
