Minimizing The Misclassification Error Rate Using a Surrogate Convex Loss

06/27/2012
by Shai Ben-David, et al.

We carefully study how well minimizing a convex surrogate loss function corresponds to minimizing the misclassification error rate for the problem of binary classification with linear predictors. In particular, we show that amongst all convex surrogate losses, the hinge loss gives essentially the best possible bound on the misclassification error rate of the resulting linear predictor in terms of the best possible margin error rate. We also provide lower bounds for specific convex surrogates that show how commonly used losses qualitatively differ from each other.
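The setting the abstract describes can be illustrated with a small sketch (not taken from the paper): minimize the convex hinge surrogate loss max(0, 1 − y⟨w, x⟩) by subgradient descent on a toy dataset, then report the misclassification (0-1) error of the resulting linear predictor. The dataset, learning rate, and iteration count are arbitrary choices for the example.

```python
import numpy as np

# Illustrative sketch: hinge-loss minimization for a linear predictor,
# evaluated by the misclassification (0-1) error rate.
# All data and hyperparameters below are assumptions for the example.

rng = np.random.default_rng(0)

# Toy data: two Gaussian clusters in R^2, labels in {-1, +1}.
n = 200
X = np.vstack([rng.normal(+1.0, 1.0, size=(n, 2)),
               rng.normal(-1.0, 1.0, size=(n, 2))])
y = np.concatenate([np.ones(n), -np.ones(n)])

def hinge_loss(w, X, y):
    """Average hinge loss max(0, 1 - y <w, x>)."""
    return np.mean(np.maximum(0.0, 1.0 - y * (X @ w)))

def zero_one_error(w, X, y):
    """Misclassification rate of the linear predictor sign(<w, x>)."""
    return np.mean(np.sign(X @ w) != y)

# Subgradient descent on the (convex) hinge loss.
w = np.zeros(2)
lr = 0.1
for _ in range(500):
    margins = y * (X @ w)
    active = margins < 1.0                     # points violating the margin
    grad = -(y[active, None] * X[active]).sum(axis=0) / len(y)
    w -= lr * grad

print("hinge loss:", hinge_loss(w, X, y))
print("0-1 error :", zero_one_error(w, X, y))
```

Since the hinge loss upper-bounds the 0-1 loss pointwise, the surrogate value printed above is always at least the misclassification rate; the paper's contribution is to quantify how tight such surrogate-to-error guarantees can be across all convex losses.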


