The Pessimistic Limits of Margin-based Losses in Semi-supervised Learning

12/28/2016
by Jesse H. Krijthe, et al.

We show that for linear classifiers defined by convex, monotonically decreasing margin-based surrogate losses, it is impossible to construct any semi-supervised approach that can guarantee an improvement over the supervised classifier, as measured by this surrogate loss. For surrogate losses that are not monotonically decreasing, we demonstrate that safe improvements are possible.
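As a concrete illustration of the dichotomy in the abstract (in our own notation, which may differ from the paper's): a linear classifier w is evaluated by the empirical margin-based surrogate risk sketched below. The hinge loss is convex and monotonically decreasing, so it falls under the negative result; the quadratic loss is convex but increases again for margins above 1, so it is not monotonically decreasing and belongs to the class where safe semi-supervised improvement is possible.

% Sketch of the setting; notation assumed for illustration only
\[
  \hat{R}_\phi(w) \;=\; \sum_{i=1}^{n} \phi\bigl(y_i\, w^{\top} x_i\bigr),
  \qquad
  \phi_{\text{hinge}}(m) = \max(0,\, 1 - m),
  \qquad
  \phi_{\text{quad}}(m) = (1 - m)^2 .
\]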

Related research

10/01/2015  Optimal Binary Classifier Aggregation for General Losses
We address the problem of aggregating an ensemble of predictors with kno...

06/27/2012  Minimizing The Misclassification Error Rate Using a Surrogate Convex Loss
We carefully study how well minimizing convex surrogate loss functions, ...

07/13/2017  On Measuring and Quantifying Performance: Error Rates, Surrogate Loss, and an Example in SSL
In various approaches to learning, notably in domain adaptation, active ...

12/24/2015  The Lovász Hinge: A Novel Convex Surrogate for Submodular Losses
Learning with non-modular losses is an important problem when sets of pr...

05/24/2018  Learning Classifiers with Fenchel-Young Losses: Generalized Entropies, Margins, and Algorithms
We study in this paper Fenchel-Young losses, a generic way to construct ...

10/15/2020  Auto Seg-Loss: Searching Metric Surrogates for Semantic Segmentation
We propose a general framework for searching surrogate losses for mainst...

02/10/2020  Supervised Learning: No Loss No Cry
Supervised learning requires the specification of a loss function to min...
