Surrogate Regret Bounds for Polyhedral Losses

by Rafael Frongillo et al.

Surrogate risk minimization is a ubiquitous paradigm in supervised machine learning, wherein a target problem is solved by minimizing a surrogate loss on a dataset. Surrogate regret bounds, also called excess risk bounds, are a common tool to prove generalization rates for surrogate risk minimization. While surrogate regret bounds have been developed for certain classes of loss functions, such as proper losses, general results are relatively scarce. We provide two general results. The first gives a linear surrogate regret bound for any polyhedral (piecewise-linear and convex) surrogate, meaning that surrogate generalization rates translate directly to target rates. The second shows that for sufficiently non-polyhedral surrogates, the regret bound is a square root, meaning fast surrogate generalization rates translate to slow rates for the target. Together, these results suggest polyhedral surrogates are optimal in many cases.







Related papers:
- Calibrated Surrogate Losses for Classification with Label-Dependent Costs
- On the Rates of Convergence from Surrogate Risk Minimizers to the Bayes Optimal Classifier
- Unifying Lower Bounds on Prediction Dimension of Consistent Convex Surrogates
- Consistent Multilabel Ranking through Univariate Losses
- An Embedding Framework for Consistent Polyhedral Surrogates
- Minimax Classification with 0-1 Loss and Performance Guarantees
- Risk Bounds and Calibration for a Smart Predict-then-Optimize Method