Surrogate Regret Bounds for Polyhedral Losses

10/26/2021
by   Rafael Frongillo, et al.

Surrogate risk minimization is a ubiquitous paradigm in supervised machine learning, wherein a target problem is solved by minimizing a surrogate loss on a dataset. Surrogate regret bounds, also called excess risk bounds, are a common tool to prove generalization rates for surrogate risk minimization. While surrogate regret bounds have been developed for certain classes of loss functions, such as proper losses, general results are relatively sparse. We provide two general results. The first gives a linear surrogate regret bound for any polyhedral (piecewise-linear and convex) surrogate, meaning that surrogate generalization rates translate directly to target rates. The second shows that for sufficiently non-polyhedral surrogates, the regret bound is a square root, meaning fast surrogate generalization rates translate to slow rates for the target. Together, these results suggest polyhedral surrogates are optimal in many cases.
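To make the two bound shapes concrete, the abstract's claims can be written schematically as follows. This is only an illustrative sketch: the notation (target loss $\ell$, surrogate loss $L$, risks $R_\ell$, $R_L$, Bayes risks $R^*$, and the constant $c$) is assumed here, not taken from the paper.

```latex
% Linear regret transfer for a polyhedral (piecewise-linear, convex)
% surrogate L: target regret is at most a constant times surrogate regret.
R_\ell(f) - R_\ell^* \;\le\; c \,\bigl( R_L(f) - R_L^* \bigr)

% Square-root transfer for a sufficiently non-polyhedral surrogate:
% target regret can only be bounded by the square root of surrogate regret.
R_\ell(f) - R_\ell^* \;\le\; c \,\sqrt{ R_L(f) - R_L^* }
```

The practical difference is in how rates translate: under the linear bound, a surrogate generalization rate of $O(1/n)$ yields a target rate of $O(1/n)$, whereas under the square-root bound the same surrogate rate only yields a target rate of $O(1/\sqrt{n})$.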



Related research

09/14/2010
Calibrated Surrogate Losses for Classification with Label-Dependent Costs
We present surrogate regret bounds for arbitrary surrogate losses in the...

02/11/2018
On the Rates of Convergence from Surrogate Risk Minimizers to the Bayes Optimal Classifier
We study the rates of convergence from empirical surrogate risk minimize...

02/16/2021
Unifying Lower Bounds on Prediction Dimension of Consistent Convex Surrogates
Given a prediction task, understanding when one can and cannot design a ...

06/27/2012
Consistent Multilabel Ranking through Univariate Losses
We consider the problem of rank loss minimization in the setting of mult...

07/17/2019
An Embedding Framework for Consistent Polyhedral Surrogates
We formalize and study the natural approach of designing convex surrogat...

10/15/2020
Minimax Classification with 0-1 Loss and Performance Guarantees
Supervised classification techniques use training samples to find classi...

08/19/2021
Risk Bounds and Calibration for a Smart Predict-then-Optimize Method
The predict-then-optimize framework is fundamental in practical stochast...