On the Consistency of AUC Pairwise Optimization

08/03/2012
by Wei Gao, et al.

AUC (area under the ROC curve) is an important evaluation criterion that has been widely used in many learning tasks such as class-imbalance learning, cost-sensitive learning, and learning to rank. Many learning approaches try to optimize AUC, but owing to its non-convexity and discontinuity, almost all of them work with surrogate loss functions instead. The consistency of such surrogate losses with AUC is therefore crucial, yet it has remained almost untouched. In this paper, we provide a sufficient condition for the asymptotic consistency of learning approaches based on surrogate loss functions. Based on this result, we prove that the exponential loss and logistic loss are consistent with AUC, whereas the hinge loss is inconsistent. We then derive the q-norm hinge loss and the general hinge loss, which are consistent with AUC. We also derive consistency bounds for the exponential loss and logistic loss, and obtain consistency bounds for many surrogate loss functions in the noise-free setting. Further, we disclose an equivalence between the exponential surrogate loss for AUC and the exponential surrogate loss for accuracy; one straightforward consequence of this finding is that AdaBoost and RankBoost are equivalent.
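
To make the setting concrete, the following is a minimal NumPy sketch (not taken from the paper; the function names are illustrative) of the standard pairwise formulation the abstract refers to: the empirical AUC counts correctly ranked positive-negative pairs, and a surrogate objective replaces the 0/1 pair indicator 1[f(x+) <= f(x-)] with a convex loss phi(f(x+) - f(x-)), such as the exponential, logistic, or hinge loss discussed above.

```python
import numpy as np

def pairwise_diffs(scores, labels):
    """Score differences f(x+) - f(x-) over all positive/negative pairs."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    return (pos[:, None] - neg[None, :]).ravel()

def empirical_auc(scores, labels):
    """Empirical AUC: fraction of positive/negative pairs ranked correctly
    (ties counted as one half)."""
    d = pairwise_diffs(scores, labels)
    return np.mean((d > 0) + 0.5 * (d == 0))

# Convex pairwise surrogates of the 0/1 ranking loss 1[f(x+) <= f(x-)],
# applied to the score difference d = f(x+) - f(x-):
surrogates = {
    "exponential": lambda d: np.exp(-d),            # consistent with AUC (per the paper)
    "logistic":    lambda d: np.log1p(np.exp(-d)),  # consistent with AUC (per the paper)
    "hinge":       lambda d: np.maximum(0, 1 - d),  # shown inconsistent in the paper
}

def surrogate_risk(scores, labels, phi):
    """Empirical pairwise surrogate risk: average of phi over all pairs."""
    return np.mean(phi(pairwise_diffs(scores, labels)))

# Toy usage on synthetic scores:
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)
scores = labels + rng.normal(scale=1.0, size=200)
print("AUC:", empirical_auc(scores, labels))
for name, phi in surrogates.items():
    print(name, "risk:", surrogate_risk(scores, labels, phi))
```

Minimizing one of these surrogate risks in place of 1 - AUC is exactly the kind of approach whose consistency the paper analyzes.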
