Threshold Choice Methods: the Missing Link

12/12/2011
by Jose Hernandez-Orallo, et al.

Many performance metrics have been introduced for the evaluation of classification performance, with different origins and niches of application: accuracy, macro-accuracy, area under the ROC curve, the ROC convex hull, the absolute error, and the Brier score (with its decomposition into refinement and calibration). One way of understanding the relation among some of these metrics is the use of variable operating conditions (either in the form of misclassification costs or class proportions). Thus, a metric may correspond to some expected loss over a range of operating conditions. One dimension for the analysis has been precisely the distribution we take for this range of operating conditions, leading to some important connections in the area of proper scoring rules. However, we show that there is another dimension which has not received attention in the analysis of performance metrics. This new dimension is given by the decision rule, which is typically implemented as a threshold choice method when using scoring models. In this paper, we explore many old and new threshold choice methods: fixed, score-uniform, score-driven, rate-driven and optimal, among others. By calculating the loss of these methods for a uniform range of operating conditions we get the 0-1 loss, the absolute error, the Brier score (mean squared error), the AUC and the refinement loss respectively. This provides a comprehensive view of performance metrics as well as a systematic approach to loss minimisation, namely: take a model, apply several threshold choice methods consistent with the information which is (and will be) available about the operating condition, and compare their expected losses. In order to assist in this procedure we also derive several connections between the aforementioned performance metrics, and we highlight the role of calibration in choosing the threshold choice method.
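As a rough numerical illustration of the correspondence described above, the sketch below checks three of the stated identities on synthetic scores: averaging the cost-weighted loss over a uniform range of cost proportions, a fixed threshold at 0.5 recovers the 0-1 loss, a score-uniform threshold recovers the absolute error, and the score-driven threshold recovers the Brier score. The cost scaling (c_FN = 2(1-c), c_FP = 2c), the decision rule (predict positive when the score reaches the threshold), and all variable and function names are assumptions made for this illustration and may differ from the paper's exact conventions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (assumed for the illustration): labels y in {0, 1} and scores s in
# [0, 1] interpreted as estimated probabilities of the positive class.
n = 2000
y = rng.integers(0, 2, size=n)
s = np.clip(rng.normal(loc=0.3 + 0.4 * y, scale=0.2), 0.0, 1.0)

def cost_loss(y, s, t, c):
    """Cost-weighted loss at threshold t under cost proportion c.

    Costs are scaled so that c_FN = 2*(1 - c) and c_FP = 2*c, which makes the
    loss at c = 1/2 equal to the ordinary error rate. Positives are predicted
    whenever the score reaches the threshold.
    """
    pred_pos = s >= t
    fn = (y == 1) & ~pred_pos
    fp = (y == 0) & pred_pos
    return np.mean(2 * (1 - c) * fn + 2 * c * fp)

cs = np.linspace(0.0, 1.0, 1001)   # uniform grid of operating conditions
ts = np.linspace(0.0, 1.0, 201)    # uniform grid of thresholds

# Fixed threshold at 0.5, averaged over cost proportions -> 0-1 loss.
loss_fixed = np.mean([cost_loss(y, s, 0.5, c) for c in cs])

# Score-uniform: threshold drawn uniformly, independently of c -> absolute error.
loss_unif = np.mean([cost_loss(y, s, t, c) for c in cs[::5] for t in ts])

# Score-driven: threshold set equal to the cost proportion -> Brier score.
loss_sd = np.mean([cost_loss(y, s, c, c) for c in cs])

print(f"fixed(0.5)    {loss_fixed:.4f}  vs 0-1 loss  {np.mean((s >= 0.5) != y):.4f}")
print(f"score-uniform {loss_unif:.4f}  vs MAE       {np.mean(np.abs(y - s)):.4f}")
print(f"score-driven  {loss_sd:.4f}  vs Brier     {np.mean((y - s) ** 2):.4f}")
```

The same kind of check could in principle be extended to the rate-driven and optimal threshold choice methods, whose uniform-condition expected losses the paper relates to AUC and to refinement loss; those relations involve additional terms and are not reproduced here.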
