A plug-in approach to maximising precision at the top and recall at the top

04/09/2018
by Dirk Tasche, et al.

For information retrieval and binary classification, we show that precision at the top (or precision at k) and recall at the top (or recall at k) are maximised by thresholding the posterior probability of the positive class. This finding is a consequence of a result on constrained minimisation of the cost-sensitive expected classification error which generalises an earlier related result from the literature.
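The plug-in rule described above can be sketched in a few lines: rank items by an estimate of the posterior probability of the positive class and keep the top k, which is equivalent to thresholding the posterior at its k-th largest value. The function name, the toy scores, and the labels below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def precision_recall_at_k(posteriors, labels, k):
    """Plug-in rule: rank items by the (estimated) posterior probability
    of the positive class and take the top k, i.e. threshold the
    posterior at its k-th largest value."""
    posteriors = np.asarray(posteriors, dtype=float)
    labels = np.asarray(labels, dtype=int)
    top_k = np.argsort(-posteriors)[:k]   # indices of the k highest posteriors
    tp = labels[top_k].sum()              # true positives among the top k
    precision_at_k = tp / k
    recall_at_k = tp / labels.sum()       # recall at the top
    return precision_at_k, recall_at_k

# Toy example with hypothetical posterior estimates:
scores = [0.95, 0.80, 0.60, 0.40, 0.30, 0.10]
y      = [1,    1,    0,    1,    0,    0]
p, r = precision_recall_at_k(scores, y, k=3)  # top 3 contain 2 of the 3 positives
```

On this toy data both precision at 3 and recall at 3 equal 2/3; the paper's result is that no other classifier built from the same posterior can do better at a fixed k.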

