Thresholding Classifiers to Maximize F1 Score

02/08/2014
by   Zachary Chase Lipton, et al.

This paper provides new insight into maximizing F1 scores in the context of binary classification and also in the context of multilabel classification. The F1 score, the harmonic mean of precision and recall, is widely used to measure the success of a binary classifier when one class is rare. Micro-averaged, macro-averaged, and per-instance-averaged F1 scores are used in multilabel classification. For any classifier that produces a real-valued output, we derive the relationship between the best achievable F1 score and the decision threshold that achieves this optimum. As a special case, if the classifier outputs are well-calibrated conditional probabilities, then the optimal threshold is half the optimal F1 score. As another special case, if the classifier is completely uninformative, then the optimal behavior is to classify all examples as positive. Since the actual prevalence of positive examples is typically low, this behavior can be considered undesirable. As a case study, we discuss the sometimes surprising results of applying this procedure when predicting 26,853 labels for Medline documents.
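The threshold-sweeping procedure described above can be illustrated with a short sketch (this is an illustration of the general idea, not the paper's own implementation; the function name `best_f1_threshold` and the simulated Beta-distributed scores are assumptions for the demo). For well-calibrated probabilities, the F1-maximizing threshold found by the sweep should land near half the optimal F1 score:

```python
import numpy as np

def best_f1_threshold(scores, labels, grid=None):
    """Sweep candidate thresholds, return (best_threshold, best_f1).

    scores: real-valued classifier outputs; labels: 0/1 ground truth.
    """
    if grid is None:
        grid = np.unique(scores)
    best_t, best_f1 = 0.0, 0.0
    for t in grid:
        pred = scores >= t
        tp = np.sum(pred & (labels == 1))
        fp = np.sum(pred & (labels == 0))
        fn = np.sum(~pred & (labels == 1))
        denom = 2 * tp + fp + fn
        f1 = 2 * tp / denom if denom > 0 else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1

# Demo with well-calibrated probabilities: each label is drawn
# Bernoulli(p) from its own score p, so the scores are calibrated
# by construction, and the positive class is rare (mean p = 0.1).
rng = np.random.default_rng(0)
probs = rng.beta(1, 9, size=100_000)
labels = (rng.random(100_000) < probs).astype(int)
t_star, f1_star = best_f1_threshold(
    probs, labels, grid=np.linspace(0.01, 0.99, 99)
)
# For calibrated scores, t_star should be close to f1_star / 2.
print(t_star, f1_star, f1_star / 2)
```

Note that an uninformative classifier (all scores equal) degenerates to the all-positive prediction, matching the special case discussed in the abstract.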


