Extending F_1 metric, probabilistic approach

10/21/2022
by Mikolaj Sitarz, et al.

This article explores an extension of the well-known F_1 score used to assess the performance of binary classifiers. We propose a new metric built on a probabilistic interpretation of precision, recall, specificity, and negative predictive value. We describe its properties and compare it to common metrics, then demonstrate its behavior in edge cases of the confusion matrix. Finally, we test the metric's properties on a binary classifier trained on a real dataset.
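Since F_1 is the harmonic mean of precision and recall, one natural reading of the proposed extension is a symmetric, harmonic-mean combination of all four conditional probabilities named above. The Python sketch below illustrates that reading; the function name extended_f1 and the harmonic-mean combination rule are illustrative assumptions, not necessarily the paper's exact definition.

    def extended_f1(tp: int, fp: int, tn: int, fn: int) -> float:
        """Illustrative four-way harmonic mean of precision, recall,
        specificity, and negative predictive value (NPV); an assumed
        reading of the abstract, not the paper's stated formula."""
        precision   = tp / (tp + fp) if tp + fp else 0.0  # P(actual + | predicted +)
        recall      = tp / (tp + fn) if tp + fn else 0.0  # P(predicted + | actual +)
        specificity = tn / (tn + fp) if tn + fp else 0.0  # P(predicted - | actual -)
        npv         = tn / (tn + fn) if tn + fn else 0.0  # P(actual - | predicted -)
        parts = [precision, recall, specificity, npv]
        if any(p == 0.0 for p in parts):
            # The harmonic mean is zero as soon as any component is zero;
            # this is where the confusion-matrix edge cases show up.
            return 0.0
        # Equivalent closed form: 4*tp*tn / (4*tp*tn + (tp + tn)*(fp + fn))
        return len(parts) / sum(1.0 / p for p in parts)

    # All-positive classifier on a balanced set of 100 examples:
    # F_1 = 2*0.5*1.0 / (0.5 + 1.0) ~ 0.67, yet tn = 0 zeroes both
    # specificity and NPV, so the four-way harmonic mean collapses to 0.
    print(extended_f1(tp=50, fp=50, tn=0, fn=0))  # 0.0

Under this illustrative definition the edge-case behavior mentioned in the abstract is easy to see: a degenerate classifier that predicts only the positive class keeps a respectable F_1 but scores 0, because the extended metric also accounts for how the negative class is handled.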



Related research

research · 09/02/2020
A Heaviside Function Approximation for Neural Network Binary Classification
Neural network binary classifiers are often evaluated on metrics like ac...

research · 06/19/2020
Classifier uncertainty: evidence, potential impact, and probabilistic treatment
Classifiers are often tested on relatively small data sets, which should...

research · 09/14/2022
Meta Pattern Concern Score: A Novel Metric for Customizable Evaluation of Multi-classification
Classifiers have been widely implemented in practice, while how to evalu...

research · 04/23/2019
Obtaining binary perfect codes out of tilings
A translation-support metric (TS-metric) is a metric which is translatio...

research · 02/28/2018
Constrained Classification and Ranking via Quantiles
In most machine learning applications, classification accuracy is not th...

research · 11/13/2022
Language Model Classifier Aligns Better with Physician Word Sensitivity than XGBoost on Readmission Prediction
Traditional evaluation metrics for classification in natural language pr...

research · 06/01/2016
On the equivalence between Kolmogorov-Smirnov and ROC curve metrics for binary classification
Binary decisions are very common in artificial intelligence. Applying a ...
