A new framework for optimal classifier design

05/07/2013
by Matías Di Martino, et al.

The use of alternative measures to evaluate classifier performance is gaining attention, especially for imbalanced problems. However, how to incorporate these measures into the classifier design process remains an open problem. In this work we propose a classifier designed specifically to optimize one of these alternative measures, the so-called F-measure. The technique is nevertheless general and can be used to optimize other evaluation measures. An algorithm to train the novel classifier is proposed, and the numerical scheme is tested on several databases, showing the optimality and robustness of the presented classifier.
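For reference, the F-measure mentioned in the abstract is the standard harmonic combination of precision and recall. The sketch below computes the general F-beta score from confusion-matrix counts; it is a generic illustration of the measure being optimized, not the paper's training algorithm, and the function name and example counts are ours.

# Minimal sketch of the F-measure (standard definition); the paper's
# exact training objective may differ in how this is optimized.
def f_measure(tp, fp, fn, beta=1.0):
    """F_beta = (1 + beta^2) * P * R / (beta^2 * P + R)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    denom = beta**2 * precision + recall
    return (1 + beta**2) * precision * recall / denom if denom else 0.0

# Example on an imbalanced problem: 80 true positives, 20 false positives,
# 40 false negatives gives P = 0.8, R = 0.667 and F1 of about 0.727.
print(f_measure(80, 20, 40))

Because F-beta is not decomposable into a sum of per-sample losses, maximizing it directly is what makes the classifier design problem addressed here non-trivial.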

Related research

04/10/2015 - Performance measures for classification systems with rejection
Classifiers with rejection are essential in real-world applications wher...

02/28/2021 - A Minimax Probability Machine for Non-Decomposable Performance Measures
Imbalanced classification tasks are widespread in many real-world applic...

12/04/2010 - Efficient Optimization of Performance Measures by Classifier Adaptation
In practical applications, machine learning algorithms are often needed ...

12/24/2015 - Measuring pattern retention in anonymized data -- where one measure is not enough
In this paper, we explore how modifying data to preserve privacy affects...

11/19/2017 - An Improved Oscillating-Error Classifier with Branching
This paper extends the earlier work on an oscillating error correction t...

04/15/2016 - Delta divergence: A novel decision cognizant measure of classifier incongruence
Disagreement between two classifiers regarding the class membership of a...

06/08/2019 - Lift Up and Act! Classifier Performance in Resource-Constrained Applications
Classification tasks are common across many fields and applications wher...
