
Refinement revisited with connections to Bayes error, conditional entropy and calibrated classifiers

by Hamed Masnadi-Shirazi, et al.
Shiraz University

The concept of refinement from probability elicitation is considered for proper scoring rules. Taking direction from the axioms of probability, refinement is clarified using a Hilbert space interpretation and reformulated in the underlying data distribution setting, where connections to maximal marginal diversity and conditional entropy are established and used to derive measures that provide arbitrarily tight bounds on the Bayes error. Refinement is also reformulated in the classifier output setting, where its connections to calibrated classifiers and proper margin losses are established.
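The abstract relates conditional entropy to bounds on the Bayes error. As a minimal numerical illustration of that connection (not the paper's own refinement-based measures), the classical Hellman-Raviv upper bound and the binary Fano lower bound can be checked on a small, hypothetical joint distribution:

```python
import math

def binary_entropy(p):
    """Shannon entropy (in bits) of a Bernoulli(p) variable."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Hypothetical joint distribution p(x, y) with x in {0, 1, 2}, y in {0, 1}
joint = {(0, 0): 0.30, (0, 1): 0.10,
         (1, 0): 0.05, (1, 1): 0.25,
         (2, 0): 0.15, (2, 1): 0.15}

xs = {x for x, _ in joint}
p_x = {x: joint[(x, 0)] + joint[(x, 1)] for x in xs}

# Bayes error: probability that the MAP decision rule is wrong,
# i.e. the expected non-maximal posterior mass.
bayes_error = sum(min(joint[(x, 0)], joint[(x, 1)]) for x in xs)

# Conditional entropy H(Y|X) in bits
cond_entropy = sum(p_x[x] * binary_entropy(joint[(x, 1)] / p_x[x])
                   for x in xs)

# Hellman-Raviv upper bound: E <= (1/2) H(Y|X)
print(bayes_error <= 0.5 * cond_entropy)
# Binary Fano inequality: H_b(E) >= H(Y|X), an implicit lower bound on E
print(binary_entropy(bayes_error) >= cond_entropy)
```

These classical bounds are generally loose; the point of the measures derived in the paper is that the bounds can be made arbitrarily tight.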

