Entropy methods for the confidence assessment of probabilistic classification models

03/28/2021
by Gabriele N. Tornetta et al.

Many classification models produce a probability distribution as the outcome of a prediction. This information is generally compressed down to the single class with the highest associated probability. In this paper, we argue that part of the information discarded in this process can in fact be used to further assess the quality of a model, and in particular the confidence with which each prediction is made. As an application of the ideas presented in this paper, we provide a theoretical explanation of a confidence-degradation phenomenon observed in the complement approach to the (Bernoulli) Naive Bayes generative model.
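To make the idea concrete, the sketch below illustrates one natural entropy-based confidence measure of the kind the abstract alludes to: instead of keeping only the arg-max class, the full predicted distribution is scored by its (normalized) Shannon entropy, so that a sharply peaked prediction counts as confident and a near-uniform one does not. The function names and the normalization to [0, 1] are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def prediction_entropy(probs, eps=1e-12):
    """Shannon entropy (in nats) of each predicted class distribution.

    `probs` has shape (n_samples, n_classes) with rows summing to 1,
    e.g. the output of a classifier's predict_proba method.
    """
    p = np.clip(np.asarray(probs, dtype=float), eps, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def confidence_score(probs):
    """Map entropy to [0, 1]: 1 for a one-hot (maximally confident)
    prediction, 0 for a uniform (maximally uncertain) prediction."""
    n_classes = np.asarray(probs).shape[-1]
    return 1.0 - prediction_entropy(probs) / np.log(n_classes)

# Example: two predictions over three classes.
probs = np.array([
    [0.98, 0.01, 0.01],   # sharply peaked  -> high confidence
    [0.40, 0.35, 0.25],   # spread out      -> low confidence
])
print(confidence_score(probs))  # approx. [0.90, 0.02]
```

Both rows would yield the same arg-max decision strength under a naive max-probability criterion only in the first case; the entropy score separates them, which is the kind of extra information the paper argues should not be discarded.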
