Calibrating sufficiently

05/15/2021
by Dirk Tasche

When probabilistic classifiers are trained and calibrated, the so-called grouping loss component of the calibration loss can easily be overlooked. Grouping loss refers to the gap between observable information and the information actually exploited in the calibration exercise. We investigate the relation between grouping loss and the concept of sufficiency, identifying comonotonicity as a useful criterion for sufficiency. We revisit the probing reduction approach of Langford and Zadrozny (2005) and find that it produces an estimator of probabilistic classifiers that reduces grouping loss. Finally, we discuss Brier curves as tools to support training and 'sufficient' calibration of probabilistic classifiers.
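The abstract's final point, that Brier curves can support training and calibration, can be made concrete with a small numerical illustration: a Brier curve plots the cost-weighted loss obtained when the probabilistic score is thresholded at the cost proportion itself, and the area under that curve equals the Brier score. The sketch below is our own illustrative NumPy code, not code or notation from the paper; the function name brier_curve, the evaluation grid, and the synthetic data are assumptions made purely for the example.

```python
import numpy as np

def brier_curve(y, s, grid=None):
    """Empirical Brier curve: cost-weighted loss as a function of the cost
    proportion c, with the probabilistic score thresholded at t = c.

    With pi1 = P(Y=1), pi0 = 1 - pi1, FPR(c) = P(S >= c | Y=0) and
    FNR(c) = P(S < c | Y=1), one common form of the curve is
        Q(c) = 2 * (c * pi0 * FPR(c) + (1 - c) * pi1 * FNR(c)),
    whose area over c in [0, 1] equals the Brier score of the scores s.
    """
    y = np.asarray(y, dtype=float)
    s = np.asarray(s, dtype=float)
    if grid is None:
        grid = np.linspace(0.0, 1.0, 501)
    pi1 = y.mean()
    pi0 = 1.0 - pi1
    q = np.empty_like(grid)
    for i, c in enumerate(grid):
        pred_pos = s >= c  # Bayes-optimal decision rule for cost proportion c
        fpr = pred_pos[y == 0].mean() if pi0 > 0 else 0.0
        fnr = (~pred_pos[y == 1]).mean() if pi1 > 0 else 0.0
        q[i] = 2.0 * (c * pi0 * fpr + (1.0 - c) * pi1 * fnr)
    return grid, q

# Toy check on synthetic data: the area under the empirical Brier curve
# should approximately equal the Brier score of the scores.
rng = np.random.default_rng(0)
p_true = rng.uniform(size=20_000)              # hypothetical true P(Y=1 | X)
y = rng.binomial(1, p_true)                    # labels drawn from those probabilities
s = np.clip(p_true + rng.normal(0.0, 0.05, p_true.size), 0.0, 1.0)  # noisy scores
grid, q = brier_curve(y, s)
area = np.sum(0.5 * (q[1:] + q[:-1]) * np.diff(grid))  # trapezoidal rule
print(f"area under Brier curve: {area:.4f}")
print(f"Brier score           : {np.mean((s - y) ** 2):.4f}")
```

On data like this the two printed numbers should agree to a few decimal places, which is the property that makes Brier curves a natural visual companion to Brier-score-based training and calibration.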



Related research

Beyond calibration: estimating the grouping loss of modern neural networks (10/28/2022)
Good decision making requires machine-learning models to provide trustwo...

Making Heads and Tails of Models with Marginal Calibration for Sparse Tagsets (09/15/2021)
For interpreting the behavior of a probabilistic model, it is useful to ...

On the Richness of Calibration (02/08/2023)
Probabilistic predictions can be evaluated through comparisons with obse...

Statistical Properties of the Probabilistic Numeric Linear Solver BayesCG (08/08/2022)
We analyse the calibration of BayesCG under the Krylov prior, a probabil...

Beyond temperature scaling: Obtaining well-calibrated multiclass probabilities with Dirichlet calibration (10/28/2019)
Class probabilities predicted by most multiclass classifiers are uncalib...

Estimating Expected Calibration Errors (09/08/2021)
Uncertainty in probabilistic classifiers predictions is a key concern wh...

Optimal FIFO grouping in public transit networks (08/12/2023)
This technical report is about grouping vehicles (so-called trips) in pu...
