Towards reliable and fair probabilistic predictions: field-aware calibration with neural networks

05/26/2019
by Feiyang Pan, et al.

In machine learning, probabilistic predictions are often observed to disagree with the averaged actual outcomes on certain subsets of the data. This phenomenon, known as miscalibration, is responsible for the unreliability and unfairness of practical machine learning systems. In this paper, we put forward an evaluation metric for calibration, coined the field-level calibration error, which measures the bias of predictions over the input fields that the decision maker is concerned with. We show that existing calibration methods perform poorly under this new metric: after learning a calibration mapping on the validation dataset, they yield only limited improvements in our error metric and completely fail to improve non-calibration metrics such as the AUC score. We propose Neural Calibration, a new calibration method that learns to calibrate by making full use of all input information over the validation set. We test our method on five large-scale real-world datasets. The results show that Neural Calibration significantly improves over uncalibrated predictions on all well-known metrics, including the negative log-likelihood, the Brier score, and the AUC score, as well as the proposed field-level calibration error.
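As a reading aid (this sketch is not taken from the paper; the function name, weighting scheme, and aggregation are assumptions), the field-level idea can be illustrated as follows: pick an input field the decision maker cares about, compare the average predicted probability with the empirical outcome rate within each value of that field, and aggregate the absolute gaps, here weighted by group frequency.

    import numpy as np

    def field_level_calibration_error(y_true, y_prob, field_values):
        # Frequency-weighted mean absolute gap between the average predicted
        # probability and the average observed outcome within each field value.
        # Hypothetical formulation for illustration; see the paper for the
        # exact definition.
        y_true = np.asarray(y_true, dtype=float)
        y_prob = np.asarray(y_prob, dtype=float)
        field_values = np.asarray(field_values)
        n = len(y_true)
        error = 0.0
        for v in np.unique(field_values):
            mask = field_values == v
            gap = abs(y_prob[mask].mean() - y_true[mask].mean())
            error += (mask.sum() / n) * gap  # weight by group frequency
        return error

    # Toy usage: predictions are biased upward for field value "b".
    y = [0, 1, 0, 0, 1, 1]
    p = [0.2, 0.8, 0.9, 0.7, 0.6, 0.9]
    f = ["a", "a", "b", "b", "a", "b"]
    print(field_level_calibration_error(y, p, f))  # ~0.32

A one-dimensional recalibration mapping from the raw score to a probability can leave such per-field gaps untouched, which is consistent with the abstract's observation; Neural Calibration instead fits its calibration mapping on the validation set using all input fields, not only the score.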
