Heterogeneous Calibration: A post-hoc model-agnostic framework for improved generalization

02/10/2022
by David Durfee, et al.

We introduce the notion of heterogeneous calibration, a post-hoc model-agnostic transformation of model outputs that improves AUC performance on binary classification tasks. We consider overconfident models, whose performance is significantly better on training than on test data, and give intuition as to why they might under-utilize moderately effective simple patterns in the data. We refer to these simple patterns as heterogeneous partitions of the feature space and show theoretically that perfectly calibrating each partition separately optimizes AUC. This yields a general paradigm of heterogeneous calibration as a post-hoc procedure: heterogeneous partitions of the feature space are identified through tree-based algorithms, and post-hoc calibration techniques are applied to each partition to improve AUC. While the theoretical optimality of this framework holds for any model, we focus on deep neural networks (DNNs) and test the simplest instantiation of this paradigm on a variety of open-source datasets. Experiments demonstrate the effectiveness of this framework and the future potential of applying higher-performing partitioning schemes along with more effective calibration techniques.
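The paradigm described above can be sketched in a minimal form. The following is an illustrative sketch, not the authors' implementation: a single hypothetical binary partition stands in for the tree-based partitioner, and isotonic regression (pool-adjacent-violators) stands in for the per-partition calibrator. The construction simulates an "overconfident" model whose scores rank well within each partition but are miscalibrated across partitions, so that calibrating each partition separately restores cross-partition comparability and improves pooled AUC.

```python
import numpy as np


def auc(y, s):
    """Mann-Whitney AUC with tie-averaged ranks:
    P(random positive outscores random negative)."""
    y = np.asarray(y, int)
    s = np.asarray(s, float)
    order = np.argsort(s)
    sorted_s = s[order]
    r = np.arange(1, len(s) + 1, dtype=float)
    i = 0
    while i < len(s):  # average ranks over tied groups
        j = i
        while j + 1 < len(s) and sorted_s[j + 1] == sorted_s[i]:
            j += 1
        r[i:j + 1] = r[i:j + 1].mean()
        i = j + 1
    ranks = np.empty(len(s))
    ranks[order] = r
    n_pos = y.sum()
    n_neg = len(y) - n_pos
    return (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)


def isotonic_fit(s, y):
    """Pool-adjacent-violators: monotone map from scores to calibrated rates."""
    order = np.argsort(s)
    vals, wts = [], []
    for v in y[order].astype(float):
        vals.append(v)
        wts.append(1.0)
        # merge adjacent blocks that violate monotonicity
        while len(vals) > 1 and vals[-2] > vals[-1]:
            v2, w2 = vals.pop(), wts.pop()
            v1, w1 = vals.pop(), wts.pop()
            vals.append((v1 * w1 + v2 * w2) / (w1 + w2))
            wts.append(w1 + w2)
    cal = np.empty(len(s))
    cal[order] = np.repeat(vals, np.array(wts, dtype=int))
    return cal


rng = np.random.default_rng(0)
n = 4000
part = rng.integers(0, 2, n)                # hypothetical partition (one split)
z = rng.normal(size=n)                      # latent signal the model recovers
y = (rng.random(n) < 1 / (1 + np.exp(-2 * z))).astype(int)

# Overconfident model: ranks well within each partition, but partition 1's
# scores are shifted down, distorting the pooled ranking.
raw = 1 / (1 + np.exp(-(2 * z - 3 * part)))

# Heterogeneous calibration: calibrate each partition separately, then pool.
cal = np.empty(n)
for leaf in (0, 1):
    m = part == leaf
    cal[m] = isotonic_fit(raw[m], y[m])

print(f"raw AUC: {auc(y, raw):.3f}, calibrated AUC: {auc(y, cal):.3f}")
```

Within a partition, calibration is a monotone transform and so leaves within-partition AUC essentially unchanged; the gain comes entirely from making scores comparable across partitions, which is the mechanism the abstract's optimality result formalizes.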


