Don't Just Blame Over-parametrization for Over-confidence: Theoretical Analysis of Calibration in Binary Classification

02/15/2021
by Yu Bai, et al.

Modern machine learning models with high accuracy are often miscalibrated – the predicted top probability does not reflect the actual accuracy, and tends to be over-confident. It is commonly believed that such over-confidence is mainly due to over-parametrization, in particular when the model is large enough to memorize the training data and maximize the confidence. In this paper, we show theoretically that over-parametrization is not the only reason for over-confidence. We prove that logistic regression is inherently over-confident, in the realizable, under-parametrized setting where the data is generated from the logistic model, and the sample size is much larger than the number of parameters. Further, this over-confidence happens for general well-specified binary classification problems as long as the activation is symmetric and concave on the positive part. Perhaps surprisingly, we also show that over-confidence is not always the case – there exists another activation function (and a suitable loss function) under which the learned classifier is under-confident at some probability values. Overall, our theory provides a precise characterization of calibration in realizable binary classification, which we verify on simulations and real data experiments.
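To make the setting of the abstract concrete, below is a minimal simulation sketch (not the authors' code): data are drawn from a well-specified logistic model with sample size much larger than the dimension, an (almost) unregularized logistic regression is fit with scikit-learn, and the predicted top-class probability is compared against empirical accuracy within confidence bins. The dimensions, the ground-truth parameter, and the binning scheme are illustrative assumptions, not values taken from the paper.

```python
# Minimal calibration sketch for well-specified, under-parametrized logistic regression.
# Assumed/illustrative choices: n, d, the scale of w_star, and the confidence bins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 2000, 50                              # under-parametrized: n much larger than d
w_star = rng.normal(size=d)
w_star *= 2.0 / np.linalg.norm(w_star)       # fix the signal strength (arbitrary choice)

def sample(n):
    """Draw (X, y) from the logistic model y ~ Bernoulli(sigmoid(X @ w_star))."""
    X = rng.normal(size=(n, d))
    p = 1.0 / (1.0 + np.exp(-X @ w_star))
    y = (rng.uniform(size=n) < p).astype(int)
    return X, y

X_train, y_train = sample(n)
X_test, y_test = sample(n)

# Large C approximates unregularized maximum-likelihood logistic regression.
clf = LogisticRegression(C=1e6, fit_intercept=False, max_iter=1000).fit(X_train, y_train)

prob = clf.predict_proba(X_test)[:, 1]
conf = np.maximum(prob, 1.0 - prob)          # predicted top-class probability (confidence)
correct = ((prob > 0.5).astype(int) == y_test)

# Bin by confidence and compare mean confidence with accuracy in each bin;
# over-confidence corresponds to mean confidence exceeding accuracy.
bins = np.linspace(0.5, 1.0, 11)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (conf >= lo) & (conf < hi)
    if mask.any():
        print(f"conf in [{lo:.2f}, {hi:.2f}): "
              f"mean conf = {conf[mask].mean():.3f}, accuracy = {correct[mask].mean():.3f}")
```

One can average this comparison over repeated draws of the training set to reduce noise; the point of the sketch is only to illustrate how calibration is measured in the realizable, under-parametrized regime the paper studies.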


