Verified Uncertainty Calibration

09/23/2019
by Ananya Kumar, et al.

Applications such as weather forecasting and personalized medicine demand models that output calibrated probability estimates - those representative of the true likelihood of a prediction. Most models are not calibrated out of the box but are recalibrated by post-processing model outputs. In this work we find that popular recalibration methods like Platt scaling and temperature scaling are (i) less calibrated than reported, and that (ii) current techniques cannot estimate how miscalibrated they are. An alternative method, histogram binning, has measurable calibration error but is sample inefficient - it requires O(B/ϵ^2) samples, compared to O(1/ϵ^2) for scaling methods, where B is the number of distinct probabilities the model can output. To get the best of both worlds, we introduce the scaling-binning calibrator, which first fits a parametric function that acts like a baseline for variance reduction and then bins the function values to actually ensure calibration. This requires only O(1/ϵ^2 + B) samples. We then show that commonly used estimators of calibration error are suboptimal - we prove that an alternative estimator introduced in the meteorological community requires fewer samples, proportional to √(B) instead of B. We validate our approach with multiclass calibration experiments on CIFAR-10 and ImageNet, where we obtain a 35% lower calibration error than histogram binning and, unlike scaling methods, guarantees on true calibration.
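The sketch below illustrates the scaling-binning idea described above in a simplified binary setting: fit a parametric scaling function (temperature scaling is assumed here), choose equal-mass bins from the scaled outputs, and map each bin to the average of the function's values in it. The function names (`fit_temperature`, `scaling_binning`) and the use of a single recalibration set are illustrative assumptions; the paper itself splits the recalibration data into separate sets for fitting, binning, and averaging.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_temperature(logits, labels):
    """Fit a single temperature T by minimizing binary negative log-likelihood."""
    def nll(T):
        p = np.clip(1.0 / (1.0 + np.exp(-logits / T)), 1e-12, 1 - 1e-12)
        return -np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p))
    T = minimize_scalar(nll, bounds=(0.05, 10.0), method="bounded").x
    return lambda z: 1.0 / (1.0 + np.exp(-z / T))

def scaling_binning(logits, labels, num_bins=10):
    """Simplified scaling-binning sketch (single recalibration set).
    Step 1: fit a scaling function g.
    Step 2: choose equal-mass bin edges from g's outputs.
    Step 3: map each bin to the mean of g's outputs in that bin.
    Averaging the function values (rather than the labels, as histogram
    binning does) is what keeps the sample requirement low while still
    producing discrete outputs whose calibration error can be measured."""
    g = fit_temperature(logits, labels)
    scaled = g(logits)
    edges = np.quantile(scaled, np.linspace(0.0, 1.0, num_bins + 1))[1:-1]
    bin_ids = np.digitize(scaled, edges)
    bin_means = np.array([scaled[bin_ids == b].mean() for b in range(num_bins)])

    def calibrator(new_logits):
        return bin_means[np.digitize(g(new_logits), edges)]
    return calibrator
```

For example, `scaling_binning(val_logits, val_labels)(test_logits)` would return recalibrated probabilities that take at most `num_bins` distinct values, which is what makes the resulting calibration error measurable in the first place.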
