Calibrated Prediction Intervals for Neural Network Regressors
Ongoing developments in neural network models are continually advancing the state of the art in terms of system accuracy. However, the predicted labels should not be regarded as the only core output; also important is a well-calibrated estimate of the prediction uncertainty. Such estimates and their calibration are critical for the robust handling of out-of-distribution events not observed in the training data. Despite their aforementioned advantage in accuracy, contemporary neural networks are generally poorly calibrated and hence do not produce reliable output probability estimates. Further, while post-processing calibration solutions can be found in the relevant literature, these tend to target classification systems. We therefore present a method for obtaining calibrated prediction intervals for neural network regressors by posing the regression task as a multi-class classification problem and applying one of three proposed calibration methods to the classifier's output. Testing our method on two exemplar tasks - speaker age prediction and signal-to-noise ratio estimation - indicates both the suitability of the classification-based regression models and that post-processing by our proposed empirical calibration or temperature scaling methods yields well-calibrated prediction intervals. The code for computing calibrated prediction intervals is publicly available.
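To make the recipe concrete, the following is a minimal sketch, not the authors' released code, of the general approach the abstract describes: regression targets are discretized into class bins, the resulting classifier's logits are calibrated by temperature scaling on a held-out set, and a prediction interval is read off the calibrated class distribution. All names here (fit_temperature, prediction_interval, bin_edges) are illustrative assumptions.

import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import log_softmax, softmax

def fit_temperature(val_logits, val_labels):
    # Choose T > 0 minimizing the negative log-likelihood of
    # softmax(logits / T) on a held-out validation set.
    def nll(t):
        logp = log_softmax(val_logits / t, axis=1)
        return -logp[np.arange(len(val_labels)), val_labels].mean()
    return minimize_scalar(nll, bounds=(0.05, 20.0), method="bounded").x

def prediction_interval(logits, bin_edges, temperature, coverage=0.9):
    # Central interval with the requested nominal coverage, read off the
    # temperature-scaled categorical distribution over the target bins.
    probs = softmax(logits / temperature)
    cdf = np.cumsum(probs)
    lo = np.searchsorted(cdf, (1.0 - coverage) / 2.0)
    hi = np.searchsorted(cdf, 1.0 - (1.0 - coverage) / 2.0)
    return bin_edges[lo], bin_edges[min(hi + 1, len(bin_edges) - 1)]

Under these assumptions, for the speaker age task one might use one-year bins: fit_temperature is run once on validation logits, after which prediction_interval maps each test utterance's logits to an age range with roughly the requested nominal coverage.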