1 Introduction
We want predictive models to produce reliable uncertainty estimates, so that better-informed decisions can be made based on their predictions. This is especially important in safety-critical applications such as autonomous driving Kendall and Gal (2017) and medical diagnosis Kompa et al. (2021). One common way of measuring the quality of predictive uncertainty is model calibration, which in the case of a classification task can be understood as how well the probabilities predicted by a model match its accuracy. For example, a well-calibrated model should be correct 70% of the time when it predicts a probability of 0.7 for the chosen class. Thus, for a $K$-class classification problem, we can define a perfectly calibrated model Wu and Gales (2021); Guo et al. (2017) as

$$\mathbb{P}(\hat{y} = y \mid \hat{p} = p) = p, \quad \forall p \in [0, 1], \tag{1}$$

where $\theta$ are the model parameters, $x$ is an input and $y$ is the corresponding class label. $\hat{y}$ is the class predicted by the model, given by $\hat{y} = \arg\max_{k} P(y = k \mid x; \theta)$, and $\hat{p} = \max_{k} P(y = k \mid x; \theta)$ is its corresponding predicted probability, or the model confidence, for input $x$. In the case of a CNN, these probabilities come from the softmax applied to the outputs of the final fully connected layer (the logits).
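Equation 1 cannot be checked exactly from finite data, so calibration is typically estimated by binning predictions by confidence and comparing the average confidence with the empirical accuracy in each bin, as in Guo et al. (2017). A minimal NumPy sketch (function names are our own, not from any particular library):

```python
import numpy as np

def reliability_bins(confidences, correct, n_bins=10):
    """Bin predictions by confidence and compare the average confidence
    with the empirical accuracy in each bin (the basis of a reliability curve)."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            rows.append((mask.mean(),               # fraction of predictions in bin
                         confidences[mask].mean(),  # average confidence in bin
                         correct[mask].mean()))     # empirical accuracy in bin
    return rows

def expected_calibration_error(confidences, correct, n_bins=10):
    """Weighted average of the |confidence - accuracy| gap over the bins."""
    return sum(w * abs(conf - acc)
               for w, conf, acc in reliability_bins(confidences, correct, n_bins))
```

A reliability curve plots the per-bin accuracy against the per-bin confidence; a perfectly calibrated model lies on the diagonal, and an overconfident one falls below it.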
It has been previously shown that modern convolutional neural networks (CNNs), such as ResNets He et al. (2016), exhibit poor calibration, and in fact tend to be overconfident Wu and Gales (2021); Guo et al. (2017). The model confidences are greater than the actual test accuracies achieved, making them unreliable.
On the other hand, neural network quantization is an optimization technique where weights and activations are stored in a lower-bit numerical format than the one they were trained in, e.g. int8 vs fp32 Jacob et al. (2017); Krishnamoorthi (2018); Nagel et al. (2021). This can allow significant reductions in computational and memory costs, resulting in lower latency and energy consumption. Quantization is thus often important for deploying CNNs on resource-limited platforms such as mobile phones. In this paper we examine post-training quantization (PTQ), where a fully trained full-precision network is mapped to lower precision without further training. It has been noted that CNNs are robust to the noise introduced by quantization; however, to the best of our knowledge, research tends to focus on improving robustness Nagel et al. (2021); Alizadeh et al. (2020); Gholami et al. (2021) rather than explaining why architectures seem to be inherently robust. There has been other research into understanding how compression methods affect accuracy Hooker et al. (2021); however, it primarily focuses on the effect of pruning and does not consider predictive uncertainty.
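As a concrete illustration of the basic mechanism, the following is a simplified sketch of uniform symmetric quantization (a simulated quantize-dequantize round trip, not the exact scheme of any particular library), in which the round trip introduces bounded rounding noise:

```python
import numpy as np

def fake_quantize_symmetric(x, n_bits=8):
    """Simulate uniform symmetric quantization: map floats to n_bits-wide
    integers via a min/max-based scale, then dequantize back to float.
    The difference from x is the quantization noise."""
    qmax = 2 ** (n_bits - 1) - 1            # e.g. 127 for int8
    scale = np.abs(x).max() / qmax          # one scale for the whole tensor
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale

w = np.random.randn(64, 64).astype(np.float32)
noise = fake_quantize_symmetric(w, n_bits=8) - w
# the rounding noise is bounded by half a quantization step, scale / 2,
# and grows as the number of bits decreases
```

Fewer bits mean a coarser grid and therefore larger noise, which is why decreasing the weight precision in our experiments gives a controlled way of increasing the quantization noise.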
In this paper we highlight a novel insight that links the above two concepts (calibration and quantization) in deep learning. We find that:

1. confidence and calibration are closely linked to the robustness of model accuracy to post-training quantization, and
2. overconfidence can potentially improve the robustness of accuracy.
2 How does calibration affect accuracy post-quantization?
We can consider the effect of post-training quantization on a CNN's activations, and ultimately its outputs, as adding noise to the original floating-point values Alizadeh et al. (2020); Yun and Wong (2021). Thus, we can separate two questions that determine the change in accuracy after quantization:

1. How easy is it to change the predicted class?
2. Given that a prediction has changed, how will the model accuracy be influenced?
In order for a prediction to change in a standard classification CNN, the quantization noise needs to be sufficient to change the top logit. Intuitively, this suggests that less confident predictions will be easier to change, as their top logit will be closer to the other logits. We would also expect lower precision, and thus greater quantization noise, to result in more swapped top logits. We do not investigate this in detail, as it would require accurately modelling the distribution of logits post-quantization for different architectures, conditional on the input. However, our empirical results support the intuition presented above.
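This intuition can be illustrated with a toy simulation; treating the quantization noise as additive Gaussian noise on the logits is an assumption made purely for illustration, not a model of any particular quantization scheme:

```python
import numpy as np

def swap_rate(logits, noise_std, n_trials=1000, rng=None):
    """Fraction of trials in which additive Gaussian noise on the logits
    changes the argmax, i.e. swaps the predicted class."""
    rng = rng or np.random.default_rng(0)
    pred = logits.argmax()
    noisy = logits + rng.normal(0.0, noise_std, size=(n_trials, logits.size))
    return (noisy.argmax(axis=1) != pred).mean()

confident = np.array([5.0, 1.0, 0.5])   # large top-logit margin
uncertain = np.array([2.0, 1.8, 1.7])   # small top-logit margin
# at the same noise level, the uncertain prediction swaps far more often,
# and increasing the noise raises the swap rate for both
```

The margin between the top logit and its nearest competitor plays the role of confidence here: the smaller the margin, the less noise is needed to swap the prediction.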
The second question can be directly linked to the calibration of the model, as calibration tells us about the accuracy of predictions at a given confidence. For example, for a well-calibrated model as defined in Equation 1, a prediction with confidence $\hat{p}$ will be correct with probability $\hat{p}$; a low-confidence prediction is therefore more likely to already be wrong. If the prediction is in fact incorrect, then a change will not decrease the overall accuracy, since the prediction will either change to another incorrect class or to the correct class.
It can now be seen how overconfidence, where the model is more confident than it is accurate, may improve robustness to post-training quantization. Predictions with high confidence $\hat{p}$ are more likely to be correct in the first place, so it is better for these to stay the same post-quantization; their higher confidence makes them harder to swap, which is beneficial. Conversely, for the more easily swapped low-confidence predictions, if their original accuracy is lower, then the change in accuracy post-quantization will be smaller.
3 Experiments
We present experimental results for ResNet-56 and ResNet-50 He et al. (2016) trained on CIFAR-100 Krizhevsky et al. and ImageNet Deng et al. (2009) respectively. Additional results for ResNet-20 on CIFAR-100 and MobileNetV2 Sandler et al. (2018) on ImageNet are available in Appendix A. CIFAR models were trained using the regime specified by He et al. (2016), whilst ImageNet models use pretrained weights available from PyTorch (https://pytorch.org/vision/stable/models.html) Paszke et al. (2019).

For weights we use uniform per-channel symmetric quantization, where the quantization parameters are determined using minimum and maximum values. For activations we use uniform per-tensor asymmetric quantization, where the quantization parameters are found using PyTorch's default histogram-based method, which iteratively aims to minimize the mean squared error from quantization Nagel et al. (2021); Paszke et al. (2019). Batch-norm layers are folded into the preceding convolutional layer. This is a relatively standard scheme, as our aim is not to achieve the best performance, but to examine behavior as quantization noise varies. Quantized inference is simulated in PyTorch using the existing backend for this purpose (https://github.com/pytorch/pytorch/blob/master/torch/ao/quantization/fake_quantize.py).

3.1 Model calibration before quantization
Reliability curves plot accuracy against binned confidence Guo et al. (2017); Niculescu-Mizil and Caruana (2005), and so not only give an idea of the calibration error, but also of whether a model is over- or under-confident. Figure 1 shows reliability curves for the floating-point models (pre-quantization), alongside histograms showing the distribution of confidence over the test datasets. It can be seen that ResNet-56 on CIFAR-100 is very overconfident on the test data, with accuracy much lower than confidence. ResNet-50 on ImageNet is better calibrated, but still overconfident. ResNet-56 is also more confident overall than ResNet-50, although both models have a large proportion of their predictions with confidence near 1.
3.2 Model accuracy after quantization
Given knowledge of the calibration behavior of the networks (Figure 1), we can explain the behavior of accuracy as quantization noise increases. Figure 2 shows, going from the floating point model to a quantized one, the percentage of swapped/changed predictions, the change in error rate (accuracy), the ratio of the previous two values, and histograms of swapped predictions over confidence. These are tracked as the activation precision is held constant at 8 bits and the weight precision is decreased from 8 to 4 bits, allowing the quantization noise to be varied in a controlled manner. Note that the histograms are not normalized, so the area reflects the number of swapped predictions.
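The quantities tracked in Figure 2 can be computed directly by comparing the floating-point and quantized predictions; a minimal sketch (array and function names are our own):

```python
import numpy as np

def quantization_impact(fp_preds, q_preds, labels):
    """Compare predictions before and after quantization: returns the fraction
    of swapped predictions, the change in error rate, and their ratio."""
    swapped = fp_preds != q_preds
    fp_err = (fp_preds != labels).mean()
    q_err = (q_preds != labels).mean()
    delta_err = q_err - fp_err
    ratio = delta_err / swapped.mean() if swapped.any() else 0.0
    return swapped.mean(), delta_err, ratio
```

Note that a swapped prediction only increases the error rate if it was originally correct; a swap from one incorrect class to another leaves the error rate unchanged, which is why the ratio can be far below 1.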
It can be seen for both models that as the quantization noise increases (precision decreases), the first predictions to be swapped are the lower-confidence ones, supporting the intuition presented earlier. Moreover, even though the proportion of swapped predictions is quite high, the increase in error rate is only a small fraction of it. This is reflected in the reliability curves in Figure 1, which show that low-confidence predictions are also low-accuracy, and supports the reasoning outlined in Section 2. Even though post-training quantization may cause a large number of predictions to change, the predictive accuracy of the models remains robust, as the majority of predictions that change do not increase the error rate. Only as the weight precision is decreased further (and the quantization noise increases) do higher-confidence predictions start to be swapped. The ratio of change in error rate to proportion swapped then increases, and this again is reflected in Figure 1, where higher-confidence predictions are shown to be more accurate.
It is not straightforward to directly compare the two models, as the distribution of quantization noise, and how it relates to the number of bits used to represent the network, will differ between them. However, we can still observe in Figure 2 that, as more predictions in the intermediate confidence range are swapped, the ratio of change in error rate to proportion swapped increases much more quickly for ResNet-50 than for ResNet-56. This can be related to Figure 1, where ResNet-56 is much less accurate than it is confident in this range, whilst ResNet-50 is better calibrated. This supports the idea that overconfidence improves the robustness of accuracy to quantization.
4 Discussion
We have shown the novel insight that model calibration can help explain the robustness of CNN accuracy to post-training quantization. Low-confidence predictions are more easily swapped post-quantization; however, if these predictions also have low accuracy, the overall accuracy will not decrease by much. High-confidence predictions will be more accurate, but more difficult to change. Moreover, we reason that overconfidence may improve robustness, as higher-accuracy predictions will be more confident, and lower-confidence predictions will be less accurate. We hope that this work lays the groundwork for further analysis and understanding of compressed neural networks. For example, further investigation should be done into how quantization precision affects noise at the logit level, as this work only examines behavior after the softmax.
This result raises a potential dilemma, which is not considered in the current literature. As research increasingly moves towards deep learning approaches with better estimates of predictive uncertainty, better-calibrated models may consequently have accuracies that are less robust to quantization. Interestingly, because our findings relate to the intrinsic calibration of a trained model, they do not affect methods that improve calibration post-hoc, such as Temperature Scaling Guo et al. (2017) or Deep Ensembles Wu and Gales (2021); Lakshminarayanan et al. (2017). However, methods applied during training, such as a soft calibration objective Karandikar et al. (2021) or label smoothing Müller et al. (2019), may be affected. Thus, a natural extension of this work would be to investigate post-training quantization on models trained using these methods.
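As a point of reference, Temperature Scaling Guo et al. (2017) divides the logits by a single scalar temperature learned on a validation set; since this does not change the argmax, it recalibrates the confidence without altering which class is predicted (a minimal sketch, with an illustrative logit vector of our own choosing):

```python
import numpy as np

def temperature_scale(logits, T):
    """Soften (T > 1) or sharpen (T < 1) the softmax without changing the argmax."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum(axis=-1, keepdims=True)

logits = np.array([[4.0, 1.0, 0.5]])
# with T > 1 the confidence drops, but the predicted class is unchanged
```

This is consistent with the observation above: since the logits themselves (up to a common scale) and the predicted classes are untouched, such post-hoc recalibration does not change which predictions quantization noise can swap.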
References

- M. Alizadeh et al. (2020). Gradient L1 regularization for quantization robustness. arXiv:2002.07520.
- J. Deng et al. (2009). ImageNet: a large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255.
- A. Gholami et al. (2021). A survey of quantization methods for efficient neural network inference. arXiv:2103.13630.
- C. Guo et al. (2017). On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning, PMLR, Vol. 70, pp. 1321–1330.
- K. He et al. (2016). Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778.
- S. Hooker et al. (2021). What do compressed deep neural networks forget? arXiv:1911.05248.
- B. Jacob et al. (2017). Quantization and training of neural networks for efficient integer-arithmetic-only inference. arXiv:1712.05877.
- A. Karandikar et al. (2021). Soft calibration objectives for neural networks. arXiv:2108.00106.
- A. Kendall and Y. Gal (2017). What uncertainties do we need in Bayesian deep learning for computer vision? arXiv:1703.04977.
- B. Kompa et al. (2021). Second opinion needed: communicating uncertainty in medical machine learning. NPJ Digital Medicine 4.
- R. Krishnamoorthi (2018). Quantizing deep convolutional networks for efficient inference: a whitepaper. arXiv:1806.08342.
- A. Krizhevsky et al. CIFAR-10 (Canadian Institute for Advanced Research).
- B. Lakshminarayanan et al. (2017). Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in Neural Information Processing Systems, Vol. 30.
- R. Müller et al. (2019). When does label smoothing help? In NeurIPS.
- M. Nagel et al. (2021). A white paper on neural network quantization. arXiv:2106.08295.
- A. Niculescu-Mizil and R. Caruana (2005). Predicting good probabilities with supervised learning. In Proceedings of the 22nd International Conference on Machine Learning.
- A. Paszke et al. (2019). PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pp. 8024–8035.
- M. Sandler et al. (2018). MobileNetV2: inverted residuals and linear bottlenecks. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4510–4520.
- X. Wu and M. Gales (2021). Should ensemble members be calibrated? arXiv:2101.05397.
- S. Yun and A. Wong (2021). Do all MobileNets quantize poorly? Gaining insights into the effect of quantization on depthwise separable convolutional networks through the eyes of multi-scale distributional dynamics. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 2447–2456.
Appendix A
Additional experimental results are provided for ResNet-20 on CIFAR-100 and MobileNetV2 on ImageNet (Figures 3 and 4). ResNet-20 behaves quite similarly to ResNet-56, which is to be expected as they share the same architecture. MobileNetV2 is similarly calibrated to ResNet-50, but is more fragile to quantization, which is a common observation for this architecture Krishnamoorthi (2018); Nagel et al. (2021). However, in Figure 4, its behavior is still consistent with the reasoning presented in this paper; there is simply likely to be more quantization noise at the logit level for a given precision than for ResNet-50. Note that the figure is truncated for readability, as the change in values is significantly larger at lower precisions.