An Underexplored Dilemma between Confidence and Calibration in Quantized Neural Networks

11/10/2021
by   Guoxuan Xia, et al.

Modern convolutional neural networks (CNNs) are known to be overconfident in terms of their calibration on unseen input data: they are more confident than they are accurate. This is undesirable if the predicted probabilities are to be used for downstream decision-making. When considering accuracy, CNNs are also surprisingly robust to compression techniques, such as quantization, which aim to reduce computational and memory costs. We show that this robustness can be partially explained by the calibration behavior of modern CNNs, and may be improved by overconfidence. This follows from an intuitive result: low-confidence predictions are more likely to change post-quantization, whilst being less accurate; high-confidence predictions are more accurate, but more difficult to change. Thus, only a minimal drop in post-quantization accuracy is incurred. This presents a potential conflict in neural network design: worse calibration from overconfidence may lead to better robustness to quantization. We perform experiments applying post-training quantization to a variety of CNNs, on the CIFAR-100 and ImageNet datasets.
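The mechanism the abstract describes is simple enough to sketch. The snippet below is a minimal, hypothetical illustration, not the authors' code or experimental setup: it fake-quantizes the parameters of a toy CNN with symmetric uniform per-tensor quantization, then measures how often predictions flip as a function of the full-precision confidence. The toy architecture, the random inputs, the 4-bit setting, and the `quantize_weights_` helper are all illustrative assumptions; on a trained network evaluated on real data, flips would be expected to concentrate in the low-confidence buckets, which is exactly the behavior the paper leverages.

```python
# Minimal sketch (not the authors' code): fake-quantize a toy CNN and
# measure prediction-flip rate per full-precision confidence bucket.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

def quantize_weights_(model: nn.Module, n_bits: int = 4) -> None:
    """Fake-quantize all parameters in place, symmetric uniform per-tensor."""
    qmax = 2 ** (n_bits - 1) - 1
    with torch.no_grad():
        for p in model.parameters():
            scale = p.abs().max() / qmax
            if scale > 0:
                p.copy_((p / scale).round().clamp(-qmax, qmax) * scale)

# Toy CNN and random inputs stand in for a real network and dataset.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)
x = torch.randn(2048, 3, 8, 8)

model.eval()
with torch.no_grad():
    probs_fp = model(x).softmax(dim=-1)
conf_fp, pred_fp = probs_fp.max(dim=-1)  # full-precision confidence/prediction

qmodel = copy.deepcopy(model)
quantize_weights_(qmodel, n_bits=4)
with torch.no_grad():
    pred_q = qmodel(x).argmax(dim=-1)

# Flip rate vs. pre-quantization confidence: for a trained model, flips
# should concentrate in the low-confidence buckets.
for lo, hi in [(0.0, 0.2), (0.2, 0.4), (0.4, 1.0)]:
    mask = (conf_fp >= lo) & (conf_fp < hi)
    if mask.any():
        flip = (pred_fp[mask] != pred_q[mask]).float().mean().item()
        print(f"confidence [{lo:.1f}, {hi:.1f}): "
              f"flip rate {flip:.3f} over {int(mask.sum())} predictions")
```

Because accuracy and flip-probability both track confidence, the same per-bucket breakdown, computed against ground-truth labels, is what connects overconfidence to the small post-quantization accuracy drop reported in the abstract.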


Related research

- Calibration of Neural Networks (03/19/2023)
- Make RepVGG Greater Again: A Quantization-aware Approach (12/03/2022)
- Is In-Domain Data Really Needed? A Pilot Study on Cross-Domain Calibration for Network Quantization (05/16/2021)
- Haar Wavelet Feature Compression for Quantized Graph Convolutional Networks (10/10/2021)
- Quantune: Post-training Quantization of Convolutional Neural Networks using Extreme Gradient Boosting for Fast Deployment (02/10/2022)
- Quantized convolutional neural networks through the lens of partial differential equations (08/31/2021)
- CNN Acceleration by Low-rank Approximation with Quantized Factors (06/16/2020)
