Detecting Misclassification Errors in Neural Networks with a Gaussian Process Model

10/05/2020
by   Xin Qiu, et al.

As neural network classifiers are deployed in real-world applications, it is crucial that their predictions are not just accurate, but trustworthy as well. One practical solution is to assign confidence scores to each prediction, then filter out low-confidence predictions. However, existing confidence metrics are not yet sufficiently reliable for this role. This paper presents a new framework that produces more reliable confidence scores for detecting misclassification errors. This framework, RED, calibrates the classifier's inherent confidence indicators and estimates the uncertainty of the calibrated confidence scores using Gaussian Processes. Empirical comparisons with other confidence estimation methods on 125 UCI datasets demonstrate that this approach is effective. An experiment on a vision task with a large deep learning architecture further confirms that the method can scale up, and a case study involving out-of-distribution and adversarial samples shows the potential of the proposed method to improve the robustness of neural network classifiers more broadly in the future.
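The following is a minimal sketch, not the authors' implementation, of the general idea the abstract describes: treat a classifier's raw confidence (e.g., max softmax) as an inherent confidence indicator, fit a Gaussian Process to the residual between correctness and that raw confidence, and use the GP's predictive mean and standard deviation as a calibrated score and its uncertainty. The function names and the use of scikit-learn's GaussianProcessRegressor are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel


def fit_red_gp(features, raw_confidence, is_correct):
    """Fit a GP to the residual (correctness - raw confidence).

    features        : array of shape (n, d), e.g. penultimate-layer activations
    raw_confidence  : array of shape (n,), the classifier's own confidence
    is_correct      : array of shape (n,), 1 if the prediction was correct
    """
    residual = is_correct.astype(float) - raw_confidence
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(features, residual)
    return gp


def red_scores(gp, features, raw_confidence):
    """Return a calibrated confidence score and its GP uncertainty."""
    residual_mean, residual_std = gp.predict(features, return_std=True)
    calibrated = raw_confidence + residual_mean
    return calibrated, residual_std


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-ins for validation data: features, raw confidences, correctness labels.
    X_val = rng.normal(size=(200, 8))
    conf_val = rng.uniform(0.5, 1.0, size=200)
    correct_val = rng.integers(0, 2, size=200)

    gp = fit_red_gp(X_val, conf_val, correct_val)

    # Score a few new predictions: low calibrated score or high uncertainty
    # flags a likely misclassification to be filtered out.
    X_test = rng.normal(size=(5, 8))
    conf_test = rng.uniform(0.5, 1.0, size=5)
    calibrated, uncertainty = red_scores(gp, X_test, conf_test)
    print(calibrated, uncertainty)
```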


