A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks

10/07/2016
by Dan Hendrycks, et al.

We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified and out-of-distribution examples, allowing for their detection. We assess performance by defining several tasks in computer vision, natural language processing, and automatic speech recognition, showing the effectiveness of this baseline across all. We then show the baseline can sometimes be surpassed, demonstrating the room for future research on these underexplored detection tasks.
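
For intuition, here is a minimal sketch of the maximum softmax probability (MSP) baseline the abstract describes, written in Python/NumPy. The logits and the 0.8 cutoff are illustrative assumptions, not values from the paper; in practice the threshold is tuned on held-out validation data.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def max_softmax_score(logits: np.ndarray) -> np.ndarray:
    """Maximum softmax probability (MSP) per example.

    Higher scores suggest the example is more likely to be
    correctly classified and in-distribution.
    """
    return softmax(logits).max(axis=-1)

# Hypothetical logits for three examples over five classes.
logits = np.array([
    [8.0, 0.1, 0.2, 0.1, 0.3],  # confident prediction
    [1.2, 1.0, 0.9, 1.1, 1.0],  # near-uniform: likely error or OOD
    [4.0, 3.8, 0.1, 0.2, 0.1],  # two competing classes
])

scores = max_softmax_score(logits)
threshold = 0.8  # illustrative cutoff, assumed for this sketch
flagged = scores < threshold  # flag low-confidence examples for review
print(scores.round(3), flagged)
```

Ranking examples by this score and sweeping the threshold is what yields the detection curves (e.g., AUROC) used to evaluate the baseline.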

Related research

Towards End-to-End Training of Automatic Speech Recognition for Nigerian Pidgin (10/21/2020)
Nigerian Pidgin remains one of the most popular languages in West Africa...

Bayesian Neural Networks: An Introduction and Survey (06/22/2020)
Neural Networks (NNs) have provided state-of-the-art results for many ch...

Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks (06/08/2017)
We consider the problem of detecting out-of-distribution images in neura...

Out-of-Distribution Detection using Multiple Semantic Label Representations (08/20/2018)
Deep Neural Networks are powerful models that attained remarkable result...

Investigation of Large-Margin Softmax in Neural Language Modeling (05/20/2020)
To encourage intra-class compactness and inter-class separability among ...

Uncertainty-Aware Reliable Text Classification (07/15/2021)
Deep neural networks have significantly contributed to the success in pr...
