SLOVA: Uncertainty Estimation Using Single Label One-Vs-All Classifier

06/28/2022
by Bartosz Wójcik, et al.

Deep neural networks achieve impressive performance, yet they cannot reliably estimate their predictive confidence, which limits their applicability in high-risk domains. We show that applying a multi-label one-vs-all loss reveals classification ambiguity and reduces model overconfidence. The introduced SLOVA (Single Label One-Vs-All) model redefines the typical one-vs-all predictive probabilities for the single-label setting, where exactly one class is the correct answer. The proposed classifier is confident only if a single class has a high probability and the probabilities of all other classes are negligible. Unlike the typical softmax function, SLOVA naturally detects out-of-distribution samples: when the probabilities of all classes are small, the sample is flagged as out-of-distribution. The model is additionally fine-tuned with exponential calibration, which allows us to precisely align the confidence score with model accuracy. We verify our approach on three tasks. First, we demonstrate that SLOVA is competitive with the state of the art on in-distribution calibration. Second, the performance of SLOVA is robust under dataset shift. Finally, our approach performs extremely well in the detection of out-of-distribution samples. Consequently, SLOVA is a tool that can be used in various applications where uncertainty modeling is required.
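The abstract describes a score that is high only when a single class is confidently "on" and every other class is confidently "off". A minimal sketch of such a one-vs-all score, assuming per-class sigmoid outputs combined as σ(z_k) · ∏_{j≠k}(1 − σ(z_j)) (the exact formula and the exponential-calibration step are in the full paper; the function name and shapes here are illustrative, not the authors' code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def slova_style_scores(logits):
    """Illustrative single-label one-vs-all score.

    score_k = sigma(z_k) * prod_{j != k} (1 - sigma(z_j)):
    large only when class k is confidently predicted AND all
    other classes are confidently rejected.
    """
    s = sigmoid(np.asarray(logits, dtype=float))
    off = 1.0 - s                      # per-class "rejection" probabilities
    prod_off = np.prod(off)            # product over all classes
    # Divide out each class's own (1 - s_k) to get the product over j != k.
    return s * prod_off / np.clip(off, 1e-12, None)

# One dominant logit -> high max score; two competing logits -> low max score,
# exposing the ambiguity that softmax would hide.
confident = slova_style_scores([8.0, -8.0, -8.0])
ambiguous = slova_style_scores([8.0, 8.0, -8.0])
print(confident.max())  # close to 1
print(ambiguous.max())  # close to 0
```

For an out-of-distribution input, all logits tend to be low, so every score is small and the maximum score itself serves as the detection signal; softmax, by contrast, always normalizes to 1 and cannot express "none of the above".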


