SELFOOD: Self-Supervised Out-Of-Distribution Detection via Learning to Rank

05/24/2023
by Dheeraj Mekala, et al.

Deep neural classifiers trained with cross-entropy loss (CE loss) often suffer from poor calibration, necessitating the task of out-of-distribution (OOD) detection. Traditional supervised OOD detection methods require expensive manual annotation of in-distribution and OOD samples. To address the annotation bottleneck, we introduce SELFOOD, a self-supervised OOD detection method that requires only in-distribution samples as supervision. We cast OOD detection as an inter-document intra-label (IDIL) ranking problem and train the classifier with our pairwise ranking loss, referred to as IDIL loss. Specifically, given a set of in-distribution documents and their labels, for each label, we train the classifier to rank the softmax scores of documents belonging to that label to be higher than the scores of documents that belong to other labels. Unlike CE loss, our IDIL loss function reaches zero when the desired confidence ranking is achieved and gradients are backpropagated to decrease probabilities associated with incorrect labels rather than continuously increasing the probability of the correct label. Extensive experiments with several classifiers on multiple classification datasets demonstrate the effectiveness of our method in both coarse- and fine-grained settings.
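The abstract describes the IDIL objective only informally, so the following is a minimal sketch of one plausible hinge-style instantiation: for each label, every document whose gold label is that label should receive a higher softmax score for it than any document with a different gold label, and the loss is zero once that ranking holds. The function name `idil_loss`, the hinge form, and the `margin` parameter are assumptions for illustration, not the paper's exact formulation.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a single document's logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def idil_loss(logits_batch, labels, margin=0.0):
    """Inter-Document Intra-Label (IDIL) ranking loss, hinge-style sketch.

    For each label c, compare the softmax score for c of documents
    whose gold label is c against documents whose gold label is not c.
    Each violated pair (a non-c document scoring at least as high)
    contributes to the loss; the loss reaches zero when the desired
    confidence ranking is achieved.
    """
    probs = [softmax(row) for row in logits_batch]
    num_labels = len(logits_batch[0])
    loss, pairs = 0.0, 0
    for c in range(num_labels):
        pos = [p[c] for p, y in zip(probs, labels) if y == c]
        neg = [p[c] for p, y in zip(probs, labels) if y != c]
        for pi in pos:
            for pj in neg:
                loss += max(0.0, margin + pj - pi)
                pairs += 1
    return loss / max(pairs, 1)
```

With a correctly ranked batch (e.g. logits `[[2.0, 0.0], [0.0, 2.0]]`, labels `[0, 1]`) the loss is exactly zero, which matches the abstract's claim that, unlike cross-entropy, the objective stops pushing probabilities once the ranking is satisfied.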


Related research

11/22/2021 · S3: Supervised Self-supervised Learning under Label Noise
Despite the large progress in supervised learning with Neural Networks, ...

09/04/2018 · Out-of-Distribution Detection Using an Ensemble of Self Supervised Leave-out Classifiers
As deep learning methods form a critical part in commercially important ...

07/13/2022 · Semi-supervised Ranking for Object Image Blur Assessment
Assessing the blurriness of an object image is fundamentally important t...

03/02/2022 · GSC Loss: A Gaussian Score Calibrating Loss for Deep Learning
Cross entropy (CE) loss integrated with softmax is an orthodox component...

09/01/2022 · Federated Learning with Label Distribution Skew via Logits Calibration
Traditional federated optimization methods perform poorly with heterogen...

06/28/2022 · SLOVA: Uncertainty Estimation Using Single Label One-Vs-All Classifier
Deep neural networks present impressive performance, yet they cannot rel...

08/23/2023 · RankMixup: Ranking-Based Mixup Training for Network Calibration
Network calibration aims to accurately estimate the level of confidences...
