Analysis of Confident-Classifiers for Out-of-distribution Detection

04/27/2019
by Sachin Vernekar, et al.

Discriminatively trained neural classifiers can be trusted only when the input data come from the training distribution (in-distribution). Detecting out-of-distribution (OOD) samples is therefore essential to avoid classification errors. In the context of OOD detection for image classification, one recent approach trains a so-called "confident-classifier" by minimizing the standard cross-entropy loss on in-distribution samples while minimizing the KL divergence between the predictive distribution on OOD samples drawn from low-density regions of the in-distribution and the uniform distribution (i.e., maximizing the entropy of the outputs on those samples). Samples can then be flagged as OOD if the classifier assigns them low confidence or high entropy. In this paper, we analyze this setting both theoretically and experimentally. We conclude that the resulting confident-classifier can still yield arbitrarily high confidence for OOD samples far away from the in-distribution. We instead suggest training a classifier with an explicit "reject" class for OOD samples.
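
A minimal sketch of the two objectives discussed above, assuming a PyTorch setup; the function names, the weighting coefficient beta, and the direction of the KL term are illustrative choices, not the authors' reference implementation.

    import torch
    import torch.nn.functional as F

    def confident_classifier_loss(logits_in, labels_in, logits_ood, beta=1.0):
        # Standard cross-entropy on in-distribution samples.
        ce = F.cross_entropy(logits_in, labels_in)
        # Push the predictive distribution on OOD samples toward uniform
        # (equivalently, maximize its entropy up to a constant).
        log_probs_ood = F.log_softmax(logits_ood, dim=1)
        uniform = torch.full_like(log_probs_ood, 1.0 / logits_ood.size(1))
        kl = F.kl_div(log_probs_ood, uniform, reduction="batchmean")
        return ce + beta * kl

    def reject_class_loss(logits_in, labels_in, logits_ood):
        # Alternative suggested in the paper: the network outputs K+1 logits,
        # the last one being an explicit "reject" class, and OOD samples are
        # simply labeled with that class.
        reject_label = logits_ood.size(1) - 1
        reject_labels = torch.full((logits_ood.size(0),), reject_label,
                                   dtype=torch.long, device=logits_ood.device)
        return (F.cross_entropy(logits_in, labels_in)
                + F.cross_entropy(logits_ood, reject_labels))

At test time, a sample would then be flagged as OOD when its predictive entropy is high (first objective) or when the reject class receives the highest probability (second objective).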
