Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples

11/26/2017
by Kimin Lee, et al.

The problem of detecting whether a test sample comes from the in-distribution (i.e., the distribution the classifier was trained on) or from an out-of-distribution sufficiently different from it arises in many real-world machine learning applications. However, state-of-the-art deep neural networks are known to be highly overconfident in their predictions, i.e., they do not distinguish in- from out-of-distributions. Recently, several threshold-based detectors have been proposed to handle this issue given pre-trained neural classifiers. However, the performance of these prior works depends heavily on how the classifiers are trained, since they focus only on improving the inference procedure. In this paper, we develop a novel training method for classifiers so that such inference algorithms can work better. In particular, we suggest two additional terms added to the original loss (e.g., cross entropy). The first forces the classifier to be less confident on samples from out-of-distribution, and the second (implicitly) generates the most effective training samples for the first. In essence, our method jointly trains both a classification network and a generative network for out-of-distribution. We demonstrate its effectiveness using deep convolutional neural networks on various popular image datasets.
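As a rough illustration of the first loss term, below is a minimal PyTorch sketch: standard cross-entropy on in-distribution samples plus a KL term that pushes the classifier's predictive distribution on out-of-distribution samples toward the uniform distribution over classes. The function name `confidence_loss`, the weight `beta`, and the assumption that out-of-distribution logits are already available (e.g., from the jointly trained generator) are illustrative choices, not taken from the paper text.

```python
import math

import torch
import torch.nn.functional as F


def confidence_loss(logits_in, labels_in, logits_out, beta=1.0):
    """Cross-entropy on in-distribution data plus a confidence penalty
    that pushes the predictive distribution on out-of-distribution
    inputs toward the uniform distribution over classes."""
    num_classes = logits_in.size(1)
    # Standard classification loss on in-distribution samples.
    ce = F.cross_entropy(logits_in, labels_in)
    # KL(U || p(y|x)) = -log K - (1/K) * sum_k log p_k(y|x);
    # it is minimized (at zero) when the softmax output is uniform.
    log_p_out = F.log_softmax(logits_out, dim=1)
    kl_uniform = -log_p_out.mean(dim=1).mean() - math.log(num_classes)
    return ce + beta * kl_uniform


# Example usage with random tensors standing in for a batch:
logits_in = torch.randn(8, 10)          # classifier outputs on training data
labels_in = torch.randint(0, 10, (8,))  # ground-truth labels
logits_out = torch.randn(8, 10)         # classifier outputs on OOD samples
loss = confidence_loss(logits_in, labels_in, logits_out, beta=1.0)
```

In the paper's full method, the out-of-distribution batch is not fixed in advance but produced by a generative network trained jointly with the classifier (the second loss term); any other source of out-of-distribution samples could be substituted for `logits_out` in this sketch.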

