Out-of-distribution Detection by Cross-class Vicinity Distribution of In-distribution Data

06/19/2022
by Zhilin Zhao, et al.

Deep neural networks learn only to map in-distribution inputs to their ground-truth labels during training, without learning to distinguish out-of-distribution samples from in-distribution ones. This follows from the assumption that all samples are independent and identically distributed, with no distributional distinction. Consequently, a network pretrained on in-distribution samples treats out-of-distribution samples as in-distribution and makes high-confidence predictions on them at test time. To address this issue, we draw out-of-distribution samples from the vicinity distribution of the in-distribution training samples and learn to reject predictions on them. We introduce a Cross-class Vicinity Distribution under the assumption that an out-of-distribution sample generated by mixing multiple in-distribution samples belongs to none of the classes of its constituents. We then improve the discriminability of a pretrained network by finetuning it with out-of-distribution samples drawn from the cross-class vicinity distribution, where each such input is paired with a complementary label. Experiments on various in-/out-of-distribution datasets show that the proposed method significantly outperforms existing methods in discriminating between in- and out-of-distribution samples.
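The generation step the abstract describes can be sketched concretely. Below is a minimal, hypothetical PyTorch sketch of the idea as we read it from the abstract: mix several in-distribution samples with random convex weights to form an out-of-distribution input, record the constituents' classes as forbidden, and penalize the network for placing probability on any of them. The function names `cross_class_vicinity_batch` and `complementary_loss`, the Dirichlet mixing weights, and the exact loss form are our assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F


def cross_class_vicinity_batch(x, y, num_classes, n_mix=2):
    """Build out-of-distribution inputs by mixing n_mix in-distribution
    samples (assumption: convex combinations with Dirichlet weights)."""
    batch = x.size(0)
    perms = [torch.randperm(batch, device=x.device) for _ in range(n_mix)]
    # One weight vector per output sample, summing to 1 across the mix.
    weights = torch.distributions.Dirichlet(
        torch.ones(n_mix)).sample((batch,)).to(x.device)
    x_ood = sum(w.view(-1, *([1] * (x.dim() - 1))) * x[p]
                for w, p in zip(weights.unbind(dim=1), perms))
    # Cross-class assumption: the mixed sample belongs to none of its
    # constituents' classes, so mark those classes as forbidden.
    forbidden = torch.zeros(batch, num_classes, device=x.device)
    for p in perms:
        forbidden.scatter_(1, y[p].unsqueeze(1), 1.0)
    return x_ood, forbidden


def complementary_loss(logits, forbidden):
    """Penalize probability mass on the forbidden (constituent) classes;
    a simple stand-in for a complementary-label objective."""
    probs = F.softmax(logits, dim=1)
    p_forbidden = (probs * forbidden).sum(dim=1).clamp(max=1.0 - 1e-6)
    return -torch.log1p(-p_forbidden).mean()
```

During finetuning, this loss would presumably be combined with the usual cross-entropy on clean in-distribution batches, e.g. `loss = F.cross_entropy(model(x), y) + lam * complementary_loss(model(x_ood), forbidden)`, where `lam` is a weighting hyperparameter of our own choosing.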


Related research

06/19/2022 | Gray Learning from Non-IID Data with Out-of-distribution Samples
The quality of the training data annotated by experts cannot be guarante...

12/13/2021 | WOOD: Wasserstein-based Out-of-Distribution Detection
The training and test data for deep-neural-network-based classifiers are...

06/19/2022 | Label and Distribution-discriminative Dual Representation Learning for Out-of-Distribution Detection
To classify in-distribution samples, deep neural networks learn label-di...

08/13/2021 | CODEs: Chamfer Out-of-Distribution Examples against Overconfidence Issue
Overconfident predictions on out-of-distribution (OOD) samples is a thor...

10/19/2022 | Distribution Shift Detection for Deep Neural Networks
To deploy and operate deep neural models in production, the quality of t...

06/26/2021 | Midpoint Regularization: from High Uncertainty Training to Conservative Classification
Label Smoothing (LS) improves model generalization through penalizing mo...

06/17/2022 | Open-Sampling: Exploring Out-of-Distribution data for Re-balancing Long-tailed datasets
Deep neural networks usually perform poorly when the training dataset su...
