Supervision Adaptation Balances In-Distribution Generalization and Out-of-Distribution Detection

06/19/2022
by Zhilin Zhao, et al.

When there is a discrepancy between in-distribution (ID) samples and out-of-distribution (OOD) samples, deep neural networks trained only on ID samples produce high-confidence predictions on OOD samples. This happens primarily because no OOD samples are available to constrain the networks during training. To improve the OOD sensitivity of deep networks, several state-of-the-art methods introduce samples from other real-world datasets as OOD samples into the training process and assign manually determined labels to them. However, these methods sacrifice classification accuracy, because the unreliable labeling of OOD samples disrupts ID classification. To balance ID generalization and OOD detection, the key challenge is to make OOD samples compatible with ID ones; our proposed supervision adaptation method addresses this by defining adaptive supervision information for OOD samples. First, by measuring the dependency between ID samples and their labels through mutual information, we derive the form of the supervision information in terms of the negative probabilities of all classes. Second, after exploring the data correlations between ID and OOD samples by solving multiple binary regression problems, we estimate the supervision information so as to make ID classes more separable. Experiments on four advanced network architectures with two ID datasets and eleven OOD datasets demonstrate that our supervision adaptation method balances ID classification ability and OOD detection capacity.
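The abstract describes supervision information for OOD samples expressed through the negative probabilities of all classes. The paper's exact formulation is not given here, so the following is only a minimal NumPy sketch of one plausible reading: an OOD sample receives a soft target built by renormalizing the negated class probabilities (down-weighting whichever classes the network currently favors), and this soft-label term is added to the usual ID cross-entropy. The functions `adaptive_ood_target` and `mixed_loss`, and the weighting `lam`, are illustrative assumptions, not the authors' method.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def adaptive_ood_target(logits):
    """Hypothetical adaptive supervision for an OOD sample: a soft label
    built from the negative class probabilities, so classes the network
    currently predicts with high probability get the lowest target weight."""
    p = softmax(logits)
    return softmax(-p)  # renormalized negative probabilities

def mixed_loss(id_logits, id_labels, ood_logits, lam=0.5):
    """Cross-entropy on ID samples plus a soft-label cross-entropy term
    on OOD samples, weighted by the (assumed) hyperparameter lam."""
    id_p = softmax(id_logits)
    ce = -np.log(id_p[np.arange(len(id_labels)), id_labels] + 1e-12).mean()
    ood_p = softmax(ood_logits)
    tgt = adaptive_ood_target(ood_logits)
    soft = -(tgt * np.log(ood_p + 1e-12)).sum(axis=-1).mean()
    return ce + lam * soft
```

Under this sketch, the OOD target is a valid distribution (sums to one) and is smallest for the class the network is most confident about, which is one simple way to discourage high-confidence predictions on OOD inputs without assigning a single hard label.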


Related research

- Meta OOD Learning for Continuously Adaptive OOD Detection (09/21/2023)
- Bridging In- and Out-of-distribution Samples for Their Better Discriminability (01/07/2021)
- Revealing Distributional Vulnerability of Explicit Discriminators by Implicit Generators (08/23/2021)
- Understanding out-of-distribution accuracies through quantifying difficulty of test samples (03/28/2022)
- ProtoInfoMax: Prototypical Networks with Mutual Information Maximization for Out-of-Domain Detection (08/27/2021)
- Understanding, Detecting, and Separating Out-of-Distribution Samples and Adversarial Samples in Text Classification (04/09/2022)
- Out of distribution detection for skin and malaria images (11/02/2021)
