Unsupervised Out-of-Distribution Detection by Maximum Classifier Discrepancy

08/14/2019
by   Qing Yu, et al.

Since deep learning models are deployed in many commercial applications, detecting out-of-distribution (OOD) inputs correctly is important to maintain model performance, ensure the quality of collected data, and prevent applications from being used for other-than-intended purposes. In this work, we propose a two-head deep convolutional neural network (CNN) and maximize the discrepancy between its two classifiers to detect OOD inputs. The network consists of one shared feature extractor and two classifiers that have different decision boundaries yet classify in-distribution (ID) samples correctly. Unlike previous methods, we also exploit unlabeled data for unsupervised training: we use these unlabeled samples to maximize the discrepancy between the two classifiers' decision boundaries, which pushes OOD samples outside the manifold of the ID samples and enables us to detect OOD samples that lie far from the support of the ID data. Overall, our approach significantly outperforms other state-of-the-art methods on several OOD detection benchmarks and in two real-world simulation settings.
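The detection criterion described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: random weights stand in for the trained shared feature extractor and the two classifier heads, and the threshold value is hypothetical (in practice it would be tuned on validation data). The sketch shows only the test-time rule: compute both heads' softmax outputs for the same features and flag a sample as OOD when the classifiers disagree strongly.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-subtraction for stability."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def discrepancy(p1, p2):
    """L1 distance between the two heads' predicted distributions (in [0, 2])."""
    return np.abs(p1 - p2).sum(axis=-1)

rng = np.random.default_rng(0)

# Hypothetical stand-ins for trained components:
features = rng.normal(size=(4, 16))    # output of the shared feature extractor
W1, W2 = rng.normal(size=(2, 16, 10))  # two classifier heads (10 classes)

p1 = softmax(features @ W1)
p2 = softmax(features @ W2)
d = discrepancy(p1, p2)

threshold = 0.5          # hypothetical; chosen on held-out ID data in practice
is_ood = d > threshold   # large disagreement between heads => flag as OOD
```

After training, ID samples land where both heads agree, so their discrepancy stays small; the unsupervised discrepancy-maximization step drives the heads apart on everything outside the ID manifold, making the threshold test effective.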


