Out-of-distribution Detection in Classifiers via Generation

10/09/2019
by Sachin Vernekar, et al.

By design, discriminatively trained neural network classifiers produce reliable predictions only for in-distribution samples. For real-world deployment, detecting out-of-distribution (OOD) samples is therefore essential. If OOD samples are assumed to lie outside a closed boundary around the in-distribution, typical neural classifiers carry no knowledge of this boundary to use for OOD detection at inference time. Recent approaches instill this knowledge by explicitly training the classifier with generated OOD samples close to the in-distribution boundary. However, these generated samples fail to cover the entire in-distribution boundary effectively, yielding a sub-optimal OOD detector. In this paper, we analyze the feasibility of such approaches by investigating the complexity of producing such "effective" OOD samples. We also propose a novel algorithm that generates such samples using a manifold-learning network (e.g., a variational autoencoder) and then trains an (n+1)-class classifier for OOD detection, where the (n+1)-th class represents the OOD samples. We compare our approach against several recent classifier-based OOD detectors on the MNIST and Fashion-MNIST datasets; overall, the proposed approach consistently outperforms the others.
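The core idea of generating OOD samples near the in-distribution boundary via a manifold-learning model can be illustrated with a minimal sketch. This is not the paper's algorithm: `make_boundary_ood` is a hypothetical helper operating on latent codes (here simulated with random vectors in place of a trained VAE encoder's output), pushing them radially outward so that, once decoded, they would lie just beyond the in-distribution boundary and can be labeled as the (n+1)-th class.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_boundary_ood(latents, scale=1.5):
    """Push latent codes radially outward past the data manifold.

    latents : (N, d) array of encoded in-distribution samples
              (in a real pipeline, the output of a trained VAE encoder).
    scale   : factor > 1 that moves each code beyond its typical radius,
              so the decoded samples fall just outside the boundary.
    """
    center = latents.mean(axis=0)
    offsets = latents - center
    return center + scale * offsets

# Toy stand-in for encoded in-distribution data: latent codes
# clustered around a common center.
z_in = rng.normal(loc=0.0, scale=1.0, size=(100, 8))
z_ood = make_boundary_ood(z_in, scale=1.5)

# The generated codes sit farther from the latent center than the
# originals; decoding them would yield near-boundary OOD samples
# to serve as training data for the extra (n+1)-th class.
center = z_in.mean(axis=0)
r_in = np.linalg.norm(z_in - center, axis=1)
r_ood = np.linalg.norm(z_ood - center, axis=1)
print(r_ood.mean() > r_in.mean())  # True
```

In practice the outward-scaled codes would be passed through the VAE decoder to produce image-space OOD samples, which are appended to the training set with label n+1 before training the classifier.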

