WOOD: Wasserstein-based Out-of-Distribution Detection

12/13/2021
by Yinan Wang, et al.

The training and test data for deep-neural-network-based classifiers are usually assumed to be sampled from the same distribution. When some of the test samples are drawn from a distribution sufficiently far from that of the training samples (a.k.a. out-of-distribution (OOD) samples), the trained neural network tends to make high-confidence predictions for them. Detecting OOD samples is critical when deploying neural networks for image classification, object detection, and similar tasks: it enhances the classifier's robustness to irrelevant inputs and improves system resilience and security under various forms of attack. OOD detection poses three main challenges: (i) the detection method should be compatible with various classifier architectures (e.g., DenseNet, ResNet) without significantly increasing model complexity or computational requirements; (ii) the OOD samples may come from multiple distributions, whose class labels are commonly unavailable; (iii) a score function needs to be defined to effectively separate OOD samples from in-distribution (InD) samples. To overcome these challenges, we propose a Wasserstein-based out-of-distribution detection (WOOD) method. The basic idea is to define a Wasserstein-distance-based score that evaluates the dissimilarity between a test sample and the distribution of InD samples. An optimization problem is then formulated and solved based on the proposed score function. The statistical learning bound of the proposed method is investigated to guarantee that the loss value achieved by the empirical optimizer approximates the global optimum. Results of a comparison study demonstrate that WOOD consistently outperforms existing OOD detection methods.
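To make the score concrete, below is a minimal sketch in Python of one plausible instantiation: the OOD score is taken to be the minimum 1-Wasserstein distance between the classifier's softmax output and each one-hot class distribution, so a sample far from every class gets a high score. The function name wood_score, the ground metric over class indices, and the threshold tau are illustrative assumptions for this sketch, not the paper's exact formulation (the paper defines its own cost structure for the discrete Wasserstein distance).

import numpy as np
from scipy.stats import wasserstein_distance

def wood_score(softmax_probs: np.ndarray) -> float:
    """Minimum W1 distance between softmax_probs (shape (K,)) and the K
    one-hot class distributions, treating class indices 0..K-1 as points
    on the real line (an illustrative ground metric, not the paper's)."""
    k = softmax_probs.shape[0]
    support = np.arange(k)
    distances = []
    for c in range(k):
        one_hot = np.zeros(k)
        one_hot[c] = 1.0
        distances.append(
            wasserstein_distance(support, support,
                                 u_weights=softmax_probs, v_weights=one_hot)
        )
    # Small score: close to some class distribution, so likely InD.
    return min(distances)

# Usage: flag a test sample as OOD when its score exceeds a threshold tau,
# which would be calibrated on held-out InD data.
probs_ind = np.array([0.90, 0.05, 0.03, 0.02])   # confident, near one-hot
probs_ood = np.array([0.30, 0.25, 0.25, 0.20])   # diffuse, far from any class
tau = 0.5  # illustrative threshold
for name, p in [("InD-like", probs_ind), ("OOD-like", probs_ood)]:
    s = wood_score(p)
    print(f"{name}: score={s:.3f}, OOD={s > tau}")

For training, the abstract says an optimization problem is formulated around this score; one natural reading (again an assumption, not a quotation of the paper) is a joint objective that minimizes classification loss on InD data while maximizing the Wasserstein-based score on OOD training samples.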


Related research

06/19/2022 · Out-of-distribution Detection by Cross-class Vicinity Distribution of In-distribution Data
Deep neural networks only learn to map in-distribution inputs to their c...

04/10/2022 · Effective Out-of-Distribution Detection in Classifier Based on PEDCC-Loss
Deep neural networks suffer from the overconfidence issue in the open wo...

12/20/2021 · Energy-bounded Learning for Robust Models of Code
In programming, learning code representations has a variety of applicati...

09/17/2020 · An Algorithm to Attack Neural Network Encoder-based Out-Of-Distribution Sample Detector
Deep neural network (DNN), especially convolutional neural network, has ...

06/04/2023 · Active Inference-Based Optimization of Discriminative Neural Network Classifiers
Commonly used objective functions (losses) for a supervised optimization...

12/01/2018 · Improving robustness of classifiers by training against live traffic
Deep learning models are known to be overconfident in their predictions ...

09/28/2017 · Distance-based Confidence Score for Neural Network Classifiers
The reliable measurement of confidence in classifiers' predictions is ve...
