An Algorithm to Attack Neural Network Encoder-based Out-Of-Distribution Sample Detector

09/17/2020
by Liang Liang, et al.

Deep neural networks (DNNs), especially convolutional neural networks, have achieved superior performance on image classification tasks. However, such performance is only guaranteed if the input to a trained model is similar to the training samples, i.e., if the input follows the probability distribution of the training set. Out-Of-Distribution (OOD) samples do not follow that distribution, so the class labels predicted for OOD samples are meaningless. Classification-based methods have been proposed for OOD detection; in this study, however, we show that this type of method is theoretically ineffective and practically breakable because of the dimensionality reduction performed inside the model. We also show that Glow likelihood-based OOD detection is ineffective. Our analysis is demonstrated on five open datasets, including a COVID-19 CT dataset. Finally, we present a simple theoretical solution with guaranteed performance for OOD detection.
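To make the attack idea concrete, below is a minimal sketch (in PyTorch) of the generic strategy the title refers to: perturbing an OOD image so that a frozen encoder maps it close to the feature vector of a known in-distribution image, which then fools any detector that operates on the encoder's output. The names `encoder`, `x_ood`, and `x_id`, the optimizer settings, and the L-infinity budget `eps` are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch, not the authors' exact algorithm: craft a bounded
# perturbation of an OOD image so that a frozen encoder's feature vector
# matches that of an in-distribution image.
import torch

def attack_encoder_detector(encoder, x_ood, x_id, steps=200, lr=0.01, eps=0.1):
    """Return a perturbed copy of x_ood whose encoding mimics encoder(x_id)."""
    encoder.eval()
    with torch.no_grad():
        z_target = encoder(x_id)                      # in-distribution feature to mimic
    delta = torch.zeros_like(x_ood, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = (x_ood + delta).clamp(0.0, 1.0)       # keep pixels in a valid range
        loss = torch.nn.functional.mse_loss(encoder(x_adv), z_target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)                   # enforce the L-infinity budget
    return (x_ood + delta.detach()).clamp(0.0, 1.0)
```

The sketch exploits exactly the weakness the abstract identifies: because the encoder reduces dimensionality, the mapping from images to feature vectors is many-to-one, so visually distinct inputs (including OOD ones) can share the same encoding, and no detector that sees only the encoding can tell them apart.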



Related research:

10/19/2022 · Training set cleansing of backdoor poisoning by self-supervised representation learning
A backdoor or Trojan attack is an important type of data poisoning attac...

06/28/2021 · Dataset Bias Mitigation Through Analysis of CNN Training Scores
Training datasets are crucial for convolutional neural network-based alg...

02/05/2018 · A Method for Restoring the Training Set Distribution in an Image Classifier
Convolutional Neural Networks are a well-known staple of modern image cl...

01/08/2019 · Comparing Sample-wise Learnability Across Deep Neural Network Models
Estimating the relative importance of each sample in a training set has ...

06/04/2023 · Active Inference-Based Optimization of Discriminative Neural Network Classifiers
Commonly used objective functions (losses) for a supervised optimization...

12/13/2021 · WOOD: Wasserstein-based Out-of-Distribution Detection
The training and test data for deep-neural-network-based classifiers are...

12/16/2022 · An unfolding method based on conditional Invertible Neural Networks (cINN) using iterative training
The unfolding of detector effects is crucial for the comparison of data ...
