Out-of-Distribution Detection with Distance Guarantee in Deep Generative Models

02/09/2020
by   Yufeng Zhang, et al.
Recent research has shown that it is challenging to detect out-of-distribution (OOD) data in deep generative models, including flow-based models and variational autoencoders (VAEs). In this paper, we prove a theorem stating that, for a well-trained flow-based model, the distance between the distribution of representations of an OOD dataset and the prior is large, as long as the distance between the distributions of the training dataset and the OOD dataset is large. Furthermore, we observe that, for flow-based models and VAEs with a factorized prior, the representations of OOD datasets are more correlated than those of the training dataset. Based on our theorem and this observation, we propose detecting OOD data according to the total correlation of representations in flow-based models and VAEs. Experimental results show that our method achieves nearly 100% AUROC on all the widely used benchmarks and is robust to data manipulation, whereas the state-of-the-art method performs no better than random guessing on challenging problems and can be fooled by data manipulation in almost all cases.
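To make the proposed score concrete: total correlation is the KL divergence between the joint distribution of the latent code and the product of its marginals, so it measures how far the representations are from being factorized. The abstract does not specify the paper's estimator, so the sketch below uses a common Gaussian approximation, under which total correlation reduces to -0.5 * log det of the correlation matrix of the latent codes; the function name and the toy data are illustrative, not from the paper.

```python
import numpy as np

def gaussian_total_correlation(z):
    """Estimate the total correlation of latent codes z (n_samples x dim)
    under a Gaussian approximation: TC = -0.5 * log det(correlation matrix).
    Independent dimensions give a value near 0; correlated dimensions give
    a larger value, which the method uses as an OOD score."""
    corr = np.corrcoef(z, rowvar=False)       # dim x dim sample correlation matrix
    _, logdet = np.linalg.slogdet(corr)       # stable log-determinant
    return -0.5 * logdet

# Toy check: nearly independent latents score low, correlated latents score high.
rng = np.random.default_rng(0)
independent = rng.standard_normal((5000, 8))
shared = rng.standard_normal((5000, 1))
correlated = shared + 0.5 * rng.standard_normal((5000, 8))
assert gaussian_total_correlation(independent) < gaussian_total_correlation(correlated)
```

In practice one would encode a batch of test inputs with the trained flow or VAE, compute this score on the resulting latent codes, and flag the batch as OOD when the score exceeds a threshold calibrated on in-distribution data.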


