DOI: Divergence-based Out-of-Distribution Indicators via Deep Generative Models

08/12/2021
by Wenxiao Chen, et al.

To ensure robust and reliable classification results, OoD (out-of-distribution) indicators based on deep generative models have recently been proposed and shown to work well on small datasets. In this paper, we conduct the first large collection of benchmarks (containing 92 dataset pairs, one order of magnitude larger than previous collections) for existing OoD indicators and observe that none of them performs well. We thus advocate that a large collection of benchmarks is mandatory for evaluating OoD indicators. We propose a novel theoretical framework, DOI, for divergence-based (rather than traditional likelihood-based) out-of-distribution indicators in deep generative models. Following this framework, we further propose a simple and effective OoD detection algorithm, Single-shot Fine-tune. It significantly outperforms past works by 5 to 8 points in AUROC, and its performance is close to optimal. Recently, the likelihood criterion has been shown to be ineffective in detecting OoD. Single-shot Fine-tune instead proposes a novel fine-tune criterion: a testing sample is flagged as OoD according to whether its likelihood improves after fine-tuning a well-trained model on it. The fine-tune criterion is clear and easy to follow, and we believe it can lead the OoD domain into a new stage.
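As a rough illustration of the fine-tune criterion described above, the sketch below scores a single test sample by how much a generative model's log-likelihood on that sample improves after briefly fine-tuning a copy of the model on it. The model interface (a differentiable log_prob), the optimizer, learning rate, and step count are assumptions for illustration only, not details taken from the paper.

# Minimal sketch of a fine-tune-style OoD score (illustrative, not the paper's
# exact algorithm). Assumes `model` is a PyTorch module exposing log_prob(x),
# e.g., a normalizing flow; hyperparameters are placeholders.
import copy
import torch


def fine_tune_score(model, x, lr=1e-4, steps=1):
    """Return the log-likelihood gain on x after briefly fine-tuning a copy of model."""
    with torch.no_grad():
        ll_before = model.log_prob(x).sum()

    # Fine-tune a copy so the original, well-trained model is left untouched.
    tuned = copy.deepcopy(model)
    opt = torch.optim.SGD(tuned.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Maximize the likelihood of the single test sample.
        loss = -tuned.log_prob(x).sum()
        loss.backward()
        opt.step()

    with torch.no_grad():
        ll_after = tuned.log_prob(x).sum()

    # Assumption: a larger gain suggests x is out-of-distribution, since a
    # well-trained model has less room to improve on in-distribution data.
    return (ll_after - ll_before).item()

In use, such a score would be thresholded (or ranked, as in AUROC evaluation) to flag OoD inputs; the copy-and-restore step simply keeps the deployed model unchanged between test samples.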


Related research:

12/01/2020 · How to fine-tune deep neural networks in few-shot learning?
Deep learning has been widely used in data-intensive applications. Howev...

11/02/2021 · OSOA: One-Shot Online Adaptation of Deep Generative Models for Lossless Compression
Explicit deep generative models (DGMs), e.g., VAEs and Normalizing Flows...

06/07/2019 · Detecting Out-of-Distribution Inputs to Deep Generative Models Using a Test for Typicality
Recent work has shown that deep generative models can assign higher like...

10/22/2018 · Do Deep Generative Models Know What They Don't Know?
A neural network deployed in the wild may be asked to make predictions f...

07/14/2021 · Understanding Failures in Out-of-Distribution Detection with Deep Generative Models
Deep generative models (DGMs) seem a natural fit for detecting out-of-di...

06/15/2021 · Divergence Frontiers for Generative Models: Sample Complexity, Quantization Level, and Frontier Integral
The spectacular success of deep generative models calls for quantitative...

11/04/2020 · Can We Trust Deep Speech Prior?
Recently, speech enhancement (SE) based on deep speech prior has attract...
