Do Deep Generative Models Know What They Don't Know?

10/22/2018
by Eric Nalisnick, et al.

A neural network deployed in the wild may be asked to make predictions for inputs that were drawn from a different distribution than that of the training data. A plethora of work has demonstrated that it is easy to find or synthesize inputs for which a neural network is highly confident yet wrong. Generative models are widely viewed as robust to such mistaken confidence, since modeling the density of the input features can be used to detect novel, out-of-distribution inputs. In this paper we challenge this assumption. We find that the model density from flow-based models, VAEs, and PixelCNNs cannot distinguish images of common objects such as dogs, trucks, and horses (i.e. CIFAR-10) from those of house numbers (i.e. SVHN), assigning a higher likelihood to the latter when the model is trained on the former. We focus our analysis on flow-based generative models in particular, since they are trained and evaluated via the exact marginal likelihood. We find that such behavior persists even when we restrict the flow models to constant-volume transformations. These transformations admit some theoretical analysis, and we show that the difference in likelihoods can be explained by the location and variances of the data and the model curvature, which indicates that such behavior is more general and not restricted to the particular pairs of datasets used in our experiments. Our results caution against using the density estimates from deep generative models to identify inputs similar to the training distribution until their behavior on out-of-distribution inputs is better understood.
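The headline comparison is easy to probe empirically. The sketch below is illustrative rather than a reproduction of the paper's setup: instead of a trained Glow, VAE, or PixelCNN, it fits a diagonal Gaussian to CIFAR-10 pixel values as a stand-in density model and compares average bits per dimension on CIFAR-10 versus SVHN test images. The "data" root directory, the 5,000-image sample size, and the Gaussian stand-in are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch: compare per-dimension log-likelihoods on in-distribution
# (CIFAR-10) vs. out-of-distribution (SVHN) images, using a diagonal Gaussian
# fitted to CIFAR-10 pixels as a stand-in for a trained deep generative model.
import numpy as np
import torchvision
import torchvision.transforms as T

to_tensor = T.ToTensor()  # converts PIL images to tensors with pixels in [0, 1]

def as_array(dataset, n=5000):
    """Flatten the first n images of a torchvision dataset into an (n, d) array."""
    xs = [to_tensor(dataset[i][0]).numpy().ravel() for i in range(min(n, len(dataset)))]
    return np.stack(xs)

cifar_train = torchvision.datasets.CIFAR10("data", train=True, download=True)
cifar_test = torchvision.datasets.CIFAR10("data", train=False, download=True)
svhn_test = torchvision.datasets.SVHN("data", split="test", download=True)

# Fit a diagonal Gaussian to the training pixels (mean and per-dimension variance).
x_train = as_array(cifar_train)
mu, var = x_train.mean(axis=0), x_train.var(axis=0) + 1e-6

def bits_per_dim(x):
    """Average negative log-likelihood in bits per dimension under the fitted Gaussian."""
    d = x.shape[1]
    log_prob = -0.5 * (((x - mu) ** 2) / var + np.log(2 * np.pi * var)).sum(axis=1)
    return -log_prob / (d * np.log(2))

print("CIFAR-10 test bits/dim:", bits_per_dim(as_array(cifar_test)).mean())
print("SVHN test bits/dim:   ", bits_per_dim(as_array(svhn_test)).mean())
```

Lower bits per dimension corresponds to higher likelihood; running the same comparison with a trained flow model is how the CIFAR-10/SVHN mismatch described in the abstract would be observed.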


Related research

06/07/2019 · Detecting Out-of-Distribution Inputs to Deep Generative Models Using a Test for Typicality
Recent work has shown that deep generative models can assign higher like...

02/16/2021 · Hierarchical VAEs Know What They Don't Know
Deep generative models have shown themselves to be state-of-the-art dens...

05/25/2020 · Network Bending: Manipulating The Inner Representations of Deep Generative Models
We introduce a new framework for interacting with and manipulating deep ...

05/25/2016 · Asymptotically exact inference in differentiable generative models
Many generative models can be expressed as a differentiable function of ...

08/12/2021 · DOI: Divergence-based Out-of-Distribution Indicators via Deep Generative Models
To ensure robust and reliable classification results, OoD (out-of-distri...

09/25/2019 · Input complexity and out-of-distribution detection with likelihood-based generative models
Likelihood-based generative models are a promising resource to detect ou...

02/09/2020 · Out-of-Distribution Detection with Distance Guarantee in Deep Generative Models
Recent research has shown that it is challenging to detect out-of-distri...
