Deep Generative Models Strike Back! Improving Understanding and Evaluation in Light of Unmet Expectations for OoD Data

11/12/2019
by   John Just, et al.

Advances in deep generative and density models have shown impressive capacity to model complex probability density functions in lower-dimensional space. However, applying such models to high-dimensional image data to model the PDF has shown poor generalization, with out-of-distribution (OoD) data being assigned equal or higher likelihood than in-sample data. Proposed remedies deviate from a fully unsupervised approach, requiring large ensembles or additional knowledge about the data that is not commonly available in the real world. In this work, the previously offered reasoning behind these issues is challenged empirically, and it is shown that datasets such as MNIST fashion/digits and CIFAR10/SVHN are trivially separable and have no overlap on their respective data manifolds that would explain the higher OoD likelihood. Masked autoregressive flows and block neural autoregressive flows are shown not to suffer from OoD likelihood issues to the extent that GLOW, PixelCNN++, and Real NVP do. A new avenue is also explored: a change of basis to a new space of the same dimension, with an orthonormal basis of eigenvectors, before modeling. On the test datasets and models, this pushes down the relative likelihood of the contrastive OoD dataset and improves discrimination results. The density of the original space is preserved up to the change of basis, and invertibility remains tractable. Finally, the previous generation of generative models, in the form of probabilistic principal component analysis (PPCA), is revisited for the same datasets and shown to discriminate anomalies based on likelihood in a fully unsupervised fashion, performing well compared with PixelCNN++, GLOW, and Real NVP while requiring less complexity and faster training. Dimensionality reduction using PCA is also shown to improve anomaly detection in generative models.

