Probabilistic Auto-Encoder
We introduce the Probabilistic Auto-Encoder (PAE), a generative model with a lower-dimensional latent space, built from an Auto-Encoder that is interpreted probabilistically after training by means of a Normalizing Flow. The PAE combines the advantages of an Auto-Encoder, namely fast, easy training and small reconstruction error, with the desired properties of a generative model, such as high sample quality and good performance in downstream tasks. Compared to a VAE and its common variants, the PAE trains faster, reaches lower reconstruction error, and achieves state-of-the-art sample quality without parameter fine-tuning or annealing schemes. We further demonstrate that the PAE is a powerful model for the downstream tasks of outlier detection and probabilistic image reconstruction: 1) starting from the Laplace approximation to the marginal likelihood, we identify a PAE-based outlier detection metric that achieves state-of-the-art results in Out-of-Distribution (OoD) detection, outperforming other likelihood-based estimators; 2) using posterior analysis in the PAE latent space, we perform high-dimensional data inpainting and denoising with uncertainty quantification.
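To make the two-stage recipe concrete, here is a minimal PyTorch sketch of the PAE idea, not the authors' implementation: stage 1 trains an ordinary auto-encoder on reconstruction error; stage 2 fits a normalizing flow (here a small RealNVP-style stack of affine couplings, chosen for brevity) to the encoded latents, giving the latent space a tractable density. All names (`AutoEncoder`, `LatentFlow`, `train_pae`) and architecture choices are illustrative assumptions.

```python
# Minimal PAE sketch (hypothetical names; the paper's flow and networks differ).
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, x_dim, z_dim, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, x_dim))

class AffineCoupling(nn.Module):
    """One RealNVP-style layer: scales/shifts half of z conditioned on the other half."""
    def __init__(self, z_dim, hidden=64, flip=False):
        super().__init__()
        assert z_dim % 2 == 0, "sketch assumes an even latent dimension"
        self.d, self.flip = z_dim // 2, flip
        self.net = nn.Sequential(nn.Linear(self.d, hidden), nn.ReLU(),
                                 nn.Linear(hidden, z_dim))
    def forward(self, z):
        a, b = z[:, :self.d], z[:, self.d:]
        if self.flip:                       # alternate which half conditions
            a, b = b, a
        s, t = self.net(a).chunk(2, dim=1)
        s = torch.tanh(s)                   # keep scales numerically tame
        b = b * torch.exp(s) + t
        out = torch.cat([b, a], 1) if self.flip else torch.cat([a, b], 1)
        return out, s.sum(dim=1)            # log|det Jacobian| of this layer

class LatentFlow(nn.Module):
    def __init__(self, z_dim, n_layers=6):
        super().__init__()
        self.layers = nn.ModuleList(
            AffineCoupling(z_dim, flip=bool(i % 2)) for i in range(n_layers))
        self.base = torch.distributions.Normal(0.0, 1.0)
    def log_prob(self, z):                  # exact density via change of variables
        logdet = z.new_zeros(z.shape[0])
        for layer in self.layers:
            z, ld = layer(z)
            logdet = logdet + ld
        return self.base.log_prob(z).sum(dim=1) + logdet

def train_pae(x, z_dim=8, epochs=200, lr=1e-3):
    # Stage 1: plain auto-encoder, ordinary reconstruction loss.
    ae = AutoEncoder(x.shape[1], z_dim)
    opt = torch.optim.Adam(ae.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = ((ae.dec(ae.enc(x)) - x) ** 2).mean()
        loss.backward()
        opt.step()
    # Stage 2: density-estimate the frozen latents with the flow.
    flow = LatentFlow(z_dim)
    opt = torch.optim.Adam(flow.parameters(), lr=lr)
    z = ae.enc(x).detach()
    for _ in range(epochs):
        opt.zero_grad()
        nll = -flow.log_prob(z).mean()
        nll.backward()
        opt.step()
    return ae, flow
```

Sampling would draw from the flow's base distribution, invert the coupling layers (each inverts in closed form) and decode the result. For OoD scoring, `-flow.log_prob(ae.enc(x_new))`, optionally combined with reconstruction error, is only a simplified stand-in for the Laplace-approximation metric the paper derives.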