Perceptual Generative Autoencoders

06/25/2019
by   Zijun Zhang, et al.

Modern generative models are usually designed to match target distributions directly in the data space, where the intrinsic dimensionality of the data can be much lower than the ambient dimensionality. We argue that this discrepancy may contribute to the difficulties in training generative models. We therefore propose to map both the generated and target distributions to the latent space using the encoder of a standard autoencoder, and to train the generator (or decoder) to match the target distribution in the latent space. The resulting method, the perceptual generative autoencoder (PGA), is then combined with either a maximum-likelihood or a variational autoencoder (VAE) objective to train the generative model. With maximum likelihood, PGAs generalize the idea of reversible generative models to unrestricted neural network architectures and arbitrary latent dimensionalities. When combined with VAEs, PGAs can generate sharper samples than vanilla VAEs. Compared to other autoencoder-based generative models using simple priors, PGAs achieve state-of-the-art FID scores on CIFAR-10 and CelebA.
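The core loop described above — sample from a simple prior, map both generated and real samples through a shared encoder, and fit the decoder so the two latent distributions agree — can be sketched in a toy linear setting. The sketch below is illustrative, not the paper's method: a covariance-matching loss stands in for the maximum-likelihood/VAE objectives, the encoder is a fixed PCA projection rather than a trained autoencoder, and the names (`E`, `G`, `cov_t`) are my own.

```python
import numpy as np

rng = np.random.default_rng(0)
D, d, n = 8, 2, 4096            # ambient dim, latent dim, sample count

# Toy data living on a d-dimensional linear manifold embedded in R^D.
A = rng.normal(size=(D, d)) / np.sqrt(D)
x = rng.normal(size=(n, d)) @ A.T

# Fixed linear "encoder" E: the top-d principal directions of the data
# (a stand-in for the trained autoencoder's encoder in the paper).
_, _, Vt = np.linalg.svd(x - x.mean(0), full_matrices=False)
E = Vt[:d]                      # shape (d, D), orthonormal rows

# Target latent distribution: the data mapped through the encoder.
z_data = x @ E.T
z_data -= z_data.mean(0)        # center; the generated latents are zero-mean too
cov_t = np.cov(z_data.T)        # (d, d) target latent covariance

# Linear "decoder" G, trained so that the encoded generated samples
# E @ G @ z (z ~ N(0, I)) match z_data in distribution.  Here that means
# matching covariances, a toy stand-in for the paper's ML/VAE objectives.
G = 0.1 * rng.normal(size=(D, d))
lr = 0.05
for _ in range(4000):
    P = E @ G                   # latent-to-latent map of the generator
    delta = P @ P.T - cov_t     # gap between generated and target covariances
    G -= lr * 4.0 * E.T @ delta @ P   # gradient of ||P P^T - cov_t||_F^2

print(np.linalg.norm(E @ G @ (E @ G).T - cov_t))  # small: latent distributions match
```

The point of the construction is that the matching happens in R^d, where the target distribution has full support, rather than in R^D, where the data concentrate on a lower-dimensional manifold — which is the mismatch the abstract identifies as a source of training difficulty.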

Related research

Training Generative Reversible Networks (06/05/2018)
Generative models with an encoding component such as autoencoders curren...

Copulas as High-Dimensional Generative Models: Vine Copula Autoencoders (06/12/2019)
We propose a vine copula autoencoder to construct flexible generative mo...

Score-based Generative Modeling in Latent Space (06/10/2021)
Score-based generative models (SGMs) have recently demonstrated impressi...

NCP-VAE: Variational Autoencoders with Noise Contrastive Priors (10/06/2020)
Variational autoencoders (VAEs) are one of the powerful likelihood-based...

Learning Perceptual Manifold of Fonts (06/17/2021)
Along the rapid development of deep learning techniques in generative mo...

One-element Batch Training by Moving Window (05/30/2019)
Several deep models, esp. the generative, compare the samples from two d...

Cramer-Wold AutoEncoder (05/23/2018)
We propose a new generative model, Cramer-Wold Autoencoder (CWAE). Follo...
