Benefiting Deep Latent Variable Models via Learning the Prior and Removing Latent Regularization

07/07/2020
by   Rogan Morrow, et al.

There exist many forms of deep latent variable models, such as the variational autoencoder and the adversarial autoencoder. Regardless of the specific class of model, there is an implicit consensus that the latent distribution should be regularized towards the prior, even when the prior distribution is itself learned. Upon investigating the effect of latent regularization on image generation, our results indicate that when a sufficiently expressive prior is learned, latent regularization is not necessary and may in fact be harmful insofar as image quality is concerned. We additionally investigate the benefit of learned priors on two common problems in computer vision: latent variable disentanglement, and diversity in image-to-image translation.
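The regularization the abstract refers to is, in the standard VAE, a KL-divergence term pulling the approximate posterior towards the prior. The sketch below is a minimal illustration, not the paper's method: it computes the closed-form KL between a diagonal Gaussian posterior and a standard-normal prior, with a hypothetical weight `beta` on the regularizer. Setting `beta = 0` corresponds to removing latent regularization, the setting the abstract argues for once an expressive prior is learned. All function names and parameters here are assumptions for illustration.

```python
import math

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ),
    # summed over latent dimensions.
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                     for m, lv in zip(mu, log_var))

def latent_objective(recon_error, mu, log_var, beta=1.0):
    # Hypothetical per-example objective: reconstruction error plus a
    # weighted latent regularizer. beta=1 recovers the usual VAE ELBO
    # form; beta=0 drops the regularizer entirely, which is the
    # "removing latent regularization" setting discussed above.
    return recon_error + beta * kl_to_standard_normal(mu, log_var)
```

For example, with `mu = [0, 0]` and `log_var = [0, 0]` the posterior already matches the prior, so the KL term vanishes and the objective reduces to the reconstruction error alone.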


Related research

- 10/04/2019 · Stacked Wasserstein Autoencoder
  Approximating distributions over complicated manifolds, such as natural ...

- 06/02/2022 · Indeterminacy in Latent Variable Models: Characterization and Strong Identifiability
  Most modern latent variable and probabilistic generative models, such as...

- 01/20/2023 · Opaque prior distributions in Bayesian latent variable models
  We review common situations in Bayesian latent variable models where the...

- 12/12/2017 · GibbsNet: Iterative Adversarial Inference for Deep Graphical Models
  Directed latent variable models that formulate the joint distribution as...

- 08/17/2021 · SPMoE: Generate Multiple Pattern-Aware Outputs with Sparse Pattern Mixture of Experts
  Many generation tasks follow a one-to-many mapping relationship: each in...

- 09/08/2022 · Representing Camera Response Function by a Single Latent Variable and Fully Connected Neural Network
  Modelling the mapping from scene irradiance to image intensity is essent...

- 02/12/2019 · Density Estimation and Incremental Learning of Latent Vector for Generative Autoencoders
  In this paper, we treat the image generation task using the autoencoder,...
