Cauchy-Schwarz Regularized Autoencoder

01/06/2021
by Linh Tran, et al.

Recent work in unsupervised learning has focused on efficient inference and learning in latent variable models. Training these models by maximizing the evidence (marginal likelihood) is typically intractable, so a common approximation is to maximize the evidence lower bound (ELBO) instead. Variational autoencoders (VAEs) are a powerful and widely used class of generative models that optimize the ELBO efficiently for large datasets. However, the VAE's default Gaussian prior imposes a strong constraint on its ability to represent the true posterior, thereby degrading overall performance. A Gaussian mixture model (GMM) would be a richer prior, but it cannot be handled efficiently within the VAE framework because the Kullback-Leibler divergence between GMMs is intractable. We challenge the adoption of the VAE framework on this specific point in favor of one that admits an analytical solution for a Gaussian mixture prior. To perform efficient inference under GMM priors, we introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs. This new objective allows us to incorporate richer, multi-modal priors into the auto-encoding framework. We provide empirical studies on a range of datasets and show that our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
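The crux of the abstract's claim is that, unlike the KL divergence, the Cauchy-Schwarz divergence D_CS(p, q) = -log( ∫ p(x) q(x) dx / sqrt( ∫ p(x)^2 dx ∫ q(x)^2 dx ) ) has a closed form when p and q are Gaussian mixtures, because every pairwise integral collapses via the Gaussian product identity ∫ N(x; mu_i, C_i) N(x; m_j, S_j) dx = N(mu_i; m_j, C_i + S_j). As a rough illustration of why no sampling is needed, here is a minimal NumPy sketch; this is not the authors' implementation, and the function names and the naive double loop over components are our own:

import numpy as np

def gauss_pdf(x, mu, cov):
    # Density N(x; mu, cov) of a full-covariance Gaussian.
    d = len(mu)
    diff = np.asarray(x, float) - np.asarray(mu, float)
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    return np.exp(-0.5 * diff @ np.linalg.solve(cov, diff)) / norm

def gmm_overlap(p, q):
    # Closed-form integral of p(x) q(x) dx for two mixtures, using
    # int N(x; mu_i, C_i) N(x; m_j, S_j) dx = N(mu_i; m_j, C_i + S_j).
    (w, mus, covs), (v, ms, ss) = p, q
    return sum(wi * vj * gauss_pdf(mi, mj, ci + sj)
               for wi, mi, ci in zip(w, mus, covs)
               for vj, mj, sj in zip(v, ms, ss))

def cs_divergence(p, q):
    # D_CS(p, q) = -log( <p,q> / sqrt(<p,p> <q,q>) ); zero iff p equals q.
    return -np.log(gmm_overlap(p, q)
                   / np.sqrt(gmm_overlap(p, p) * gmm_overlap(q, q)))

# Example: a unit Gaussian vs. a symmetric two-component mixture in 2-D.
p = ([1.0], [np.zeros(2)], [np.eye(2)])
q = ([0.5, 0.5], [np.full(2, 2.0), np.full(2, -2.0)], [np.eye(2)] * 2)
print(cs_divergence(p, q))  # a nonnegative scalar, computed without sampling

Because each term is just a Gaussian density evaluated at another component's mean, the same quantity (and its gradient) can be computed inside a training loop, which is what makes a multi-modal GMM prior workable where the KL term of the standard ELBO is not.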


Related research

09/14/2018 - Variational Autoencoder with Implicit Optimal Priors
The variational autoencoder (VAE) is a powerful generative model that ca...

10/26/2018 - Resampled Priors for Variational Autoencoders
We propose Learned Accept/Reject Sampling (LARS), a method for construct...

11/25/2019 - Improving VAE generations of multimodal data through data-dependent conditional priors
One of the major shortcomings of variational autoencoders is the inabili...

06/16/2019 - Fixing Gaussian Mixture VAEs for Interpretable Text Generation
Variational auto-encoder (VAE) with Gaussian priors is effective in text...

08/11/2023 - Learning Distributions via Monte-Carlo Marginalization
We propose a novel method to learn intractable distributions from their ...

10/24/2022 - On the failure of variational score matching for VAE models
Score matching (SM) is a convenient method for training flexible probabi...

10/05/2020 - Bigeminal Priors Variational auto-encoder
Variational auto-encoders (VAEs) are an influential and generally-used c...
