Enhanced Variational Inference with Dyadic Transformation

01/30/2019
by Sarin Chandy, et al.

The variational autoencoder (VAE) is a powerful deep generative model trained with variational inference. In the VAE's original formulation, the latent variables are modeled as normal distributions with a diagonal covariance matrix, which limits the flexibility of the approximate posterior to match the true posterior distribution. We propose a new transformation, the dyadic transformation (DT), that can model a general multivariate normal distribution. DT is a single-stage transformation with low computational requirements. We demonstrate empirically on the MNIST dataset that DT enhances posterior flexibility and attains competitive results compared to other VAE enhancements.
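The abstract does not spell out the form of DT, so the sketch below is only one plausible reading: since a dyad is the rank-one outer product u v^T, a dyadic transformation could be the single affine map z = (I + u v^T) z0 applied to the usual diagonally reparameterized sample z0, which is cheap to evaluate and induces a non-diagonal covariance. All names here (u, v, dyadic_transform) are illustrative assumptions, not identifiers from the paper.

```python
# A minimal sketch, assuming DT is a rank-one (dyadic) affine map;
# this is NOT the paper's verified construction.
import numpy as np

rng = np.random.default_rng(0)
d = 4                                      # latent dimensionality

# Stand-ins for encoder outputs on one example.
mu = rng.normal(size=d)                    # posterior mean
sigma = np.exp(0.1 * rng.normal(size=d))   # diagonal standard deviations

# Learnable vectors defining the assumed rank-one update I + u v^T.
u = 0.5 * rng.normal(size=d)
v = 0.5 * rng.normal(size=d)

def dyadic_transform(z0):
    """Apply z = (I + u v^T) z0 in O(d) time: two dot products, no d x d matmul."""
    return z0 + u * (v @ z0)

eps = rng.standard_normal(d)
z0 = mu + sigma * eps                      # standard VAE reparameterization
z = dyadic_transform(z0)                   # correlated latent sample

# Implied covariance (I + u v^T) diag(sigma^2) (I + u v^T)^T is full,
# unlike the diagonal covariance of the original VAE posterior.
A = np.eye(d) + np.outer(u, v)
print("sample z:", z)
print("implied covariance:\n", A @ np.diag(sigma**2) @ A.T)
```

One design appeal of a rank-one map, if this reading is right, is that its Jacobian determinant is available in closed form via the matrix determinant lemma, det(I + u v^T) = 1 + v^T u, so the change-of-variables term in the evidence lower bound stays cheap to compute.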


Related research

09/09/2019 · Neural Gaussian Copula for Variational Autoencoder
Variational language models seek to estimate the posterior of latent var...

07/10/2020 · Self-Reflective Variational Autoencoder
The Variational Autoencoder (VAE) is a powerful framework for learning p...

06/08/2023 · Unscented Autoencoder
The Variational Autoencoder (VAE) is a seminal approach in deep generati...

02/18/2022 · Unsupervised Multiple-Object Tracking with a Dynamical Variational Autoencoder
In this paper, we present an unsupervised probabilistic model and associ...

08/11/2023 · Hawkes Processes with Delayed Granger Causality
We aim to explicitly model the delayed Granger causal effects based on m...

10/27/2022 · Model Order Selection with Variational Autoencoding
Classical methods for model order selection often fail in scenarios with...

02/19/2018 · Distribution Matching in Variational Inference
The difficulties in matching the latent posterior to the prior, balancin...
