Revisiting Structured Variational Autoencoders

05/25/2023
by Yixiu Zhao et al.

Structured variational autoencoders (SVAEs) combine probabilistic graphical model priors on latent variables, deep neural networks to link latent variables to observed data, and structure-exploiting algorithms for approximate posterior inference. These models are particularly appealing for sequential data, where the prior can capture temporal dependencies. However, despite their conceptual elegance, SVAEs have proven difficult to implement, and more general approaches have been favored in practice. Here, we revisit SVAEs using modern machine learning tools and demonstrate their advantages over more general alternatives in terms of both accuracy and efficiency. First, we develop a modern implementation for hardware acceleration, parallelization, and automatic differentiation of the message passing algorithms at the core of the SVAE. Second, we show that by exploiting structure in the prior, the SVAE learns more accurate models and posterior distributions, which translate into improved performance on prediction tasks. Third, we show how the SVAE can naturally handle missing data, and we leverage this ability to develop a novel, self-supervised training approach. Altogether, these results show that the time is ripe to revisit structured variational autoencoders.
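The core idea — a conjugate graphical-model prior conditioned on Gaussian potentials emitted by a recognition network — can be illustrated with a toy example. The sketch below (not the paper's implementation; the function name and 1-D linear-Gaussian setup are illustrative assumptions) runs a Kalman filter over a scalar state-space prior, treating each encoder output as a Gaussian potential and marking missing observations with `None`, which simply skips the conditioning step:

```python
import numpy as np

def kalman_filter(potentials, a=0.9, q=0.1, m0=0.0, p0=1.0):
    """Filter a 1-D linear-Gaussian state-space prior z_t = a*z_{t-1} + eps,
    eps ~ N(0, q), against per-step Gaussian potentials (mu_t, var_t),
    standing in for the output of a recognition network.
    A potential of None marks a missing observation, which skips the
    conditioning step. Returns filtered means and variances."""
    means, variances = [], []
    m, p = m0, p0
    for pot in potentials:
        # Predict: push the previous belief through the linear dynamics.
        m, p = a * m, a * a * p + q
        if pot is not None:
            mu, var = pot
            # Condition on the Gaussian potential (conjugate update).
            k = p / (p + var)          # Kalman gain
            m = m + k * (mu - m)
            p = (1.0 - k) * p
        means.append(m)
        variances.append(p)
    return np.array(means), np.array(variances)

# Three time steps; the middle observation is missing.
means, variances = kalman_filter([(1.0, 0.5), None, (0.8, 0.5)])
```

Because every update is in closed form, the posterior stays exact given the potentials, and a missing step just widens the predictive variance — the mechanism behind the self-supervised training described in the abstract. The paper's actual models use richer priors (e.g. switching dynamics) and hardware-accelerated, differentiable message passing rather than this scalar loop.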

