SIReN-VAE: Leveraging Flows and Amortized Inference for Bayesian Networks

04/23/2022
by Jacobie Mouton, et al.

Initial work on variational autoencoders (VAEs) assumed independent latent variables with simple distributions. Subsequent work has explored richer distributions and dependency structures: normalizing flows in the encoder network allow latent variables to entangle non-linearly, yielding a more expressive class of approximate posteriors, while stacking layers of latent variables allows more complex priors to be specified for the generative model. This work explores incorporating arbitrary dependency structures, as specified by Bayesian networks, into VAEs. This is achieved by extending both the prior and the inference network with graphical residual flows: residual flows that encode conditional independence by masking the weight matrices of the flow's residual blocks. We compare our model's performance on several synthetic datasets and show its potential in data-sparse settings.
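The core idea of masking a residual block so its Jacobian sparsity matches a Bayesian network can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: the function names (`parent_mask`, `masked_residual_block`), the single-layer block, and the crude spectral normalization are all assumptions made for clarity. Each output dimension may depend only on itself and its parents in the graph, and scaling the contraction below 1 keeps the residual step invertible, as residual flows require.

```python
import numpy as np

def parent_mask(adj):
    """adj[i, j] = 1 iff x_j is a parent of x_i in the Bayesian network.
    The mask lets each output dimension see only itself and its parents."""
    return adj + np.eye(adj.shape[0])

def masked_residual_block(z, W, mask, scale=0.5):
    """One residual step z + g(z). Masking W makes g's Jacobian sparsity
    match the graph; normalizing by the spectral norm and scaling by
    scale < 1 keeps g contractive, so the block is invertible."""
    Wm = W * mask
    sigma = np.linalg.norm(Wm, ord=2)   # largest singular value
    if sigma > 1.0:
        Wm = Wm / sigma                 # Lipschitz(g) <= scale < 1
    return z + scale * np.tanh(z @ Wm.T)

# Toy chain graph x1 -> x2 -> x3
adj = np.array([[0, 0, 0],
                [1, 0, 0],
                [0, 1, 0]], dtype=float)
mask = parent_mask(adj)

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3))

z = np.array([[0.3, -1.2, 0.7]])
out = masked_residual_block(z, W, mask)
```

Because x3 is a parent of neither x1 nor x2 in this chain, perturbing the third coordinate of `z` leaves the first two output coordinates unchanged, which is exactly the conditional-independence structure the mask is meant to encode.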


