A Universal Marginalizer for Amortized Inference in Generative Models

11/02/2017
by Laura Douglas, et al.

We consider the problem of inference in a causal generative model where the set of available observations differs between data instances. We show how combining samples drawn from the graphical model with an appropriate masking function makes it possible to train a single neural network to approximate all the corresponding conditional marginal distributions and thus amortize the cost of inference. We further demonstrate that the efficiency of importance sampling may be improved by basing proposals on the output of the neural network. We also outline how the same network can be used to generate samples from an approximate joint posterior via a chain decomposition of the graph.
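Below is a minimal sketch (not the authors' code) of the training idea the abstract describes: samples are drawn from a generative model, a random subset of variables is masked out, and a single network is trained with a cross-entropy loss so that its outputs approximate the conditional marginals P(X_i = 1 | observed variables) for any observation pattern. The toy three-node Bayesian network, the encoding of observations as a value vector plus an "observed" flag vector, and all names (UniversalMarginalizer, sample_bn, MASK_PROB) are illustrative assumptions, not details taken from the paper.

```python
# Sketch: amortized conditional-marginal estimation for a toy binary chain A -> B -> C.
import torch
import torch.nn as nn

N_NODES = 3          # hypothetical chain A -> B -> C of binary variables
MASK_PROB = 0.5      # probability of hiding each node during training (assumed)

def sample_bn(batch):
    """Ancestral sampling from the toy chain A -> B -> C."""
    a = torch.bernoulli(torch.full((batch,), 0.3))
    b = torch.bernoulli(0.2 + 0.6 * a)   # P(B=1|A=1)=0.8, P(B=1|A=0)=0.2
    c = torch.bernoulli(0.1 + 0.6 * b)   # P(C=1|B=1)=0.7, P(C=1|B=0)=0.1
    return torch.stack([a, b, c], dim=1)  # shape (batch, N_NODES)

def mask(x):
    """Hide a random subset of nodes; return (masked values, observed flags)."""
    observed = (torch.rand_like(x) > MASK_PROB).float()
    return x * observed, observed         # unobserved values are zeroed out

class UniversalMarginalizer(nn.Module):
    """One network that outputs a marginal for every node, given any evidence set."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * N_NODES, 32), nn.ReLU(),
            nn.Linear(32, N_NODES), nn.Sigmoid())

    def forward(self, values, observed):
        return self.net(torch.cat([values, observed], dim=1))

model = UniversalMarginalizer()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
bce = nn.BCELoss()

for step in range(2000):
    x = sample_bn(256)
    values, observed = mask(x)
    # Cross-entropy against the *complete* sample: in expectation this loss is
    # minimized by P(X_i = 1 | the observed subset), so one network amortizes
    # inference over every possible evidence pattern.
    loss = bce(model(values, observed), x)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Query the trained network: marginals of all nodes given A = 1 (B, C unobserved).
with torch.no_grad():
    vals = torch.tensor([[1.0, 0.0, 0.0]])
    obs = torch.tensor([[1.0, 0.0, 0.0]])
    print(model(vals, obs))  # approx. [P(A=1|A=1), P(B=1|A=1), P(C=1|A=1)]
```

In the same spirit, the per-node outputs could then be used as proposal probabilities for sequential importance sampling over the unobserved nodes, which is the use the abstract describes; that step is omitted from this sketch for brevity.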
