Learning undirected models via query training

12/05/2019
by Miguel Lázaro-Gredilla, et al.

Typical amortized inference in variational autoencoders is specialized for a single probabilistic query. Here we propose an inference network architecture that generalizes to unseen probabilistic queries. Instead of an encoder-decoder pair, we train a single inference network directly from data, using a cost function that is stochastic not only over samples but also over queries. This network can perform the same inference tasks as an undirected graphical model with hidden variables, without having to deal with the intractable partition function. The results can be mapped to the learning of an actual undirected model, which is a notoriously hard problem. The network also marginalizes out nuisance variables as required. We show that our approach generalizes to unseen probabilistic queries on unseen test data as well, providing fast and flexible inference. Experiments show that this approach outperforms or matches PCD and AdVIL on 9 benchmark datasets.
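The core idea of a cost that is stochastic over queries can be illustrated with a toy sketch (this is not the paper's architecture; the single-layer network, the mask encoding, and the learning rate are all illustrative assumptions): each training step draws a random observed/hidden split of the variables, feeds the network the observed values plus the mask, and penalizes its predictions only on the hidden positions. At test time the same network answers a specific query.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two correlated binary variables (x1 copies x0 90% of the time).
N, D = 2000, 2
x0 = rng.integers(0, 2, N)
x1 = np.where(rng.random(N) < 0.9, x0, 1 - x0)
X = np.stack([x0, x1], axis=1).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Minimal "inference network" (an assumption; the paper uses a learned deep
# network): input is (masked values, mask), output is a probability per
# variable. Only hidden positions contribute to the loss.
W = np.zeros((2 * D, D))
b = np.zeros(D)
lr, batch = 0.5, 64

for step in range(3000):
    idx = rng.integers(0, N, batch)
    xb = X[idx]
    # Stochastic query: a fresh random observed/hidden split per sample.
    mask = rng.integers(0, 2, (batch, D)).astype(float)
    inp = np.concatenate([xb * mask, mask], axis=1)
    p = sigmoid(inp @ W + b)
    # Gradient of cross-entropy restricted to hidden (mask == 0) positions.
    g = (p - xb) * (1.0 - mask) / batch
    W -= lr * inp.T @ g
    b -= lr * g.sum(axis=0)

# Test-time query: observe x0 = 1, infer P(x1 = 1); should approach 0.9.
q = np.array([[1.0, 0.0, 1.0, 0.0]])  # values = (1, masked), mask = (1, 0)
p_x1 = sigmoid(q @ W + b)[0, 1]
print(f"P(x1=1 | x0=1) ~ {p_x1:.2f}")
```

Because the mask is resampled every step, one set of weights is trained to answer every conditional and marginal query at once, which is the sense in which the cost is stochastic over queries as well as samples.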


