
Neural Variational Inference and Learning in Undirected Graphical Models
Many problems in machine learning are naturally expressed in the languag...

Black Box Variational Inference
Variational inference has become a widely used method to approximate pos...

Stochastic gradient variational Bayes for gamma approximating distributions
While stochastic variational inference is relatively well known for scal...

Likelihood Almost Free Inference Networks
Variational inference for latent variable models is prevalent in various...

Deep Gaussian Markov random fields
Gaussian Markov random fields (GMRFs) are probabilistic graphical models...

Sampling-Free Variational Inference of Bayesian Neural Nets
We propose a new Bayesian Neural Net (BNN) formulation that affords vari...

VBALD - Variational Bayesian Approximation of Log Determinants
Evaluating the log determinant of a positive definite matrix is ubiquito...
Adversarial Variational Inference and Learning in Markov Random Fields
Markov random fields (MRFs) find applications in a variety of machine learning areas, but inference and learning in such models are challenging in general. In this paper, we propose the Adversarial Variational Inference and Learning (AVIL) algorithm to solve these problems under minimal assumptions about the structure of the MRF. AVIL employs two variational distributions: one to approximately infer the latent variables, and one to estimate the partition function. The variational distributions, parameterized as neural networks, yield an estimate of the negative log-likelihood of the MRF. On the one hand, this estimate takes the intuitive form of an approximate contrastive free energy. On the other hand, minimizing it is a minimax optimization problem, which is solved by stochastic gradient descent in an alternating manner. We apply AVIL to various undirected generative models in a fully black-box manner and obtain better results than existing competitors on several real datasets.
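The partition-function side of the abstract can be illustrated with a standard variational bound: for any distribution r over the MRF's states, E_r[-E(x)] + H(r) <= log Z, with equality when r(x) is proportional to exp(-E(x)). The sketch below is not the authors' code; it is a minimal numpy toy on an enumerable state space (AVIL instead parameterizes r as a neural network over an intractable space), showing gradient ascent on this bound recovering the exact log partition function:

```python
import numpy as np

# Toy illustration (assumed, not from the paper's code): maximize the
# variational lower bound  E_r[-E(x)] + H(r) <= log Z  over a categorical
# distribution r on a tiny, fully enumerable state space.

rng = np.random.default_rng(0)

# Energies of a toy MRF over 3 binary variables (8 joint states).
energies = rng.normal(size=8)
log_z_exact = np.log(np.sum(np.exp(-energies)))  # tractable only because the space is tiny


def softmax(logits):
    z = logits - logits.max()
    p = np.exp(z)
    return p / p.sum()


def bound(logits):
    """Variational lower bound on log Z: E_r[-E] + H(r)."""
    r = softmax(logits)
    return np.sum(r * (-energies - np.log(r)))


# Gradient ascent on the bound. Using the softmax Jacobian, the gradient
# w.r.t. logit j is  r_j * (g_j - bound),  where g = -E - log r.
logits = np.zeros(8)
for _ in range(2000):
    r = softmax(logits)
    g = -energies - np.log(r)
    b = np.sum(r * g)
    logits += 0.5 * r * (g - b)

print(log_z_exact, bound(logits))  # the bound converges to the exact log Z
```

In AVIL the analogous maximization over the partition-function network is one side of the minimax problem; the other side minimizes the approximate contrastive free energy over the model and the latent-variable inference network, with the two updates alternated under SGD.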