Truncated Variational Expectation Maximization

10/10/2016
by Jörg Lücke, et al.

We derive a novel variational expectation maximization approach based on truncated variational distributions. Truncated distributions are proportional to exact posteriors on a subset of a discrete state space and equal zero otherwise. In contrast to factored variational approximations or Gaussian approximations, truncated approximations assume neither posterior independence nor unimodal posteriors.

The novel variational approach is closely related to Expectation Truncation (Lücke and Eggert, 2010), a preselection-based EM approximation. It shares with Expectation Truncation the central idea of truncated distributions and the application domain of discrete hidden variables. In contrast to Expectation Truncation, we here show how truncated distributions can be incorporated into the theoretical framework of variational EM approximations. A fully variational treatment of truncated distributions then allows for the derivation of novel, general, and mathematically grounded results, which in turn can be used to formulate novel efficient algorithms for parameter optimization of probabilistic data models.

Apart from showing that truncated distributions are fully consistent with the variational free-energy framework, we find that the free energy corresponding to truncated distributions is given by compact and efficiently computable expressions, while the update equations for model parameters (M-steps) remain in their standard form. Furthermore, expectation values w.r.t. truncated distributions take a generic form. Based on these observations, we show how an efficient and easily applicable meta-algorithm can be formulated that guarantees a monotonic increase of the free energy. More generally, the variational framework developed here links variational E-steps to discrete optimization, and it provides a theoretical basis for tightly coupling sampling and variational approaches.
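To make the central objects concrete, the following is a minimal, illustrative Python sketch (not the paper's implementation). It assumes a toy binary-latent model with Gaussian noise (a hypothetical model choice made here for illustration): a truncated distribution q(s) proportional to the joint p(s, y | θ) on a subset K of the discrete state space and zero otherwise, expectation values under q, and a truncated free energy of the compact form F(K, θ) = log Σ_{s ∈ K} p(s, y | θ), which lower-bounds the exact log-marginal log p(y | θ) because the sum runs over a subset of states.

```python
import itertools
import numpy as np

def log_joint(s, y, W, prior):
    """log p(s, y | theta) for a toy model (illustrative choice):
    s in {0,1}^H with iid Bernoulli(prior) entries, y ~ N(W s, I)."""
    mean = W @ s
    log_prior = np.sum(s * np.log(prior) + (1 - s) * np.log(1 - prior))
    log_lik = -0.5 * np.sum((y - mean) ** 2) - 0.5 * len(y) * np.log(2 * np.pi)
    return log_prior + log_lik

def truncated_expectations(K, y, W, prior, g):
    """Expectation <g(s)> under q(s) ∝ p(s, y | theta) restricted to K."""
    logs = np.array([log_joint(np.array(s), y, W, prior) for s in K])
    w = np.exp(logs - logs.max())
    w /= w.sum()  # normalized weights of the truncated distribution
    return sum(wi * g(np.array(s)) for wi, s in zip(w, K)), w

def truncated_free_energy(K, y, W, prior):
    """F(K, theta) = log sum_{s in K} p(s, y | theta), computed stably."""
    logs = np.array([log_joint(np.array(s), y, W, prior) for s in K])
    m = logs.max()
    return m + np.log(np.sum(np.exp(logs - m)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    H, D = 3, 4
    W = rng.normal(size=(D, H))
    y = rng.normal(size=D)
    all_states = list(itertools.product([0, 1], repeat=H))
    # A simple (hypothetical) selection of K: the 4 states with largest joint.
    scored = sorted(((log_joint(np.array(s), y, W, 0.3), s) for s in all_states),
                    reverse=True)
    K = [s for _, s in scored[:4]]
    F_trunc = truncated_free_energy(K, y, W, 0.3)
    F_full = truncated_free_energy(all_states, y, W, 0.3)  # = log p(y | theta)
    print(F_trunc <= F_full)
```

In a full truncated variational EM loop, the E-step would update the subsets K (a discrete optimization accepting only changes that do not decrease F), and the M-step would reuse the expectations above in the standard parameter update equations; the state-selection heuristic here is only a stand-in for the paper's variational E-step.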


