From the Expectation Maximisation Algorithm to Autoencoded Variational Bayes

10/23/2020
by Graham W. Pulford et al.

Although the expectation maximisation (EM) algorithm was introduced in 1970, it remains somewhat inaccessible to machine learning practitioners due to its obscure notation, terse proofs and lack of concrete links to modern machine learning techniques like autoencoded variational Bayes (AEVB). This has resulted in gaps in the AI literature concerning the meaning of concepts such as “latent variables” and “variational lower bound,” which are frequently used but often not clearly explained. The roots of these ideas lie in the EM algorithm. We first give a tutorial presentation of the EM algorithm for estimating the parameters of a K-component mixture density. The Gaussian mixture case is presented in detail using K-ary scalar hidden (or latent) variables rather than the more traditional binary-valued K-dimensional vectors. This presentation is motivated by mixture modelling from the target tracking literature. In a similar style to Bishop's 2009 book, we present variational Bayesian inference as a generalised EM algorithm stemming from the variational (or evidential) lower bound, as well as the technique of mean field approximation (or product density transform). We continue the evolution from EM to variational autoencoders, developed by Kingma and Welling in 2014. In so doing, we establish clear links between the EM algorithm and its variational counterparts, hence clarifying the meaning of “latent variables.” We provide detailed coverage of the “reparametrisation trick” and focus on how AEVB differs from conventional variational Bayesian inference. Throughout the tutorial, consistent notational conventions are used, which unifies the narrative and clarifies the concepts. Some numerical examples are given to further illustrate the algorithms.
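To make the E-step/M-step structure mentioned in the abstract concrete, here is a minimal sketch of EM for a two-component one-dimensional Gaussian mixture. It is not the paper's exact K-ary latent-variable formulation: the synthetic data, initial values and component count K = 2 are illustrative assumptions.

```python
# Minimal EM sketch for a two-component 1-D Gaussian mixture (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data drawn from a known two-component mixture.
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 700)])

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Initial guesses for mixing weights, means and standard deviations.
pi = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
sigma = np.array([1.0, 1.0])

for _ in range(50):
    # E-step: responsibilities, i.e. the posterior distribution over the
    # K-ary latent variable (component label) given each data point.
    resp = pi * gaussian_pdf(x[:, None], mu, sigma)   # shape (N, K)
    resp /= resp.sum(axis=1, keepdims=True)

    # M-step: re-estimate the parameters from the responsibilities.
    nk = resp.sum(axis=0)
    pi = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print(pi, mu, sigma)   # should recover weights near (0.3, 0.7), means near (-2, 3)
```

The “reparametrisation trick” the abstract highlights can likewise be sketched in a few lines. The choice of PyTorch, the four-dimensional latent and the stand-in loss are assumptions made for illustration; the point is only that writing z = mu + sigma * eps makes the sample differentiable with respect to the variational parameters.

```python
import torch

# Variational parameters of q(z) = N(mu, sigma^2); log_sigma keeps sigma > 0.
mu = torch.zeros(4, requires_grad=True)
log_sigma = torch.zeros(4, requires_grad=True)

eps = torch.randn(4)                 # noise drawn independently of the parameters
z = mu + torch.exp(log_sigma) * eps  # differentiable sample from q(z)

loss = (z ** 2).sum()                # stand-in for the negative variational lower bound
loss.backward()                      # gradients reach mu and log_sigma through z
print(mu.grad, log_sigma.grad)
```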


Related research

05/15/2022 · Learning Shared Kernel Models: the Shared Kernel EM algorithm
Expectation maximisation (EM) is an unsupervised learning method for est...

10/09/2017 · α-Variational Inference with Statistical Guarantees
We propose a variational approximation to Bayesian posterior distributio...

12/25/2017 · On Statistical Optimality of Variational Bayes
The article addresses a long-standing open problem on the justification ...

05/27/2016 · Variational Bayesian Inference for Hidden Markov Models With Multivariate Gaussian Output Distributions
Hidden Markov Models (HMM) have been used for several years in many time...

04/16/2017 · k-Means is a Variational EM Approximation of Gaussian Mixture Models
We show that k-means (Lloyd's algorithm) is equivalent to a variational ...

03/29/2018 · Copula Variational Bayes inference via information geometry
Variational Bayes (VB), also known as independent mean-field approximati...

07/04/2011 · A Variational Bayes Approach to Decoding in a Phase-Uncertain Digital Receiver
This paper presents a Bayesian approach to symbol and phase inference in...
