Learning Shared Kernel Models: the Shared Kernel EM algorithm

05/15/2022
by Graham W. Pulford, et al.

Expectation maximisation (EM) is an unsupervised learning method for estimating the parameters of a finite mixture distribution. It works by introducing "hidden" or "latent" variables via Baum's auxiliary function Q, which allow the joint data likelihood to be expressed as a product of simple factors. The relevance of EM has increased since the introduction of the variational lower bound (VLB): the VLB differs from Baum's auxiliary function only by the entropy of the PDF of the latent variables Z. We first present a rederivation of the standard EM algorithm using data association ideas from the field of multiple target tracking, using K-valued scalar data association hypotheses rather than the usual binary indicator vectors. The same method is then applied to a little-known but much more general type of supervised EM algorithm for shared kernel models, related to probabilistic radial basis function networks. We address a number of shortcomings in the derivations that have been published previously in this area. In particular, we give theoretically rigorous derivations of (i) the complete data likelihood; (ii) Baum's auxiliary function (the E-step); and (iii) the maximisation (M-step) in the case of Gaussian shared kernel models. The resulting algorithm, called shared kernel EM (SKEM), is then applied to a digit recognition problem using a novel 7-segment digit representation. Variants of the algorithm that use different numbers of features and different EM algorithm dimensions are compared in terms of mean accuracy and mean IoU. A simplified classifier is proposed that decomposes the joint data PDF as a product of lower-order PDFs over non-overlapping subsets of variables. The effect of different numbers of assumed mixture components K is also investigated. High-level source code for the data generation and SKEM algorithm is provided.
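The paper's supervised SKEM algorithm is not reproduced here, but the standard unsupervised EM that the abstract rederives can be sketched for a one-dimensional Gaussian mixture. This is an illustrative sketch only: the function name `em_gmm` and all initialisation choices are ours, not the paper's. The E-step computes the posterior responsibilities of the latent variables Z; the M-step maximises Baum's auxiliary function Q in closed form.

```python
import numpy as np

def em_gmm(x, K, n_iter=100, seed=0):
    """Fit a 1-D Gaussian mixture with K components by standard EM.

    E-step: responsibilities r[n, k] = P(z_n = k | x_n), the posterior
    over the latent variables Z in Baum's auxiliary function Q.
    M-step: closed-form updates of the weights, means and variances.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    # Initialise: equal weights, means drawn from the data, pooled variance.
    w = np.full(K, 1.0 / K)
    mu = rng.choice(x, size=K, replace=False)
    var = np.full(K, np.var(x))
    for _ in range(n_iter):
        # E-step: weighted Gaussian densities, shape (n, K), then normalise
        # each row to obtain the responsibilities.
        dens = (w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
                / np.sqrt(2 * np.pi * var))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: maximise Q with respect to the mixture parameters.
        nk = r.sum(axis=0)                                  # effective counts
        w = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# Two well-separated clusters; EM should recover means near 0 and 5.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(5.0, 1.0, 500)])
w, mu, var = em_gmm(x, K=2)
```

The supervised shared-kernel setting differs in that the kernels (mixture components) are shared across classes and class labels enter the complete data likelihood; the E-step/M-step structure above is the common core.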
