Improvements to Supervised EM Learning of Shared Kernel Models by Feature Space Partitioning

05/31/2022
by Graham W. Pulford, et al.

Expectation maximisation (EM) is usually regarded as an unsupervised learning method for estimating the parameters of a mixture distribution; however, it can also be used for supervised learning when class labels are available. As such, EM has been applied to train neural networks, including the probabilistic radial basis function (PRBF) network, also known as the shared kernel (SK) model. This paper addresses two major shortcomings of previous work in this area: the lack of rigour in the derivation of the EM training algorithm, and the computational complexity of the technique, which has limited it to low-dimensional data sets. We first present a detailed derivation of EM for the Gaussian shared kernel model PRBF classifier, using data association theory to obtain the complete data likelihood, Baum's auxiliary function (the E-step) and its subsequent maximisation (the M-step). To reduce the complexity of the resulting SKEM algorithm, we partition the feature space into R non-overlapping subsets of variables. The resulting product decomposition of the joint data likelihood, which is exact when the feature partitions are independent, allows the SKEM algorithm to be implemented in parallel at R^2 times lower complexity. We demonstrate the partitioned SKEM algorithm on the MNIST data set and compare it with its non-partitioned counterpart, showing that improved performance is achievable at reduced complexity. Comparisons with standard classification algorithms are provided on a number of other benchmark data sets.
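The partitioning idea described above, splitting the feature space into R blocks and exploiting the product form of the likelihood, can be sketched roughly as below. This is a generic diagonal-covariance Gaussian-mixture EM fitted per (class, feature block), not the paper's exact SKEM update equations; all function names and the choice of diagonal covariances are illustrative assumptions.

```python
import numpy as np

def em_gmm(X, K, iters=50, seed=0):
    """Fit a K-component diagonal Gaussian mixture to X (n, d) via EM.
    A generic EM sketch, not the paper's exact SKEM updates."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X[rng.choice(n, K, replace=False)]       # initialise means from data
    var = np.ones((K, d)) * X.var(axis=0)         # initialise variances
    pi = np.full(K, 1.0 / K)                      # mixing weights
    for _ in range(iters):
        # E-step: responsibilities r[i, k] ∝ pi_k * N(x_i | mu_k, var_k)
        logp = -0.5 * (((X[:, None, :] - mu) ** 2) / var
                       + np.log(2 * np.pi * var)).sum(axis=2) + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood parameter updates
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r.T @ X) / nk[:, None]
        var = (r.T @ (X ** 2)) / nk[:, None] - mu ** 2 + 1e-6
    return pi, mu, var

def log_lik(X, pi, mu, var):
    """Per-sample mixture log-likelihood via log-sum-exp over components."""
    logp = -0.5 * (((X[:, None, :] - mu) ** 2) / var
                   + np.log(2 * np.pi * var)).sum(axis=2) + np.log(pi)
    m = logp.max(axis=1, keepdims=True)
    return (m + np.log(np.exp(logp - m).sum(axis=1, keepdims=True))).ravel()

def fit_partitioned(X, y, blocks, K=2):
    """Fit one mixture per (class, feature block); `blocks` is a list of
    column-index arrays forming a partition of the feature space."""
    classes = np.unique(y)
    return {c: [em_gmm(X[y == c][:, b], K) for b in blocks] for c in classes}

def predict(X, models, blocks, priors):
    """Product decomposition: class score is the prior plus the sum of
    per-block log-likelihoods (exact when blocks are independent)."""
    classes = sorted(models)
    scores = np.stack([
        np.log(priors[c]) + sum(log_lik(X[:, b], *models[c][j])
                                for j, b in enumerate(blocks))
        for c in classes], axis=1)
    return np.array(classes)[scores.argmax(axis=1)]
```

Because each (class, block) mixture is fitted independently, the R block-wise EM runs can execute in parallel, which is the source of the complexity reduction claimed in the abstract.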


