Neural Network based Explicit Mixture Models and Expectation-maximization based Learning

07/31/2019
by Dong Liu, et al.

We propose two neural-network-based mixture models in this article. Both models are explicit in nature: they have analytical forms, which allows direct computation of the likelihood and efficient generation of samples. Likelihood computation is a central aspect of our models. We develop expectation-maximization (EM) based algorithms for learning the model parameters and provide sufficient conditions under which EM learning is realizable. The main requirements are that the neural networks used as generators be invertible and that the Jacobians of their functional forms be computable; both requirements are met in practice by flow-based neural networks. Our first mixture model uses multiple flow-based neural networks as generators, with a single latent variable serving as the common input to all of them, and is therefore relatively complex. The second mixture model reduces complexity by using a single flow-based neural network as the generator, whose latent-variable input follows a Gaussian mixture distribution. We demonstrate the efficiency of the proposed mixture models through extensive experiments on sample generation and maximum-likelihood-based classification.
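The mechanics described above can be illustrated with a minimal sketch (not the paper's code): each mixture component is an invertible generator whose exact likelihood follows from the change-of-variables formula, and EM responsibilities are computed from those component likelihoods. For simplicity, a one-dimensional affine map stands in for a flow-based neural network; the names and the two-component toy setup below are illustrative assumptions, but the affine map satisfies exactly the two stated requirements — invertibility and a tractable Jacobian.

```python
import numpy as np

def log_gaussian(z):
    # Log density of the standard normal latent prior p(z).
    return -0.5 * (z ** 2 + np.log(2 * np.pi))

def component_log_density(x, a, b):
    # Component k is the invertible "flow" g_k(z) = a*z + b (stand-in for
    # a flow-based network). Change of variables gives the exact likelihood:
    #   log p_k(x) = log p(g_k^{-1}(x)) - log|det J_{g_k}|
    z = (x - b) / a                  # inverse of the affine flow
    log_det_jac = np.log(np.abs(a))  # Jacobian of g_k is just a
    return log_gaussian(z) - log_det_jac

def mixture_log_likelihood(x, weights, params):
    # log p(x) = log sum_k pi_k p_k(x), computed stably via log-sum-exp.
    logs = np.stack([np.log(w) + component_log_density(x, a, b)
                     for w, (a, b) in zip(weights, params)])
    m = logs.max(axis=0)
    return m + np.log(np.exp(logs - m).sum(axis=0))

def e_step(x, weights, params):
    # EM E-step: posterior responsibility of each component for each sample,
    # enabled by the exact (explicit) component likelihoods above.
    logs = np.stack([np.log(w) + component_log_density(x, a, b)
                     for w, (a, b) in zip(weights, params)])
    logs -= mixture_log_likelihood(x, weights, params)
    return np.exp(logs)

# Two-component toy mixture (hypothetical parameters for illustration).
weights = np.array([0.4, 0.6])
params = [(1.0, -3.0), (2.0, 3.0)]   # (a_k, b_k) per component
x = np.array([-3.0, 3.0, 10.0])
resp = e_step(x, weights, params)
print(resp.sum(axis=0))              # responsibilities sum to 1 per sample
```

With a real flow-based network, only `component_log_density` changes: the analytical inverse and Jacobian determinant of the affine map are replaced by the network's inverse pass and its log-determinant, while the mixture likelihood and E-step are unchanged.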

