Gaussian Mixture Models

What are Gaussian Mixture Models?

Gaussian mixture models are unsupervised probabilistic models that group data by assuming it was generated from several normally distributed subpopulations. The model treats every data point as a draw from a finite mixture of Gaussian (normal) distributions with unknown parameters. Like k-means clustering, the technique estimates the centers of the latent Gaussians, but it additionally models each component's covariance structure and uses both to assign points to components.

There’s no need to label the data first: the model learns which subpopulation each data point belongs to automatically as it fits the component distributions.
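As a minimal sketch of this, scikit-learn's GaussianMixture class fits such a model without any labels. The two-dimensional data here is synthetic and invented purely for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic data: two Gaussian subpopulations (made-up example data)
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(200, 2)),
    rng.normal(loc=[3, 3], scale=0.8, size=(200, 2)),
])

# Fit a two-component mixture; no labels are provided
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(X)

# Each point is assigned to the component most likely to have generated it
labels = gmm.predict(X)
print(gmm.means_)        # estimated centers of the latent Gaussians
print(gmm.covariances_)  # estimated covariance structure
```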

How do Gaussian Mixture Models work?

In most cases, Gaussian mixture models are fit with expectation maximization (EM), an iterative two-step procedure. The general idea is to alternate between computing the expected component assignments under the current parameters (E-step) and updating the parameters to their maximum likelihood estimates given those assignments (M-step) until the estimates stop changing.

  1. E step: Calculate the expected value of the component assignments for each data point, given the current model parameters.

  2. M step: Update the mixture weights, means, and covariances to the values that maximize the expected log-likelihood computed in the E step.

  3. Repeat: The two steps alternate until the parameter estimates stop changing (convergence), which yields a maximum likelihood estimate (in general, a local maximum). A minimal code sketch of this loop appears after the list.
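The sketch below implements these updates in NumPy for a one-dimensional mixture. The initialization scheme (equal weights, random points as starting means, a shared starting variance) and the log-likelihood convergence test are simple choices made for this example, not the only options:

```python
import numpy as np

def em_gmm_1d(x, k=2, n_iter=100, tol=1e-6, seed=0):
    """EM for a 1-D Gaussian mixture: an illustrative sketch, not a library."""
    rng = np.random.default_rng(seed)
    n = len(x)
    # Simple initialization (an assumption of this sketch): equal weights,
    # k random points as means, the overall variance for every component.
    w = np.full(k, 1.0 / k)
    mu = rng.choice(x, size=k, replace=False)
    var = np.full(k, np.var(x))

    prev_ll = -np.inf
    for _ in range(n_iter):
        # E-step: posterior probability (responsibility) of each component
        # for each point, under the current parameters.
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        weighted = w * dens                                   # shape (n, k)
        resp = weighted / weighted.sum(axis=1, keepdims=True)

        # M-step: re-estimate weights, means, and variances to maximize
        # the expected log-likelihood under those responsibilities.
        nk = resp.sum(axis=0)
        w = nk / n
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk

        # Stop when the log-likelihood no longer improves (convergence).
        ll = np.log(weighted.sum(axis=1)).sum()
        if abs(ll - prev_ll) < tol:
            break
        prev_ll = ll
    return w, mu, var

# Example: recover the parameters of two synthetic subpopulations.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(5, 0.5, 500)])
print(em_gmm_1d(x, k=2))
```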


These mixture models can also be used for feature extraction (for example, in natural language processing) or for tracking multiple objects in a video sequence, where the mixture components and their means predict each object's location frame by frame.
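As one illustration of the feature-extraction idea, the per-component posterior probabilities of a fitted mixture can serve as a compact, soft representation of each point. The data and downstream use here are hypothetical; predict_proba is scikit-learn's method for these posteriors:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))  # hypothetical higher-dimensional inputs

gmm = GaussianMixture(n_components=5, random_state=1).fit(X)

# Soft assignments: each row sums to 1 and can be used as a
# 5-dimensional feature vector for a downstream model.
features = gmm.predict_proba(X)
print(features.shape)  # (300, 5)
```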
