Expectation Maximization

What is Expectation Maximization?

Expectation maximization (EM) is an algorithm that finds good estimates for a model’s parameters when a dataset is missing information or depends on hidden (latent) variables. Like maximum likelihood estimation, it seeks the “best fit” model for a set of data, but EM takes things a step further and works on incomplete datasets. It does this by starting from guessed values for the unknowns, using them to estimate the missing information, and then using those estimates to refine the original guesses, with the process repeating until the algorithm’s termination criteria are met.
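
As an illustration, here is a minimal sketch of that loop for a two-component 1-D Gaussian mixture, where the hidden variable is which component generated each point. The function name `em_gmm`, the starting guesses, and the synthetic data are all assumptions made for this example, not a reference implementation:

```python
import numpy as np

def em_gmm(x, n_iter=200, tol=1e-8):
    """Sketch of EM for a two-component 1-D Gaussian mixture."""
    # Initial guesses for the unknown parameters.
    mu = np.array([x.min(), x.max()])      # component means
    sigma = np.array([x.std(), x.std()])   # component standard deviations
    w = np.array([0.5, 0.5])               # mixing weights
    log_lik_old = -np.inf
    for _ in range(n_iter):
        # E-step: how responsible each component is for each point.
        dens = (w / (sigma * np.sqrt(2 * np.pi)) *
                np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate the parameters from the weighted data.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        w = nk / len(x)
        # Terminate once the log-likelihood stops improving.
        log_lik = np.log(dens.sum(axis=1)).sum()
        if log_lik - log_lik_old < tol:
            break
        log_lik_old = log_lik
    return mu, sigma, w

# Synthetic data: two hidden Gaussians, component labels unobserved.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 200)])
print(em_gmm(x))
```

The two halves of the loop are the E-step, which estimates the hidden information (here, each point’s component responsibilities) given the current parameters, and the M-step, which re-estimates the parameters given those responsibilities.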

What’s the Difference Between Maximum Likelihood Estimation and Expectation Maximization?

While both maximum likelihood estimation (MLE) and expectation maximization can determine “best fit” parameters, they take significantly different approaches. MLE requires a complete dataset up front to fit its model, and when full data is available it is usually the simpler and more accurate method, but it cannot work if information is missing. In fact, with complete data the MLE solution often has a closed form, as in the sketch below.
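
Here is a minimal sketch of MLE on complete data; the sample and parameter values are invented for illustration. With every observation present, a single Gaussian’s best-fit parameters come from a direct calculation, with no iteration at all:

```python
import numpy as np

# With complete data, the Gaussian MLE is a closed-form calculation:
# no iteration and no guessing, since every observation is available.
rng = np.random.default_rng(1)
x = rng.normal(loc=5.0, scale=2.0, size=1_000)  # fully observed sample

mu_hat = x.mean()                                # MLE of the mean
sigma_hat = np.sqrt(((x - mu_hat) ** 2).mean())  # MLE of the standard deviation
print(mu_hat, sigma_hat)                         # close to 5.0 and 2.0
```

When observations are missing or unlabeled, no such closed form exists, which is where EM’s iterative refinement comes in.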

EM, on the other hand, can guess the missing values and tweak the model as needed with a few additional steps:

  1. Initial values are generated for the model’s parameters and used to define a probability distribution, known as the “expected” distribution.

  2. Newly observed data is fed into the model.

  3. Using conditional probability and derivatives, the parameters of the expected distribution are tweaked to increase the likelihood of the observed data (the “best fit”).

  4. These steps are repeated until the expected distribution no longer changes between iterations, a point known as convergence; the sketch below walks through one such loop.
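
Here is a hedged sketch of that four-step loop on a classic hidden-coin setup: each number in `heads` counts the heads seen in 10 flips of one of two biased coins, but which coin was flipped each time is hidden. The data, the starting biases, and the fixed 50/50 assumption about which coin gets picked are all illustrative choices, not part of any standard library:

```python
import numpy as np
from scipy.stats import binom

# Each entry counts heads in 10 flips of one of two biased coins;
# which coin was flipped each time is the hidden variable.
heads = np.array([5, 9, 8, 4, 7])  # hypothetical observed data
n_flips = 10
theta = np.array([0.4, 0.6])       # step 1: initial guesses for each coin's bias

for _ in range(100):
    # steps 2-3 (E-step): how likely each coin is to have produced each
    # row, assuming (for simplicity) each coin is picked 50/50.
    lik = binom.pmf(heads[:, None], n_flips, theta)
    resp = lik / lik.sum(axis=1, keepdims=True)
    # step 3 (M-step): re-estimate each bias from its expected share of heads.
    new_theta = (resp * heads[:, None]).sum(axis=0) / (resp.sum(axis=0) * n_flips)
    # step 4: stop once the estimates no longer change (convergence).
    if np.allclose(new_theta, theta, atol=1e-10):
        break
    theta = new_theta

print(theta)  # converged bias estimates for the two coins
```

Each pass through the loop uses the current bias guesses to softly assign rows to coins, then uses those assignments to improve the guesses, exactly the refine-and-repeat cycle described above.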
