Benefits of over-parameterization with EM

10/26/2018
by Ji Xu, et al.

Expectation Maximization (EM) is among the most popular algorithms for maximum likelihood estimation, but it is generally only guaranteed to converge to a stationary point of the log-likelihood objective. The goal of this article is to present theoretical and empirical evidence that over-parameterization can help EM avoid spurious local optima in the log-likelihood. We consider the problem of estimating the mean vectors of a Gaussian mixture model in a scenario where the mixing weights are known. Our study shows that the global behavior of EM, when one uses an over-parameterized model in which the mixing weights are treated as unknown, is better than its behavior when one uses the (correct) model with the mixing weights fixed to their known values. For symmetric Gaussian mixtures with two components, we prove that introducing the (statistically redundant) weight parameters enables EM to find the global maximizer of the log-likelihood starting from almost any initial mean parameters, whereas EM without this over-parameterization may very often fail. For other Gaussian mixtures, we provide empirical evidence that shows similar behavior. Our results corroborate the value of over-parameterization in solving non-convex optimization problems, previously observed in other domains.
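As a rough illustration of the two EM variants being compared (a minimal sketch, not the authors' code; the function name, data, and initialization below are hypothetical, and the setting is simplified to one dimension), the following Python snippet runs EM on a two-component mixture with unit-variance components, either keeping the mixing weight fixed at its known value or re-estimating it in every M-step, which is the over-parameterization discussed above.

import numpy as np

def em_two_gaussians(x, mu_init, w_known=0.5, estimate_weights=False, n_iter=200):
    # EM for the mixture w*N(mu1, 1) + (1-w)*N(mu2, 1) on 1-D data x.
    mu1, mu2 = mu_init
    w = w_known
    for _ in range(n_iter):
        # E-step: posterior responsibility of component 1 for each point.
        p1 = w * np.exp(-0.5 * (x - mu1) ** 2)
        p2 = (1.0 - w) * np.exp(-0.5 * (x - mu2) ** 2)
        r = p1 / (p1 + p2)
        # M-step: update the means; the over-parameterized variant also
        # re-estimates the mixing weight instead of keeping it fixed.
        mu1 = np.sum(r * x) / np.sum(r)
        mu2 = np.sum((1 - r) * x) / np.sum(1 - r)
        if estimate_weights:
            w = np.mean(r)
    return mu1, mu2, w

# Hypothetical usage: data drawn from an equal-weight symmetric mixture,
# started from a deliberately poor initialization of the means.
rng = np.random.default_rng(0)
z = rng.integers(0, 2, size=5000)
x = np.where(z == 1, 2.0, -2.0) + rng.standard_normal(5000)
print(em_two_gaussians(x, mu_init=(3.0, 1.0), estimate_weights=False))  # weights fixed
print(em_two_gaussians(x, mu_init=(3.0, 1.0), estimate_weights=True))   # weights re-estimated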

Related research

EM's Convergence in Gaussian Latent Tree Models (11/21/2022)
We study the optimization landscape of the log-likelihood function and t...

Expectation-Maximization for Learning Determinantal Point Processes (11/04/2014)
A determinantal point process (DPP) is a probabilistic model of set dive...

Improving parameter learning of Bayesian nets from incomplete data (10/12/2011)
This paper addresses the estimation of parameters of a Bayesian network ...

Convergence of the EM Algorithm for Gaussian Mixtures with Unbalanced Mixing Coefficients (06/27/2012)
The speed of convergence of the Expectation Maximization (EM) algorithm ...

Sinkhorn EM: An Expectation-Maximization algorithm based on entropic optimal transport (06/30/2020)
We study Sinkhorn EM (sEM), a variant of the expectation maximization (E...

A Unified Approach for Learning the Parameters of Sum-Product Networks (01/03/2016)
We present a unified approach for learning the parameters of Sum-Product...

Understanding and Accelerating EM Algorithm's Convergence by Fair Competition Principle and Rate-Verisimilitude Function (04/21/2021)
Why can the Expectation-Maximization (EM) algorithm for mixture models c...
