Statistical guarantees for the EM algorithm: From population to sample-based analysis

08/09/2014
by Sivaraman Balakrishnan, et al.

We develop a general framework for proving rigorous guarantees on the performance of the EM algorithm and a variant known as gradient EM. Our analysis is divided into two parts: a treatment of these algorithms at the population level (in the limit of infinite data), followed by results that apply to updates based on a finite set of samples. First, we characterize the domain of attraction of any global maximizer of the population likelihood. This characterization is based on a novel view of the EM updates as a perturbed form of likelihood ascent, or in parallel, of the gradient EM updates as a perturbed form of standard gradient ascent. Leveraging this characterization, we then provide non-asymptotic guarantees on the EM and gradient EM algorithms when applied to a finite set of samples. We develop consequences of our general theory for three canonical examples of incomplete-data problems: mixture of Gaussians, mixture of regressions, and linear regression with covariates missing completely at random. In each case, our theory guarantees that with a suitable initialization, a relatively small number of EM (or gradient EM) steps will yield (with high probability) an estimate that is within statistical error of the MLE. We provide simulations to confirm this theoretically predicted behavior.
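For concreteness, the paper's first canonical example (a symmetric mixture of two Gaussians with known variance) admits a closed-form EM update, and gradient EM replaces that update with a single gradient-ascent step on the same surrogate objective. The sketch below is an illustrative reimplementation under that model, not code from the paper; the step size `alpha` and the initialization are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data from the mixture 0.5*N(theta_star, sigma^2) + 0.5*N(-theta_star, sigma^2)
theta_star, sigma, n = 2.0, 1.0, 5000
z = rng.choice([-1.0, 1.0], size=n)
x = z * theta_star + sigma * rng.standard_normal(n)

def em_update(theta, x, sigma):
    # E-step: responsibilities for the symmetric mixture reduce to a tanh weight;
    # M-step: the maximizer of the surrogate Q is the weighted sample mean.
    return np.mean(np.tanh(theta * x / sigma**2) * x)

def gradient_em_update(theta, x, sigma, alpha=0.5):
    # One ascent step on Q(.|theta) instead of maximizing it exactly;
    # with alpha = sigma^2 this recovers the exact EM update above.
    grad = (np.mean(np.tanh(theta * x / sigma**2) * x) - theta) / sigma**2
    return theta + alpha * grad

theta = 0.5  # initialization inside the basin of attraction of theta_star
for _ in range(10):
    theta = em_update(theta, x, sigma)
# After a handful of iterations, theta is within statistical error of theta_star.
```

Running the same loop with `gradient_em_update` converges to the same fixed point, only more slowly for small `alpha`, which mirrors the paper's parallel treatment of the two algorithms.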


Related research

10/22/2020 — Differentially Private (Gradient) Expectation Maximization Algorithm with Statistical Guarantees
(Gradient) Expectation Maximization (EM) is a widely used algorithm for ...

09/01/2016 — Ten Steps of EM Suffice for Mixtures of Two Gaussians
The Expectation-Maximization (EM) algorithm is a widely used method for ...

05/22/2020 — Instability, Computational Efficiency and Statistical Accuracy
Many statistical estimators are defined as the fixed point of a data-dep...

12/27/2015 — Statistical and Computational Guarantees for the Baum-Welch Algorithm
The Hidden Markov Model (HMM) is one of the mainstays of statistical mod...

11/27/2015 — Regularized EM Algorithms: A Unified Framework and Statistical Guarantees
Latent variable models are a fundamental modeling tool in machine learni...

06/09/2015 — Stagewise Learning for Sparse Clustering of Discretely-Valued Data
The performance of EM in learning mixtures of product distributions ofte...

02/14/2017 — Practical Learning of Predictive State Representations
Over the past decade there has been considerable interest in spectral al...
