Ten Steps of EM Suffice for Mixtures of Two Gaussians

09/01/2016
by Constantinos Daskalakis, et al.

The Expectation-Maximization (EM) algorithm is a widely used method for maximum likelihood estimation in models with latent variables. For estimating mixtures of Gaussians, its iteration can be viewed as a soft version of the k-means clustering algorithm. Despite its wide use and applications, there are essentially no known convergence guarantees for this method. We provide global convergence guarantees for mixtures of two Gaussians with known covariance matrices. We show that the population version of EM, where the algorithm is given access to infinitely many samples from the mixture, converges geometrically to the correct mean vectors, and we provide simple, closed-form expressions for the convergence rate. As a simple illustration, we show that, in one dimension, ten steps of the EM algorithm initialized at infinity estimate the means with less than 1% error. In the finite-sample regime, we show that, under a random initialization, Õ(d/ϵ²) samples suffice to compute the unknown vectors to within ϵ in Mahalanobis distance, where d is the dimension. In particular, the error rate of the EM-based estimator is Õ(√(d/n)), where n is the number of samples, which is optimal up to logarithmic factors.
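To make the EM iteration concrete, below is a minimal sketch of sample-based EM for a balanced one-dimensional mixture of two unit-variance Gaussians, estimating only the two means. The setup, the function name em_two_gaussians, and the far-off initialization are illustrative assumptions and not the paper's exact parametrization or experiments; it only shows the standard E-step (responsibilities) and M-step (weighted means) that the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: a balanced mixture 0.5*N(mu1, 1) + 0.5*N(mu2, 1)
# with known unit variance; only the two means are unknown.
true_mu1, true_mu2, n = -2.0, 2.0, 100_000
labels = rng.integers(0, 2, size=n)
x = np.where(labels == 0,
             rng.normal(true_mu1, 1.0, n),
             rng.normal(true_mu2, 1.0, n))

def em_two_gaussians(x, mu1, mu2, steps=10, sigma2=1.0):
    """EM for a balanced two-component Gaussian mixture with known variance."""
    for _ in range(steps):
        # E-step: responsibility of component 1 for each point (equal weights).
        log_r1 = -(x - mu1) ** 2 / (2 * sigma2)
        log_r2 = -(x - mu2) ** 2 / (2 * sigma2)
        w = 1.0 / (1.0 + np.exp(log_r2 - log_r1))
        # M-step: responsibility-weighted means.
        mu1 = np.sum(w * x) / np.sum(w)
        mu2 = np.sum((1 - w) * x) / np.sum(1 - w)
    return mu1, mu2

# A crude initialization far from the truth; ten steps already land close.
print(em_two_gaussians(x, mu1=-10.0, mu2=10.0, steps=10))
```

With enough samples, ten iterations of this update typically recover the two means to within a few hundredths, in line with the one-dimensional illustration in the abstract.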
