EM Algorithm is Sample-Optimal for Learning Mixtures of Well-Separated Gaussians

02/02/2020
by Jeongyeol Kwon et al.

We consider the problem of learning spherical Gaussian mixture models with k ≥ 3 components when the components are well separated. A fundamental previous result established that separation of Ω(√(log k)) is necessary and sufficient for identifiability of the parameters with polynomial sample complexity (Regev and Vijayaraghavan, 2017). We show that Õ(kd/ϵ^2) samples suffice, closing the gap from polynomial to linear and thus giving the first sample-optimal upper bound for parameter estimation of well-separated Gaussian mixtures (up to logarithmic factors). We accomplish this by proving a new result for the Expectation-Maximization (EM) algorithm: we show that EM converges locally under separation Ω(√(log k)). The previous best-known guarantee required Ω(√(k)) separation (Yan et al., 2017). Unlike prior work, our results do not assume or use prior knowledge of the (potentially different) mixing weights or variances of the Gaussian components. Furthermore, our results show that the finite-sample error of EM does not depend on non-universal quantities such as the pairwise distances between the means of the Gaussian components.

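As a concrete illustration of the algorithm the abstract analyzes, below is a minimal NumPy sketch of EM for a spherical Gaussian mixture in which the mixing weights and per-component variances are unknown (and possibly unequal), matching the setting above. This is not the authors' implementation: the function name em_spherical_gmm, the random-point initialization, and the fixed iteration count are our own assumptions, and the paper's guarantee concerns local convergence of EM from a suitable initialization, not random starts.

```python
import numpy as np

def em_spherical_gmm(X, k, n_iters=100, seed=0):
    """Illustrative EM loop for a k-component spherical Gaussian mixture.

    Each component j has a mean mu[j] in R^d, a scalar variance var[j],
    and a mixing weight w[j]; none of these are assumed known in advance.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Crude initialization (an assumption of this sketch): random data
    # points as means, pooled variance, uniform weights.
    mu = X[rng.choice(n, size=k, replace=False)]
    var = np.full(k, X.var())
    w = np.full(k, 1.0 / k)

    for _ in range(n_iters):
        # E-step: responsibilities r[i, j] ∝ w[j] * N(x_i; mu[j], var[j] I),
        # computed in log space for numerical stability.
        sq_dist = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)  # (n, k)
        log_r = (np.log(w)[None, :]
                 - 0.5 * d * np.log(2.0 * np.pi * var)[None, :]
                 - 0.5 * sq_dist / var[None, :])
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)

        # M-step: closed-form updates for weights, means, and the
        # spherical variances (||x_i - mu_j||^2 averaged over d coordinates).
        nk = r.sum(axis=0)                      # effective counts per component
        w = nk / n
        mu = (r.T @ X) / nk[:, None]
        sq_dist = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
        var = np.maximum((r * sq_dist).sum(axis=0) / (d * nk), 1e-12)
    return w, mu, var

# Toy usage: k = 3 well-separated spherical components in d = 10.
rng = np.random.default_rng(1)
k, d, n = 3, 10, 3000
true_mu = 10.0 * rng.standard_normal((k, d))    # far-apart means
labels = rng.integers(k, size=n)
X = true_mu[labels] + rng.standard_normal((n, d))
w_hat, mu_hat, var_hat = em_spherical_gmm(X, k)
```

In this toy setup the pairwise distances between means are far above the Ω(√(log k)) separation threshold, so EM recovers the parameters easily; the paper's contribution is that separation at that much weaker threshold already suffices for local convergence with Õ(kd/ϵ^2) samples.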

Related research

10/09/2018 · Statistical Convergence of the EM Algorithm on Gaussian Mixture Models
We study the convergence behavior of the Expectation Maximization (EM) a...

11/21/2017 · Parameter Estimation in Gaussian Mixture Models with Malicious Noise, without Balanced Mixing Coefficients
We consider the problem of estimating means of two Gaussians in a 2-Gaus...

08/28/2019 · Randomly initialized EM algorithm for two-component Gaussian mixture achieves near optimality in O(√(n)) iterations
We analyze the classical EM algorithm for parameter estimation in the sy...

01/03/2021 · Improved Convergence Guarantees for Learning Gaussian Mixture Models by EM and Gradient EM
We consider the problem of estimating the parameters of a Gaussian Mixture ...

12/10/2021 · Beyond Parallel Pancakes: Quasi-Polynomial Time Guarantees for Non-Spherical Gaussian Mixtures
We consider mixtures of k ≥ 2 Gaussian components with unknown means and ...

09/26/2020 · An Adaptive EM Accelerator for Unsupervised Learning of Gaussian Mixture Models
We propose an Anderson Acceleration (AA) scheme for the adaptive Expecta...

08/05/2015 · A MAP approach for ℓ_q-norm regularized sparse parameter estimation using the EM algorithm
In this paper, Bayesian parameter estimation through the consideration o...