EM Converges for a Mixture of Many Linear Regressions

05/28/2019
by Jeongyeol Kwon, et al.

We study the convergence of the Expectation-Maximization (EM) algorithm for mixtures of linear regressions with an arbitrary number k of components. We show that as long as the signal-to-noise ratio (SNR) is at least Õ(k^2), well-initialized EM converges to the true regression parameters. Previous results for k ≥ 3 established local convergence only in the noiseless setting, i.e., where the SNR is infinitely large. Our results establish a near-optimal statistical error rate of Õ(σ√(k^2 d/n)) for (sample-splitting) finite-sample EM with k components, where d is the dimension, n is the number of samples, and σ is the standard deviation of the noise. In particular, our results imply exact recovery as σ → 0, in contrast to most previous local convergence results for EM, where the statistical error scaled with the norm of the parameters. Standard moment-method approaches suffice to guarantee an initialization in the region where our local convergence guarantees apply.
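The EM iteration described in the abstract can be sketched in a few lines. This is a minimal illustration only, not the paper's sample-splitting variant: the function name, the known-noise-level Gaussian model, and the well-initialized `beta_init` (which the abstract suggests can come from a standard moment method) are assumptions of the example.

```python
import numpy as np

def em_mixed_linear_regression(X, y, beta_init, sigma2, n_iters=50):
    """Sketch of EM for a k-component mixture of linear regressions.

    Assumed model: y_i = <beta_{z_i}, x_i> + noise, with hidden label z_i
    uniform over k components and noise ~ N(0, sigma2). `beta_init` is a
    (k, d) array of well-initialized parameters.
    """
    beta = beta_init.copy()
    k = beta.shape[0]
    for _ in range(n_iters):
        # E-step: posterior weight of each component for each sample.
        resid = y[:, None] - X @ beta.T               # (n, k) residuals
        log_w = -resid ** 2 / (2 * sigma2)            # log-likelihood up to a constant
        log_w -= log_w.max(axis=1, keepdims=True)     # numerical stabilization
        w = np.exp(log_w)
        w /= w.sum(axis=1, keepdims=True)
        # M-step: weighted least squares, one solve per component.
        for j in range(k):
            wj = w[:, j]
            A = X.T @ (wj[:, None] * X)
            b = X.T @ (wj * y)
            beta[j] = np.linalg.solve(A, b)
    return beta
```

With high SNR and an initialization near the true parameters, this local iteration converges to estimates whose error shrinks with the noise level, consistent with the exact-recovery claim as σ → 0.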

Related research
04/26/2017 · Estimating the coefficients of a mixture of two linear regressions by expectation maximization
We give convergence guarantees for estimating the coefficients of a symm...

09/01/2016 · Ten Steps of EM Suffice for Mixtures of Two Gaussians
The Expectation-Maximization (EM) algorithm is a widely used method for ...

02/20/2023 · Sharp analysis of EM for learning mixtures of pairwise differences
We consider a symmetric mixture of linear regressions with random sample...

06/16/2019 · Global Convergence of Least Squares EM for Demixing Two Log-Concave Densities
This work studies the location estimation problem for a mixture of two r...

06/04/2020 · On the Minimax Optimality of the EM Algorithm for Learning Two-Component Mixed Linear Regression
We study the convergence rates of the EM algorithm for learning two-comp...

05/02/2013 · Learning Mixtures of Bernoulli Templates by Two-Round EM with Performance Guarantee
Dasgupta and Shulman showed that a two-round variant of the EM algorithm...

12/29/2021 · Time varying regression with hidden linear dynamics
We revisit a model for time-varying linear regression that assumes the u...