
Iterative Least Trimmed Squares for Mixed Linear Regression

02/10/2019
by Yanyao Shen, et al.

In the linear regression setting, Iterative Least Trimmed Squares (ILTS) alternates between (a) selecting the subset of samples with the lowest current loss, and (b) re-fitting the linear model only on that subset. Both steps are fast and simple. In this paper we analyze ILTS in the setting of mixed linear regression with corruptions (MLR-C). We first establish deterministic conditions (on the features, etc.) under which the ILTS iterates converge linearly to the closest mixture component. We also provide a global algorithm that uses ILTS as a subroutine to fully solve mixed linear regressions with corruptions. We then evaluate it in the widely studied setting of isotropic Gaussian features, and establish that we match or improve upon existing results in terms of sample complexity. Finally, we provide an ODE analysis for a gradient-descent variant of ILTS that has optimal time complexity. Our results provide initial theoretical evidence that iteratively fitting to the best subset of samples -- a potentially widely applicable idea -- can provably deliver state-of-the-art performance in bad-training-data settings.
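As a rough illustration of the two alternating steps the abstract describes, below is a minimal NumPy sketch. The function name ilts, the ordinary-least-squares initialization, the fixed iteration count, and the subset_size parameter are all assumptions for this sketch; the paper's actual algorithm (initialization, stopping rule, handling of mixture components) may differ.

import numpy as np

def ilts(X, y, subset_size, num_iters=50):
    # Sketch of Iterative Least Trimmed Squares (assumed details, not the
    # paper's exact algorithm): start from an ordinary least-squares fit,
    # then alternate between (a) keeping the samples with the smallest
    # current squared residuals and (b) re-fitting on that subset only.
    theta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(num_iters):
        residuals = (X @ theta - y) ** 2               # per-sample loss
        keep = np.argsort(residuals)[:subset_size]     # (a) lowest-loss subset
        theta = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]  # (b) re-fit
    return theta

# Toy usage (hypothetical data): recover theta_star when 20% of the
# responses are corrupted, trimming down to the clean fraction.
rng = np.random.default_rng(0)
n, d = 500, 10
X = rng.standard_normal((n, d))
theta_star = rng.standard_normal(d)
y = X @ theta_star
y[:100] += 10.0 * rng.standard_normal(100)   # corrupt 20% of the labels
theta_hat = ilts(X, y, subset_size=400)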


Related research:

10/28/2018 · Iteratively Learning from the Best
We study a simple generic framework to address the issue of bad training...

08/19/2016 · Solving a Mixture of Many Random Linear Equations by Tensor Decomposition and Alternating Minimization
We consider the problem of solving mixed random linear equations with k ...

03/21/2019 · Convergence of Parameter Estimates for Regularized Mixed Linear Regression Models
We consider Mixed Linear Regression (MLR), where training data have bee...

12/04/2012 · Better subset regression
To find efficient screening methods for high dimensional linear regressi...

04/23/2020 · Alternating Minimization Converges Super-Linearly for Mixed Linear Regression
We address the problem of solving mixed random linear equations. We have...

07/01/2020 · Online Robust Regression via SGD on the l1 loss
We consider the robust linear regression problem in the online setting w...

02/09/2018 · Large Scale Constrained Linear Regression Revisited: Faster Algorithms via Preconditioning
In this paper, we revisit the large-scale constrained linear regression ...