Max vs Min: Tensor Decomposition and ICA with nearly Linear Sample Complexity

12/09/2014
by Santosh S. Vempala, et al.

We present a simple, general technique for reducing the sample complexity of matrix and tensor decomposition algorithms applied to distributions. We use the technique to give a polynomial-time algorithm for standard ICA with sample complexity nearly linear in the dimension, thereby improving substantially on previous bounds. The analysis is based on properties of random polynomials, namely the spacings of an ensemble of polynomials. Our technique also applies to other applications of tensor decompositions, including spherical Gaussian mixture models.
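The paper's algorithm itself is not reproduced here, but the classical pipeline it improves on — whiten the data, then find directions of extremal (max vs. min) fourth moment — can be illustrated with a minimal FastICA-style sketch. Everything below is a hypothetical demo setup (synthetic uniform sources, a random mixing matrix, a kurtosis-based fixed-point iteration), not the authors' method or its sample-complexity guarantee:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20000, 3  # demo sample size and dimension (assumptions, not from the paper)

# Independent non-Gaussian sources: uniform on [-sqrt(3), sqrt(3)] has
# unit variance and excess kurtosis -1.2, so ICA is identifiable.
S = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(n, d))
A = rng.normal(size=(d, d))   # unknown mixing matrix
X = S @ A.T                   # observed samples of the mixture

# Step 1: whiten the data so its empirical covariance is the identity.
X = X - X.mean(axis=0)
cov = X.T @ X / n
eigval, eigvec = np.linalg.eigh(cov)
W_white = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T
Z = X @ W_white.T

# Step 2: symmetric fixed-point iteration with the cube nonlinearity;
# each row of B converges to a direction extremizing the fourth moment.
B = rng.normal(size=(d, d))
U, _, Vt = np.linalg.svd(B)
B = U @ Vt                    # start from an orthonormal frame
for _ in range(200):
    G = Z @ B.T                       # projections onto current directions
    B = (G ** 3).T @ Z / n - 3.0 * B  # E[(b^T z)^3 z] - 3 b
    U, _, Vt = np.linalg.svd(B)       # symmetric decorrelation:
    B = U @ Vt                        # B <- (B B^T)^{-1/2} B

# Up to finite-sample error, B @ W_white @ A is a signed permutation:
# the sources are recovered up to order and sign.
M = B @ W_white @ A
```

The point of the sketch is only to show where sample complexity enters: the whitening step and the empirical fourth-moment averages are what the paper's technique makes accurate with nearly linearly many samples in the dimension.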


Related research

- 03/07/2023: Polynomial Time and Private Learning of Unbounded Gaussian Mixture Models. We study the problem of privately estimating the parameters of d-dimensi...
- 02/22/2018: Learning Mixtures of Linear Regressions with Nearly Optimal Complexity. Mixtures of Linear Regressions (MLR) is an important mixture model with ...
- 08/19/2016: Solving a Mixture of Many Random Linear Equations by Tensor Decomposition and Alternating Minimization. We consider the problem of solving mixed random linear equations with k ...
- 07/12/2021: Forster Decomposition and Learning Halfspaces with Noise. A Forster transform is an operation that turns a distribution into one w...
- 10/04/2016: The Search Problem in Mixture Models. We consider the task of learning the parameters of a single component o...
- 08/10/2020: Statistical Query Lower Bounds for Tensor PCA. In the Tensor PCA problem introduced by Richard and Montanari (2014), on...
- 09/14/2020: Learning Mixtures of Permutations: Groups of Pairwise Comparisons and Combinatorial Method of Moments. In applications such as rank aggregation, mixture models for permutation...
