Learning Mixtures of Product Distributions via Higher Multilinear Moments
Learning mixtures of k binary product distributions is a central problem in computational learning theory, but one where there are wide gaps between the best known algorithms and lower bounds (even for restricted families of algorithms). We narrow many of these gaps by developing novel insights about how to reason about higher-order multilinear moments. Our results include: 1) an n^O(k^2) time algorithm for learning mixtures of k binary product distributions, giving the first improvement on the n^O(k^3) time algorithm of Feldman, O'Donnell and Servedio; 2) an n^Ω(√k) statistical query lower bound, improving on the n^Ω(log k) lower bound that is based on connections to sparse parity with noise; 3) an n^O(log k) time algorithm for learning mixtures of k subcubes. This special case can still simulate many other hard learning problems, but is much richer than any of them alone. As a corollary, we obtain more flexible algorithms for learning decision trees under the uniform distribution: algorithms that work with stochastic transitions, that require only positive examples, and that use a polylogarithmic number of samples for any fixed k. Our algorithms are based on a win-win analysis: we either build a basis for the moments or locate a degeneracy that can be used to simplify the problem. We believe this approach will have applications to other learning problems over discrete domains.
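To make the central objects concrete, here is a minimal Python sketch, not the paper's algorithm: it estimates multilinear moments of a synthetic mixture of binary product distributions and runs the kind of rank test that a win-win analysis turns on (a moment matrix of full numerical rank yields a basis for the moments; a rank deficiency flags a degeneracy). The helper names sample_mixture, moment, and moment_matrix, the toy parameters, and the rank threshold are all illustrative assumptions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def sample_mixture(weights, centers, m):
    # Draw m samples from a mixture of binary product distributions;
    # centers[j, i] is the probability that coordinate i equals 1 under component j.
    comps = rng.choice(len(weights), size=m, p=weights)
    return (rng.random((m, centers.shape[1])) < centers[comps]).astype(float)

def moment(samples, S):
    # Empirical multilinear moment E[prod_{i in S} x_i]; in expectation this equals
    # sum_j weights[j] * prod_{i in S} centers[j, i] (the empty product is 1).
    return samples[:, sorted(S)].prod(axis=1).mean()

def moment_matrix(samples, rows, cols):
    # Moments indexed by pairs of subsets from disjoint coordinate blocks, so every
    # entry is a genuine multilinear moment; the expected matrix factors through
    # the k components and hence has rank at most k.
    return np.array([[moment(samples, r | c) for c in cols] for r in rows])

# Toy instance: k = 2 components over n = 6 coordinates.
weights = np.array([0.6, 0.4])
centers = rng.random((2, 6))
samples = sample_mixture(weights, centers, 200_000)

# Rows use subsets of {0,1,2}, columns use subsets of {3,4,5}, of size at most 2.
subsets = lambda coords: [frozenset(s) for r in range(3)
                          for s in itertools.combinations(coords, r)]
M = moment_matrix(samples, subsets(range(3)), subsets(range(3, 6)))

# Win-win dichotomy in miniature: full numerical rank means the chosen subsets
# give a basis for the moments; a rank deficiency means the components are
# degenerate, which can be exploited to simplify the instance before recursing.
k = len(weights)
rank = np.linalg.matrix_rank(M, tol=1e-2)  # crude threshold for sampling noise
print("numerical rank:", rank, "->",
      "basis found" if rank >= k else "degeneracy located")
```

The disjoint row/column coordinate blocks matter: they keep every entry a true multilinear moment (no squared coordinates), which is what makes the rank-at-most-k factorization through the components valid.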