Learning GMMs with Nearly Optimal Robustness Guarantees

04/19/2021
by Allen Liu, et al.

In this work we solve the problem of robustly learning a high-dimensional Gaussian mixture model with k components from ϵ-corrupted samples, up to accuracy Õ(ϵ) in total variation distance, for any constant k and under mild assumptions on the mixture. This robustness guarantee is optimal up to polylogarithmic factors. At the heart of our algorithm is a new way to relax a system of polynomial equations, which corresponds to solving an improper learning problem where we are allowed to output a Gaussian mixture model whose weights are low-degree polynomials.
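To make the corruption model concrete, here is a minimal sketch in Python/NumPy of what ϵ-corrupted samples from a k-component Gaussian mixture look like. The helper names (`sample_gmm`, `corrupt`) and the specific outlier distribution are illustrative assumptions, not the paper's code; the adversary in the actual model may replace the ϵ-fraction with arbitrary points.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gmm(n, weights, means, covs):
    """Draw n i.i.d. samples from a Gaussian mixture with the given components."""
    ks = rng.choice(len(weights), size=n, p=weights)
    return np.stack([rng.multivariate_normal(means[k], covs[k]) for k in ks])

def corrupt(samples, eps):
    """Replace an eps-fraction of samples with outliers.

    Here the outliers are just far-away Gaussian points; in the
    eps-corruption model the adversary may substitute anything.
    """
    n, d = samples.shape
    m = int(eps * n)
    idx = rng.choice(n, size=m, replace=False)
    corrupted = samples.copy()
    corrupted[idx] = 100.0 * rng.standard_normal((m, d))
    return corrupted

# Example: a 2-component mixture in R^3 with a 10% corrupted sample set.
d, n, eps = 3, 10_000, 0.1
weights = [0.4, 0.6]
means = [np.zeros(d), 2.0 * np.ones(d)]
covs = [np.eye(d), 0.5 * np.eye(d)]
X = corrupt(sample_gmm(n, weights, means, covs), eps)
# A robust learner sees only X and must output a mixture within
# O~(eps) total variation distance of the true one.
```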
