Learning GMMs with Nearly Optimal Robustness Guarantees

04/19/2021
by Allen Liu, et al.

In this work we solve the problem of robustly learning a high-dimensional Gaussian mixture model with k components from ϵ-corrupted samples up to accuracy Õ(ϵ) in total variation distance, for any constant k and under mild assumptions on the mixture. This robustness guarantee is optimal up to polylogarithmic factors. At the heart of our algorithm is a new way to relax a system of polynomial equations; this relaxation corresponds to solving an improper learning problem in which we are allowed to output a Gaussian mixture model whose weights are low-degree polynomials.
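The algorithm itself relies on relaxing polynomial systems and is not reproduced by a few lines of library code. As a minimal sketch of the problem setup only (not the authors' method), the Python snippet below draws samples from a two-component Gaussian mixture, lets an adversary replace an ϵ-fraction of them, and fits plain EM via scikit-learn; all parameters (n, d, eps, the outlier location) are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Illustration of the eps-corruption model (not the paper's algorithm):
# draw n samples from a 2-component Gaussian mixture in d dimensions,
# then let an adversary replace an eps-fraction of them arbitrarily.
n, d, eps = 5000, 10, 0.05
means = np.stack([np.zeros(d), 4.0 * np.ones(d)])
labels = rng.integers(0, 2, size=n)
clean = means[labels] + rng.standard_normal((n, d))

corrupted = clean.copy()
m = int(eps * n)
# A simple adversary: move an eps-fraction of the points far away.
corrupted[:m] = 100.0 * np.ones(d) + rng.standard_normal((m, d))

# Plain EM carries no robustness guarantee: the planted outliers can
# pull its estimates arbitrarily far from the true mixture.
gmm = GaussianMixture(n_components=2, random_state=0).fit(corrupted)
print("estimated component means (note the effect of the outliers):")
print(gmm.means_.round(2))
```

On inputs like this, one fitted component typically latches onto the planted outliers, so the estimated mixture can be far from the true one in total variation; the paper's guarantee says a robust learner's error should instead scale as Õ(ϵ).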


Related research

05/13/2020 · Robustly Learning any Clusterable Mixture of Gaussians
We study the efficient learnability of high-dimensional Gaussian mixture...

06/05/2021 · Sparsification for Sums of Exponentials and its Algorithmic Applications
Many works in signal processing and learning theory operate under the as...

06/06/2022 · Mean Estimation in High-Dimensional Binary Markov Gaussian Mixture Models
We consider a high-dimensional mean estimation problem over a binary hid...

04/26/2020 · Similarity Learning-Based Device Attribution
Methods and systems for attributing browsing activity from two or more d...

08/18/2023 · An Efficient 1 Iteration Learning Algorithm for Gaussian Mixture Model And Gaussian Mixture Embedding For Neural Network
We propose a Gaussian Mixture Model (GMM) learning algorithm, based on ...

09/01/2019 · Gaussian mixture model decomposition of multivariate signals
We propose a greedy variational method for decomposing a non-negative mu...

06/06/2023 · GMMap: Memory-Efficient Continuous Occupancy Map Using Gaussian Mixture Model
Energy consumption of memory accesses dominates the compute energy in en...
