Learning GMMs with Nearly Optimal Robustness Guarantees

04/19/2021
by   Allen Liu, et al.

In this work we solve the problem of robustly learning a high-dimensional Gaussian mixture model with k components from ϵ-corrupted samples to accuracy Õ(ϵ) in total variation distance, for any constant k and under mild assumptions on the mixture. This robustness guarantee is optimal up to polylogarithmic factors. At the heart of our algorithm is a new way to relax a system of polynomial equations, which corresponds to solving an improper learning problem in which we are allowed to output a Gaussian mixture model whose weights are low-degree polynomials.
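To make the problem setup concrete, the sketch below simulates the ϵ-corruption model the abstract refers to: samples are drawn from a k-component Gaussian mixture, and an adversary then replaces an arbitrary ϵ fraction of them before the learner sees the data. This is an illustrative sketch only; all function names, parameter values, and the particular corruption used here are assumptions for demonstration, not part of the paper's algorithm.

```python
# Minimal sketch of the epsilon-corruption model for a Gaussian mixture.
# All names and parameter choices are illustrative assumptions, not from the paper.
import numpy as np

def sample_gmm(n, weights, means, covs, rng):
    """Draw n samples from a Gaussian mixture with the given components."""
    comps = rng.choice(len(weights), size=n, p=weights)
    return np.stack([rng.multivariate_normal(means[c], covs[c]) for c in comps])

def corrupt(samples, eps, rng):
    """Adversarially replace an eps fraction of samples (here: gross outliers)."""
    n, d = samples.shape
    corrupted = samples.copy()
    idx = rng.choice(n, size=int(eps * n), replace=False)
    corrupted[idx] = 100.0 * rng.standard_normal((len(idx), d))  # arbitrary replacements
    return corrupted

rng = np.random.default_rng(0)
d, k, n, eps = 10, 2, 5000, 0.05
weights = np.array([0.6, 0.4])
means = [np.zeros(d), 3.0 * np.ones(d)]
covs = [np.eye(d), 2.0 * np.eye(d)]

clean = sample_gmm(n, weights, means, covs, rng)
observed = corrupt(clean, eps, rng)  # the learner only ever sees `observed`
```

The goal in the paper's setting is to recover, from `observed` alone, a mixture that is within Õ(ϵ) of the true one in total variation distance.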
