Learning GMMs with Nearly Optimal Robustness Guarantees
In this work we solve the problem of robustly learning a high-dimensional Gaussian mixture model with k components from ϵ-corrupted samples, up to accuracy Õ(ϵ) in total variation distance, for any constant k and under mild assumptions on the mixture. This robustness guarantee is optimal up to polylogarithmic factors. At the heart of our algorithm is a new way to relax a system of polynomial equations, which corresponds to solving an improper learning problem in which we are allowed to output a Gaussian mixture model whose mixing weights are low-degree polynomials.
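To make the problem setting concrete, the following is a minimal sketch of the ϵ-corruption model the abstract refers to: samples are drawn from a k-component Gaussian mixture, and an adversary may replace an ϵ-fraction of them arbitrarily. All parameter values (the two-component mixture, the shared identity covariance, the outlier scale) are illustrative assumptions, not the paper's construction, and the corruption shown (gross outliers) is only one instance of what an adversary may do.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gmm(n, weights, means, cov, rng):
    """Draw n samples from a Gaussian mixture (shared covariance for simplicity)."""
    comps = rng.choice(len(weights), size=n, p=weights)
    return np.array([rng.multivariate_normal(means[c], cov) for c in comps])

def corrupt(samples, eps, rng, scale=50.0):
    """ϵ-corruption: replace an ϵ-fraction of samples with arbitrary points.
    Here the replacements are gross outliers, a stand-in for a worst-case adversary."""
    n = len(samples)
    idx = rng.choice(n, size=int(eps * n), replace=False)
    out = samples.copy()
    out[idx] = scale * rng.standard_normal((len(idx), samples.shape[1]))
    return out

# Hypothetical 2-component mixture in d = 3 dimensions.
weights = [0.6, 0.4]
means = [np.zeros(3), 4.0 * np.ones(3)]
cov = np.eye(3)
clean = sample_gmm(1000, weights, means, cov, rng)
dirty = corrupt(clean, eps=0.05, rng=rng)
```

The learner sees only `dirty` and must recover a mixture that is Õ(ϵ)-close to the true one in total variation distance, which is what rules out naive estimators: even a single unbounded outlier can move the empirical mean arbitrarily far.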