Optimal robust mean and location estimation via convex programs with respect to any pseudo-norms

by Jules Depersin, et al.

We consider the problem of robust mean and location estimation with respect to any pseudo-norm of the form x ∈ ℝ^d → ||x||_S = sup_{v∈S} ⟨v, x⟩, where S is any symmetric subset of ℝ^d. We show that the deviation-optimal minimax subgaussian rate at confidence level 1−δ is max( ℓ*(Σ^{1/2}S)/√N , sup_{v∈S} ||Σ^{1/2}v||_2 √(log(1/δ)/N) ), where ℓ*(Σ^{1/2}S) is the Gaussian mean width of Σ^{1/2}S and Σ is the covariance of the data (in the benchmark i.i.d. Gaussian case). This improves the entropic minimax lower bound from [Lugosi and Mendelson, 2019] and closes the gap, characterized by Sudakov's inequality, between the entropy and the Gaussian mean width for this problem. It shows that the right statistical complexity measure for the mean estimation problem is the Gaussian mean width. We also show that this rate can be achieved by a solution to a convex optimization problem in the adversarial and L_2 heavy-tailed setup, by considering minima of Fenchel-Legendre transforms constructed using the Median-of-Means principle. We finally show that this rate may also be achieved in situations where there is not even a first moment but a location parameter exists.
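The two quantities driving the rate above can be illustrated numerically. The sketch below is an assumption-laden simplification, not the paper's Fenchel-Legendre convex program: it estimates the Gaussian mean width ℓ*(S) of a finite symmetric set S by Monte Carlo, and applies a plain coordinatewise median-of-means estimator to heavy-tailed data as a minimal instance of the MOM principle.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_mean_width(S, n_draws=2000, rng=rng):
    """Monte Carlo estimate of l*(S) = E sup_{v in S} <g, v>, g ~ N(0, I_d).
    S: (m, d) array whose rows are the points of a finite symmetric set."""
    d = S.shape[1]
    G = rng.standard_normal((n_draws, d))
    # For each Gaussian draw g, take the supremum of <g, v> over v in S
    return (G @ S.T).max(axis=1).mean()

def median_of_means(X, n_blocks):
    """Coordinatewise median-of-means: split the sample into blocks,
    average within each block, take the median of the block means.
    (A simplified illustration of the MOM principle only.)"""
    blocks = np.array_split(X, n_blocks)
    block_means = np.array([b.mean(axis=0) for b in blocks])
    return np.median(block_means, axis=0)

# Toy check: heavy-tailed data (Student t with 3 degrees of freedom,
# so only moments up to order < 3 exist) centered at 0, in dimension 5.
d, N = 5, 3000
X = rng.standard_t(df=3, size=(N, d))
mu_hat = median_of_means(X, n_blocks=30)

# Take S = {±e_1, ..., ±e_d}: then ||x||_S is the sup-norm, and
# l*(S) = E max_i |g_i| grows like sqrt(2 log d).
S = np.vstack([np.eye(d), -np.eye(d)])
width = gaussian_mean_width(S)
print(width, np.abs(mu_hat).max())
```

Here `median_of_means` recovers the true mean (zero) to within the sup-norm deviation the rate predicts, while `gaussian_mean_width` returns the complexity term ℓ*(S) that sets the non-deviation part of that rate.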






