Outlier-robust estimation of a sparse linear model using ℓ_1-penalized Huber's M-estimator

04/12/2019
by Arnak S. Dalalyan, et al.

We study the problem of estimating a p-dimensional s-sparse vector in a linear model with Gaussian design and additive noise. In the case where the labels are contaminated by at most o adversarial outliers, we prove that the ℓ_1-penalized Huber's M-estimator based on n samples attains the optimal rate of convergence (s/n)^{1/2} + (o/n), up to a logarithmic factor. This is proved when the proportion of contaminated samples goes to zero at least as fast as 1/(n), but we argue that a constant fraction of outliers can be handled by slightly more involved techniques.

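For a concrete picture of the estimator discussed in the abstract, the Python sketch below fits an ℓ_1-penalized Huber M-estimator by proximal gradient descent (ISTA). This is not the authors' code: the solver, the names huber_lasso and soft_threshold, and the tuning choices (delta, lam, the step size, the number of iterations) are illustrative assumptions rather than values taken from the paper.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: coordinate-wise soft thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def huber_lasso(X, y, lam, delta=1.345, n_iter=2000):
    # Minimize (1/n) * sum_i rho_delta(y_i - x_i' beta) + lam * ||beta||_1,
    # where rho_delta is the Huber loss with threshold delta.
    n, p = X.shape
    beta = np.zeros(p)
    # Step size 1/L, with L <= ||X||_2^2 / n an upper bound on the Lipschitz
    # constant of the smooth part (the Huber loss has curvature at most 1).
    step = n / np.linalg.norm(X, 2) ** 2
    for _ in range(n_iter):
        residual = y - X @ beta
        # Derivative of the Huber loss: identity on [-delta, delta], clipped
        # outside, so outlying residuals have bounded influence on the fit.
        psi = np.clip(residual, -delta, delta)
        grad = -X.T @ psi / n
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

# Toy check: s-sparse signal, Gaussian design, o adversarially shifted labels.
rng = np.random.default_rng(0)
n, p, s, o = 200, 500, 5, 10
X = rng.standard_normal((n, p))
beta_star = np.zeros(p)
beta_star[:s] = 1.0
y = X @ beta_star + 0.5 * rng.standard_normal(n)
y[:o] += 50.0  # contaminate a few labels
beta_hat = huber_lasso(X, y, lam=2.0 * np.sqrt(np.log(p) / n))
print("estimation error:", np.linalg.norm(beta_hat - beta_star))

Clipping the residuals at delta bounds the influence of the o contaminated labels, while the soft-thresholding step enforces the sparsity that drives the (s/n)^{1/2} part of the rate.
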
Related research

Robust censored regression with l1-norm regularization (10/05/2021)
This paper considers inference in a linear regression model with random ...

Outlier-robust Estimation of a Sparse Linear Model Using Invexity (06/22/2023)
In this paper, we study problem of estimating a sparse regression vector...

Inference robust to outliers with l1-norm penalization (06/04/2019)
This paper considers the problem of inference in a linear regression mod...

Regress Consistently when Oblivious Outliers Overwhelm (09/30/2020)
We give a novel analysis of the Huber loss estimator for consistent robu...

High-dimensional inference robust to outliers with l1-norm penalization (12/28/2020)
This paper studies inference in the high-dimensional linear regression m...

Robust Subspace Recovery with Adversarial Outliers (04/05/2019)
We study the problem of robust subspace recovery (RSR) in the presence o...

TVOR: Finding Discrete Total Variation Outliers among Histograms (12/21/2020)
Pearson's chi-squared test can detect outliers in the data distribution ...
