Non-Asymptotic Guarantees for Robust Statistical Learning under (1+ε)-th Moment Assumption

01/10/2022
by Lihu Xu, et al.

There has been a surge of interest in developing robust estimators for models with heavy-tailed data in statistics and machine learning. This paper proposes a log-truncated M-estimator for a large family of statistical regressions and establishes its excess risk bound under the condition that the data have a finite (1+ε)-th moment with ε ∈ (0,1]. With an additional assumption on the associated risk function, we obtain an ℓ_2-error bound for the estimation. Our theorems are applied to establish robust M-estimators for concrete regressions. Besides convex regressions such as quantile regression and generalized linear models, many non-convex regressions also fit into our framework; among these, we focus on robust deep neural network regressions, which can be solved by stochastic gradient descent. Simulations and real data analysis demonstrate the superiority of log-truncated estimations over standard estimations.
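To give a flavor of the idea, the sketch below applies a Catoni-style log-truncation, φ(x) = log(1 + x + x²/2), to the per-sample squared loss in a linear regression with heavy-tailed (infinite-variance) noise. This is an illustrative toy, not the paper's exact construction: the truncation level `lam`, the plain gradient-descent solver, and all function names here are assumptions made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_truncated_loss(theta, X, y, lam):
    """Illustrative Catoni-style log-truncated squared loss (not the
    paper's exact estimator).  Large per-sample losses are damped
    logarithmically, so a few extreme residuals cannot dominate the
    empirical risk."""
    l = (y - X @ theta) ** 2                    # per-sample squared loss
    return np.mean(np.log1p(lam * l + 0.5 * (lam * l) ** 2)) / lam

def fit_truncated(X, y, lam=0.1, lr=0.05, steps=2000):
    """Plain full-batch gradient descent on the truncated risk
    (a hypothetical solver; the paper uses SGD for the deep
    neural-network case)."""
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        r = y - X @ theta
        l = r ** 2
        # d/dl of the truncated loss gives each sample a weight in (0, 1];
        # heavy residuals receive small weights.
        w = (1.0 + lam * l) / (1.0 + lam * l + 0.5 * (lam * l) ** 2)
        theta += lr * 2.0 * (X.T @ (w * r)) / len(y)   # gradient step
    return theta

# Heavy-tailed demo: Student-t noise with 1.5 degrees of freedom has a
# finite (1+eps)-th moment for eps < 0.5 but infinite variance.
n, theta_star = 2000, np.array([2.0, -1.0])
X = rng.normal(size=(n, 2))
y = X @ theta_star + rng.standard_t(df=1.5, size=n)

theta_ols = np.linalg.lstsq(X, y, rcond=None)[0]   # standard least squares
theta_rob = fit_truncated(X, y)                    # log-truncated estimate
```

The weight `w` is exactly the derivative of the truncated loss with respect to the raw loss, so the update is ordinary gradient descent on the truncated risk; samples with moderate residuals get weight near 1 and behave as in least squares, while outliers are automatically down-weighted.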
