Robust estimation with Lasso when outputs are adversarially contaminated
We consider robust estimation when the outputs are adversarially contaminated. Nguyen and Tran (2012) proposed an extended Lasso for robust parameter estimation and derived a convergence rate for the estimation error. Recently, Dalalyan and Thompson (2019) established some useful inequalities and obtained a sharper convergence rate than Nguyen and Tran (2012). They exploited the fact that the minimization problem of the extended Lasso can be reduced to that of a Huber loss function with an L_1 penalty. The distinguishing point is that the Huber loss function involves an extra tuning parameter, which the conventional method does not. However, there is a critical mistake in the proof of Dalalyan and Thompson (2019). We repair the proof and obtain a convergence rate sharper than that of Nguyen and Tran (2012) when the number of outliers is large. The significance of our proof lies in the use of specific properties of the Huber function, techniques that have not appeared in previous proofs.
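To make the equivalence concrete, the following is a minimal sketch of the reformulation, assuming the extended Lasso takes the standard form with an explicit outlier vector \theta; the exact scaling of the penalties is only illustrative and may differ from the papers cited above.

\min_{\beta,\,\theta} \;\; \tfrac{1}{2}\,\|y - X\beta - \theta\|_2^2 \;+\; \lambda \|\beta\|_1 \;+\; \mu \|\theta\|_1 .

Minimizing over each coordinate \theta_i separately, with r_i = y_i - x_i^\top \beta, gives
\min_{\theta_i} \bigl\{ \tfrac{1}{2}(r_i - \theta_i)^2 + \mu |\theta_i| \bigr\} \;=\; H_\mu(r_i),
\qquad
H_\mu(r) =
\begin{cases}
r^2/2, & |r| \le \mu,\\
\mu|r| - \mu^2/2, & |r| > \mu,
\end{cases}
so profiling out \theta leaves the penalized Huber objective
\min_{\beta} \; \sum_{i=1}^{n} H_\mu\bigl(y_i - x_i^\top \beta\bigr) + \lambda \|\beta\|_1,
in which the Huber threshold \mu, inherited from the penalty on the outlier vector, is the extra tuning parameter mentioned above.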