High-Dimensional Composite Quantile Regression: Optimal Statistical Guarantees and Fast Algorithms

08/21/2022
by Haeseong Moon, et al.

Composite quantile regression (CQR) was introduced by Zou and Yuan [Ann. Statist. 36 (2008) 1108–1126] as a robust regression method for linear models with heavy-tailed errors that simultaneously achieves high efficiency. Its penalized counterpart for high-dimensional sparse models was recently studied in Gu and Zou [IEEE Trans. Inf. Theory 66 (2020) 7132–7154], along with a specialized optimization algorithm based on the alternating direction method of multipliers (ADMM). Compared with the various first-order algorithms available for penalized least squares, ADMM-based algorithms do not scale well to large problems. To overcome this computational bottleneck, in this paper we apply a convolution smoothing technique to CQR, complemented with iteratively reweighted ℓ_1-regularization. The smoothed composite loss function is convex, twice continuously differentiable, and, with high probability, locally strongly convex. We propose a gradient-based algorithm for penalized smoothed CQR via a variant of the majorization-minimization principle, which is substantially more computationally efficient than ADMM. Theoretically, we show that the iteratively reweighted ℓ_1-penalized smoothed CQR estimator achieves a near-minimax optimal convergence rate under heavy-tailed errors without any moment constraint, and further attains a near-oracle convergence rate under a weaker minimum signal strength condition than that required by Gu and Zou (2020). Numerical studies demonstrate that the proposed method offers significant computational advantages without compromising statistical performance, compared with two state-of-the-art methods that achieve robustness and high efficiency simultaneously.
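The key construction in the abstract — replacing each non-smooth check loss ρ_τ with its convolution against a kernel of bandwidth h — admits a closed form when a Gaussian kernel is used. The sketch below is an illustration of that idea, not the authors' code: the Gaussian-kernel choice, the function names, and the uniform averaging over quantile levels are our assumptions. It computes the smoothed check loss ℓ_h(u) = τu − uΦ(−u/h) + hφ(u/h), its derivative (the φ terms cancel, leaving τ − Φ(−u/h)), and the composite objective with a shared slope and level-specific intercepts:

```python
import math

def smoothed_check_loss(u, tau, h):
    """Gaussian-kernel convolution smoothing of the check loss rho_tau.

    Closed form: ell_h(u) = tau*u - u*Phi(-u/h) + h*phi(u/h),
    where phi and Phi are the standard normal pdf and cdf.
    """
    phi = math.exp(-0.5 * (u / h) ** 2) / math.sqrt(2.0 * math.pi)
    Phi_neg = 0.5 * (1.0 + math.erf(-u / (h * math.sqrt(2.0))))
    return tau * u - u * Phi_neg + h * phi

def smoothed_check_grad(u, tau, h):
    """Derivative in u: the pdf terms cancel, leaving tau - Phi(-u/h)."""
    Phi_neg = 0.5 * (1.0 + math.erf(-u / (h * math.sqrt(2.0))))
    return tau - Phi_neg

def smoothed_cqr_loss(y, xb, intercepts, taus, h):
    """Composite objective: average the smoothed check loss over K quantile
    levels tau_k, with a shared slope (xb[i] = x_i @ beta) and a separate
    intercept b_k per level."""
    n, K = len(y), len(taus)
    return sum(
        smoothed_check_loss(y[i] - xb[i] - b, t, h)
        for i in range(n)
        for b, t in zip(intercepts, taus)
    ) / (n * K)
```

As h → 0 the smoothed loss recovers the ordinary check loss, and the closed-form gradient τ − Φ(−u/h) is what makes gradient-based majorization-minimization steps cheap relative to ADMM.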

