Sparse Quantile Regression
We consider both ℓ_0-penalized and ℓ_0-constrained quantile regression estimators. For the ℓ_0-penalized estimator, we derive an exponential inequality on the tail probability of excess quantile prediction risk and apply it to obtain non-asymptotic upper bounds on the mean-square parameter and regression function estimation errors. We also derive analogous results for the ℓ_0-constrained estimator. The resulting rates of convergence are minimax-optimal and the same as those for ℓ_1-penalized estimators. Further, we characterize expected Hamming loss for the ℓ_0-penalized estimator. We implement the proposed procedure via mixed integer linear programming and also a more scalable first-order approximation algorithm. We illustrate the finite-sample performance of our approach in Monte Carlo experiments and its usefulness in a real data application concerning conformal prediction of infant birth weights (with p > 10^3). In sum, our ℓ_0-based method produces a much sparser estimator than the ℓ_1-penalized approach without compromising precision.
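For concreteness, the ℓ_0-penalized estimator minimizes the empirical check loss plus an ℓ_0 penalty. The display below is a sketch in standard quantile-regression notation; the symbols λ, τ, and the data (y_i, x_i) are generic and not tied to the paper's exact definitions:

```latex
\hat{\beta} \in \arg\min_{\beta \in \mathbb{R}^p}
  \frac{1}{n}\sum_{i=1}^{n} \rho_\tau\!\left(y_i - x_i^{\top}\beta\right)
  + \lambda\,\|\beta\|_0,
\qquad
\rho_\tau(u) = u\left(\tau - \mathbf{1}\{u < 0\}\right),
```

where ||β||_0 counts the nonzero coefficients; the ℓ_0-constrained variant instead imposes ||β||_0 ≤ k for a sparsity bound k.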
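Because the check loss is piecewise linear, the ℓ_0-constrained problem admits a mixed integer linear program via the usual residual split and big-M indicator constraints. The following is a minimal sketch of that formulation; the PuLP/CBC solver choice, the big-M value M, and the sparsity bound k are illustrative assumptions, not the authors' exact implementation:

```python
# Minimal big-M MILP sketch for l0-constrained quantile regression.
# Solver (PuLP/CBC), M, and k are illustrative assumptions.
import numpy as np
import pulp

def l0_quantile_milp(X, y, tau=0.5, k=5, M=100.0):
    n, p = X.shape
    prob = pulp.LpProblem("l0_quantile_regression", pulp.LpMinimize)

    # Coefficients, residual splits u_i^+ / u_i^-, and support indicators z_j
    beta = [pulp.LpVariable(f"beta_{j}", lowBound=-M, upBound=M) for j in range(p)]
    u_pos = [pulp.LpVariable(f"u_pos_{i}", lowBound=0) for i in range(n)]
    u_neg = [pulp.LpVariable(f"u_neg_{i}", lowBound=0) for i in range(n)]
    z = [pulp.LpVariable(f"z_{j}", cat=pulp.LpBinary) for j in range(p)]

    # Check-loss objective: sum_i tau*u_i^+ + (1-tau)*u_i^-
    prob += pulp.lpSum(tau * u_pos[i] + (1 - tau) * u_neg[i] for i in range(n))

    # Residual decomposition: x_i'beta + u_i^+ - u_i^- = y_i
    for i in range(n):
        prob += (pulp.lpSum(float(X[i, j]) * beta[j] for j in range(p))
                 + u_pos[i] - u_neg[i] == float(y[i]))

    # Big-M link: beta_j = 0 unless z_j = 1; at most k active coefficients
    for j in range(p):
        prob += beta[j] <= M * z[j]
        prob += beta[j] >= -M * z[j]
    prob += pulp.lpSum(z) <= k

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return np.array([pulp.value(b) for b in beta])
```

The big-M link makes each binary z_j switch its coefficient on or off, so the cardinality constraint on z enforces ||β||_0 ≤ k exactly; the penalized version would instead add λ·Σ z_j to the objective.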
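For larger problems the abstract mentions a more scalable first-order approximation algorithm. One generic heuristic in that family is projected subgradient descent with hard thresholding; the sketch below is an illustrative stand-in under assumed step size, iteration count, and initialization, not the paper's algorithm:

```python
# Illustrative first-order heuristic (subgradient + hard thresholding)
# for the l0-constrained problem; hyperparameters are assumptions.
import numpy as np

def check_subgrad(r, tau):
    # Subgradient of rho_tau at residuals r = y - X @ beta
    return np.where(r < 0, tau - 1.0, tau)

def iht_quantile(X, y, tau=0.5, k=5, step=1e-2, iters=500):
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        r = y - X @ beta
        g = -X.T @ check_subgrad(r, tau) / n  # subgradient of the check loss
        beta -= step * g
        # Hard-threshold: keep only the k largest-magnitude coefficients
        keep = np.argsort(np.abs(beta))[-k:]
        mask = np.zeros(p, dtype=bool)
        mask[keep] = True
        beta[~mask] = 0.0
    return beta

# Usage: beta_hat = iht_quantile(X, y, tau=0.5, k=5)
```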