
High Dimensional Classification through ℓ_0-Penalized Empirical Risk Minimization

by   Le-Yu Chen, et al.
Columbia University
Academia Sinica

We consider a high-dimensional binary classification problem and construct a classification procedure by minimizing the empirical misclassification risk with a penalty on the number of selected features. We derive non-asymptotic probability bounds on the estimated sparsity as well as on the excess misclassification risk. In particular, we show that our method yields a sparse solution whose ℓ_0-norm can be arbitrarily close to the true sparsity with high probability, and we obtain rates of convergence for the excess misclassification risk. The proposed procedure is implemented via mixed integer linear programming. Its numerical performance is illustrated in Monte Carlo experiments.
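To make the objective concrete: the procedure minimizes the empirical 0-1 loss plus a penalty proportional to the number of nonzero coefficients. The paper solves this exactly via mixed integer linear programming; the sketch below instead uses a brute-force search over small supports and a coarse coefficient grid on tiny synthetic data, purely to illustrate the ℓ_0-penalized objective. All names, the penalty value `lam`, the grid, and the support cap `k_max` are illustrative choices, not the authors' settings.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic data: only the first 2 of 6 features carry signal.
n, p = 60, 6
X = rng.normal(size=(n, p))
beta_true = np.array([1.5, -1.0, 0.0, 0.0, 0.0, 0.0])
y = np.where(X @ beta_true + 0.3 * rng.normal(size=n) > 0, 1, -1)

def empirical_risk(b):
    """Fraction of points misclassified by the linear rule sign(X @ b)."""
    return float(np.mean(np.sign(X @ b) != y))

lam = 0.02          # hypothetical per-feature penalty (tuning parameter)
grid = np.linspace(-2, 2, 9)  # coarse coefficient grid, sketch only
k_max = 3           # cap on support size to keep the enumeration small

best_obj, best_b = np.inf, np.zeros(p)
for k in range(k_max + 1):
    for S in itertools.combinations(range(p), k):
        # Grid-search coefficients on support S (brute force, not MILP).
        for coefs in itertools.product(grid, repeat=k):
            b = np.zeros(p)
            b[list(S)] = coefs
            obj = empirical_risk(b) + lam * k   # risk + lam * ||b||_0
            if obj < best_obj:
                best_obj, best_b = obj, b

print("best objective:", best_obj)
print("selected support:", np.nonzero(best_b)[0])
```

The brute-force search is exponential in `p` and only feasible for toy sizes; the MILP formulation used in the paper is what makes the approach practical in higher dimensions.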
