High Dimensional Classification through ℓ_0-Penalized Empirical Risk Minimization

11/23/2018
by Le-Yu Chen, et al. (Columbia University; Academia Sinica)

We consider a high dimensional binary classification problem and construct a classification procedure by minimizing the empirical misclassification risk with a penalty on the number of selected features. We derive non-asymptotic probability bounds on the estimated sparsity as well as on the excess misclassification risk. In particular, we show that our method yields a sparse solution whose ℓ_0-norm can be arbitrarily close to the true sparsity with high probability, and we obtain rates of convergence for the excess misclassification risk. The proposed procedure is implemented via mixed integer linear programming. Its numerical performance is illustrated in Monte Carlo experiments.
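The objective described above is empirical risk plus an ℓ_0 penalty: minimize (1/n) Σ 1{y_i ≠ sign(x_i·β)} + λ‖β‖_0 over coefficient vectors β. The sketch below is not the paper's MILP implementation; it is a hedged toy illustration that evaluates this penalized objective by brute force over a coarse coefficient grid, with the dataset and penalty level λ chosen for illustration only.

```python
from itertools import product

# Toy data: the label depends only on features 0 and 2 (sparse truth).
X = [(1.0, 0.3, -0.5, 0.2),
     (-0.8, 1.1, 0.4, -0.3),
     (0.3, -0.7, 0.9, 0.9),
     (-0.4, 0.6, -1.0, -1.0),
     (1.3, -0.4, 0.1, 0.5),
     (-1.1, 0.2, -0.6, 0.4)]
y = [1 if x[0] - x[2] > 0 else -1 for x in X]

lam = 0.05  # penalty charged per selected feature (hypothetical choice)

def penalized_risk(beta):
    """Empirical misclassification rate plus lam * ||beta||_0."""
    errors = sum(1 for xi, yi in zip(X, y)
                 if yi * sum(b * v for b, v in zip(beta, xi)) <= 0)
    sparsity = sum(1 for b in beta if b != 0)
    return errors / len(X) + lam * sparsity

# Brute-force search over the coarse coefficient grid {-1, 0, 1}^4;
# the paper instead solves this exactly as a mixed integer linear program.
best = min(product((-1, 0, 1), repeat=4), key=penalized_risk)
print(best, penalized_risk(best))
```

On this toy data the minimizer selects exactly the two relevant features, showing how the ℓ_0 penalty trades a small per-feature cost against misclassification errors. The MILP formulation in the paper replaces this enumeration, which is infeasible in high dimensions, with an exact optimization over continuous coefficients.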

