Sparse Learning in reproducing kernel Hilbert space

01/03/2019
by Xin He, et al.

Sparse learning aims to recover the sparse structure of the true target function from the collected data, and it plays a crucial role in high-dimensional data analysis. This article proposes a unified and universal method for learning the sparsity of M-estimators within a rich family of loss functions in a reproducing kernel Hilbert space (RKHS). The family of loss functions considered is very rich and includes most of those commonly used in the literature. More importantly, the proposed method is motivated by several nice properties of the induced RKHS; it is computationally efficient for large-scale data and can be further accelerated through parallel computing. The asymptotic estimation and selection consistencies of the proposed method are established for a general loss function under mild conditions. The method works for general loss functions, admits general dependence structures, allows for efficient computation, and comes with theoretical guarantees. Its superior performance is also supported by a variety of simulated examples and a real application to a human breast cancer study (GSE20194).
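To make the idea of learning sparsity in an RKHS concrete, the sketch below illustrates one common gradient-based recipe: fit a kernel estimator and score each variable by the empirical norm of the fitted function's partial derivative, which is small for irrelevant variables. This is only a minimal illustration under assumed choices (Gaussian kernel, squared loss, ridge penalty, ad hoc threshold); it is not the paper's exact procedure, which covers a general family of losses.

```python
# Minimal sketch: gradient-based sparsity learning with a Gaussian-kernel
# ridge estimator. Kernel, squared loss, and the hard threshold are
# illustrative assumptions, not the paper's exact method.
import numpy as np

def gaussian_kernel(X, Z, sigma):
    """Gram matrix K[i, j] = exp(-||X[i] - Z[j]||^2 / (2 sigma^2))."""
    sq_dists = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def fit_kernel_ridge(X, y, sigma=1.0, lam=0.1):
    """Solve (K + n*lam*I) alpha = y, giving f(.) = sum_i alpha_i K(., x_i)."""
    n = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def gradient_scores(X, alpha, sigma=1.0):
    """Empirical L2 norm of each partial derivative of the fitted function;
    a small score suggests the corresponding variable is irrelevant."""
    n, p = X.shape
    K = gaussian_kernel(X, X, sigma)
    scores = np.zeros(p)
    for j in range(p):
        # d/dx_j f(x_k) = sum_i alpha_i K(x_k, x_i) * (x_{i,j} - x_{k,j}) / sigma^2
        diff = (X[None, :, j] - X[:, None, j]) / sigma ** 2  # shape (n, n)
        grad_j = (K * diff) @ alpha                          # shape (n,)
        scores[j] = np.sqrt(np.mean(grad_j ** 2))
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p = 200, 10
    X = rng.uniform(-1, 1, size=(n, p))
    # True function depends only on the first two variables.
    y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(n)
    alpha = fit_kernel_ridge(X, y, sigma=0.8, lam=0.05)
    scores = gradient_scores(X, alpha, sigma=0.8)
    selected = np.where(scores > 0.2 * scores.max())[0]  # ad hoc threshold
    print("gradient scores:", np.round(scores, 3))
    print("selected variables:", selected)
```

Because the score computation decomposes over variables and over data blocks, this kind of procedure scales naturally to large data sets and parallel computation, which is in line with the computational claims of the abstract.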


