Manifold regularization based on Nyström type subsampling

10/13/2017
by Abhishake Rastogi, et al.

In this paper, we study Nyström type subsampling for large-scale kernel methods to reduce the computational complexity of learning from big data. We discuss a multi-penalty regularization scheme based on Nyström type subsampling, motivated by the well-studied manifold regularization schemes. We develop a theoretical analysis of the multi-penalty least-squares regularization scheme under a general source condition in the vector-valued function setting, so the results also apply to multi-task learning problems. Using the concept of effective dimension, we obtain the minimax optimal convergence rates of multi-penalty regularization for an appropriate subsampling size. We also discuss an aggregation approach based on the linear function strategy to combine various Nyström approximants. Finally, we demonstrate the performance of multi-penalty regularization based on Nyström type subsampling on the Caltech-101 data set for multi-class image classification and on the NSL-KDD benchmark data set for the intrusion detection problem.
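As a rough illustration of the kind of scheme described above, the sketch below restricts a two-penalty (RKHS norm plus graph-Laplacian) least-squares problem to the span of m uniformly subsampled landmark points. It is a minimal sketch, not the paper's method: the Gaussian kernel, the k-NN Laplacian construction, the 1/n and 1/n^2 penalty normalizations, and names such as nystrom_manifold_rls are assumptions made for the sake of a runnable example.

import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Z."""
    sq = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Z**2, axis=1)[None, :]
        - 2.0 * X @ Z.T
    )
    return np.exp(-gamma * sq)

def knn_graph_laplacian(X, k=5, gamma=1.0):
    """Unnormalized Laplacian of a symmetrized k-NN similarity graph on X."""
    W = rbf_kernel(X, X, gamma)
    np.fill_diagonal(W, 0.0)
    # keep only the k largest similarities per row, then symmetrize
    drop = np.argsort(W, axis=1)[:, :-k]
    np.put_along_axis(W, drop, 0.0, axis=1)
    W_sym = np.maximum(W, W.T)
    return np.diag(W_sym.sum(axis=1)) - W_sym

def nystrom_manifold_rls(X, y, m, lam_a, lam_i, gamma=1.0, k=5, rng=None):
    """Two-penalty (RKHS-norm + graph-Laplacian) least squares restricted to
    the span of m uniformly subsampled landmarks (Nystrom-type restriction).
    Returns the landmark points and the expansion coefficients."""
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    landmarks = X[rng.choice(n, size=m, replace=False)]

    K_nm = rbf_kernel(X, landmarks, gamma)          # n x m cross-kernel
    K_mm = rbf_kernel(landmarks, landmarks, gamma)  # m x m landmark kernel
    L = knn_graph_laplacian(X, k=k, gamma=gamma)    # n x n graph Laplacian

    # Normal equations for the objective
    #   (1/n)||K_nm c - y||^2 + lam_a c' K_mm c + (lam_i/n^2)(K_nm c)' L (K_nm c)
    A = (
        K_nm.T @ K_nm / n
        + lam_a * K_mm
        + (lam_i / n**2) * K_nm.T @ L @ K_nm
    )
    c = np.linalg.solve(A + 1e-10 * np.eye(m), K_nm.T @ y / n)
    return landmarks, c

def predict(X_new, landmarks, c, gamma=1.0):
    """Evaluate the Nystrom-restricted estimator at new points."""
    return rbf_kernel(X_new, landmarks, gamma) @ c

Only an m x m system is solved, so no n x n matrix is ever inverted; passing a label matrix y of shape (n, T) instead of a vector covers the vector-valued / multi-task case without any change to the code.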


