An L1 Representer Theorem for Multiple-Kernel Regression

11/02/2018
by Shayan Aziznejad, et al.

The theory of reproducing kernel Hilbert spaces (RKHS) provides an elegant framework for supervised learning. It is the foundation of all kernel methods in machine learning. Implicit in its formulation is the use of a quadratic regularizer associated with the underlying inner product, which imposes smoothness constraints. In this paper, we consider instead the generalized total-variation (gTV) norm as a sparsity-promoting regularizer. This leads us to propose a new Banach-space framework that justifies the use of the generalized LASSO, albeit in a slightly modified version. We prove a representer theorem for multiple-kernel regression (MKR) with gTV regularization. The theorem states that the solutions of MKR have kernel expansions with adaptive positions, while the gTV norm enforces an ℓ_1 penalty on the coefficients. We discuss the sparsity-promoting effect of the gTV norm, which prevents redundancy in the multiple-kernel scenario.
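As a rough illustration of the kind of estimator the theorem describes, the sketch below fits a multiple-kernel expansion f(x) = Σ_k Σ_i a_{k,i} g_k(x, x_i) with an ℓ_1 penalty on the coefficients, i.e. the finite-dimensional LASSO-type problem that the representer theorem reduces gTV-regularized MKR to. The kernel choices (Gaussian and Laplacian), the fixed centers at the data points (the theorem allows adaptive positions), the scikit-learn Lasso solver, and all parameter values are illustrative assumptions, not the authors' algorithm or code.

```python
# Sketch: multiple-kernel regression with an l1 penalty on the expansion
# coefficients (illustrative assumptions; not the paper's implementation).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Toy 1-D regression data (any (x_i, y_i) pairs would do).
x = np.sort(rng.uniform(-3, 3, size=80))
y = np.sinc(x) + 0.05 * rng.standard_normal(x.shape)

# Two candidate kernels; the l1 penalty should discard redundant atoms.
def gauss(x1, x2, width=0.5):
    return np.exp(-((x1[:, None] - x2[None, :]) ** 2) / (2 * width ** 2))

def laplace(x1, x2, width=0.5):
    return np.exp(-np.abs(x1[:, None] - x2[None, :]) / width)

# Kernel dictionary centered at the data points: columns are g_k(., x_i).
Phi = np.hstack([gauss(x, x), laplace(x, x)])

# l1-penalized least squares on the coefficients (generalized-LASSO flavor).
model = Lasso(alpha=1e-3, fit_intercept=True, max_iter=50000)
model.fit(Phi, y)
print("nonzero coefficients:", np.count_nonzero(model.coef_), "of", Phi.shape[1])

# Predict at new locations using the same kernel dictionary.
x_new = np.linspace(-3, 3, 200)
y_hat = model.predict(np.hstack([gauss(x_new, x), laplace(x_new, x)]))
```

In this toy setting, most coefficients are driven to zero, which loosely mirrors the sparsity-promoting effect of the gTV norm across the multiple-kernel dictionary discussed in the abstract.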


Related research

A unifying representer theorem for inverse problems and machine learning (03/02/2019)
The standard approach for dealing with the ill-posedness of the training...

Implicit Kernel Attention (06/11/2020)
Attention computes the dependency between representations, and it encoura...

A total variation based regularizer promoting piecewise-Lipschitz reconstructions (03/12/2019)
We introduce a new regularizer in the total variation family that promot...

Reproducing Kernel Banach Spaces with the l1 Norm (01/23/2011)
Targeting sparse learning, we construct Banach spaces B of functions ...

Regularization Strategies and Empirical Bayesian Learning for MKL (11/13/2010)
Multiple kernel learning (MKL), structured sparsity, and multi-task lear...

Regularization for Multiple Kernel Learning via Sum-Product Networks (02/13/2014)
In this paper, we are interested in constructing general graph-based reg...

Exploring Large Feature Spaces with Hierarchical Multiple Kernel Learning (09/09/2008)
For supervised and unsupervised learning, positive definite kernels allo...
