Sharp Convergence Rate and Support Consistency of Multiple Kernel Learning with Sparse and Dense Regularization

03/27/2011
by   Taiji Suzuki, et al.

We theoretically investigate the convergence rate and support consistency (i.e., correctly identifying the subset of non-zero coefficients in the large sample limit) of multiple kernel learning (MKL). We focus on MKL with block-l1 regularization (inducing sparse kernel combination), block-l2 regularization (inducing uniform kernel combination), and elastic-net regularization (combining block-l1 and block-l2 regularization). For the case where the true kernel combination is sparse, we show a sharper convergence rate for the block-l1 and elastic-net MKL methods than the existing rate for block-l1 MKL. We further show that elastic-net MKL requires a milder condition for support consistency than block-l1 MKL. For the case where the optimal kernel combination is not exactly sparse, we prove that elastic-net MKL can achieve a faster convergence rate than the block-l1 and block-l2 MKL methods by carefully controlling the balance between the block-l1 and block-l2 regularizers. Overall, our theoretical results suggest using elastic-net regularization in MKL.
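For reference, the three regularizers compared in the paper can be summarized through a single estimator; the notation below is a generic sketch and is not quoted from the paper. Writing the decision function as a sum of components f_m, each lying in the reproducing kernel Hilbert space H_m of the m-th kernel, the elastic-net MKL estimator solves

\[
\hat{f} = \arg\min_{f_1 \in H_1, \dots, f_M \in H_M} \;
\frac{1}{n}\sum_{i=1}^{n} \ell\!\Big(y_i, \sum_{m=1}^{M} f_m(x_i)\Big)
\; + \; \lambda_1 \sum_{m=1}^{M} \|f_m\|_{H_m}
\; + \; \lambda_2 \sum_{m=1}^{M} \|f_m\|_{H_m}^{2}.
\]

Setting \lambda_2 = 0 recovers block-l1 (sparse) MKL, setting \lambda_1 = 0 recovers block-l2 (uniform) MKL, and the paper's analysis concerns how the balance between \lambda_1 and \lambda_2 governs the convergence rate and support consistency.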


