Sparsity-accuracy trade-off in MKL

01/15/2010
by Ryota Tomioka et al.
We empirically investigate the best trade-off between sparse and uniformly-weighted multiple kernel learning (MKL) using the elastic-net regularization on real and simulated datasets. We find that the best trade-off parameter depends not only on the sparsity of the true kernel-weight spectrum but also on the linear dependence among kernels and the number of samples.
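For orientation, here is a sketch of the elastic-net MKL objective based on its standard formulation; the notation (τ, λ, f_m) is ours and not necessarily the paper's. The regularizer interpolates between the ℓ1-type penalty of sparse MKL and the ℓ2-type penalty of uniformly-weighted MKL:

\[
\min_{f_1,\dots,f_M}\ \sum_{i=1}^{n} \ell\!\Big(y_i,\ \sum_{m=1}^{M} f_m(x_i)\Big)
+ \lambda \sum_{m=1}^{M} \Big( (1-\tau)\,\lVert f_m \rVert_{\mathcal{H}_m} + \frac{\tau}{2}\,\lVert f_m \rVert_{\mathcal{H}_m}^{2} \Big)
\]

Setting τ = 0 recovers sparse (ℓ1-norm) MKL, τ = 1 recovers uniformly-weighted MKL, and the "best trade-off parameter" in the abstract corresponds to the intermediate values of τ explored empirically.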

Related research

03/02/2012  Fast learning rate of multiple kernel learning: Trade-off between sparsity and smoothness
11/13/2010  Regularization Strategies and Empirical Bayesian Learning for MKL
03/02/2011  Fast Convergence Rate of Multiple Kernel Learning with Elastic-net Regularization
03/27/2011  Sharp Convergence Rate and Support Consistency of Multiple Kernel Learning with Sparse and Dense Regularization
09/07/2018  Sparse Kernel PCA for Outlier Detection
09/28/2009  SpicyMKL
04/17/2019  DeepNovoV2: Better de novo peptide sequencing with deep learning
