SpicyMKL

09/28/2009
by Taiji Suzuki, et al.

We propose a new optimization algorithm for Multiple Kernel Learning (MKL), called SpicyMKL, that is applicable to general convex loss functions and general types of regularization. SpicyMKL iteratively solves smooth minimization problems, so there is no need to solve an SVM, LP, or QP internally. The algorithm can be viewed as a proximal minimization method and converges super-linearly. The cost of each inner minimization is roughly proportional to the number of active kernels; therefore, when the goal is a sparse kernel combination, the algorithm scales well as the number of kernels increases. Moreover, we give a general block-norm formulation of MKL that includes non-sparse regularizations, such as elastic-net and ℓ_p-norm regularization. Extending SpicyMKL, we propose an efficient optimization method for this general regularization framework. Experimental results show that our algorithm is faster than existing methods, especially when the number of kernels is large (> 1000).
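The proximal-minimization view can be illustrated with a minimal sketch. This is not the authors' implementation or their super-linearly convergent scheme; it is a plain proximal-gradient step for a block-ℓ1 (group-lasso style) penalty, where the proximal operator is block soft-thresholding. The function names `block_soft_threshold` and `prox_step` are illustrative, not from the paper. The sketch shows why whole kernel blocks become exactly zero, which is what lets the inner cost stay roughly proportional to the number of active kernels.

```python
import numpy as np

def block_soft_threshold(v, lam):
    """Proximal operator of the penalty lam * ||v||_2 for a single
    kernel block: shrinks the block toward zero, and sets it exactly
    to zero when its Euclidean norm is at most lam."""
    norm = np.linalg.norm(v)
    if norm <= lam:
        return np.zeros_like(v)
    return (1.0 - lam / norm) * v

def prox_step(blocks, grads, step, lam):
    """One proximal-gradient update over per-kernel coefficient blocks.
    A block whose gradient step lands inside the ball of radius
    step * lam is zeroed out, i.e. that kernel is deactivated."""
    return [block_soft_threshold(b - step * g, step * lam)
            for b, g in zip(blocks, grads)]

# Example: the small second block is driven exactly to zero.
blocks = [np.array([3.0, 4.0]), np.array([0.1, 0.1])]
grads = [np.zeros(2), np.zeros(2)]
updated = prox_step(blocks, grads, step=1.0, lam=2.5)
```

In this example the first block (norm 5.0 > 2.5) is merely shrunk to [1.5, 2.0], while the second block (norm ≈ 0.14 ≤ 2.5) becomes exactly zero; a solver that iterates such updates only pays for the surviving (active) blocks.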
