Multiple Kernel Learning: A Unifying Probabilistic Viewpoint

03/04/2011
by Hannes Nickisch, et al.

We present a probabilistic viewpoint on multiple kernel learning that unifies well-known regularised risk approaches with recent advances in approximate Bayesian inference relaxations. The framework proposes a general objective function, suitable for regression, robust regression and classification, that is a lower bound of the marginal likelihood and contains many regularised risk approaches as special cases. Furthermore, we derive an efficient and provably convergent optimisation algorithm.
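As a rough illustration of the setting (not the paper's algorithm), multiple kernel learning in a Gaussian process view combines base kernels with nonnegative weights and scores a weight vector by the log marginal likelihood, which the paper's objective lower-bounds. A minimal NumPy sketch with hypothetical RBF base kernels and fixed weights:

```python
import numpy as np

def rbf_kernel(X, lengthscale):
    # Squared-exponential base kernel on rows of X.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def log_marginal_likelihood(betas, kernels, y, noise=0.1):
    # Weighted kernel combination: K = sum_m beta_m K_m + noise^2 I.
    K = sum(b * Km for b, Km in zip(betas, kernels))
    K = K + noise ** 2 * np.eye(len(y))
    # Exact GP regression log marginal likelihood via Cholesky.
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()
            - 0.5 * len(y) * np.log(2 * np.pi))

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(30)

# Three hypothetical base kernels at different lengthscales.
kernels = [rbf_kernel(X, ls) for ls in (0.5, 1.0, 2.0)]
lml = log_marginal_likelihood(np.array([0.3, 0.4, 0.3]), kernels, y)
print(lml)
```

In a full MKL scheme the weights would themselves be optimised (here they are fixed for brevity); the paper instead optimises a variational lower bound on this quantity that also covers robust regression and classification likelihoods.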


Related research

- 02/16/2021 — Tighter Bounds on the Log Marginal Likelihood of Gaussian Process Regression Using Conjugate Gradients: "We propose a lower bound on the log marginal likelihood of Gaussian proc..."
- 05/15/2019 — Tight Kernel Query Complexity of Kernel Ridge Regression and Kernel k-means Clustering: "We present tight lower bounds on the number of kernel evaluations requir..."
- 11/02/2021 — Bayes-Newton Methods for Approximate Bayesian Inference with PSD Guarantees: "We formulate natural gradient variational inference (VI), expectation pr..."
- 03/03/2011 — The Local Rademacher Complexity of Lp-Norm Multiple Kernel Learning: "We derive an upper bound on the local Rademacher complexity of ℓ_p-norm ..."
- 09/08/2023 — Optimal Rate of Kernel Regression in Large Dimensions: "We perform a study on kernel regression for large-dimensional data (wher..."
- 05/24/2023 — Learning Rate Free Bayesian Inference in Constrained Domains: "We introduce a suite of new particle-based algorithms for sampling on co..."
- 11/16/2020 — A family of smooth piecewise-linear models with probabilistic interpretations: "The smooth piecewise-linear models cover a wide range of applications no..."
