Kernel machines with two layers and multiple kernel learning

01/15/2010
by Francesco Dinuzzo, et al.

In this paper, the framework of kernel machines with two layers is introduced, generalizing classical kernel methods. The new learning methodology provides a formal connection between computational architectures with multiple layers and the theme of kernel learning in standard regularization methods. First, a representer theorem for two-layer networks is presented, showing that finite linear combinations of kernels on each layer are optimal architectures whenever the corresponding functions solve suitable variational problems in reproducing kernel Hilbert spaces (RKHS). The input-output map expressed by these architectures turns out to be equivalent to a suitable single-layer kernel machine in which the kernel function is also learned from the data. Recently, so-called multiple kernel learning methods have attracted considerable attention in the machine learning literature. In this paper, multiple kernel learning methods are shown to be specific cases of kernel machines with two layers in which the second layer is linear. Finally, a simple and effective multiple kernel learning method called RLS2 (regularized least squares with two layers) is introduced, and its performance on several learning problems is extensively analyzed. An open source MATLAB toolbox to train and validate RLS2 models with a Graphic User Interface is available.
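To make the two-layer idea concrete, the following is a minimal illustrative sketch (in Python with NumPy, not the authors' MATLAB toolbox and not the actual RLS2 algorithm): the first layer fits regularized least squares coefficients for a convex combination of candidate kernels, and the linear second layer is emulated by alternately re-estimating nonnegative kernel weights. The kernel choices, the weight-update heuristic, and all names here are assumptions made for illustration.

```python
import numpy as np

def two_layer_rls_sketch(X, y, kernels, lam=0.1, n_iter=20):
    """Illustrative sketch of a two-layer kernel machine with a linear
    second layer: alternate between (a) kernel ridge regression on a
    convex combination of candidate Gram matrices and (b) a heuristic
    nonnegative update of the combination weights.
    NOTE: this is NOT the RLS2 algorithm from the paper."""
    n = X.shape[0]
    Ks = [k(X, X) for k in kernels]        # candidate Gram matrices
    d = np.ones(len(Ks)) / len(Ks)         # second-layer kernel weights
    c = np.zeros(n)
    for _ in range(n_iter):
        K = sum(w * Km for w, Km in zip(d, Ks))
        # first layer: regularized least squares coefficients
        c = np.linalg.solve(K + lam * np.eye(n), y)
        # second layer (heuristic): favor kernels aligned with the fit
        scores = np.maximum([c @ Km @ c for Km in Ks], 0.0)
        if scores.sum() > 0:
            d = scores / scores.sum()
    return c, d

# two hypothetical candidate kernels: linear and Gaussian
linear_k = lambda A, B: A @ B.T
rbf_k = lambda A, B: np.exp(-np.sum((A[:, None] - B[None]) ** 2, axis=-1))

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=30)
c, d = two_layer_rls_sketch(X, y, [linear_k, rbf_k])
```

The resulting predictor is the single-layer machine with learned kernel `sum(d[m] * k_m)`, which is the equivalence the representer theorem in the paper formalizes.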


