Deep Multiple Kernel Learning

10/11/2013
by Eric Strobl et al.

Deep learning methods have predominantly been applied to large artificial neural networks. Despite their state-of-the-art performance, these large networks typically do not generalize well to datasets with limited sample sizes. In this paper, we take a different approach by learning multiple layers of kernels. We combine kernels at each layer and then optimize over an estimate of the support vector machine leave-one-out error rather than the dual objective function. Our experiments on a variety of datasets show that performance improves with each successive layer, using only a few base kernels.
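To make the layered construction concrete, here is a minimal sketch, not the authors' implementation: it assumes RBF base kernels, composes a second-layer RBF kernel on top of the layer-1 combination via the kernel-trick identity ||phi(x) - phi(z)||^2 = k(x,x) + k(z,z) - 2k(x,z), and uses exact leave-one-out cross-validation as a stand-in for the paper's leave-one-out error estimate. All function names and parameter values (`rbf`, `layer1`, `gammas`, `gamma2`) are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import LeaveOneOut
from sklearn.svm import SVC

def rbf(X, Z, gamma):
    # Pairwise RBF kernel matrix: exp(-gamma * ||x - z||^2).
    d2 = (np.sum(X**2, axis=1)[:, None] + np.sum(Z**2, axis=1)[None, :]
          - 2.0 * X @ Z.T)
    return np.exp(-gamma * d2)

def layer1(X, Z, weights, gammas):
    # Layer 1: convex combination of a few base RBF kernels.
    return sum(w * rbf(X, Z, g) for w, g in zip(weights, gammas))

def layer2(K_xz, diag_x, diag_z, gamma2):
    # Layer 2: RBF kernel in the feature space induced by layer 1,
    # via ||phi(x) - phi(z)||^2 = k(x,x) + k(z,z) - 2 k(x,z).
    d2 = diag_x[:, None] + diag_z[None, :] - 2.0 * K_xz
    return np.exp(-gamma2 * d2)

def loo_error(K, y, C=1.0):
    # Stand-in for the paper's LOO-error estimate: exact leave-one-out
    # with a precomputed kernel (n SVM fits; feasible for small samples).
    errors = 0
    for train, test in LeaveOneOut().split(K, y):
        clf = SVC(C=C, kernel="precomputed")
        clf.fit(K[np.ix_(train, train)], y[train])
        errors += int(clf.predict(K[np.ix_(test, train)])[0] != y[test][0])
    return errors / len(y)

# Toy data standing in for a limited-sample dataset.
X, y = make_classification(n_samples=80, n_features=10, random_state=0)
gammas = [0.01, 0.1, 1.0]                          # hypothetical kernel widths
weights = np.full(len(gammas), 1.0 / len(gammas))  # uniform layer-1 weights
K1 = layer1(X, X, weights, gammas)
K2 = layer2(K1, np.diag(K1), np.diag(K1), gamma2=0.5)
print("estimated leave-one-out error:", loo_error(K2, y))
```

With only a handful of base kernels and a small sample, the leave-one-out loop is cheap, which matches the regime the abstract targets; in the paper's setting, the kernel weights at each layer would then be tuned to minimize that estimated error rather than the SVM dual objective.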


Related research

- 12/20/2011: Alignment Based Kernel Learning with a Continuous Set of Base Kernels. The success of kernel-based learning methods depends on the choice of ker...
- 10/11/2021: NFT-K: Non-Fungible Tangent Kernels. Deep neural networks have become essential for numerous applications due...
- 07/30/2020: Improving Sample Efficiency with Normalized RBF Kernels. In deep learning models, learning more with less data is becoming more i...
- 11/29/2019: Deep Networks with Adaptive Nyström Approximation. Recent work has focused on combining kernel methods and deep learning to...
- 09/06/2017: Deep learning from crowds. Over the last few years, deep learning has revolutionized the field of m...
- 01/21/2021: Generative Autoencoder Kernels on Deep Learning for Brain Activity Analysis. Deep Learning (DL) is a two-step classification model that consists of feat...
- 03/12/2019: Paradox in Deep Neural Networks: Similar yet Different while Different yet Similar. Machine learning is advancing towards a data-science approach, implying ...
