Super-Samples from Kernel Herding

03/15/2012
by Yutian Chen et al.

We extend the herding algorithm to continuous spaces by using the kernel trick. The resulting "kernel herding" algorithm is an infinite-memory, deterministic process that learns to approximate a PDF with a collection of samples. We show that kernel herding decreases the error of expectations of functions in the Hilbert space at a rate O(1/T), which is much faster than the usual O(1/√T) for iid random samples. We illustrate kernel herding by approximating Bayesian predictive distributions.
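The greedy selection rule behind kernel herding can be sketched in a few lines: at each step, pick the point that maximizes the (empirical) mean embedding of the target distribution minus the average kernel similarity to the samples already chosen. The sketch below is a minimal illustration over a finite candidate grid with an RBF kernel; the function names, the candidate-set discretization, and the bandwidth are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Pairwise RBF kernel matrix k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def kernel_herding(candidates, target_samples, T, gamma=1.0):
    """Greedy kernel herding restricted to a finite candidate set.

    At step t, choose
        x_{t+1} = argmax_x  mu_p(x) - (1/(t+1)) * sum_{s<=t} k(x, x_s),
    where mu_p is estimated from samples of the target distribution.
    """
    # Empirical mean embedding mu_p(x) evaluated at every candidate.
    mu = rbf_kernel(candidates, target_samples, gamma).mean(axis=1)
    # Precompute candidate-candidate kernel for the repulsion term.
    K = rbf_kernel(candidates, candidates, gamma)

    chosen, repulsion = [], np.zeros(len(candidates))
    for t in range(T):
        objective = mu - repulsion / (t + 1)
        i = int(np.argmax(objective))
        chosen.append(i)
        repulsion += K[:, i]  # accumulate similarity to selected points
    return candidates[chosen]
```

For example, herding 20 "super-samples" from a standard normal (represented by 500 iid draws) yields a small, well-spread point set whose empirical moments track the target closely, reflecting the O(1/T) rate discussed above.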



Related research:

- Optimally-Weighted Herding is Bayesian Quadrature (08/09/2014): Herding and kernel herding are deterministic methods of choosing samples...
- Controlling Travel Path of Original Cobra (10/15/2022): In this paper we propose a kernel based COBRA which is a direct approxim...
- Kernel-based interpolation at approximate Fekete points (12/16/2019): We construct approximate Fekete point sets for kernel-based interpolatio...
- Fast Graph Kernel with Optical Random Features (10/16/2020): The graphlet kernel is a classical method in graph classification. It ho...
- Kernel quadrature with DPPs (06/18/2019): We study quadrature rules for functions living in an RKHS, using nodes s...
- Consistent Interpolating Ensembles via the Manifold-Hilbert Kernel (05/19/2022): Recent research in the theory of overparametrized learning has sought to...
- Small Sample Spaces for Gaussian Processes (03/04/2021): It is known that the membership in a given reproducing kernel Hilbert sp...
