On the Equivalence between Kernel Quadrature Rules and Random Feature Expansions

02/24/2015
by   Francis Bach, et al.

We show that kernel-based quadrature rules for computing integrals can be seen as a special case of random feature expansions for positive definite kernels, for a particular decomposition that always exists for such kernels. We provide a theoretical analysis of the number of samples required for a given approximation error, leading to both upper and lower bounds that depend solely on the eigenvalues of the associated integral operator and match up to logarithmic terms. In particular, we show that the upper bound can be attained with independent and identically distributed samples from a specific non-uniform distribution, while the lower bound is valid for any set of points. While our results are fairly general, applying them to kernel-based quadrature recovers known upper and lower bounds for the special case of Sobolev spaces. Moreover, our results extend to the more general problem of full function approximation (beyond simply computing an integral), with results in L2- and L∞-norm that match known results for special cases. Applying our results to random features, we improve the number of random features needed to preserve generalization guarantees for learning with Lipschitz-continuous losses.
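The random feature expansions discussed in the abstract can be illustrated with a minimal numerical sketch. The example below uses classical random Fourier features for a Gaussian kernel, with frequencies drawn i.i.d. from a Gaussian distribution; this is the plain uniform-sampling scheme, not the optimized non-uniform distribution analyzed in the paper, and all function and variable names here are illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(X, Y, sigma=1.0):
    # Exact Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def random_features(X, W, b):
    # Feature map phi(x) = sqrt(2/m) * cos(W x + b); by Bochner's theorem,
    # E[phi(x) . phi(y)] = k(x, y) when rows of W are sampled from the
    # kernel's spectral density (Gaussian for the RBF kernel).
    m = W.shape[0]
    return np.sqrt(2.0 / m) * np.cos(X @ W.T + b)

d, m, sigma = 2, 2000, 1.0
W = rng.normal(scale=1.0 / sigma, size=(m, d))   # i.i.d. frequencies
b = rng.uniform(0.0, 2.0 * np.pi, size=m)        # random phases

X = rng.normal(size=(5, d))
K_exact = rbf_kernel(X, X, sigma)
Phi = random_features(X, W, b)
K_approx = Phi @ Phi.T
err = np.abs(K_exact - K_approx).max()           # shrinks as O(1/sqrt(m))
```

The approximation error decays at the Monte Carlo rate in the number of features m; the paper's analysis sharpens this by tying the required m to the eigenvalue decay of the associated integral operator.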

research

02/06/2018 · Near-Optimal Coresets of Kernel Density Estimates
We construct near-optimal coresets for kernel density estimate for point...

03/26/2022 · Constant factor approximations for Lower and Upper bounded Clusterings
Clustering is one of the most fundamental problems in Machine Learning. R...

09/24/2019 · Simple and Almost Assumption-Free Out-of-Sample Bound for Random Feature Mapping
Random feature mapping (RFM) is a popular method for speeding up kernel ...

12/20/2018 · On the positivity and magnitudes of Bayesian quadrature weights
This article reviews and studies the properties of Bayesian quadrature w...

09/29/2015 · Foundations of Coupled Nonlinear Dimensionality Reduction
In this paper we introduce and analyze the learning scenario of coupled ...

01/29/2020 · An Upper Bound of the Bias of Nadaraya-Watson Kernel Regression under Lipschitz Assumptions
The Nadaraya-Watson kernel estimator is among the most popular nonparame...

11/29/2016 · Choquet integral in decision analysis - lessons from the axiomatization
The Choquet integral is a powerful aggregation operator which lists many...
