Robust Uncertainty Bounds in Reproducing Kernel Hilbert Spaces: A Convex Optimization Approach

04/19/2021
by   Paul Scharnhorst, et al.

Let a labeled dataset of scattered samples be given, and consider the hypothesis that the ground truth belongs to the reproducing kernel Hilbert space (RKHS) of a known positive-definite kernel. It is known that out-of-sample bounds can be established at unseen input locations, thus limiting the risk associated with learning this function. We show that computing tight, finite-sample uncertainty bounds amounts to solving parametric quadratically constrained linear programs. In our setting, the outputs are assumed to be contaminated by bounded measurement noise that may otherwise originate from any compactly supported distribution, and no independence assumptions are made on the available data. Numerical experiments compare the present results with other closed-form alternatives.
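The core idea can be sketched numerically. Under the abstract's assumptions (a known RKHS-norm budget ||f|| ≤ Γ and a known noise bound |y_i − f(x_i)| ≤ δ), the tight upper and lower bounds on f at an unseen point x* are the optimal values of a quadratically constrained program with a linear objective. The sketch below is an illustration only, not the paper's implementation: the RBF kernel, the data, and the constants Γ and δ are all assumptions, and the program is solved with a generic SLSQP solver rather than a dedicated QCLP method.

```python
import numpy as np
from scipy.optimize import minimize

def rbf(a, b, ls=0.2):
    """Gaussian (RBF) kernel Gram matrix between two 1-D point sets."""
    return np.exp(-np.subtract.outer(a, b) ** 2 / (2.0 * ls**2))

rng = np.random.default_rng(0)
n = 8
X = np.linspace(0.0, 1.0, n)        # scattered input samples
delta = 0.1                         # assumed noise bound: |y_i - f(x_i)| <= delta
y = np.sin(6.0 * X) + rng.uniform(-delta, delta, n)
Gamma = 20.0                        # assumed RKHS-norm budget: ||f|| <= Gamma
x_star = 0.37                       # unseen input location

# Search over f = sum_j c_j k(z_j, .) with centers z = (x_1..x_n, x_star);
# then f(z_i) = (K c)_i and ||f||^2 = c^T K c.
Z = np.append(X, x_star)
K = rbf(Z, Z)

def bound(direction):
    """direction=+1: upper bound on f(x_star); direction=-1: lower bound."""
    obj = lambda c: -direction * (K[-1] @ c)   # (anti)maximize f(x_star)
    cons = [{"type": "ineq", "fun": lambda c: Gamma**2 - c @ K @ c}]  # norm ball
    for i in range(n):                          # |f(x_i) - y_i| <= delta
        cons.append({"type": "ineq",
                     "fun": lambda c, i=i: delta - (K[i] @ c - y[i])})
        cons.append({"type": "ineq",
                     "fun": lambda c, i=i: delta + (K[i] @ c - y[i])})
    # Feasible start: the minimum-norm interpolant of the noisy data.
    c0 = np.append(np.linalg.solve(K[:n, :n], y), 0.0)
    res = minimize(obj, c0, method="SLSQP", constraints=cons)
    return K[-1] @ res.x                        # value of f at x_star

lower, upper = bound(-1.0), bound(1.0)
print(f"f({x_star}) lies in [{lower:.3f}, {upper:.3f}]")
```

Any function consistent with the data (e.g. the minimum-norm interpolant) must evaluate at x* to a value inside the computed interval, which is what makes the bounds tight: they are attained by feasible members of the hypothesis class.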

Related research

- 08/10/2020: Deterministic error bounds for kernel-based learning techniques under bounded noise. "We consider the problem of reconstructing a function from a finite set o..."
- 06/05/2023: The L^∞ Learnability of Reproducing Kernel Hilbert Spaces. "In this work, we analyze the learnability of reproducing kernel Hilbert ..."
- 11/27/2019: Composition operators on reproducing kernel Hilbert spaces with analytic positive definite functions. "Composition operators have been extensively studied in complex analysis,..."
- 02/09/2023: Domain Generalization by Functional Regression. "The problem of domain generalization is to learn, given data from differ..."
- 01/28/2019: An analytic formulation for positive-unlabeled learning via weighted integral probability metric. "We consider the problem of learning a binary classifier from only positi..."
- 03/20/2012: On the Equivalence between Herding and Conditional Gradient Algorithms. "We show that the herding procedure of Welling (2009) takes exactly the f..."
- 01/26/2023: Returning The Favour: When Regression Benefits From Probabilistic Causal Knowledge. "A directed acyclic graph (DAG) provides valuable prior knowledge that is..."
