Gradient Boosting Performs Low-Rank Gaussian Process Inference

06/11/2022
by Aleksei Ustimenko, et al.

This paper shows that gradient boosting based on symmetric decision trees can be equivalently reformulated as a kernel method that converges to the solution of a certain Kernel Ridgeless Regression problem. Thus, for low-rank kernels, we obtain convergence to a Gaussian process posterior mean, which in turn allows us to transform gradient boosting into a sampler from the posterior and to estimate knowledge uncertainty via Monte-Carlo estimation of the posterior variance. We show that the proposed sampler yields better knowledge uncertainty estimates, leading to improved out-of-domain detection.
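
The abstract compresses a pipeline of low-rank kernel, ridgeless regression, GP posterior mean, and Monte-Carlo variance estimation. Below is a minimal NumPy sketch of that pipeline, under the assumption that a generic low-rank feature map (the hypothetical feature_map with random centers) stands in for the paper's symmetric-tree structure, and that posterior samples are drawn directly from the analytic GP posterior rather than by the boosting-based sampler the paper constructs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical low-rank feature map standing in for the leaf structure of
# symmetric decision trees; any Phi with few columns yields a low-rank
# kernel K = Phi @ Phi.T, which is the setting the abstract refers to.
def feature_map(X, centers):
    return np.tanh(X @ centers.T)  # illustrative soft features only

n, d, rank = 50, 3, 8
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)
X_test = rng.normal(size=(20, d))

centers = rng.normal(size=(rank, d))
Phi, Phi_test = feature_map(X, centers), feature_map(X_test, centers)

K = Phi @ Phi.T            # train kernel (rank <= 8)
K_star = Phi_test @ Phi.T  # test-train kernel
K_ss = Phi_test @ Phi_test.T

# Kernel ridgeless regression: the minimum-norm interpolant, computed here
# with a pseudo-inverse, coincides with the GP posterior mean in the
# noise-free limit.
alpha = np.linalg.pinv(K) @ y
post_mean = K_star @ alpha

# Analytic GP posterior covariance in the same limit.
post_cov = K_ss - K_star @ np.linalg.pinv(K) @ K_star.T

# Monte-Carlo estimate of the posterior variance from posterior samples,
# mimicking how an ensemble of boosting-based posterior samples would be
# used in practice.
evals, evecs = np.linalg.eigh(post_cov)
root = evecs * np.sqrt(np.clip(evals, 0.0, None))
samples = post_mean + rng.normal(size=(200, len(evals))) @ root.T
mc_var = samples.var(axis=0)

# Larger posterior variance flags inputs that lie far from the training
# data, which is the basis for out-of-domain detection.
print(np.c_[np.clip(np.diag(post_cov), 0.0, None), mc_var][:5])
```

In this sketch the Monte-Carlo variance should closely track the analytic posterior variance as the number of samples grows; in the paper's setting, the samples would instead come from independently perturbed gradient-boosting runs, so the analytic quantities are not available and the Monte-Carlo estimate is the practical substitute.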
