Kernel-Matrix Determinant Estimates from Stopped Cholesky Decomposition

07/22/2021
by Simon Bartels, et al.

Algorithms involving Gaussian processes or determinantal point processes typically require computing the determinant of a kernel matrix. Frequently, this determinant is computed from the Cholesky decomposition, an algorithm of cubic complexity in the size of the matrix. We show that, under mild assumptions, the determinant can be estimated from only a sub-matrix, with a probabilistic guarantee on the relative error. We present an augmentation of the Cholesky decomposition that stops under certain conditions before processing the whole matrix. Experiments demonstrate that this can save a considerable amount of time while incurring an overhead of less than 5% when the algorithm does not stop early. More generally, we present a probabilistic stopping strategy for approximating a sum of known length whose addends are revealed sequentially. We do not assume independence between addends, only that they are bounded from below and decrease in conditional expectation.
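The idea behind a stopped Cholesky decomposition can be sketched in a few lines of NumPy. Note that log det(K) = 2 Σ_k log L_kk, so each Cholesky column contributes one addend to a sum of known length, and these addends decrease in (conditional) expectation for typical kernel matrices. The paper's exact stopping condition and probabilistic error bound are not reproduced here; the rule below (extrapolate the remaining addends from the latest one and stop when the extrapolation is relatively small) is a simplified illustration, and the function name and tolerance parameter are our own.

```python
import numpy as np


def stopped_logdet_cholesky(K, rel_tol=1e-2):
    """Estimate log det(K) from a partially computed Cholesky factor.

    Illustrative sketch, not the paper's exact rule: run the standard
    Cholesky decomposition column by column and stop once extrapolating
    the remaining diagonal contributions would change the estimate by
    less than `rel_tol` in relative terms.

    Returns (logdet_estimate, number_of_columns_processed).
    """
    n = K.shape[0]
    L = np.zeros_like(K, dtype=float)
    logdet = 0.0
    for k in range(n):
        # Standard Cholesky step for column k.
        d = K[k, k] - L[k, :k] @ L[k, :k]
        if d <= 0:
            raise np.linalg.LinAlgError("matrix is not positive definite")
        L[k, k] = np.sqrt(d)
        L[k + 1:, k] = (K[k + 1:, k] - L[k + 1:, :k] @ L[k, :k]) / L[k, k]
        addend = 2.0 * np.log(L[k, k])  # this column's log-det contribution
        logdet += addend
        # Extrapolate: pretend the remaining n-k-1 addends all equal the
        # latest one. Since addends decrease in conditional expectation,
        # this tends to over-estimate the remaining sum.
        remaining = (n - k - 1) * addend
        if k > 0 and abs(remaining) <= rel_tol * abs(logdet):
            return logdet + remaining, k + 1
    return logdet, n
```

With `rel_tol=0` the loop processes every column and the result matches `np.linalg.slogdet` up to floating-point error; larger tolerances trade accuracy for processing fewer columns of the kernel matrix.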


Related research

A Scalable CUR Matrix Decomposition Algorithm: Lower Time Complexity and Tighter Bound (10/04/2012)
The CUR matrix decomposition is an important extension of Nyström approx...

Numerical rank of singular kernel functions (09/13/2022)
We study the rank of sub-matrices arising out of kernel functions, F(x,y...

Graphical structure of conditional independencies in determinantal point processes (06/21/2014)
Determinantal point processes have recently been used as models in machine...

Adaptive Domain Decomposition method for Saddle Point problem in Matrix Form (11/05/2019)
We introduce an adaptive domain decomposition (DD) method for solving sa...

Quantum algorithms for training Gaussian Processes (03/28/2018)
Gaussian processes (GPs) are important models in supervised machine lear...

Fast Matrix Square Roots with Applications to Gaussian Processes and Bayesian Optimization (06/19/2020)
Matrix square roots and their inverses arise frequently in machine learn...

Synthesizing Probabilistic Invariants via Doob's Decomposition (05/09/2016)
When analyzing probabilistic computations, a powerful approach is to fir...
