Adaptive Gaussian Predictive Process Approximation
We address the issue of knot selection for Gaussian predictive process methodology. The predictive process approximation provides an effective solution to the cubic-order computational complexity of Gaussian process models. The approximation crucially depends on a set of points, called knots, at which the original process is retained while the rest is approximated via a deterministic extrapolation. Knots should be few in number to keep the computational complexity low, yet numerous enough to give good coverage of the process domain and so limit the approximation error. We present theoretical calculations showing that coverage must be judged in the canonical metric of the Gaussian process. This necessitates a knot selection algorithm that automatically adapts to changes in the canonical metric effected by changes in the parameter values controlling the Gaussian process covariance function. We present such an algorithm, built on an incomplete Cholesky factorization with pivoting and dynamic stopping. Although these ingredients already exist in the literature, our contribution lies in unifying them into a fast algorithm and in using computable error bounds to finesse the implementation of the predictive process approximation. The resulting adaptive predictive process offers substantial automation of Gaussian process model fitting, especially for Bayesian applications in which thousands of values of the covariance parameters must be explored.
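To make the mechanism concrete, here is a minimal sketch of the kind of knot selector the abstract describes: a pivoted incomplete Cholesky factorization whose dynamic stopping rule uses the computable trace bound on the residual. The function and parameter names (`select_knots`, `kernel`, `rel_tol`, `max_knots`) are illustrative assumptions, not the paper's API, and the squared-exponential kernel in the usage example is likewise only one possible choice of covariance function.

```python
import numpy as np

def select_knots(X, kernel, rel_tol=1e-3, max_knots=200):
    """Pivoted incomplete Cholesky with dynamic stopping (a sketch).

    Returns knot indices and a low-rank factor L with K ~= L @ L.T,
    where K[i, j] = kernel(X[i], X[j]). The residual diagonal d gives
    a computable pointwise bound on the approximation error, and its
    sum (the trace of the residual) drives the stopping rule.
    """
    n = X.shape[0]
    d = np.array([kernel(X[i], X[i]) for i in range(n)])  # residual variances
    total = d.sum()
    L = np.zeros((n, max_knots))
    knots = []
    for m in range(max_knots):
        j = int(np.argmax(d))  # next knot: point worst covered so far
        if d.sum() <= rel_tol * total or d[j] <= 0.0:
            break              # dynamic stop: trace error bound is small enough
        knots.append(j)
        col = np.array([kernel(X[i], X[j]) for i in range(n)])
        col -= L[:, :m] @ L[j, :m]   # remove contribution of earlier knots
        L[:, m] = col / np.sqrt(d[j])
        d -= L[:, m] ** 2            # update residual diagonal
        d[j] = 0.0                   # the pivot is now interpolated exactly
    return knots, L[:, :len(knots)]

# Illustrative usage with an assumed squared-exponential covariance:
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 2))
se = lambda x, y: np.exp(-np.sum((x - y) ** 2) / (2 * 0.1 ** 2))
knots, L = select_knots(X, se, rel_tol=1e-3)
```

Because the pivoting greedily picks the point with the largest residual variance, the selected knots automatically concentrate where the canonical metric says coverage is poor; rerunning the selector as the covariance parameters change is what makes the approximation adaptive.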