
MCMC for Variationally Sparse Gaussian Processes
Gaussian process (GP) models form a core part of probabilistic machine l...

Precision Aggregated Local Models
Large scale Gaussian process (GP) regression is infeasible for larger da...

Fast Matrix Square Roots with Applications to Gaussian Processes and Bayesian Optimization
Matrix square roots and their inverses arise frequently in machine learn...

Scaling Multidimensional Inference for Structured Gaussian Processes
Exact Gaussian Process (GP) regression has O(N^3) runtime for data size ...

Sequential construction and dimension reduction of Gaussian processes under inequality constraints
Accounting for inequality constraints, such as boundedness, monotonicity...

Deep Bayesian Supervised Learning given Hypercuboidally-shaped, Discontinuous Data, using Compound Tensor-Variate & Scalar-Variate Gaussian Processes
We undertake learning of the high-dimensional functional relationship be...

Group kernels for Gaussian process metamodels with categorical inputs
Gaussian processes (GP) are widely used as a metamodel for emulating tim...

Fast increased fidelity approximate Gibbs samplers for Bayesian Gaussian process regression
The use of Gaussian processes (GPs) is supported by efficient sampling algorithms, a rich methodological literature, and strong theoretical grounding. However, due to their prohibitive computation and storage demands, the use of exact GPs in Bayesian models is limited to problems containing at most several thousand observations. Sampling requires matrix operations that scale at O(n^3), where n is the number of unique inputs. Storage of individual matrices scales at O(n^2) and can quickly overwhelm the resources of most modern computers. To overcome these bottlenecks, we develop a sampling algorithm using ℋ-matrix approximation of the matrices comprising the GP posterior covariance. These matrices can approximate the true conditional covariance matrix within machine precision and allow for sampling algorithms that scale at O(n log^2 n) time, with storage demands scaling at O(n log n). We also describe how these algorithms can be used as building blocks to model higher-dimensional surfaces at O(d n log^2 n), where d is the dimension of the surface under consideration, using tensor products of one-dimensional GPs. Though various scalable processes have been proposed for approximating Bayesian GP inference when n is large, to our knowledge, none of these methods show that the approximation's Kullback-Leibler divergence to the true posterior can be made arbitrarily small and may be no worse than the approximation provided by finite computer arithmetic. We describe ℋ-matrices, give an efficient Gibbs sampler using these matrices for one-dimensional GPs, offer a proposed extension to higher-dimensional surfaces, and investigate the performance of this fast increased fidelity approximate GP, FIFA-GP, using both simulated and real data sets.
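To make the bottleneck concrete, the sketch below draws one sample from an exact GP posterior via Cholesky factorization, the O(n^3) operation on the posterior covariance that the abstract's ℋ-matrix approach approximates. This is a minimal illustration of the exact-GP baseline, not the paper's algorithm; the squared-exponential kernel, hyperparameters, and function names are all assumptions for the example.

```python
import numpy as np

def rbf_kernel(x, y, lengthscale=0.5, variance=1.0):
    # Squared-exponential kernel (an assumed choice; any valid kernel works).
    d = x[:, None] - y[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def sample_gp_posterior(x_train, y_train, x_test, noise=0.1, rng=None):
    """Draw one sample from the exact GP posterior at x_test.

    The two Cholesky factorizations below are the O(n^3) matrix
    operations that an H-matrix approximation would accelerate.
    """
    rng = np.random.default_rng() if rng is None else rng
    K = rbf_kernel(x_train, x_train) + noise**2 * np.eye(len(x_train))
    L = np.linalg.cholesky(K)                      # O(n^3) factorization
    Ks = rbf_kernel(x_train, x_test)
    Kss = rbf_kernel(x_test, x_test)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha                            # posterior mean
    v = np.linalg.solve(L, Ks)
    cov = Kss - v.T @ v                            # posterior covariance
    Lc = np.linalg.cholesky(cov + 1e-8 * np.eye(len(x_test)))
    return mean + Lc @ rng.standard_normal(len(x_test))
```

At n in the tens of thousands, the dense factorizations above become infeasible, which is the regime the abstract's O(n log^2 n) sampler targets.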