Gaussian Process Optimization with Adaptive Sketching: Scalable and No Regret

03/13/2019
by Daniele Calandriello et al.

Gaussian processes (GP) are a popular Bayesian approach for the optimization of black-box functions. Despite their effectiveness on simple problems, GP-based algorithms hardly scale to complex, high-dimensional functions, as their per-iteration time and space cost is at least quadratic in the number of dimensions d and iterations t. Given a set of A alternatives to choose from, the overall runtime O(t^3 A) quickly becomes prohibitive. In this paper, we introduce BKB (budgeted kernelized bandit), a novel approximate GP algorithm for optimization under bandit feedback that achieves near-optimal regret (and hence a near-optimal convergence rate) with near-constant per-iteration complexity and no assumptions on the input space or the covariance of the GP. Combining a kernelized linear bandit algorithm (GP-UCB) with a randomized matrix sketching technique (i.e., leverage score sampling), we prove that selecting inducing points based on their posterior variance yields an accurate low-rank approximation of the GP that preserves variance estimates and confidence intervals. As a consequence, BKB does not suffer from variance starvation, an important problem faced by many previous sparse GP approximations. Moreover, we show that our procedure selects at most Õ(d_eff) points, where d_eff is the effective dimension of the explored space, which is typically much smaller than both d and t. This greatly reduces the dimensionality of the problem, leading to an O(t A d_eff^2) runtime and O(A d_eff) space complexity.
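The core idea of the abstract can be illustrated with a minimal sketch (this is not the authors' code; the kernel, regularization λ, oversampling factor, and round schedule below are all illustrative assumptions): sample inducing points with probability proportional to the GP posterior variance, which acts as a proxy for ridge leverage scores, and use the sampled set to build a Nyström low-rank approximation of the kernel matrix.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Squared-exponential kernel matrix between rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def posterior_variances(X, S, lam=1e-2, gamma=1.0):
    # GP posterior variance at each x in X given inducing set S,
    # rescaled by 1/lam so it approximates ridge leverage scores.
    k_xx = np.ones(len(X))  # k(x, x) = 1 for the RBF kernel
    if len(S) == 0:
        return k_xx / lam
    K_xs = rbf_kernel(X, S, gamma)
    K_ss = rbf_kernel(S, S, gamma)
    A = np.linalg.solve(K_ss + lam * np.eye(len(S)), K_xs.T)
    return (k_xx - (K_xs * A.T).sum(1)) / lam

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))  # candidate points (the "arms")
S = np.empty((0, 2))                   # inducing set, initially empty
for _ in range(5):                     # a few illustrative rounds
    var = posterior_variances(X, S)
    p = np.clip(var, 0.0, 1.0)         # inclusion probability per point
    keep = rng.random(len(X)) < p      # independent Bernoulli sampling
    S = X[keep]                        # resample the whole inducing set

# Nystrom approximation of the full kernel matrix from the sampled set.
K = rbf_kernel(X, X)
K_xs = rbf_kernel(X, S)
K_ss = rbf_kernel(S, S)
K_nys = K_xs @ np.linalg.solve(K_ss + 1e-8 * np.eye(len(S)), K_xs.T)
err = np.linalg.norm(K - K_nys) / np.linalg.norm(K)
```

On a smooth kernel like this one, the sampled set stays far smaller than the number of candidates while the relative Frobenius error of the Nyström approximation remains small, mirroring the paper's claim that variance-based sampling keeps only about d_eff points without degrading the posterior.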


Related research

- Near-linear Time Gaussian Process Optimization with Adaptive Batching and Resparsification (02/23/2020): Gaussian processes (GP) are one of the most successful frameworks to mod...
- Ada-BKB: Scalable Gaussian Process Optimization on Continuous Domain by Adaptive Discretization (06/16/2021): Gaussian process optimization is a successful class of algorithms (e.g. ...
- Scaling Gaussian Process Optimization by Evaluating a Few Unique Candidates Multiple Times (01/30/2022): Computing a Gaussian process (GP) posterior has a computational cost cub...
- Harnessing Heterogeneity: Learning from Decomposed Feedback in Bayesian Modeling (07/07/2021): There is significant interest in learning and optimizing a complex syste...
- Randomized Gaussian Process Upper Confidence Bound with Tight Bayesian Regret Bounds (02/03/2023): Gaussian process upper confidence bound (GP-UCB) is a theoretically prom...
- Multiple Adaptive Bayesian Linear Regression for Scalable Bayesian Optimization with Warm Start (12/08/2017): Bayesian optimization (BO) is a model-based approach for gradient-free b...
- Sequential construction and dimension reduction of Gaussian processes under inequality constraints (09/09/2020): Accounting for inequality constraints, such as boundedness, monotonicity...
