Sample-Then-Optimize Batch Neural Thompson Sampling

10/13/2022
by Zhongxiang Dai, et al.

Bayesian optimization (BO), which uses a Gaussian process (GP) as a surrogate model for its objective function, is a popular approach to black-box optimization. However, due to the limitations of GPs, BO underperforms on some problems, such as those with categorical, high-dimensional, or image inputs. To address this, recent works have used highly expressive neural networks (NNs) as the surrogate model and derived theoretical guarantees using the theory of the neural tangent kernel (NTK). However, these works are limited by the requirement to invert an extremely large parameter matrix and by the restriction to the sequential (rather than batch) setting. To overcome these limitations, we introduce two algorithms based on the Thompson sampling (TS) policy, named Sample-Then-Optimize Batch Neural TS (STO-BNTS) and STO-BNTS-Linear. To choose an input query, we only need to train an NN (resp. a linear model) and then select the query that maximizes the trained NN (resp. linear model); this maximizer is equivalent to a sample drawn from the GP posterior with the NTK as the kernel function. As a result, our algorithms sidestep the need to invert the large parameter matrix while preserving the validity of the TS policy. Next, we derive regret upper bounds for our algorithms with batch evaluations, and use insights from batch BO and the NTK to show that they are asymptotically no-regret under certain conditions. Finally, we verify their empirical effectiveness in practical AutoML and reinforcement learning experiments.
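
The sample-then-optimize step described above can be sketched in a few lines. The following is a minimal, illustrative Python sketch, not the authors' implementation: it assumes a user-supplied feature map phi (standing in for the NN's gradient features at initialization, which approximate the NTK), perturbs the observed targets with Gaussian noise, draws a weight vector from the prior, and trains a linear model by gradient descent so that the trained model behaves like a posterior sample; the next query is the candidate that maximizes it. All names and hyperparameters here are assumptions for illustration.

import numpy as np

def sto_bnts_linear_step(phi, X_obs, y_obs, candidates,
                         noise_var=0.1, lr=1e-3, n_steps=5000, rng=None):
    """One sample-then-optimize TS step with a linear surrogate on
    features phi(x). Illustrative sketch only; not the authors' code."""
    rng = np.random.default_rng() if rng is None else rng
    Phi = np.stack([phi(x) for x in X_obs])            # (n, p) feature matrix
    n, p = Phi.shape
    w0 = rng.standard_normal(p)                        # draw weights from the prior
    y_tilde = y_obs + rng.normal(0.0, np.sqrt(noise_var), size=n)  # perturbed targets
    # Train by gradient descent instead of inverting the (p x p) matrix:
    # minimize ||Phi w - y_tilde||^2 + noise_var * ||w - w0||^2.
    # In the linear/NTK regime, the minimizer is a sample from the
    # model's posterior, so the argmax below is a valid TS query.
    w = w0.copy()
    for _ in range(n_steps):
        grad = 2 * Phi.T @ (Phi @ w - y_tilde) + 2 * noise_var * (w - w0)
        w -= lr * grad
    scores = np.array([phi(x) @ w for x in candidates])
    return candidates[int(np.argmax(scores))]

# Toy usage with random cosine features (purely illustrative):
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 1))
phi = lambda x: np.cos(W @ np.atleast_1d(x)).ravel()
X_obs = np.linspace(-2, 2, 10)
y_obs = np.sin(3 * X_obs)
x_next = sto_bnts_linear_step(phi, X_obs, y_obs, np.linspace(-2, 2, 200))

For a batch of size B, one would repeat this step B times with independent draws of w0 and the target perturbations, which is what makes the batch TS queries mutually independent posterior samples; the NN variant (STO-BNTS) would instead train a randomly initialized NN on the perturbed targets and maximize the trained network.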


