Lifelong Bandit Optimization: No Prior and No Regret

10/27/2022
by Felix Schur, et al.

In practical applications, machine learning algorithms are often applied repeatedly to problems that share similar structure. We focus on solving a sequence of bandit optimization tasks and develop LiBO, an algorithm that adapts to the environment by learning from past experience, becoming more sample-efficient in the process. We assume a kernelized structure where the kernel is unknown but shared across all tasks. LiBO sequentially meta-learns a kernel that approximates the true kernel and simultaneously solves the incoming tasks with the latest kernel estimate. Our algorithm can be paired with any kernelized bandit algorithm and guarantees oracle-optimal performance: as more tasks are solved, the regret of LiBO on each task converges to the regret of the bandit algorithm run with oracle knowledge of the true kernel. Naturally, if paired with a sublinear bandit algorithm, LiBO yields sublinear lifelong regret. We also show that direct access to the data from each task is not necessary for attaining sublinear regret, so the lifelong problem can be solved in a federated manner while keeping the data of each task private.
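
Below is a minimal, illustrative sketch (not the authors' implementation) of the lifelong loop the abstract describes: a placeholder meta-learner re-weights a hypothetical dictionary of base kernels from the data of previously solved tasks, and each incoming task is then solved with GP-UCB under the latest kernel estimate. All names here (meta_learn_kernel, gp_ucb_task, the base-kernel set, the weighting scheme) are assumptions made for illustration only.

```python
import numpy as np

def rbf(lengthscale):
    """Base RBF kernel with a fixed lengthscale."""
    return lambda x, y: np.exp(-np.sum((x - y) ** 2) / (2.0 * lengthscale ** 2))

# Hypothetical dictionary of base kernels; the (unknown) true kernel is
# assumed to be well approximated by a non-negative combination of these.
BASE_KERNELS = [rbf(0.1), rbf(0.5), rbf(1.0)]

def combined_kernel(weights):
    """Current kernel estimate: weighted combination of the base kernels."""
    return lambda x, y: sum(w * k(x, y) for w, k in zip(weights, BASE_KERNELS))

def meta_learn_kernel(history):
    """Placeholder meta-learner: re-weight base kernels from past task data.
    The paper's estimator is more sophisticated; here each base kernel is
    scored by a crude GP log marginal likelihood on the pooled data."""
    if not history:
        return np.ones(len(BASE_KERNELS)) / len(BASE_KERNELS)
    X = np.vstack([x for x, _ in history])
    y = np.concatenate([r for _, r in history])
    nlls = []
    for k in BASE_KERNELS:
        K = np.array([[k(a, b) for b in X] for a in X]) + 1e-3 * np.eye(len(X))
        _, logdet = np.linalg.slogdet(K)
        nlls.append(y @ np.linalg.solve(K, y) + logdet)
    nlls = np.array(nlls)
    w = np.exp(-0.5 * (nlls - nlls.min()))  # numerically stable soft weighting
    return w / w.sum()

def gp_ucb_task(kernel, objective, domain, horizon, beta=2.0, noise=1e-2):
    """Solve a single task with GP-UCB under the given kernel estimate."""
    X, y = [], []
    for _ in range(horizon):
        if not X:
            x = domain[np.random.randint(len(domain))]
        else:
            Xa = np.vstack(X)
            K = np.array([[kernel(a, b) for b in Xa] for a in Xa])
            K_inv = np.linalg.inv(K + noise * np.eye(len(X)))
            y_arr = np.array(y)
            def ucb(z):
                kz = np.array([kernel(z, a) for a in Xa])
                mu = kz @ K_inv @ y_arr
                var = max(kernel(z, z) - kz @ K_inv @ kz, 0.0)
                return mu + beta * np.sqrt(var)
            x = max(domain, key=ucb)
        X.append(x)
        y.append(objective(x) + noise * np.random.randn())
    return np.vstack(X), np.array(y)

def libo_style_loop(tasks, domain, horizon):
    """Lifelong loop: meta-learn the kernel between tasks, then solve the
    next task with the latest estimate (only pooled task data is reused)."""
    history = []
    for objective in tasks:
        weights = meta_learn_kernel(history)
        kernel = combined_kernel(weights)
        history.append(gp_ucb_task(kernel, objective, domain, horizon))
    return history

# Example usage on a toy 1-D domain with two related tasks.
domain = [np.array([v]) for v in np.linspace(0.0, 1.0, 30)]
tasks = [lambda x: float(np.sin(6 * x[0])), lambda x: float(np.sin(6 * x[0] + 0.1))]
libo_style_loop(tasks, domain, horizon=15)
```

In this sketch the kernel estimate improves as more task data accumulates, so later tasks are solved with a hypothesis space closer to the true one; the federated variant in the paper would replace the pooled raw data with per-task summaries.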


research
11/12/2021

Hierarchical Bayesian Bandits

Meta-, multi-task, and federated learning can be all viewed as solving s...
research
02/01/2023

Bandit Convex Optimisation Revisited: FTRL Achieves Õ(t^1/2) Regret

We show that a kernel estimator using multiple function evaluations can ...
research
01/28/2020

Bandit optimisation of functions in the Matérn kernel RKHS

We consider the problem of optimising functions in the Reproducing kerne...
research
07/13/2021

No Regrets for Learning the Prior in Bandits

We propose AdaTS, a Thompson sampling algorithm that adapts sequentially...
research
05/01/2023

The Impact of the Geometric Properties of the Constraint Set in Safe Optimization with Bandit Feedback

We consider a safe optimization problem with bandit feedback in which an...
research
04/03/2022

Byzantine-Robust Federated Linear Bandits

In this paper, we study a linear bandit optimization problem in a federa...
research
02/01/2022

Meta-Learning Hypothesis Spaces for Sequential Decision-making

Obtaining reliable, adaptive confidence sets for prediction functions (h...
