Finite-Time Analysis of Kernelised Contextual Bandits

09/26/2013
by Michal Valko, et al.

We tackle the problem of online reward maximisation over a large finite set of actions described by their contexts. We focus on the case where the number of actions is too large to sample each of them even once. However, we assume that we have access to the similarities between the actions' contexts and that the expected reward is an arbitrary linear function of the contexts' images in the related reproducing kernel Hilbert space (RKHS). We propose KernelUCB, a kernelised UCB algorithm, and give a cumulative regret bound through a frequentist analysis. For contextual bandits, the related algorithm GP-UCB turns out to be a special case of our algorithm, and our finite-time analysis improves the regret bound of GP-UCB for the agnostic case, both in terms of the kernel-dependent quantity and the RKHS norm of the reward function. Moreover, for the linear kernel, our regret bound matches the lower bound for contextual linear bandits.
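The abstract describes KernelUCB as a kernelised UCB rule built on kernel ridge regression over the contexts and rewards observed so far. Below is a minimal, hypothetical sketch of such a loop, assuming an RBF kernel and illustrative names (rbf_kernel, run_kernel_ucb, eta, gamma, reward_fn); it is not the authors' reference implementation, only an illustration of the upper-confidence-bound selection the abstract refers to.

```python
# Minimal sketch of a kernelised UCB loop in the spirit of KernelUCB.
# The names (rbf_kernel, run_kernel_ucb, eta, gamma) and the environment
# interface are illustrative assumptions, not the paper's reference code.
import numpy as np

def rbf_kernel(X, Y, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and the rows of Y."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def run_kernel_ucb(contexts, reward_fn, T, eta=1.0, gamma=1.0):
    """contexts: (N, d) array of action contexts; reward_fn(a) returns a noisy reward."""
    chosen, rewards = [], []
    for t in range(T):
        if not chosen:
            arm = np.random.randint(len(contexts))      # no data yet: pick arbitrarily
        else:
            X = contexts[chosen]                        # contexts of the arms pulled so far
            y = np.array(rewards)
            K_inv = np.linalg.inv(rbf_kernel(X, X) + gamma * np.eye(len(X)))
            k_all = rbf_kernel(contexts, X)             # (N, t) cross-kernel with past pulls
            mean = k_all @ K_inv @ y                    # kernel ridge regression estimate
            k_xx = np.ones(len(contexts))               # k(x, x) = 1 for the RBF kernel
            var = np.maximum(k_xx - np.einsum('ij,jk,ik->i', k_all, K_inv, k_all), 0.0)
            width = np.sqrt(var / gamma)                # exploration width
            arm = int(np.argmax(mean + eta * width))    # upper confidence bound rule
        chosen.append(arm)
        rewards.append(reward_fn(arm))
    return chosen, rewards
```

In this sketch, eta controls the amount of exploration and gamma is the ridge regularisation; the regularised kernel matrix is re-inverted every round for clarity, whereas a practical implementation would update it incrementally.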


Related research

Factored Bandits (07/04/2018): We introduce the factored bandits model, which is a framework for learni...

Kernel ε-Greedy for Contextual Bandits (06/29/2023): We consider a kernelized version of the ϵ-greedy strategy for contextual...

Efficient Kernel UCB for Contextual Bandits (02/11/2022): In this paper, we tackle the computational efficiency of kernelized UCB ...

Graph Neural Network Bandits (07/13/2022): We consider the bandit optimization problem with the reward function def...

Reward Learning as Doubly Nonparametric Bandits: Optimal Design and Scaling Laws (02/23/2023): Specifying reward functions for complex tasks like object manipulation o...

Making the most of your day: online learning for optimal allocation of time (02/16/2021): We study online learning for optimal allocation when the resource to be ...

Optimal No-regret Learning in Repeated First-price Auctions (03/22/2020): We study online learning in repeated first-price auctions with censored ...
