Smooth Contextual Bandits: Bridging the Parametric and Non-differentiable Regret Regimes

09/05/2019
by Yichun Hu, et al.

We study a nonparametric contextual bandit problem where the expected reward functions belong to a Hölder class with smoothness parameter β. We show how this setting interpolates between two extremes that were previously studied in isolation: non-differentiable bandits (β≤1), where rate-optimal regret is achieved by running separate non-contextual bandits in different context regions, and parametric-response bandits (β=∞), where rate-optimal regret can be achieved with minimal or no exploration thanks to infinite extrapolatability. We develop a novel algorithm that carefully adapts to all smoothness settings, and we prove its regret is rate-optimal by establishing matching upper and lower bounds, recovering the existing results at the two extremes. In this sense, our work bridges the gap between the existing literature on parametric and non-differentiable contextual bandit problems, and between bandit algorithms that exclusively use global or local information, shedding light on the crucial interplay of complexity and regret in contextual bandits.
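To make the β≤1 extreme concrete, the sketch below runs an independent UCB1 bandit inside each of several context bins, so each region learns its locally best arm without sharing information across regions. This is a toy illustration of the "separate non-contextual bandits in different context regions" strategy described above, not the paper's algorithm; the bin count, reward functions, and noise level are illustrative assumptions.

```python
import numpy as np

def binned_ucb_bandit(T, n_bins, reward_fns, rng, noise_sd=0.1):
    """Run an independent UCB1 bandit in each context bin.

    reward_fns: one function f_a(x) per arm, giving that arm's mean reward
    at context x in [0, 1). Each bin ignores within-bin variation of the
    reward, which is only rate-optimal in the non-differentiable regime.
    Returns the average realized reward over T rounds.
    """
    K = len(reward_fns)
    counts = np.zeros((n_bins, K))   # pulls of each arm per bin
    sums = np.zeros((n_bins, K))     # summed rewards per (bin, arm)
    total = 0.0
    for t in range(T):
        x = rng.random()                         # context ~ Uniform[0, 1)
        b = min(int(x * n_bins), n_bins - 1)     # which local bandit
        if counts[b].min() == 0:
            a = int(counts[b].argmin())          # play each arm once first
        else:
            n_b = counts[b].sum()
            ucb = sums[b] / counts[b] + np.sqrt(2 * np.log(n_b) / counts[b])
            a = int(ucb.argmax())                # UCB1 index within the bin
        r = reward_fns[a](x) + noise_sd * rng.standard_normal()
        counts[b, a] += 1
        sums[b, a] += r
        total += r
    return total / T

rng = np.random.default_rng(0)
# Two arms whose means cross at x = 0.5, so the best arm depends on context.
fns = [lambda x: x, lambda x: 1.0 - x]
avg_reward = binned_ucb_bandit(5000, 10, fns, rng)
```

With these piecewise-linear (hence Hölder-smooth) rewards, the oracle that always plays the contextually better arm earns about 0.75 on average, a uniformly random policy earns 0.5, and the binned policy lands close to the oracle once each bin has seen enough rounds. The finer point of the paper is that for smoother rewards (β>1), discarding cross-bin information like this is wasteful, and rate-optimal algorithms must extrapolate globally.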

Related research:

- Smooth Bandit Optimization: Generalization to Hölder Space (12/11/2020)
- Neural Contextual Bandits without Regret (07/07/2021)
- Provably and Practically Efficient Neural Contextual Bandits (05/31/2022)
- Semi-Parametric Contextual Bandits with Graph-Laplacian Regularization (05/17/2022)
- Smoothness-Adaptive Stochastic Bandits (10/22/2019)
- Contextual Bandits with Continuous Actions: Smoothing, Zooming, and Adapting (02/05/2019)
- Bias-Robust Bayesian Optimization via Dueling Bandit (05/25/2021)
