Contextual Bandits with Smooth Regret: Efficient Learning in Continuous Action Spaces

07/12/2022
by Yinglun Zhu, et al.

Designing efficient general-purpose contextual bandit algorithms that work with large – or even continuous – action spaces would facilitate application to important scenarios such as information retrieval, recommendation systems, and continuous control. While obtaining standard regret guarantees can be hopeless, alternative regret notions have been proposed to tackle the large action setting. We propose a smooth regret notion for contextual bandits, which dominates previously proposed alternatives. We design a statistically and computationally efficient algorithm – for the proposed smooth regret – that works with general function approximation under standard supervised oracles. We also present an adaptive algorithm that automatically adapts to any smoothness level. Our algorithms can be used to recover the previous minimax/Pareto optimal guarantees under the standard regret, e.g., in bandit problems with multiple best arms and Lipschitz/Hölder bandits. We conduct large-scale empirical evaluations demonstrating the efficacy of our proposed algorithms.
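
For context, the abstract leaves the smooth regret notion implicit. In this line of work, smoothed-regret definitions compare the learner against the best h-smooth policy: one whose action distribution has density at most 1/h with respect to a base measure \mu on the action space. A definition of this flavor (the exact notation here is an assumption, not quoted from the paper) is

    \mathrm{Reg}_h(T) \;=\; \sum_{t=1}^{T} \Big( \sup_{Q \in \mathcal{Q}_h} \mathbb{E}_{a \sim Q(\cdot \mid x_t)}\big[f^*(x_t, a)\big] \;-\; f^*(x_t, a_t) \Big),

where x_t is the context at round t, a_t is the played action, f^* is the true mean-reward function, and \mathcal{Q}_h is the set of h-smooth action distributions. Larger h weakens the comparator (it is more heavily smoothed), while h \to 0 approaches the standard regret against the best single action; this is how standard-regret guarantees can be recovered in structured settings such as Lipschitz/Hölder bandits.

To make the supervised-oracle interface concrete, here is a minimal Python sketch in the spirit of oracle-based smoothed exploration: discretize a continuous action space [0, 1] at a resolution matched to the smoothing level h, score bin centers with a regression oracle, and explore via inverse-gap weighting (IGW). This is an illustration under stated assumptions, not the paper's algorithm; oracle_predict and gamma are hypothetical stand-ins for a fitted regression oracle and an exploration parameter.

    import numpy as np

    def igw_distribution(predicted_rewards, gamma):
        # Inverse-gap weighting: each suboptimal action gets probability
        # 1 / (K + gamma * gap); the empirically best action absorbs the
        # remaining mass.
        K = len(predicted_rewards)
        best = int(np.argmax(predicted_rewards))
        gaps = predicted_rewards[best] - predicted_rewards
        probs = 1.0 / (K + gamma * gaps)
        probs[best] = 0.0
        probs[best] = 1.0 - probs.sum()
        return probs

    def act(context, oracle_predict, h, gamma, rng=None):
        # Discretize [0, 1] into K ~ 1/h bins and score the bin centers
        # with the (assumed) regression oracle.
        rng = rng or np.random.default_rng()
        K = max(1, int(np.ceil(1.0 / h)))
        centers = (np.arange(K) + 0.5) / K
        preds = np.array([oracle_predict(context, a) for a in centers])
        probs = igw_distribution(preds, gamma)
        k = rng.choice(K, p=probs)
        # Sampling uniformly within the chosen bin plays a policy whose
        # density w.r.t. Lebesgue measure is at most K, i.e. roughly 1/h.
        return (k + rng.random()) / K

A complete learner would interleave this action-selection rule with online updates to the oracle on the observed (context, action, reward) triples; the adaptive algorithm mentioned above would plausibly aggregate instances run at several smoothness levels, though the abstract does not specify the mechanism.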

Related research

Contextual Bandits with Large Action Spaces: Made Practical (07/12/2022)
A central problem in sequential decision making is to develop algorithms...

Contextual Bandits with Continuous Actions: Smoothing, Zooming, and Adapting (02/05/2019)
We study contextual bandit learning with an abstract policy class and co...

Adapting to Misspecification in Contextual Bandits with Offline Regression Oracles (02/26/2021)
Computationally efficient contextual bandits are often based on estimati...

Efficient Kernel UCB for Contextual Bandits (02/11/2022)
In this paper, we tackle the computational efficiency of kernelized UCB ...

Provably and Practically Efficient Neural Contextual Bandits (05/31/2022)
We consider the neural contextual bandit problem. In contrast to the exi...

Upper Counterfactual Confidence Bounds: A New Optimism Principle for Contextual Bandits (07/15/2020)
The principle of optimism in the face of uncertainty is one of the most ...

Infinite Action Contextual Bandits with Reusable Data Exhaust (02/16/2023)
For infinite action contextual bandits, smoothed regret and reduction to...
