Contextual Bandits with Continuous Actions: Smoothing, Zooming, and Adapting

02/05/2019
by Akshay Krishnamurthy et al.

We study contextual bandit learning with an abstract policy class and continuous action space. We obtain two qualitatively different regret bounds: one competes with a smoothed version of the policy class under no continuity assumptions, while the other requires standard Lipschitz assumptions. Both bounds exhibit data-dependent "zooming" behavior and, with no tuning, yield improved guarantees for benign problems. We also study adapting to unknown smoothness parameters, establishing a price-of-adaptivity and deriving optimal adaptive algorithms that require no additional information.
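For intuition, here is one common way to formalize the "smoothed version of the policy class" mentioned above; the abstract does not spell out the construction, so the bandwidth h, the operator Smooth_h, and the regret notation below are illustrative assumptions rather than the paper's verbatim definitions. A policy \pi mapping contexts to actions in [0,1] is replaced by a randomized policy that plays uniformly in a width-h window around \pi(x):

\[
\mathrm{Smooth}_h(\pi)(\cdot \mid x) \;=\; \mathrm{Unif}\bigl([\pi(x) - h/2,\; \pi(x) + h/2] \cap [0,1]\bigr),
\]

and the first regret bound competes with the best smoothed policy in the class \Pi:

\[
\mathrm{Reg}_T^{\,h} \;=\; \max_{\pi \in \Pi} \sum_{t=1}^{T}
\Bigl( \mathbb{E}_{a \sim \mathrm{Smooth}_h(\pi)(\cdot \mid x_t)}\bigl[r_t(a)\bigr] \;-\; r_t(a_t) \Bigr),
\]

where x_t, a_t, and r_t denote the context, chosen action, and reward at round t. Smoothing lets the learner compete with an arbitrary (possibly discontinuous) policy class without Lipschitz assumptions, since only the window-averaged reward of each policy matters.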
