Doubly-Robust Lasso Bandit

07/26/2019
by Gi-Soo Kim, et al.

Contextual multi-armed bandit algorithms are widely used in sequential decision tasks such as news article recommendation, web page ad placement, and mobile health. Most existing algorithms have regret proportional to a polynomial function of the context dimension d. In many applications, however, contexts are high-dimensional, with only a sparse subset of size s_0 (≪ d) correlated with the reward. We propose a novel algorithm, the Doubly-Robust Lasso Bandit, which exploits the sparse structure as in the Lasso while blending in the doubly-robust technique from the missing-data literature. The high-probability upper bound on the regret incurred by the proposed algorithm does not depend on the number of arms, has better dependency on s_0 than previous works, and scales with log(d) instead of a polynomial function of d. The proposed algorithm performs well when the contexts of different arms are correlated and requires fewer tuning parameters than existing methods.
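To make the combination of ideas concrete, below is a minimal sketch of one round of a doubly-robust sparse bandit update. The epsilon-greedy selection probabilities, the fixed Lasso penalty lam, the use of scikit-learn's Lasso, and all variable names are illustrative assumptions rather than the authors' exact specification; the sketch only shows how rewards imputed for every arm, corrected by the inverse selection probability of the chosen arm, produce a pseudo-reward for the average context that feeds a Lasso fit.

```python
# Illustrative sketch only: eps-greedy probabilities, the fixed penalty `lam`,
# and sklearn's Lasso are assumptions, not the paper's exact algorithm.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, n_arms, T = 100, 5, 200        # context dimension, number of arms, horizon
lam = 0.05                        # Lasso penalty (assumed fixed; would normally be scheduled)
eps = 0.1                         # exploration probability (assumed)

theta_true = np.zeros(d)
theta_true[:5] = 1.0              # sparse ground truth with s_0 = 5 active coordinates

X_hist, y_hist = [], []           # pseudo-sample history used by the Lasso
beta_hat = np.zeros(d)            # current sparse estimate

for t in range(T):
    contexts = rng.normal(size=(n_arms, d))        # one context vector per arm
    # Epsilon-greedy action probabilities (a stand-in for the resampling scheme).
    greedy = int(np.argmax(contexts @ beta_hat))
    probs = np.full(n_arms, eps / n_arms)
    probs[greedy] += 1.0 - eps
    a = rng.choice(n_arms, p=probs)
    reward = contexts[a] @ theta_true + rng.normal(scale=0.1)

    # Doubly-robust pseudo-reward for the average context: impute every arm's
    # reward with the current model, then correct the chosen arm's imputation
    # by the inverse of its selection probability.
    imputed = contexts @ beta_hat
    dr_reward = imputed.mean() + (reward - imputed[a]) / (n_arms * probs[a])
    X_hist.append(contexts.mean(axis=0))
    y_hist.append(dr_reward)

    # Refit the Lasso on the accumulated pseudo-samples.
    if t >= 5:
        model = Lasso(alpha=lam, fit_intercept=False)
        model.fit(np.array(X_hist), np.array(y_hist))
        beta_hat = model.coef_
```

Because the pseudo-reward is unbiased for the average context regardless of which arm was pulled, every round contributes one Lasso sample, which is what lets the estimation error, and hence the regret, scale with log(d) rather than a polynomial in d.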


Related research

- Contextual Multi-armed Bandit Algorithm for Semiparametric Reward Model (01/31/2019): Contextual multi-armed bandit (MAB) algorithms have been shown promising...
- Doubly Robust Thompson Sampling for linear payoffs (02/01/2021): A challenging aspect of the bandit problem is that a stochastic reward i...
- High dimensional stochastic linear contextual bandit with missing covariates (07/22/2022): Recent works in bandit problems adopted lasso convergence theory in the ...
- GBOSE: Generalized Bandit Orthogonalized Semiparametric Estimation (01/20/2023): In sequential decision-making scenarios i.e., mobile health recommendati...
- Thresholded LASSO Bandit (10/22/2020): In this paper, we revisit sparse stochastic contextual linear bandits. I...
- Sparsity-Agnostic Lasso Bandit (07/16/2020): We consider a stochastic contextual bandit problem where the dimension d...
- Advertising Media and Target Audience Optimization via High-dimensional Bandits (09/17/2022): We present a data-driven algorithm that advertisers can use to automate ...
