Thresholded LASSO Bandit

10/22/2020
by Kaito Ariu, et al.

In this paper, we revisit sparse stochastic contextual linear bandits. In these problems, feature vectors may be of large dimension d, but the reward function depends only on a few, say s_0, of these features. We present the Thresholded LASSO bandit, an algorithm that (i) estimates the vector defining the reward function, as well as its sparse support, using the LASSO framework with thresholding, and (ii) selects an arm greedily according to this estimate projected on its support. The algorithm does not require prior knowledge of the sparsity index s_0. For this simple algorithm, we establish non-asymptotic regret upper bounds scaling as 𝒪( log d + √(T log T) ) in general, and as 𝒪( log d + log T ) under the so-called margin condition (a setting where arms are well separated). The regret of previous algorithms scales as 𝒪( √T log (d T) ) and 𝒪( log T log d ) in the two settings, respectively. Through numerical experiments, we confirm that our algorithm outperforms existing methods.
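The two steps described in the abstract, LASSO estimation with thresholding to recover the support, followed by greedy arm selection using the projected estimate, can be sketched as follows. This is a minimal toy illustration assuming scikit-learn's Lasso; the regularization schedule, the 0.05 threshold, the short forced-exploration phase, and the simulated instance are all illustrative assumptions, not the paper's exact algorithm or experimental setup.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Hypothetical toy instance (not the paper's setup):
# d features, s0-sparse parameter, K arms, T rounds.
d, s0, K, T = 20, 3, 5, 300
theta = np.zeros(d)
theta[:s0] = 1.0                    # sparse reward parameter

X_hist, r_hist = [], []
for t in range(T):
    arms = rng.normal(size=(K, d))  # fresh contexts each round
    if t < 2 * K:
        a = t % K                   # brief forced exploration to seed the fit
    else:
        # Assumed regularization schedule, shrinking with the sample size.
        lam = 0.1 * np.sqrt(np.log(d) / len(r_hist))
        lasso = Lasso(alpha=lam).fit(np.array(X_hist), np.array(r_hist))
        support = np.abs(lasso.coef_) > 0.05       # thresholding step
        est = np.where(support, lasso.coef_, 0.0)  # project estimate on support
        a = int(np.argmax(arms @ est))             # greedy arm selection
    X_hist.append(arms[a])
    r_hist.append(arms[a] @ theta + 0.1 * rng.normal())
```

Note that no knowledge of s_0 enters the loop: the thresholding step decides the support size from the data, which is the sparsity-agnostic property the abstract highlights.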


Related research

- 07/16/2020, Sparsity-Agnostic Lasso Bandit: "We consider a stochastic contextual bandit problem where the dimension d..."
- 07/26/2019, Doubly-Robust Lasso Bandit: "Contextual multi-armed bandit algorithms are widely used in sequential d..."
- 08/29/2023, Stochastic Graph Bandit Learning with Side-Observations: "In this paper, we investigate the stochastic contextual bandit with gene..."
- 05/30/2023, Cooperative Thresholded Lasso for Sparse Linear Bandit: "We present a novel approach to address the multi-agent sparse contextual..."
- 07/16/2020, A Smoothed Analysis of Online Lasso for the Sparse Linear Contextual Bandit Problem: "We investigate the sparse linear contextual bandit problem where the par..."
- 10/25/2022, PopArt: Efficient Sparse Regression and Experimental Design for Optimal Sparse Linear Bandits: "In sparse linear bandits, a learning agent sequentially selects an actio..."
- 09/07/2022, Dual Instrumental Method for Confounded Kernelized Bandits: "The contextual bandit problem is a theoretically justified framework wit..."
