Norm-Agnostic Linear Bandits

05/03/2022
by   Spencer, et al.

Linear bandits have a wide variety of applications, including recommendation systems, yet they make one strong assumption: the algorithm must know an upper bound S on the norm of the unknown parameter θ^* that governs reward generation. Such an assumption forces the practitioner to guess the value of S used in the confidence bound, leaving no choice but to hope that ‖θ^*‖ ≤ S holds so that the regret guarantee applies. In this paper, we propose, for the first time, novel algorithms that do not require such knowledge. Specifically, we propose two algorithms and analyze their regret bounds: one for the changing arm set setting and one for the fixed arm set setting. Our regret bound for the former shows that the price of not knowing S does not affect the leading term of the regret bound and inflates only the lower-order term. For the latter, we pay no price in the regret for not knowing S. Our numerical experiments show that standard algorithms assuming knowledge of S can fail catastrophically when ‖θ^*‖ ≤ S does not hold, whereas our algorithms enjoy low regret.
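To make the dependence on S concrete, here is a minimal sketch of the standard OFUL-style confidence radius (in the spirit of Abbasi-Yadkori et al., 2011), which is what the abstract refers to: the practitioner's guess of S enters the radius additively, so an underestimate shrinks the confidence set and can exclude θ^*, voiding the regret guarantee. This is a generic illustration of the classical bound, not the paper's norm-agnostic algorithm; all function and variable names here are my own.

```python
import numpy as np

def oful_radius(t, d, S, sigma=1.0, lam=1.0, L=1.0, delta=0.05):
    """Classical OFUL confidence radius beta_t.

    Note the explicit additive dependence on S, the practitioner's
    guessed upper bound on ||theta*||: a larger guess widens every
    confidence interval, while too small a guess can exclude theta*.
    """
    return (sigma * np.sqrt(2 * np.log(1.0 / delta)
                            + d * np.log(1.0 + t * L**2 / (lam * d)))
            + np.sqrt(lam) * S)

def ucb_value(x, theta_hat, V_inv, beta):
    """Optimistic (upper-confidence) value of arm x: the point
    estimate plus beta times the Mahalanobis norm of x under V^{-1}."""
    return float(x @ theta_hat + beta * np.sqrt(x @ V_inv @ x))
```

For example, doubling the guessed S strictly increases `oful_radius`, so every arm's optimistic value `ucb_value` grows, which is the over-exploration cost of a conservative guess; the paper's contribution is removing the need to choose S at all.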


