Combinatorial Bandits without Total Order for Arms

03/03/2021
by Shuo Yang, et al.

We consider the combinatorial bandits problem, where at each time step the online learner selects a size-k subset s from the arm set 𝒜, with |𝒜| = n, and observes a stochastic reward for each arm in the selected set s. The goal of the online learner is to minimize the regret induced by not selecting s^*, the subset that maximizes the expected total reward. Specifically, we focus on a challenging setting where 1) the reward distribution of an arm depends on the set s it is part of, and crucially 2) there is no total order for the arms in 𝒜. In this paper, we formally present a reward model that captures set-dependent reward distributions and assumes no total order for arms. Correspondingly, we propose an Upper Confidence Bound (UCB) algorithm that maintains a UCB for each individual arm and selects the arms with the top-k UCBs. We develop a novel regret analysis and show an O(k^2 n log T / ϵ) gap-dependent regret bound, where ϵ is the reward gap, as well as an O(k^2 √(n T log T)) gap-independent regret bound. We also provide a lower bound for the proposed reward model, which shows that our proposed algorithm is near-optimal for any constant k. Empirical results on various reward models demonstrate the broad applicability of our algorithm.
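For concreteness, the selection rule described in the abstract (maintain a UCB index for each individual arm and play the k arms with the largest indices) can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions: the class name, the sqrt(2 log t / count) confidence width, and the incremental-mean update below are illustrative choices, not the paper's exact specification.

```python
import numpy as np


class TopKUCB:
    """Minimal sketch of a top-k UCB policy: keep a UCB index per arm and
    play the k arms with the largest indices (constants are assumptions)."""

    def __init__(self, n_arms: int, k: int):
        self.n, self.k = n_arms, k
        self.counts = np.zeros(n_arms)   # times each arm has been observed
        self.means = np.zeros(n_arms)    # empirical mean reward per arm
        self.t = 0                       # time steps elapsed

    def select(self) -> np.ndarray:
        """Return the indices of the k arms with the highest UCBs."""
        self.t += 1
        # Assumed confidence width sqrt(2 log t / pulls); arms never pulled
        # get an infinite index so every arm is tried at least once.
        with np.errstate(divide="ignore", invalid="ignore"):
            width = np.sqrt(2.0 * np.log(self.t) / self.counts)
        ucb = np.where(self.counts > 0, self.means + width, np.inf)
        return np.argpartition(-ucb, self.k - 1)[: self.k]

    def update(self, arms, rewards) -> None:
        """Semi-bandit feedback: one observed reward per selected arm."""
        for a, r in zip(arms, rewards):
            self.counts[a] += 1
            self.means[a] += (r - self.means[a]) / self.counts[a]


# Toy usage with simulated Bernoulli rewards (purely illustrative).
agent = TopKUCB(n_arms=10, k=3)
rng = np.random.default_rng(0)
for _ in range(1000):
    chosen = agent.select()
    agent.update(chosen, rng.binomial(1, 0.5, size=len(chosen)))
```

Tracking per-arm statistics keeps memory at O(n) rather than one estimate for each of the (n choose k) subsets; whether such a simple index policy can still control regret when rewards are set-dependent and no total order exists is precisely what the paper's analysis addresses.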
