Sleeping Combinatorial Bandits

06/03/2021
by   Kumar Abhishek, et al.

In this paper, we study an interesting combination of sleeping and combinatorial stochastic bandits. In the mixed model studied here, at each discrete time instant, an arbitrary availability set is generated from a fixed set of base arms. An algorithm can select a subset of arms from the availability set (sleeping bandits) and receives the corresponding reward along with semi-bandit feedback (combinatorial bandits). We adapt the well-known CUCB algorithm to the sleeping combinatorial bandits setting. We prove – under mild smoothness conditions – that the algorithm achieves an O(log(T)) instance-dependent regret guarantee. We further prove that (i) when the range of the rewards is bounded, the regret guarantee of the algorithm is O(√(T log(T))), and (ii) the instance-independent regret is O(√(T² log(T))) in a general setting. Our results are quite general and hold under general environments – such as non-additive reward functions, volatile arm availability, and a variable number of base arms to be pulled – arising in practical applications. We validate the proven theoretical guarantees through experiments.
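The CUCB-style template the abstract describes can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's exact algorithm: the class name `SleepingCUCB`, the `oracle` callback, and the exploration constant `alpha` are assumptions made for the sketch. The structure it shows is the one the abstract outlines: per-base-arm UCB indices, a combinatorial oracle restricted to the current availability set, and per-arm updates from semi-bandit feedback.

```python
import math


class SleepingCUCB:
    """Hypothetical sketch of a CUCB-style algorithm for sleeping
    combinatorial bandits: a UCB index is kept per base arm, a
    combinatorial oracle picks a super-arm from the arms available
    this round, and semi-bandit feedback updates each played arm."""

    def __init__(self, n_arms, oracle, alpha=1.5):
        self.n = n_arms
        self.oracle = oracle          # maps {arm: ucb_index} -> subset of arms
        self.alpha = alpha            # exploration constant (assumed value)
        self.counts = [0] * n_arms    # number of times each base arm was played
        self.means = [0.0] * n_arms   # empirical mean reward per base arm
        self.t = 0                    # round counter

    def select(self, available):
        """Choose a super-arm from the availability set for this round."""
        self.t += 1
        ucb = {}
        for a in available:
            if self.counts[a] == 0:
                ucb[a] = float("inf")  # force exploration of unseen arms
            else:
                bonus = math.sqrt(self.alpha * math.log(self.t) / self.counts[a])
                ucb[a] = self.means[a] + bonus
        return self.oracle(ucb)

    def update(self, feedback):
        """Semi-bandit feedback: an observed reward for every played arm."""
        for a, r in feedback.items():
            self.counts[a] += 1
            self.means[a] += (r - self.means[a]) / self.counts[a]
```

A simple oracle for an additive reward with a cardinality constraint is "take the k arms with the highest indices"; non-additive rewards or knapsack-style constraints would swap in a different oracle without changing the index-and-update skeleton above.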


