Improved Sleeping Bandits with Stochastic Action Sets and Adversarial Rewards

04/14/2020
by Aadirupa Saha, et al.

In this paper, we consider the problem of sleeping bandits with stochastic action sets and adversarial rewards. In this setting, in contrast to most work in bandits, actions may not be available at all times; for instance, some products might be out of stock in item recommendation. The best existing efficient (i.e., polynomial-time) algorithms for this problem only guarantee an O(T^{2/3}) upper bound on the regret, whereas inefficient algorithms based on EXP4 can achieve O(√T). In this paper, we provide a new computationally efficient algorithm, inspired by EXP3, that attains regret of order O(√T) when the availabilities of the actions i ∈ [K] are independent. We then study the most general version of the problem, where at each round the available set is generated from some unknown, arbitrary distribution (i.e., without the independence assumption), and propose an efficient algorithm with an O(√(2^K T)) regret guarantee. Our theoretical results are corroborated by experimental evaluations.
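For intuition about the setting, here is a minimal sketch of an EXP3-style learner restricted to a stochastic availability set: at each round the exponential weights are renormalized over the currently available arms, an arm is sampled from that distribution, and an importance-weighted update is applied. This is an illustrative sketch of the sleeping-EXP3 idea, not the paper's algorithm; `reward_fn`, `avail_fn`, and the learning-rate choice are assumptions introduced here.

```python
import numpy as np

def sleeping_exp3(reward_fn, avail_fn, K, T, seed=0):
    """Sketch of EXP3 restricted to the currently available arms.

    Hypothetical interfaces (assumptions, not from the paper):
      avail_fn(t)       -> length-K boolean mask of available arms
                           (at least one arm is assumed available)
      reward_fn(t, arm) -> adversarially chosen reward in [0, 1]
    """
    eta = np.sqrt(np.log(K) / (K * T))  # standard EXP3 learning rate
    log_w = np.zeros(K)                 # log-weights, for numerical stability
    rng = np.random.default_rng(seed)
    total_reward = 0.0
    for t in range(T):
        avail = np.asarray(avail_fn(t), dtype=bool)
        # Renormalize the exponential weights over available arms only.
        p = np.zeros(K)
        la = log_w[avail]
        p[avail] = np.exp(la - la.max())
        p /= p.sum()
        arm = rng.choice(K, p=p)
        r = reward_fn(t, arm)
        total_reward += r
        # Importance-weighted gain estimate for the pulled arm.
        log_w[arm] += eta * r / p[arm]
    return total_reward
```

As a usage example under these assumptions, independent Bernoulli availabilities (the paper's first setting) could be simulated with `avail_fn=lambda t: np.random.default_rng(t).random(5) < 0.7` for K = 5 arms.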


Related research

09/25/2018 · Contextual Bandits with Cross-learning
In the classical contextual bandits problem, in each round t, a learner ...

02/20/2017 · An Improved Parametrization and Analysis of the EXP3++ Algorithm for Stochastic and Adversarial Bandits
We present a new strategy for gap estimation in randomized algorithms fo...

06/22/2020 · Adaptive Discretization for Adversarial Bandits with Continuous Action Spaces
Lipschitz bandits is a prominent version of multi-armed bandits that stu...

02/06/2016 · BISTRO: An Efficient Relaxation-Based Method for Contextual Bandits
We present efficient algorithms for the problem of contextual bandits wi...

10/16/2012 · Leveraging Side Observations in Stochastic Bandits
This paper considers stochastic bandits with side observations, a model ...

02/03/2022 · Deep Hierarchy in Bandits
Mean rewards of actions are often correlated. The form of these correlat...

05/04/2023 · Weighted Tallying Bandits: Overcoming Intractability via Repeated Exposure Optimality
In recommender system or crowdsourcing applications of online learning, ...
