Beating Stochastic and Adversarial Semi-bandits Optimally and Simultaneously

01/25/2019
by   Julian Zimmert, et al.

We develop the first general semi-bandit algorithm that simultaneously achieves O(log T) regret in stochastic environments and O(√T) regret in adversarial environments, without knowledge of the regime or the number of rounds T. The leading problem-dependent constants of our bounds are not only optimal in a worst-case sense studied previously, but also optimal for two concrete instances of semi-bandit problems. Our algorithm and analysis extend the recent work of Zimmert & Seldin (2019) for the special case of multi-armed bandits, but importantly require a novel hybrid regularizer designed specifically for the semi-bandit setting. Experimental results on synthetic data show that our algorithm indeed performs well uniformly over different environments. We finally provide a preliminary extension of our results to the full-bandit feedback setting.
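The abstract builds on the Tsallis-INF method of Zimmert & Seldin (2019) for the multi-armed bandit special case. The following is a minimal illustrative sketch of that special case, not the paper's semi-bandit algorithm (whose hybrid regularizer is more involved): follow-the-regularized-leader with the 1/2-Tsallis entropy, where each round's distribution satisfies x_i = 4/(η_t (L̂_i − z))² for a normalizer z found by bisection, with importance-weighted loss estimates. The function names, the η_t = 1/√t schedule, and the bisection bounds are assumptions made for this sketch.

```python
import numpy as np

def tsallis_inf_probs(L, eta):
    """Solve for x_i = 4 / (eta * (L_i - z))^2 with sum_i x_i = 1
    by bisection on the normalizer z (which must satisfy z < min(L))."""
    L = np.asarray(L, dtype=float)
    K = len(L)
    # At z = min(L) - 2*sqrt(K)/eta each term is <= 1/K, so the sum is <= 1;
    # at z = min(L) - 2/eta the smallest-loss term alone is 1, so the sum is >= 1.
    lo = L.min() - 2.0 * np.sqrt(K) / eta
    hi = L.min() - 2.0 / eta
    for _ in range(100):
        z = 0.5 * (lo + hi)
        s = np.sum(4.0 / (eta * (L - z)) ** 2)
        if s > 1.0:
            hi = z
        else:
            lo = z
    x = 4.0 / (eta * (L - 0.5 * (lo + hi))) ** 2
    return x / x.sum()  # tiny renormalization for numerical safety

def run_tsallis_inf(means, T, seed=0):
    """Play T rounds against Bernoulli losses with the given per-arm
    means (illustrative environment); returns the final distribution."""
    rng = np.random.default_rng(seed)
    K = len(means)
    L_hat = np.zeros(K)              # cumulative importance-weighted losses
    for t in range(1, T + 1):
        eta = 1.0 / np.sqrt(t)       # anytime learning-rate schedule
        x = tsallis_inf_probs(L_hat, eta)
        a = rng.choice(K, p=x)
        loss = float(rng.random() < means[a])
        L_hat[a] += loss / x[a]      # importance-weighted loss estimate
    return tsallis_inf_probs(L_hat, 1.0 / np.sqrt(T + 1))
```

In a stochastic environment the distribution concentrates on the arm with the lowest mean loss, while the same update remains √T-safe against adversarial losses, which is the best-of-both-worlds behavior the abstract describes.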


Related research

05/24/2019 · OSOM: A Simultaneously Optimal Algorithm for Multi-Armed and Linear Contextual Bandits
We consider the stochastic linear (multi-armed) contextual bandit proble...

03/13/2018 · Thompson Sampling for Combinatorial Semi-Bandits
We study the application of the Thompson Sampling (TS) methodology to th...

12/28/2020 · Lifelong Learning in Multi-Armed Bandits
Continuously learning and leveraging the knowledge accumulated from prio...

02/13/2016 · Conservative Bandits
We study a novel multi-armed bandit problem that models the challenge fa...

03/13/2023 · Best-of-three-worlds Analysis for Linear Bandits with Follow-the-regularized-leader Algorithm
The linear bandit problem has been studied for many years in both stocha...

01/29/2019 · Improved Path-length Regret Bounds for Bandits
We study adaptive regret bounds in terms of the variation of the losses ...

12/14/2015 · Fighting Bandits with a New Kind of Smoothness
We define a novel family of algorithms for the adversarial multi-armed b...
