An Optimal Algorithm for Adversarial Bandits with Arbitrary Delays

10/14/2019
by Julian Zimmert, et al.

We propose a new algorithm for adversarial multi-armed bandits with unrestricted delays. The algorithm is based on a novel hybrid regularizer applied in the Follow the Regularized Leader (FTRL) framework. It achieves an O(√(kn) + √(D log(k))) regret guarantee, where k is the number of arms, n is the number of rounds, and D is the total delay. The result matches the lower bound up to constants and requires no prior knowledge of n or D. Additionally, we propose a refined tuning of the algorithm, which achieves an O(√(kn) + min_S(|S| + √(D_S̅ log(k)))) regret guarantee, where S is a set of rounds excluded from delay counting, S̅ = [n] ∖ S are the counted rounds, and D_S̅ is the total delay in the counted rounds. If the delays are highly unbalanced, the latter regret guarantee can be significantly tighter than the former. The result requires no advance knowledge of the delays and resolves an open problem of Thune et al. (2019). The new FTRL algorithm and its refined tuning are anytime and require no doubling, which resolves another open problem of Thune et al. (2019).
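
To illustrate the overall structure described in the abstract (an FTRL loop over importance-weighted loss estimates, with feedback that arrives after arbitrary delays), here is a minimal Python sketch. The environment callbacks env_loss and env_delay, the delay-adaptive learning-rate schedule, and the use of a single negative-entropy regularizer in place of the paper's hybrid regularizer (which has no closed-form update) are all illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def delayed_bandit_ftrl(k, n, env_loss, env_delay, seed=0):
    """Sketch of an FTRL loop for adversarial bandits with arbitrary delays.

    Simplification: the FTRL step uses exponential weights (a single
    negative-entropy regularizer) with a learning rate that adapts to the
    round index and the accumulated outstanding delay, mirroring the
    sqrt(kn) + sqrt(D log k) structure of the bound. The paper's hybrid
    regularizer is not reproduced here.
    """
    rng = np.random.default_rng(seed)
    cum_loss_est = np.zeros(k)   # cumulative importance-weighted loss estimates
    pending = []                 # (arrival_round, arm, observed_loss, play_prob)
    cum_outstanding = 0          # running count of outstanding observations, a proxy for D

    for t in range(1, n + 1):
        # deliver all feedback whose delay has elapsed before round t
        ready = [p for p in pending if p[0] <= t]
        pending = [p for p in pending if p[0] > t]
        for _, arm, loss, prob in ready:
            cum_loss_est[arm] += loss / prob   # importance-weighted estimate

        cum_outstanding += len(pending)

        # delay-adaptive learning rate (assumed schedule, for illustration)
        eta = np.sqrt(np.log(k) / (k * t + cum_outstanding + 1))

        # FTRL / exponential-weights step on the estimated cumulative losses
        w = np.exp(-eta * (cum_loss_est - cum_loss_est.min()))
        p = w / w.sum()

        arm = rng.choice(k, p=p)
        loss = env_loss(t, arm)                    # adversarial loss in [0, 1]
        pending.append((t + env_delay(t) + 1, arm, loss, p[arm]))

    return cum_loss_est

# Toy usage: 5 arms, 1000 rounds, random losses, constant delay of 3 rounds.
if __name__ == "__main__":
    est = delayed_bandit_ftrl(k=5, n=1000,
                              env_loss=lambda t, a: np.random.rand(),
                              env_delay=lambda t: 3)
    print(est)
```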

research · 06/03/2019 · Nonstochastic Multiarmed Bandits with Unrestricted Delays
We investigate multiarmed bandits with delayed feedback, where the delay...

research · 06/29/2022 · A Best-of-Both-Worlds Algorithm for Bandits with Delayed Feedback
We present a modified tuning of the algorithm of Zimmert and Seldin [202...

research · 10/12/2020 · Adapting to Delays and Data in Adversarial Multi-Armed Bandits
We consider the adversarial multi-armed bandit problem under delayed fee...

research · 08/21/2023 · An Improved Best-of-both-worlds Algorithm for Bandits with Delayed Feedback
We propose a new best-of-both-worlds algorithm for bandits with variably...

research · 06/10/2020 · Simultaneously Learning Stochastic and Adversarial Episodic MDPs with Known Transition
This work studies the problem of learning episodic Markov Decision Proce...

research · 12/27/2021 · Tracking Most Severe Arm Changes in Bandits
In bandits with distribution shifts, one aims to automatically detect an...

research · 02/02/2020 · A Closer Look at Small-loss Bounds for Bandits with Graph Feedback
We study small-loss bounds for the adversarial multi-armed bandits probl...
