Generalized Translation and Scale Invariant Online Algorithm for Adversarial Multi-Armed Bandits

09/19/2021
by   Kaan Gokcesu, et al.

We study the adversarial multi-armed bandit problem and create a completely online algorithmic framework that is invariant under arbitrary translations and scales of the arm losses. We analyze the expected performance of our algorithm against a generic competition class, which makes it applicable to a wide variety of problem scenarios. Our algorithm works from a universal prediction perspective, and the performance measure is the expected regret against arbitrary arm selection sequences, defined as the difference between our cumulative loss and that of a competing loss sequence. The competition class can be designed to include fixed arm selections, switching bandits, contextual bandits, or any other competition of interest; the sequences in the class are generally determined by the specific application at hand and should be designed accordingly. Our algorithm neither uses nor needs any preliminary information about the loss sequences. Its performance bounds are second-order bounds in terms of the sum of the squared losses, and any affine transform of the losses has no effect on the normalized regret.
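The abstract does not spell out the update rules, so the sketch below is only a loose illustration of the invariance idea: a generic Exp3-style exponential-weights bandit whose observed losses are normalized online by a running affine transform (shifted by the smallest loss seen so far, scaled by the observed range), with an adaptive learning rate driven by the sum of squared normalized losses (a second-order statistic). The function name, the learning-rate schedule, and the normalization scheme are illustrative assumptions, not the authors' algorithm.

```python
import math
import random

def exp3_affine_invariant(loss_fn, n_arms, horizon, seed=0):
    """Illustrative Exp3-style bandit with online affine normalization
    of the observed losses. NOT the paper's algorithm; a generic sketch
    of how translation/scale invariance can be obtained on the fly."""
    rng = random.Random(seed)
    log_w = [0.0] * n_arms                # log-weights, for numerical stability
    lo, hi = float("inf"), float("-inf")  # running translation / scale statistics
    sq_sum = 0.0                          # sum of squared normalized losses
    total = 0.0
    arms = []                             # arm choices, to check invariance
    for t in range(horizon):
        # softmax over log-weights gives the sampling distribution
        m = max(log_w)
        w = [math.exp(x - m) for x in log_w]
        s = sum(w)
        p = [x / s for x in w]
        arm = rng.choices(range(n_arms), weights=p)[0]
        arms.append(arm)
        loss = loss_fn(t, arm)
        total += loss
        # update running min/max; normalize the loss into [0, 1]
        lo, hi = min(lo, loss), max(hi, loss)
        scale = (hi - lo) or 1.0          # avoid division by zero early on
        norm_loss = (loss - lo) / scale   # invariant to affine loss transforms
        sq_sum += norm_loss ** 2
        # adaptive learning rate from the second-order statistic (illustrative)
        eta = math.sqrt(math.log(n_arms) / (1.0 + sq_sum))
        # standard importance-weighted Exp3 update on the observed arm
        log_w[arm] -= eta * norm_loss / p[arm]
    return total, arms
```

Because the update only ever sees the normalized losses, replacing every loss by `a * loss + b` with `a > 0` leaves the sampling distributions, and hence the arm choices, unchanged.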

Related research:

- Data Dependent Regret Guarantees Against General Comparators for Full or Bandit Feedback (03/12/2023)
- A Generalized Online Algorithm for Translation and Scale Invariant Prediction with Expert Advice (09/09/2020)
- Online Meta-Learning in Adversarial Multi-Armed Bandits (05/31/2022)
- Bandit Regret Scaling with the Effective Loss Range (05/15/2017)
- Stochastic Bandits with Vector Losses: Minimizing ℓ^∞-Norm of Relative Losses (10/15/2020)
- Online Learning for Active Cache Synchronization (02/27/2020)
- Fully Gap-Dependent Bounds for Multinomial Logit Bandit (11/19/2020)
