Multi-Player Bandits: A Trekking Approach

09/17/2018
by Manjesh K. Hanawal, et al.

We study stochastic multi-armed bandits with many players. The players do not know the total number of players and cannot communicate with one another; if multiple players select a common arm, they collide and none of them receives any reward. We consider both the static scenario, where the number of players remains fixed, and the dynamic scenario, where players may enter and leave at any time. We provide algorithms based on a novel 'trekking approach' that guarantee constant regret in the static case and sub-linear regret in the dynamic case, with high probability. The trekking approach eliminates the need to estimate the number of players, resulting in fewer collisions and improved regret compared to state-of-the-art algorithms. We also develop an epoch-less algorithm that removes any requirement of time synchronization across the players, provided that each player can detect the presence of other players on an arm. We validate our theoretical guarantees through simulation-based and real test-bed experiments.
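To make the collision model concrete, below is a minimal Python sketch of the reward dynamics described in the abstract. The arm means, the number of arms and players, and the uniform-random baseline policy are illustrative assumptions; the baseline is emphatically not the paper's trekking algorithm, and serves only to show why collisions make the problem hard.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 5        # number of arms (illustrative value)
N = 3        # number of players; the players themselves do not know N
T = 10_000   # horizon
mu = rng.uniform(0.2, 0.9, size=K)  # unknown Bernoulli arm means

def play_round(choices):
    """One round under the collision model: if two or more players
    select the same arm, they collide and all of them receive 0."""
    rewards = np.zeros(len(choices))
    arms, counts = np.unique(choices, return_counts=True)
    for arm, count in zip(arms, counts):
        if count == 1:  # exactly one player on this arm: no collision
            player = int(np.where(choices == arm)[0][0])
            rewards[player] = float(rng.random() < mu[arm])
    return rewards

# Naive uniform-random baseline, for illustration only: frequent
# collisions keep the cumulative reward far below the collision-free
# optimum, so regret grows linearly.
total = 0.0
for _ in range(T):
    total += play_round(rng.integers(0, K, size=N)).sum()

# Best collision-free assignment: the N best arms, one player each.
optimal = T * np.sort(mu)[-N:].sum()
print(f"random baseline reward: {total:.0f}, optimal: {optimal:.0f}")
```

Per the abstract, a trekking-style policy instead lets each player settle on a collision-free arm without ever estimating N, which is what drives the constant-regret guarantee in the static case.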

