On Regret-Optimal Learning in Decentralized Multi-player Multi-armed Bandits

05/04/2015
by Naumaan Nayyar, et al.

We consider the problem of learning in single-player and multiplayer multiarmed bandit models. Bandit problems are classes of online learning problems that capture exploration versus exploitation tradeoffs. In a multiarmed bandit model, players can pick among many arms, and each play of an arm generates an i.i.d. reward from an unknown distribution. The objective is to design a policy that maximizes the expected reward over a time horizon in the single-player setting and the sum of expected rewards in the multiplayer setting. In the multiplayer setting, arms may give different rewards to different players. There is no separate channel for coordination among the players, and any attempt at communication is costly and adds to regret. We propose two decentralizable policies, E^3 (E-cubed) and E^3-TS, that can be used in both single-player and multiplayer settings. These policies are shown to yield expected regret that grows at most as O(log^{1+ϵ} T). It is well known that log T is the lower bound on the rate of growth of regret, even in the centralized case. The proposed algorithms improve on prior work, where regret grew as O(log^2 T). More fundamentally, these policies address the question of the additional cost incurred in decentralized online learning, suggesting that there is at most an ϵ-factor cost in terms of the order of regret. This resolves a question of relevance in many domains that had been open for some time.
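To make the single-player setting concrete, the following is a minimal sketch of the stochastic multiarmed bandit model and the regret objective described above. It uses a standard UCB1-style index rule purely as a stand-in policy; the paper's E^3 and E^3-TS policies are defined in the full text, and the arm means, Bernoulli rewards, and horizon here are illustrative assumptions.

```python
# Minimal sketch of the single-player stochastic bandit setting.
# The index rule below is the standard UCB1 baseline, used only as a stand-in;
# it is NOT the paper's E^3 or E^3-TS policy.
# Arm means, Bernoulli rewards, and horizon are illustrative assumptions.
import math
import random

def simulate(means, horizon, seed=0):
    rng = random.Random(seed)
    n_arms = len(means)
    counts = [0] * n_arms        # number of pulls of each arm
    totals = [0.0] * n_arms      # cumulative reward collected from each arm
    reward_sum = 0.0

    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1          # pull every arm once to initialize
        else:
            # UCB1 index: empirical mean plus an exploration bonus
            arm = max(
                range(n_arms),
                key=lambda a: totals[a] / counts[a]
                + math.sqrt(2.0 * math.log(t) / counts[a]),
            )
        reward = 1.0 if rng.random() < means[arm] else 0.0  # i.i.d. Bernoulli reward
        counts[arm] += 1
        totals[arm] += reward
        reward_sum += reward

    # Expected regret: gap to always playing the best arm over the horizon
    return horizon * max(means) - reward_sum

if __name__ == "__main__":
    print(simulate(means=[0.3, 0.5, 0.7], horizon=10_000))
```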
