Social Learning in Multi Agent Multi Armed Bandits

10/04/2019
by Abishek Sankararaman, et al.

In this paper, we introduce a distributed version of the classical stochastic Multi-Armed Bandit (MAB) problem. Our setting consists of a large number of agents n that collaboratively and simultaneously solve the same instance of a K-armed MAB to minimize the average cumulative regret over all agents. The agents can communicate and collaborate with each other only through a pairwise, asynchronous, gossip-based protocol that exchanges a limited number of bits. At each time step, every agent decides (i) which arm to play, (ii) whether to communicate, and if so, (iii) what to communicate and with whom. Agents in our model are decentralized, namely their actions depend only on their own observed history. We develop a novel algorithm in which agents, whenever they choose to communicate, send only arm-ids and not samples, to another agent chosen uniformly and independently at random. The per-agent regret scaling achieved by our algorithm is O(K/n + (log(n)/Δ)·log(T) + log³(n)·log log(n)/Δ²). Furthermore, any agent in our algorithm communicates only a total of Θ(log(T)) times over a time horizon of T. We compare our results to two benchmarks: one where there is no communication among agents, and one corresponding to complete interaction. We show, both theoretically and empirically, that our algorithm achieves a significant reduction in per-agent regret compared to the case when agents do not collaborate, and in communication complexity compared to the full-interaction setting, which requires T communication attempts by an agent over T arm pulls. Our result thus demonstrates that even a minimal level of collaboration among the different agents enables a significant reduction in per-agent regret.
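The protocol described above can be illustrated with a minimal simulation. This is a hedged sketch, not the authors' exact algorithm: it assumes each agent runs standard UCB1 over a small active arm set, and at exponentially spaced times (so Θ(log T) messages total) sends its empirically best arm-id to a uniformly random peer, which adds that arm to its own active set. The function name `gossip_bandit` and all parameter choices are illustrative.

```python
import math
import random

def gossip_bandit(n_agents, means, horizon, seed=0):
    """Hedged sketch of a gossip-based multi-agent bandit.

    Each agent runs UCB1 over its own small active arm set. At times
    t = 2, 4, 8, ... (Theta(log T) communications per agent), each agent
    sends only its current best arm-id -- never samples -- to a peer
    chosen uniformly at random. Returns the per-agent cumulative regret.
    """
    rng = random.Random(seed)
    K = len(means)
    best_mean = max(means)
    # Each agent starts with a small random subset of the K arms.
    init_size = min(K, max(1, K // n_agents) + 1)
    active = [set(rng.sample(range(K), init_size)) for _ in range(n_agents)]
    counts = [[0] * K for _ in range(n_agents)]
    sums = [[0.0] * K for _ in range(n_agents)]
    regret = 0.0
    comm_times = {2 ** j for j in range(1, horizon.bit_length())}

    for t in range(1, horizon + 1):
        for a in range(n_agents):
            # UCB1 restricted to the agent's active set.
            untried = [k for k in active[a] if counts[a][k] == 0]
            if untried:
                arm = untried[0]
            else:
                arm = max(
                    active[a],
                    key=lambda k: sums[a][k] / counts[a][k]
                    + math.sqrt(2 * math.log(t) / counts[a][k]),
                )
            # Bernoulli reward with the arm's mean.
            reward = 1.0 if rng.random() < means[arm] else 0.0
            counts[a][arm] += 1
            sums[a][arm] += reward
            regret += best_mean - means[arm]

        if t in comm_times:
            # Gossip round: each agent sends its best arm-id (no samples)
            # to one uniformly random peer.
            for a in range(n_agents):
                played = [k for k in active[a] if counts[a][k] > 0]
                best = max(played, key=lambda k: sums[a][k] / counts[a][k])
                peer = rng.randrange(n_agents)
                active[peer].add(best)

    return regret / n_agents  # per-agent regret
```

Because arm-ids spread through gossip, the best arm eventually enters every agent's active set even though no agent ever explores all K arms itself, which is the intuition behind the K/n term in the regret bound.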


