Collaborative Best Arm Identification with Limited Communication on Non-IID Data

07/16/2022
by Nikolai Karpov, et al.

In this paper, we study the tradeoff between the time-speedup and the number of communication rounds of the learning process in the collaborative learning model on non-IID data, where multiple agents interact with possibly different environments and want to learn an objective in the aggregated environment. We use best arm identification in multi-armed bandits, a basic problem in bandit theory, as a vehicle to deliver the following conceptual message: collaborative learning on non-IID data is provably more difficult than on IID data. In particular, we show the following.
a) The speedup in the non-IID data setting can be less than 1 (that is, a slowdown). When the number of rounds is R = O(1), at least a polynomial number of agents (in the number of arms) is needed to achieve a speedup greater than 1. This is in sharp contrast with the IID data setting, in which the speedup is always at least 1 whenever R ≥ 2, regardless of the number of agents.
b) Adaptivity in the learning process cannot help much in the non-IID data setting. This is again in sharp contrast with the IID data setting, in which, to achieve the same speedup, the best non-adaptive algorithm requires a significantly larger number of rounds than the best adaptive algorithm.
On the technical side, we further develop the generalized round elimination technique introduced in arXiv:1904.03293. We show that implicit representations of distribution classes can be very useful when working with complex hard input distributions and when proving lower bounds directly for adaptive algorithms.
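To make the setting concrete, below is a minimal simulation sketch (not the paper's algorithm) of collaborative best arm identification on non-IID data with a fixed number of communication rounds: each agent has its own Bernoulli reward mean for every arm, the target is the arm with the highest mean in the aggregated (averaged) environment, and in each round every agent sends only its empirical means to a coordinator, which eliminates half of the surviving arms. The function name `collaborative_bai` and the budget parameters are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def collaborative_bai(local_means, rounds, budget_per_round):
    """Elimination-style sketch of collaborative best arm identification.

    local_means: (K agents x n arms) matrix of per-agent Bernoulli means.
    In each communication round every agent samples the surviving arms
    locally and sends only its empirical means to the coordinator.
    """
    n_agents, n_arms = local_means.shape
    alive = np.arange(n_arms)  # arms still in contention
    for _ in range(rounds):
        pulls = max(1, budget_per_round // len(alive))  # per-agent pulls per arm
        # Each agent's empirical mean for every surviving arm (one message per round).
        reports = [rng.binomial(pulls, local_means[a, alive]) / pulls
                   for a in range(n_agents)]
        est = np.mean(reports, axis=0)  # estimates in the aggregated environment
        # Coordinator keeps the better half of the surviving arms.
        order = np.argsort(est)[::-1]
        alive = alive[order[:max(1, len(alive) // 2)]]
    return int(alive[0])  # best guess after the final round

# Example: 3 agents, 8 arms, heterogeneous (non-IID) local reward means.
local = rng.uniform(0.1, 0.9, size=(3, 8))
best_in_aggregate = int(np.argmax(local.mean(axis=0)))
print(best_in_aggregate, collaborative_bai(local, rounds=3, budget_per_round=2000))
```

With R rounds the protocol can only halve the candidate set R times, which is one way to see why a constant number of rounds constrains what the agents can jointly identify; the paper's lower bounds make this intuition precise for the non-IID setting.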

Related research:

- 04/05/2019: Collaborative Learning with Limited Interaction: Tight Bounds for Distributed Exploration in Multi-Armed Bandits. Best arm identification (or, pure exploration) in multi-armed bandits is...
- 04/20/2020: Collaborative Top Distribution Identifications with Limited Interaction. We consider the following problem in this paper: given a set of n distri...
- 08/18/2022: Communication-Efficient Collaborative Best Arm Identification. We investigate top-m arm identification, a basic problem in bandit theor...
- 02/08/2022: Budgeted Combinatorial Multi-Armed Bandits. We consider a budgeted combinatorial multi-armed bandit setting where, i...
- 07/07/2020: Robust Multi-Agent Multi-Armed Bandits. There has been recent interest in collaborative multi-agent bandits, whe...
- 02/09/2021: A Multi-Arm Bandit Approach To Subset Selection Under Constraints. We explore the class of problems where a central planner needs to select...
- 01/02/2019: The Divide-and-Conquer Framework: A Suitable Setting for the DDM of the Future. This paper was prompted by numerical experiments we performed, in which ...
