Resource Allocation in Multi-armed Bandit Exploration: Overcoming Nonlinear Scaling with Adaptive Parallelism

10/31/2020
by Brijen Thananjeyan, et al.

We study exploration in stochastic multi-armed bandits when we have access to a divisible resource and can allocate varying amounts of this resource to arm pulls. By allocating more resources to a pull, we can compute the outcome faster to inform subsequent decisions about which arms to pull. However, since distributed environments do not scale linearly, executing several arm pulls in parallel, each with fewer resources, may result in better throughput. For example, in simulation-based scientific studies, an expensive simulation can be sped up by running it on multiple cores. This speed-up is, however, partly offset by communication and other overheads among cores, which can result in lower throughput than if fewer cores were allocated to run more trials in parallel. We explore these trade-offs in the fixed confidence setting, where we need to find the best arm with a given success probability while minimizing the time to do so. We propose an algorithm that trades off information accumulation against throughput, and show that the time taken can be upper bounded by the solution of a dynamic program whose inputs are the squared gaps between the suboptimal and optimal arms. We prove a matching hardness result which demonstrates that the above dynamic program is fundamental to this problem. Next, we propose and analyze an algorithm for the fixed deadline setting, where we are given a time deadline and need to maximize the success probability of finding the best arm. We corroborate these theoretical insights with an empirical evaluation.
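The latency-versus-throughput trade-off described above can be illustrated with a toy model (not the paper's algorithm): if a pull on k cores enjoys only a sublinear speedup s(k), then giving each pull more cores lowers the latency of that pull but reduces the total number of pulls completed per unit time. The speedup form and the overhead constant c below are hypothetical choices for illustration.

```python
def speedup(k: int, c: float = 0.1) -> float:
    """Sublinear speedup from running one pull on k cores.

    Models communication overhead growing with the number of cores;
    c is a hypothetical overhead constant. s(1) = 1 and s(k) < k for k > 1.
    """
    return k / (1 + c * (k - 1))

def pull_latency(k: int, c: float = 0.1) -> float:
    """Time for one pull when it is given k cores (baseline latency 1)."""
    return 1.0 / speedup(k, c)

def throughput(total_cores: int, k: int, c: float = 0.1) -> float:
    """Completed pulls per unit time when each pull gets k cores."""
    return (total_cores // k) * speedup(k, c)

# With 64 cores total: more cores per pull -> faster individual pulls
# (better for adaptivity), but fewer completed pulls per unit time.
for k in (1, 4, 16, 64):
    print(f"k={k:2d}  latency={pull_latency(k):.3f}  "
          f"throughput={throughput(64, k):.1f}")
```

Under this model, k = 1 maximizes throughput while k = 64 minimizes per-pull latency, which is exactly the tension the proposed algorithm must navigate when deciding how to split resources across pulls.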

Related research

- 07/14/2020: Quantum exploration algorithms for multi-armed bandits
  Identifying the best arm of a multi-armed bandit is a central problem in...
- 09/08/2016: On Sequential Elimination Algorithms for Best-Arm Identification in Multi-Armed Bandits
  We consider the best-arm identification problem in multi-armed bandits, ...
- 12/27/2013: lil' UCB: An Optimal Exploration Algorithm for Multi-Armed Bandits
  The paper proposes a novel upper confidence bound (UCB) procedure for id...
- 06/12/2021: Guaranteed Fixed-Confidence Best Arm Identification in Multi-Armed Bandit
  We consider the problem of finding, through adaptive sampling, which of ...
- 03/28/2018: A Better Resource Allocation Algorithm with Semi-Bandit Feedback
  We study a sequential resource allocation problem between a fixed number...
- 06/05/2021: Multi-armed Bandit Algorithms on System-on-Chip: Go Frequentist or Bayesian?
  Multi-armed Bandit (MAB) algorithms identify the best arm among multiple...
- 06/06/2021: PAC Best Arm Identification Under a Deadline
  We study (ϵ, δ)-PAC best arm identification, where a decision-maker must...
