Optimal and Greedy Algorithms for Multi-Armed Bandits with Many Arms

02/24/2020
by Mohsen Bayati, et al.

We characterize Bayesian regret in a stochastic multi-armed bandit problem with a large but finite number of arms. In particular, we assume the number of arms k is T^α, where T is the time horizon and α is in (0,1). We consider a Bayesian setting where the reward distribution of each arm is drawn independently from a common prior, and provide a complete analysis of expected regret with respect to this prior. Our results exhibit a sharp distinction around α = 1/2. When α < 1/2, the fundamental lower bound on regret is Ω(k), and it is achieved by a standard UCB algorithm. When α > 1/2, the fundamental lower bound on regret is Ω(√T), and it is achieved by an algorithm that first subsamples √T arms uniformly at random and then runs UCB on just this subset. Interestingly, we also find that a sufficiently large number of arms allows the decision-maker to benefit from "free" exploration if she simply uses a greedy algorithm. In particular, this greedy algorithm exhibits a regret of Õ(max(k, T/√k)), which translates to regret that is sublinear (though not optimal) in the time horizon. We show empirically that this is because the greedy algorithm rapidly discards underperforming arms, a beneficial trait in the many-armed regime. Technically, our analysis of the greedy algorithm involves a novel application of the Lundberg inequality, an upper bound on the ruin probability of a random walk; this approach may be of independent interest.
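As a concrete illustration, below is a minimal sketch (not the authors' implementation) of the three strategies discussed in the abstract: UCB over all arms, the subsampled UCB that restricts UCB to roughly √T randomly chosen arms, and the greedy algorithm that plays the empirically best arm. The Bernoulli rewards, Uniform(0, 1) prior, horizon T, exponent α, and confidence-width constant are illustrative assumptions, not values taken from the paper.

```python
# A minimal sketch of the three strategies discussed above, on a Bernoulli bandit
# whose k arm means are drawn i.i.d. from a Uniform(0, 1) prior (an assumed setup
# for illustration only).

import numpy as np

def ucb(means, T, rng, arm_subset=None):
    """Run UCB over the given arms for T rounds; return cumulative pseudo-regret."""
    arms = np.arange(len(means)) if arm_subset is None else np.asarray(arm_subset)
    counts = np.zeros(len(arms))
    sums = np.zeros(len(arms))
    best = means.max()                           # regret is measured against the global best arm
    regret = 0.0
    for t in range(1, T + 1):
        if t <= len(arms):                       # play each arm once first
            i = t - 1
        else:                                    # then pick the arm with the largest UCB index
            index = sums / counts + np.sqrt(2.0 * np.log(t) / counts)
            i = int(np.argmax(index))
        reward = rng.binomial(1, means[arms[i]])
        counts[i] += 1
        sums[i] += reward
        regret += best - means[arms[i]]
    return regret

def subsampled_ucb(means, T, rng):
    """SS-UCB: subsample about sqrt(T) arms uniformly at random, then run UCB on them."""
    m = min(len(means), int(np.ceil(np.sqrt(T))))
    subset = rng.choice(len(means), size=m, replace=False)
    return ucb(means, T, rng, arm_subset=subset)

def greedy(means, T, rng):
    """Greedy: pull each arm once, then always play the arm with the best empirical mean."""
    k = len(means)
    counts = np.ones(k)
    sums = rng.binomial(1, means).astype(float)  # one initial pull per arm
    best = means.max()
    regret = float(np.sum(best - means))         # regret from the k initial pulls
    for _ in range(k, T):
        i = int(np.argmax(sums / counts))
        reward = rng.binomial(1, means[i])
        counts[i] += 1
        sums[i] += reward
        regret += best - means[i]
    return regret

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, alpha = 10_000, 0.7                       # many-armed regime: k = T^alpha > sqrt(T)
    k = int(T ** alpha)
    means = rng.uniform(0.0, 1.0, size=k)        # arm means drawn from the common prior
    print("UCB on all arms:", ucb(means, T, rng))
    print("Subsampled UCB :", subsampled_ucb(means, T, rng))
    print("Greedy         :", greedy(means, T, rng))
```

In the regime k = T^α with α > 1/2, the subsampling step keeps UCB from spending the entire horizon initializing every arm, which is the intuition behind restricting attention to roughly √T arms.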


