
Best-Arm Identification for Quantile Bandits with Privacy

06/11/2020
by   Dionysios S. Kalogerias, et al.

We study the best-arm identification problem in multi-armed bandits with stochastic, potentially private rewards, where the goal is to identify the arm with the highest quantile at a fixed, prescribed level. First, we propose a (non-private) successive elimination algorithm for strictly optimal best-arm identification; we show that our algorithm is δ-PAC and characterize its sample complexity. Further, we provide a lower bound on the expected number of pulls, showing that the proposed algorithm is essentially optimal up to logarithmic factors. Both the upper and lower complexity bounds depend on a special definition of the associated suboptimality gap, designed specifically for the quantile bandit problem: as we show, best-arm identification is impossible when the gap approaches zero. Second, motivated by applications in which the rewards are private, we provide a differentially private successive elimination algorithm whose sample complexity is finite even for distributions with infinite support size, and we characterize its sample complexity as well. Neither algorithm requires prior knowledge of the suboptimality gap or of other statistical information about the bandit problem at hand.
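The abstract describes a successive elimination scheme driven by quantile confidence bounds. The sketch below is a minimal, hedged illustration of that general idea, not the paper's actual algorithm: the routine name `successive_elimination_quantile` and the DKW-style confidence width are assumptions made here for illustration, and the sketch omits the paper's specific suboptimality-gap definition and the differentially private variant.

```python
import numpy as np

def successive_elimination_quantile(arms, tau, delta, max_rounds=10_000):
    """Hedged sketch of quantile-based successive elimination.

    `arms` is a list of callables, each returning one stochastic reward
    per call. Confidence intervals on the tau-quantile are obtained by
    shifting the empirical CDF level by a generic DKW-style width; the
    paper's own bounds and gap definition differ in their constants.
    """
    k = len(arms)
    active = list(range(k))
    samples = [[] for _ in range(k)]

    for t in range(1, max_rounds + 1):
        # Pull every surviving arm once per round.
        for i in active:
            samples[i].append(arms[i]())

        n = t  # each active arm has t samples so far
        # DKW-type deviation on the empirical CDF, union-bounded over arms and rounds.
        eps = np.sqrt(np.log(4.0 * k * t * t / delta) / (2.0 * n))

        # Lower/upper confidence quantiles via shifted empirical CDF levels.
        lo = {i: np.quantile(samples[i], max(tau - eps, 0.0)) for i in active}
        hi = {i: np.quantile(samples[i], min(tau + eps, 1.0)) for i in active}

        # Eliminate any arm whose upper quantile falls below the best lower quantile.
        best_lo = max(lo.values())
        active = [i for i in active if hi[i] >= best_lo]

        if len(active) == 1:
            return active[0]

    # Budget exhausted: return the surviving arm with the largest empirical quantile.
    return max(active, key=lambda i: np.quantile(samples[i], tau))

# Example usage with three hypothetical Gaussian arms (median identification, tau = 0.5).
rng = np.random.default_rng(0)
arms = [lambda s=s: rng.normal(s, 1.0) for s in (0.0, 0.5, 1.0)]
best_arm = successive_elimination_quantile(arms, tau=0.5, delta=0.05)
```

Note that this sketch pulls every surviving arm once per round and reuses all past samples, which is the usual structure of successive elimination; the paper's private variant would additionally have to account for the privacy noise when setting the confidence widths.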
