Combinatorial Pure Exploration with Continuous and Separable Reward Functions and Its Applications (Extended Version)

05/04/2018
by   Weiran Huang, et al.

We study the Combinatorial Pure Exploration problem with Continuous and Separable reward functions (CPE-CS) in the stochastic multi-armed bandit setting. In a CPE-CS instance, we are given several stochastic arms with unknown distributions, as well as a collection of possible decisions. Each decision has a reward determined by the distributions of the arms. The goal is to identify the decision with the maximum reward, using as few arm samples as possible. The problem generalizes the combinatorial pure exploration problem with linear rewards, which has attracted significant attention in recent years. In this paper, we propose an adaptive learning algorithm for the CPE-CS problem and analyze its sample complexity. In particular, we introduce a new hardness measure called the consistent optimality hardness, and give both upper and lower bounds on the sample complexity. Moreover, we give examples demonstrating that our solution can handle non-linear reward functions.
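To make the CPE-CS setting concrete, the sketch below simulates it with a naive uniform-sampling baseline (not the paper's adaptive algorithm): every arm is sampled the same number of times, empirical means are plugged into a separable reward, and the empirically best decision is returned. All names (`identify_best_decision`, `arm_samplers`, `reward_fns`) are illustrative assumptions, not from the paper.

```python
import random

def identify_best_decision(arm_samplers, decisions, reward_fns,
                           samples_per_arm=2000):
    """Naive uniform-sampling baseline for a CPE-CS-style instance.

    arm_samplers: list of zero-argument callables; calling the i-th one
        draws one sample from arm i's (unknown) distribution.
    decisions: list of decisions, each a tuple of arm indices.
    reward_fns: reward_fns[i] maps arm i's mean to its contribution;
        separability means a decision's reward is the sum of per-arm terms,
        which may each be non-linear.
    """
    # Estimate each arm's mean from the same fixed sample budget.
    means = []
    for sample in arm_samplers:
        total = sum(sample() for _ in range(samples_per_arm))
        means.append(total / samples_per_arm)

    # Plug empirical means into the separable reward and pick the best decision.
    def empirical_reward(decision):
        return sum(reward_fns[i](means[i]) for i in decision)

    return max(decisions, key=empirical_reward)

# Illustrative usage: two Gaussian arms and three candidate decisions,
# with an identity (linear) per-arm reward for simplicity.
rng = random.Random(42)
arms = [lambda: rng.gauss(0.2, 0.1), lambda: rng.gauss(0.8, 0.1)]
decisions = [(0,), (1,), (0, 1)]
fns = [lambda x: x, lambda x: x]
best = identify_best_decision(arms, decisions, fns)
```

The paper's contribution is precisely to avoid this uniform budget: an adaptive algorithm concentrates samples on the arms that matter for distinguishing near-optimal decisions, with sample complexity characterized by the consistent optimality hardness.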


Related research

Nearly Optimal Sampling Algorithms for Combinatorial Pure Exploration (06/04/2017)
  We study the combinatorial pure exploration problem Best-Set in stochast...

Pure Exploration Bandit Problem with General Reward Functions Depending on Full Distributions (05/08/2021)
  In this paper, we study the pure exploration bandit model on general dis...

Non-Asymptotic Pure Exploration by Solving Games (06/25/2019)
  Pure exploration (aka active testing) is the fundamental task of sequent...

Combinatorial Pure Exploration with Partial or Full-Bandit Linear Feedback (06/14/2020)
  In this paper, we propose the novel model of combinatorial pure explorat...

A Fast Algorithm for PAC Combinatorial Pure Exploration (12/08/2021)
  We consider the problem of Combinatorial Pure Exploration (CPE), which d...

Combinatorial Pure Exploration of Dueling Bandit (06/23/2020)
  In this paper, we study combinatorial pure exploration for dueling bandi...

A unified framework for bandit multiple testing (07/15/2021)
  In bandit multiple hypothesis testing, each arm corresponds to a differe...
