Gaussian Process Classification Bandits

12/26/2022
by Tatsuya Hayashi, et al.

Classification bandits are multi-armed bandit problems in which the task is to classify a given set of arms as positive or negative according to whether the proportion of arms whose expected reward is at least h is itself at least w, for given thresholds h and w. We study a special classification bandit problem in which arms correspond to points x in d-dimensional real space with expected rewards f(x) generated according to a Gaussian process prior. We develop a framework algorithm for the problem that accommodates various arm selection policies, and propose policies called FCB and FTSV. We show a smaller sample complexity upper bound for FCB than that of the existing level set estimation algorithm, in which whether f(x) is at least h must be decided for every arm's x. We also propose arm selection policies that depend on an estimated rate of arms with rewards of at least h, and show that they improve empirical sample complexity. According to our experimental results, the rate-estimation versions of FCB and FTSV, together with that of the popular active learning policy that selects the point with the maximum variance, outperform other policies on synthetic functions, and the rate-estimation version of FTSV is also the best performer on our real-world dataset.
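To make the setting concrete, here is a minimal sketch (not the authors' FCB or FTSV algorithms) of a Gaussian-process classification-bandit loop using the maximum-variance selection policy the abstract mentions. All function names, the RBF kernel choice, and the parameters below are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.3):
    """Squared-exponential kernel between point sets A and B (rows = points)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length_scale ** 2))

def gp_posterior(X_obs, y_obs, X_all, noise=0.01):
    """GP posterior mean and pointwise variance at X_all from noisy observations."""
    K = rbf_kernel(X_obs, X_obs) + noise * np.eye(len(X_obs))
    Ks = rbf_kernel(X_all, X_obs)
    Kss = rbf_kernel(X_all, X_all)
    mean = Ks @ np.linalg.solve(K, y_obs)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.clip(np.diag(cov), 0, None)

def classification_bandit(f, X_all, h, w, budget=30, noise=0.01, rng=None):
    """Classify the arm set as positive iff the estimated fraction of arms
    with f(x) >= h is at least w, sampling arms by maximum posterior variance."""
    rng = np.random.default_rng(rng)
    idx = [int(rng.integers(len(X_all)))]      # start from a random arm
    y = [f(X_all[idx[0]]) + noise * rng.standard_normal()]
    for _ in range(budget - 1):
        _, var = gp_posterior(X_all[idx], np.array(y), X_all, noise)
        i = int(np.argmax(var))                # max-variance (active learning) policy
        idx.append(i)
        y.append(f(X_all[i]) + noise * rng.standard_normal())
    mean, _ = gp_posterior(X_all[idx], np.array(y), X_all, noise)
    rate = float(np.mean(mean >= h))           # estimated rate of arms above h
    return rate >= w, rate
```

For example, with 50 arms on a grid over [0, 1] and f(x) = x, roughly half the arms have reward at least h = 0.5, so the set should be classified positive for w = 0.3 and negative for w = 0.9. FCB and FTSV replace the plain max-variance rule with confidence-bound and Thompson-sampling-style criteria tailored to the rate-versus-threshold decision.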


Related research

- Best-Arm Identification in Correlated Multi-Armed Bandits (09/10/2021): In this paper we consider the problem of best-arm identification in mult...
- Sample Complexity of an Adversarial Attack on UCB-based Best-arm Identification Policy (09/13/2022): In this work I study the problem of adversarial perturbations to rewards...
- Gaussian Process Bandits with Aggregated Feedback (12/24/2021): We consider the continuum-armed bandits problem, under a novel setting o...
- A Bad Arm Existence Checking Problem (01/31/2019): We study a bad arm existence checking problem in which a player's task is...
- Contextual Combinatorial Volatile Bandits with Satisfying via Gaussian Processes (11/29/2021): In many real-world applications of combinatorial bandits such as content...
- Stochastic Bandits with Delay-Dependent Payoffs (10/07/2019): Motivated by recommendation problems in music streaming platforms, we pr...
- Fractional Moments on Bandit Problems (02/14/2012): Reinforcement learning addresses the dilemma between exploration to find...
