Regression Oracles and Exploration Strategies for Short-Horizon Multi-Armed Bandits

02/10/2021
by Robert C. Gray, et al.

This paper explores multi-armed bandit (MAB) strategies in very short horizon scenarios, i.e., when the bandit strategy is only allowed very few interactions with the environment. This is an understudied setting in the MAB literature with many applications in the context of games, such as player modeling. Specifically, we pursue three different ideas. First, we explore the use of regression oracles, which replace the simple average used in strategies such as epsilon-greedy with linear regression models. Second, we examine different exploration patterns such as forced exploration phases. Finally, we introduce a new variant of the UCB1 strategy called UCBT that has interesting properties and no tunable parameters. We present experimental results in a domain motivated by exergames, where the goal is to maximize a player's daily steps. Our results show that the combination of epsilon-greedy or epsilon-decreasing with regression oracles outperforms all other tested strategies in the short horizon setting.
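To make the first idea concrete, the sketch below shows an epsilon-greedy bandit whose per-arm value estimate comes from a simple linear regression of reward against time step, rather than a running average. This is a hedged illustration of the regression-oracle concept only: the class name, the ordinary-least-squares oracle, and the extrapolate-to-current-time rule are assumptions for demonstration, not the authors' exact implementation.

```python
import random

class RegressionOracleEpsilonGreedy:
    """Epsilon-greedy where each arm's value is estimated by fitting a
    per-arm linear regression of reward on time step and extrapolating
    to the current step (a sketch of the regression-oracle idea; the
    paper's actual oracle may differ)."""

    def __init__(self, n_arms, epsilon=0.1):
        self.n_arms = n_arms
        self.epsilon = epsilon
        self.history = [[] for _ in range(n_arms)]  # (t, reward) pairs per arm
        self.t = 0  # global interaction counter

    def _estimate(self, arm):
        pts = self.history[arm]
        if not pts:
            return float("inf")  # force each untried arm to be pulled once
        if len(pts) == 1:
            return pts[0][1]
        # Ordinary least-squares fit of reward on time, evaluated at "now",
        # so an arm whose rewards trend upward is valued above its mean.
        n = len(pts)
        mx = sum(t for t, _ in pts) / n
        my = sum(r for _, r in pts) / n
        sxx = sum((t - mx) ** 2 for t, _ in pts)
        sxy = sum((t - mx) * (r - my) for t, r in pts)
        slope = sxy / sxx if sxx else 0.0
        return my + slope * (self.t - mx)

    def select(self):
        # Explore uniformly with probability epsilon, otherwise exploit
        # the arm with the highest regression-based estimate.
        if random.random() < self.epsilon:
            return random.randrange(self.n_arms)
        return max(range(self.n_arms), key=self._estimate)

    def update(self, arm, reward):
        self.history[arm].append((self.t, reward))
        self.t += 1
```

Swapping the averaging rule for a regression oracle matters most in short horizons, where a handful of early samples can leave a plain average badly miscalibrated but a trend fit can still separate the arms.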
