
Combinatorial Multi-armed Bandits for Real-Time Strategy Games

by Santiago Ontañón, et al.
Drexel University

Games with large branching factors pose a significant challenge for game tree search algorithms. In this paper, we address this problem with a sampling strategy for Monte Carlo Tree Search (MCTS) algorithms called naïve sampling, based on a variant of the Multi-armed Bandit problem called Combinatorial Multi-armed Bandits (CMAB). We analyze the theoretical properties of several variants of naïve sampling, and empirically compare them against other existing strategies in the literature for CMABs. We then evaluate these strategies in the context of real-time strategy (RTS) games, a genre of computer games characterized by very large branching factors. Our results show that as the branching factor grows, naïve sampling outperforms the other sampling strategies.
