Monte Carlo Search Algorithm Discovery for One Player Games
Much current research in AI and games is being devoted to Monte Carlo search (MCS) algorithms. While the quest for a single unified MCS algorithm that would perform well on all problems is of major interest for AI, practitioners often know in advance the problem they want to solve, and spend plenty of time exploiting this knowledge to customize their MCS algorithm in a problem-driven way. We propose an MCS algorithm discovery scheme to perform this customization in an automatic and reproducible way. We first introduce a grammar over MCS algorithms that induces a rich space of candidate algorithms. We then search this space for the algorithm that performs best on average over a given distribution of training problems, relying on multi-armed bandits to approximately solve this optimization problem. Experiments conducted on three different domains show that our approach discovers algorithms that outperform several well-known MCS algorithms, such as Upper Confidence bounds applied to Trees and Nested Monte Carlo search. We also show that the discovered algorithms are generally quite robust to changes in the distribution over the training problems.
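The core mechanism described in the abstract is a multi-armed bandit search over a grammar-induced space of candidate MCS algorithms, where each bandit pull evaluates one candidate on a training problem drawn from the given distribution. The Python sketch below is an illustration only, not the paper's implementation: the CANDIDATES table, the simulated evaluate() scorer, and the ucb1_search() driver are hypothetical stand-ins for the grammar-induced candidates and the real training-problem evaluations.

import math
import random

# Hypothetical stand-in for the grammar-induced space of candidate MCS
# algorithms: each candidate is a name plus a true mean performance that is
# hidden from the bandit and only used to simulate noisy evaluations.
CANDIDATES = {
    "uct_c0.5": 0.55,
    "uct_c1.0": 0.60,
    "nested_mc_level2": 0.68,
    "nested_mc_level3": 0.64,
    "flat_mc": 0.45,
}

def evaluate(candidate, rng):
    """Simulate running one candidate algorithm on a training problem drawn
    from the training distribution; return a score in [0, 1]."""
    mean = CANDIDATES[candidate]
    return min(1.0, max(0.0, rng.gauss(mean, 0.15)))

def ucb1_search(budget=2000, seed=0):
    """UCB1 bandit over the candidate algorithms: each pull evaluates one
    candidate on a fresh training problem; after the budget is spent, the
    candidate with the best empirical mean is returned as the 'discovered'
    algorithm."""
    rng = random.Random(seed)
    arms = list(CANDIDATES)
    counts = {a: 0 for a in arms}
    sums = {a: 0.0 for a in arms}

    for t in range(1, budget + 1):
        # Play each arm once before applying the UCB1 formula.
        untried = [a for a in arms if counts[a] == 0]
        if untried:
            arm = untried[0]
        else:
            arm = max(arms, key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2.0 * math.log(t) / counts[a]))
        reward = evaluate(arm, rng)
        counts[arm] += 1
        sums[arm] += reward

    return max(arms, key=lambda a: sums[a] / counts[a])

if __name__ == "__main__":
    print("best candidate:", ucb1_search())

In the paper's setting, the arm set would come from the grammar over MCS algorithms rather than a hand-written table, and each reward would be the score obtained by actually running the candidate algorithm on a sampled training problem.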