Improved POMDP Tree Search Planning with Prioritized Action Branching
Online solvers for partially observable Markov decision processes have difficulty scaling to problems with large action spaces. This paper proposes a method called PA-POMCPOW to sample a subset of the action space that provides varying mixtures of exploitation and exploration for inclusion in a search tree. The proposed method first evaluates the action space according to a score function that is a linear combination of expected reward and expected information gain. The actions with the highest score are then added to the search tree during tree expansion. Experiments show that PA-POMCPOW is able to outperform existing state-of-the-art solvers on problems with large discrete action spaces.
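The abstract's action-scoring idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the weights `lam` and `mu`, the helper names, and the top-k selection are all assumptions; the score is simply the stated linear combination of expected reward and expected information gain, with the highest-scoring actions returned for tree expansion.

```python
import heapq

def action_scores(actions, expected_reward, expected_info_gain,
                  lam=1.0, mu=1.0):
    """Score each action as a linear combination of expected reward and
    expected information gain. `lam` and `mu` are hypothetical weights."""
    return {a: lam * expected_reward(a) + mu * expected_info_gain(a)
            for a in actions}

def select_branch_actions(actions, expected_reward, expected_info_gain,
                          k=3, lam=1.0, mu=1.0):
    """Return the k highest-scoring actions to add to the search tree
    during tree expansion (hypothetical top-k selection rule)."""
    scores = action_scores(actions, expected_reward, expected_info_gain,
                           lam, mu)
    return heapq.nlargest(k, scores, key=scores.get)
```

With toy scoring functions, `select_branch_actions(range(4), lambda a: a, lambda a: 0.0, k=2)` returns the two actions with the highest expected reward.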