Perseus: Randomized Point-based Value Iteration for POMDPs

09/09/2011
by M. T. J. Spaan et al.

Partially observable Markov decision processes (POMDPs) form an attractive and principled framework for agent planning under uncertainty. Point-based approximate techniques for POMDPs compute a policy based on a finite set of points collected in advance from the agent's belief space. We present a randomized point-based value iteration algorithm called Perseus. The algorithm performs approximate value backup stages, ensuring that in each backup stage the value of every point in the belief set is improved; the key observation is that a single backup may improve the value of many belief points. In contrast to other point-based methods, Perseus backs up only a (randomly selected) subset of points in the belief set, sufficient for improving the value of each belief point in the set. We show how the same idea can be extended to handle continuous action spaces. Experimental results show the potential of Perseus on large-scale POMDP problems.
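The randomized backup stage described above can be sketched in a few lines: keep backing up randomly chosen beliefs, and after each backup remove from consideration every belief whose value has already improved. The toy POMDP model below (two states, actions, and observations) and all numeric parameters are illustrative assumptions, not taken from the paper; `point_backup` is the standard point-based Bellman backup.

```python
import numpy as np

# Hypothetical toy POMDP (2 states, 2 actions, 2 observations).
# All model arrays are illustrative assumptions, not from the paper.
S, A, O = 2, 2, 2
gamma = 0.95
T = np.array([[[0.9, 0.1], [0.1, 0.9]],    # T[a][s][s']
              [[0.5, 0.5], [0.5, 0.5]]])
Z = np.array([[[0.8, 0.2], [0.2, 0.8]],    # Z[a][s'][o]
              [[0.6, 0.4], [0.4, 0.6]]])
R = np.array([[1.0, 0.0],                  # R[a][s]
              [0.0, 1.0]])

def point_backup(b, V):
    """Standard point-based backup of belief b against alpha-vector set V."""
    best_alpha, best_val = None, -np.inf
    for a in range(A):
        g = R[a].copy()
        for o in range(O):
            # g_{a,o}^i(s) = sum_{s'} Z[a][s'][o] * T[a][s][s'] * alpha_i(s')
            g_ao = np.array([T[a] @ (Z[a][:, o] * alpha) for alpha in V])
            g += gamma * g_ao[np.argmax(g_ao @ b)]
        if g @ b > best_val:
            best_alpha, best_val = g, g @ b
    return best_alpha

def perseus_stage(B, V, rng):
    """One Perseus backup stage: back up randomly chosen beliefs until
    every belief in B has improved (or at least kept) its value."""
    old_vals = np.array([max(alpha @ b for alpha in V) for b in B])
    todo = list(range(len(B)))   # beliefs not yet improved this stage
    V_new = []
    while todo:
        i = todo[rng.integers(len(todo))]
        alpha = point_backup(B[i], V)
        if alpha @ B[i] >= old_vals[i]:
            V_new.append(alpha)                        # backup improved b_i
        else:
            V_new.append(max(V, key=lambda a: a @ B[i]))  # keep old maximizer
        # One backup may improve many points: drop all beliefs already covered.
        todo = [j for j in todo
                if not any(a @ B[j] >= old_vals[j] for a in V_new)]
    return V_new

rng = np.random.default_rng(0)
B = [np.array([p, 1.0 - p]) for p in np.linspace(0, 1, 11)]  # sampled beliefs
V = [np.full(S, R.min() / (1.0 - gamma))]                    # pessimistic init
for _ in range(50):
    V = perseus_stage(B, V, rng)
values = [max(alpha @ b for alpha in V) for b in B]
```

Note that `V_new` typically contains far fewer vectors than `B` has points, since each successful backup "covers" every belief whose value it improves; this is what makes the stage cheap relative to backing up all points.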


