Finding Approximate POMDP Solutions Through Belief Compression

06/30/2011
by N. Roy, et al.

Standard value function approaches to finding policies for Partially Observable Markov Decision Processes (POMDPs) are generally considered to be intractable for large models. The intractability of these algorithms is to a large extent a consequence of computing an exact, optimal policy over the entire belief space. However, in real-world POMDP problems, computing the optimal policy for the full belief space is often unnecessary for good control even for problems with complicated policy classes. The beliefs experienced by the controller often lie near a structured, low-dimensional subspace embedded in the high-dimensional belief space. Finding a good approximation to the optimal value function for only this subspace can be much easier than computing the full value function. We introduce a new method for solving large-scale POMDPs by reducing the dimensionality of the belief space. We use Exponential family Principal Components Analysis (Collins, Dasgupta and Schapire, 2002) to represent sparse, high-dimensional belief spaces using small sets of learned features of the belief state. We then plan only in terms of the low-dimensional belief features. By planning in this low-dimensional space, we can find policies for POMDP models that are orders of magnitude larger than models that can be handled by conventional techniques. We demonstrate the use of this algorithm on a synthetic problem and on mobile robot navigation tasks.
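
To make the compression step concrete, below is a minimal Python sketch of exponential-family PCA with the exponential (Poisson) link, the family of model the abstract refers to: sampled beliefs B are factored so that each belief is approximately exp(z U), where z is a small vector of learned features. The function names, the plain gradient-descent optimizer, and all hyperparameters here are illustrative assumptions; the paper fits the factors with alternating Newton steps, and a full system would then plan over the low-dimensional features Z rather than the raw beliefs.

```python
import numpy as np

def epca_compress(B, k, iters=500, lr=1e-2, seed=0):
    """Sketch of exponential-family PCA with an exponential link.

    B: (n_beliefs, n_states) matrix of sampled belief vectors (rows sum to 1).
    k: target dimensionality of the compressed belief features.

    Minimizes the Poisson Bregman loss
        L(Z, U) = sum( exp(Z U) - B * (Z U) )
    whose minimizer gives the reconstruction b_i ~= exp(z_i U).
    Plain gradient descent stands in for the alternating Newton
    steps used in the paper; this is illustrative, not the original code.
    """
    rng = np.random.default_rng(seed)
    n, d = B.shape
    Z = 0.01 * rng.standard_normal((n, k))  # low-dimensional belief features
    U = 0.01 * rng.standard_normal((k, d))  # learned basis of the belief subspace
    for _ in range(iters):
        R = np.exp(Z @ U)        # current reconstruction of the beliefs
        G = R - B                # gradient of the Bregman loss w.r.t. (Z U)
        Z -= lr * (G @ U.T)      # chain rule: dL/dZ = G U^T
        U -= lr * (Z.T @ G)      # chain rule: dL/dU = Z^T G
    return Z, U

def reconstruct(Z, U):
    """Map low-dimensional features back to normalized belief vectors."""
    R = np.exp(Z @ U)
    return R / R.sum(axis=1, keepdims=True)

if __name__ == "__main__":
    # Hypothetical data: 100 sampled beliefs over a 200-state problem.
    rng = np.random.default_rng(1)
    B = rng.random((100, 200))
    B /= B.sum(axis=1, keepdims=True)
    Z, U = epca_compress(B, k=5)
    print(reconstruct(Z, U).shape)  # (100, 200), rows are normalized beliefs
```

The exponential link is what lets a handful of features capture sparse, high-dimensional beliefs: the reconstruction is always nonnegative, so probability mass concentrated on a few states can be represented far more compactly than with ordinary linear PCA.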

Related research:

08/05/2015
On the Linear Belief Compression of POMDPs: A re-examination of current methods
Belief compression improves the tractability of large-scale partially ob...

05/31/2023
BetaZero: Belief-State Planning for Long-Horizon POMDPs using Learned Approximations
Real-world planning problems, including autonomous driving and sustai...

10/06/2018
Bayes-CPACE: PAC Optimal Exploration in Continuous Space Bayes-Adaptive Markov Decision Processes
We present the first PAC optimal algorithm for Bayes-Adaptive Markov Dec...

01/10/2013
Vector-space Analysis of Belief-state Approximation for POMDPs
We propose a new approach to value-directed belief state approximation f...

01/10/2013
A Tractable POMDP for a Class of Sequencing Problems
We consider a partially observable Markov decision problem (POMDP) that ...

03/06/2023
The Wasserstein Believer: Learning Belief Updates for Partially Observable Environments through Reliable Latent Space Models
Partially Observable Markov Decision Processes (POMDPs) are useful tools...

05/23/2022
Flow-based Recurrent Belief State Learning for POMDPs
Partially Observable Markov Decision Process (POMDP) provides a principl...
