A Model Approximation Scheme for Planning in Partially Observable Stochastic Domains

11/01/1997
by N. L. Zhang, et al.
Partially observable Markov decision processes (POMDPs) are a natural model for planning problems where the effects of actions are nondeterministic and the state of the world is not completely observable. It is difficult to solve POMDPs exactly. This paper proposes a new approximation scheme. The basic idea is to transform a POMDP into another one where additional information is provided by an oracle. The oracle informs the planning agent that the current state of the world lies in a certain region. The transformed POMDP is consequently said to be region observable, and it is easier to solve than the original POMDP. We propose to solve the transformed POMDP and use its optimal policy to construct an approximate policy for the original POMDP. By controlling the amount of additional information that the oracle provides, it is possible to find a proper tradeoff between computation time and approximation quality. In terms of algorithmic contributions, we study in detail how to exploit region observability in solving the transformed POMDP. To facilitate the study, we also propose a new exact algorithm for general POMDPs. The algorithm is conceptually simple and yet significantly more efficient than all previous exact algorithms.
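The two ingredients of the scheme can be illustrated with a short sketch: the standard Bayesian belief update an agent performs in any POMDP, followed by the oracle's region restriction, which zeroes out belief mass outside the reported region and renormalizes. The toy model, function names, and array layout below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def belief_update(belief, action, obs, T, O):
    """Standard POMDP belief update (Bayes filter).

    belief: shape (S,), current distribution over states.
    T: shape (A, S, S), T[a, s, s'] = P(s' | s, a).
    O: shape (A, S, Z), O[a, s', o] = P(o | s', a).
    """
    predicted = belief @ T[action]            # P(s' | b, a)
    unnormalized = predicted * O[action, :, obs]
    return unnormalized / unnormalized.sum()

def restrict_to_region(belief, region):
    """Oracle step: keep only the belief mass inside the reported
    region of states and renormalize (region observability)."""
    restricted = np.where(region, belief, 0.0)
    return restricted / restricted.sum()

# Toy 3-state, 2-action, 2-observation POMDP (made up for illustration).
S, A, Z = 3, 2, 2
rng = np.random.default_rng(0)
T = rng.dirichlet(np.ones(S), size=(A, S))    # each T[a, s] sums to 1
O = rng.dirichlet(np.ones(Z), size=(A, S))    # each O[a, s'] sums to 1

b = np.full(S, 1.0 / S)                       # uniform initial belief
b = belief_update(b, action=0, obs=1, T=T, O=O)
b = restrict_to_region(b, region=np.array([True, True, False]))
print(b)                                      # support confined to the region
```

Because the restricted belief can only place mass on states inside the oracle-reported region, the set of reachable beliefs shrinks as regions shrink, which is what makes the transformed POMDP cheaper to solve.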
