Region-Based Approximations for Planning in Stochastic Domains

02/06/2013
by Nevin Lianwen Zhang et al.

This paper is concerned with planning in stochastic domains by means of partially observable Markov decision processes (POMDPs). POMDPs are difficult to solve. This paper identifies a subclass of POMDPs called region observable POMDPs, which are easier to solve and can be used to approximate general POMDPs to arbitrary accuracy.
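The paper's setting is standard POMDP planning, where an agent that cannot observe the state directly maintains a belief (probability distribution over states) and updates it after each action and observation. As a point of reference, here is a minimal sketch of that standard belief update for a hypothetical toy POMDP with two states, one action, and two observations; the matrices `T` and `O` are illustrative assumptions, not taken from the paper, and this is not the paper's region-based approximation itself.

```python
import numpy as np

# Hypothetical toy POMDP: 2 states, a single action, 2 observations.
# T[s, s'] = probability of moving from state s to s' under the action.
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])
# O[s', o] = probability of receiving observation o in resulting state s'.
O = np.array([[0.85, 0.15],
              [0.10, 0.90]])

def belief_update(b, obs):
    """Bayes filter: predict with T, weight by observation likelihood, renormalize."""
    predicted = b @ T                 # prior over the next state
    unnorm = predicted * O[:, obs]    # correct with the observation model
    return unnorm / unnorm.sum()

# Starting from a uniform belief, observation 0 shifts mass toward state 0.
b0 = np.array([0.5, 0.5])
b1 = belief_update(b0, obs=0)
```

A region observable POMDP, as described in the abstract, restricts how far this belief can spread: the agent is assumed to know which small region of the state space it is in, so beliefs stay concentrated and the problem becomes easier to solve.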


Related research

01/30/2013
Planning with Partially Observable Markov Decision Processes: Advances in Exact Solution Method
There is much interest in using partially observable Markov decision pro...

11/01/1997
A Model Approximation Scheme for Planning in Partially Observable Stochastic Domains
Partially observable Markov decision processes (POMDPs) are a natural mo...

03/15/2012
RAPID: A Reachable Anytime Planner for Imprecisely-sensed Domains
Despite the intractability of generic optimal partially observable Marko...

11/30/2015
Scaling POMDPs For Selecting Sellers in E-markets-Extended Version
In multiagent e-marketplaces, buying agents need to select good sellers ...

12/08/2022
Task-Directed Exploration in Continuous POMDPs for Robotic Manipulation of Articulated Objects
Representing and reasoning about uncertainty is crucial for autonomous a...

07/11/2012
Region-Based Incremental Pruning for POMDPs
We present a major improvement to the incremental pruning algorithm for ...

03/19/2021
Knowledge-Based Hierarchical POMDPs for Task Planning
The main goal in task planning is to build a sequence of actions that ta...
