Planning with Partially Observable Markov Decision Processes: Advances in Exact Solution Method

01/30/2013
by   Nevin Lianwen Zhang, et al.

There is much interest in using partially observable Markov decision processes (POMDPs) as a formal model for planning in stochastic domains. This paper is concerned with finding optimal policies for POMDPs. We propose several improvements to incremental pruning, presently the most efficient exact algorithm for solving POMDPs.
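As a rough illustration of the incremental pruning idea the abstract refers to (this is a sketch, not code from the paper), the Python snippet below represents a value function as a set of alpha-vectors, combines per-observation vector sets by cross-sums, and prunes dominated vectors after every cross-sum using the standard linear-program dominance test over the belief simplex. The function names (incremental_pruning, prune, cross_sum, dominated) and the toy numbers are illustrative assumptions; the LP test relies on scipy.optimize.linprog.

```python
# Illustrative sketch of incremental pruning for POMDP value iteration.
# Alpha-vectors are numpy arrays of length |S| (one value per state).
import numpy as np
from itertools import product
from scipy.optimize import linprog

def dominated(w, U, eps=1e-9):
    """True if alpha-vector w is (weakly) dominated by the set U, i.e.
    no belief state gives w a strictly better value than every u in U."""
    if not U:
        return False
    n = len(w)
    # Variables: belief b (n entries) and margin d; maximize d.
    c = np.zeros(n + 1)
    c[-1] = -1.0                                   # linprog minimizes, so minimize -d
    A_ub = np.array([np.append(u - w, 1.0) for u in U])   # (u - w)·b + d <= 0
    b_ub = np.zeros(len(U))
    A_eq = np.array([np.append(np.ones(n), 0.0)])          # belief sums to 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return (not res.success) or (-res.fun <= eps)  # best margin <= 0 => dominated

def prune(vectors):
    """Keep only alpha-vectors that are maximal at some belief state."""
    kept = []
    for i, w in enumerate(vectors):
        if not dominated(w, kept + vectors[i + 1:]):
            kept.append(w)
    return kept

def cross_sum(A, B):
    """Cross-sum of two alpha-vector sets: {a + b : a in A, b in B}."""
    return [a + b for a, b in product(A, B)]

def incremental_pruning(vector_sets):
    """Compute prune(A1 (+) A2 (+) ... (+) Ak), pruning after each cross-sum
    instead of only once at the end."""
    result = prune(vector_sets[0])
    for S in vector_sets[1:]:
        result = prune(cross_sum(result, prune(S)))
    return result

# Toy example: two states, three per-observation vector sets.
sets = [
    [np.array([1.0, 0.0]), np.array([0.0, 1.0])],
    [np.array([0.5, 0.5]), np.array([0.2, 0.8])],
    [np.array([0.3, 0.3])],
]
print(len(incremental_pruning(sets)), "undominated vectors")
```

Interleaving the pruning step with the cross-sums is what keeps the intermediate vector sets small; the improvements the paper proposes build on this basic scheme.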


Related research:

02/06/2013
Incremental Pruning: A Simple, Fast, Exact Method for Partially Observable Markov Decision Processes
Most exact algorithms for general partially observable Markov decision p...

02/06/2013
Region-Based Approximations for Planning in Stochastic Domains
This paper is concerned with planning in stochastic domains by means of ...

06/01/2011
Nonapproximability Results for Partially Observable Markov Decision Processes
We show that for several variations of partially observable Markov decis...

06/01/2011
Value-Function Approximations for Partially Observable Markov Decision Processes
Partially observable Markov decision processes (POMDPs) provide an elega...

01/11/2020
Point-Based Methods for Model Checking in Partially Observable Markov Decision Processes
Autonomous systems are often required to operate in partially observable...

04/27/2023
Decision Making for Autonomous Vehicles
This paper is on decision making of autonomous vehicles for handling rou...

02/16/2016
POMDP-lite for Robust Robot Planning under Uncertainty
The partially observable Markov decision process (POMDP) provides a prin...
