A Method for Speeding Up Value Iteration in Partially Observable Markov Decision Processes

01/23/2013
by Nevin Lianwen Zhang, et al.

We present a technique for speeding up the convergence of value iteration for partially observable Markov decision processes (POMDPs). The underlying idea is similar to that behind modified policy iteration for fully observable Markov decision processes (MDPs). The technique can easily be incorporated into any existing POMDP value iteration algorithm. Experiments have been conducted on several test problems with one POMDP value iteration algorithm, incremental pruning. We find that the technique can make incremental pruning run several orders of magnitude faster.
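The abstract's analogy is to modified policy iteration for fully observable MDPs: interleave one expensive greedy (full Bellman) backup with several cheap backups under the current fixed policy, so that fewer greedy steps are needed overall. Below is a minimal tabular sketch of that MDP-side idea; it is illustrative only, not the authors' POMDP algorithm, and the function name, array layout, and the backup count k are assumptions made here.

```python
import numpy as np

def modified_policy_iteration(P, R, gamma=0.95, k=20, tol=1e-6):
    """Modified policy iteration for a tabular MDP (illustrative sketch).

    P: transition tensor, shape (A, S, S); P[a, s, s'] = Pr(s' | s, a)
    R: reward matrix, shape (S, A)
    k: number of cheap fixed-policy backups between greedy steps
    """
    S, A = R.shape
    V = np.zeros(S)
    while True:
        # Greedy (full Bellman) step: one-step lookahead over all actions.
        Q = R + gamma * np.einsum('ast,t->sa', P, V)
        policy = Q.argmax(axis=1)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, policy
        V = V_new
        # Partial evaluation: k cheap backups under the fixed greedy policy,
        # avoiding the maximization over actions on these steps.
        P_pi = P[policy, np.arange(S)]   # (S, S): row s is P[policy[s], s, :]
        R_pi = R[np.arange(S), policy]   # (S,)
        for _ in range(k):
            V = R_pi + gamma * P_pi @ V
```

With k = 0 this reduces to plain value iteration; larger k trades cheap fixed-policy backups for fewer expensive maximizations, which is the kind of trade-off the paper exploits on the POMDP side.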

Related research:

06/01/2011 · Speeding Up the Convergence of Value Iteration in Partially Observable Markov Decision Processes
Partially observable Markov decision processes (POMDPs) have recently be...

07/06/2017 · Efficient Strategy Iteration for Mean Payoff in Markov Decision Processes
Markov decision processes (MDPs) are standard models for probabilistic s...

07/16/2022 · ChronosPerseus: Randomized Point-based Value Iteration with Importance Sampling for POSMDPs
In reinforcement learning, agents have successfully used environments mo...

11/30/2015 · Scaling POMDPs For Selecting Sellers in E-markets-Extended Version
In multiagent e-marketplaces, buying agents need to select good sellers ...

07/11/2012 · Region-Based Incremental Pruning for POMDPs
We present a major improvement to the incremental pruning algorithm for ...

01/29/2021 · Optimistic Policy Iteration for MDPs with Acyclic Transient State Structure
We consider Markov Decision Processes (MDPs) in which every stationary p...

09/17/2013 · Models and algorithms for skip-free Markov decision processes on trees
We introduce a class of models for multidimensional control problems whi...