Technical Report: The Policy Graph Improvement Algorithm

09/04/2020
by Joni Pajarinen, et al.

Optimizing a partially observable Markov decision process (POMDP) policy is challenging. The policy graph improvement (PGI) algorithm for POMDPs represents the policy as a fixed-size policy graph and improves the policy monotonically. Because the policy size is fixed, the computation time for each improvement iteration is known in advance. Moreover, the method yields compact, understandable policies. This report describes the technical details of the PGI [1] and particle-based PGI [2] algorithms for POMDPs more accessibly than [1] or [2], allowing practitioners and students to understand and implement the algorithms.
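To make the idea concrete, the following is a minimal sketch of a fixed-size policy graph (finite-state controller) for a POMDP, with exact policy evaluation and one node-wise improvement step. The toy POMDP (2 states, 2 actions, 2 observations) and all names (`evaluate`, `improve_node`, `T`, `Z`, `R`) are illustrative assumptions, not the report's notation; the backup here is a generic FSC node re-optimization at a single belief, not the full PGI algorithm from [1] or [2].

```python
import itertools
import numpy as np

# Hypothetical toy POMDP, chosen only for illustration (not from the report).
S, A, O = 2, 2, 2      # numbers of states, actions, observations
gamma = 0.95           # discount factor
# T[a, s, s2]: probability of moving from state s to s2 under action a.
T = np.array([[[0.9, 0.1], [0.1, 0.9]],   # action 0: mostly stay
              [[0.5, 0.5], [0.5, 0.5]]])  # action 1: randomize
# Z[a, s2, o]: probability of observing o after reaching s2 via action a.
Z = np.array([[[0.8, 0.2], [0.2, 0.8]],
              [[0.5, 0.5], [0.5, 0.5]]])
# R[s, a]: immediate reward for taking action a in state s.
R = np.array([[1.0, 0.0],
              [0.0, 0.5]])

def evaluate(actions, succ):
    """Exact values V[q, s] of a fixed policy graph.

    actions[q] is the action taken at graph node q; succ[q][o] is the
    node reached after observing o. The values satisfy the linear system
    V[q, s] = R[s, a_q] + gamma * sum_{s2, o} T * Z * V[succ[q][o], s2],
    which we solve directly.
    """
    nq = len(actions)
    n = nq * S
    M = np.eye(n)
    b = np.zeros(n)
    for q in range(nq):
        a = actions[q]
        for s in range(S):
            i = q * S + s
            b[i] = R[s, a]
            for s2 in range(S):
                for o in range(O):
                    M[i, succ[q][o] * S + s2] -= gamma * T[a, s, s2] * Z[a, s2, o]
    return np.linalg.solve(M, b).reshape(nq, S)

def improve_node(actions, succ, q, b0):
    """One node backup: re-optimize node q's action and observation
    edges against the current node values, at the belief b0.
    Mutates actions/succ in place and returns the backed-up value."""
    V = evaluate(actions, succ)
    nq = len(actions)
    best, best_cfg = -np.inf, (actions[q], succ[q])
    for a in range(A):
        for edges in itertools.product(range(nq), repeat=O):
            val = b0 @ R[:, a]
            for s in range(S):
                for s2 in range(S):
                    for o in range(O):
                        val += gamma * b0[s] * T[a, s, s2] * Z[a, s2, o] * V[edges[o], s2]
            if val > best:
                best, best_cfg = val, (a, list(edges))
    actions[q], succ[q] = best_cfg
    return best
```

Because the current configuration of node `q` is among the candidates enumerated, the backed-up value can never fall below the node's current value, which is the monotonicity property the abstract refers to; the fixed node count keeps the per-iteration cost constant.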


