
Solving POMDPs by Searching the Space of Finite Policies

01/23/2013 · by Nicolas Meuleau, et al.

Solving partially observable Markov decision processes (POMDPs) is highly intractable in general, at least in part because the optimal policy may be infinitely large. In this paper, we explore the problem of finding the optimal policy from a restricted set of policies, represented as finite state automata of a given size. This problem is also intractable, but we show that the complexity can be greatly reduced when the POMDP and/or policy are further constrained. We demonstrate good empirical results with a branch-and-bound method for finding globally optimal deterministic policies, and a gradient-ascent method for finding locally optimal stochastic policies.
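
The restricted policy class in question is concrete enough to sketch: a deterministic finite-state controller (FSC) of a given size is just two lookup tables, one mapping each memory node to an action and one mapping each (node, observation) pair to a successor node. The Python sketch below is an illustration only, not the paper's code: it runs a hand-built three-node controller on the classic tiger POMDP, and the dynamics, observation accuracy, and controller tables are all assumptions made for the example. The hard part the paper addresses, searching over all such tables of a given size, is not shown.

```python
# Illustrative sketch (not from the paper): a deterministic finite-state
# controller (FSC) executed on a toy POMDP.  The tiger problem, its
# parameters, and the controller tables below are assumptions made for
# this example only.
import random

STATES = ["tiger-left", "tiger-right"]
OBSERVATIONS = ["hear-left", "hear-right"]

def step(state, action):
    """Toy tiger dynamics: return (reward, next_state, observation)."""
    if action == "listen":
        # Listening costs 1 and yields a noisy hint (85% accurate here).
        correct = "hear-left" if state == "tiger-left" else "hear-right"
        wrong = "hear-right" if correct == "hear-left" else "hear-left"
        obs = correct if random.random() < 0.85 else wrong
        return -1.0, state, obs
    # Opening a door: large penalty if the tiger is behind it, then reset.
    behind = "tiger-left" if action == "open-left" else "tiger-right"
    reward = -100.0 if state == behind else 10.0
    return reward, random.choice(STATES), random.choice(OBSERVATIONS)

# A deterministic FSC is two tables: node -> action, (node, obs) -> node.
# The paper's branch-and-bound searches over all such tables of a fixed
# size; this particular controller is written by hand.
fsc_action = {0: "listen", 1: "open-right", 2: "open-left"}
fsc_next = {
    (0, "hear-left"): 1,   # a single hint triggers a door in this sketch
    (0, "hear-right"): 2,
    (1, "hear-left"): 0, (1, "hear-right"): 0,  # reset memory after opening
    (2, "hear-left"): 0, (2, "hear-right"): 0,
}

def rollout(horizon=50):
    """Run the controller for `horizon` steps; return the summed reward."""
    state = random.choice(STATES)
    node, total = 0, 0.0
    for _ in range(horizon):
        action = fsc_action[node]
        reward, state, obs = step(state, action)
        total += reward
        node = fsc_next[(node, obs)]
    return total

if __name__ == "__main__":
    random.seed(0)
    returns = [rollout() for _ in range(1000)]
    print("mean return over 1000 rollouts:", sum(returns) / len(returns))
```

The stochastic controllers targeted by the paper's gradient-ascent method would replace both lookup tables with parameterized distributions (for instance, softmax weights over actions and successor nodes), making the expected return a differentiable function of the parameters that can be climbed to a local optimum.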


Related research

03/24/2015 · Geometry and Determinism of Optimal Stationary Control in Partially Observable Markov Decision Processes
It is well known that for any finite state Markov decision process (MDP)...

01/23/2013 · Learning Finite-State Controllers for Partially Observable Environments
Reactive (memoryless) policies are sufficient in completely observable M...

02/26/2018 · Optimizing over a Restricted Policy Class in Markov Decision Processes
We address the problem of finding an optimal policy in a Markov decision...

04/20/2002 · Learning from Scarce Experience
Searching the space of policies directly for the optimal policy has been...

01/23/2013 · My Brain is Full: When More Memory Helps
We consider the problem of finding good finite-horizon policies for POMD...

09/21/2021 · Computing Complexity-aware Plans Using Kolmogorov Complexity
In this paper, we introduce complexity-aware planning for finite-horizon...

06/30/2011 · Finding Approximate POMDP Solutions Through Belief Compression
Standard value function approaches to finding policies for Partially Obs...