Planning in Observable POMDPs in Quasipolynomial Time

01/12/2022
by Noah Golowich, et al.

Partially Observable Markov Decision Processes (POMDPs) are a natural and general model in reinforcement learning that takes into account the agent's uncertainty about its current state. In the literature on POMDPs, it is customary to assume access to a planning oracle that computes an optimal policy when the parameters are known, even though the problem is known to be computationally hard. Almost all existing planning algorithms either run in exponential time, lack provable performance guarantees, or require placing strong assumptions on the transition dynamics under every possible policy. In this work, we revisit the planning problem and ask: are there natural and well-motivated assumptions that make planning easy? Our main result is a quasipolynomial-time algorithm for planning in (one-step) observable POMDPs. Specifically, we assume that well-separated distributions on states lead to well-separated distributions on observations, and thus the observations are at least somewhat informative in each step. Crucially, this assumption places no restrictions on the transition dynamics of the POMDP; nevertheless, it implies that near-optimal policies admit quasi-succinct descriptions, which is not true in general (under standard hardness assumptions). Our analysis is based on new quantitative bounds for filter stability, i.e. the rate at which an optimal filter for the latent state forgets its initialization. Furthermore, we prove matching hardness for planning in observable POMDPs under the Exponential Time Hypothesis.
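The filter-stability notion at the heart of the analysis can be made concrete with a small, self-contained sketch. The toy POMDP below (2 states, 2 observations, transition matrix T and observation matrix O chosen arbitrarily for illustration; this is not the paper's algorithm) runs the standard Bayes filter from two different initial beliefs on the same observation sequence. Because the rows of O are well separated, mirroring the one-step observability assumption, the two beliefs rapidly converge: the filter forgets its initialization.

```python
# Illustrative sketch only: a Bayes filter on a made-up 2-state POMDP,
# showing filter stability (two different priors converging) when the
# observation model is informative. T, O, and the observation sequence
# are hypothetical examples, not taken from the paper.

T = [[0.7, 0.3],   # T[s][s2] = P(next state s2 | state s), one fixed action
     [0.4, 0.6]]
O = [[0.9, 0.1],   # O[s][o] = P(observation o | state s); well-separated
     [0.2, 0.8]]   # rows play the role of the observability assumption

def filter_update(belief, obs):
    """One step of the optimal (Bayes) filter: predict, then correct."""
    predicted = [sum(belief[s] * T[s][s2] for s in range(2)) for s2 in range(2)]
    unnorm = [predicted[s2] * O[s2][obs] for s2 in range(2)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Run the filter from two opposite priors on the same observation sequence.
b1, b2 = [1.0, 0.0], [0.0, 1.0]
for obs in [0, 1, 1, 0, 0, 1]:
    b1 = filter_update(b1, obs)
    b2 = filter_update(b2, obs)

gap = abs(b1[0] - b2[0])  # disagreement between the two posteriors
print(b1, b2, gap)        # gap shrinks as observations accumulate
```

The rate at which `gap` shrinks is exactly the kind of quantity the paper's quantitative filter-stability bounds control; that contraction is what lets near-optimal policies condition only on a short window of recent observations.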


Related research

06/07/2022 · Learning in Observable POMDPs, without Computationally Intractable Oracles
Much of reinforcement learning theory is built on top of oracles that ar...

11/01/1997 · A Model Approximation Scheme for Planning in Partially Observable Stochastic Domains
Partially observable Markov decision processes (POMDPs) are a natural mo...

10/07/2021 · Reinforcement Learning in Reward-Mixing MDPs
Learning a near optimal policy in a partially observable system remains ...

08/16/2023 · Partially Observable Multi-agent RL with (Quasi-)Efficiency: The Blessing of Information Sharing
We study provable multi-agent reinforcement learning (MARL) in the gener...

07/15/2016 · Intrinsically Motivated Multimodal Structure Learning
We present a long-term intrinsically motivated structure learning method...

09/12/2016 · DESPOT: Online POMDP Planning with Regularization
The partially observable Markov decision process (POMDP) provides a prin...

12/01/2022 · The Limits of Learning and Planning: Minimal Sufficient Information Transition Systems
In this paper, we view a policy or plan as a transition system over a sp...
