Learning in Observable POMDPs, without Computationally Intractable Oracles

06/07/2022
by Noah Golowich et al.

Much of reinforcement learning theory is built on top of oracles that are computationally hard to implement. Specifically for learning near-optimal policies in Partially Observable Markov Decision Processes (POMDPs), existing algorithms either need to make strong assumptions about the model dynamics (e.g. deterministic transitions) or assume access to an oracle for solving a hard optimistic planning or estimation problem as a subroutine. In this work we develop the first oracle-free learning algorithm for POMDPs under reasonable assumptions. Specifically, we give a quasipolynomial-time end-to-end algorithm for learning in "observable" POMDPs, where observability is the assumption that well-separated distributions over states induce well-separated distributions over observations. Our techniques circumvent the more traditional approach of using the principle of optimism under uncertainty to promote exploration, and instead give a novel application of barycentric spanners to constructing policy covers.
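To make the observability assumption concrete: if O is the observation matrix whose column s is the distribution O(·|s), γ-observability requires ‖O b − O b′‖₁ ≥ γ‖b − b′‖₁ for every pair of state distributions b, b′. A minimal Python sketch of this condition follows; the function name and the randomized-sampling approach are illustrative, not from the paper, and sampling can only refute the condition, never certify it:

```python
import numpy as np

def appears_gamma_observable(O, gamma, trials=2000, seed=0):
    """Heuristic check of gamma-observability for an observation matrix O.

    O has shape (n_observations, n_states); column s is O(. | s).
    The condition requires ||O b - O b'||_1 >= gamma * ||b - b'||_1 for
    *all* pairs of state distributions b, b'.  Random sampling can only
    find counterexamples, so a True return is evidence, not a proof.
    """
    rng = np.random.default_rng(seed)
    n_states = O.shape[1]
    for _ in range(trials):
        b1 = rng.dirichlet(np.ones(n_states))
        b2 = rng.dirichlet(np.ones(n_states))
        lhs = np.abs(O @ (b1 - b2)).sum()    # separation of observation dists
        rhs = gamma * np.abs(b1 - b2).sum()  # separation of state dists
        if lhs < rhs - 1e-12:
            return False  # found a violating pair of beliefs
    return True
```

For intuition, the identity matrix (observations fully reveal the state) satisfies the condition for any γ ≤ 1, while a matrix with identical columns (observations carry no state information) violates it for every γ > 0.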
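For background on the spanner technique the abstract mentions: a C-approximate barycentric spanner of a set of vectors spanning R^d is a subset of d vectors such that every vector in the set is a linear combination of the subset with all coefficients in [−C, C]. The classic swapping procedure of Awerbuch and Kleinberg computes one; the sketch below implements that generic procedure only, not the paper's application of spanners to policy covers:

```python
import numpy as np

def _det_with_col(M, i, v):
    """|det| of M after replacing column i with vector v."""
    M2 = M.copy()
    M2[:, i] = v
    return abs(np.linalg.det(M2))

def barycentric_spanner(vectors, C=2.0):
    """C-approximate barycentric spanner via determinant-swapping.

    vectors: iterable of arrays spanning R^d.  Returns d vectors from the
    set such that (by Cramer's rule) every input vector has representation
    coefficients bounded by C in absolute value.
    """
    V = [np.asarray(v, dtype=float) for v in vectors]
    d = V[0].shape[0]
    M = np.eye(d)
    # Phase 1: greedily replace identity columns with set vectors,
    # maximizing |det| at each step, to obtain a basis from the set.
    for i in range(d):
        M[:, i] = max(V, key=lambda v: _det_with_col(M, i, v))
    # Phase 2: while some vector would grow some column's determinant
    # by more than a factor C, swap it in.  C > 1 ensures termination.
    improved = True
    while improved:
        improved = False
        base = abs(np.linalg.det(M))
        for i in range(d):
            for v in V:
                if _det_with_col(M, i, v) > C * base + 1e-12:
                    M[:, i] = v
                    base = abs(np.linalg.det(M))
                    improved = True
    return [M[:, i].copy() for i in range(d)]
```

At termination, Cramer's rule gives the coefficient bound directly: expressing any v over the returned basis M, coefficient i equals det(M with column i replaced by v) / det(M), which the stopping condition caps at C in absolute value.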


Related research

- Planning in Observable POMDPs in Quasipolynomial Time (01/12/2022)
- POMDP-lite for Robust Robot Planning under Uncertainty (02/16/2016)
- Reinforcement Learning of POMDPs using Spectral Methods (02/25/2016)
- Experimental results: Reinforcement Learning of POMDPs using Spectral Methods (05/07/2017)
- A Model Approximation Scheme for Planning in Partially Observable Stochastic Domains (11/01/1997)
- Provably Efficient Representation Learning with Tractable Planning in Low-Rank POMDP (06/21/2023)
- Hierarchical Reinforcement Learning under Mixed Observability (04/02/2022)
