Lower Bounds for Learning in Revealing POMDPs

02/02/2023
by Fan Chen, et al.

This paper studies the fundamental limits of reinforcement learning (RL) in the challenging partially observable setting. While it is well established that learning in Partially Observable Markov Decision Processes (POMDPs) requires exponentially many samples in the worst case, a surge of recent work shows that polynomial sample complexities are achievable under the revealing condition – a natural condition requiring the observables to reveal some information about the unobserved latent states. However, the fundamental limits for learning in revealing POMDPs are much less understood: existing lower bounds are rather preliminary and leave substantial gaps from the current best upper bounds. We establish strong PAC and regret lower bounds for learning in revealing POMDPs. Our lower bounds scale polynomially in all relevant problem parameters in a multiplicative fashion, and achieve significantly smaller gaps against the current best upper bounds, providing a solid starting point for future studies. In particular, for multi-step revealing POMDPs, we show that (1) the latent state-space dependence is at least Ω(S^{1.5}) in the PAC sample complexity, which is notably harder than the Θ(S) scaling for fully observable MDPs; (2) any regret that is polynomial in the remaining problem parameters must be at least Ω(T^{2/3}), suggesting a fundamental difference from the single-step case, where O(√T) regret is achievable. Technically, our hard-instance construction adapts techniques from distribution testing, which is new to the RL literature and may be of independent interest.
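
For readers unfamiliar with the revealing condition, the following is a minimal LaTeX sketch of its common single-step formalization, following prior work in this line (e.g., "When Is Partially Observable Reinforcement Learning Not Scary?", listed below). The exact norm and the multi-step generalization used in this paper may differ, so treat the notation and constants here as illustrative assumptions rather than the paper's precise definition.

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
% Sketch of the single-step alpha-revealing condition (an illustrative
% formalization following prior work; the multi-step version used in the
% paper replaces O_h with an m-step emission-action matrix).
Let $\mathbb{O}_h \in \mathbb{R}^{O \times S}$ be the emission matrix at
step $h$, with entries $(\mathbb{O}_h)_{o,s} = \mathbb{O}_h(o \mid s)$.
A POMDP is \emph{$\alpha$-revealing} if each $\mathbb{O}_h$ admits a left
inverse $\mathbb{O}_h^{+}$ with
\[
  \bigl\| \mathbb{O}_h^{+} \bigr\|_{1 \to 1} \;\le\; \frac{1}{\alpha}
  \qquad \text{for all } h \in [H],
\]
so the observation distribution determines the latent-state distribution
up to a factor of $1/\alpha$; larger $\alpha$ means more revealing, and
sample complexity bounds in this line of work scale polynomially in
$1/\alpha$.
\end{document}
```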

Related research

04/19/2022 · When Is Partially Observable Reinforcement Learning Not Scary?
Applications of Reinforcement Learning (RL), in which agents learn to ma...

01/31/2022 · Fundamental Performance Limits for Sensor-Based Robot Control and Policy Learning
Our goal is to develop theory and algorithms for establishing fundamenta...

09/29/2022 · Partially Observable RL with B-Stability: Unified Structural Condition and Sharp Sample-Efficient Algorithms
Partial Observability – where agents can only observe partial informatio...

07/06/2023 · Sample-Efficient Learning of POMDPs with Multiple Observations In Hindsight
This paper studies the sample-efficiency of learning in Partially Observ...

10/25/2021 · Can Q-Learning be Improved with Advice?
Despite rapid progress in theoretical reinforcement learning (RL) over t...

06/14/2023 · Theoretical Hardness and Tractability of POMDPs in RL with Partial Hindsight State Information
Partially observable Markov decision processes (POMDPs) have been widely...

03/30/2017 · On Fundamental Limits of Robust Learning
We consider the problems of robust PAC learning from distributed and str...
