Horizon-Free Reinforcement Learning for Latent Markov Decision Processes

10/20/2022
by Runlong Zhou, et al.

We study regret minimization for reinforcement learning (RL) in Latent Markov Decision Processes (LMDPs) with context in hindsight. We design a novel model-based algorithmic framework which can be instantiated with both a model-optimistic and a value-optimistic solver. We prove an O(√(MΓSAK)) regret bound, where M is the number of contexts, S is the number of states, A is the number of actions, K is the number of episodes, and Γ ≤ S is the maximum transition degree of any state-action pair. The regret bound scales only logarithmically with the planning horizon, thus yielding the first (nearly) horizon-free regret bound for LMDPs. Key to our proof is an analysis of the total variance of alpha vectors, which we bound carefully with a recursion-based technique. We complement our positive result with a novel Ω(√(MSAK)) regret lower bound with Γ = 2, which shows that our upper bound is minimax optimal when Γ is a constant. Our lower bound relies on new constructions of hard instances and an argument based on the symmetrization technique from theoretical computer science, both of which are technically different from existing lower bound proofs for MDPs and may therefore be of independent interest.
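For reference, here is a minimal LaTeX sketch of the quantity the bound above controls, under a standard episodic LMDP protocol assumed here (a latent context is drawn each episode and revealed only in hindsight); the comparator V* as the value of the best context-unaware policy is an assumption of this sketch, not a restatement of the paper's exact definitions:

\[
\mathrm{Regret}(K) \;=\; \sum_{k=1}^{K}\Bigl(V^{\star}(s_1) - V^{\pi_k}(s_1)\Bigr)
\;\le\; \widetilde{O}\!\left(\sqrt{M\,\Gamma\,S\,A\,K}\right),
\]

where \(\pi_k\) is the policy deployed in episode \(k\) and \(\widetilde{O}(\cdot)\) hides factors polylogarithmic in the planning horizon, which is the sense in which the bound is (nearly) horizon-free.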
