Improved Worst-Case Regret Bounds for Randomized Least-Squares Value Iteration

10/23/2020
by Priyank Agrawal, et al.

This paper studies regret minimization with randomized value functions in reinforcement learning. For tabular finite-horizon Markov Decision Processes, we introduce a clipping variant of a classical Thompson Sampling (TS)-like algorithm, randomized least-squares value iteration (RLSVI). We analyze the algorithm using a novel intertwined regret decomposition. Our Õ(H^2 S√(AT)) high-probability worst-case regret bound improves the previous sharpest worst-case regret bounds for RLSVI and matches the existing state-of-the-art worst-case TS-based regret bounds.
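For intuition, here is a minimal sketch of what one planning step of tabular RLSVI with clipping might look like. All names (rlsvi_plan, counts, r_hat, p_hat, sigma) are illustrative assumptions, not the paper's code, and the actual noise scale and clipping thresholds used in the paper's analysis differ.

```python
import numpy as np

def rlsvi_plan(S, A, H, counts, r_hat, p_hat, sigma, rng):
    """One planning step of tabular RLSVI with clipping (a sketch).

    counts[h, s, a] : visit counts so far.
    r_hat, p_hat    : empirical reward and transition estimates.
    sigma           : noise-scale hyperparameter (the paper tunes this
                      as a function of H, S, and A).
    Returns a greedy policy[h, s] derived from the randomized Q-values.
    """
    Q = np.zeros((H + 1, S, A))
    V = np.zeros((H + 1, S))
    for h in range(H - 1, -1, -1):          # backward value iteration
        for s in range(S):
            for a in range(A):
                # Gaussian perturbation whose variance shrinks with visits.
                noise = rng.normal(0.0, sigma / np.sqrt(max(counts[h, s, a], 1)))
                q = r_hat[h, s, a] + p_hat[h, s, a] @ V[h + 1] + noise
                # Clipping: restrict the randomized value to the feasible
                # range [0, H - h]; this is the variant the paper analyzes.
                Q[h, s, a] = min(max(q, 0.0), H - h)
            V[h, s] = Q[h, s].max()
    return Q[:H].argmax(axis=2)

# Usage sketch: plan with a fresh perturbation at the start of each episode,
# then roll out the returned greedy policy and update counts, r_hat, p_hat.
```

The key difference from plain RLSVI is the clipping step: without it, the perturbed values can drift outside the range achievable by any policy, and controlling that overshoot is what the paper's intertwined regret decomposition exploits to sharpen the bound.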

