Logarithmic Regret for Reinforcement Learning with Linear Function Approximation

11/23/2020 ∙ by Jiafan He, et al.

Reinforcement learning (RL) with linear function approximation has received increasing attention recently. However, existing work has focused on obtaining √(T)-type regret bounds, where T is the number of steps. In this paper, we show that logarithmic regret is attainable under two recently proposed linear MDP assumptions, provided that there exists a positive sub-optimality gap for the optimal action-value function. Specifically, under the linear MDP assumption (Jin et al. 2019), the LSVI-UCB algorithm can achieve Õ(d^3H^5/gap_min·log(T)) regret; and under the linear mixture MDP assumption (Ayoub et al. 2020), the UCRL-VTR algorithm can achieve Õ(d^2H^5/gap_min·log^3(T)) regret, where d is the dimension of the feature mapping, H is the length of an episode, and gap_min is the minimal sub-optimality gap. To the best of our knowledge, these are the first logarithmic regret bounds for RL with linear function approximation.
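To give a feel for the gap between the two regret rates, the sketch below compares √(T) growth against gap-dependent log(T) growth numerically. The constants and the value of gap_min are hypothetical, chosen purely for illustration; they are not the constants from the paper's bounds.

```python
import math

def sqrt_type_regret(T, c=1.0):
    # O(sqrt(T))-type bound: grows polynomially with the number of steps T.
    return c * math.sqrt(T)

def log_type_regret(T, gap_min=0.1, c=1.0):
    # O(log(T)/gap_min)-type bound: grows only logarithmically in T,
    # but is inversely proportional to the minimal sub-optimality gap.
    return c * math.log(T) / gap_min

# For large T, the logarithmic bound is dramatically smaller,
# even after paying the 1/gap_min factor.
for T in (10**2, 10**4, 10**6):
    print(f"T={T:>7}  sqrt-type={sqrt_type_regret(T):>8.1f}  "
          f"log-type={log_type_regret(T):>6.1f}")
```

The crossover point depends on gap_min: as the gap shrinks, the logarithmic bound's leading constant blows up, which is why the gap assumption is essential for these results.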


