Double Thompson Sampling in Finite Stochastic Games

02/21/2022
by Shuqing Shi, et al.

We consider the trade-off between exploration and exploitation in a finite discounted Markov Decision Process (MDP) in which the state transition matrix of the underlying environment is unknown. We propose a double Thompson sampling reinforcement learning algorithm (DTS) to solve this kind of problem. The algorithm achieves a total regret bound of $\tilde{\mathcal{O}}(D\sqrt{SAT})$ over a time horizon $T$ with $S$ states, $A$ actions, and diameter $D$. DTS consists of two parts. The first is the conventional part, in which we apply posterior sampling to the transition matrix based on a prior distribution. In the second part, we employ a count-based posterior update method to balance the locally optimal action against the long-term optimal action in order to find the globally optimal game value. We establish a regret bound of $\tilde{\mathcal{O}}(\sqrt{T}/S^{2})$, which is, to our knowledge, the best regret bound to date for finite discounted MDPs. Numerical results demonstrate the efficiency and superiority of our approach.
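To make the two-stage structure concrete, the following is a minimal sketch of a double-posterior-sampling loop for a finite discounted MDP with unknown transitions. It is an illustration only, not the authors' exact DTS algorithm: the Dirichlet prior over transitions, the value-iteration planner, and the count-based noise used to resample the action in the second stage are all assumptions made for this example.

```python
import numpy as np

# Hypothetical sketch: first stage samples a transition matrix from a
# Dirichlet posterior and plans against it; second stage perturbs the
# resulting Q-values with count-based noise before choosing an action.
S, A, gamma, T = 5, 3, 0.95, 10_000
rng = np.random.default_rng(0)

# Hidden true dynamics and (for simplicity) known rewards, used only
# to simulate the environment.
P_true = rng.dirichlet(np.ones(S), size=(S, A))  # P_true[s, a] is a dist over next states
R = rng.uniform(size=(S, A))

alpha = np.ones((S, A, S))   # Dirichlet posterior counts over transitions
visits = np.ones((S, A))     # state-action visit counts for the second stage

def value_iteration(P, R, gamma, iters=200):
    """Solve the sampled MDP approximately; returns Q-values."""
    V = np.zeros(S)
    for _ in range(iters):
        Q = R + gamma * P @ V   # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] V[s']
        V = Q.max(axis=1)
    return Q

s = 0
for t in range(T):
    # First sampling: draw a transition matrix from the posterior and plan.
    P_sample = np.array([[rng.dirichlet(alpha[si, ai]) for ai in range(A)]
                         for si in range(S)])
    Q = value_iteration(P_sample, R, gamma)

    # Second, count-based sampling: noise shrinks with visit counts,
    # trading the greedy (locally optimal) action off against
    # less-explored actions that may be better in the long run.
    noise = rng.normal(scale=1.0 / np.sqrt(visits[s]))
    a = int(np.argmax(Q[s] + noise))

    # Act in the true environment and update the posterior counts.
    s_next = rng.choice(S, p=P_true[s, a])
    alpha[s, a, s_next] += 1.0
    visits[s, a] += 1.0
    s = s_next
```

Under this reading, the first stage supplies optimism via posterior sampling over models, while the count-based second stage keeps rarely tried actions competitive, which is the balance the abstract attributes to DTS.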
