Efficient Learning in Non-Stationary Linear Markov Decision Processes

10/24/2020
by Ahmed Touati, et al.

We study episodic reinforcement learning in non-stationary linear (a.k.a. low-rank) Markov Decision Processes (MDPs), i.e., both the reward and the transition kernel are linear with respect to a given feature map and are allowed to evolve either slowly or abruptly over time. For this problem setting, we propose OPT-WLSVI, an optimistic model-free algorithm based on weighted least squares value iteration that uses exponential weights to smoothly forget data that are far in the past. We show that our algorithm, when competing against the best policy at each time, achieves a regret that is upper bounded by 𝒪(d^{7/6} H^2 Δ^{1/3} K^{2/3}), where d is the dimension of the feature space, H is the planning horizon, K is the number of episodes, and Δ is a suitable measure of the non-stationarity of the MDP. This is the first regret bound for non-stationary reinforcement learning with linear function approximation.
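The core idea of exponentially weighted least squares value iteration can be illustrated with a short sketch. The following Python snippet (not the paper's exact algorithm; the function name, the parameters gamma_forget, lam, beta, and the simplified bonus term are illustrative assumptions) shows how past transitions are down-weighted geometrically in the ridge regression that fits the Q-value weights, and how an optimism bonus of the usual elliptical form could be added on top.

```python
# Minimal sketch, assuming a linear MDP with feature map phi(s, a) in R^d.
# Names and constants (gamma_forget, lam, beta) are hypothetical, not from the paper.
import numpy as np

def weighted_lsvi_step(features, targets, gamma_forget=0.99, lam=1.0, beta=1.0):
    """Fit Q(s, a) ~= phi(s, a)^T w with exponential forgetting.

    features : (n, d) array of feature vectors phi(s_tau, a_tau), oldest first.
    targets  : (n,) array of regression targets r + max_a Q_next(s', a).
    Returns the weight vector w and a function giving an optimistic value
    phi^T w + beta * sqrt(phi^T Lambda^{-1} phi) for a new feature phi.
    """
    n, d = features.shape
    # Exponential weights: a sample tau steps in the past gets weight gamma_forget^tau.
    ages = np.arange(n - 1, -1, -1)            # most recent sample has age 0
    w_exp = gamma_forget ** ages               # forgetting weights

    # Weighted ridge regression: Lambda = sum_tau w_tau phi_tau phi_tau^T + lam * I.
    Lambda = (features * w_exp[:, None]).T @ features + lam * np.eye(d)
    b = (features * w_exp[:, None]).T @ targets
    w = np.linalg.solve(Lambda, b)

    Lambda_inv = np.linalg.inv(Lambda)

    def optimistic_value(phi):
        # Simplified optimism bonus; the paper's exact bonus may take a different form.
        bonus = beta * np.sqrt(phi @ Lambda_inv @ phi)
        return phi @ w + bonus

    return w, optimistic_value
```

Because recent transitions dominate the weighted regression, the estimator tracks a slowly or abruptly drifting reward and transition kernel instead of averaging over the entire history, which is what allows the regret to scale with the non-stationarity measure Δ.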
