
Horizon-Free Reinforcement Learning in Polynomial Time: the Power of Stationary Policies

by   Zihan Zhang, et al.

This paper gives the first polynomial-time algorithm for tabular Markov Decision Processes (MDPs) that enjoys a regret bound independent of the planning horizon. Specifically, we consider a tabular MDP with S states, A actions, a planning horizon H, and total reward bounded by 1, where the agent plays for K episodes. We design an algorithm that achieves O(poly(S,A,log K)√(K)) regret, in contrast to existing bounds, which either have an additional polylog(H) dependency <cit.> or an exponential dependency on S <cit.>. Our result relies on a sequence of new structural lemmas establishing the approximation power, stability, and concentration properties of stationary policies, which may have applications in other problems related to Markov chains.
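To make the setting concrete, the following is a minimal sketch (not the paper's algorithm) of evaluating a stationary policy in a time-homogeneous tabular MDP matching the abstract's setup: S states, A actions, horizon H, and per-step rewards scaled so that the total reward over an episode is at most 1. All names and the random instance are illustrative assumptions.

```python
import numpy as np

# Hypothetical tabular MDP instance: S states, A actions, horizon H.
S, A, H = 4, 2, 8
rng = np.random.default_rng(0)

# Time-homogeneous transition kernel: P[s, a] is a distribution over next states.
P = rng.dirichlet(np.ones(S), size=(S, A))
# Per-step rewards in [0, 1/H], so any episode's total reward is at most 1,
# mirroring the "total reward bounded by 1" assumption in the abstract.
R = rng.uniform(0, 1.0 / H, size=(S, A))

# A stationary policy maps each state to one action, independently of the step h.
pi = rng.integers(0, A, size=S)

def evaluate_stationary(P, R, pi, H):
    """Value of a stationary policy over horizon H by backward induction."""
    V = np.zeros(S)
    for _ in range(H):
        # One Bellman backup: immediate reward plus expected next-state value.
        V = R[np.arange(S), pi] + P[np.arange(S), pi] @ V
    return V

V = evaluate_stationary(P, R, pi, H)
print(V)  # per-state values, each in [0, 1]
```

Because the kernel and policy do not depend on the step h, a single (S×S) policy-induced transition matrix suffices for all H backups; non-stationary policies would instead need a separate backup rule per step.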




Efficient Learning in Non-Stationary Linear Markov Decision Processes

We study episodic reinforcement learning in non-stationary linear (a.k.a...

Is Reinforcement Learning More Difficult Than Bandits? A Near-optimal Algorithm Escaping the Curse of Horizon

Episodic reinforcement learning and contextual bandits are two widely st...

The Complexity of Decentralized Control of Markov Decision Processes

Planning for distributed agents with partial state information is consid...

Nearly Horizon-Free Offline Reinforcement Learning

We revisit offline reinforcement learning on episodic time-homogeneous t...

Approximation of Lorenz-Optimal Solutions in Multiobjective Markov Decision Processes

This paper is devoted to fair optimization in Multiobjective Markov Deci...

Efficient Algorithms for Planning with Participation Constraints

We consider the problem of planning with participation constraints intro...