AsyncQVI: Asynchronous-Parallel Q-Value Iteration for Reinforcement Learning with Near-Optimal Sample Complexity
In this paper, we propose AsyncQVI, Asynchronous-Parallel Q-Value Iteration, to solve Reinforcement Learning (RL) problems. Given an RL problem with |S| states, |A| actions, and a discount factor γ∈(0,1), AsyncQVI returns an ε-optimal policy with probability at least 1-δ at a sample complexity of Õ((|S||A|/((1-γ)^5 ε^2)) log(1/δ)). AsyncQVI is the first asynchronous-parallel RL algorithm with a convergence-rate analysis and an explicit sample complexity, and this sample complexity nearly matches the known lower bound. Furthermore, AsyncQVI is scalable: it has a low O(|S|) memory footprint and admits an efficient asynchronous-parallel implementation.
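To make the asynchronous-parallel pattern concrete, here is a minimal illustrative sketch of lock-free Q-value iteration with an O(|S|) memory footprint: multiple workers repeatedly pick a state, estimate Q-values from sampled transitions, and overwrite a shared value vector without synchronization. This is not the paper's exact algorithm; the interface sample_transition(s, a) -> (reward, next_state) and all parameter names are hypothetical assumptions for illustration.

```python
import threading
import numpy as np

def async_qvi_sketch(num_states, num_actions, sample_transition,
                     gamma=0.9, num_samples=50, num_iters=10000, num_workers=4):
    """Illustrative asynchronous Q-value iteration (not the paper's exact method).

    Only a value vector V of size |S| is stored, giving the O(|S|) memory
    footprint; Q-values are estimated on the fly from sampled transitions.
    """
    V = np.zeros(num_states)  # shared state; workers write to it without locks

    def worker():
        rng = np.random.default_rng()
        for _ in range(num_iters):
            s = int(rng.integers(num_states))     # pick a state to update
            q_est = np.empty(num_actions)
            for a in range(num_actions):          # Monte-Carlo estimate of Q(s, a)
                backups = [r + gamma * V[s2]
                           for r, s2 in (sample_transition(s, a)
                                         for _ in range(num_samples))]
                q_est[a] = np.mean(backups)
            V[s] = q_est.max()                    # asynchronous (lock-free) write

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return V  # a greedy policy can then be extracted from these value estimates
```

The lock-free writes are the defining design choice: workers may read slightly stale values of V, and the analysis in the paper is what bounds the effect of such staleness on the convergence rate.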