Finite-Sample Analysis of Two-Time-Scale Natural Actor-Critic Algorithm
Actor-critic style two-time-scale algorithms are very popular in reinforcement learning and have seen great empirical success, but their performance is not completely understood theoretically. In this paper, we characterize the global convergence of an online natural actor-critic algorithm in the tabular setting using a single trajectory. Our analysis applies to very general settings, as we only assume that the underlying Markov chain is ergodic under all policies (the so-called Recurrence assumption). We employ ϵ-greedy sampling to ensure sufficient exploration. For a fixed exploration parameter ϵ, we show that the natural actor-critic algorithm is 𝒪(1/(ϵT^{1/4}) + ϵ) close to the global optimum after T iterations of the algorithm. By carefully diminishing the exploration parameter ϵ as the iterations proceed, we also show convergence to the global optimum at a rate of 𝒪(1/T^{1/6}).
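To make the setup concrete, the following is a minimal illustrative sketch, not the paper's exact algorithm, of a tabular two-time-scale natural actor-critic update with ϵ-greedy sampling along a single trajectory. The small random MDP, the step-size exponents, the SARSA-style critic, and all variable names are assumptions made purely for illustration.

```python
# Illustrative sketch (assumptions throughout): tabular two-time-scale
# natural actor-critic on a small random MDP, run on a single trajectory
# with epsilon-greedy exploration.
import numpy as np

rng = np.random.default_rng(0)

# --- A small ergodic MDP (assumed for illustration) ---
nS, nA, gamma = 5, 3, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] = next-state distribution
R = rng.uniform(0.0, 1.0, size=(nS, nA))        # expected rewards

theta = np.zeros((nS, nA))   # actor parameters (softmax policy)
Q = np.zeros((nS, nA))       # critic estimate of the Q-function
eps = 0.1                    # fixed exploration parameter
T = 200_000

def softmax_policy(th, s):
    z = th[s] - th[s].max()
    p = np.exp(z)
    return p / p.sum()

s = rng.integers(nS)
for t in range(1, T + 1):
    # Behavior policy: epsilon-greedy mixture of the softmax policy and uniform
    pi = softmax_policy(theta, s)
    behavior = (1 - eps) * pi + eps / nA
    a = rng.choice(nA, p=behavior)

    r = R[s, a]
    s_next = rng.choice(nS, p=P[s, a])
    pi_next = softmax_policy(theta, s_next)
    a_next = rng.choice(nA, p=(1 - eps) * pi_next + eps / nA)

    # Two time scales: faster critic step beta_t, slower actor step alpha_t
    # (the exponents below are assumed, for illustration only)
    beta_t = 1.0 / t ** 0.6
    alpha_t = 1.0 / t ** 0.9

    # Critic: one-step SARSA-style TD update along the single trajectory
    td_error = r + gamma * Q[s_next, a_next] - Q[s, a]
    Q[s, a] += beta_t * td_error

    # Actor: natural policy gradient step; with a softmax parameterization the
    # Fisher-preconditioned gradient reduces to the estimated advantage
    advantage = Q[s] - pi @ Q[s]
    theta[s] += alpha_t * advantage

    s = s_next

print("greedy actions per state:", theta.argmax(axis=1))
```

The two step sizes are what makes the scheme two-time-scale: the critic is updated on the faster scale so that, from the actor's perspective, it tracks the current policy's Q-values, while the actor moves on the slower scale using the natural-gradient direction.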