
Finite Sample Analysis of Two-Time-Scale Natural Actor-Critic Algorithm

by   Sajad Khodadadian, et al.

Actor-critic style two-time-scale algorithms are very popular in reinforcement learning and have seen great empirical success, but their performance is not yet fully understood theoretically. In this paper, we characterize the global convergence of an online natural actor-critic algorithm in the tabular setting using a single trajectory. Our analysis applies to very general settings, as we only assume that the underlying Markov chain is ergodic under all policies (the so-called Recurrence assumption). We employ ϵ-greedy sampling to ensure sufficient exploration. For a fixed exploration parameter ϵ, we show that the natural actor-critic algorithm is 𝒪(1/(ϵT^{1/4}) + ϵ) close to the global optimum after T iterations. By carefully diminishing the exploration parameter ϵ as the iterations proceed, we also show convergence to the global optimum at a rate of 𝒪(1/T^{1/6}).
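The ϵ-greedy sampling mentioned in the abstract can be read as drawing actions from an ϵ-smoothed mixture of the current tabular policy and the uniform distribution. A minimal sketch of that sampling step is below; the function name, the policy values, and the choice of ϵ are illustrative, not taken from the paper.

```python
import numpy as np

def epsilon_greedy_sample(policy_probs, epsilon, rng):
    """Sample an action from an epsilon-smoothed tabular policy.

    With probability epsilon the action is uniform over all actions
    (exploration); otherwise it follows the current policy. This is
    equivalent to sampling from the mixture
        (1 - epsilon) * pi(a|s) + epsilon / |A|.
    """
    num_actions = len(policy_probs)
    mixed = (1.0 - epsilon) * np.asarray(policy_probs) + epsilon / num_actions
    return rng.choice(num_actions, p=mixed)

# Hypothetical policy at a single state of a tabular MDP.
rng = np.random.default_rng(0)
policy = np.array([0.7, 0.2, 0.1])
actions = [epsilon_greedy_sample(policy, epsilon=0.1, rng=rng) for _ in range(1000)]
```

Because the mixture puts probability at least ϵ/|A| on every action in every state, each state-action pair is visited infinitely often under the ergodicity (Recurrence) assumption, which is what makes the critic's estimates well defined along a single trajectory.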



