Asynchronous Advantage Actor Critic: Non-asymptotic Analysis and Linear Speedup

12/31/2020
by   Han Shen, et al.
Asynchronous and parallel implementations of standard reinforcement learning (RL) algorithms are a key enabler of the tremendous success of modern RL. Among the many asynchronous RL algorithms, arguably the most popular and effective is the asynchronous advantage actor-critic (A3C) algorithm. Although A3C has become a workhorse of RL, its theoretical properties are still not well understood, including its non-asymptotic convergence and the performance gain from parallelism (a.k.a. speedup). This paper revisits the A3C algorithm with TD(0) for the critic update, termed A3C-TD(0), and establishes provable convergence guarantees. With linear value function approximation for the TD update, the convergence of A3C-TD(0) is established under both i.i.d. and Markovian sampling. Under i.i.d. sampling, A3C-TD(0) achieves a sample complexity of 𝒪(ϵ^-2.5/N) per worker to reach ϵ accuracy, where N is the number of workers. Compared to the best-known sample complexity of 𝒪(ϵ^-2.5) for two-timescale AC, A3C-TD(0) achieves a linear speedup, providing the first theoretical justification for the advantage of parallelism and asynchrony in AC algorithms. Numerical tests on synthetically generated instances and OpenAI Gym environments corroborate the theoretical analysis.
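To make the critic update concrete, the following is a minimal sketch of TD(0) with linear value function approximation, the critic rule the paper analyzes: given a transition (s, r, s'), the TD error is δ = r + γ φ(s')ᵀw − φ(s)ᵀw and the parameters move as w ← w + β δ φ(s). The environment here is a hypothetical two-state Markov reward process invented for illustration (it is not from the paper), and the one-hot features, step size, and horizon are all assumptions.

```python
import numpy as np

def phi(s, n_states=2):
    """One-hot feature vector (a special case of linear features)."""
    f = np.zeros(n_states)
    f[s] = 1.0
    return f

def td0_linear(n_steps=5000, beta=0.1, gamma=0.9):
    """TD(0) critic update on a toy deterministic 2-state chain.

    Chain: 0 -> 1 -> 0 -> ...; reward 1.0 is collected when leaving
    state 0, and 0.0 when leaving state 1 (illustrative assumption).
    """
    rewards = {0: 1.0, 1: 0.0}
    w = np.zeros(2)          # critic parameters (value estimates here)
    s = 0
    for _ in range(n_steps):
        s_next = 1 - s
        r = rewards[s]
        # TD error: delta = r + gamma * phi(s')^T w - phi(s)^T w
        delta = r + gamma * phi(s_next) @ w - phi(s) @ w
        # TD(0) update: w <- w + beta * delta * phi(s)
        w = w + beta * delta * phi(s)
        s = s_next
    return w

w = td0_linear()
# True values solve V(0) = 1 + 0.9 V(1) and V(1) = 0.9 V(0),
# i.e. V(0) = 1/0.19 ≈ 5.263 and V(1) ≈ 4.737.
print(w)
```

In A3C-TD(0), N workers run updates of this kind asynchronously against shared actor and critic parameters, each on its own trajectory; the paper's linear-speedup result says the per-worker sample complexity shrinks by a factor of N.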
