Distributed Deep Reinforcement Learning: Learn how to play Atari games in 21 minutes

01/09/2018
by Igor Adamski, et al.

We present a study in Distributed Deep Reinforcement Learning (DDRL) focused on the scalability of a state-of-the-art deep reinforcement learning algorithm known as Batch Asynchronous Advantage Actor-Critic (BA3C). We show that using the Adam optimization algorithm with a batch size of up to 2048 is a viable choice for carrying out large-scale machine learning computations. This, combined with a careful reexamination of the optimizer's hyperparameters, synchronous training at the node level (while keeping the local, single-node part of the algorithm asynchronous), and minimizing the memory footprint of the model, allowed us to achieve linear scaling for up to 64 CPU nodes. This corresponds to a training time of 21 minutes on 768 CPU cores, compared to the roughly 10 hours needed by a baseline single-node implementation on 24 cores.
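To make the training scheme described above more concrete, the sketch below illustrates (and is not the authors' code) the two central ideas: Adam updates computed on large batches (up to 2048 samples), and synchronous gradient averaging across nodes while each node gathers experience locally. All names here (average_gradients_across_nodes, the toy gradient computation) are illustrative placeholders and assumptions, not the BA3C implementation's API.

```python
# Minimal sketch, assuming a data-parallel setup where each node computes a
# gradient on a large local batch, gradients are averaged synchronously
# across nodes (e.g. via an MPI-style all-reduce), and Adam applies the
# update. The local experience collection in BA3C remains asynchronous; that
# part is abstracted away here as a random "batch".

import numpy as np


class Adam:
    """Plain Adam optimizer on a flat parameter vector."""

    def __init__(self, dim, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, beta1, beta2, eps
        self.m = np.zeros(dim)   # first-moment estimate
        self.v = np.zeros(dim)   # second-moment estimate
        self.t = 0               # step counter for bias correction

    def step(self, params, grad):
        self.t += 1
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad ** 2
        m_hat = self.m / (1 - self.b1 ** self.t)
        v_hat = self.v / (1 - self.b2 ** self.t)
        return params - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)


def average_gradients_across_nodes(local_grad, num_nodes):
    # Placeholder for the synchronous inter-node step (e.g. an Allreduce SUM
    # followed by division by the node count). Here it simply returns the
    # local gradient so the sketch runs on a single machine.
    return local_grad


# Toy training loop: one gradient per 2048-sample batch, averaged across
# (hypothetically) 64 nodes, then a single synchronous Adam update.
dim, batch_size, num_nodes = 1000, 2048, 64
params = np.zeros(dim)
opt = Adam(dim)
for step in range(10):
    batch = np.random.randn(batch_size, dim)   # stand-in for collected experience
    local_grad = batch.mean(axis=0)             # stand-in for the policy gradient
    grad = average_gradients_across_nodes(local_grad, num_nodes)
    params = opt.step(params, grad)
```

In the paper's setting, keeping the inter-node step synchronous avoids stale gradients at large node counts, while the asynchrony is confined to the actors on each node that feed the large batches.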


