Acme: A Research Framework for Distributed Reinforcement Learning

06/01/2020 · by Matt Hoffman et al.

Deep reinforcement learning has led to many recent, groundbreaking advances. However, these advances have often come at the cost of both the scale and the complexity of the underlying RL algorithms. Increases in complexity have in turn made it more difficult for researchers to reproduce published RL algorithms or to rapidly prototype ideas. To address this, we introduce Acme, a framework for simplifying the development of novel RL algorithms, designed specifically to enable simple agent implementations that can be run at various scales of execution. We also aim to make the results of RL algorithms developed in academia and industrial labs easier to reproduce and extend. To this end we are releasing baseline implementations of various algorithms, created using our framework. In this work we introduce the major design decisions behind Acme and show how they are used to construct these baselines. We also experiment with these agents at different scales of both complexity and computation, including distributed versions. Ultimately, we show that the design decisions behind Acme lead to agents that can be scaled both up and down, and that, for the most part, greater levels of parallelization yield agents with equivalent performance, just faster.
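At the heart of the design the abstract describes is a simple actor-environment interaction loop. The sketch below is a minimal illustration of that loop in Python, not the library's verbatim code: it assumes Acme-style Actor methods (select_action, observe_first, observe, update) and a dm_env-style environment whose timesteps report episode termination via last(); exact signatures in the released library may differ.

```python
# Illustrative sketch of an Acme-style environment loop (assumed
# interface, not the library's exact code). The actor is assumed to
# expose select_action/observe_first/observe/update, and the
# environment to return dm_env-style timesteps with a last() method.

def run_episode(environment, actor):
    """Run a single episode, feeding observations to the actor."""
    timestep = environment.reset()
    actor.observe_first(timestep)  # record the initial observation

    while not timestep.last():
        # The actor picks an action from the current observation.
        action = actor.select_action(timestep.observation)
        timestep = environment.step(action)

        # Record the transition (e.g. into a replay buffer) and let
        # the agent's learner update from whatever data is available.
        actor.observe(action, next_timestep=timestep)
        actor.update()
```

Because the same loop drives both single-process and distributed agents, scaling up amounts to running many copies of it in parallel against a shared learner, which is consistent with the paper's claim that greater parallelization yields equivalent agents, just faster.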

Code Repositories

acme
A library of reinforcement learning components and agents

rlflow
Abstractions and demos for a multicore and distributed RL framework

hedgingbox
An RL hedging toolbox built on the Acme framework