
Recurrent Control Nets for Deep Reinforcement Learning

by Vincent Liu, et al.
Stanford University

Central Pattern Generators (CPGs) are biological neural circuits capable of producing coordinated rhythmic outputs in the absence of rhythmic input. As a result, they are responsible for most rhythmic motion in living organisms. This rhythmic control is broadly applicable to fields such as locomotive robotics and medical devices. In this paper, we explore the possibility of creating a self-sustaining CPG network for reinforcement learning that learns rhythmic motion more efficiently and across more general environments than the current multilayer perceptron (MLP) baseline models. Recent work introduces the Structured Control Net (SCN), which maintains linear and nonlinear modules for local and global control, respectively. Here, we show that time-sequence architectures such as Recurrent Neural Networks (RNNs) model CPGs effectively. Combining prior work on RNNs and SCNs, we introduce the Recurrent Control Net (RCN), which pairs a linear control module with a nonlinear recurrent module. RCNs match and exceed the performance of baseline MLPs and SCNs across all environment tasks. Our findings confirm existing intuitions for RNNs on reinforcement learning tasks, and demonstrate the promise of SCN-like structures in reinforcement learning.
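To make the architecture concrete, the following is a minimal sketch of an RCN-style policy in the spirit described above: a linear module maps observations directly to actions, a recurrent nonlinear module carries hidden state across time steps (giving the self-sustained dynamics a CPG requires), and the two outputs are summed. The layer sizes, the simple tanh RNN cell, and the additive combination are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

class RecurrentControlNet:
    """Sketch of an RCN-style policy: action = linear(obs) + rnn(obs).

    Hypothetical dimensions and a plain tanh RNN cell are used here
    for illustration; the paper's trained architecture may differ.
    """

    def __init__(self, obs_dim, act_dim, hidden_dim=32, seed=0):
        rng = np.random.default_rng(seed)
        # Linear control module: direct observation-to-action map.
        self.K = rng.normal(0.0, 0.1, (act_dim, obs_dim))
        # Nonlinear recurrent module: a single tanh RNN cell.
        self.W_in = rng.normal(0.0, 0.1, (hidden_dim, obs_dim))
        self.W_h = rng.normal(0.0, 0.1, (hidden_dim, hidden_dim))
        self.W_out = rng.normal(0.0, 0.1, (act_dim, hidden_dim))
        self.h = np.zeros(hidden_dim)

    def step(self, obs):
        # Hidden state persists between calls, so the module can
        # sustain rhythm even under constant (non-rhythmic) input,
        # analogous to a CPG.
        self.h = np.tanh(self.W_in @ obs + self.W_h @ self.h)
        u_nonlinear = self.W_out @ self.h
        u_linear = self.K @ obs           # local, linear control term
        return u_linear + u_nonlinear     # modules combined additively

policy = RecurrentControlNet(obs_dim=4, act_dim=2)
# Constant input, yet the output evolves as the hidden state updates.
actions = [policy.step(np.ones(4)) for _ in range(3)]
```

Note that even with a constant observation, successive actions differ because the recurrent hidden state evolves; this is the property that distinguishes the recurrent module from a feedforward MLP baseline.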



