Exploring the Promise and Limits of Real-Time Recurrent Learning

05/30/2023
by Kazuki Irie, et al.

Real-time recurrent learning (RTRL) for sequence-processing recurrent neural networks (RNNs) offers certain conceptual advantages over backpropagation through time (BPTT). RTRL requires neither caching past activations nor truncating context, and enables online learning. However, RTRL's time and space complexity make it impractical. To overcome this problem, most recent work on RTRL focuses on approximation theories, while experiments are often limited to diagnostic settings. Here we explore the practical promise of RTRL in more realistic settings. We study actor-critic methods that combine RTRL and policy gradients, and test them in several subsets of DMLab-30, ProcGen, and Atari-2600 environments. On DMLab memory tasks, our system trained on fewer than 1.2 B environmental frames is competitive with or outperforms well-known IMPALA and R2D2 baselines trained on 10 B frames. To scale to such challenging tasks, we focus on certain well-known neural architectures with element-wise recurrence, allowing for tractable RTRL without approximation. We also discuss rarely addressed limitations of RTRL in real-world applications, such as its complexity in the multi-layer case.
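The key to tractability here is the element-wise (diagonal) recurrence: when each hidden unit feeds back only into itself, the RTRL sensitivities collapse to the same shape as the parameters, so the exact gradient can be carried forward online without truncation or approximation. The following is a minimal sketch of this idea for a toy cell h_t = tanh(W x_t + u * h_{t-1} + b); it is not the paper's code, and the names (ElementwiseRTRLCell, step) are purely illustrative, assuming NumPy.

```python
# Minimal sketch (illustrative, not the paper's implementation): exact RTRL for a
# single-layer RNN with element-wise recurrence h_t = tanh(W x_t + u * h_{t-1} + b).
# Each hidden unit feeds back only into itself, so the sensitivities dh/dW, dh/du,
# dh/db have the same shapes as W, u, b and can be updated online.
import numpy as np

class ElementwiseRTRLCell:
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=1.0 / np.sqrt(n_in), size=(n_hidden, n_in))
        self.u = rng.uniform(-0.5, 0.5, size=n_hidden)   # element-wise recurrent weights
        self.b = np.zeros(n_hidden)
        self.h = np.zeros(n_hidden)
        # RTRL sensitivities carried forward across time steps.
        self.S_W = np.zeros_like(self.W)   # dh_t / dW
        self.S_u = np.zeros_like(self.u)   # dh_t / du
        self.S_b = np.zeros_like(self.b)   # dh_t / db

    def step(self, x, dL_dh):
        """One online step: advance the state and sensitivities, return parameter grads."""
        a = self.W @ x + self.u * self.h + self.b
        h_new = np.tanh(a)
        fp = 1.0 - h_new ** 2                            # tanh'(a)
        # Exact sensitivity recursions (the recurrence Jacobian is diagonal: diag(u)).
        self.S_W = fp[:, None] * (x[None, :] + self.u[:, None] * self.S_W)
        self.S_u = fp * (self.h + self.u * self.S_u)     # uses h_{t-1}
        self.S_b = fp * (1.0 + self.u * self.S_b)
        self.h = h_new
        # Instantaneous gradients given the current loss signal dL/dh_t.
        grads = {
            "W": dL_dh[:, None] * self.S_W,
            "u": dL_dh * self.S_u,
            "b": dL_dh * self.S_b,
        }
        return h_new, grads
```

For a fully connected recurrent matrix, the sensitivity of the state with respect to the recurrent weights alone is roughly an n x n x n tensor, which is what makes exact RTRL impractical; with element-wise recurrence the sensitivities stay the size of the parameters, so the online, untruncated gradient remains affordable.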


