A Taxonomy of Recurrent Learning Rules

Backpropagation through time (BPTT) is the de facto standard for training recurrent neural networks (RNNs), but it is non-causal and non-local. Real-time recurrent learning (RTRL) is a causal alternative, but it is highly inefficient. Recently, e-prop was proposed as a causal, local, and efficient practical alternative to these algorithms; it approximates the exact gradient by radically pruning the recurrent dependencies carried over time. Here, we derive RTRL from BPTT using a detailed notation that brings intuition and clarifies how the two are connected. Furthermore, we place e-prop within this picture, formalising what it approximates. Finally, we derive a family of algorithms of which e-prop is a special case.
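The contrast the abstract draws can be made concrete. Below is a minimal, hypothetical sketch (not code from the paper) for a vanilla RNN h_t = tanh(W h_{t-1} + U x_t): RTRL carries a full influence tensor dh/dW forward in time, while an e-prop-style rule keeps only a per-neuron eligibility trace, pruning all cross-neuron recurrent dependencies. All sizes and variable names are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch (not from the paper): contrast RTRL's full influence
# tensor with the pruned, per-neuron eligibility trace that e-prop keeps,
# for a vanilla RNN  h_t = tanh(W @ h_prev + U @ x_t).

rng = np.random.default_rng(0)
n, m, T = 4, 3, 5                       # hidden units, input size, time steps
W = rng.normal(0.0, 0.3, (n, n))        # recurrent weights
U = rng.normal(0.0, 0.3, (n, m))        # input weights

h = np.zeros(n)
G = np.zeros((n, n, n))                 # RTRL: G[i, j, l] = dh_i/dW_jl (O(n^3) memory)
e = np.zeros((n, n))                    # e-prop: e[j, l] ~ dh_j/dW_jl  (O(n^2) memory)

for t in range(T):
    x = rng.normal(size=m)
    h_new = np.tanh(W @ h + U @ x)
    d = 1.0 - h_new**2                  # tanh'(pre-activation)

    # RTRL: propagate the influence of every weight on every unit.
    G_new = np.einsum('i,ik,kjl->ijl', d, W, G)
    for j in range(n):
        G_new[j, j, :] += d[j] * h      # immediate contribution of W_jl to h_j
    G = G_new

    # e-prop: keep only the self-recurrent path j -> j, radically pruning
    # the cross-neuron dependencies carried over time.
    e = d[:, None] * (np.diag(W)[:, None] * e + h[None, :])

    h = h_new

# e approximates the diagonal slices G[j, j, :]; the discrepancy is exactly
# the cross-neuron dependency that the pruning discards.
diag = np.stack([G[j, j, :] for j in range(n)])
print(np.max(np.abs(diag - e)))
```

The memory contrast is the point: the RTRL tensor scales as O(n^3) per weight matrix, while the e-prop-style trace is the same size as W itself, which is what makes the rule causal, local, and cheap.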


Related research

- Optimal Kronecker-Sum Approximation of Real Time Recurrent Learning (02/11/2019)
- A Practical Sparse Approximation for Real Time Recurrent Learning (06/12/2020)
- Scalable Online Recurrent Learning Using Columnar Neural Networks (03/09/2021)
- Spectral Pruning for Recurrent Neural Networks (05/23/2021)
- Approximating Real-Time Recurrent Learning with Random Kronecker Factors (05/28/2018)
- Exploring the Promise and Limits of Real-Time Recurrent Learning (05/30/2023)
