Low-pass Recurrent Neural Networks - A memory architecture for longer-term correlation discovery

05/13/2018
by Thomas Stepleton, et al.

Reinforcement learning (RL) agents performing complex tasks must be able to remember observations and actions across sizable time intervals. This is especially true during the initial learning stages, when exploratory behaviour can increase the delay between specific actions and their effects. Many new or popular approaches for learning these distant correlations employ backpropagation through time (BPTT), but this technique requires storing observation traces long enough to span the interval between cause and effect. Besides memory demands, learning dynamics like vanishing gradients and slow convergence due to infrequent weight updates can reduce BPTT's practicality; meanwhile, although online recurrent network learning is a developing topic, most approaches are not efficient enough to use as replacements. We propose a simple, effective memory strategy that can extend the window over which BPTT can learn without requiring longer traces. We explore this approach empirically on a few tasks and discuss its implications.
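The abstract does not spell out the mechanism, but the title suggests memory built from low-pass filtering. Below is a minimal sketch of that idea, assuming the memory is a bank of fixed exponential moving averages (leaky integrators) over feature channels, with geometrically spaced decay rates so that slow channels summarise inputs far beyond the BPTT truncation window. The class name LowPassMemory, the num_timescales parameter, and the timescale spacing are illustrative assumptions, not the paper's exact design.

import numpy as np

class LowPassMemory:
    """Bank of leaky integrators over feature channels (illustrative sketch).

    Each channel i keeps a running average
        h_t = alpha_i * h_{t-1} + (1 - alpha_i) * x_t,
    where alpha_i close to 1 gives a slowly decaying memory.
    """

    def __init__(self, num_features, num_timescales=4):
        # Geometrically spaced effective timescales, e.g. ~4, 16, 64, 256
        # steps (an assumed parameterisation, chosen for illustration).
        taus = 2.0 ** (2 * np.arange(1, num_timescales + 1))
        self.alphas = 1.0 - 1.0 / taus              # shape: (num_timescales,)
        self.state = np.zeros((num_timescales, num_features))

    def step(self, x):
        # x: (num_features,) feature vector for the current timestep.
        x = np.asarray(x, dtype=float)
        # Broadcast each decay rate across all feature channels.
        self.state = (self.alphas[:, None] * self.state
                      + (1.0 - self.alphas)[:, None] * x)
        # Concatenate all timescales as the memory readout fed to the agent.
        return self.state.reshape(-1)

# Example: the slowest channel still reflects inputs hundreds of steps old.
mem = LowPassMemory(num_features=3)
for t in range(1000):
    readout = mem.step(np.random.randn(3))   # readout has shape (12,)

Under these assumptions the filter has no trained parameters, so nothing outside the truncation window needs a gradient: a BPTT window of only a few steps can still correlate current rewards with much earlier observations, because the slow channels carry a summary of them into the window.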


Related research

12/24/2014 · Learning Longer Memory in Recurrent Neural Networks
Recurrent neural network is a powerful model that learns temporal patter...

01/02/2023 · On the Challenges of using Reinforcement Learning in Precision Drug Dosing: Delay and Prolongedness of Action Effects
Drug dosing is an important application of AI, which can be formulated a...

11/08/2019 · Fully Bayesian Recurrent Neural Networks for Safe Reinforcement Learning
Reinforcement Learning (RL) has demonstrated state-of-the-art results in...

01/31/2017 · On orthogonality and learning recurrent networks with long term dependencies
It is well known that it is challenging to train deep neural networks an...

06/12/2020 · A Practical Sparse Approximation for Real Time Recurrent Learning
Current methods for training recurrent neural networks are based on back...

01/14/2017 · Long Timescale Credit Assignment in Neural Networks with External Memory
Credit assignment in traditional recurrent neural networks usually invol...

03/18/2020 · Progress Extrapolating Algorithmic Learning to Arbitrary Sequence Lengths
Recent neural network models for algorithmic tasks have led to significa...
