Unbiased Online Recurrent Optimization

02/16/2017
by Corentin Tallec et al.

The novel Unbiased Online Recurrent Optimization (UORO) algorithm allows for online learning of general recurrent computational graphs such as recurrent network models. It works in a streaming fashion and avoids backtracking through past activations and inputs. UORO is computationally as costly as Truncated Backpropagation Through Time (truncated BPTT), a widespread algorithm for online learning of recurrent networks. UORO is a modification of NoBackTrack that bypasses the need for model sparsity and makes implementation easy in current deep learning frameworks, even for complex models. Like NoBackTrack, UORO provides unbiased gradient estimates; unbiasedness is the core hypothesis in stochastic gradient descent theory, without which convergence to a local optimum is not guaranteed. By contrast, truncated BPTT does not provide this property, and can therefore diverge. On synthetic tasks where truncated BPTT diverges, UORO converges. For instance, when a parameter has a positive short-term but negative long-term influence, truncated BPTT diverges unless the truncation span is significantly longer than the intrinsic temporal range of the interactions, while UORO performs well thanks to the unbiasedness of its gradients.
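The mechanism behind this, detailed in the full paper, is a rank-one estimate (tilde_s, tilde_theta) of the Jacobian of the current state with respect to the parameters, kept unbiased in expectation by a random sign vector and propagated forward in time instead of backpropagating through stored activations. The sketch below illustrates that idea in PyTorch on a toy tanh cell; the cell, dimensions, input stream, loss, learning rate, and epsilon constants are illustrative assumptions, not specifics from the paper.

```python
import torch

torch.manual_seed(0)

# Illustrative sizes and constants (not from the paper).
n_state, n_in, lr, eps = 16, 4, 1e-2, 1e-7

# Toy RNN cell parameters (hypothetical stand-in for "theta").
W_s = (0.1 * torch.randn(n_state, n_state)).requires_grad_()
W_x = (0.1 * torch.randn(n_state, n_in)).requires_grad_()
params = (W_s, W_x)

def step(s, x):
    # One transition s_{t+1} = F(s_t, x_t, theta).
    return torch.tanh(s @ W_s.T + x @ W_x.T)

# Rank-one Jacobian estimate: E[outer(tilde_s, tilde_theta)] = ds_t/dtheta.
s = torch.zeros(n_state)
tilde_s = torch.zeros(n_state)
tilde_theta = [torch.zeros_like(p) for p in params]

for t in range(100):
    x = torch.randn(n_in)             # stand-in input stream
    target = torch.zeros(n_state)     # stand-in target

    # Forward-mode product (dF/ds) @ tilde_s: no past states are stored.
    s_next, js = torch.autograd.functional.jvp(
        lambda s_: step(s_, x), s, tilde_s)

    # Random sign vector nu and the product nu^T (dF/dtheta).
    nu = torch.randint(0, 2, (n_state,)).float() * 2 - 1
    s_out = step(s.detach(), x)
    g_theta = torch.autograd.grad(s_out, params, grad_outputs=nu)

    # Variance-reducing scaling coefficients (eps avoids division by zero).
    norm_tt = torch.sqrt(sum((g ** 2).sum() for g in tilde_theta))
    norm_gt = torch.sqrt(sum((g ** 2).sum() for g in g_theta))
    rho0 = torch.sqrt(norm_tt / (js.norm() + eps)) + eps
    rho1 = torch.sqrt(norm_gt / (nu.norm() + eps)) + eps

    # Rank-one forward update of the Jacobian estimate.
    tilde_s = rho0 * js + rho1 * nu
    tilde_theta = [tt / rho0 + g / rho1
                   for tt, g in zip(tilde_theta, g_theta)]

    # Unbiased gradient estimate of the instantaneous loss, then online SGD.
    s = s_next.detach()
    dl_ds = 2 * (s - target)          # gradient of the squared error
    scale = dl_ds @ tilde_s
    with torch.no_grad():
        for p, tt in zip(params, tilde_theta):
            p -= lr * scale * tt
```

Because only the current state, tilde_s, and tilde_theta are carried across steps, per-step memory and compute stay constant in time, which is what lets UORO match truncated BPTT's cost while remaining unbiased.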

Related research

05/23/2017 - Unbiasing Truncated Backpropagation Through Time
Truncated Backpropagation Through Time (truncated BPTT) is a widespread ...

07/28/2015 - Training recurrent networks online without backtracking
We introduce the "NoBackTrack" algorithm to train the parameters of dyna...

05/28/2018 - Approximating Real-Time Recurrent Learning with Random Kronecker Factors
Despite all the impressive advances of recurrent neural networks, sequen...

05/12/2020 - Convergence of Online Adaptive and Recurrent Optimization Algorithms
We prove local convergence of several notable gradient descent algorithms...

04/21/2016 - Stabilized Sparse Online Learning for Sparse Data
Stochastic gradient descent (SGD) is commonly used for optimization in l...

05/25/2023 - Online learning of long-range dependencies
Online learning holds the promise of enabling efficient long-term credit...

03/09/2021 - Scalable Online Recurrent Learning Using Columnar Neural Networks
Structural credit assignment for recurrent learning is challenging. An a...
