The impact of memory on learning sequence-to-sequence tasks

05/29/2022
by Alireza Seif, et al.

The recent success of neural networks in machine translation and other fields has drawn renewed attention to learning sequence-to-sequence (seq2seq) tasks. While there exists a rich literature that studies classification and regression using solvable models of neural networks, learning seq2seq tasks is significantly less studied from this perspective. Here, we propose a simple model for a seq2seq task that gives us explicit control over the degree of memory, or non-Markovianity, in the sequences – the stochastic switching-Ornstein-Uhlenbeck (SSOU) model. We introduce a measure of non-Markovianity to quantify the amount of memory in the sequences. For a minimal auto-regressive (AR) learning model trained on this task, we identify two learning regimes corresponding to distinct phases in the stationary state of the SSOU process. These phases emerge from the interplay between two different time scales that govern the sequence statistics. Moreover, we observe that while increasing the memory of the AR model always improves performance, increasing the non-Markovianity of the input sequences can improve or degrade performance. Finally, our experiments with recurrent and convolutional neural networks show that our observations carry over to more complicated neural network architectures.
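The abstract does not spell out the SSOU dynamics, but a minimal sketch consistent with its description is an Ornstein-Uhlenbeck process whose equilibrium position switches between two levels at Poisson-distributed times, paired with a least-squares auto-regressive predictor. The parameter names (theta, sigma, switch_rate, mean_levels) and the AR fitting routine below are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def simulate_ssou(n_steps, dt=0.01, theta=1.0, sigma=0.5,
                      switch_rate=0.1, mean_levels=(-1.0, 1.0), seed=0):
        """Euler-Maruyama sketch of a stochastic switching
        Ornstein-Uhlenbeck (SSOU) process: an OU process whose
        mean switches between two levels at a Poisson rate.
        Parameter values are illustrative, not from the paper."""
        rng = np.random.default_rng(seed)
        x = np.zeros(n_steps)
        level = 0  # index into mean_levels
        for t in range(1, n_steps):
            # Poisson switching of the OU equilibrium position
            if rng.random() < switch_rate * dt:
                level = 1 - level
            mu = mean_levels[level]
            # OU update: relaxation toward mu plus Gaussian noise
            x[t] = (x[t - 1] + theta * (mu - x[t - 1]) * dt
                    + sigma * np.sqrt(dt) * rng.standard_normal())
        return x

    def fit_ar(x, order):
        """Least-squares fit of a minimal AR(p) predictor:
        x[t] is regressed on the previous `order` values."""
        X = np.stack([x[i:len(x) - order + i] for i in range(order)], axis=1)
        y = x[order:]
        w, *_ = np.linalg.lstsq(X, y, rcond=None)
        return w

    # Example: generate one sequence and fit a short-memory AR model.
    # Increasing `order` corresponds to giving the AR model more memory.
    seq = simulate_ssou(n_steps=50_000)
    weights = fit_ar(seq, order=5)
    print(weights)

In this sketch the two time scales the abstract refers to would correspond to the OU relaxation time (1/theta) and the mean time between switches (1/switch_rate); their ratio is the natural knob for moving between the two regimes.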

