Learning to Transduce with Unbounded Memory

06/08/2015
by Edward Grefenstette, et al.

Recently, strong results have been demonstrated by Deep Recurrent Neural Networks on natural language transduction problems. In this paper we explore the representational power of these models using synthetic grammars designed to exhibit phenomena similar to those found in real transduction problems such as machine translation. These experiments lead us to propose new memory-based recurrent networks that implement continuously differentiable analogues of traditional data structures such as Stacks, Queues, and DeQues. We show that these architectures exhibit superior generalisation performance to Deep RNNs and are often able to learn the underlying generating algorithms in our transduction experiments.
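As a rough illustration of what a continuously differentiable stack can look like, the sketch below stores value vectors together with fractional strengths and lets real-valued push and pop signals in [0, 1] control how much is added or removed each step. This is only an illustrative NumPy sketch with assumed names (neural_stack_step, d_t, u_t), not the paper's exact formulation.

```python
import numpy as np

def neural_stack_step(values, strengths, v_t, d_t, u_t):
    """One step of a soft (differentiable-style) stack.

    values    : list of previously pushed value vectors
    strengths : array of fractional strengths in [0, 1], one per stored value
    v_t       : value vector to push this step
    d_t       : scalar push strength in [0, 1]
    u_t       : scalar pop strength in [0, 1]
    Returns the updated (values, strengths) and a read vector r_t.
    """
    # Pop: remove up to u_t of total strength, starting from the top.
    strengths = np.copy(strengths)
    remaining = u_t
    for i in reversed(range(len(strengths))):
        removed = min(remaining, strengths[i])
        strengths[i] -= removed
        remaining -= removed

    # Push: append the new value with strength d_t.
    values = values + [v_t]
    strengths = np.append(strengths, d_t)

    # Read: blend the topmost values until one unit of strength is consumed.
    r_t = np.zeros_like(v_t)
    remaining = 1.0
    for i in reversed(range(len(values))):
        weight = min(remaining, strengths[i])
        r_t += weight * values[i]
        remaining -= weight

    return values, strengths, r_t

# Example usage: push a vector with strength 0.9 and read back from the top.
values, strengths = [], np.zeros(0)
values, strengths, r = neural_stack_step(values, strengths,
                                         np.array([1.0, 0.0]), d_t=0.9, u_t=0.0)
```

Because the pop, push, and read operations are built from piecewise-linear combinations of the controller's signals, the whole structure can in principle be trained end to end by gradient descent, which is the property the paper exploits.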


Related research

05/10/2018 · Deep Neural Machine Translation with Weakly-Recurrent Units
Recurrent neural networks (RNNs) have represented for years the state of...

03/06/2018 · Learning Memory Access Patterns
The explosion in workload complexity and the recent slow-down in Moore's...

11/18/2016 · Variable Computation in Recurrent Neural Networks
Recurrent neural networks (RNNs) have been used extensively and with inc...

11/08/2019 · Memory-Augmented Recurrent Neural Networks Can Learn Generalized Dyck Languages
We introduce three memory-augmented Recurrent Neural Networks (MARNNs) a...

06/03/2020 · Learning Memory-Based Control for Human-Scale Bipedal Locomotion
Controlling a non-statically stable biped is a difficult problem largely...

02/22/2021 · Parallelizing Legendre Memory Unit Training
Recently, a new recurrent neural network (RNN) named the Legendre Memory...
