Learning to Transduce with Unbounded Memory
Recently, strong results have been demonstrated by Deep Recurrent Neural Networks on natural language transduction problems. In this paper we explore the representational power of these models using synthetic grammars designed to exhibit phenomena similar to those found in real transduction problems such as machine translation. These experiments lead us to propose new memory-based recurrent networks that implement continuously differentiable analogues of traditional data structures such as Stacks, Queues, and DeQues. We show that these architectures exhibit superior generalisation performance to Deep RNNs and are often able to learn the underlying generating algorithms in our transduction experiments.
Recurrent neural networks (RNNs) offer a compelling tool for processing natural language input in a straightforward sequential manner. Many natural language processing (NLP) tasks can be viewed as transduction problems, that is, learning to convert one string into another. Machine translation is a prototypical example of transduction, and recent results indicate that Deep RNNs have the ability to encode long source strings and produce coherent translations [1, 2]. While elegant, the application of RNNs to transduction tasks requires hidden layers large enough to store representations of the longest strings likely to be encountered, implying wastage on shorter strings and a strong dependency between the number of parameters in the model and its memory.

In this paper we use a number of linguistically-inspired synthetic transduction tasks to explore the ability of RNNs to learn long-range reorderings and substitutions. Further, inspired by prior work on neural network implementations of stack data structures [3]
, we propose and evaluate transduction models based on Neural Stacks, Queues, and DeQues (double-ended queues). Stack algorithms are well-suited to processing the hierarchical structures observed in natural language, and we hypothesise that their neural analogues will provide an effective and learnable transduction tool. Our models provide a middle ground between simple RNNs and the recently proposed Neural Turing Machine (NTM) [4], which implements a powerful random access memory with read and write operations. Neural Stacks, Queues, and DeQues also provide a logically unbounded memory while permitting efficient constant time push and pop operations.

Our results indicate that the models proposed in this work, and in particular the Neural DeQue, are able to consistently learn a range of challenging transductions. While Deep RNNs based on long short-term memory (LSTM) cells [1, 5] can learn some transductions when tested on inputs of the same length as seen in training, they fail to consistently generalise to longer strings. In contrast, our sequential memory-based algorithms are able to learn to reproduce the generating transduction algorithms, often generalising perfectly to inputs well beyond those encountered in training.

String transduction is central to many applications in NLP, from name transliteration and spelling correction, to inflectional morphology and machine translation. The most common approach leverages symbolic finite state transducers [6, 7], with approaches based on context free representations also being popular [8]. RNNs offer an attractive alternative to symbolic transducers due to their simple algorithms and expressive representations [9]. However, as we show in this work, such models are limited in their ability to generalise beyond their training data and have a memory capacity that scales with the number of their trainable parameters.
Previous work has touched on the topic of rendering discrete data structures such as stacks continuous, especially within the context of modelling pushdown automata with neural networks [10, 11, 3, 12]. We were inspired by the continuous pop and push operations of these architectures and the idea of an RNN controlling the data structure when developing our own models. The key difference is that our work adapts these operations to work within a recurrent continuous Stack/Queue/DeQue-like structure, the dynamics of which are fully decoupled from those of the RNN controlling it. In our models, the backwards dynamics are easily analysable in order to obtain the exact partial derivatives for use in error propagation, rather than having to approximate them as done in previous work.
In a parallel effort to ours, researchers are exploring the addition of memory to recurrent networks. The NTM and Memory Networks [4, 13, 14] provide powerful random access memory operations, whereas we focus on a more efficient and restricted class of models which we believe are sufficient for natural language transduction tasks. More closely related to our work, [15] have sought to develop a continuous stack controlled by an RNN. Note that this model—unlike the work proposed here—renders discrete push and pop operations continuous by “mixing” information across levels of the stack at each time step according to scalar push/pop action values. This means the model ends up compressing information in the stack, thereby limiting its use, as it effectively loses the unbounded memory nature of traditional symbolic models.
In this section, we present an extensible memory enhancement to recurrent layers which can be set up to act as a continuous version of a classical Stack, Queue, or DeQue (double-ended queue). We begin by describing the operations and dynamics of a neural Stack, before showing how to modify it to act as a Queue, and extend it to act as a DeQue.
Let a Neural Stack be a differentiable structure onto and from which continuous vectors are pushed and popped. Inspired by the neural pushdown automaton of
[3], we render these traditionally discrete operations continuous by letting push and pop operations be real values in the interval (0, 1). Intuitively, we can interpret these values as the degree of certainty with which some controller wishes to push a vector onto the stack, or pop the top of the stack.

\[ V_t[i] = \begin{cases} V_{t-1}[i] & \text{if } 1 \le i < t \\ v_t & \text{if } i = t \end{cases} \tag{1} \]

\[ s_t[i] = \begin{cases} \max\!\left(0,\, s_{t-1}[i] - \max\!\left(0,\, u_t - \sum_{j=i+1}^{t-1} s_{t-1}[j]\right)\right) & \text{if } 1 \le i < t \\ d_t & \text{if } i = t \end{cases} \tag{2} \]

\[ r_t = \sum_{i=1}^{t} \min\!\left(s_t[i],\, \max\!\left(0,\, 1 - \sum_{j=i+1}^{t} s_t[j]\right)\right) \cdot V_t[i] \tag{3} \]
Formally, a Neural Stack, fully parametrised by an embedding size m, is described at some timestep t by a value matrix V_t ∈ ℝ^{t×m} and a strength vector s_t ∈ ℝ^t. These form the core of a recurrent layer which is acted upon by a controller by receiving, from the controller, a value v_t ∈ ℝ^m, a pop signal u_t ∈ (0, 1), and a push signal d_t ∈ (0, 1). It outputs a read vector r_t ∈ ℝ^m. The recurrence of this layer comes from the fact that it will receive as previous state of the stack the pair (V_{t−1}, s_{t−1}), and produce as next state the pair (V_t, s_t), following the dynamics described below. Here, V_t[i] represents the ith row (an m-dimensional vector) of V_t, and s_t[i] represents the ith value of s_t.
Equation 1 shows the update of the value component of the recurrent layer state represented as a matrix, the number of rows of which grows with time, maintaining a record of the values pushed to the stack at each timestep (whether or not they are still logically on the stack). Values are appended to the bottom of the matrix (top of the stack) and never changed.
Equation 2 shows the effect of the push and pop signals in updating the strength vector s_{t−1} to produce s_t. First, the pop operation removes objects from the stack. We can think of the pop value u_t as the initial deletion quantity for the operation. We traverse the strength vector from the highest index to the lowest. If the next strength scalar is less than the remaining deletion quantity, it is subtracted from the remaining quantity and its value is set to 0. If the remaining deletion quantity is less than the next strength scalar, the remaining deletion quantity is subtracted from that scalar and deletion stops. Next, the push value d_t is set as the strength for the value added in the current timestep.
Equation 3 shows the dynamics of the read operation, which are similar to the pop operation. A fixed initial read quantity of 1 is set at the top of a temporary copy of the strength vector, which is traversed from the highest index to the lowest. If the next strength scalar is smaller than the remaining read quantity, its value is preserved for this operation and subtracted from the remaining read quantity. If not, it is temporarily set to the remaining read quantity, and the strength scalars of all lower indices are temporarily set to 0. The output of the read operation is the weighted sum of the rows of V_t, scaled by the temporary scalar values created during the traversal. An example of the stack read calculations across three timesteps, after pushes and pops as described above, is illustrated in Figure 1a. The third step shows how setting s_3[2] to 0 logically removes v_2 from the stack, and how it is ignored during the read.
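To make these dynamics concrete, the following is a minimal NumPy sketch of a single forward step of the neural Stack, implementing Equations 1–3 directly with explicit loops. The function and variable names are our own; this is an illustrative rendering of the equations, not the authors' implementation (which would additionally support batching and automatic differentiation).

```python
import numpy as np

def stack_step(V_prev, s_prev, v, u, d):
    """One forward step of the continuous neural Stack (Equations 1-3).

    V_prev: (t-1, m) matrix of previously pushed value vectors.
    s_prev: (t-1,) vector of their strengths.
    v:      (m,) value vector pushed this timestep.
    u, d:   pop and push signals, scalars in (0, 1).
    Returns the next state (V, s) and the read vector r.
    """
    # Equation 1: append the new value; existing rows are never modified.
    V = np.vstack([V_prev, v[None, :]])

    # Equation 2: the pop quantity u consumes strength from the top down.
    s = np.empty(len(s_prev) + 1)
    remaining_pop = u
    for i in reversed(range(len(s_prev))):
        s[i] = max(0.0, s_prev[i] - remaining_pop)
        remaining_pop = max(0.0, remaining_pop - s_prev[i])
    s[-1] = d  # the freshly pushed value enters with strength d

    # Equation 3: read a total quantity of 1 from the top down.
    r = np.zeros(V.shape[1])
    remaining_read = 1.0
    for i in reversed(range(len(s))):
        weight = min(s[i], remaining_read)
        r += weight * V[i]
        remaining_read = max(0.0, remaining_read - s[i])
    return V, s, r
```

Starting from an empty stack (V = np.zeros((0, m)), s = np.zeros(0)), repeated calls reproduce the kind of partial-strength states shown in the worked example of Figure 1a.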
This completes the description of the forward dynamics of a neural Stack, cast as a recurrent layer, as illustrated in Figure 1b. All operations described in this section are differentiable¹. The equations describing the backwards dynamics are provided in Appendix A of the supplementary materials.

¹The min and max functions are technically not differentiable at points where their arguments are equal. Following the work on rectified linear units [16], we arbitrarily take the partial derivative of the left argument in these cases.

A neural Queue operates the same way as a neural Stack, with the exception that the pop operation consumes strength from the lowest index of the strength vector, rather than the highest. This represents popping and reading from the front of the Queue rather than the top of the stack. These operations are described in Equations 4–5.

\[ s_t[i] = \begin{cases} \max\!\left(0,\, s_{t-1}[i] - \max\!\left(0,\, u_t - \sum_{j=1}^{i-1} s_{t-1}[j]\right)\right) & \text{if } 1 \le i < t \\ d_t & \text{if } i = t \end{cases} \tag{4} \]

\[ r_t = \sum_{i=1}^{t} \min\!\left(s_t[i],\, \max\!\left(0,\, 1 - \sum_{j=1}^{i-1} s_t[j]\right)\right) \cdot V_t[i] \tag{5} \]
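As a sketch (reusing the naming and NumPy import of the Stack snippet above), the only change needed to turn the Stack step into a Queue step is the traversal direction of the pop and read loops:

```python
def queue_step(V_prev, s_prev, v, u, d):
    """One forward step of the continuous neural Queue (Equations 4-5).

    Identical to stack_step except that pop and read consume strength
    from the front (lowest index) instead of the top (highest index).
    """
    V = np.vstack([V_prev, v[None, :]])

    s = np.empty(len(s_prev) + 1)
    remaining_pop = u
    for i in range(len(s_prev)):          # front-to-back traversal (Equation 4)
        s[i] = max(0.0, s_prev[i] - remaining_pop)
        remaining_pop = max(0.0, remaining_pop - s_prev[i])
    s[-1] = d

    r = np.zeros(V.shape[1])
    remaining_read = 1.0
    for i in range(len(s)):               # read also starts at the front (Equation 5)
        weight = min(s[i], remaining_read)
        r += weight * V[i]
        remaining_read = max(0.0, remaining_read - s[i])
    return V, s, r
```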
A neural DeQue operates like a neural Stack, except that it takes a push, pop, and value as input for both “ends” of the structure (which we call top and bot), and outputs a read for both ends. We write v_t^{top} and v_t^{bot} instead of v_t, u_t^{top} and u_t^{bot} instead of u_t, and so on. The state, V_t and s_t, are now a 2t × m matrix and a 2t-dimensional vector, respectively. At each timestep, a pop from the top is followed by a pop from the bottom of the DeQue, followed by the pushes and reads. The dynamics of a DeQue, which unlike a neural Stack or Queue “grows” in two directions, are described in Equations 6–11, below. Equations 7–9 decompose the strength vector update into three steps purely for notational clarity.
\[ V_t[i] = \begin{cases} v_t^{bot} & \text{if } i = 1 \\ V_{t-1}[i-1] & \text{if } 1 < i < 2t \\ v_t^{top} & \text{if } i = 2t \end{cases} \tag{6} \]

\[ s_t^{top}[i] = \max\!\left(0,\, s_{t-1}[i] - \max\!\left(0,\, u_t^{top} - \sum_{j=i+1}^{2(t-1)} s_{t-1}[j]\right)\right) \quad \text{for } 1 \le i \le 2(t-1) \tag{7} \]

\[ s_t^{both}[i] = \max\!\left(0,\, s_t^{top}[i] - \max\!\left(0,\, u_t^{bot} - \sum_{j=1}^{i-1} s_t^{top}[j]\right)\right) \quad \text{for } 1 \le i \le 2(t-1) \tag{8} \]

\[ s_t[i] = \begin{cases} d_t^{bot} & \text{if } i = 1 \\ s_t^{both}[i-1] & \text{if } 1 < i < 2t \\ d_t^{top} & \text{if } i = 2t \end{cases} \tag{9} \]

\[ r_t^{top} = \sum_{i=1}^{2t} \min\!\left(s_t[i],\, \max\!\left(0,\, 1 - \sum_{j=i+1}^{2t} s_t[j]\right)\right) \cdot V_t[i] \tag{10} \]

\[ r_t^{bot} = \sum_{i=1}^{2t} \min\!\left(s_t[i],\, \max\!\left(0,\, 1 - \sum_{j=1}^{i-1} s_t[j]\right)\right) \cdot V_t[i] \tag{11} \]
To summarise, a neural DeQue acts like two neural Stacks operated on in tandem, except that the pushes and pops from one end may eventually affect pops and reads on the other, and vice versa.
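A sketch of one DeQue step, under the same assumptions and naming conventions as the Stack and Queue snippets above, makes this two-stacks-in-tandem behaviour explicit:

```python
def deque_step(V_prev, s_prev, v_top, v_bot, u_top, u_bot, d_top, d_bot):
    """One forward step of the continuous neural DeQue (Equations 6-11)."""
    n = len(s_prev)

    # Equation 7: pop from the top end (highest index first).
    s_top = np.empty(n)
    remaining = u_top
    for i in reversed(range(n)):
        s_top[i] = max(0.0, s_prev[i] - remaining)
        remaining = max(0.0, remaining - s_prev[i])

    # Equation 8: then pop from the bottom end (lowest index first).
    s_both = np.empty(n)
    remaining = u_bot
    for i in range(n):
        s_both[i] = max(0.0, s_top[i] - remaining)
        remaining = max(0.0, remaining - s_top[i])

    # Equations 6 and 9: the structure grows by one row at each end.
    V = np.vstack([v_bot[None, :], V_prev, v_top[None, :]])
    s = np.concatenate(([d_bot], s_both, [d_top]))

    # Equations 10-11: read a quantity of 1 inwards from each end.
    def read(from_top):
        r, remaining = np.zeros(V.shape[1]), 1.0
        indices = reversed(range(len(s))) if from_top else range(len(s))
        for i in indices:
            weight = min(s[i], remaining)
            r += weight * V[i]
            remaining = max(0.0, remaining - s[i])
        return r

    return V, s, read(from_top=True), read(from_top=False)
```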
While the three memory modules described can be seen as recurrent layers, with the operations used to produce the next state and output from the input and previous state being fully differentiable, they contain no tunable parameters to optimise during training. As such, they need to be attached to a controller in order to be used for any practical purpose. In exchange, they offer an extensible memory, the logical size of which is unbounded and decoupled from both the nature and parameters of the controller, and from the size of the problem they are applied to. Here, we describe how any RNN controller may be enhanced by a neural Stack, Queue, or DeQue.
We begin by giving the case where the memory is a neural Stack, as illustrated in Figure 1c. Here we wish to replicate the overall ‘interface’ of a recurrent layer (as seen from outside the dotted lines), which takes the previous recurrent state and an input vector i_t, and transforms them to return the next recurrent state and an output vector o_t. In our setup, the previous state of the recurrent layer will be the tuple (H_{t−1}, r_{t−1}, (V_{t−1}, s_{t−1})), where H_{t−1} is the previous state of the RNN, r_{t−1} is the previous stack read, and (V_{t−1}, s_{t−1}) is the previous state of the stack as described above. With the exception of H_0, which is initialised randomly and optimised during training, all other initial states, r_0 and (V_0, s_0), are set to 0-valued vectors/matrices and not updated during training.
The overall input i_t is concatenated with the previous read r_{t−1} and passed to the RNN controller as input along with the previous controller state H_{t−1}. The controller outputs its next state H_t and a controller output o'_t, from which we obtain the push and pop scalars d_t and u_t and the value vector v_t, which are passed to the stack, as well as the network output o_t:

\[ d_t = \operatorname{sigmoid}(W_d\, o'_t + b_d) \qquad u_t = \operatorname{sigmoid}(W_u\, o'_t + b_u) \]
\[ v_t = \tanh(W_v\, o'_t + b_v) \qquad o_t = \tanh(W_o\, o'_t + b_o) \]

where W_d and W_u are vector-to-scalar projection matrices, and b_d and b_u are their scalar biases; W_v and W_o are vector-to-vector projections, and b_v and b_o are their vector biases, all randomly initialised and then tuned during training. Along with the previous stack state (V_{t−1}, s_{t−1}), the stack operations d_t and u_t and the value v_t are passed to the neural stack to obtain the next read r_t and next stack state (V_t, s_t), which are packed into a tuple with the controller state H_t to form the next state of the overall recurrent layer. The output vector o_t serves as the overall output of the recurrent layer. The structure described here can be adapted to control a neural Queue instead of a stack by substituting one memory module for the other.
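This wiring can be sketched in PyTorch as follows. The class and attribute names are our own, `neural_stack_step` stands in for a batched, differentiable implementation of Equations 1–3, and the output dimensionality is an arbitrary choice here; this illustrates the interface rather than reproducing the authors' code.

```python
import torch
import torch.nn as nn

class StackEnhancedLSTMCell(nn.Module):
    """An LSTM controller wrapped around a neural Stack (illustrative sketch)."""

    def __init__(self, input_size, hidden_size, stack_embedding_size, output_size):
        super().__init__()
        m = stack_embedding_size
        self.rnn = nn.LSTMCell(input_size + m, hidden_size)
        self.W_d = nn.Linear(hidden_size, 1)            # push scalar d_t
        self.W_u = nn.Linear(hidden_size, 1)            # pop scalar u_t
        self.W_v = nn.Linear(hidden_size, m)            # value vector v_t
        self.W_o = nn.Linear(hidden_size, output_size)  # network output o_t

    def forward(self, x, prev_read, controller_state, stack_state):
        # Concatenate the input with the previous stack read r_{t-1}.
        h, c = self.rnn(torch.cat([x, prev_read], dim=-1), controller_state)
        d = torch.sigmoid(self.W_d(h))
        u = torch.sigmoid(self.W_u(h))
        v = torch.tanh(self.W_v(h))
        o = torch.tanh(self.W_o(h))
        # Hypothetical helper implementing Equations 1-3 with autograd support.
        read, stack_state = neural_stack_step(stack_state, v, u, d)
        return o, read, (h, c), stack_state
```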
The only additional trainable parameters in either configuration, relative to a non-enhanced RNN, are the projections for the input concatenated with the previous read into the RNN controller, and the projections from the controller output into the various Stack/Queue inputs, described above. In the case of a DeQue, both the top read r^{top} and bottom read r^{bot} must be preserved in the overall state. They are both concatenated with the input to form the input to the RNN controller. The output of the controller must have additional projections to output push/pop operations and values for the bottom of the DeQue. This roughly doubles the number of additional tunable parameters “wrapping” the RNN controller, compared to the Stack/Queue case.
In every experiment, integer-encoded source and target sequence pairs are presented to the candidate model as a batch of single joint sequences. The joint sequence starts with a start-of-sequence (SOS) symbol and ends with an end-of-sequence (EOS) symbol, with a separator symbol between the source and target sequences. Integer-encoded symbols are converted to dense embeddings via an embedding matrix, which is randomly initialised and tuned during training. Separate word-to-index mappings are used for source and target vocabularies. Separate embedding matrices are used to encode input and output (predicted) embeddings.
The aim of each of the following tasks is to read an input sequence, and generate as target sequence a transformed version of the source sequence, followed by an EOS symbol. Source sequences are randomly generated from a vocabulary of meaningless symbols. The length of each training source sequence is uniformly sampled from the range [8, 64], and each symbol in the sequence is drawn with replacement from a uniform distribution over the source vocabulary (ignoring SOS, EOS, and separator).
A deterministic task-specific transformation, described for each task below, is applied to the source sequence to yield the target sequence. As the target sequences are entirely determined by the source sequences, the space of distinct training sequences for each task is extremely large, and training examples are sampled from this space through the random generation of source sequences. Before each training or test sequence is presented to the model, the SOS symbol is prepended to the source sequence, which is concatenated with a separator symbol and the target sequence, to which the EOS symbol is appended. A sketch of this joint-sequence construction is shown below.
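A minimal sketch of the construction just described; the reserved symbol ids and helper names are our own, while the length range follows the description above:

```python
import random

SOS, SEP, EOS = 0, 1, 2          # assumed reserved symbol ids
FIRST_WORD = 3                   # source vocabulary starts after the reserved ids

def make_joint_sequence(transform, vocab_size, min_len=8, max_len=64):
    """Build one joint sequence: SOS, source, SEP, transform(source), EOS."""
    length = random.randint(min_len, max_len)
    source = [random.randrange(FIRST_WORD, FIRST_WORD + vocab_size)
              for _ in range(length)]
    return [SOS] + source + [SEP] + transform(source) + [EOS]
```

Here transform is whichever deterministic task-specific transformation is described below (copying, reversal, bigram flipping, or an ITG-derived mapping).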
Sequence Copying: the source sequence is copied to form the target sequence. Sequences have the form:

a_1 a_2 … a_k → a_1 a_2 … a_k
Sequence Reversal: the source sequence is deterministically reversed to produce the target sequence. Sequences have the form:

a_1 a_2 … a_k → a_k … a_2 a_1
Bigram Flipping: the source side is restricted to even-length sequences. The target is produced by swapping, for each odd source sequence index i, the ith symbol with the (i+1)th symbol. Sequences have the form:

a_1 a_2 a_3 a_4 … a_{k−1} a_k → a_2 a_1 a_4 a_3 … a_k a_{k−1}

The following tasks examine how well models can approach sequence transduction problems where the source and target sequence are jointly generated by Inversion Transduction Grammars (ITGs) [8], a subclass of Synchronous Context-Free Grammars [17] often used in machine translation [18]. We present two simple ITG-based datasets with interesting linguistic properties and their underlying grammars. We show these grammars in Table 1, in Appendix C
of the supplementary materials. For each synchronised nonterminal, an expansion is chosen according to the probability distribution specified by the rule probability at the beginning of each rule. For each grammar, ‘A’ is always the root of the ITG tree.

We tuned the generative probabilities for recursive rules by hand so that the grammars generate left and right sequences of lengths 8 to 128 with a relatively uniform distribution. We generate training data by rejecting samples that are outside of the range [8, 64], and testing data by rejecting samples outside of the range [65, 128]. For terminal symbol-generating rules, we balance the classes so that each terminal-generating nonterminal ‘X’ generates a vocabulary of approximately equal size, and each vocabulary word under that class is equiprobable. These design choices were made to maximise the similarity between the experimental settings of the ITG tasks described here and the synthetic tasks described above.
SVO to SOV: a persistent challenge in machine translation is to learn to faithfully reproduce high-level syntactic divergences between languages. For instance, when translating an English sentence with a non-finite verb into German, a transducer must locate and move the verb over the object to the final position. We simulate this phenomenon with a synchronous grammar which generates strings exhibiting verb movements. To add an extra challenge, we also simulate simple relative clause embeddings to test the models’ ability to transduce in the presence of unbounded recursive structures.
A sample output of the grammar is presented here, with spaces between words being included for stylistic purposes, and where s, o, and v indicate subject, object, and verb terminals respectively, i and o mark input and output, and rp indicates a relative pronoun:
Gender Conjugation: we design a small grammar to simulate translations from a language with gender-free articles to one with gender-specific definite and indefinite articles. A real-world example of such a translation would be from English (the, a) to German (der/die/das, ein/eine/ein).
The grammar simulates simple sentences in which every noun phrase can expand into an arbitrarily long sequence of nouns joined by a conjunction. Each noun in the source language has a neutral definite or indefinite article. The matching word in the target language then needs to be preceded by its appropriate article. A sample output of the grammar is presented here, with spaces between words being included for stylistic purposes:
For each task, test data is generated through the same procedure as training data, with the key difference that the length of the source sequence is sampled from the range [65, 128]. As a result of this change, we are not only assured that the models cannot observe any test sequences during training, but are also measuring how well the sequence transduction capabilities of the evaluated models generalise beyond the sequence lengths observed during training. To control for generalisation ability, we also report accuracy scores on sequences separately sampled from the training set, which, given the size of the sample space, are unlikely to have been observed during actual model training.
For each round of testing, we sample 1000 sequences from the appropriate test set. For each sequence, the model reads in the source sequence and separator symbol, and generates each subsequent symbol by taking the maximally likely symbol from the softmax distribution over target symbols produced by the model at each step. Based on this process, we give each model a coarse accuracy score, corresponding to the proportion of test sequences correctly predicted from beginning until end (EOS symbol) without error, as well as a fine accuracy score, corresponding to the average proportion of each sequence correctly generated before the first error. Formally, we have:

\[ \text{coarse} = \frac{c}{N} \qquad\qquad \text{fine} = \frac{1}{N} \sum_{i=1}^{N} \frac{k_i}{l_i} \]

where c and N are the number of correctly predicted sequences (end-to-end) and the total number of sequences in the test batch (1000 in this experiment), respectively; k_i is the number of correctly predicted symbols before the first error in the ith sequence of the test batch, and l_i is the length of the target segment of that sequence (including the EOS symbol).
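A sketch of these two scores over a batch of greedy decodes (function and variable names are our own):

```python
def coarse_and_fine_accuracy(predictions, targets):
    """predictions, targets: lists of target-side symbol sequences
    (each ending in EOS). Returns (coarse, fine) as defined above."""
    exact, partial = 0, 0.0
    for pred, gold in zip(predictions, targets):
        prefix = 0
        for p, g in zip(pred, gold):
            if p != g:
                break
            prefix += 1
        exact += int(prefix == len(gold) == len(pred))  # perfect end-to-end
        partial += prefix / len(gold)                   # proportion before first error
    return exact / len(targets), partial / len(targets)
```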
For each task, we use as benchmarks the Deep LSTMs described in [1], with 1, 2, 4, and 8 layers. Against these benchmarks, we evaluate neural Stack-, Queue-, and DeQue-enhanced LSTMs. When running experiments, we trained and tested a version of each model where all LSTMs have a hidden layer size of 256, and one where they have a hidden layer size of 512. The Stack/Queue/DeQue embedding size was arbitrarily set to 256, half the maximum hidden size. The number of parameters for each model is reported for each architecture in Table 2 of the appendix. Concretely, the neural Stack-, Queue-, and DeQue-enhanced LSTMs have the same number of trainable parameters as a two-layer Deep LSTM. These all come from the extra connections to and from the memory module, which itself has no trainable parameters, regardless of its logical size.
Models are trained with minibatch RMSProp [19], with a batch size of 10. We grid-searched over a small set of learning rates. We used gradient clipping [20], clipping all gradients above a fixed threshold. Average training perplexity was calculated every 100 batches. Training and test set accuracies were recorded every 1000 batches.
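As a sketch, this training configuration maps onto standard PyTorch utilities as below. The learning rate and clipping threshold shown are assumptions (the paper grid-searched the former), `model` and `batches` are hypothetical, and norm-based clipping is used here as one common realisation of gradient clipping.

```python
import torch

# lr is an assumed placeholder; the paper grid-searched learning rates.
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3)

for batch in batches:                                  # batch size 10
    loss = model.loss(batch)                           # hypothetical loss helper
    optimizer.zero_grad()
    loss.backward()
    # Assumed threshold; the paper clips gradients above a fixed value.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
```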

Because of the impossibility of overfitting the datasets, we let the models train an unbounded number of steps, and report results at convergence. We present in Figure 2a the coarse- and fine-grained accuracies, for each task, of the best model of each architecture described in this paper alongside the best performing Deep LSTM benchmark. The best models were automatically selected based on average training perplexity. The LSTM benchmarks performed similarly across the range of random initialisations, so the effect of this procedure is primarily to select the better performing Stack/Queue/DeQue-enhanced LSTM. In most cases, this procedure does not yield the actual best-performing model, and in practice a more sophisticated procedure such as ensembling [21] should produce better results.
For all experiments, the Neural Stack or Queue outperforms the Deep LSTM benchmarks, often by a significant margin. For most experiments, if a Neural Stack- or Queue-enhanced LSTM learns to partially or consistently solve the problem, then so does the Neural DeQue. For experiments where the enhanced LSTMs solve the problem completely (consistent accuracy of 1) in training, the accuracy persists in longer sequences in the test set, whereas benchmark accuracies drop for all experiments except the SVO to SOV and Gender Conjugation ITG transduction tasks. Across all tasks which the enhanced LSTMs solve, convergence on the top accuracy happens orders of magnitude earlier for enhanced LSTMs than for benchmark LSTMs, as exemplified in Figure 2b.
The results for the sequence inversion and copying tasks serve as unit tests for our models, as the controller mainly needs to learn to push the appropriate number of times and then pop continuously. Nonetheless, the failure of Deep LSTMs to learn such a regular pattern and generalise is itself indicative of the limitations of the benchmarks presented here, and of the relative expressive power of our models. Their ability to generalise perfectly to sequences up to twice as long as those attested during training is also notable, and also attested in the other experiments. Finally, this pair of experiments illustrates how while the neural Queue solves copying and the Stack solves reversal, a simple LSTM controller can learn to operate a DeQue as either structure, and solve both tasks.
The results of the Bigram Flipping task for all models are consistent with the failure to consistently correctly generate the last two symbols of the sequence. We hypothesise that both Deep LSTMs and our models economically learn to pairwise flip the sequence tokens, and attempt to do so half the time when reaching the EOS token. For the two ITG tasks, the success of Deep LSTM benchmarks relative to their performance in other tasks can be explained by their ability to exploit short local dependencies dominating the longer dependencies in these particular grammars.
Overall, the rapid convergence, where possible, on a general solution to a transduction problem, in a manner which propagates to longer sequences without loss of accuracy, indicates that an unbounded memory-enhanced controller can learn to solve these problems procedurally, rather than memorising the underlying distribution of the data.
The experiments performed in this paper demonstrate that single-layer LSTMs enhanced by an unbounded differentiable memory capable of acting, in the limit, like a classical Stack, Queue, or DeQue, are capable of solving sequence-to-sequence transduction tasks on which Deep LSTMs falter. Even in tasks for which benchmarks obtain high accuracies, the memory-enhanced LSTMs converge earlier, and to higher accuracies, while requiring considerably fewer parameters than all but the simplest of Deep LSTMs. We therefore believe these constitute a crucial addition to our neural network toolbox, and that more complex linguistic transduction tasks such as machine translation or parsing will be rendered more tractable by their inclusion.
We thank Alex Graves, Demis Hassabis, Tomáš Kočiský, Tim Rocktäschel, Sam Ritter, Geoff Hinton, Ilya Sutskever, Chris Dyer, and many others for their helpful comments.
[16] Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 807–814, 2010.

We describe here the backwards dynamics of the neural Stack by examining the relevant partial derivatives of the outputs with respect to the inputs, as defined in Equations 1–3. We use δ_{ij} to indicate the Kronecker delta (δ_{ij} = 1 if i = j, and 0 otherwise). The equations below hold for any valid row numbers i and j.
(12) 
(13) 
(14) 
(15)  
(16)  
(17)  
(18) 
All partial derivatives other than those obtained by the chain rule for derivatives can be assumed to be 0. The backwards dynamics for neural Queues and DeQues can be similarly derived from Equations 4–11.

During initial experiments with the continuous stack presented in this paper, we noted that the stack’s ability to learn the solution to the transduction tasks detailed here varied greatly with the random initialisation of the controller. This initially required us to restart training with different random seeds to obtain behaviour consistent with the learning of an algorithmic solution (i.e. a rapid drop in validation perplexity after a small number of iterations).
Analysis of the backwards dynamics presented in Section A demonstrates that error on push and pop decisions is a function of read error “carried” back through time by the vectors on the stack (cf. Equation 14 and Equations 17–18), which is accumulated as the vectors placed onto the stack by a push, or retained after a pop, are read at further timesteps. Crucially, this means that if the controller operating the stack is initially biased in favour of popping over pushing (i.e. the pop signal typically matches or exceeds the push signal), vectors are likely to be removed from the stack the timestep after they were pushed, resulting in the continuous stack being used as an extra recurrent hidden layer, rather than as something behaving like a classical stack.
The consequence of this is that the gradient for the decision to push at time t only comes via the hidden state of the controller at time t + 1, so for problems where the vector would ideally have been preserved on the stack until some later time, the signal encouraging the controller to push with higher certainty is unlikely to be propagated back if the RNN controller suffers from vanishing gradient issues. Likewise, the gradient for the decision to pop is 0 (as each pop empties the stack). We conclude that underusing the memory in such a way makes its proper manipulation hard for the controller to learn.
Conversely, overusing the stack (even incorrectly) means that gradient obtained with regard to the (mis)use is properly communicated, as the pop gradient (Equation 17) will not be zero for all timesteps. Additionally, the (non-vanishing) gradient propagated through the stack state (Equation 12) will allow the decision to push at some timestep to be rewarded or penalised based on reads at some much later time. These remarks also apply to the continuous queue and double-ended queue.
Since in our setting the decision to push and pop is produced by taking a biased linear transform of an RNN hidden state followed by a componentwise sigmoid operation, we hypothesised, based on the above analysis, that initialising the bias for popping to a negative number would solve the variance issue described above. We tested this on short sequences of the copy task, and found that a small negative bias produced the desired algorithmic behaviour of the stack-enhanced controller across all seeds tested. Setting this initialisation policy for the controller across all experiments allowed us to reproduce the results reported in the paper without the need for repeated reinitialisation. We recommend that other controller implementations provide similar trainable biases for the decision to pop, and initialise them following this policy (and likewise for controllers operating the other continuous data structures presented in this paper), as sketched below.
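A sketch of this initialisation, reusing the StackEnhancedLSTMCell naming from the controller sketch above; the exact magnitude of the negative bias is an assumption:

```python
cell = StackEnhancedLSTMCell(input_size=8, hidden_size=256,
                             stack_embedding_size=128, output_size=8)
with torch.no_grad():
    # Bias the pop decision negative so the controller initially pushes
    # more than it pops, keeping vectors on the stack long enough for
    # useful gradient signal to accumulate.
    cell.W_u.bias.fill_(-1.0)  # assumed magnitude; any small negative value
```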


We present here, in Table 1, the inversion transduction grammars described in Section 4.2. Sets of terminal-generating rules are indicated by the form ‘X → …’, with the generated vocabulary balanced across classes and of a size similar to that of the other experiments.

We show, in Table 2, the number of parameters per model, for all models used in the experiments of the paper.
Table 2

Model           Hidden layer size
                256         512
1-layer LSTM
2-layer LSTM
4-layer LSTM
8-layer LSTM
Stack-LSTM
Queue-LSTM
DeQue-LSTM
We show in Table 3 the full results for each task of the best performing models. The procedure for selecting the best performing model is described in Section 5.
Table 3

                                        Training            Testing
Experiment            Model             Coarse    Fine      Coarse    Fine
Sequence Copying      1-layer LSTM
                      2-layer LSTM
                      4-layer LSTM
                      8-layer LSTM
                      Stack-LSTM
                      Queue-LSTM
                      DeQue-LSTM
Sequence Reversal     1-layer LSTM
                      2-layer LSTM
                      4-layer LSTM
                      8-layer LSTM
                      Stack-LSTM
                      Queue-LSTM
                      DeQue-LSTM
Bigram Flipping       1-layer LSTM
                      2-layer LSTM
                      4-layer LSTM
                      8-layer LSTM
                      Stack-LSTM
                      Queue-LSTM
                      DeQue-LSTM
SVO to SOV            1-layer LSTM
                      2-layer LSTM
                      4-layer LSTM
                      8-layer LSTM
                      Stack-LSTM
                      Queue-LSTM
                      DeQue-LSTM
Gender Conjugation    1-layer LSTM
                      2-layer LSTM
                      4-layer LSTM
                      8-layer LSTM
                      Stack-LSTM
                      Queue-LSTM
                      DeQue-LSTM