1 Introduction
Transformer networks Vaswani et al. (2017) are sequence models that rely on the attention mechanism Bahdanau et al. (2015) to capture long-term dependencies. Since their introduction in the context of machine translation, they have been applied to many natural language processing tasks, such as language modeling Al-Rfou et al. (2019) or sentence representation Devlin et al. (2019). On most of them, they now surpass the former state-of-the-art models based on recurrent Hochreiter and Schmidhuber (1997) or convolutional networks Dauphin et al. (2017). At their core, transformers use a self-attention layer that forms a representation of the current input by gathering the most relevant information from its context. This layer is repeated along the network depth, allowing information to flow over long distances and to form rich sequence representations. The self-attention mechanism is often considered the key component of their success, and many have worked on improving transformers by increasing the size of the context captured by those layers Wu et al. (2019); Dai et al. (2019); Sukhbaatar et al. (2019).

However, self-attention layers are not the only component of transformer networks, and they do not explain the effectiveness of transformers by themselves. Each of these layers is followed by a feed-forward layer. These feed-forward layers contain most of the parameters of the model, which suggests that their role is probably as important as the self-attention mechanism. In fact, the transformer layer, i.e., the sequence of self-attention and feed-forward sublayers, should be regarded as a single mechanism that gathers information from the context and transforms it into a rich representation.
In this work, we improve the transformer architecture by revisiting its mechanism, while keeping its properties. We introduce a new layer that merges the self-attention and feed-forward sublayers into a single unified attention layer, as illustrated in Figure 1. As opposed to the two-step mechanism of the transformer layer, it directly builds its representation from the context and a persistent memory block, without going through a feed-forward transformation. The additional persistent memory block stores, in the form of key-value vectors, information that does not depend on the context. In terms of parameters, these persistent key-value vectors replace the feed-forward sublayer. This modification dramatically simplifies the structure of the network with no loss of performance.
We evaluate the resulting architecture on standard word-level and character-level language modeling benchmarks and report performance that is competitive with transformers.
2 Related work
Neural language modeling.
Different network architectures have been proposed for language modeling, such as feed-forward networks Bengio et al. (2003a), recurrent networks Mikolov et al. (2010), gated convolutional networks Dauphin et al. (2017) and transformer networks Vaswani et al. (2017). Of particular interest, Al-Rfou et al. (2019) apply deep transformers to character-level language modeling. Dai et al. (2019) introduce a caching mechanism, relying on the relative position embeddings from Shaw et al. (2018), which makes inference in these models much more efficient for unbounded sequences. More recently, Sukhbaatar et al. (2019) add a learnable self-attention span to extend the size of the context.
Word-level language models deal with large vocabularies, and computing the most probable word is computationally demanding. Solutions are either to replace the softmax loss with an approximation Goodman (2001); Morin and Bengio (2005), to sample from the vocabulary during training Bengio et al. (2003b); Jozefowicz et al. (2016), or to include subword units Sennrich et al. (2016). A simple yet effective solution is to replace the loss by a hierarchical softmax designed to better take advantage of GPU specificities Grave et al. (2017a).
Finally, many works focus on the regularization of large language models. In particular, Zaremba et al. (2014) show that dropout Srivastava et al. (2014) is effective for recurrent networks. More recently, Press and Wolf (2017) show that tying the embedding and classifier weights significantly improves generalization. Baevski and Auli (2019) further show that combining this regularization technique with the adaptive softmax of Grave et al. (2017a) reduces the memory footprint of a transformer while improving its performance.

Attention-based models.
The attention mechanism was first introduced in the context of mixtures of experts by Jordan and Jacobs (1994). It is only recently that Bahdanau et al. (2015) showed its potential when used in neural networks, in the context of machine translation. Since then, this mechanism has been commonly incorporated into many models, with applications in natural language processing and computer vision beyond transformers.

Sukhbaatar et al. (2015) apply the attention mechanism to the same sequence, i.e., the so-called self-attention, in an autoregressive model called an end-to-end memory network, and show its potential in the context of language modeling. Graves et al. (2014) use the attention mechanism for reading from and writing to internal memory to solve algorithmic tasks. Vinyals et al. (2015) combine this self-attention mechanism with a recurrent network to solve simple algorithmic problems. Later, Merity et al. (2017) show that these networks can be used as language models if combined with a cache mechanism Grave et al. (2017b). The attention mechanism has also been applied to question answering (Miller et al., 2016) and image captioning (Xu et al., 2015). Finally, Shazeer et al. (2017) use the attention mechanism as a mixture of experts in a recurrent network.

3 Transformer layer
A transformer model is made of a stack of identical layers, called transformer layers. Each layer is composed of a multi-head self-attention sublayer followed by a feed-forward sublayer. Each sublayer is also followed by an add-norm operation, i.e., a skip connection He et al. (2016) and layer normalization Lei Ba et al. (2016). In this section, we review the structure of the transformer layer and refer the reader to Vaswani et al. (2017) for additional details of the overall model.
Multi-head self-attention sublayer.
A core mechanism of a transformer network is the multi-head self-attention layer, which consists of multiple attention heads applied in parallel. Each attention head applies the attention mechanism of Bahdanau et al. (2015) to an input sequence of vectors. More formally, given a sequence $x_1, \dots, x_T$ of $d$-dimensional input vectors, each head applies two linear transformations to these vectors to form the key and value vectors:

$$k_t = W_k x_t, \qquad (1)$$
$$v_t = W_v x_t, \qquad (2)$$

where $W_k$ and $W_v$ are the "key" and "value" matrices of size $d_h \times d$, where $d_h = d/H$ is the dimension of a head and $H$ is the number of heads. The key vectors are then used to compute a similarity score between an element $t$ of the input sequence and all the elements of its context $C_t$. The context can be, for instance, the elements of the sequence that precede $t$ in the case of language modeling, or the whole sequence in the encoder for machine translation. The similarity score between $t$ and an element $c$ of its context $C_t$ is defined as

$$s_{tc} = x_t^\top W_q^\top \left( W_k x_c + p(t, c) \right), \qquad (3)$$

where $W_q$ is the "query" matrix and $p(t, c)$ is a position encoding function. There are several ways to encode positions: fixed absolute (Vaswani et al., 2017), learned absolute (Al-Rfou et al., 2019), and learned relative (Sukhbaatar et al., 2015; Shaw et al., 2018). The relative position encoding function improves efficiency for unbounded sequences, making it useful for language modeling Dai et al. (2019). In this paper, we thus use the relative position encoding defined as $p(t, c) = u_{t-c}$, where the $u_i$ are position embeddings learned during training. The head then outputs a vector $y_t$ by taking the average of the context representations weighted by attention weights $a_{tc}$, obtained by applying a softmax function to the similarity scores:

$$y_t = \sum_{c \in C_t} a_{tc} \left( W_v x_c + p(t, c) \right), \quad \text{where} \quad a_{tc} = \frac{\exp(s_{tc})}{\sum_{i \in C_t} \exp(s_{ti})}. \qquad (4)$$

Note that one can use different position encoding functions for the key and value sides. Finally, the outputs from the different heads are concatenated for each timestep $t$ and multiplied by the "output" matrix $W_o$. The final output of this sublayer is thus a sequence of $T$ vectors of dimension $d$.
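To make Eqs. (1)-(4) concrete, here is a minimal sketch of a single attention head with relative position embeddings on a causal (language-modeling) context. All names (`attention_head`, `pos_emb`, etc.) are our own; this is an illustration of the equations, not the authors' implementation, and it omits details such as score scaling.

```python
import numpy as np

def attention_head(X, Wq, Wk, Wv, pos_emb):
    """One attention head, sketching Eqs. (1)-(4).
    X: (T, d) inputs; Wq, Wk, Wv: (dh, d) projections;
    pos_emb: (T, dh), pos_emb[t - c] is the relative position
    embedding u_{t-c}, used on both key and value sides here."""
    T, d = X.shape
    Q = X @ Wq.T          # queries
    K = X @ Wk.T          # keys,   Eq. (1)
    V = X @ Wv.T          # values, Eq. (2)
    Y = np.zeros_like(Q)
    for t in range(T):
        ctx = np.arange(t + 1)                    # causal context C_t
        rel = pos_emb[t - ctx]                    # p(t, c) = u_{t-c}
        s = Q[t] @ (K[ctx] + rel).T               # scores, Eq. (3)
        a = np.exp(s - s.max()); a /= a.sum()     # softmax weights
        Y[t] = a @ (V[ctx] + rel)                 # output, Eq. (4)
    return Y
```

In a multi-head layer, $H$ such heads run in parallel and their outputs are concatenated and multiplied by $W_o$.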
Feed-forward sublayer.
The second element of a transformer layer is a fully connected feed-forward layer. This sublayer is applied to each position in the input sequence independently, and consists of two affine transformations with a pointwise nonlinear function in between:

$$\mathrm{FF}(x_t) = U \, \sigma(V x_t + b) + c, \qquad (5)$$

where $\sigma(x) = \max(0, x)$ is the ReLU activation function; $V$ and $U$ are matrices of dimension $d_f \times d$ and $d \times d_f$ respectively; $b$ and $c$ are the bias terms. Typically, $d_f$ is set to be 4 times larger than $d$.
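Eq. (5) can be sketched in a few lines; the variable names follow our notation above and are not from the authors' code.

```python
import numpy as np

def feedforward(x, U, V, b, c):
    """Position-wise feed-forward sublayer, Eq. (5): two affine
    transformations with a ReLU in between.
    V: (d_f, d), U: (d, d_f), b: (d_f,), c: (d,)."""
    return U @ np.maximum(V @ x + b, 0.0) + c
```

Because it acts on each position independently, applying it to a sequence is just a loop (or a batched matrix product) over timesteps.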
Add-norm.
Both the multi-head self-attention and the feed-forward layer are followed by an add-norm operation. This transformation is simply a residual connection He et al. (2016) followed by layer normalization Lei Ba et al. (2016). The layer normalization computes the average and standard deviation of the output activations of a given sublayer and normalizes them accordingly. This guarantees that the input $x_t$ of the following sublayer is well conditioned, i.e., that it has zero mean and unit standard deviation. More precisely, the AddNorm operation is defined as:

$$\mathrm{AddNorm}(x_t) = \mathrm{LayerNorm}(x_t + \mathrm{Sublayer}(x_t)), \qquad (6)$$

where Sublayer is either a multi-head self-attention or a feed-forward sublayer.
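A minimal sketch of Eq. (6) (the learned gain and bias of layer normalization are omitted for brevity; names are ours):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Normalize a vector to zero mean and unit standard deviation."""
    return (x - x.mean()) / (x.std() + eps)

def add_norm(x, sublayer):
    """Eq. (6): residual connection followed by layer normalization.
    `sublayer` is any function of x, e.g. attention or feed-forward."""
    return layer_norm(x + sublayer(x))
```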
Transformer layer.
The overall transformer layer has the following set of equations:

$$z_t = \mathrm{AddNorm}(\mathrm{MultiHead}(x_t)), \qquad (7)$$
$$y_t = \mathrm{AddNorm}(\mathrm{FF}(z_t)), \qquad (8)$$

where MultiHead is the multi-head self-attention sublayer. This is shown on the left panel of Fig. 1.
4 Our approach
In this section, we first show that a feed-forward sublayer can be viewed as an attention layer. Then, we take advantage of this interpretation to merge it with the self-attention layer, forming a novel layer that relies solely on a multi-head attention layer, without the need for a feed-forward sublayer.
4.1 Feed-forward sublayer as an attention layer
We transform the feed-forward sublayer into an attention layer by replacing the ReLU nonlinearity in Eq. 5 by a softmax function and removing the biases:

$$\mathrm{FF}(x_t) = U \, \mathrm{Softmax}(V x_t) = \sum_{i=1}^{d_f} \mathrm{Softmax}(V x_t)_i \, U_{*,i}. \qquad (9)$$

Here we use the notations $U_{*,i}$ and $V_{i,*}$ to denote the $i$-th column of $U$ and the $i$-th row of $V$ respectively. The activation $\mathrm{Softmax}(V x_t)_i$ is thus the attention weight computed with $x_t$ and $V_{i,*}$. The vectors $x_t$, $V_{i,*}$ and $U_{*,i}$ are equivalent to the query, key and value vectors respectively. Eq. 9 is also equivalent to the self-attention sublayer of Eqs. 3-4 with the context vectors $k_c$, $v_c$ set to zero and the vectors $V_{i,*}$ and $U_{*,i}$ used as key and value side position embeddings respectively. This allows for a similar implementation of the feed-forward and self-attention sublayers, and opens the possibility of merging them into a single layer.
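The rewritten sublayer of Eq. (9) can be sketched directly: rows of $V$ act as keys, columns of $U$ as values, and the softmax produces attention weights over these $d_f$ fixed key-value pairs. Names are ours; this is an illustration of the equivalence, not the authors' code.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ff_as_attention(x, U, V):
    """Eq. (9): feed-forward sublayer with ReLU replaced by softmax
    and biases removed. The output is a convex combination of the
    columns of U, weighted by attention of the query x over the
    rows of V -- i.e., attention over input-independent key-value
    pairs."""
    return U @ softmax(V @ x)
```

Note that the weights here sum to one over the $d_f$ units, whereas a ReLU feed-forward layer has unconstrained activations; the ablation in Section 5.3 measures the effect of this switch.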
4.2 Persistent memory augmented self-attention layer
Here we propose a single attention layer that can replace both the self-attention and feed-forward layers in transformers, which we call the all-attention layer. Our layer applies the attention mechanism simultaneously to the sequence of input vectors, as in the standard self-attention layer, and to a set of vectors not conditioned on the input. These vectors are added to capture information that does not depend on the immediate context, like general knowledge about the task. They are shared across the data and, in some sense, form a persistent memory similar to the feed-forward layer. Therefore we call them persistent vectors. More precisely, the persistent vectors are a set of $N$ pairs of key-value vectors, respectively stacked in two-dimensional matrices $M_k$ and $M_v$. As discussed in Section 4.1, $M_k$ and $M_v$ can be interpreted as the $V$ and $U$ of a feed-forward sublayer.

These persistent vectors are simply added to the pool of key and value vectors conditioned on the input:

$$[k_1, \dots, k_{T+N}] = [W_k x_1, \dots, W_k x_T, \; M_k], \qquad (10)$$
$$[v_1, \dots, v_{T+N}] = [W_v x_1, \dots, W_v x_T, \; M_v]. \qquad (11)$$
Let us denote by $C_t^{+}$ the concatenation of the context $C_t$ and the indices corresponding to the $N$ persistent vectors. The similarity score between an element $t$ of the input sequence and an element $c$ of its extended context $C_t^{+}$ is computed the same way as in Eq. (3), i.e.:

$$s_{tc} = x_t^\top W_q^\top \left( k_c + p(t, c) \right), \qquad (12)$$

where the position encoding $p(t, c)$ corresponding to a persistent vector is equal to zero. The all-attention layer then outputs a vector $y_t$ with the same attention function as in Eq. (4), i.e.,

$$y_t = \sum_{c \in C_t^{+}} a_{tc} \left( v_c + p(t, c) \right). \qquad (13)$$
As with a self-attention sublayer, an all-attention layer can have multiple heads, where the outputs from the different heads are concatenated for each timestep $t$ and multiplied by the "output" matrix $W_o$. Note that persistent vectors are not shared between heads. Our overall layer is then simply this new MultiHeadAllAttn sublayer followed by the AddNorm operation as defined in Eq. (6), i.e.,

$$y_t = \mathrm{AddNorm}(\mathrm{MultiHeadAllAttn}(x_t)). \qquad (14)$$
The right panel of Fig. 1 summarizes the all-attention layer in the case of a single head: we remove the feed-forward sublayer and add unconditioned persistent vectors to the self-attention sublayer. While the persistent vectors are directly comparable to a feed-forward sublayer in the case of a single head, a multi-head version is more comparable to multiple small feed-forward layers working in parallel. If there are as many persistent vectors as there are ReLU units, an all-attention layer has the same number of parameters as the standard transformer layer, regardless of the number of heads (ignoring bias terms).
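A single all-attention head (Eqs. 10-13) can be sketched by concatenating the persistent keys and values to the context pool before a single joint softmax. For brevity this sketch drops the relative position terms, which equal zero on the persistent entries anyway; names are ours, not the authors'.

```python
import numpy as np

def all_attention_head(X, Wq, Wk, Wv, Mk, Mv):
    """One all-attention head with persistent vectors, causal context.
    X: (T, d); Wq, Wk, Wv: (dh, d); Mk, Mv: (N, dh) persistent
    key/value vectors (learned, input-independent)."""
    T = X.shape[0]
    Q, K, V = X @ Wq.T, X @ Wk.T, X @ Wv.T
    Y = np.zeros_like(Q)
    for t in range(T):
        Ke = np.concatenate([K[:t + 1], Mk])    # extended keys,   Eq. (10)
        Ve = np.concatenate([V[:t + 1], Mv])    # extended values, Eq. (11)
        s = Q[t] @ Ke.T                         # scores, Eq. (12), p = 0
        a = np.exp(s - s.max()); a /= a.sum()   # one joint softmax
        Y[t] = a @ Ve                           # output, Eq. (13)
    return Y
```

The joint softmax over context and persistent entries is exactly what distinguishes "all-attn" from the "attn-split" variant studied in the ablation of Section 5.3.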
Note that using an attention mechanism to address unconditioned persistent vectors has been previously proposed in the context of question answering with knowledge bases Miller et al. (2016).
4.3 Language modeling
Language modeling is the problem of assigning a probability to a sequence of tokens $(w_1, \dots, w_T)$:

$$P(w_1, \dots, w_T) = \prod_{t=1}^{T} P(w_t \mid w_{t-1}, \dots, w_1).$$
In this paper, we focus on tokens that are either words or characters. Language modeling has been dominated by neural networks, with models based either on feed-forward networks Bengio et al. (2003a) or recurrent networks Mikolov et al. (2010). Recently, autoregressive versions of transformers have been achieving the best performance on standard benchmarks Al-Rfou et al. (2019); Dai et al. (2019); Baevski and Auli (2019). In this section, we describe several specificities of these models that we borrow to make our model work on language modeling, especially with a large vocabulary and a long context.
Relative position embeddings and caching.
The relative position embeddings are learnable vectors $u_i$ that encode the relative positions in the sequence by setting $p(t, c) = u_{t-c}$ in Eq. 3. They replace the fixed absolute position embeddings of the original transformer, allowing these models to work on unbounded sequences. When the input sequence is processed in small blocks for efficiency, a caching mechanism Dai et al. (2019) is necessary to ensure that every token has the same context length regardless of its position in the block.
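The caching idea can be sketched by tracking which positions each token attends to when the sequence is processed block by block: keys and values for the last `span` positions are retained across block boundaries, so context length does not depend on position inside the block. This is a simplified illustration of the mechanism of Dai et al. (2019), not their implementation; all names are ours.

```python
def cached_blocks(x_len, block, span):
    """For a sequence of length x_len processed in blocks of size
    `block`, return for each token the list of positions it can
    attend to when a cache keeps the previous `span` positions.
    Every token sees min(t, span) past positions plus itself,
    regardless of where it falls inside its block."""
    contexts = []
    for start in range(0, x_len, block):
        for t in range(start, min(start + block, x_len)):
            lo = max(0, t - span)  # positions below `start` come from cache
            contexts.append(list(range(lo, t + 1)))
    return contexts
```

In a real model, the cached entries are the key/value vectors of previous blocks, reused without recomputation at inference time.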
Adaptive context size.
With an adaptive attention span (Sukhbaatar et al., 2019), each attention head separately learns its context size from data. This allows a few heads to have a very long attention span, while others focus only on the recent past. As a result, it becomes possible to extend the maximum attention span without significantly increasing the memory footprint and computation time. The method works by multiplying the attention weights in Eq. 4 by a soft masking function that maps distances to $[0, 1]$. A real-valued span parameter controls how much of the attention stays the same, and it is learned together with the rest of the model. Since our attention weights in Eq. 13 contain additional values corresponding to the persistent vectors, we simply pad the masking function with 1 at the locations corresponding to those persistent vectors. This ensures that we only adapt the context size, while the persistent vectors are always included in the attention.

Adaptive input and output.
In word-level language modeling, the size of the vocabulary is very large, making the use of a full softmax loss function prohibitive both in terms of running time and memory footprint. A standard solution to circumvent this issue is to replace the full softmax function by the adaptive softmax of Grave et al. (2017a). The idea of the adaptive softmax is to split the vocabulary into disjoint clusters and compare words only within the same cluster. The clusters are formed by partitioning the vocabulary according to word frequency: the most frequent words are in the first cluster, while the least frequent ones are in the last cluster. The size of each cluster is picked to minimize the overall running time, leading to small clusters of frequent words and large clusters of infrequent words. Finally, the running time and the memory footprint are further reduced by adapting the capacity of the classifiers according to their cluster assignment: the words in the $i$-th cluster have a classifier that is smaller than the one in the first cluster. The underlying motivation is that infrequent words are hard to predict, so there is no need to use many parameters for them. The memory footprint of the model is further reduced by tying the embedding weights with the classifier weights Inan et al. (2017); Press and Wolf (2017). In the case of the adaptive softmax, this leads to a special form of embeddings called adaptive input Baevski and Auli (2019).

5 Experiments
5.1 Experimental setup
In this section, we describe our hyperparameter choices and optimization scheme, as well as the details of the datasets we consider.
Implementation details.
We initialize token and position embeddings from , and the matrices from . The position embeddings are shared across all the heads. Persistent vectors are reparameterized by and , where the parameters and are initialized from and respectively. This way the persistent vectors initially have the same unit variance as the context vectors, while the underlying parameters and are initialized similarly to the weights of a feed-forward sublayer.

For character-level language modeling, we set the model dimension to , and the number of heads to . Our small (large) models have () all-attention layers, () persistent vectors and a dropout rate of () applied to attention weights. The adaptive span has the same hyperparameters as Sukhbaatar et al. (2019) with a maximum span of , except the loss coefficient is set to . We use Adagrad Duchi et al. (2011) with a learning rate of . We clip individual gradients with a norm larger than Pascanu et al. (2013). We warm up the learning rate linearly for k timesteps Vaswani et al. (2017). A training batch consists of samples, each with consecutive tokens. When the loss on validation stops decreasing, we divide the learning rate by for an additional k steps. Training large models takes about a day on V100 GPUs.
For word-level language modeling, we use a model with and layers, each with heads and persistent vectors. We use Adam with a learning rate of and k warmup steps. The whole gradient norm is clipped at . A batch consists of samples, each with tokens. We use an adaptive span of with a loss coefficient of . The dropout rate is set to for attention weights, and for input embeddings and the final representation.
Datasets and metrics.
For character-level language modeling, we consider the enwik8 and text8 datasets from Mahoney (2011). Both datasets have a training set of M tokens and a vocabulary of and unique characters respectively (including the end-of-sentence token). Both datasets are made of Wikipedia articles split at the character level. The text8 dataset is preprocessed by lowercasing the text and retaining only whitespace and the letters of the ISO basic Latin alphabet. We report bits per character (bpc) on the dev and test sets.

For word-level language modeling, we consider the WikiText-103 dataset introduced by Merity et al. (2017). The training set of WikiText-103 contains around M tokens and a vocabulary of about k words. Each word in the vocabulary appears at least times in the training data. The dataset is made of Wikipedia articles. We report perplexity (ppl) on the dev and test sets.
Dataset specific implementation details.
Following Baevski and Auli (2019) on WikiText-103, we use a tied adaptive softmax and adaptive input with clusters of size k, k and k. The dimensions of the classifiers in each cluster are consecutively divided by , leading to the following dimensions: , and .
5.2 Main results
We compare our approach to the state of the art on several standard benchmarks for both word-level and character-level language modeling.
Table 1: Comparison with the state of the art on character-level language modeling on enwik8. Results are in bits per character (bpc).

Model  #Params  test bpc
Small models
Ha et al. (2017) – LN HyperNetworks  27M  1.34
Chung et al. (2017) – LN HM-LSTM  35M  1.32
Zilly et al. (2017) – Recurrent highway networks  46M  1.27
Mujika et al. (2017) – Large FS-LSTM-4  47M  1.25
Krause et al. (2017) – Large mLSTM  46M  1.24
Al-Rfou et al. (2019) – T12  44M  1.11
Dai et al. (2019) – Transformer-XL  41M  1.06
Sukhbaatar et al. (2019) – Transformer + adaptive span  39M  1.02
All-attention network + adaptive span  39M  1.01
Large models
Al-Rfou et al. (2019) – T64  235M  1.06
Dai et al. (2019) – Transformer-XL 18l  88M  1.03
Dai et al. (2019) – Transformer-XL 24l  277M  0.99
Child et al. (2019) – Sparse Transformer (fixed)  95M  0.99
Sukhbaatar et al. (2019) – Transformer + adaptive span  209M  0.98
All-attention network + adaptive span  114M  0.98
Table 2: Comparison with the state of the art on character-level language modeling on text8. Results are in bpc.

Model  #Params  dev bpc  test bpc
Small models
Chung et al. (2017) – LN HM-LSTM  35M  –  1.29
Zilly et al. (2017) – Recurrent highway networks  45M  –  1.27
Krause et al. (2017) – Large mLSTM  45M  –  1.27
Al-Rfou et al. (2019) – T12  44M  –  1.18
Sukhbaatar et al. (2019) – Transformer + adaptive span  38M  1.05  1.11
All-attention network + adaptive span  38M  1.05  1.11
Large models
Al-Rfou et al. (2019) – T64  235M  1.06  1.13
Dai et al. (2019) – Transformer-XL  277M  –  1.08
Sukhbaatar et al. (2019) – Transformer + adaptive span  209M  1.01  1.07
All-attention network + adaptive span  114M  1.02  1.08
Character-level language modeling.
In Table 1, we report the results on enwik8. Our small model outperforms all other models of similar size. Our large model matches the state-of-the-art performance with significantly fewer parameters. On text8, our small model also matches the best performing model from Sukhbaatar et al. (2019), as shown in Table 2. Our large model is 0.01 bpc below the state of the art, but with half the number of parameters.
Table 3: Comparison with the state of the art on word-level language modeling on WikiText-103. Results are in perplexity (ppl).

Model  #Params  dev ppl  test ppl
Small models
Grave et al. (2017b) – LSTM  –  –  48.7
Bai et al. (2018) – TCN  –  –  45.2
Dauphin et al. (2017) – GCNN-8  –  –  44.9
Grave et al. (2017b) – LSTM + Neural cache  –  –  40.8
Merity et al. (2018) – 4-layer QRNN  151M  32.0  33.0
Rae et al. (2018) – LSTM + Hebbian + Cache  –  29.7  29.9
Dai et al. (2019) – Transformer-XL Standard  151M  23.1  24.0
All-attention network + adaptive span  133M  19.7  20.6
Best published result with a large model (Dai et al., 2019)  257M  17.7  18.3
Word-level language modeling.
In Table 3, we compare the all-attention network with the state of the art among small models on the WikiText-103 dataset. Our network is 3.4 ppl better than the previous best, which was a Transformer-XL of comparable size. For completeness, we also report the state of the art obtained with larger models, which is about 2 perplexity points better than ours.
5.3 Ablation study
In this section, we compare different variations of our large model on character-level language modeling on text8. First, we vary the number of persistent vectors $N$ in each layer, as shown in Figure 2 (left). The result shows that persistent vectors are crucial for performance, already reaching a good performance at . A model without persistent vectors (i.e., $N = 0$) is equivalent to a transformer model without feed-forward sublayers, and it performs poorly. This also demonstrates the importance of feed-forward layers in transformer models. However, it maintains decent performance because it still has a lot of parameters (M) in the $W_q, W_k, W_v, W_o$ matrices.
We also compare several different ways of integrating persistent vectors into self-attention:

All-attn: this is our default model presented in Section 4, where persistent vectors are simply concatenated to context vectors.

Attn-split: this is the same as "all-attn" except the attention over context and persistent vectors is computed separately. In other words, we replace the softmax in Eq. 13 with two separate softmax functions: one for context vectors only and one for persistent vectors only.

Head-split: this is the same as "all-attn" except we constrain half of the heads to attend only to context vectors, and the other half to attend only to persistent vectors.

Single-head: this is the same as "attn-split", but persistent vectors are not split into multiple heads. Instead, each layer has a single set of persistent key-value vectors of dimension $d$.

FF-attn: a transformer model where the ReLU of the feed-forward sublayers is replaced with a softmax function, as discussed in Section 4.1. This is the same as "single-head" above except persistent vectors are kept as a separate sublayer that comes after a self-attention sublayer. Since this doubles the depth of the model, we decrease the number of layers to 24 and increase the feed-forward size to 3072 to keep the number of parameters the same.

Note that all these versions have the same number of parameters except "head-split", which has fewer parameters because half of its persistent vectors are not used. The result is shown in Figure 2 (right). There are a few things to notice: (i) "all-attn" outperforms "attn-split", which indicates that there is a benefit in computing attention jointly over persistent and context vectors; (ii) "single-head" is worse than "attn-split", which means persistent vectors with more heads are better; (iii) dividing the heads into context-only and persistent-only groups does not work well; and (iv) "FF-attn" does not work as well as "all-attn", which means the switch from ReLU to softmax alone is not sufficient.
6 Conclusion
In this paper, we propose a novel attention layer that presents a unified mechanism to aggregate general and contextual information. It extends the self-attention layer of a transformer with a set of persistent vectors that are capable of storing information that is complementary to the short-term information in contexts. We also show that these persistent vectors can replace the feed-forward layers in a transformer network with no loss of performance. We think that this simplified layer can help better understand how information is processed and stored in transformer-like sequence models.
Acknowledgements
We thank Leon Bottou, Omer Levy for their helpful comments and suggestions. We also thank Lowik Chanussot, Jérémy Rapin and other members of Facebook AI Research engineering team for their support in implementing and running the experiments.
References

Al-Rfou et al. [2019] Rami Al-Rfou, Dokook Choe, Noah Constant, Mandy Guo, and Llion Jones. Character-level language modeling with deeper self-attention. In Proceedings of the 33rd AAAI Conference on Artificial Intelligence, 2019.
 Baevski and Auli [2019] Alexei Baevski and Michael Auli. Adaptive input representations for neural language modeling. In ICLR, 2019.
 Bahdanau et al. [2015] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.

Bengio et al. [2003a] Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. Journal of Machine Learning Research, 3(Feb):1137–1155, 2003a.
 Bengio et al. [2003b] Yoshua Bengio, Jean-Sébastien Senécal, et al. Quick training of probabilistic neural nets by importance sampling. In AISTATS, pages 1–9, 2003b.
 Child et al. [2019] Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509, 2019.

Chung et al. [2017] Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. In ICLR, 2017.
 Dai et al. [2019] Zihang Dai, Zhilin Yang, Yiming Yang, William W Cohen, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. Transformer-XL: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860, 2019.
 Dauphin et al. [2017] Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. Language modeling with gated convolutional networks. In ICML, 2017.
 Devlin et al. [2019] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT (1), 2019.
 Duchi et al. [2011] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159, 2011.
 Goodman [2001] Joshua T Goodman. A bit of progress in language modeling. Computer Speech & Language, 15(4):403–434, 2001.
 Grave et al. [2017a] Edouard Grave, Armand Joulin, Moustapha Cissé, and Hervé Jégou. Efficient softmax approximation for gpus. In ICML, 2017a.
 Grave et al. [2017b] Edouard Grave, Armand Joulin, and Nicolas Usunier. Improving neural language models with a continuous cache. In ICLR, 2017b.
 Graves et al. [2014] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014.
 Ha et al. [2017] David Ha, Andrew M. Dai, and Quoc V. Le. Hypernetworks. In ICLR, 2017.

He et al. [2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
 Hochreiter and Schmidhuber [1997] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
 Inan et al. [2017] Hakan Inan, Khashayar Khosravi, and Richard Socher. Tying word vectors and word classifiers: A loss framework for language modeling. In ICLR, 2017.
 Jordan and Jacobs [1994] Michael I Jordan and Robert A Jacobs. Hierarchical mixtures of experts and the em algorithm. Neural computation, 6(2):181–214, 1994.
 Jozefowicz et al. [2016] Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.
 Krause et al. [2017] Ben Krause, Iain Murray, Steve Renals, and Liang Lu. Multiplicative LSTM for sequence modelling. In ICLR (Workshop), 2017.
 Lei Ba et al. [2016] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
 Mahoney [2011] Matt Mahoney. Large text compression benchmark. http://www.mattmahoney.net/text/text.html, 2011.
 Merity et al. [2017] Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. In ICLR, 2017.
 Merity et al. [2018] Stephen Merity, Nitish Shirish Keskar, and Richard Socher. An analysis of neural language modeling at multiple scales. arXiv preprint arXiv:1803.08240, 2018.
 Mikolov et al. [2010] Tomáš Mikolov, Martin Karafiát, Lukáš Burget, Jan Černockỳ, and Sanjeev Khudanpur. Recurrent neural network based language model. In Eleventh annual conference of the international speech communication association, 2010.
 Miller et al. [2016] Alexander H. Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. Key-value memory networks for directly reading documents. In EMNLP, 2016.
 Morin and Bengio [2005] Frederic Morin and Yoshua Bengio. Hierarchical probabilistic neural network language model. In AISTATS, 2005.
 Mujika et al. [2017] Asier Mujika, Florian Meier, and Angelika Steger. Fast-slow recurrent neural networks. In NIPS, pages 5915–5924, 2017.
 Pascanu et al. [2013] Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In ICML, 2013.
 Press and Wolf [2017] Ofir Press and Lior Wolf. Using the output embedding to improve language models. In EACL (2), 2017.
 Rae et al. [2018] Jack W. Rae, Chris Dyer, Peter Dayan, and Timothy P. Lillicrap. Fast parametric learning with activation memorization. In ICML, 2018.
 Sennrich et al. [2016] Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. In ACL (1), 2016.
 Shaw et al. [2018] Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. Selfattention with relative position representations. In NAACLHLT (2), 2018.
 Shazeer et al. [2017] Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc V. Le, Geoffrey E. Hinton, and Jeff Dean. Outrageously large neural networks: The sparselygated mixtureofexperts layer. In ICLR, 2017.
 Srivastava et al. [2014] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.
 Sukhbaatar et al. [2015] Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. In NIPS, 2015.
 Sukhbaatar et al. [2019] Sainbayar Sukhbaatar, Edouard Grave, Piotr Bojanowski, and Armand Joulin. Adaptive attention span in transformers. In ACL, 2019.
 Vaswani et al. [2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, 2017.
 Vinyals et al. [2015] Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In NIPS, 2015.
 Wu et al. [2019] Felix Wu, Angela Fan, Alexei Baevski, Yann Dauphin, and Michael Auli. Pay less attention with lightweight and dynamic convolutions. In ICLR, 2019.
 Xu et al. [2015] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In ICML, 2015.
 Zaremba et al. [2014] Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.
 Zilly et al. [2017] Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutník, and Jürgen Schmidhuber. Recurrent highway networks. In ICML, 2017.