Shortformer: Better Language Modeling using Shorter Inputs

12/31/2020 ∙ by Ofir Press, et al.

We explore the benefits of decreasing the input length of transformers. First, we show that initially training the model on short subsequences, before moving on to longer ones, both reduces overall training time and, surprisingly, gives a large improvement in perplexity. We then show how to improve the efficiency of recurrence methods in transformers, which let models condition on previously processed tokens when generating sequences that exceed the maximal length the transformer can handle at once. Existing methods require computationally expensive relative position embeddings; we introduce a simple alternative of adding absolute position embeddings to queries and keys instead of to word embeddings, which efficiently produces superior results. By combining these techniques, we increase training speed by 65%, make generation nine times faster, and substantially improve perplexity on WikiText-103, without adding any parameters.
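To make the second idea concrete, the following is a minimal PyTorch sketch of attention in which absolute position embeddings are added to the queries and keys inside the sublayer rather than to the word embeddings at the input. It is an illustration under stated assumptions, not the authors' implementation: the class name PositionInfusedAttention, the cache argument, and the learned nn.Embedding position table are choices made for this example.

import math
from typing import Optional

import torch
import torch.nn as nn


class PositionInfusedAttention(nn.Module):
    """Causal self-attention where absolute position embeddings are added to
    queries and keys only, never to values or word embeddings (sketch)."""

    def __init__(self, d_model: int, n_heads: int, max_len: int = 3072):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)
        # Learned absolute position table (an assumption of this sketch).
        self.pos_emb = nn.Embedding(max_len, d_model)

    def forward(self, x: torch.Tensor, cache: Optional[torch.Tensor] = None):
        # x: (batch, seq_len, d_model) token representations that carry no
        # position information; cache: representations of earlier tokens.
        bsz, seq_len, _ = x.shape
        kv = x if cache is None else torch.cat([cache, x], dim=1)
        kv_len = kv.size(1)

        # Positions enter only through queries and keys, so cached token
        # representations remain position-free and can be reused directly.
        pos = self.pos_emb(torch.arange(kv_len, device=x.device))
        q = self.q_proj(x + pos[kv_len - seq_len:])
        k = self.k_proj(kv + pos)
        v = self.v_proj(kv)

        def heads(t):  # (batch, len, d_model) -> (batch, n_heads, len, d_head)
            return t.view(bsz, -1, self.n_heads, self.d_head).transpose(1, 2)

        q, k, v = heads(q), heads(k), heads(v)
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_head)

        # Causal mask: each current token attends to all cached tokens and to
        # current tokens at or before its own position.
        q_pos = torch.arange(kv_len - seq_len, kv_len, device=x.device)
        k_pos = torch.arange(kv_len, device=x.device)
        causal = k_pos[None, :] <= q_pos[:, None]
        scores = scores.masked_fill(~causal, float("-inf"))

        out = scores.softmax(dim=-1) @ v
        out = out.transpose(1, 2).reshape(bsz, seq_len, -1)
        return self.out_proj(out)

Because positions are not baked into the token representations, states cached from a previous subsequence can be fed back in unchanged, which avoids the costly relative position computation that existing recurrence methods rely on.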

Authors

Ofir Press, Noah A. Smith, Mike Lewis

Code Repositories

shortformer

Code for the Shortformer model, from the paper by Ofir Press, Noah A. Smith and Mike Lewis.
