
Go Forth and Prosper: Language Modeling with Ancient Textual History

by   Rik Koncel-Kedziorski, et al.

We introduce a technique for improving document-level language models (LMs) by leveraging "ancient history": text that lies outside the LM's current context window. We learn an auxiliary function to select spans from the ancient history that help the LM predict future text. The selected spans are then copied directly into the LM's context window, replacing less predictive spans. This method can improve the perplexity of pretrained LMs without any updates to the LMs' own parameters. We further observe that an auxiliary function trained in a specific textual domain, such as Wikipedia, also works in a substantially different domain, such as scientific publications. With this technique we see a 7 percent perplexity reduction on Wikipedia articles and a 12 percent perplexity reduction on scientific texts.
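The abstract's core mechanism can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: both `score_span` (a hypothetical stand-in using simple word overlap with the upcoming text in place of a learned auxiliary function) and `augment_context` are names introduced here for illustration.

```python
def score_span(span, future_text):
    """Hypothetical auxiliary score: lexical overlap with the future text.

    The paper learns this function; word overlap is only a stand-in.
    """
    span_words = set(span.lower().split())
    future_words = set(future_text.lower().split())
    if not span_words:
        return 0.0
    return len(span_words & future_words) / len(span_words)


def augment_context(context_spans, ancient_spans, future_text, k=1):
    """Replace the k least predictive context spans with the k most
    predictive spans from the ancient history, keeping the total number
    of spans in the context window fixed."""
    # Rank current context spans from least to most predictive.
    ranked_context = sorted(context_spans,
                            key=lambda s: score_span(s, future_text))
    # Rank ancient spans from most to least predictive.
    ranked_ancient = sorted(ancient_spans,
                            key=lambda s: score_span(s, future_text),
                            reverse=True)
    kept = ranked_context[k:]        # drop the k weakest context spans
    borrowed = ranked_ancient[:k]    # copy in the k strongest ancient spans
    return borrowed + kept


context = augment_context(
    ["the weather was mild", "stocks rose sharply"],
    ["the central bank raised rates"],
    "rates rose again as the bank tightened",
    k=1,
)
```

Because the replacement happens purely at the input level, the LM itself is untouched, which is consistent with the abstract's claim that no LM parameters are updated.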



